Longterm Wiki

Elicit (AI Research Tool) - Footnote 29

Partial — 85% confidence

1 evidence check

Last checked: 4/3/2026

The claim mentions sentence-level citations for verifying AI outputs, but the source does not explicitly mention sentence-level citations. The claim also states that integrated models such as Anthropic's Claude employ Constitutional AI for alignment, whereas the source mentions Anthropic only as one of the organizations leading AI safety efforts.

Evidence — 1 source, 1 check

Partial — 85% · Haiku 4.5 · 4/3/2026
Found: The platform uses reinforcement learning from human feedback (<EntityLink id="rlhf">RLHF</EntityLink>) to ensure outputs align with researcher intentions and avoid harmful content. A red-teaming process is also mentioned.

Debug info

Record type: citation

Record ID: page:elicit:fn29