Longterm Wiki

Schmidt Futures - Footnote 55

Contradicted (50% confidence)

1 evidence check

Last checked: 4/3/2026

WRONG NUMBERS: The source does not mention the $500,000 grant amount.
FABRICATED DETAILS: The source does not mention Daniel Kang's work on AI agent cybersecurity attack benchmarks, or risks such as AI systems escaping laboratory control and self-replicating.
FABRICATED DETAILS: The source does not mention funding for Bo Li to develop virtual environments with red-teaming algorithms for evaluating appropriate access levels for AI systems.

Evidence — 1 source, 1 check

Contradicted (50%) · Haiku 4.5 · 4/3/2026
Found: The program provides grants up to $500,000 each, along with computational support from the Center for AI Safety and OpenAI API access for research requiring frontier model testing. Funded projects in…


Debug info

Record type: citation

Record ID: page:schmidt-futures:fn55
