Longterm Wiki
All Source Checks
Citation

Schmidt Futures - Footnote 58

Partial · 80% confidence

1 evidence check

Last checked: 4/3/2026

The source does not mention the $500,000 grant amount. The claim cites risks such as AI systems escaping laboratory control and self-replicating, which the source does not explicitly mention; the source discusses only Kang's work on the cybersecurity capabilities of AI agents. Nor does the source state that Bo Li is developing virtual environments with red-teaming algorithms to evaluate appropriate access levels for AI systems; rather, the environments are for evaluating AI systems and AI agents, with a focus on what level of access to AI models is needed for different levels of evaluation.

Evidence — 1 source, 1 check

Partial · 80% · Haiku 4.5 · 4/3/2026
Found: The program provides grants up to $500,000 each, along with computational support from the Center for AI Safety and OpenAI API access for research requiring frontier model testing. Funded projects in…

Note: Same as the summary above.

Debug info

Record type: citation

Record ID: page:schmidt-futures:fn58
