AI for Human Reasoning Fellowship
ai-for-human-reasoning-fellowship (E585)
Path: /knowledge-base/responses/ai-for-human-reasoning-fellowship/
Page Metadata
{
"id": "ai-for-human-reasoning-fellowship",
"numericId": null,
"path": "/knowledge-base/responses/ai-for-human-reasoning-fellowship/",
"filePath": "knowledge-base/responses/ai-for-human-reasoning-fellowship.mdx",
"title": "AI for Human Reasoning Fellowship",
"quality": 55,
"importance": 45,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-02-01",
"llmSummary": "FLF's inaugural 12-week fellowship (July-October 2025) combined research fellowship with startup incubator format. 30 fellows received $25-50K stipends to build AI tools for human reasoning. Produced 25+ projects across epistemic tools (Community Notes AI, fact-checking), forecasting (Deliberation Markets, scenario planning), negotiation mediation, and collective decision-making. Notable outputs include world's first AI-written approved Community Note, Polis 2.0 survey of 1,000+ Americans on AI, and multiple open-source tools.",
"structuredSummary": null,
"description": "A 12-week fellowship program by the Future of Life Foundation (FLF) that brought together 30 fellows to develop AI tools for coordination, epistemics, and collective decision-making. The inaugural 2025 cohort produced 25+ projects including Polis 2.0, Deliberation Markets, AI Community Notes writers, and various forecasting and sensemaking tools.",
"ratings": {
"novelty": 7,
"rigor": 5,
"actionability": 7,
"completeness": 6
},
"category": "responses",
"subcategory": "field-building",
"clusters": [
"community",
"ai-safety"
],
"metrics": {
"wordCount": 2196,
"tableCount": 10,
"diagramCount": 0,
"internalLinks": 15,
"externalLinks": 55,
"footnoteCount": 3,
"bulletRatio": 0.14,
"sectionCount": 21,
"hasOverview": true,
"structuralScore": 13
},
"suggestedQuality": 87,
"updateFrequency": 45,
"evergreen": true,
"wordCount": 2196,
"unconvertedLinks": [
{
"text": "Website",
"url": "https://pol.is",
"resourceId": "73ba60cd43a92b18",
"resourceTitle": "Polis platform"
}
],
"unconvertedLinkCount": 1,
"convertedLinkCount": 0,
"backlinkCount": 0,
"redundancy": {
"maxSimilarity": 13,
"similarPages": [
{
"id": "quri",
"title": "QURI (Quantified Uncertainty Research Institute)",
"path": "/knowledge-base/organizations/quri/",
"similarity": 13
},
{
"id": "ai-futures-project",
"title": "AI Futures Project",
"path": "/knowledge-base/organizations/ai-futures-project/",
"similarity": 11
},
{
"id": "coefficient-giving",
"title": "Coefficient Giving",
"path": "/knowledge-base/organizations/coefficient-giving/",
"similarity": 11
},
{
"id": "fli",
"title": "Future of Life Institute (FLI)",
"path": "/knowledge-base/organizations/fli/",
"similarity": 11
},
{
"id": "fri",
"title": "Forecasting Research Institute",
"path": "/knowledge-base/organizations/fri/",
"similarity": 11
}
]
}
}
Entity Data
{
"id": "ai-for-human-reasoning-fellowship",
"type": "approach",
"title": "AI for Human Reasoning Fellowship",
"description": "FLF's inaugural 12-week fellowship (July-October 2025) combined research fellowship with startup incubator format. 30 fellows received $25-50K stipends to build AI tools for human reasoning. Produced 25+ projects across epistemic tools (Community Notes AI, fact-checking), forecasting (Deliberation M",
"tags": [],
"relatedEntries": [],
"sources": [],
"lastUpdated": "2026-02",
"customFields": []
}
Canonical Facts (0)
No facts for this entity
External Links
No external links
Backlinks (0)
No backlinks
Frontmatter
{
"title": "AI for Human Reasoning Fellowship",
"description": "A 12-week fellowship program by the Future of Life Foundation (FLF) that brought together 30 fellows to develop AI tools for coordination, epistemics, and collective decision-making. The inaugural 2025 cohort produced 25+ projects including Polis 2.0, Deliberation Markets, AI Community Notes writers, and various forecasting and sensemaking tools.",
"sidebar": {
"order": 5
},
"quality": 55,
"llmSummary": "FLF's inaugural 12-week fellowship (July-October 2025) combined research fellowship with startup incubator format. 30 fellows received $25-50K stipends to build AI tools for human reasoning. Produced 25+ projects across epistemic tools (Community Notes AI, fact-checking), forecasting (Deliberation Markets, scenario planning), negotiation mediation, and collective decision-making. Notable outputs include world's first AI-written approved Community Note, Polis 2.0 survey of 1,000+ Americans on AI, and multiple open-source tools.",
"lastEdited": "2026-02-01",
"importance": 45,
"update_frequency": 45,
"ratings": {
"novelty": 7,
"rigor": 5,
"actionability": 7,
"completeness": 6
},
"clusters": [
"community",
"ai-safety"
],
"subcategory": "field-building",
"entityType": "approach"
}
Raw MDX Source
---
title: AI for Human Reasoning Fellowship
description: A 12-week fellowship program by the Future of Life Foundation (FLF) that brought together 30 fellows to develop AI tools for coordination, epistemics, and collective decision-making. The inaugural 2025 cohort produced 25+ projects including Polis 2.0, Deliberation Markets, AI Community Notes writers, and various forecasting and sensemaking tools.
sidebar:
  order: 5
quality: 55
llmSummary: FLF's inaugural 12-week fellowship (July-October 2025) combined research fellowship with startup incubator format. 30 fellows received $25-50K stipends to build AI tools for human reasoning. Produced 25+ projects across epistemic tools (Community Notes AI, fact-checking), forecasting (Deliberation Markets, scenario planning), negotiation mediation, and collective decision-making. Notable outputs include world's first AI-written approved Community Note, Polis 2.0 survey of 1,000+ Americans on AI, and multiple open-source tools.
lastEdited: "2026-02-01"
importance: 45
update_frequency: 45
ratings:
  novelty: 7
  rigor: 5
  actionability: 7
  completeness: 6
clusters:
- community
- ai-safety
subcategory: field-building
entityType: approach
---
import {EntityLink} from '@components/wiki';
## Quick Assessment
| Dimension | Assessment | Evidence |
|-----------|------------|----------|
| **Program Type** | Fellowship + Incubator Hybrid | Combines research exploration with prototype building |
| **Scale** | 30 fellows | Inaugural cohort, July-October 2025 |
| **Funding** | \$25K-\$50K per fellow | Based on experience level |
| **Outputs** | 25+ projects | Most with working demos, papers, or deployed tools |
| **Focus Area** | Epistemic tools & coordination | AI for sensemaking, forecasting, negotiation, fact-checking |
| **Location** | SF Bay Area | Shared office space; some remote participation |
| **Notable Achievement** | First AI-written approved Community Note | Nathan Young's bot wrote the world's first AI-written Community Note approved through X's rating system |
## Program Details
| Attribute | Details |
|-----------|---------|
| **Full Name** | AI for Human Reasoning Fellowship |
| **Organizer** | Future of Life Foundation (FLF) |
| **Duration** | 12 weeks (July 14 - October 3, 2025) |
| **Stipend** | \$25,000 (junior) to \$50,000 (senior) |
| **Fellows** | 30 participants |
| **Location** | SF Bay Area shared office |
| **Program Director** | Ben Goldhaber |
| **Program Managers** | Timothy Telleen-Lawton, Kathleen Finlinson |
| **Website** | [aiforhumanreasoning.com](https://aiforhumanreasoning.com/) |
| **Organizer Website** | [flf.org](https://www.flf.org/) |
| **Est. Program Cost** | \$1-2M total (stipends + operations) |
## Overview
The AI for Human Reasoning Fellowship was an inaugural program run by the Future of Life Foundation (FLF) that brought together researchers, builders, and entrepreneurs to develop AI tools designed to enhance human reasoning and coordination.[^1] The program operated as a hybrid between a research fellowship and a startup incubator, with fellows receiving substantial stipends (\$25K-\$50K) to explore and prototype beneficial AI applications.
The fellowship addressed what FLF describes as a critical gap: the world is "radically underinvested" in AI applications that could enhance human decision-making and coordination capabilities.[^2] While much attention focuses on AI risks, relatively little goes toward building tools that could help humanity navigate complex challenges—including those posed by AI itself.
The program structure consisted of three phases:
1. **Explore phase**: Research, discussion, and ideation on potential projects
2. **Build phase**: Creating prototypes and real-world implementations
3. **Translation phase**: Polishing work, reflection, and public presentation at demo day
## Core Focus Areas
The fellowship targeted six key application areas for AI-augmented human reasoning:
| Area | Description | Example Projects |
|------|-------------|------------------|
| **Epistemic Tools** | Fact-checking, rhetoric detection, information verification | Community Notes AI, Evidentry, Epistemic Evals |
| **Forecasting & Scenarios** | <EntityLink id="E228">Prediction markets</EntityLink>, strategic foresight, scenario planning | Deliberation Markets, Deep Future, Sentinel |
| **Negotiation** | AI-mediated high-stakes bargaining | Negotiation Station |
| **Decision Support** | Reasoning scaffolds, bias navigation | Confidence Interval, Chord |
| **Evaluations** | Benchmarking epistemic virtue and AI trustworthiness | DeliberationBench, Society Library evals |
| **Coordination** | Consensus-finding, collective sensemaking | Polis 2.0, Pivotal, Updraft |
## Notable Projects
### Epistemic & Fact-Checking Tools
**Community Notes AI** (Nathan Young, Robert Gordon)
Built an AI system that writes Community Notes for X (formerly Twitter). Achieved a significant milestone: the world's first AI-written Community Note to be approved through X's rating system. The bot's notes have been viewed over 2.5 million times.[^3]
- [GitHub: community-notes-writer](https://github.com/Goodheart-Labs/community-notes-writer)
- [Announcement post](https://nathanpmyoung.substack.com/p/worlds-first-ai-community-note)
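As a rough illustration of the kind of pipeline such a bot needs (not the actual community-notes-writer implementation), a note-writing loop has to extract a checkable claim, gather sources, and draft a short sourced note. The `llm_complete` and `search_web` helpers below are assumed placeholders:

```python
# Hypothetical Community Notes pipeline: claim extraction -> evidence search
# -> short sourced note. Helper functions are assumptions, not the real API.
def draft_community_note(tweet_text: str, llm_complete, search_web) -> dict:
    """Draft a candidate Community Note for a potentially misleading tweet."""
    # 1. Decide whether the tweet makes a checkable factual claim.
    claim = llm_complete(
        "Extract the main checkable factual claim from this tweet, "
        f"or reply NONE:\n{tweet_text}"
    )
    if claim.strip() == "NONE":
        return {"note": None, "reason": "no checkable claim"}

    # 2. Gather supporting or contradicting sources.
    sources = search_web(claim, max_results=5)

    # 3. Draft a short, sourced note in Community Notes style.
    note = llm_complete(
        "Write a neutral, sourced Community Note (under 280 characters) "
        f"about this claim:\nClaim: {claim}\nSources: {sources}"
    )
    return {"note": note, "sources": sources}
```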
**Open Note Network** (Steve Isley)
An AI system that generates Community Notes, paired with a dedicated website hosting long-form fact checks, so that short-form social media corrections link to comprehensive analysis.
- [Website](https://www.opennotenetwork.com/)
**AI for the Epistemic Commons** (Herbie Bradley)
Research on "Community Notes everywhere" for browsers and AI-written Wikipedia improvements. Built evaluations measuring model capability to fix errors and expand Wikipedia articles.
**Evidentry** (Agita Pasaribu)
Coalition infrastructure connecting survivors, platforms, and regulators to verify and remove AI-generated intimate imagery. Features multi-detector aggregation and verification workflows that reduce removal time from days to minutes.
- [Website](https://evidentry.app)
### Forecasting & Strategic Foresight
**Deliberation Markets** (Siddarth Srinivasan)
A novel prediction market mechanism in which participants write explanations that LLMs evaluate. Instead of buying YES/NO contracts, users provide reasoning that the LLMs synthesize into probability estimates.
- [Demo](https://deli-markets.onrender.com/)
- [Paper](https://drive.google.com/file/d/1MqXQKXIB1XtRT5tlzhPuWKlHbIetfSZ/view)
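A minimal sketch of the mechanism as described above, assuming an LLM judge exposed through a hypothetical `llm_probability` helper; the paper's actual elicitation and payment rules are more involved:

```python
# Sketch of a deliberation market: traders submit written arguments rather
# than buying YES/NO shares, and an LLM judge synthesizes the arguments into
# a probability. The `llm_probability` helper and its prompt are assumptions.
from dataclasses import dataclass, field

@dataclass
class DeliberationMarket:
    question: str
    arguments: list[str] = field(default_factory=list)

    def submit_argument(self, text: str) -> None:
        self.arguments.append(text)

    def current_probability(self, llm_probability) -> float:
        """Ask the LLM judge to weigh all submitted reasoning and return P(YES)."""
        prompt = (
            f"Question: {self.question}\n"
            "Arguments submitted so far:\n"
            + "\n".join(f"- {a}" for a in self.arguments)
            + "\nBased only on the quality of this reasoning, output a "
              "probability between 0 and 1 that the answer is YES."
        )
        return float(llm_probability(prompt))
```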
**Deep Future** (Gordon Brander)
Strategic foresight tool powered by scenario methods from the US military and RAND. AI agents map strategic landscapes, identify driving forces, explore trajectories, and discover leverage points.
- [Website](https://deepfuture.now/)
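A hedged sketch of the driving-forces-to-trajectories loop described above; the prompts and the `llm_complete` helper are illustrative assumptions, not Deep Future's actual agent design:

```python
# Toy scenario-planning loop: elicit driving forces, expand a few into
# trajectories, then ask for leverage points a decision-maker could act on.
def explore_scenarios(question: str, llm_complete, n_forces: int = 10) -> dict:
    forces = llm_complete(
        f"List {n_forces} driving forces shaping: {question}"
    ).splitlines()

    trajectories = [
        llm_complete(
            f"Sketch a plausible 2030 trajectory for '{question}' "
            f"in which this force dominates: {force}"
        )
        for force in forces[:3]  # keep the toy example small
    ]

    leverage = llm_complete(
        "Given these trajectories, list leverage points a decision-maker "
        "could act on now:\n" + "\n---\n".join(trajectories)
    )
    return {"forces": forces, "trajectories": trajectories, "leverage": leverage}
```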
**<EntityLink id="E566">Sentinel</EntityLink>** (Nuno Sempere)
Systems for detecting and tracking global risks, including the "Eye of Sauron" monitoring system, the xrisk.fyi tool, automated Twitter reports, and forecasting infrastructure that uses HDBSCAN clustering for risk analysis.
- [Blog](https://blog.sentinel-team.org/)
- [Tools](https://tools.sentinel-team.org)
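To illustrate the HDBSCAN-based clustering the description mentions, a minimal pipeline might embed incoming headlines and let HDBSCAN group related risk signals. The embedding model and the sample data below are assumptions for illustration, not Sentinel's code:

```python
# Cluster risk-related headlines: embed each item, then let HDBSCAN find
# dense clusters of related signals (sparse items are labeled noise, -1).
import hdbscan
from sentence_transformers import SentenceTransformer

headlines = [
    "New H5N1 cluster reported in dairy workers",          # hypothetical items
    "Border clashes escalate between two nuclear-armed states",
    "Major cloud provider outage disrupts hospital systems",
]

embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(headlines)
clusterer = hdbscan.HDBSCAN(min_cluster_size=2, metric="euclidean")
labels = clusterer.fit_predict(embeddings)

for label, headline in zip(labels, headlines):
    print(label, headline)
```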
### Collective Decision-Making
**Polis 2.0** (Colin Megill, Maximilian Kroner Dale)
Real-time system gathering and analyzing what large groups think using advanced statistics and ML. Conducted a survey with 1,000+ quota-sampled Americans who voted 90,000+ times on 1,000+ statements about AI concerns.
- [Website](https://pol.is)
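The core Polis-style analysis reduces a participant-by-statement vote matrix and clusters it into opinion groups. The toy sketch below mirrors the classic PCA-plus-k-means pipeline; Polis 2.0's semantic topic clustering and LLM-generated summaries are assumed to layer on top of something like this:

```python
# Participants vote agree(+1)/disagree(-1)/pass(0) on statements; the vote
# matrix is reduced to 2-D and clustered to reveal opinion groups, and
# "bridging" statements are those drawing agreement across groups.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

votes = np.array([          # toy matrix: 6 participants x 4 statements
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [ 1,  0, -1, -1],
    [-1, -1,  1,  1],
    [-1, -1,  1,  0],
    [ 0, -1,  1,  1],
])

coords = PCA(n_components=2).fit_transform(votes)              # opinion space
groups = KMeans(n_clusters=2, n_init=10).fit_predict(coords)   # opinion groups

for s in range(votes.shape[1]):
    per_group = [votes[groups == g, s].mean() for g in np.unique(groups)]
    print(f"statement {s}: mean agreement by group = {per_group}")
```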
**Chord** (Alex Bleakley)
AI-orchestrated communication tool that parallelizes conversations to help groups make better, faster decisions. Led to the founding of Sylience after the fellowship.
- [Demo](https://chord.sylience.com)
**Pivotal** (Anand Shah, Parker Whitfill, Kai Sandbrink, Ben Sklaroff)
Multi-agent orchestration tool helping teams coordinate on scheduling, action items, and organizational context. Integrates with existing software and automates workflows.
- [GitHub](https://github.com/pivotal-flf/pivotal)
- [Website](https://coop.tech)
**Updraft** (Robert Gordon)
Real-time facilitation tool where groups and AI collaboratively map, cluster, and evolve ideas on a shared 2D canvas. Part of a connected experiment suite including Winnow and Prune.
- [Website](https://www.updraft.to/)
### Evaluations & Benchmarks
**DeliberationBench** (Maximilian Kroner Dale, Luke Hewitt, Paul de Font-Reaulx)
Novel AI persuasiveness benchmark using Deliberative Polls as a normative reference. Findings were demonstrated in a 4,000-person randomized experiment on LLM persuasiveness.
- [Paper](https://docsend.com/view/3ptd5mgjg29b8ej2)
**Epistemic Evals** (Alejandro Botas)
Evaluated human and model outputs for epistemic quality attributes. Used LLMs to assess EA Forum posts on Reasoning/Clarity/Value dimensions and tested model sensitivity to epistemically irrelevant contexts.
- [Leaderboard](https://calibrated-evals.streamlit.app/)
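A minimal sketch of rubric-based LLM grading along the Reasoning/Clarity/Value dimensions named above; the prompt wording and the `llm_complete` helper are assumptions, not the project's actual evaluation harness:

```python
# Rubric-based LLM grading: ask the model for 1-10 scores on each epistemic
# dimension and parse the JSON reply (assumes the helper returns valid JSON).
import json

DIMENSIONS = ["reasoning", "clarity", "value"]

def grade_post(post_text: str, llm_complete) -> dict:
    """Return 1-10 scores for each epistemic dimension of a forum post."""
    prompt = (
        "Rate the following post from 1 (poor) to 10 (excellent) on each of "
        f"these dimensions: {', '.join(DIMENSIONS)}. "
        "Reply with a JSON object mapping dimension name to integer score.\n\n"
        f"{post_text}"
    )
    return json.loads(llm_complete(prompt))
```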
**Society Library** (Jamie Joyce)
Built a semi-automated information processing pipeline producing a 600+ page intelligence report on complex government events with multiple viewpoints. Researching how structured datasets can serve as benchmarks for LLM truth-seeking.
- [Website](https://www.societylibrary.org/)
### Other Notable Projects
| Project | Fellow(s) | Description | Video |
|---------|-----------|-------------|-------|
| **Collective Agency** | Blake Borgeson | Framework for increasing humanity's collective agency through AI intermediaries. Presented motifs for group collaboration (facilitators, orchestrators, AI intermediaries) and advocates for "shovel-ready wisdom"—ideas for better collective processes ready to become technology. | [Watch](https://www.youtube.com/watch?v=P_uMaOzBH_Q) |
| **Future Visions Hub** | Sofia Vanhanen | Group decision-making software and <EntityLink id="E122">epistemic infrastructure</EntityLink> for collective sensemaking about desirable futures | [Watch](https://www.youtube.com/watch?v=lCqQIabLKVo) |
| **Confidence Interval** | Vaughn Tan | Self-service webapp using LLMs as Socratic mirror for making subjective arguments rigorous. Currently used by college students with interest from startups and governments. | [Watch](https://www.youtube.com/watch?v=ErYC-F7lJac) |
| **Negotiation Station** | Kai Sandbrink | AI tools for high-stakes negotiations between nations/corporations as trusted mediators | [Watch](https://www.youtube.com/watch?v=eHxQRoE3MmA) |
| **AI Policy Simulation** | Alexander van Grootel, Emma Kumleben | AI aiding institutional decision-making through strategic foresight and forecasting for AGI transition navigation | [Watch](https://www.youtube.com/watch?v=_xtHcBQGYpE) |
| **Virtuous** | Paul de Font-Reaulx | Epistemic evaluations for frontier models; developing DeliberationBench | [Watch](https://www.youtube.com/watch?v=TZSCkqxl8q8) |
| **RiskWatch** | Alyssia Jovellanos, Martin Ciesielski-Listwan | Risk Threat Observatory enabled by prediction markets | [Watch](https://www.youtube.com/watch?v=m5h8Sx8kx18) |
| **Worker-Owned Startups** | Ben Sklaroff | Governance models for worker-owned startup structures | [Watch](https://www.youtube.com/watch?v=20yWMxAx6QI) |
| **Agent Strategy Arena** | Joshua Levy | Platform for scalable, grounded evaluations of AI agents' prediction accuracy | [Watch](https://www.youtube.com/watch?v=qkvFfS_nTI8) |
| **AI Discourse Sensemaking** | Matthew Brooks, Emma Kumleben, Niki Dupuis | Using LLMs to map opinion landscapes and detect polarization; built semi-automated argument mapper | [Watch](https://www.youtube.com/watch?v=vqDRlSWTOUQ) |
## Fellows
The 30 fellows came from diverse backgrounds including academia, entrepreneurship, policy, and technology:
| Fellow | Background/Affiliation |
|--------|------------------------|
| Blake Borgeson | Collaboration AI researcher |
| Colin Megill | Polis creator |
| Nathan Young | <EntityLink id="E546">Manifold</EntityLink> community, forecasting |
| Robert Gordon | Goodheart Labs |
| Herbie Bradley | AI researcher |
| Nuno Sempere | <EntityLink id="E560">Samotsvety</EntityLink>, <EntityLink id="E238">QURI</EntityLink>, <EntityLink id="E566">Sentinel</EntityLink> founder |
| Kai Sandbrink | Multi-project contributor |
| Gordon Brander | Strategic foresight |
| Jamie Joyce | Society Library |
| Sofia Vanhanen | Future Visions Hub |
| Agita Pasaribu | Evidentry |
| And 19 others | Various backgrounds |
## Advisors
The fellowship included advisors with expertise in AI safety, mechanism design, and coordination:
- **Anthony Aguirre** - President of FLF, Executive Director of <EntityLink id="E528">Future of Life Institute</EntityLink>
- **Andreas Stuhlmüller** - Founder of <EntityLink id="E526">Elicit</EntityLink>
- **Brendan Fong** - Category theory and applied mathematics
- Additional advisors from academia and industry
## Relationship to FLI and FLF
The Future of Life Foundation (FLF) is a separate organization from the <EntityLink id="E528">Future of Life Institute (FLI)</EntityLink>, though both share leadership (Anthony Aguirre serves as Executive Director of FLI and President of FLF). While FLI focuses primarily on existential risk advocacy and grantmaking, FLF operates more as an incubator for beneficial AI applications.
## Outcomes and Impact
The fellowship produced several concrete outcomes:
| Outcome Type | Count/Details |
|--------------|---------------|
| **Projects launched** | 25+ with demos or papers |
| **Open-source tools** | Multiple GitHub repositories |
| **Academic papers** | DeliberationBench, Deliberation Markets |
| **Companies founded** | Sylience (from Chord project) |
| **Real-world deployment** | AI Community Notes with 2.5M+ views |
| **Research artifacts** | Evaluations, benchmarks, datasets |
FLF indicated willingness to provide funding beyond the fellowship period or assist fellows in launching new organizations based on their work.[^2]
## Demo Day Presentations
All fellows presented their work at a demo day. Video presentations with auto-generated transcripts are available on YouTube.
| Project | Presenter(s) | Video |
|---------|--------------|-------|
| Collective Agency | Blake Borgeson | [Watch](https://www.youtube.com/watch?v=P_uMaOzBH_Q) |
| Pivotal | Anand Shah, Parker Whitfill, Kai Sandbrink, Ben Sklaroff | [Watch](https://www.youtube.com/watch?v=9lX6cwiw0Ac) |
| Polis 2.0 | Colin Megill, Maximilian Kroner Dale | [Watch](https://www.youtube.com/watch?v=EzVN2IJhP7Q) |
| Deliberation Markets | Siddarth Srinivasan | [Watch](https://www.youtube.com/watch?v=OA-nLfXV7Ks) |
| Community Notes AI | Nathan Young, Robert Gordon | [Watch](https://www.youtube.com/watch?v=yqpcsat1Mxw) |
| Open Note Network | Steve Isley | [Watch](https://www.youtube.com/watch?v=jqss-3RYjaE) |
| AI for Epistemic Commons | Herbie Bradley | [Watch](https://www.youtube.com/watch?v=Q-2Ci4Ajmh8) |
| Evidentry | Agita Pasaribu | [Watch](https://www.youtube.com/watch?v=NOYGvoB3pk4) |
| Worker-Owned Startups | Ben Sklaroff | [Watch](https://www.youtube.com/watch?v=20yWMxAx6QI) |
| Society Library | Jamie Joyce | [Watch](https://www.youtube.com/watch?v=xsvCYhcxDX4) |
| AI Discourse Sensemaking | Matthew Brooks, Emma Kumleben, Niki Dupuis | [Watch](https://www.youtube.com/watch?v=vqDRlSWTOUQ) |
| Confidence Interval | Vaughn Tan | [Watch](https://www.youtube.com/watch?v=ErYC-F7lJac) |
| Virtuous | Paul de Font-Reaulx | [Watch](https://www.youtube.com/watch?v=TZSCkqxl8q8) |
| Epistemic Evals | Alejandro Botas | [Watch](https://www.youtube.com/watch?v=6CVBHgyo-V0) |
| DeliberationBench | Max Kroner Dale, Luke Hewitt, Paul de Font-Reaulx | [Watch](https://www.youtube.com/watch?v=T3JAWlc1dq0) |
| Forecasting & Provenance | Alyssia Jovellanos, Martin Ciesielski-Listwan | [Watch](https://www.youtube.com/watch?v=m5h8Sx8kx18) |
| Agent Strategy Arena | Joshua Levy | [Watch](https://www.youtube.com/watch?v=qkvFfS_nTI8) |
| AI Policy Simulation | Alexander van Grootel, Emma Kumleben | [Watch](https://www.youtube.com/watch?v=_xtHcBQGYpE) |
| Deep Future | Gordon Brander | [Watch](https://www.youtube.com/watch?v=r_vdUeoKbJE) |
| Future Visions Hub | Sofia Vanhanen | [Watch](https://www.youtube.com/watch?v=lCqQIabLKVo) |
| Tools for Sensemaking | Matthew Brooks | [Watch](https://www.youtube.com/watch?v=uX3EdKWo3ZA) |
| Negotiation Station | Kai Sandbrink | [Watch](https://www.youtube.com/watch?v=eHxQRoE3MmA) |
| Chord | Alex Bleakley | [Watch](https://www.youtube.com/watch?v=IkwKzNl6J-g) |
### Presentation Highlights
Key insights from the demo day presentations (based on transcript analysis):
- **Polis 2.0**: Surveyed 1,000+ Americans who voted 90,000+ times on AI concerns. Found bridging consensus on deep fakes and privacy, with partisan split on regulation approach. New features include semantic topic clustering and LLM-generated consensus summaries.
- **Deliberation Markets**: Live demo showed market probability shifting 32%→65% from a single well-reasoned argument—demonstrating how explanation quality drives predictions. Core innovation: LLMs evaluate reasoning quality and trade on synthesized probabilities.
- **Community Notes AI**: Early notes reduce misleading tweet shares by 25-50%; late notes have almost no effect. Team estimates \$150K-\$500K needed to scale to TikTok, Chrome extensions, and Perplexity.
- **Deep Future**: Compresses week-long RAND-style scenario planning workshops into 10-15 minutes. Demo identified 38 driving forces for "How will AI agents transform the web by 2030?" and generated strategic reports with opportunities, threats, and early warning signals.
- **Collective Agency**: Blake Borgeson presented motifs for AI-human collaboration patterns—facilitators, orchestrators, and AI intermediaries—across collaboration phases (understand, explore, decide, coordinate, create, share, reflect).
## Assessment
### Strengths
| Strength | Evidence |
|----------|----------|
| **Novel focus area** | Few programs specifically target AI for epistemics/coordination |
| **Concrete outputs** | Most projects have working demos, not just research |
| **Diverse approaches** | Covered forecasting, fact-checking, negotiation, collective decision-making |
| **Real deployment** | Some tools already in use (Community Notes AI) |
| **Open source** | Many projects released code publicly |
### Limitations
| Limitation | Notes |
|------------|-------|
| **First cohort** | No track record yet for long-term impact |
| **Prototype stage** | Most projects still early; unclear which will scale |
| **Narrow ecosystem** | Fellows largely from EA/rationalist-adjacent networks |
| **Evaluation difficulty** | Hard to measure impact of "reasoning improvement" tools |
## Related
| Item | Relationship |
|------|--------------|
| **<EntityLink id="E528">Future of Life Institute</EntityLink>** | Related organization; shared leadership with FLF |
| **<EntityLink id="E238">QURI</EntityLink>** | Similar focus on epistemic tools; Nuno Sempere connection |
| **<EntityLink id="E546">Manifold Markets</EntityLink>** | Prediction market platform; Nathan Young connection |
| **<EntityLink id="E526">Elicit</EntityLink>** | AI research tool; Andreas Stuhlmüller as advisor |
| **<EntityLink id="E468">AI Safety Training Programs</EntityLink>** | Complementary fellowship in different focus area |
## Sources
[^1]: [AI for Human Reasoning Fellowship website](https://aiforhumanreasoning.com/)
[^2]: [FLF Fellowship on AI for Human Reasoning: \$25-50k, 12 weeks](https://forum.effectivealtruism.org/posts/dQWHB2s3frDjXwGFe/funded-fellowship-ai-for-human-reasoning-fellowship-with-the), EA Forum
[^3]: [World's First AI Community Note](https://nathanpmyoung.substack.com/p/worlds-first-ai-community-note), Nathan Young's Substack