Helen Toner
helen-toner (E575)
Path: /knowledge-base/people/helen-toner/
Page Metadata
{
"id": "helen-toner",
"numericId": null,
"path": "/knowledge-base/people/helen-toner/",
"filePath": "knowledge-base/people/helen-toner.mdx",
"title": "Helen Toner",
"quality": 43,
"importance": 18,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-01-29",
"llmSummary": "Comprehensive biographical profile of Helen Toner documenting her career from EA Melbourne founder to CSET Interim Executive Director, with detailed timeline of the November 2023 OpenAI board crisis where she voted to remove Sam Altman. Compiles public testimony, publications, and media appearances but offers minimal original analysis beyond chronicling events and her policy positions favoring government AI regulation.",
"structuredSummary": null,
"description": "Australian AI governance researcher, Georgetown CSET Interim Executive Director, and former OpenAI board member who participated in Sam Altman's November 2023 removal. TIME 100 Most Influential People in AI 2024.",
"ratings": {
"novelty": 2,
"rigor": 4.5,
"actionability": 1.5,
"completeness": 7
},
"category": "people",
"subcategory": null,
"clusters": [
"ai-safety",
"governance"
],
"metrics": {
"wordCount": 5505,
"tableCount": 46,
"diagramCount": 1,
"internalLinks": 6,
"externalLinks": 22,
"footnoteCount": 0,
"bulletRatio": 0.03,
"sectionCount": 74,
"hasOverview": true,
"structuralScore": 14
},
"suggestedQuality": 93,
"updateFrequency": null,
"evergreen": true,
"wordCount": 5505,
"unconvertedLinks": [],
"unconvertedLinkCount": 0,
"convertedLinkCount": 0,
"backlinkCount": 0,
"redundancy": {
"maxSimilarity": 15,
"similarPages": [
{
"id": "sam-altman",
"title": "Sam Altman",
"path": "/knowledge-base/people/sam-altman/",
"similarity": 15
},
{
"id": "centre-for-long-term-resilience",
"title": "Centre for Long-Term Resilience",
"path": "/knowledge-base/organizations/centre-for-long-term-resilience/",
"similarity": 13
},
{
"id": "controlai",
"title": "ControlAI",
"path": "/knowledge-base/organizations/controlai/",
"similarity": 13
},
{
"id": "secure-ai-project",
"title": "Secure AI Project",
"path": "/knowledge-base/organizations/secure-ai-project/",
"similarity": 13
},
{
"id": "yann-lecun",
"title": "Yann LeCun",
"path": "/knowledge-base/people/yann-lecun/",
"similarity": 13
}
]
}
}
Entity Data
{
"id": "helen-toner",
"type": "person",
"title": "Helen Toner",
"description": "Comprehensive biographical profile of Helen Toner documenting her career from EA Melbourne founder to CSET Interim Executive Director, with detailed timeline of the November 2023 OpenAI board crisis where she voted to remove Sam Altman. Compiles public testimony, publications, and media appearances ",
"tags": [],
"relatedEntries": [],
"sources": [],
"lastUpdated": "2026-02",
"customFields": []
}
Canonical Facts (0)
No facts for this entity
External Links
No external links
Backlinks (0)
No backlinks
Frontmatter
{
"title": "Helen Toner",
"description": "Australian AI governance researcher, Georgetown CSET Interim Executive Director, and former OpenAI board member who participated in Sam Altman's November 2023 removal. TIME 100 Most Influential People in AI 2024.",
"sidebar": {
"order": 15
},
"quality": 43,
"llmSummary": "Comprehensive biographical profile of Helen Toner documenting her career from EA Melbourne founder to CSET Interim Executive Director, with detailed timeline of the November 2023 OpenAI board crisis where she voted to remove Sam Altman. Compiles public testimony, publications, and media appearances but offers minimal original analysis beyond chronicling events and her policy positions favoring government AI regulation.",
"lastEdited": "2026-01-29",
"importance": 18,
"ratings": {
"novelty": 2,
"rigor": 4.5,
"actionability": 1.5,
"completeness": 7
},
"clusters": [
"ai-safety",
"governance"
],
"entityType": "person"
}
Raw MDX Source
---
title: "Helen Toner"
description: "Australian AI governance researcher, Georgetown CSET Interim Executive Director, and former OpenAI board member who participated in Sam Altman's November 2023 removal. TIME 100 Most Influential People in AI 2024."
sidebar:
order: 15
quality: 43
llmSummary: "Comprehensive biographical profile of Helen Toner documenting her career from EA Melbourne founder to CSET Interim Executive Director, with detailed timeline of the November 2023 OpenAI board crisis where she voted to remove Sam Altman. Compiles public testimony, publications, and media appearances but offers minimal original analysis beyond chronicling events and her policy positions favoring government AI regulation."
lastEdited: "2026-01-29"
importance: 18
ratings:
novelty: 2
rigor: 4.5
actionability: 1.5
completeness: 7
clusters: ["ai-safety","governance"]
entityType: person
---
import {Mermaid, DataExternalLinks, EntityLink} from '@components/wiki';
<DataExternalLinks pageId="helen-toner" />
## Quick Assessment
| Dimension | Assessment | Notes |
|-----------|------------|-------|
| Primary Role | <EntityLink id="E608">AI Governance</EntityLink> Researcher | Georgetown CSET Interim Executive Director |
| Global Recognition | TIME 100 AI 2024 | Listed among most influential people in AI |
| <EntityLink id="E218">OpenAI</EntityLink> Board | 2021-2023 | Voted to remove <EntityLink id="E269">Sam Altman</EntityLink>; resigned after his reinstatement |
| Policy Influence | High | Congressional testimony, Foreign Affairs, The Economist |
| Research Focus | U.S.-China AI competition, AI safety, governance | CSET publications and grants |
| Academic Credentials | MA Security Studies (Georgetown), BSc Chemical Engineering (Melbourne) | Strong interdisciplinary background |
| EA Movement | Early leader | Led EA Melbourne chapter; worked at GiveWell and Coefficient Giving |
## Personal Details
| Attribute | Information |
|-----------|-------------|
| Birth Year | 1992 |
| Birthplace | Melbourne, Victoria, Australia |
| Nationality | Australian |
| Education | BSc Chemical Engineering, University of Melbourne (2014); Diploma in Languages, University of Melbourne; MA Security Studies, Georgetown University (2021) |
| High School | Melbourne Girls Grammar School |
| University Entrance Rank | 99.95 (Australian tertiary admission rank) |
| Current Position | Interim Executive Director, Georgetown CSET (September 2025-present) |
| Previous Positions | Director of Strategy and Foundational Research Grants, CSET; Senior Research Analyst, <EntityLink id="E521">Coefficient Giving</EntityLink>; OpenAI Board Member |
| Languages | English, Mandarin Chinese (studied in Beijing) |
## Overview
Helen Toner is an Australian AI governance researcher who became one of the most prominent figures in AI policy after her role in the November 2023 removal of Sam Altman as OpenAI's CEO. She serves as Interim Executive Director of Georgetown University's <EntityLink id="E524">Center for Security and Emerging Technology</EntityLink> (CSET), a think tank she helped establish in 2019 with \$55 million in funding from Coefficient Giving (then <EntityLink id="E552">Open Philanthropy</EntityLink>).
Her career trajectory represents one of the most successful examples of effective altruism's strategy of placing safety-focused individuals in positions of influence over AI development. From leading a student effective altruism group in Melbourne to sitting on the board of one of the world's most powerful AI companies, Toner's path demonstrates both the opportunities and limitations of this approach.
Toner's expertise spans U.S.-China AI competition, AI safety research, and technology governance. She has testified before multiple Congressional committees, written for Foreign Affairs and The Economist, and was named to TIME's 100 Most Influential People in AI in 2024. Her work emphasizes that AI governance requires active government intervention rather than relying on industry self-regulation.
## Career Timeline
<Mermaid chart={`
flowchart TD
subgraph Early["Early Career (2014-2017)"]
A[University of Melbourne<br/>BSc Chemical Engineering<br/>2014] --> B[EA Melbourne<br/>Chapter Leader<br/>2014]
B --> C[GiveWell<br/>Research<br/>2015-2016]
C --> D[Coefficient Giving<br/>Senior Research Analyst<br/>2016-2017]
end
subgraph China["China Research (2018)"]
D --> E[Oxford GovAI<br/>Research Affiliate<br/>Beijing 2018]
end
subgraph CSET["Georgetown CSET (2019-Present)"]
E --> F[CSET Director of Strategy<br/>January 2019]
F --> G[CSET Director of Strategy<br/>& Research Grants<br/>March 2022]
G --> H[CSET Interim<br/>Executive Director<br/>September 2025]
end
subgraph OpenAI["OpenAI Board (2021-2023)"]
F --> I[Joined OpenAI Board<br/>2021]
I --> J[Voted to Remove<br/>Sam Altman<br/>November 2023]
J --> K[Resigned from Board<br/>November 2023]
end
style J fill:#ff6b6b
style H fill:#4ecdc4
`} />
### Detailed Career Progression
| Period | Role | Organization | Key Activities |
|--------|------|--------------|----------------|
| 2014 | Chapter Leader | Effective Altruism Melbourne | Introduced to EA movement as a university student; initially skeptical of AI risk, later persuaded |
| 2015-2016 | Research Analyst | GiveWell | Researched AI policy issues including military applications and geopolitics |
| 2016-2017 | Senior Research Analyst | Coefficient Giving (then Open Philanthropy) | Advised policymakers on AI policy; recommended \$1.76M in grants for AI governance |
| 2018 | Research Affiliate | Oxford Centre for the Governance of AI | Spent 9 months in Beijing studying Chinese AI ecosystem and Mandarin |
| Jan 2019 | Director of Strategy | Georgetown CSET | Helped found and shape CSET's research agenda |
| 2021-2023 | Board Member | OpenAI | Invited by Holden Karnofsky to replace him on board |
| Mar 2022 | Director of Strategy & Foundational Research Grants | Georgetown CSET | Led multimillion-dollar technical grantmaking function |
| Sep 2025 | Interim Executive Director | Georgetown CSET | Appointed to lead the center |
## The OpenAI Board Crisis (November 2023)
The most consequential moment of Toner's career came on November 17, 2023, when she and three other OpenAI board members voted to remove Sam Altman as CEO. The five-day crisis that followed revealed deep tensions between AI safety governance and commercial AI development.
### Timeline of Events
| Date | Time | Event | Details |
|------|------|-------|---------|
| Nov 17, 2023 | ≈12:00 PM PST | Board votes to remove Altman | 4 board members (Toner, McCauley, D'Angelo, Sutskever) vote to fire Altman |
| Nov 17, 2023 | ≈12:05 PM | Altman learns of removal | Informed on Google Meet while watching Las Vegas Grand Prix; told 5-10 minutes before announcement |
| Nov 17, 2023 | Afternoon | Public announcement | Board cites Altman "not consistently candid in his communications" |
| Nov 18, 2023 | | Anthropic merger discussions | Active discussions about merging OpenAI with Anthropic; Toner "most supportive" per Sutskever testimony |
| Nov 18-21 | | Pressure campaign | Microsoft and investors press for reversal; ~95% of OpenAI employees sign letter threatening to leave |
| Nov 21, 2023 | | Altman reinstated | Returns as CEO; Toner, McCauley resign from board |
### Board's Stated Reasons
The board's official statement said Altman had "not been consistently candid in his communications." In her May 2024 TED AI Show interview, Toner provided more detailed allegations:
| Allegation | Toner's Claim | OpenAI Response |
|------------|---------------|-----------------|
| ChatGPT launch | Board learned about ChatGPT release from Twitter in November 2022, not informed in advance | ChatGPT was "released as a research project" built on GPT-3.5 already available for 8 months |
| Startup Fund ownership | Altman did not disclose he owned the OpenAI Startup Fund while claiming to be an independent board member | Not addressed |
| Safety processes | Altman gave "inaccurate information" about company's safety processes | Independent review found firing "not based on concerns regarding product safety" |
| Executive complaints | Two executives reported "psychological abuse" from Altman with screenshots and documentation | Board chair Bret Taylor: review concluded decision not based on safety concerns |
| Pattern of behavior | "For years, Sam had made it really difficult for the board... withholding information, misrepresenting things... in some cases outright lying" | Disputed by OpenAI current leadership |
### The 52-Page Memo
In October 2025, Ilya Sutskever's deposition in the Musk v. Altman lawsuit revealed additional details:
- Sutskever prepared a 52-page memo for independent board members (Toner, McCauley, D'Angelo) weeks before the removal
- The memo stated: "Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another"
- "Most or all" supporting material came from OpenAI CTO Mira Murati
- Altman was not shown the memo because Sutskever "felt that, had he become aware of these discussions, he would just find a way to make them disappear"
### Proposed Anthropic Merger
One of the most striking revelations was that within 48 hours of Altman's firing, discussions were underway to potentially merge OpenAI with Anthropic:
| Aspect | Details |
|--------|---------|
| Timing | Saturday, November 18, 2023 |
| Toner's Position | According to Sutskever, Toner was "most supportive" of merger direction |
| Sutskever's Position | "Very unhappy" about it; "really did not want OpenAI to merge with Anthropic" |
| Rationale | When warned company would collapse without Altman, Toner allegedly responded that destroying OpenAI "could be consistent with its safety mission" |
| Toner's Response | Disputed Sutskever's account on social media after deposition release |
### Aftermath and Legacy
| Outcome | Description |
|---------|-------------|
| Immediate | Toner and McCauley resigned from board; Altman reinstated |
| Governance changes | OpenAI reformed board structure; added new independent directors |
| SEC investigation | February 2024: SEC reportedly investigating whether Altman misled investors |
| Toner's influence | Named to TIME 100 AI 2024; increased requests from policymakers worldwide |
| Policy impact | Crisis highlighted tensions between AI safety governance and commercial interests |
## Research and Publications
### CSET Research Focus Areas
Toner's research at CSET spans three primary domains:
| Research Area | Description | Key Publications |
|---------------|-------------|------------------|
| U.S.-China AI Competition | Analysis of Chinese AI capabilities, military applications, and competitive dynamics | Congressional testimony, Foreign Affairs articles |
| AI Safety Research | Robustness, interpretability, reward learning, uncertainty quantification | CSET AI Safety series |
| AI Governance | Standards, testing, safety processes, accident prevention | Policy briefs, congressional testimony |
### Key Publications and Media
| Year | Type | Publication/Outlet | Topic |
|------|------|-------------------|-------|
| 2019 | Testimony | U.S.-China Economic and Security Review Commission | China's Pursuit of AI |
| 2023 | Research Paper | CSET | "Artificial Intelligence and Costly Signals" (co-authored with Andrew Imbrie, Owen Daniels) |
| 2024 | Op-Ed | Foreign Affairs | "The Illusion of China's AI Prowess" |
| 2024 | Op-Ed | The Economist | U.S.-China bilateral meetings on AI |
| 2024 | Testimony | Senate Judiciary Subcommittee | AI Oversight: Insider Perspectives |
| 2024 | Talk | TED2024 | "How to Govern AI, Even if it's Hard to Predict" |
| 2025 | Testimony | House Judiciary Subcommittee | Trade Secrets and the Global AI Arms Race |
### AI Safety Research Contributions
Toner has authored or contributed to multiple papers examining AI safety:
| Topic | Key Findings |
|-------|--------------|
| Robustness | Research tracking how ML systems behave under distribution shift and adversarial conditions |
| Interpretability | Analysis of research trends in understanding ML system decision-making |
| Reward Learning | Study of how systems can be trained to align with human intentions |
| Uncertainty Quantification | Work introducing the concept to non-technical audiences |
She has stated: "Building AI systems that are safe, reliable, fair, and interpretable is an enormous open problem. Research into these areas has grown over the past few years, but still only makes up a fraction of the total effort poured into building and deploying AI systems."
### Citation Impact
According to Google Scholar, Toner's research has accumulated 3,286+ citations, indicating significant academic influence in the AI governance field.
## Congressional Testimony
Toner has testified before multiple Congressional committees on AI policy and U.S.-China competition.
### Key Testimony Summary
| Date | Committee | Topic | Key Arguments |
|------|-----------|-------|---------------|
| June 2019 | U.S.-China Economic and Security Review Commission | China's Pursuit of AI | AI research is unusually open/collaborative; strategic immigration policy critical; China's approach to data privacy differs |
| September 2024 | Senate Judiciary Subcommittee | AI Oversight | The concern that regulation will slow U.S. innovation is "not nearly as strong as it seems"; China "far from being poised to overtake the United States" |
| May 2025 | House Judiciary Subcommittee | Trade Secrets and AI Arms Race | "AI IP is as core to U.S. competitiveness as rapid innovation"; adversaries cannot have easy access to U.S. technology |
### Policy Recommendations
Based on her testimony and public statements, Toner advocates for:
| Policy Area | Position |
|-------------|----------|
| Immigration | Access to skilled researchers and engineers is key; U.S. ability to attract foreign talent is critical advantage |
| Federal Research | Unlike China, the U.S. has mounted no major federal effort to strengthen fundamental AI research during the current deep learning wave |
| Regulation | Government must actively regulate AI; self-governance by companies "doesn't actually work" |
| Safety Requirements | Supports mandatory safety testing and oversight for advanced AI systems |
| International Coordination | "Laboratory of democracy" approach: different jurisdictions should try different approaches and learn from experiments |
## Views on AI Risk and Governance
### Position on AI Existential Risk
Toner takes a nuanced position on AI existential risk:
| Aspect | Her View |
|--------|----------|
| Existential scenarios | Acknowledges "whole discourse around existential risk from AI" while noting "people who are being directly impacted by algorithmic systems and AI in really serious ways" already |
| Polarization concern | Worried about polarization where some want to "keep those existential or catastrophic issues totally off the table" while others are easily "freaked out about the more cataclysmic possibilities" |
| Industry concentration | Notes "natural tension" between view that fewer AI players helps coordination/regulation vs. concerns about power concentration |
| Government role | Believes government regulation is necessary; industry self-governance insufficient |
### Framework for AI Governance
Based on her TED2024 talk and public statements:
| Principle | Explanation |
|-----------|-------------|
| Adaptive Regulation | "Different experiments that are being run in how to govern this technology are treated as experiments, and can be adjusted and improved along the way" |
| Epistemic Humility | Policy should be developed despite uncertainty about AI capabilities and timelines |
| International Learning | "The laboratory of democracy has always seemed pretty valuable to me" - countries should try different approaches |
| Implementation Focus | "We're shifting from a year of initial excitement to a year more of implementation, and coming back to earth" |
### On China Competition Concerns
In her Foreign Affairs article "The Illusion of China's AI Prowess," Toner argued:
| Point | Assessment |
|-------|------------|
| Regulation Impact | Concerns about U.S. regulation enabling Chinese dominance are "overblown" |
| Chinese Capabilities | Chinese AI development "lags behind" U.S.; Chinese LLMs "heavily rely on American research and technology" |
| Chinese Regulation | China is already imposing AI regulations of its own |
| Macro Headwinds | China faces significant economic and demographic challenges |
| U.S. Advantage | Strength in fundamental research is "backbone of American advantage" |
## Connection to Effective Altruism
### EA Movement Involvement
| Period | Role | Activities |
|--------|------|------------|
| 2014 | University student | Introduced to EA movement by organizers of EA Melbourne |
| 2014 | Initial skepticism | "Initially skeptical, dismissed them as philosophically confused and overly enthusiastic science fiction enthusiasts" |
| 2014 | Conversion | "Eventually embraced their perspective" and assumed leadership of Melbourne chapter |
| 2015-2017 | Professional | Worked at GiveWell and Coefficient Giving (then Open Philanthropy), both EA-aligned organizations |
| 2019-Present | CSET | CSET was established through a \$55 million grant from Coefficient Giving |
### EA Philosophy in Practice
Toner's career exemplifies the EA approach of:
- **Career capital building**: Gaining expertise and credentials in a high-impact area
- **Institutional leverage**: Positioning within influential organizations (OpenAI board, CSET)
- **Longtermism**: Focus on AI risk as a priority concern for humanity's future
- **Impact-focused grantmaking**: Recommending grants while at Coefficient Giving (\$1.5M to UCLA for AI governance fellowship, \$260K to CNAS for advanced technology risk research)
### Key Grant Recommendations at Coefficient Giving
| Year | Amount | Recipient | Purpose |
|------|--------|-----------|---------|
| May 2017 | \$1,500,000 | UCLA School of Law | Fellowship, research, and meetings on AI governance and policy |
| August 2017 | \$260,000 | CNAS (Richard Danzig) | Publication on potential risks from advanced technologies |
### EA as a Career Path Case Study
Toner's trajectory from EA student organizer to influential AI governance figure represents a model the EA movement has promoted for "building career capital" in high-impact areas. Her path illustrates several key elements:
| Career Capital Element | Toner's Example |
|----------------------|-----------------|
| Early commitment | Joined EA movement as undergraduate; took leadership role immediately |
| Skills development | Chemical engineering degree provided analytical foundation; security studies MA added policy expertise |
| Network building | GiveWell and Coefficient Giving connected her to funders and researchers |
| International experience | Beijing research affiliate role built China expertise few Western researchers possess |
| Institutional positioning | CSET founding role and OpenAI board provided influence levers |
The CSET founding exemplifies the EA strategy of building institutions: Coefficient Giving (then Open Philanthropy) provided \$55 million over five years specifically to create a think tank that would shape AI policy from within Washington's foreign policy establishment. Toner was positioned as Director of Strategy from the beginning, allowing her to shape the center's research agenda toward AI safety and governance concerns.
### Relationship Between EA and CSET
| Aspect | Details |
|--------|---------|
| Funding source | Coefficient Giving (\$55M founding grant) |
| Mission alignment | CSET focuses on AI safety, security, and governance - core EA longtermist concerns |
| Staff pipeline | Multiple CSET researchers have EA movement connections |
| Research priorities | U.S.-China competition, AI accidents, standards/testing align with EA cause areas |
| Policy influence | Government briefings and congressional testimony extend EA ideas into policy |
Note: 80,000 Hours, the EA career advice organization that has featured Toner in multiple podcast episodes, is also funded by the same major donor (Coefficient Giving) that funds CSET.
## Recognition and Influence
### TIME 100 Most Influential People in AI (2024)
TIME's profile noted:
> "In mid-November of 2023, Helen Toner made what will likely be the most pivotal decision of her career... One outcome of the drama was that Toner, a formerly obscure expert in AI governance, now has the ear of policymakers around the world trying to regulate AI."
| Recognition Aspect | Details |
|-------------------|---------|
| Category | 100 Most Influential People in AI 2024 |
| Impact | "More senior officials have requested her insights than in any previous year" |
| Stated Mission | "Life's work" is to consult with lawmakers on sensible AI policy |
### Other Recognition
| Type | Details |
|------|---------|
| Podcast Features | 80,000 Hours (multiple appearances), TED AI Show, Cognitive Revolution, Clearer Thinking |
| Media Platforms | ChinaFile contributor, Sourcelist expert |
| Government Briefings | Has briefed senior officials across U.S. government |
## Key Relationships
### Professional Network
| Person | Relationship | Context |
|--------|--------------|---------|
| Holden Karnofsky | Mentor/predecessor | Karnofsky invited Toner to replace him on OpenAI board in 2021 |
| Tasha McCauley | Board colleague | Co-voted to remove Altman; co-authored post-crisis Economist piece |
| Adam D'Angelo | Board colleague | Remained on OpenAI board after crisis; received 52-page memo |
| Ilya Sutskever | Board colleague | Co-voted to remove Altman; later disputed Toner's account of events |
| Sam Altman | Adversary | Removed as OpenAI CEO by Toner and board colleagues |
| Jason Matheny | CSET colleague | CSET founding director; Toner was early hire |
## Critical Assessment
### Strengths
| Strength | Evidence |
|----------|----------|
| Policy expertise | Congressional testimony, Foreign Affairs publications, TIME 100 recognition |
| Interdisciplinary background | Engineering + security studies + China expertise |
| Institutional access | Built relationships across government, academia, and industry |
| Research impact | 3,286+ Google Scholar citations |
| Risk awareness | Early EA convert; focused career on AI governance |
### Limitations and Criticisms
| Criticism | Context |
|-----------|---------|
| OpenAI board outcome | Altman reinstated within 5 days; governance approach failed to achieve lasting change |
| Communication | Board's initial silence created "information vacuum" that enabled pressure campaign |
| Process | Independent review reportedly found firing not based on product safety or security concerns |
| Disputed accounts | Sutskever and Toner have conflicting accounts of merger discussions and other events |
### Open Questions
| Question | Relevance |
|----------|-----------|
| Was removal justified? | Evidence remains contested; no public resolution |
| Did safety concerns exist? | Toner claims safety process misrepresentations; OpenAI review reportedly found otherwise |
| What were alternatives? | Could board have achieved safety goals through different approaches? |
| Long-term impact? | Did crisis ultimately help or hurt AI safety governance? |
## Current Work
As of September 2025, Toner serves as Interim Executive Director of Georgetown CSET, leading a research center with approximately 30 researchers focused on:
| Focus Area | Description |
|------------|-------------|
| AI Safety Research | Robustness, interpretability, testing, standards |
| National Security | Military AI applications, intelligence implications |
| China Analysis | Chinese AI ecosystem, U.S.-China technology competition |
| Policy Development | Congressional testimony, government briefings, public writing |
She continues to advocate for active government regulation of AI, arguing that the "laboratory of democracy" approach of trying different regulatory experiments across jurisdictions is preferable to either inaction or one-size-fits-all approaches.
### CSET Under Toner's Leadership
| Initiative | Description | Status |
|------------|-------------|--------|
| AI Safety Series | Publications on robustness, interpretability, reward learning | Ongoing |
| China AI Tracker | Monitoring Chinese AI ecosystem developments | Active |
| Congressional Engagement | Regular testimony and briefings | Active |
| Foundational Research Grants | Multimillion-dollar grantmaking for technical AI safety research | Expanded since 2022 |
| Government Fellowships | Placing researchers in policy positions | Ongoing |
### Future Research Directions
Based on public statements, CSET under Toner's leadership is expanding focus on:
| Area | Rationale |
|------|-----------|
| AI Standards and Testing | Need for rigorous evaluation before deployment in high-stakes settings |
| Accident Investigation | Learning from AI failures similar to aviation safety processes |
| Military AI Applications | Autonomous weapons, intelligence analysis, command and control |
| Compute Governance | Hardware controls as a lever for AI governance |
| International Coordination | Mechanisms for global AI governance despite geopolitical tensions |
## The "Artificial Intelligence and Costly Signals" Paper Controversy
In October 2023, shortly before the OpenAI board crisis, Toner co-authored a paper with Andrew Imbrie and Owen Daniels that reportedly caused tension with Sam Altman.
### Paper Details
| Aspect | Details |
|--------|---------|
| Title | "Artificial Intelligence and Costly Signals" |
| Publication | CSET, October 2023 |
| Co-authors | Andrew Imbrie, Owen Daniels |
| Topic | International signaling theory applied to AI development |
### Reported Tension
According to reporting at the time, Altman objected that the paper praised Anthropic's more cautious release strategy while criticizing OpenAI's handling of the ChatGPT launch, though the full nature of the disagreement has not been disclosed. The episode illustrates the inherent tensions of having safety-focused researchers on the boards of commercial AI companies:
| Tension | Description |
|---------|-------------|
| Academic freedom | Researchers expect to publish without corporate approval |
| Fiduciary duty | Board members owe duty to the organization |
| Competitive concerns | Analysis may affect company's competitive position |
| Governance role | Board members need to maintain independence for effective oversight |
## Lessons from the OpenAI Board Experience
Toner's experience on the OpenAI board, while ending in resignation, offers several lessons for AI governance:
### Structural Challenges
| Challenge | Description | Toner's Experience |
|-----------|-------------|-------------------|
| Information asymmetry | Boards depend on management for information | Board allegedly not informed of ChatGPT launch or other key developments |
| Resource imbalance | Management has full-time staff; board members serve part-time | Board lacked resources to verify management claims |
| Stakeholder pressure | Employees, investors, customers may oppose board actions | 95% employee letter, Microsoft pressure reversed board decision |
| Nonprofit/for-profit tension | OpenAI's unusual structure created conflicts | Safety mission vs. commercial success difficult to balance |
### Governance Lessons
Based on Toner's public statements and the crisis outcome:
| Lesson | Implication |
|--------|-------------|
| Communication matters | Board's silence created vacuum filled by critics |
| Coalition building | Safety-focused board members were isolated when crisis hit |
| Structural power | Legal and financial structures determine who wins disputes |
| Transparency norms | AI companies may need new norms around board-management communication |
### Toner's Post-Crisis Recommendations
In her September 2024 Senate testimony, Toner stated:
> "This technology would be enormously consequential, potentially extremely dangerous, and should only be developed with careful forethought and oversight."
She has advocated for:
| Recommendation | Rationale |
|----------------|-----------|
| External oversight | Company self-governance insufficient |
| Mandatory safety testing | Prevent deployment of dangerous systems |
| Whistleblower protections | Enable internal critics to raise concerns |
| Regulatory experimentation | Different approaches across jurisdictions to learn what works |
## Comparative Analysis: Toner vs. Other AI Safety Figures
### Background Comparison
| Figure | Background | Current Role | Primary Focus |
|--------|------------|--------------|---------------|
| Helen Toner | Chemical engineering + security studies | Georgetown CSET Interim ED | Governance, U.S.-China |
| Holden Karnofsky | Economics (Harvard) | Former Coefficient Giving co-CEO | Funding strategy, risk prioritization |
| Dario Amodei | Physics PhD (Princeton) | Anthropic CEO | Technical safety, constitutional AI |
| Jan Leike | ML PhD (Toronto) | Anthropic Alignment Lead | Technical alignment research |
| Paul Christiano | CS PhD (UC Berkeley) | ARC founder | AI alignment, evaluation |
### Approach Comparison
| Approach | Toner | Karnofsky | Amodei |
|----------|-------|-----------|--------|
| Primary lever | Policy/governance | Grantmaking | Lab leadership |
| Technical focus | Low (policy-oriented) | Medium (strategy) | High (research) |
| China focus | High | Low | Low |
| Government engagement | Very high | Medium | Medium |
| Public communication | High | High | Medium |
### Influence Mechanisms
| Figure | Mechanism | Estimated Impact |
|--------|-----------|------------------|
| Toner | Congressional testimony, CSET research, media | Moderate policy influence; limited on technical development |
| Karnofsky | \$300M+ in grants | High influence on field direction and funding |
| Amodei | Controls Anthropic resources | Very high on one major lab's approach |
## Public Communication and Media Presence
### Major Podcast Appearances
| Podcast | Host | Date | Topic |
|---------|------|------|-------|
| 80,000 Hours | Rob Wiblin | 2019 | CSET founding and AI policy careers |
| 80,000 Hours | Rob Wiblin | 2024 | Geopolitics of AI in China and Middle East |
| TED AI Show | Bilawal Sidhu | May 2024 | OpenAI board crisis, AI regulation |
| Cognitive Revolution | Nathan Labenz | 2024 | AI safety, regulatory approaches |
| Clearer Thinking | Spencer Greenberg | 2024 | AI, U.S.-China relations, OpenAI board |
| Foresight Institute | | 2024 | "Who gets to decide AI's future?" |
### Writing Venues
| Publication | Type | Topics |
|-------------|------|--------|
| Foreign Affairs | Op-eds | U.S.-China competition, Chinese AI |
| The Economist | Op-eds | U.S.-China bilateral relations |
| TIME | Op-eds | AI governance |
| GiveWell Blog | Analysis | AI policy research (2015-2016) |
| CSET Publications | Research | AI safety, China, standards |
### Social Media Presence
Toner maintains active presence on X (formerly Twitter) at @hlntnr, where she shares research, responds to coverage, and occasionally disputes inaccurate reporting about her role in the OpenAI crisis.
## Beijing Experience and China Expertise
### Nine Months in Beijing (2018)
| Aspect | Details |
|--------|---------|
| Duration | 9 months |
| Affiliation | Oxford University's Centre for the Governance of AI (Research Affiliate) |
| Focus | Chinese AI ecosystem, AI and defense |
| Language Study | Mandarin Chinese |
| Outcome | Built rare firsthand expertise on Chinese AI among Western researchers |
### China Research Focus Areas
| Area | Key Findings |
|------|--------------|
| AI Capabilities | Chinese AI lags U.S.; heavily relies on American research/technology |
| Data Governance | Different approach to privacy; potential training data advantages |
| Military AI | Military-civil fusion creates different development dynamics |
| Talent | Competition for researchers is key variable |
| Regulation | China is implementing AI regulations despite perception otherwise |
### Implications for U.S. Policy
Toner's China expertise shapes her policy recommendations:
| Policy Area | Toner's Position Based on China Research |
|-------------|------------------------------------------|
| Export Controls | Supports protecting AI IP; "adversaries cannot have easy access" |
| Immigration | U.S. must maintain talent advantage; China competes for researchers |
| Regulation | U.S. regulation won't cede leadership to China; concerns "overblown" |
| Research Funding | U.S. needs major federal investment in fundamental AI research |
## Sources and References
### Primary Sources
| Type | Source | Description |
|------|--------|-------------|
| Profile | [CSET Staff Page](https://cset.georgetown.edu/staff/helen-toner/) | Official biography and publication list |
| Profile | [TIME 100 AI 2024](https://time.com/7012863/helen-toner/) | TIME's profile on Toner's influence |
| Interview | [TED AI Show (May 2024)](https://www.ted.com/pages/what-really-went-down-at-openai-and-the-future-of-regulation-w-helen-toner) | First longform interview after OpenAI investigation |
| Interview | [80,000 Hours Podcast](https://80000hours.org/podcast/episodes/helen-toner-ai-policy-washington-dc/) | In-depth discussion of AI geopolitics |
| Testimony | [Senate Judiciary (Sep 2024)](https://www.judiciary.senate.gov/imo/media/doc/2024-09-17_pm_-_testimony_-_toner.pdf) | Written testimony on AI oversight |
| Testimony | [USCC (June 2019)](https://cset.georgetown.edu/wp-content/uploads/Toner-USCC-Testimony-6.7.19.pdf) | Testimony on China's pursuit of AI |
### Secondary Sources
| Type | Source | Description |
|------|--------|-------------|
| Wikipedia | [Helen Toner](https://en.wikipedia.org/wiki/Helen_Toner) | Comprehensive biographical article |
| News | [CNBC (May 2024)](https://www.cnbc.com/2024/05/29/former-openai-board-member-explains-why-ceo-sam-altman-was-fired.html) | Coverage of Toner's TED AI Show revelations |
| News | [Fortune (May 2024)](https://fortune.com/2024/05/29/openai-sam-altman-helen-toner-ted-ouster-non-profit-board/) | Details on ChatGPT launch disclosure |
| Analysis | [Decrypt (2025)](https://decrypt.co/347349/inside-deposition-showed-openai-nearly-destroyed-itself) | Coverage of Sutskever deposition revelations |
| Article | [Foreign Affairs](https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation-helen-toner) | Toner's op-ed on China's AI capabilities |
| News | [Fast Company](https://www.fastcompany.com/91401132/helen-toner-wants-to-be-the-peoples-voice-in-the-ai-safety-debate) | Profile on Toner's AI safety advocacy |
| Interview | [Journal of Political Risk](https://www.jpolrisk.com/the-risks-of-ai-an-interview-with-georgetowns-helen-toner/) | In-depth interview on AI risks |
| Announcement | [CSET Interim ED](https://cset.georgetown.edu/article/helen-toner-named-interim-executive-director/) | Appointment announcement |
### Related Profiles
- Holden Karnofsky - Former Coefficient Giving co-CEO who invited Toner to OpenAI board
- Ilya Sutskever - OpenAI co-founder and board member who co-voted to remove Altman
- Sam Altman - OpenAI CEO removed and reinstated in November 2023
- Dario Amodei - Anthropic CEO; Anthropic was discussed as potential merger partner
### Academic Sources
| Type | Citation |
|------|----------|
| Scholar Profile | [Google Scholar](https://scholar.google.com/citations?user=NNnQg0MAAAAJ&hl=en) - 3,286+ citations |
| EA Forum | [Helen Toner: Building Organizations](https://forum.effectivealtruism.org/posts/hP8PEXjDQkw5Wyx7x/helen-toner-building-organizations) |
| EA Forum | [Helen Toner: Sustainable Motivation](https://forum.effectivealtruism.org/posts/WuWDS4SmtLNd6sKtb/helen-toner-sustainable-motivation) |
### Additional News Coverage
| Outlet | Article | Date |
|--------|---------|------|
| Bloomberg | [Ex-OpenAI Director Says Board Learned of ChatGPT Launch on Twitter](https://www.bloomberg.com/news/articles/2024-05-28/ex-openai-director-says-board-learned-of-chatgpt-launch-on-twitter) | May 2024 |
| South China Morning Post | [Former OpenAI director details ousting of CEO Sam Altman](https://www.scmp.com/tech/tech-trends/article/3264508/former-openai-director-details-ousting-ceo-sam-altman-says-board-learned-chatgpt-launch-twitter) | May 2024 |
| Engadget | [OpenAI's board allegedly learned about ChatGPT launch on Twitter](https://www.engadget.com/openais-board-allegedly-learned-about-chatgpt-launch-on-twitter-235643014.html) | May 2024 |
| The Wire China | [Helen Toner on Setting the Rules for AI](https://www.thewirechina.com/2024/10/06/helen-toner-on-setting-the-rules-for-ai/) | October 2024 |
| Axios | [Helen Toner on the AI risk "you could not really talk about"](https://www.axios.com/2025/09/19/helen-toner-ai-risk) | September 2025 |
## Timeline Summary
| Year | Event |
|------|-------|
| 1992 | Born in Melbourne, Victoria, Australia |
| 2014 | BSc Chemical Engineering, University of Melbourne; became leader of EA Melbourne chapter |
| 2015-2016 | Research Analyst at GiveWell |
| 2016-2017 | Senior Research Analyst at Coefficient Giving (then Open Philanthropy) |
| 2017 | Recommended \$1.76M in AI governance grants |
| 2018 | Research Affiliate at Oxford GovAI; lived in Beijing studying Chinese AI |
| Jan 2019 | Joined Georgetown CSET as Director of Strategy at founding |
| 2021 | MA Security Studies, Georgetown University; joined OpenAI board |
| Mar 2022 | Became CSET Director of Strategy and Foundational Research Grants |
| Oct 2023 | Co-authored "AI and Costly Signals" paper, reportedly creating tension with Altman |
| Nov 17, 2023 | Voted to remove Sam Altman as OpenAI CEO |
| Nov 21, 2023 | Resigned from OpenAI board after Altman's reinstatement |
| May 2024 | First public interview about OpenAI crisis (TED AI Show) |
| Sep 2024 | Testified before Senate Judiciary Subcommittee |
| 2024 | Named to TIME 100 Most Influential People in AI |
| May 2025 | Testified before House Judiciary Subcommittee |
| Sep 2025 | Appointed CSET Interim Executive Director |
## Key Quotes
### On AI Safety
> "Building AI systems that are safe, reliable, fair, and interpretable is an enormous open problem. Research into these areas has grown over the past few years, but still only makes up a fraction of the total effort poured into building and deploying AI systems. If we're going to end up with trustworthy AI systems, we'll need far greater investment and research progress in these areas."
### On Governance
> "The laboratory of democracy has always seemed pretty valuable to me. I hope that these different experiments that are being run in how to govern this technology are treated as experiments, and can be adjusted and improved along the way."
### On Sam Altman
> "For years, Sam had made it really difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board."
### On Destroying OpenAI
According to Sutskever's deposition testimony, when warned that OpenAI would collapse without Altman, Toner allegedly responded that destroying OpenAI "could be consistent with its safety mission." Toner has disputed this characterization.
### On China Competition
> "Looking at Chinese AI development, the AI regulations they are already imposing, and the macro headwinds they face leads her to conclude they are far from being poised to overtake the United States."
### On Her Life's Work
> "My life's work is to consult with lawmakers to help them design AI policy that is sensible and connected to the realities of the technology."