Sam Altman

Person

Comprehensive biographical profile of Sam Altman documenting his role as OpenAI CEO, timeline predictions (AGI within presidential term, superintelligence in "few thousand days"), and controversies including November 2023 board crisis and safety team departures. Includes detailed 'Statements & Track Record' section analyzing prediction accuracy—noting pattern of directional correctness on AI trajectory but consistent overoptimism on specific timelines, plus tension between safety rhetoric and deployment practices.

Key Facts
| Fact | Value |
|---|---|
| Net worth | $2.8 billion (as of 2024) |
| OpenAI salary | $76,001/year (as of 2024) |
| Loopt sale price | $43 million |
| Early Stripe investment | $15,000 for 2% (2009) |
| YC companies funded | ≈1,900 |

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Role | CEO of OpenAI | Leading developer of GPT-4, ChatGPT, and frontier AI systems |
| Influence Level | Very High | Oversees company valued at $500 billion+; ChatGPT reached 100M users faster than any product in history |
| AI Safety Stance | Moderate/Pragmatic | Signed extinction risk statement; advocates gradual deployment; criticized by safety researchers for prioritizing capabilities |
| Timeline Views | Near-term AGI | "AGI will probably get developed during this president's term" (2024); "superintelligence in a few thousand days" |
| Regulatory Position | Pro-regulation | Called for licensing agency in Senate testimony; supports "thoughtful" government oversight |
| Key Controversy | November 2023 firing | Board cited lack of candor; reinstated after 95% of employees threatened to quit |
| Net Worth | ≈$2.8 billion | From venture investments (Reddit, Stripe, Helion); holds no OpenAI equity |
| Other Ventures | Worldcoin, Helion, Oklo | Eye-scanning crypto project; nuclear fusion; nuclear fission |

Key Links

| Source | Link |
|---|---|
| Official Website | openai.com |
| Wikipedia | en.wikipedia.org |
| Britannica | britannica.com |

Personal Details

| Attribute | Details |
|---|---|
| Full Name | Samuel Harris Altman |
| Born | April 22, 1985, Chicago, Illinois |
| Education | Stanford University, computer science (dropped out after 2 years) |
| Spouse | Oliver Mulherin (married January 2024) |
| Children | One child (born February 2025) |
| Residence | San Francisco, California |
| Net Worth | ≈$2.8 billion (primarily venture investments) |
| OpenAI Salary | $76,001/year (holds no equity) |
| Wikipedia | Sam Altman |

Overview

Sam Altman is the CEO of OpenAI, the artificial intelligence company behind ChatGPT, GPT-4, and DALL-E. He has become one of the most influential figures in AI development, steering the company through its transformation from a nonprofit research lab into a commercial powerhouse valued at more than $500 billion. His leadership has been marked by both remarkable commercial success and significant controversy, including his brief firing and rapid reinstatement in November 2023.

Altman's career before OpenAI established him as a prominent Silicon Valley figure. He co-founded the location-based social network Loopt at age 19, became president of Y Combinator at 28, and helped fund hundreds of startups including Airbnb, Stripe, Reddit, and DoorDash. His transition to full-time OpenAI leadership in 2019 marked a pivot from startup investing to direct involvement in AI development.

His positions on AI risk occupy a complex middle ground. He has signed statements declaring AI an extinction-level threat alongside nuclear war, while simultaneously racing to deploy increasingly powerful systems. This tension between acknowledging catastrophic risks and accelerating capabilities development has made him a controversial figure in AI safety debates. Critics argue his warnings are performative while his actions prioritize commercial success over safety; supporters contend his gradual deployment philosophy represents the most realistic path to beneficial AI.

Career Timeline

| Year | Event | Details |
|---|---|---|
| 1985 | Born | April 22, Chicago, Illinois; raised in St. Louis, Missouri |
| ≈1993 | First computer | Received at age 8; attended John Burroughs School |
| 2003 | Stanford | Enrolled to study computer science |
| 2005 | Loopt founded | Co-founded location-based social network at age 19; Y Combinator's first batch |
| 2005 | Stanford dropout | Left after 2 years to focus on Loopt |
| 2011 | Y Combinator | Became part-time partner at YC |
| 2012 | Loopt acquired | Sold to Green Dot Corporation for $43 million |
| 2012 | Hydrazine Capital | Co-founded venture fund with brother Jack; $21 million initial fund |
| 2014 | YC President | Became president of Y Combinator, succeeding Paul Graham |
| 2015 | OpenAI co-founded | Co-founded with Elon Musk, Greg Brockman, Ilya Sutskever, and others |
| 2015 | YC Continuity | Launched $700 million equity fund for maturing YC companies |
| 2018 | Musk departure | Elon Musk resigned from OpenAI board |
| 2019 | OpenAI CEO | Left Y Combinator to become full-time OpenAI CEO |
| 2019 | Tools for Humanity | Co-founded Worldcoin parent company |
| 2022 | ChatGPT launch | November release; 100 million users in 2 months |
| 2023 | Senate testimony | May 16; called for AI licensing agency |
| 2023 | Board crisis | November 17-22; fired and reinstated within 5 days |
| 2024 | Marriage | January 24; married Oliver Mulherin in Hawaii |
| 2024 | Restructuring begins | September; plans announced to convert to for-profit |
| 2025 | Child born | February 2025; first child with husband |
| 2025 | OpenAI PBC | October; OpenAI restructured as public benefit corporation |

Pre-OpenAI Career

Loopt (2005-2012)

| Aspect | Details |
|---|---|
| Role | Co-founder, CEO |
| Product | Location-based social networking mobile app |
| Funding | Raised $30+ million in venture capital |
| Y Combinator | One of first 8 companies in YC's inaugural batch (2005) |
| Initial YC Investment | $6,000 per founder |
| Partnerships | Sprint, AT&T, other wireless carriers |
| Outcome | Failed to achieve user traction; acquired for $43 million |
| Acquirer | Green Dot Corporation (March 2012) |

Loopt was Altman's first significant venture, founded when he was 19 and still a Stanford undergraduate. The app allowed users to share their location with friends, a concept that was early to the market but failed to gain widespread adoption. Despite partnerships with major carriers and significant venture funding, the company never achieved product-market fit.

Y Combinator (2011-2019)

| Aspect | Details |
|---|---|
| Role | Partner (2011), President (2014-2019) |
| Predecessor | Paul Graham (co-founder) |
| Companies Funded | ≈1,900 during tenure |
| Notable Companies | Airbnb, Stripe, Reddit, DoorDash, Instacart, Twitch, Dropbox |
| YC Continuity | Founded $700 million growth fund (2015) |
| YC Research | Founded nonprofit research lab; contributed $10 million |
| Goal | Aimed to fund 1,000 companies per year |

Under Altman's leadership, Y Combinator expanded dramatically. He broadened the types of companies funded to include "hard technology" startups in areas like nuclear energy, biotechnology, and aerospace. By the time he departed in 2019, YC had become the most prestigious startup accelerator globally.

Hydrazine Capital (2012-present)

| Aspect | Details |
|---|---|
| Co-founder | Jack Altman (brother) |
| Initial Fund | $21 million |
| Major Backer | Peter Thiel (largest contributor) |
| Portfolio | 400+ companies |
| Strategy | 75% allocated to Y Combinator companies |
| Notable Returns | Reddit (9% stake pre-IPO, ≈$1.4B value); Stripe ($15K for 2% in 2009) |

Hydrazine Capital became a major source of Altman's personal wealth. His early bet on Stripe in 2009, paying $15,000 for a 2% stake, grew to be worth hundreds of millions as Stripe's valuation reached $65 billion.

OpenAI Founding and Evolution

The Founding (2015)


OpenAI emerged from Altman and Musk's shared concerns about the concentration of AI capabilities at Google following its 2014 acquisition of DeepMind. In March 2015, Altman emailed Musk with a proposal for a "Manhattan Project" for AI under Y Combinator's umbrella. The two co-chairs recruited a founding team including Ilya Sutskever, Greg Brockman, and others.

The organization was structured as a nonprofit with a stated mission to ensure artificial general intelligence benefits "all of humanity." Co-founders pledged $1 billion, though actual donations fell far short; by 2019, only $130 million had been collected.

Structural Evolution

| Period | Structure | Key Changes |
|---|---|---|
| 2015-2019 | Nonprofit | Pure research focus; mission-driven |
| 2019 | Capped-profit LP | Created to attract talent and capital; returns capped at 100x |
| 2019-2024 | Nonprofit-controlled | Nonprofit board retained ultimate control |
| October 2025 | Public benefit corporation | For-profit with charitable foundation; removes profit caps |

The 2019 creation of the capped-profit subsidiary was justified as necessary to compete for talent and compute resources. Altman later explained: "Wary of the incentives of investors influencing AGI, OpenAI's leadership team developed a 'capped profit' subsidiary, which could raise funds for investors but would be governed by a nonprofit board."
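A minimal sketch of how such a return cap works in principle, assuming the publicly described 100x cap on first-round investors (this is an illustration, not OpenAI's actual contractual terms or accounting):

```python
# Illustrative sketch (not OpenAI's actual terms): splitting a gross return
# between a capped investor share and the residual flowing to the nonprofit.
def capped_return(investment: float, gross_return: float, cap_multiple: float = 100.0):
    """Return (investor_share, nonprofit_residual) under a simple return cap."""
    cap = investment * cap_multiple
    investor_share = min(gross_return, cap)
    nonprofit_residual = max(gross_return - cap, 0.0)
    return investor_share, nonprofit_residual

# Example: a hypothetical $10M first-round stake that eventually returns $1.5B gross.
investor, nonprofit = capped_return(10e6, 1.5e9)
print(f"Investor keeps ${investor/1e9:.2f}B; ${nonprofit/1e6:.0f}M exceeds the cap")
# -> Investor keeps $1.00B; $500M exceeds the cap
```

The design intent, as described at the time, was that anything above the cap would accrue to the nonprofit rather than to investors; the October 2025 restructuring removed these caps.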

Microsoft Partnership

| Milestone | Date | Amount | Terms |
|---|---|---|---|
| Initial investment | 2019 | $1 billion | Exclusive cloud partnership |
| Extended partnership | January 2023 | $10 billion | Largest single AI investment |
| Total committed | 2023 | ≈$13 billion | Microsoft receives 49% of profits until recouped |
| Current stake | October 2025 | ≈27% | Post-restructuring; valued at ≈$135 billion |

The Microsoft relationship transformed OpenAI from a research lab into a commercial powerhouse. The partnership provided both capital and cloud infrastructure, enabling the training runs that produced GPT-4 and subsequent models. However, the relationship has also drawn criticism for potentially compromising OpenAI's independence and mission focus.

November 2023 Board Crisis

Timeline of Events

| Date | Event | Details |
|---|---|---|
| November 17 | Firing announced | Board stated Altman was "not consistently candid"; Mira Murati named interim CEO |
| November 17 | Brockman resigns | Co-founder learned of firing moments before announcement; resigned same day |
| November 18-19 | Negotiations begin | Investors and employees press for reversal |
| November 20 | Microsoft offer | Satya Nadella announces Altman will lead new Microsoft AI team |
| November 20 | Employee letter | 738 of 770 employees sign letter threatening to quit |
| November 20 | Sutskever regret | Chief scientist publicly expresses regret for role in firing |
| November 20 | New interim CEO | Twitch co-founder Emmett Shear named interim CEO |
| November 21 | Board negotiations | Agreement reached for new board composition |
| November 22 | Reinstatement | Altman returns as CEO; new board: Bret Taylor (Chair), Larry Summers, Adam D'Angelo |

Board's Stated Reasons

Former board member Helen Toner later provided detailed explanations for the board's decision in a TED AI Show interview:

"For years, Sam had made it very difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board."

| Issue | Allegation | Source |
|---|---|---|
| ChatGPT launch | Board not informed before November 2022 release; learned on Twitter | Helen Toner interviews |
| Startup fund ownership | Altman did not disclose he owned the OpenAI startup fund while "constantly claiming to be an independent board member with no financial interest" | Board members |
| Safety processes | Provided "inaccurate information about the small number of formal safety processes that the company did have in place" | Helen Toner |
| Executive complaints | Two executives reported "psychological abuse" with documentation | October 2023 board conversations |
| Information withholding | Pattern of "misrepresenting things" and "in some cases outright lying" | Helen Toner |

The Toner paper incident (October 2023): Toner published a paper that Altman perceived as critical of OpenAI while praising Anthropic's approach. According to Toner and The New Yorker:

  • Altman reprimanded her, claiming the paper "could cause problems" due to an FTC investigation
  • He then approached other board members to push for her removal
  • "He'd play them off against each other by lying about what other people thought. Things like that had been happening for years." (The New Yorker)
  • Toner: "The problem was that after the paper came out Sam started lying to other board members in order to try and push me off the board"

This incident was occurring while the board was "already talking pretty seriously about whether we needed to fire him."

Resolution and New Governance

The crisis resolved when 95% of OpenAI employees signed an open letter threatening to leave if the board didn't reinstate Altman. Microsoft's simultaneous offer to hire Altman and the entire OpenAI team created leverage that forced the board's capitulation.

The new board replaced the mission-focused nonprofit directors with business-oriented members:

  • Bret Taylor (Chair): Former Salesforce co-CEO, Twitter chairman
  • Larry Summers: Former Treasury Secretary, Harvard president
  • Adam D'Angelo: Quora CEO (only remaining original board member)

This governance change represented a significant shift away from the safety-focused oversight that had originally prompted the firing.

Analysis of the Crisis

The November 2023 crisis revealed several structural tensions in AI governance:

| Tension | Manifestation | Outcome |
|---|---|---|
| Mission vs. Commercial | Nonprofit board vs. $90B valuation | Commercial interests prevailed |
| Safety vs. Speed | Board concerns vs. deployment pressure | Speed prioritized |
| Oversight vs. CEO Power | Board authority vs. employee loyalty | CEO power consolidated |
| Investor vs. Public Interest | Microsoft's stake vs. nonprofit mission | Investor interests protected |

The crisis demonstrated that traditional nonprofit governance mechanisms may be insufficient to constrain AI companies with significant commercial value. The threat of mass employee departure, combined with investor pressure, effectively nullified the board's oversight function.

Views on AI Safety and Timelines

AGI Timeline Predictions

| Statement | Date | Context |
|---|---|---|
| "AGI will probably get developed during this president's term" | 2024 | Bloomberg Businessweek interview |
| "We may see the first AI agents join the workforce" in 2025 | January 2025 | Blog post "Reflections" |
| "Superintelligence in a few thousand days" | 2024 | OpenAI blog |
| "I think AGI will probably hit sooner than most people think and it will matter much less" | 2024 | NYT Dealbook summit |
| "We are now confident we know how to build AGI as we have traditionally understood it" | 2025 | Blog post "Reflections" |

Altman's timeline predictions have become progressively more aggressive. In January 2025, he wrote that OpenAI is "beginning to turn our aim beyond [AGI], to superintelligence in the true sense of the word."
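To make the "few thousand days" phrasing from the 2024 OpenAI blog post concrete, a quick conversion into calendar years (the specific day counts below are illustrative, not Altman's):

```python
# Back-of-envelope conversion of "a few thousand days" into calendar years.
for days in (2000, 3000, 5000):
    print(f"{days} days ≈ {days / 365.25:.1f} years")
# 2000 days ≈ 5.5 years; 3000 days ≈ 8.2 years; 5000 days ≈ 13.7 years
```

Read this way, the claim places superintelligence somewhere between the early 2030s and the late 2030s, depending on how "a few" is interpreted.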

On Existential Risk

Altman has made numerous statements acknowledging AI's potential for catastrophic harm:

"The development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

"AI will probably, most likely, sort of lead to the end of the world, but in the meantime, there will be great companies built." (2015 tech conference)

"If this technology goes wrong, it can go quite wrong." (Senate testimony, May 2023)

"The bad case... is like lights out for all of us." (Lex Fridman podcast)

In May 2023, Altman signed the Center for AI Safety statement declaring: "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Gradual Deployment Philosophy

Altman advocates for iterative release as a safety strategy:

"The best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology, learning from experience, and continuing to make the technology safer."

"A slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt."

"The world I think we're heading to and the safest world, the one I most hope for, is the short timeline slow takeoff."

This philosophy has been criticized by those who argue that commercial pressures make genuine caution impossible, and that "gradual deployment" has in practice meant racing to release capabilities as fast as possible.

Regulatory Positions

In his May 2023 Senate testimony, Altman proposed:

| Proposal | Details |
|---|---|
| Licensing agency | New U.S. or global body to license powerful AI systems |
| Safety testing | Mandatory testing before deployment of dangerous models |
| Independent audits | Third-party evaluation of AI systems |
| International coordination | Suggested IAEA as model for global AI governance |
| Capability thresholds | Regulation above certain capability levels |

However, critics note that OpenAI has continued to deploy increasingly powerful systems without waiting for such regulatory frameworks to be established.

Evolution of Safety Rhetoric

Altman's public statements on AI risk have shifted over time:

| Period | Stance | Representative Quote |
|---|---|---|
| 2015 | Maximally alarmed | "AI will probably, most likely, sort of lead to the end of the world" |
| 2019-2022 | Cautiously concerned | Emphasized gradual deployment and safety research |
| 2023 | Publicly advocating regulation | "If this technology goes wrong, it can go quite wrong" |
| 2024-2025 | Confident in approach | "We are now confident we know how to build AGI" |

This evolution tracks with OpenAI's commercial success and may reflect either genuine confidence in safety progress or the influence of commercial pressures on public messaging.

Statements & Track Record

For a detailed analysis of Altman's predictions and their accuracy, see the full track record page.

Summary: Directionally correct on AI trajectory; consistently overoptimistic on specific timelines; rhetoric has shifted from "existential threat" to "will matter less than people think."

| Category | Examples |
|---|---|
| Correct | AI needing massive capital, cost declines, legal/medical AI capability |
| Wrong | Self-driving (2015), ChatGPT Pro profitability, GPT-5 launch execution |
| Pending | AGI by 2025-2029, "superintelligence in a few thousand days" |

Notable tension: His safety rhetoric ("greatest threat to humanity" in 2015; signed extinction risk statement in 2023) contrasts with aggressive deployment practices and later claims that "AGI will matter much less than people think."

Communication Patterns and Consistency

As analyzed in "What's Going on With OpenAI's Messaging?", OpenAI appears to make conflicting promises tailored to different audiences:

| Audience | Message |
|---|---|
| Talent | "We will reach transformative AI" |
| Public/PR | "We care deeply about safety" |
| Regulators | "We're not making anything dangerous" |
| Investors | "Safety won't require significant resources or hurt competitiveness" |

The recommendation from that analysis: "ignore the noise" and "look at actual actions taken alone."

AI Risk Rhetoric Evolution

| Date | Statement | Context | Source |
|---|---|---|---|
| 2015 | "AI will probably, most likely, sort of lead to the end of the world" | Conference (tech/startup audience) | Tom's Guide |
| 2023 | "Worst-case scenario is lights out for all of us" | Podcast (general public) | Lex Fridman Podcast |
| May 2023 | Testified AI poses "existential risk" requiring oversight | Senate hearing (regulators) | Senate Judiciary |
| 2024-2025 | "AGI will probably hit sooner than most people think and it will matter much less" | Bloomberg interview (investors/business) | Bloomberg |

Assessment: Major rhetoric shift. Early statements emphasized existential danger; recent statements minimize significance. Risk-minimizing language appeared after OpenAI's commercial pivot accelerated and during fundraising.

Regulation Stance Evolution

| Date | Statement | Context | Source |
|---|---|---|---|
| May 2023 | Testified AI needs government oversight; proposed federal licensing agency as "#1 recommendation" | Senate hearing | Senate Judiciary |
| May 2023 | Simultaneously lobbied behind the scenes to weaken the EU AI Act | Private lobbying | TIME Exclusive |
| Aug 2024 | OpenAI officially opposed California SB 1047 | Corporate position | Newsweek |
| May 2025 | Said requiring government approval before releases would be "disastrous" | Senate testimony | Fortune |

Former OpenAI researchers William Saunders and Daniel Kokotajlo wrote: "When actual regulation is on the table, he opposes it." (Windows Central)

Open Source Stance

| Date | Position | Context | Source |
|---|---|---|---|
| 2015-2018 | OpenAI published research openly (GPT, GPT-2 initial release) | Founding era | OpenAI blog archives |
| 2019-2023 | Shifted to closed-source; GPT-4 technical details not released, citing safety | Commercial pivot | GPT-4 Technical Report |
| Jan 2025 | "I personally think we have been on the wrong side of history here and need to figure out a different open source strategy" | Reddit AMA, days after DeepSeek | VentureBeat |

The timing of the "wrong side of history" admission—coming days after DeepSeek's open-source R1 model caused Nvidia's stock to lose $600 billion—suggests reactive positioning.

Hype Cycles and Twitter Patterns

Altman's Twitter/X activity shows a consistent pattern: cryptic teasers, bold claims, hype accumulation, then partial walkbacks while retaining attention benefits. (Riley Brown analysis)

| Phase | Tactic | Effect |
|---|---|---|
| Teaser | Cryptic tweets, vague hints about upcoming releases | Builds anticipation |
| Claim | Bold statements about capabilities (AGI, benchmarks) | Generates headlines |
| Speculation | Let rumors run without correction | Amplifies attention |
| Walkback | Blame "Twitter hype," claim misunderstanding | Maintains credibility |

Examples include:

  • The "her" tweet during GPT-4o launch (May 2024) referencing Scarlett Johansson
  • The o3/AGI hype cycle (December 2024), claiming it "passed the ARC-AGI challenge" then walking back with "pls chill and cut your expectations 100x"
  • Cryptic singularity tweets (January 2025): "near the singularity; unclear which side"

Patterns from Karen Hao's Empire of AI

| Pattern | Description | Source |
|---|---|---|
| "Paper cuts" | "He sometimes lied about details so insignificant that it was hard to say why the dishonesty mattered at all. But over time, those tiny 'paper cuts'... led to an atmosphere of pervasive distrust" | LessWrong summary |
| Mirroring | "He's so good at adjusting to what you say, and you really feel like you're making progress with him. And then you realize over time that you're actually just running in place" | Same |
| Opposing views | "Listened carefully to people to understand what they wanted, then promised to deliver on those wants, only to do the same for others with opposing views" | idratherbewriting review |

Key observation from Hao: "No one could definitively say what Sam actually believes, since he seems to align with opposing views depending on who he talks to."

The 20% Compute Promise

| Date | Commitment/Event | Source |
|---|---|---|
| July 2023 | OpenAI announced Superalignment team, promised 20% of compute for AI safety work | OpenAI announcement |
| May 2024 | Jan Leike resigned, stated team had been "struggling for compute" | Wired |
| May 2024 | OpenAI confirmed Superalignment team "is no more" | Same |

Commitments vs. Reality: "Planning for AGI and Beyond"

OpenAI's February 2023 policy document made several commitments:

| Commitment | What Was Said | What Happened | Status |
|---|---|---|---|
| Nonprofit control | "A nonprofit that governs us and lets us operate for the good of humanity" | Nonprofit reduced from 100% control to 26% equity stake | ❌ Walked back |
| Profit caps | Profits capped at 100x ROI, decreasing over time | Caps eliminated entirely in 2024 | ❌ Eliminated |
| Gradual deployment | "A gradual transition gives people, policymakers, and institutions time" | Deployed ChatGPT without board knowledge | ⚠️ Questionable |

Note: The original blog post now carries a disclaimer: "Updated October 28, 2025: This post contains outdated information about our structure."

The $7 Trillion Example

In February 2024, while publicly discussing AI risks and the need for careful development, Altman was simultaneously seeking up to $7 trillion for AI chip infrastructure—an amount larger than the GDP of Germany and 13x the entire global chip industry's 2023 revenues. (WSJ via CNBC)
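A rough scale check of the reported figure, using approximate reference values assumed here (Germany's 2023 GDP of roughly $4.5 trillion and global semiconductor revenue of roughly $530 billion; neither number is from the cited reporting):

```python
# Rough sanity check on the reported $7T ask against approximate 2023 figures.
ask = 7e12
germany_gdp_2023 = 4.5e12        # approximate, USD
global_chip_rev_2023 = 0.53e12   # approximate, USD
print(f"≈{ask / germany_gdp_2023:.1f}x Germany's GDP")        # ≈ 1.6x
print(f"≈{ask / global_chip_rev_2023:.0f}x global chip revenue")  # ≈ 13x
```

Under these assumptions the "larger than Germany's GDP" and "13x the chip industry" comparisons are mutually consistent.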

Criticisms and Controversies

Pre-OpenAI: Alleged Pattern

Helen Toner alleged that Altman's behavior at OpenAI followed a pattern from earlier roles:

| Role | Allegation | Source |
|---|---|---|
| Loopt (CEO, 2005-2012) | "The management team went to the board there twice and asked the board to fire him for what they called deceptive and chaotic behavior" | Toner on TED AI Show |
| Y Combinator (President, 2014-2019) | Toner claimed he "was actually fired from his previous job at Y Combinator, which was hushed up at the time" | Same |

Disputed: Paul Graham (YC co-founder) denied Altman was "fired," saying YC told him to choose between YC and OpenAI, and he chose OpenAI. The Washington Post reported he was "asked to leave" as "part of a pattern of clashes that some attribute to his self-serving approach." (TechCrunch, Washington Post)

Early OpenAI: Co-Founders' Trust Concerns (2017)

Leaked emails from the Musk lawsuit reveal OpenAI's own co-founders questioned Altman's motivations early on:

| Date | From | Quote | Source |
|---|---|---|---|
| Sept 2017 | Sutskever & Brockman to Altman | "We haven't been able to fully trust your judgements throughout this process, because we don't understand your cost function... Is AGI truly your primary motivation?" | Transformer News |
| Sept 2017 | Sutskever & Brockman | "You stated that you don't want to control the final AGI, but during this negotiation, you've shown to us that absolute control is extremely important to you" | Same |
| Sept 2017 | Sutskever & Brockman | Expressed concern that Altman's proposed structure would allow him to become "a dictator in the company" | Same |

Altman "admitted that he lost a lot of trust with Greg and Ilya through this process."

Other leaked email revelations:

  • Altman proposed an ICO (cryptocurrency offering) to fund OpenAI; Musk opposed it, warning it would "result in a massive loss of credibility" (TechCrunch)
  • Most of the announced $1 billion in founding commitments never materialized; Musk provided nearly all early capital ($38-44M by 2018)

Founding Purpose vs. Current Direction

OpenAI was founded in 2015 explicitly as a "countervailing force" to prevent AI power concentration—specifically to counter Google/DeepMind:

| Source | Statement |
|---|---|
| Sutskever & Brockman email | Described OpenAI as a project to avoid an "AGI dictatorship" |
| Daniel Kokotajlo | "They wanted to create OpenAI to be this countervailing force... not concentrate power so much on Demis' hands" |
| Musk's stated motivation | Counter Google's "reckless pursuit of AI advancements" |

OpenAI is now itself one of the largest concentrations of AI capability, having lost most founding members (many citing safety concerns).

Safety Team Departures (2024)

| Person | Role | Departure | Reason |
|---|---|---|---|
| Ilya Sutskever | Co-founder, Chief Scientist | May 2024 | Resigned after board crisis involvement |
| Jan Leike | Superalignment co-lead | May 2024 | Cited safety concerns; said compute was deprioritized |
| Leopold Aschenbrenner | Safety researcher | 2024 | Allegedly fired for sharing safety document externally |
| Mira Murati | CTO | September 2024 | Announced departure after return to role post-crisis |

The departure of key safety personnel raised questions about OpenAI's commitment to alignment research. Jan Leike stated publicly that OpenAI had deprioritized safety work in favor of "shiny products."

Equity Clawback Contradiction (May 2024)

| Date | Event | Source |
|---|---|---|
| April 2023 | Altman signed documents authorizing equity clawback provisions that could strip departing employees of vested equity if they criticized OpenAI | Vox |
| May 17, 2024 | Vox reports on the existence of these provisions | Same |
| May 18, 2024 | Altman tweets: "this is on me and one of the few times I've been genuinely embarrassed running OpenAI; I did not know this was happening and I should have" | X/Twitter |
| May 22, 2024 | Leaked documents emerge showing Altman's signature (along with Jason Kwon and Brad Lightcap) on documents explicitly authorizing the provisions | The Deep Dive |

The contradiction: On May 18, Altman tweeted that he "did not know this was happening." Four days later, documents emerged showing he had personally signed the authorization in April 2023. Former employee Daniel Kokotajlo stated that refusing to sign the NDA cost him equity worth about 85% of his family's net worth. Altman later stated the provision would be removed.

Scarlett Johansson Voice Controversy (May 2024)

| Date | Event | Source |
|---|---|---|
| Sept 2023 | Altman contacted Johansson to voice ChatGPT; she declined | Fortune |
| May 11, 2024 | Two days before demo, Altman contacts Johansson's agent asking her to reconsider | Same |
| May 13, 2024 | GPT-4o demo launches with "Sky" voice; Altman tweets single word: "her" | CNBC |
| May 20, 2024 | Johansson: "shocked, angered and in disbelief"; says Altman "insinuated that the similarity was intentional" with his tweet | Same |
| May 30, 2024 | Altman: "It's not her voice, it's not supposed to be. I'm sorry for the confusion" | Hollywood Reporter |

Altman has publicly described Her as his favorite movie and said it "certainly more than a little bit inspired" OpenAI (BGR). The timeline—contact for permission → denial → proceed anyway → coy tweet → walkback—led Johansson to say Altman "would make a good Marvel villain." (Fortune)

Worldcoin Privacy Concerns

Altman's Worldcoin project (now "World") has faced regulatory action in multiple jurisdictions:

| Jurisdiction | Action | Issue |
|---|---|---|
| Spain | Suspended operations | Data protection concerns |
| Argentina | Fines issued | Data terms violations |
| Kenya | Criminal investigation, halt | Biometric data collection |
| Hong Kong | Ordered to cease | "Excessive and unnecessary" data collection |

2025 Business Challenges

In late 2025, OpenAI faced significant headwinds that tested Altman's leadership:

| Challenge | Details | Response |
|---|---|---|
| Market share decline | ChatGPT visits fell below 6B monthly; second decline in 2025 | "Code red" memo issued |
| Enterprise competition | Market share dropped to 27%; Anthropic led at 40% | Refocused on enterprise features |
| Cash burn | ≈$8 billion burned in 2025 | Plans to introduce advertising |
| Revenue delays | Agentic systems, e-commerce postponed | "Rough vibes" warning to employees |
| Suicide lawsuit | Family sued after teen's death involving ChatGPT | Altman expressed it weighs on him heavily |

Altman described advertising as OpenAI's "last resort" but acknowledged the company would pursue it given financial pressures.

The OpenAI Files (June 2025)

"The OpenAI Files", compiled by nonprofit watchdog organizations (Midas Project and Tech Oversight Project), aggregates documentation of leadership and governance issues. Key findings:

  • Dario and Daniela Amodei (former VP of Research and VP of Safety) described Altman's tactics as "gaslighting" and "psychological abuse" after leaving to found Anthropic, according to Karen Hao's book Empire of AI (Fortune)
  • The WilmerHale law firm conducted an independent review after Altman's firing. Board member Larry Summers privately told people the investigation found "many instances of Altman saying different things to different people" — though the board decided this didn't preclude him from continuing as CEO (OpenAI Files)
  • Sutskever's documentation: Chief Scientist Ilya Sutskever provided the board with a self-destructing PDF containing Slack screenshots documenting "dozens of examples of lying or other toxic behavior." In his deposition, Sutskever revealed he wrote a 52-page brief stating: "Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another." When asked how long he'd considered firing Altman, Sutskever answered: "At least a year." (Decrypt)
  • Former CTO Mira Murati told staffers she didn't feel "comfortable about Sam leading us to AGI." Sutskever said: "I don't think Sam is the guy who should have the finger on the button for AGI."

Note: The OpenAI Files states it "received no funding, editorial direction, assistance, or support of any kind from Elon Musk, xAI, Anthropic, Meta, Google, Microsoft, or any other OpenAI competitor."

Relationship with Elon Musk

The Altman-Musk relationship has deteriorated from co-founding partnership to legal warfare:

| Period | Relationship Status | Key Events |
|---|---|---|
| 2015 | Close allies | Co-founded OpenAI after dinner meetings about AI risk |
| 2017 | Tensions emerge | Musk complained about nonprofit direction |
| 2017 | Control dispute | Musk requested majority equity, CEO position; rejected |
| 2018 | Departure | Musk resigned from board; told team "probability of success was zero" |
| 2023 | Open hostility | Musk mocked Altman firing as "OpenAI Telenovela" |
| February 2024 | First lawsuit | Musk sued alleging breach of founding agreement |
| August 2024 | Expanded lawsuit | Accused OpenAI of racketeering; claimed $134.5B in damages |
| February 2025 | Buyout attempt | Musk consortium offered $97.4B; rejected by board |
| April 2025 | OpenAI countersues | Accused Musk of harassment, acting for personal benefit |

The Musk-Altman conflict represents more than personal animosity; it reflects fundamental disagreements about AI governance, the role of profit in AI development, and who should control transformative technology. OpenAI has published internal emails showing Musk originally supported the for-profit transition, while Musk argues the current structure betrays the nonprofit mission he helped establish.

Other Ventures

Tools for Humanity / Worldcoin

| Aspect | Details |
|---|---|
| Founded | 2019 |
| Role | Chairman |
| Product | Iris-scanning cryptocurrency verification |
| Technology | "Orb" scans iris to create unique "IrisCode" |
| Token | WLD cryptocurrency |
| Users | 26 million on network; 12 million verified |
| Funding | ≈$200 million from Blockchain Capital, Bain Capital Crypto, a16z |
| US Launch | April 30, 2025 (Austin, Atlanta, LA, Nashville, Miami, San Francisco) |
| Goal | Universal verification of humanity; potential UBI distribution |

Altman envisions Worldcoin as both proof-of-humanity infrastructure for an AI-saturated world and potentially a mechanism for universal basic income distribution.
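A highly simplified sketch of the proof-of-personhood idea described above, assuming only the deduplication concept: each verified person contributes a unique biometric-derived code, and a second registration attempt with the same code is rejected. This is not World's actual protocol, which relies on dedicated Orb hardware and privacy-preserving cryptography; the snippet only illustrates the one-person-one-ID check.

```python
# Illustrative only: deduplication check behind a "proof of personhood" registry.
import hashlib

registered: set[str] = set()

def try_register(iris_code: bytes) -> bool:
    """Register a person if their code hasn't been seen before; return success."""
    digest = hashlib.sha256(iris_code).hexdigest()  # store a hash, not the raw code
    if digest in registered:
        return False   # duplicate: same person attempting a second identity
    registered.add(digest)
    return True

print(try_register(b"example-iris-code-1"))  # True  (new person)
print(try_register(b"example-iris-code-1"))  # False (already registered)
```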

Energy Investments

| Company | Type | Investment | Role |
|---|---|---|---|
| Helion Energy | Nuclear fusion | $375 million personal investment | Chairman |
| Oklo Inc. | Nuclear fission | Significant stake | Chairman |

Altman has been outspoken about AI's massive energy requirements, stating these investments aim to ensure sufficient clean energy for AI infrastructure.

Other Investments

| Company | Sector | Details |
|---|---|---|
| Reddit | Social media | 9% stake pre-IPO (≈$1.4B value) |
| Stripe | Payments | $15K for 2% in 2009 |
| Retro Biosciences | Longevity | $180 million personal investment |
| Humane | AI hardware | Early investor |
| Boom Technology | Supersonic aviation | Investor |
| Cruise | Autonomous vehicles | Investor |

2024-2025 Corporate Restructuring

Timeline

| Date | Development |
|---|---|
| September 2024 | Plans leaked: Altman to receive 7% equity; nonprofit control to end |
| December 2024 | Board announces public benefit corporation plan |
| May 2025 | Initial reversal: announced would remain nonprofit-controlled |
| October 2025 | Final restructuring completed as PBC |

Final Structure

| Element | Details |
|---|---|
| For-profit entity | OpenAI Group PBC (public benefit corporation) |
| Nonprofit entity | OpenAI Foundation (oversight role) |
| Foundation stake | ≈26% of OpenAI Group (≈$130B value) |
| Microsoft stake | ≈27% (≈$135B value) |
| Profit caps | Removed; unlimited investor returns now possible |
| Altman equity | None (controversial decision not to grant equity) |
| Foundation commitment | $25 billion for healthcare, disease research, AI resilience |
| IPO plans | Altman indicated "most likely path" but no timeline |
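As a quick consistency check, the stake percentages and dollar values in the table above imply the same overall company valuation:

```python
# Both reported stakes imply an overall OpenAI Group valuation of ≈$500B.
stakes = {"OpenAI Foundation": (0.26, 130e9), "Microsoft": (0.27, 135e9)}
for holder, (pct, value) in stakes.items():
    print(f"{holder}: ≈${value / pct / 1e9:.0f}B implied total valuation")
# Both lines print ≈$500B, matching the $500 billion+ figure cited elsewhere on this page.
```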

AGI Definition Changes

Previously, the Microsoft partnership included a provision that Microsoft's access to OpenAI technology would terminate if OpenAI achieved AGI. Under the new terms, any AGI claims will be verified by an independent expert panel, preventing unilateral declarations.

Public Assessment

Supporters' View

| Argument | Evidence Cited |
|---|---|
| Responsible leader | Called for regulation; signed extinction risk statement |
| Transparency advocate | Pushed for gradual deployment to build public familiarity |
| Mission-driven | Takes only $76K salary; holds no equity |
| Effective executive | Built OpenAI from research lab to $157B company |
| Realistic about safety | Acknowledges risks while arguing racing is unavoidable |

Critics' View

| Argument | Evidence Cited |
|---|---|
| Says safety, does capability | Safety team departures; compute deprioritized for products |
| Performative risk warnings | Warns of extinction while racing to deploy |
| Corporate capture | Transition from nonprofit to for-profit betrays founding mission |
| Governance failures | Board crisis revealed pattern of non-candor with oversight |
| Concentrating power | Restructuring removes safety-focused oversight |

Center for AI Policy Assessment

The Center for AI Policy has been particularly critical:

"A few years later, Musk left OpenAI, and Altman's interest in existential risk withered away. Once Altman had Musk's money, existential risk was no longer a top priority, and Altman could stop pretending to care about safety."

Influence on AI Policy

Altman has become a significant voice in AI policy discussions globally:

Congressional Engagement

| Date | Venue | Topic | Outcome |
|---|---|---|---|
| May 2023 | Senate Judiciary Subcommittee | AI oversight | Called for licensing agency |
| 2023 | House dinner (60+ lawmakers) | ChatGPT demonstration | Built bipartisan relationships |
| 2024-2025 | Various committees | Ongoing testimony | Continued policy engagement |

International Engagement

Altman has conducted world tours meeting with heads of state and regulators:

| Region | Key Engagements |
|---|---|
| Europe | Met with UK PM, French President; engaged with EU AI Act process |
| Asia | Japan, South Korea, Singapore government meetings |
| Middle East | UAE, Saudi Arabia discussions on AI investment |
| Africa | Kenya (related to Worldcoin operations) |

Policy Positions Summary

| Issue | Altman's Position | Consistency |
|---|---|---|
| Licensing for powerful AI | Supports | Consistent since 2023 |
| International coordination | Supports IAEA-style body | Consistent |
| Open-source frontier models | Generally opposed | Shifted from early OpenAI stance |
| Export controls | Generally supports | Pragmatic alignment with US policy |
| Compute governance | Supports | Consistent |

Key Uncertainties

| Uncertainty | Stakes | Current Trajectory |
|---|---|---|
| Does gradual deployment actually improve safety? | Whether commercial AI development can be made safe | Unclear; some evidence of adaptation, but capabilities accelerating |
| Will Altman's timeline predictions prove accurate? | Resource allocation, policy urgency | Becoming more aggressive; "few thousand days" to superintelligence |
| Can OpenAI maintain safety focus post-restructuring? | Whether commercial pressures overwhelm mission | Concerning; safety team departures, governance changes |
| Will regulatory frameworks emerge in time? | Government capacity to oversee AI | Slow progress despite Altman's calls for regulation |
| How will Musk litigation affect OpenAI? | Corporate stability, public trust | Ongoing legal battles; $134.5B damages claimed |

Sources and Citations

Primary Sources

| Type | Source | Content |
|---|---|---|
| Testimony | Senate Judiciary Committee (May 2023) | AI regulation proposals |
| Blog | Sam Altman's Blog | "Reflections," "Three Observations" |
| Interviews | Lex Fridman Podcast | AI safety views transcript |
| Statement | CAIS Extinction Risk Statement | Signed May 2023 |

News Coverage

| Source | Coverage |
|---|---|
| Wikipedia: Sam Altman | Biography |
| Wikipedia: Removal of Sam Altman | November 2023 crisis |
| TIME: OpenAI Timeline | Corporate history |
| CNN: AI Risk Taker | Risk acknowledgment while deploying |
| Fortune: Altman Quotes | Safety concerns statements |
| CNBC: Board Explanation | Helen Toner interview |
| TIME: Accusations Timeline | Controversies overview |
| TechCrunch: Worldcoin | World rebrand |
| Bloomberg: Restructuring | Corporate changes |

Analysis

| Source | Focus |
|---|---|
| Center for AI Policy | Critical assessment |
| Britannica Money | Biography and facts |
| OpenAI: Elon Musk | Musk relationship history |

Related Entities

| Entity | Relationship |
|---|---|
| OpenAI | CEO since 2019; co-founder 2015 |
| Elon Musk | Former co-chair; now adversary |
| Ilya Sutskever | Co-founder; departed May 2024 |
| Greg Brockman | Co-founder; President |
| Microsoft | Major investor (≈27% stake) |
| Anthropic | Competitor; founded by former OpenAI employees |

Related Pages

Top Related Pages

Concepts

Anthropic · OpenAI · Google DeepMind · Jan Leike · Elon Musk (AI Industry) · Ilya Sutskever

Analysis

Anthropic IPO

Labs

Safe Superintelligence Inc.

Historical

Mainstream Era

Organizations

Chan Zuckerberg Initiative