Longterm Wiki

Updated 2026-03-18

OpenAI

Frontier Lab

Comprehensive organizational profile of OpenAI documenting evolution from 2015 non-profit to Public Benefit Corporation, with detailed analysis of governance crisis, 2024-2025 ownership restructuring (conversion from capped-profit LLC to PBC, completed October 2025), key leadership departures, and capability advancement (o1/o3 reasoning models). Updated with 2025 developments including o3-mini release, 800M weekly active users, Altman's AGI timeline statements, enterprise market share decline from 50% to 25% between 2023 and 2025, and joint safety evaluation with Anthropic in summer 2025.

Type: Frontier Lab
Founded: 2015
Location: San Francisco, CA
Employees: ~3,500
Funding: $18B+
Website: openai.com

Related
People: Sam Altman · Ilya Sutskever · Jan Leike
Organizations: Anthropic
Risks: AI Development Racing Dynamics · Deceptive Alignment

Overview

OpenAI is the AI research company that catalyzed mainstream artificial intelligence adoption through ChatGPT and the GPT model series. Founded in 2015 as a non-profit with the mission to ensure AGI benefits humanity, OpenAI has undergone significant organizational evolution: from open research lab to commercial entity, and from a non-profit governance structure to a Public Benefit Corporation pursuing stated AGI development goals.

The company achieved capability advances through massive scale (175B parameters for GPT-3), pioneered Reinforcement Learning from Human Feedback as a practical alignment technique, and launched ChatGPT, which reached 800 million weekly active users by October 2025[1] and holds 81.13% market share in generative AI chatbot usage[2] (a consumer figure distinct from overall enterprise LLM market share, which declined from approximately 50% to 25% between 2023 and 2025, as discussed in the Competitive Landscape section below). OpenAI's trajectory has involved ongoing tensions between commercial pressures and safety priorities, exemplified by the November 2023 board crisis that temporarily ousted CEO Sam Altman and the 2024 departures of key safety researchers including co-founder Ilya Sutskever.

With over $13 billion in Microsoft investment, a $40 billion funding round led by SoftBank in March 2025[3], and capability advancement through reasoning models like o1 and the recent o3-mini release[4], OpenAI sits at the center of debates about AI safety governance, racing dynamics, and whether commercial incentives can align with existential risk mitigation. In 2024–2025, the company completed a formal legal restructuring from a capped-profit LLC to a Public Benefit Corporation (PBC), a transition with significant implications for ownership, governance, and mission accountability.

Ownership Structure

OpenAI's 2024–2025 conversion from a capped-profit LLC to a Public Benefit Corporation (PBC) substantially altered how equity and economic interests are distributed among stakeholders. Under the previous structure, Microsoft held approximately 49% of capped profits in exchange for its multi-billion-dollar investment, while the non-profit board nominally controlled the organization's mission. The PBC restructuring replaced profit-share arrangements with direct equity stakes and introduced new stakeholder claims—most notably, a widely reported proposal for an equity grant to CEO Sam Altman, who held no equity under the original charter. However, in the final restructuring completed in October 2025, Altman did not receive equity. The restructuring received approval from the California Attorney General.

| Stakeholder | Interest Type | Notes |
|---|---|---|
| OpenAI Non-Profit Foundation | Direct equity stake (≈26%) | Retains an equity stake to fund its charitable mission |
| Microsoft | Direct equity stake (≈27%) | Original structure gave Microsoft 49% of profits up to a return cap; the PBC restructuring converted this to direct equity |
| Sam Altman | No equity | Held no equity under OpenAI's original non-profit charter; despite widely reported proposals for a 7% equity grant during restructuring negotiations, he received no equity in the final PBC conversion |
| October 2024 funding round investors | Equity (primary round) | Round led by Thrive Capital at a $157B valuation; Tiger Global and Khosla Ventures among reported participants |
| SoftBank | Equity (primary round, March 2025) | $40B round led by SoftBank at a $300B valuation (March 2025)[3]; part of a broader strategic partnership including a joint venture |

Valuation context: OpenAI's October 2024 funding round valued the company at $157B. The subsequent SoftBank-led $40B round in March 2025 raised the valuation to approximately $300B[3], nearly doubling it within six months. Secondary market activity in late 2025 suggested valuations approaching $500B.
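The growth multiple implied by those two rounds is easy to verify. A minimal back-of-envelope sketch (the two valuations are the figures cited above; nothing else is assumed):

```python
# Back-of-envelope check on the valuation jump described above.
oct_2024_valuation = 157e9   # Thrive Capital-led round, October 2024
mar_2025_valuation = 300e9   # SoftBank-led round, March 2025

multiple = mar_2025_valuation / oct_2024_valuation
print(f"Growth multiple over ~6 months: {multiple:.2f}x")  # ~1.91x, i.e. nearly doubling
```

The ~1.91x result is why "nearly doubling within six months" is the accurate characterization rather than a strict 2x.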

Regulatory context: The restructuring required approval from the California Attorney General, given OpenAI's non-profit origins. Legal challenges to the conversion were filed by outside parties, but the restructuring was completed in October 2025.

Recent Developments (2024-2025)

Capability Advances

| Model | Release Date | Key Capabilities | Performance | Strategic Impact |
|---|---|---|---|---|
| o1 | December 2024 | Full reasoning model release | Advanced mathematical/scientific reasoning | Demonstrated test-time compute scaling |
| o3-mini | January 31, 2025 | Latest reasoning model | More efficient reasoning capabilities[5] | Broader reasoning model availability |
| Sora 2 | 2025 | Video generation | Enhanced video creation[6] | Multimodal generation |

Market Dominance and Financial Performance

User Growth and Market Position:

  • 800 million weekly active users as of October 2025 (doubled from 400M in February 2025)[7]
  • 15.5 million paying subscribers generating approximately $3 billion annually[8]
  • Additional $1 billion from API access[9]
  • Over 92% of Fortune 500 companies now use OpenAI products or APIs[10]

Developer Ecosystem Growth:

  • API business generates ≈$41M monthly revenue from ≈530 billion tokens[11]
  • 10% monthly growth in API usage between December 2023 and June 2024[12]
  • GPT Store reached 3 million custom GPTs, of which 159,000 are publicly listed, with approximately 1,500 new models added daily[13]
  • OpenAI's share of API-based AI infrastructure now exceeds 50%[14]
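The API figures above imply a blended average revenue per token, a useful sanity check when comparing providers. A rough calculation, assuming the ≈$41M revenue and ≈530B tokens refer to the same monthly period (the sourcing does not make the token-count period fully explicit):

```python
# Implied blended revenue per million tokens from the reported API figures.
monthly_revenue_usd = 41e6   # ~$41M monthly API revenue (reported)
monthly_tokens = 530e9       # ~530B tokens, assumed to be the matching monthly figure

revenue_per_million_tokens = monthly_revenue_usd / monthly_tokens * 1e6
print(f"Implied blended price: ${revenue_per_million_tokens:.2f} per million tokens")
```

This yields roughly $77 per million tokens, a blended average across models and across input/output tokens, not the price of any single API tier.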

International Expansion Strategy

OpenAI for Countries Initiative:

  • Launched partnership program with individual nations for data center capacity[15]
  • Focus on data sovereignty and local industry building
  • 10 planned country-specific projects

Asia-Pacific Growth:

  • APAC region shows the highest user growth globally[16]
  • ChatGPT usage in APAC grew more than fourfold over 2024
  • Regional offices established in Tokyo and Seoul, coordinated from a Singapore hub[17]

AGI Timeline and Leadership Confidence

Sam Altman's 2025 Statements

In January 2025, CEO Sam Altman made notably confident statements about AGI development:

"We are now confident we know how to build AGI as we have traditionally understood it... AGI will probably get developed during Trump's term."[18]

Key Claims:

  • AGI defined as AI capable of working as a remote software engineer[19]
  • "In 2025, we may see the first AI agents join the workforce"
  • Capability to "materially change the output of companies"
  • Acknowledgment that "AGI has become a very sloppy term"

Context:

  • These statements are among the most specific public predictions Altman has made regarding AGI timelines
  • Some observers interpret this as an acceleration relative to prior public statements; Altman has made optimistic statements at various points and his characterization of what "AGI" means has evolved over time
  • The statements may influence competitive dynamics and regulatory responses
  • Other industry voices have offered more cautious assessments of near-term AGI timelines

Risk Assessment

The following risk assessments synthesize published views from safety researchers, academic commentators, and industry analysts; they do not represent the editorial position of this wiki. Trend labels reflect the direction of risk as assessed in cited safety literature, not an independent determination.

| Risk Category | Severity | Likelihood | Timeline | Trend (per safety literature) | Evidence |
|---|---|---|---|---|---|
| Capability-Safety Misalignment | High | High | 1–2 years | Increasing concern | Safety team departures, Superalignment dissolution |
| AGI Race | High | High | Immediate | Increasing concern | Confident AGI timeline statements, competitive pressure |
| Governance Failure | High | Medium | Ongoing | Stable | Nov 2023 crisis showed constraints on board authority |
| Commercial Override of Safety | High | High | 1–2 years | Increasing concern | Jan Leike: "Safety culture has taken backseat to shiny products"[20] |
| AGI Deployment Without Alignment | Very High | Medium | 2–3 years | Uncertain | o3 shows rapid capability gains; alignment solutions remain an active research area |

Organizational Evolution

Founding Vision vs. Current Reality

| Aspect | 2015 Foundation | 2025 Reality | Change |
|---|---|---|---|
| Structure | Non-profit | Public Benefit Corporation (converted from capped-profit LLC) | Major structural change |
| Funding | ≈$1B founder commitment | $57.9B+ total (including $13B+ Microsoft, $40B SoftBank round) | 57x+ scale increase |
| Openness | "Open by default" research publishing | Proprietary models, limited disclosure | Substantial shift toward proprietary development |
| Mission Priority | "AGI benefits all humanity" | Product revenue and market leadership | Contested; company argues commercial success funds mission |
| Safety Approach | "Safety over competitive advantage" | Safety integrated as constraint within product development | Disputed; safety researchers cite deprioritization, company disputes characterization |
| Governance | Independent non-profit board | Post-November 2023 board with commercial representation; critics argue reduced independent oversight capacity | Restructured; interpretations differ |

Key Milestones and Capability Jumps

| Date | Development | Parameters/Scale | Significance | Safety Implications |
|---|---|---|---|---|
| 2018 | GPT-1 | 117M | First transformer LM | Established architecture |
| 2019 | GPT-2 | 1.5B | Initially withheld | Demonstrated misuse concerns |
| 2020 | GPT-3 | 175B | Few-shot learning breakthrough | Sparked scaling race |
| 2022 | InstructGPT/ChatGPT | GPT-3.5 + RLHF | Mainstream AI adoption | RLHF as alignment technique |
| 2023 | GPT-4 | Undisclosed, multimodal | Human-level performance on many tasks | Dangerous capabilities acknowledged |
| 2024 | o1 reasoning | Advanced chain-of-thought | Mathematical/scientific reasoning | Hidden reasoning, deception risks |
| 2024 | o3 preview | Next-generation reasoning | Near-frontier performance on some tasks | Rapid capability advancement |
| 2025 | o3-mini | Efficient reasoning | Broader reasoning availability | Democratized advanced capabilities |

Technical Contributions and Evolution

Major Research Breakthroughs

| Innovation | Impact | Adoption | Limitations |
|---|---|---|---|
| GPT Architecture | Established transformer LMs as dominant paradigm | Universal across industry | Scaling may encounter physical limits |
| RLHF/InstructGPT | Made LMs helpful, harmless, honest | Standard alignment technique | May not scale to superhuman tasks |
| Scaling Laws | Predictable performance from compute/data | Drove $100B+ industry investment | Unclear whether they continue to AGI-level systems |
| Chain-of-Thought Reasoning | Test-time compute for complex problems | Adopted by Anthropic, Google DeepMind | Hidden reasoning creates interpretability challenges |
| Deliberative Alignment | Reasoning-based safety specifications | Used in o-series models[21] | Limited external evaluation in practice |

Safety Research Evolution

Current Methodology (2025):

  • Deliberative Alignment: Teaching reasoning models human-written safety specifications[22]
  • Scalable Evaluations: Automated tests measuring capability proxies[23]
  • Cross-Lab Collaboration: Joint safety evaluations with Anthropic and other labs, including a summer 2025 joint evaluation where each company tested the other's models[24]
  • Red Teaming: Human adversarial testing complementing automated evaluations

Safety Framework Assessment:

  • Preparedness Framework established capability thresholds and evaluation protocols
  • Safety evaluations now include third-party assessments beyond internal teams
  • Alignment research continues post-Superalignment dissolution but with reduced external visibility
  • Safety measures are integrated into product development rather than maintained as a separate research track

Competitive Landscape Analysis

Capability Comparison (Late 2025)

| Company | Latest Model | Key Strengths | Market Position |
|---|---|---|---|
| OpenAI | o3-mini, o1 | Reasoning capabilities, broad deployment | Leading consumer chatbot share (81% generative AI chatbot); enterprise LLM share approximately 25% as of mid-2025[25] |
| Anthropic | Claude (current series) | Safety research emphasis, coding benchmarks | Strong challenger in enterprise; coding share more than double OpenAI's as of July 2025[26] |
| Google | Gemini 2.5 | Research depth, multimodal, integration | Significant technology position |
| Meta AI | Llama 4 | Open source approach | Alternative paradigm |

Market Share Context: OpenAI's 81.13% figure refers to consumer-facing generative AI chatbot market share. This is distinct from enterprise LLM market share across API and developer use cases, where OpenAI's position has declined from approximately 50% to 25% between mid-2023 and mid-2025, according to industry reports.[25] In enterprise coding specifically, OpenAI held approximately 21% market share as of July 2025, with Anthropic's enterprise coding usage reported at more than double that figure.[26]

Performance Benchmarks (o1 series):

  • o1 leads mathematical reasoning: 83% on AIME math competition
  • o1 on SWE-bench Verified: 71.7%
  • Context length and safety remain key differentiators across providers

Comparative Safety Evaluations: In summer 2025, OpenAI and Anthropic published results of a joint safety evaluation in which each company tested the other's models using its own internal safety methodologies.[27] Results indicated that OpenAI's o3 and o4-mini models showed greater resistance to certain jailbreak attacks (including past-tense prompts) as measured by the StrongREJECT v2 benchmark, while Claude 4 models showed advantages in maintaining instruction hierarchy, i.e. prioritizing system-level safety constraints over conflicting directives.[27] o3's failure modes were primarily limited to base64-style prompts and low-resource language translations.[28]

Developer Ecosystem and Business Strategy

API and Integration Platform

Market Penetration:

  • API monthly revenue: ≈$41M from 530 billion tokens (June 2024)[29]
  • Gross margins: 75%, decreasing to 55% after pricing adjustments[30]
  • Azure OpenAI Service: 64% year-over-year growth in adoption[31]
  • Enterprise integration across Microsoft Office 365, GitHub Copilot

Developer Adoption:

  • GPT Store: 159,000 public GPTs out of 3 million total created[32]
  • Approximately 1,500 new models added to the marketplace daily[33]
  • API infrastructure market share exceeding 50% industry-wide
  • Integration partnerships with major enterprise software providers

Financial and Commercial Dynamics

Revenue and Investment Structure

Revenue History

| Date | Value | Source | Notes |
|---|---|---|---|
| Feb 2026 | $25 billion | sacra.com | Annualized revenue run rate as of February 2026 per Sacra; up from $20B at end of 2025 |
| Dec 2025 | $13.1 billion | pymnts.com | Full-year 2025 actual revenue of $13.1B, beating the $13B forecast by $100M; ARR exceeded $20B by year-end per CFO Sarah Friar |
| 2025 | $20 billion | — | ARR per CFO Sarah Friar disclosure |
| Jun 2024 | $3.4 billion | — | Annualized run rate as of mid-2024 |
| Undated | $5.4 million | wikidata.org | From Wikidata Q21708200 |
Valuation History

| Date | Value | Source | Notes |
|---|---|---|---|
| Oct 2025 | $500 billion | cnbc.com | PBC restructuring valuation, October 2025 |
| Mar 2025 | $300 billion | openai.com | SoftBank $40B funding round valuation |
| Oct 2024 | $157 billion | — | October 2024 funding round valuation |

2024-2025 Financial Performance:

  • Projected 2024 revenue: $3.4 billion (ChatGPT subscriptions + API)[34]
  • Growth rate: 1,700% from early 2023 to September 2024
  • Operating losses: $5 billion in 2024 despite revenue growth[35]
  • Primary cost drivers: compute infrastructure, talent acquisition, research investment. OpenAI leadership has characterized these losses as reflecting deliberate investment in infrastructure and talent rather than financial distress.

Major 2025 Funding:

  • March 2025: SoftBank-led $40 billion round at a $300 billion valuation[3], the largest single funding round in AI history at that time. SoftBank contributed directly and facilitated third-party investment through a joint venture structure. This round brought OpenAI's total cumulative funding to approximately $57.9 billion.
  • The SoftBank round was part of a broader "Stargate" strategic partnership for AI infrastructure investment in the United States.

Microsoft Partnership

| Component | Details | Strategic Implications |
|---|---|---|
| Investment | $13B+ total (as of Oct 2024); 49% profit share under original structure, converted to ≈27% direct equity in the PBC restructuring (completed October 2025) | Creates commercial pressure for rapid deployment; restructuring altered long-term economic alignment |
| Compute Access | Exclusive Azure partnership | Enables massive model training but creates infrastructure dependency |
| Product Integration | Bing, Office 365, GitHub Copilot | Drives revenue but requires consumer-ready systems |
| API Monetization | Enterprise and developer access | Success depends on maintaining capability lead |
Funding History

| Date | Raised | Valuation | Lead Investor | Notes |
|---|---|---|---|---|
| 2015-12 | $1 billion (pledged) | — | — | Pledged at founding by Elon Musk, Sam Altman, Greg Brockman, Reid Hoffman, Peter Thiel, Microsoft, Amazon Web Services, Infosys, and YC Research |
| 2019-07 | $1 billion | — | Microsoft | Strategic investment from Microsoft in July 2019, establishing a long-term partnership |
| 2021-01 | $1 billion | ≈$30 billion | — | Raise from investors including Khosla Ventures and Reid Hoffman |
| 2023 | $300 million | $29 billion | — | 2023 round at $29B valuation, per multiple sources |
| 2023-01 | $10 billion | — | Microsoft | Multi-year investment (reported as ~$10B), extending the partnership announced in 2019 |
| 2024-10 | $6.6 billion | $157 billion | Thrive Capital | Investors included Microsoft, Nvidia, SoftBank, and Thrive Capital; per CNBC |
| 2025-03 | $40 billion | $300 billion | SoftBank | Largest private funding round on record at the time; SoftBank contributed $30B; per OpenAI blog |

Early Funding History: OpenAI's initial funding came primarily from founder pledges, including a $1 billion commitment from a group including Elon Musk, Sam Altman, Peter Thiel, and Reid Hoffman. In 2017, Open Philanthropy (now Coefficient Giving) granted $30 million to OpenAI[36] as part of its early investment in AI safety research; at that time OpenAI was a nonprofit, and Open Philanthropy viewed supporting frontier AI safety research as a priority. This grant predated OpenAI's commercial pivot and the creation of the capped-profit subsidiary in 2019.

Governance Crisis Analysis

November 2023 Board Transition

| Timeline | Event | Stakeholders | Outcome |
|---|---|---|---|
| Nov 17 | Board removes Sam Altman, citing lack of candor | Non-profit board, Ilya Sutskever | Initial dismissal |
| Nov 18–19 | Employee letter, Microsoft intervention | 738 of 770 employees signed letter; Microsoft leadership | Pressure for reversal |
| Nov 21–22 | Altman reinstated, board reconstituted | New board (Bret Taylor chair, Lawrence Summers) | Governance restructured |

Structural observations (contested interpretations):

  • The episode demonstrated that employee and investor sentiment significantly constrained the non-profit board's practical authority
  • Critics argue the reconstituted board has reduced independence from commercial leadership; OpenAI leadership contends the new board has greater relevant expertise
  • The Microsoft partnership's scale creates financial interdependencies that influence operational decisions; the degree to which this constitutes a constraint over safety-motivated decisions is disputed
  • The episode is cited by governance scholars as a case study in the practical limits of non-profit oversight of commercial AI operations

Key Leadership Departures (2024)

The following departures occurred in 2024. Stated reasons varied across individuals; the safety-motivation framing is most explicitly supported by Leike and Schulman's public statements, while other departures involved personal or exploratory motivations.

| Researcher | Role | Departure Date | Stated Reasons | Destination |
|---|---|---|---|---|
| Ilya Sutskever | Co-founder, Chief Scientist | May 2024 | "Personal project" (SSI) | Safe Superintelligence Inc |
| Jan Leike | Superalignment Co-lead | May 2024 | "Safety culture backseat to products"[20] | Anthropic, Head of Alignment |
| John Schulman | Co-founder, PPO inventor | Aug 2024 | "Deepen AI alignment focus" | Anthropic |
| Mira Murati | Chief Technology Officer | Sept 2024 | "Personal exploration" | Not disclosed |

Context:

  • 75% of co-founders had departed within 9 years of founding
  • Leike and Schulman explicitly cited safety prioritization concerns in public statements; Sutskever and Murati gave different reasons
  • Anthropic subsequently hired multiple senior OpenAI researchers into alignment-focused roles
  • OpenAI leadership disputed characterizations of a systematic safety deprioritization, pointing to ongoing safety investments and the Preparedness Framework

Current Capability Assessment

Reasoning Models Performance (o1/o3 Series)

| Domain | Capability Level | Benchmark Performance | Risk Assessment |
|---|---|---|---|
| Mathematics | PhD+ | 83% on AIME, IMO medal performance | Advanced problem-solving |
| Programming | Expert | 71.7% on SWE-bench Verified | Code generation/analysis |
| Scientific Reasoning | Graduate+ | High performance on PhD-level physics | Research acceleration potential |
| Strategic Reasoning | Not well-characterized | Chain-of-thought reasoning partially hidden | Deceptive alignment risks; active interpretability research area |

Key Technical Developments:

  • Test-time compute scaling enables reasoning capability improvements
  • Partially hidden reasoning processes limit interpretability and alignment verification
  • Performance approaching human expert level across cognitive domains
  • Deliberative alignment methodology integrated into training process

Economic Impact and Industry Transformation

Enterprise Adoption and Integration

Fortune 500 Penetration:

  • 92% of Fortune 500 companies actively using OpenAI products or APIs[37]
  • Primary use cases: customer service automation, content generation, code assistance
  • Integration through Microsoft ecosystem (Office 365, Teams, Azure)
  • Custom enterprise solutions and fine-tuning services

Industry Transformation Metrics:

  • Sparked $100B+ investment across AI industry following ChatGPT launch
  • Developer productivity improvements: 10-40% in coding tasks (GitHub Copilot studies)
  • Content creation acceleration across marketing, education, professional services
  • Job market evolution with AI-augmented roles emerging alongside traditional functions

International Strategy and Regulatory Engagement

Government Relations and Policy Influence

| Jurisdiction | Engagement Type | OpenAI Position | Policy Impact |
|---|---|---|---|
| US Congress | Altman testimony, lobbying | Self-regulation advocacy | Influenced Senate AI framework |
| EU AI Act | Compliance preparation | Geographic market access | Foundation model regulations apply |
| UK AI Safety | AISI collaboration | Partnership approach | Safety institute cooperation |
| China | No direct engagement | Technology export controls | Limited model access |

Global Expansion Framework

Data Sovereignty Approach:

  • OpenAI for Countries program supporting local data centers[38]
  • Partnerships for in-country infrastructure development
  • Balance between global access and national security concerns
  • Custom deployment models for government and enterprise clients

Safety Methodology and Alignment Research

Current Safety Framework (2025)

Evaluation Processes:

  • Scalable Evaluations: Automated testing measuring capability proxies[39]
  • Deep Dives: Human red-teaming and third-party assessments[40]
  • Capability Thresholds: Predetermined criteria triggering additional safety measures
  • Cross-Lab Collaboration: Joint safety evaluations with industry partners, including a summer 2025 joint evaluation with Anthropic[27]

Deliberative Alignment Implementation:

  • Integration of human-written safety specifications into reasoning models[41]
  • Training models to explicitly reason about safety considerations
  • Applied to o-series models with ongoing evaluation
  • Represents an evolution beyond RLHF toward more explicit safety reasoning in model outputs

Alignment Research Post-Superalignment

Current Research Directions:

  • Scalable oversight methods for superhuman AI systems
  • Interpretability research for understanding model reasoning
  • Robustness testing across diverse deployment scenarios
  • Integration of safety measures into product development cycles

Resource Allocation:

  • The original commitment of 20% of compute for safety research has not been publicly reaffirmed; the current allocation is not confirmed
  • Safety research integrated into product teams rather than housed in an independent research division
  • External commentators have raised concerns about the sufficiency of dedicated safety resources; OpenAI disputes these characterizations
  • Balance between product development velocity and safety thoroughness remains a subject of ongoing public debate

Expert Perspectives and Current Debates

Internal Alignment (Current Leadership)

Sam Altman's Position (2025):

  • AGI development is expected to proceed and OpenAI believes it is better positioned than alternatives to pursue it responsibly
  • Commercial success is argued to enable greater safety research investment
  • Rapid deployment with iterative safety improvements is preferred over delayed release
  • Maintaining technological leadership is framed as necessary given competitive dynamics

Technical Leadership Perspective:

  • Integration of safety measures into the development process rather than maintaining separate research tracks
  • Emphasis on real-world deployment experience as a source of safety learning
  • Collaborative industry approach to safety standards and evaluation

External Safety Community Assessment

Academic and Safety Researcher Views:

  • Yoshua Bengio: Has publicly expressed concern about commercial mission drift from original safety focus
  • Stuart Russell: Has publicly warned about commercial capture of safety research priorities
  • Former OpenAI safety researchers (Leike, Schulman): Cited systematic deprioritization of safety relative to capabilities in public departure statements[20]

Policy and Governance Experts:

  • External oversight mechanisms beyond self-regulation have been advocated by multiple governance researchers
  • Concentration of AGI development in a single organization raises questions about democratic accountability
  • Legal scholars have assessed the PBC structure as a formal improvement over the capped-profit LLC for mission preservation; critics argue it remains an insufficient safeguard without stronger enforcement mechanisms

Future Trajectories and Critical Decisions

Timeline Projections (Updated 2025)

| Scenario | Probability Estimate | Timeline | Key Indicators |
|---|---|---|---|
| AGI Development | High (per Altman) | 1–3 years | Altman confidence, o3+ performance |
| Regulatory Intervention | Medium-High | 1–2 years | Government AI governance initiatives |
| Safety Breakthrough | Low-Medium | Unknown | Scalable alignment advances |
| Competitive Disruption | Medium | 2–3 years | Open source parity, international advances |

Strategic Decision Points

Immediate (2025):

  • AGI timeline communications and expectation management
  • Response to increasing regulatory scrutiny and safety criticism
  • Resource allocation between reasoning model advancement and safety research
  • International expansion pace and partnership selection
  • Post-PBC-restructuring governance and mission accountability (restructuring completed October 2025)

Medium-term (2026-2027):

  • AGI deployment framework and access policies
  • Safety standard establishment and industry coordination
  • Relationship management with government oversight bodies
  • Competitive response to potential capability disruptions

Key Research Questions

  • Can OpenAI maintain safety priorities while pursuing aggressive AGI timelines?
  • Will deliberative alignment scale to superintelligent systems with hidden reasoning?
  • How will international coordination develop around OpenAI's AGI deployment decisions?
  • What governance mechanisms could effectively constrain rapid AGI development?
  • Can the developer ecosystem and API strategy support a sustainable business model?
  • How will competitive dynamics evolve as multiple labs approach AGI capabilities?
  • How will the PBC restructuring affect the non-profit foundation's ability to enforce mission-aligned constraints?
  • What role will the California AG's oversight play in shaping the final restructuring terms?
  • How will OpenAI's enterprise LLM market share trajectory affect its competitive and financial position as Anthropic and others gain ground?

Sources and Resources

Primary Documents

| Source | Type | Key Content | Link |
|---|---|---|---|
| GPT-4 System Card | Technical report | Risk assessment, red-teaming results | OpenAI GPT-4 System Card |
| Preparedness Framework | Policy document | Catastrophic risk evaluation framework | OpenAI Preparedness |
| Deliberative Alignment | Research paper | Reasoning-based safety methodology | OpenAI Deliberative Alignment |
| OpenAI for Countries | Policy initiative | International partnership framework | Global Affairs Initiative |

Recent Announcements and Performance

| Source | Type | Key Content | Link |
|---|---|---|---|
| Sora 2 Release | Product announcement | Video generation capabilities | Sora 2 Launch |
| o3-mini Launch | Model release | Latest reasoning model availability | Computerworld Coverage |
| AGI Timeline Interview | Executive statement | Altman's AGI predictions | TIME Magazine Interview |
| SoftBank $40B Round | Funding announcement | Largest single AI funding round; $300B valuation | OpenAI Announcement |

Academic Research

| Paper | Authors | Contribution | Citation |
|---|---|---|---|
| Language Models are Few-Shot Learners | Brown et al. | GPT-3 capabilities demonstration | arXiv:2005.14165 |
| Training language models to follow instructions | Ouyang et al. | InstructGPT/RLHF methodology | arXiv:2203.02155 |
| Weak-to-Strong Generalization | Burns et al. | Superalignment research direction | arXiv:2312.09390 |
| GPT-4 Technical Report | OpenAI (279 contributors) | Official technical documentation | arXiv:2303.08774 |

Footnotes

  1. ChatGPT Users Statistics (February 2026) – Growth & Usage Data

  2. ChatGPT Users Statistics (February 2026) – Growth & Usage Data

  3. OpenAI and SoftBank Joint Announcement, OpenAI, March 2025. SoftBank-led $40B funding round at $300B valuation, as part of the Stargate AI infrastructure initiative.

  4. OpenAI Latest News and Insights

  5. OpenAI Latest News and Insights

  6. Sora 2 is here

  7. ChatGPT Users Statistics (February 2026) – Growth & Usage Data

  8. OpenAI lost $5 billion in 2024 (and its losses are increasing)

  9. OpenAI lost $5 billion in 2024 (and its losses are increasing)

  10. Citation rc-40d1

  11. OpenAI's API Profitability in 2024

  12. OpenAI's API Profitability in 2024

  13. The Era of Tailored Intelligence: Charting the Growth and Market Impact of Custom GPTs; GPT Store Statistics & Facts: Contains 159.000 of the 3 million created GPTs

  14. OpenAI Statistics 2026: Adoption, Integration & Innovation

  15. Introducing OpenAI for Countries

  16. Inside OpenAI's Global Business Expansion

  17. Inside OpenAI's Global Business Expansion

  18. How OpenAI's Sam Altman Is Thinking About AGI and Superintelligence in 2025

  19. We know how to build AGI - Sam Altman

  20. Jan Leike departure statement on X/Twitter, May 2024

  21. Deliberative alignment: reasoning enables safer language models

  22. Deliberative alignment: reasoning enables safer language models

  23. All the labs AI safety plans: 2025 edition

  24. All the labs AI safety plans: 2025 edition

  25. Enterprise LLM Market Share Report, mid-2025 — industry reports cited in multiple sources indicate OpenAI enterprise LLM share declined from approximately 50% (mid-2023) to approximately 25% (mid-2025)

  26. Citation rc-1c86

  27. Citation rc-fda9

  28. OpenAI and Anthropic joint safety evaluation, summer 2025

  29. OpenAI's API Profitability in 2024

  30. OpenAI's API Profitability in 2024

  31. OpenAI Statistics 2026: Adoption, Integration & Innovation

  32. GPT Store Statistics & Facts: Contains 159.000 of the 3 million created GPTs

  33. The Era of Tailored Intelligence: Charting the Growth and Market Impact of Custom GPTs

  34. OpenAI lost $5 billion in 2024 (and its losses are increasing)

  35. OpenAI lost $5 billion in 2024 (and its losses are increasing)

  36. Open Philanthropy — OpenAI General Support, Open Philanthropy, March 2017. $30 million grant to support OpenAI's work on technical AI safety research.

  37. OpenAI Statistics 2026: Adoption, Integration & Innovation

  38. Introducing OpenAI for Countries

  39. All the labs AI safety plans: 2025 edition

  40. All the labs AI safety plans: 2025 edition

  41. Deliberative alignment: reasoning enables safer language models

References

RAND Corporation is a nonprofit research organization providing objective analysis and policy recommendations across a wide range of topics including national security, technology, governance, and emerging risks. It produces influential studies on AI policy, cybersecurity, and global governance challenges. RAND's work is frequently cited by governments and policymakers worldwide.

★★★★☆

This OpenAI paper introduces the 'weak-to-strong generalization' problem as an analogy for superalignment: can a weak supervisor (humans) elicit good behavior from a much stronger model (superintelligence)? Experiments show that strong pretrained models can generalize beyond weak labels, and simple techniques like auxiliary confidence loss can significantly improve this generalization.

★★★☆☆

OpenAI successfully closed a $6.6 billion funding round, one of the largest in startup history, reflecting massive investor confidence in AI development. This capital infusion signals continued rapid scaling of frontier AI capabilities and raises questions about governance, safety investment, and competitive dynamics in the AI industry.

★★★★☆

This paper introduces InstructGPT, a method for aligning language models with human intent using Reinforcement Learning from Human Feedback (RLHF). By fine-tuning GPT-3 with human preference data, the authors demonstrate that smaller aligned models can outperform much larger unaligned models on user-preferred outputs. The work establishes RLHF as a foundational technique for making LLMs safer and more helpful.

★★★☆☆

OpenAI has launched 'OpenAI for Countries,' an initiative to help governments build AI infrastructure aligned with democratic principles, framed as an alternative to authoritarian AI models. Through US government-coordinated partnerships, OpenAI will assist nations with data centers, customized ChatGPT deployments, safety controls, and national AI startup funds. The initiative explicitly links AI governance to geopolitical competition, positioning democratic AI as a counterweight to centralized authoritarian AI development.

★★★★☆
Claims (2)
  • "Launched partnership program with individual nations for data center capacity": Accurate (100%, Feb 22, 2026), supported by source.
  • "OpenAI for Countries program supporting local data centers": Accurate (100%, Feb 22, 2026), supported by source.

6. Brown et al. (2020) · arXiv · Tom B. Brown et al. · 2020 · Paper

Brown et al. (2020) introduce GPT-3, a 175-billion-parameter autoregressive language model that demonstrates strong few-shot learning capabilities without task-specific fine-tuning. By scaling up language model size by 10x compared to previous non-sparse models, GPT-3 achieves competitive performance on diverse NLP tasks including translation, question-answering, reasoning, and arithmetic through text-based prompting alone. The paper shows that language model scale enables task-agnostic performance approaching human-like few-shot learning, while also identifying limitations and societal concerns, including the model's ability to generate human-indistinguishable news articles.

★★★☆☆

The founding announcement of OpenAI, originally established as a non-profit AI research company in December 2015, articulating its mission to advance AI for broad human benefit rather than shareholder return. The post outlines OpenAI's core philosophy: that AI should be openly researched, widely distributed, and developed with safety and positive human impact as primary goals. It introduces the founding team and signals concern about the societal risks of misaligned or misused advanced AI.

★★★★☆

A TIME article summarizing Sam Altman's January 2025 blog post and Bloomberg interview, in which he claims OpenAI knows how to build AGI and is shifting focus toward superintelligence. Altman predicts AI agents will materially enter the workforce in 2025 and suggests AGI may arrive during the current U.S. presidential term. The piece also surveys competing views from Musk, Amodei, and skeptics like Gary Marcus.

★★★☆☆
Claims (1)
AGI will probably get developed during Trump's term."
Accurate100%Feb 26, 2026
In a recent interview with Bloomberg, Altman said he thinks “AGI will probably get developed during [Trump’s] term,” while noting his belief that AGI “has become a very sloppy term.”

OpenAI's official about page describes the company's mission to ensure artificial general intelligence benefits all of humanity. It outlines their dual organizational structure as a nonprofit foundation governing a for-profit public benefit corporation, and links to key documents like their AGI plan and charter.

★★★★☆

In July 2019, Microsoft announced a $1 billion investment in OpenAI and a strategic partnership to develop AGI with widely distributed economic benefits. The partnership established Microsoft Azure as OpenAI's exclusive cloud provider and committed both organizations to jointly developing AI supercomputing technologies. OpenAI would license pre-AGI technologies to Microsoft as its preferred commercialization partner.

★★★★☆

OpenAI launched a $10M grants program in December 2023 to fund technical research on aligning superhuman AI systems, covering areas like weak-to-strong generalization, interpretability, and scalable oversight. The program offers $100K–$2M grants for academic labs, nonprofits, and individual researchers, plus a $150K fellowship for graduate students, explicitly welcoming researchers new to alignment.

★★★★☆

OpenAI completed a $6.6 billion share sale valuing the company at $500 billion, marking one of the largest private fundraising rounds in history. The round reflects massive investor confidence in OpenAI's commercial trajectory and frontier AI development capabilities. This funding will support continued scaling of compute infrastructure and AI research.

★★★☆☆
13. Kaplan et al. (2020) · arXiv · Jared Kaplan et al. · 2020 · Paper

Kaplan et al. (2020) empirically characterize scaling laws for language model performance, demonstrating that cross-entropy loss follows power-law relationships with model size, dataset size, and compute budget across seven orders of magnitude. The study reveals that architectural details like width and depth have minimal impact, while overfitting and training speed follow predictable patterns. Crucially, the findings show that larger models are significantly more sample-efficient, implying that optimal compute-efficient training involves training very large models on modest datasets and stopping before convergence.

★★★☆☆

Microsoft announced a multi-year, multi-billion dollar investment extension of its partnership with OpenAI in January 2023, deepening collaboration on AI research and deployment. The deal includes integrating OpenAI's models into Microsoft products and Azure cloud infrastructure, positioning Microsoft as the exclusive cloud provider for OpenAI's compute needs.

★★★★☆

Reports that OpenAI's annualized revenue reached $3.4 billion, doubling from its late 2023 figures, reflecting rapid commercial growth of its AI products. This growth trajectory is relevant for understanding the pace of AI deployment and the resources available to frontier AI labs for further development.

★★★★☆

A 2024 CNBC report reveals OpenAI is projected to lose approximately $5 billion in 2024 despite generating $3.7 billion in revenue, highlighting the massive compute and operational costs involved in frontier AI development. The figures underscore the capital-intensive nature of leading AI labs and raise questions about long-term financial sustainability.

★★★☆☆

The Information is a subscription-based technology journalism outlet known for breaking news and in-depth reporting on major tech companies, AI developments, and industry trends. It frequently covers AI safety, capabilities, and governance topics from a business and policy angle.

★★★★☆

The Center for AI Safety (CAIS) is a research organization focused on mitigating catastrophic and existential risks from advanced AI systems. It conducts technical research, publishes surveys and statements, and supports field-building efforts across academia and industry. CAIS is notable for its broad coalition-building, including its widely-cited statement on AI extinction risk signed by leading researchers.

★★★★☆

OpenAI announces the creation of OpenAI LP, a 'capped-profit' company structure designed to attract investment and talent while maintaining its nonprofit mission. The structure caps investor returns at 100x and ensures the nonprofit board retains control over strategic decisions. This model was intended to balance commercial sustainability with the goal of developing safe and beneficial AGI.

★★★★☆

Reuters reports on UBS analyst findings that ChatGPT reached 100 million monthly active users in January 2023, just two months after launch, making it the fastest-growing consumer application in history. This milestone surpassed TikTok's growth rate and underscored the unprecedented public adoption of generative AI.

★★★★☆

Anthropic is an AI safety company focused on building reliable, interpretable, and steerable AI systems. The company conducts frontier AI research and develops Claude, its family of AI assistants, with a stated mission of responsible development and maintenance of advanced AI for long-term human benefit.

★★★★☆

OpenAI's CFO disclosed that the company's annual recurring revenue (ARR) surpassed $20 billion in 2025, reflecting rapid commercial growth driven by ChatGPT and API products. This milestone highlights OpenAI's dominant market position and the accelerating monetization of large language model capabilities.

★★★☆☆

OpenAI's system card for GPT-4 documents safety evaluations, risk assessments, and mitigation measures conducted prior to deployment. It covers dangerous capability evaluations, red-teaming findings, and the RLHF-based safety interventions applied to reduce harmful outputs. The document represents OpenAI's public accountability framework for responsible deployment of a frontier AI model.

★★★★☆

OpenAI releases Sora 2, a significantly improved video and audio generation model featuring enhanced physical accuracy, controllability, synchronized dialogue, and sound effects. The model represents a major step toward world simulation, better modeling physical laws including failure states, and supports injection of real-world elements like specific people into generated scenes.

★★★★☆
Claims (1)
  • "Sora 2 | 2025 | Video generation | Enhanced video creation | Multimodal generation": Accurate (100%, Feb 22, 2026), supported by source.

OpenAI introduces 'deliberative alignment,' a technique that explicitly encodes safety specifications into the model's reasoning process, allowing the model to consciously consider guidelines before responding. Rather than relying solely on implicit behavioral training, this approach teaches models to reason about and reference safety policies during inference, improving both safety compliance and instruction-following without sacrificing capability.

★★★★☆
Claims (3)
  • "Deliberative Alignment | Reasoning-based safety specifications | Used in o-series models | Limited external evaluation in practice": Minor issues (85%, Feb 26, 2026). Source: "We used deliberative alignment to align OpenAI's o-series models, enabling them to use chain-of-thought (CoT) reasoning to reflect on user prompts, identify relevant text from OpenAI's internal policies, and draft safer responses." Note: the claim states "Limited external evaluation in practice", but the source only mentions internal and external safety benchmarks and does not explicitly state that external evaluation is limited; the claim states "Reasoning-based safety specifications", but the source states "directly taught safety specifications and how to reason over them."
  • "Deliberative Alignment: Teaching reasoning models human-written safety specifications": Accurate (100%, Feb 26, 2026). Source: "We introduce deliberative alignment, a training paradigm that directly teaches reasoning LLMs the text of human-written and interpretable safety specifications, and trains them to reason explicitly about these specifications before answering."
  • "Integration of human-written safety specifications into reasoning models": Accurate (100%, Feb 26, 2026). Source: "We introduce deliberative alignment, a training paradigm that directly teaches reasoning LLMs the text of human-written and interpretable safety specifications, and trains them to reason explicitly about these specifications before answering."

OpenAI's Preparedness Framework outlines a systematic approach to tracking, evaluating, and mitigating catastrophic risks from frontier AI models. It establishes a 'Preparedness' function responsible for conducting safety evaluations across key risk categories including cybersecurity, CBRN threats, model autonomy, and persuasion, with defined risk thresholds that gate model deployment decisions.

★★★★☆

A tweet by Jan Leike (former OpenAI alignment team lead) sharing views on AI safety and alignment priorities. Given the timing (May 2024), this likely relates to his departure from OpenAI or commentary on alignment research direction.

28. OpenAI

This page documents a joint announcement between SoftBank and OpenAI, likely pertaining to a major investment or partnership deal. Such agreements have significant implications for AI development trajectories, compute access, and the geopolitical landscape of frontier AI.

★★★★☆
29. OpenAI's GPT-4 · arXiv · OpenAI et al. · 2023 · Paper

OpenAI presents GPT-4, a large-scale multimodal model capable of processing both image and text inputs to generate text outputs. The model demonstrates human-level performance on professional and academic benchmarks, including achieving top 10% scores on simulated bar exams. Built on Transformer architecture with post-training alignment to improve factuality and behavioral adherence, GPT-4 represents advances in scaling infrastructure and predictive methods that enable performance estimation from models using 1/1000th of its computational resources.

★★★☆☆
Citation verification: 25 verified of 34 total

Structured Data

73 facts · 11 records · View in FactBase →

  • Revenue: $25 billion (as of Feb 2026)
  • Valuation: $500 billion (as of Oct 2025)
  • Headcount: 4,500 (as of Mar 2026)
  • Total Funding Raised: $57.9 billion (as of Mar 2025)
  • Founded: Dec 2015

Funding History

| Round | Date | Raised | Valuation | Lead Investor |
|---|---|---|---|---|
| Seed (donation) | Dec 2015 | $1B | — | — |
| Strategic Investment (Microsoft) | Jul 2019 | $1B | — | Microsoft |
| Series (2021) | Jan 2021 | $1B | — | — |
| Series (2023) | 2023 | $300M | $29B | — |
| Strategic Investment (Microsoft, 2023) | Jan 2023 | $10B | — | Microsoft |
| Series (October 2024) | Oct 2024 | $6.6B | $157B | Thrive Capital |
| Series (March 2025) | Mar 2025 | $40B | $300B | SoftBank |

All Facts

Organization

| Property | Value | As Of |
|---|---|---|
| Legal Structure | 501(c)(3) organization | — |
| | Public Benefit Corporation | Oct 2025 |
| | Public Benefit Corporation (converting) | 2025 |
| | Capped-profit LLC | Mar 2019 |
| | Nonprofit 501(c)(3) | Dec 2015 |
| Founded Date | Dec 2015 | — |
| Headquarters | San Francisco, CA | — |
| Country | United States | — |
Financial
| Property | Value | As Of |
|---|---|---|
| Headcount | 375 | — |
| | 4,500 | Mar 2026 |
| | 3,500 | Jun 2025 |
| Revenue | $5.4 million | — |
| | $25 billion | Feb 2026 |
| | $13.1 billion | Dec 2025 |
| | $20 billion | 2025 |
| | $3.4 billion | Oct 2024 |
| | $3.4 billion | Jun 2024 |
| Revenue Guidance | $30.0 billion | Jan 2026 |
| Coding Market Share | 21% | Dec 2025 |
| | 21% | Jul 2025 |
| Annual Expenses | $22.0 billion | Dec 2025 |
| | $9.0 billion | Dec 2024 |
| Customer Concentration | 10% | Dec 2025 |
| Product Revenue | $13.1 billion | Dec 2025 |
| Gross Margin | 33% | Dec 2025 |
| Market Share | 27% | Dec 2025 |
| | 50% | Dec 2023 |
| Annual Cash Burn | $8.5 billion | Dec 2025 |
| | $5 billion | Dec 2024 |
| Enterprise Market Share | 27% | Dec 2025 |
| | 25% | Jul 2025 |
| | 50% | Dec 2023 |
| Equity Stake | 26% | Oct 2025 |
| | 27% | Oct 2025 |
| Equity Value | $135 billion | Oct 2025 |
| Valuation | $500 billion | Oct 2025 |
| | $300 billion | Mar 2025 |
| | $157 billion | Oct 2024 |
| Retention Rate | 71% | Jun 2025 |
| Total Funding Raised | $57.9 billion | Mar 2025 |
| | $57.9 billion | Mar 2025 |
| | $17.9 billion | Oct 2024 |
| | $13 billion | Jan 2024 |
Product
| Property | Value | As Of |
|---|---|---|
| Weekly Active Users | 900 million | Feb 2026 |
| | 800 million | Oct 2025 |
| | 400 million | Feb 2025 |
| Business Customers | 9 million | Feb 2026 |
| | 1 million | Sep 2025 |
| GPU Count | 1 million | Dec 2025 |
| User Count | 800 million | Oct 2025 |
| | 100 million | Feb 2023 |
| Monthly API Calls | 260 billion | Oct 2025 |
| Benchmark Score | 71.7 | Sep 2024 |
| | 83 | Sep 2024 |
| | 71.7 | Sep 2024 |
| | 83 | Sep 2024 |
| Monthly Active Users | 200 million | Aug 2024 |
| | 100 million | Feb 2023 |
| Model Parameters | 175 billion | Jun 2020 |
People
| Property | Value |
|---|---|
| Founded By | Sam Altman, Ilya Sutskever, Greg Brockman, Elon Musk, Wojciech Zaremba, John Schulman |
| Founder (text) | Elon Musk |
Safety
| Property | Value | As Of |
|---|---|---|
| Safety Staffing Ratio | 19% | Dec 2025 |
| Interpretability Team Size | 40 | Dec 2025 |
| Safety Researchers | 650 | Dec 2025 |
| AI Safety Level | High/Critical capability thresholds (Preparedness Framework v2) | Apr 2025 |
Biographical
| Property | Value |
|---|---|
| Wikipedia | https://en.wikipedia.org/wiki/OpenAI |
General
| Property | Value |
|---|---|
| Website | https://openai.com/ |
Other
| Property | Value | As Of |
|---|---|---|
| Infrastructure Investment Target | 600 billion | Feb 2026 |
| — | 1,700% | Sep 2024 |
| — | 75% | 2024 |
| CEO | Sam Altman | — |

Board Seats

| Member | Appointed | Role |
|---|---|---|
| Helen Toner | 2021 | Board Member |

Divisions

| Name | Type | Status | Start Date | End Date |
|---|---|---|---|---|
| Safety Systems | team | active | — | — |
| Superalignment | team | dissolved | 2023-07 | 2024-05 |
| Preparedness | team | active | 2023-10 | — |

Related Wiki Pages

Top Related Pages

Analysis

  • OpenAI Foundation Governance Paradox
  • Anthropic Valuation Analysis

Policy

  • Voluntary AI Safety Commitments
  • AI Whistleblower Protections

Other

  • Scalable Oversight
  • RLHF
  • Jan Leike

Historical

  • Anthropic-Pentagon Standoff (2026)
  • Mainstream Era
  • Deep Learning Revolution Era
  • International AI Safety Summit Series

Risks

  • Deceptive Alignment
  • AI Development Racing Dynamics

Concepts

  • Governance-Focused Worldview
  • Reasoning and Planning
  • Large Language Models
  • Heavy Scaffolding / Agentic Systems

Key Debates

  • Corporate Influence on AI Policy
  • AI Structural Risk Cruxes
  • Why Alignment Might Be Hard