OpenAI

Comprehensive organizational profile of OpenAI documenting evolution from 2015 non-profit to commercial AGI developer, with detailed analysis of governance crisis, safety researcher exodus (75% of co-founders departed), and capability advancement (o1/o3 reasoning models). Updated with 2025 developments including o3-mini release, 800M weekly active users, and Altman's confident AGI timeline predictions.
Overview
OpenAI is the AI research company that catalyzed mainstream artificial intelligence adoption through ChatGPT and the GPT model series. Founded in 2015 as a non-profit with the mission to ensure AGI benefits humanity, OpenAI has undergone dramatic organizational evolution: from open research lab to secretive commercial entity, from safety-focused non-profit to product-driven corporation racing toward AGI.
The company achieved breakthrough capabilities through massive scale (GPT-3's 175B parameters), pioneered Reinforcement Learning from Human Feedback (RLHF) as a practical alignment technique, and launched ChatGPT—reaching 800 million weekly active users by late 2025[1] and maintaining an 81.13% share of the generative AI market[2]. However, OpenAI's trajectory reveals mounting tensions between commercial pressures and safety priorities, exemplified by the November 2023 board crisis that temporarily ousted CEO Sam Altman and the 2024 exodus of key safety researchers including co-founder Ilya Sutskever.
With over $13 billion in Microsoft investment and aggressive capability advancement through reasoning models like o1 and the recent o3-mini release[3], OpenAI sits at the center of debates about AI safety governance, racing dynamics, and whether commercial incentives can align with existential risk mitigation.
Recent Developments (2024-2025)
Capability Advances
| Model | Release Date | Key Capabilities | Performance | Strategic Impact |
|---|---|---|---|---|
| o1 | December 2024 | Full reasoning model release | Advanced mathematical/scientific reasoning | Demonstrated test-time compute scaling |
| o3-mini | January 31, 2025 | Latest reasoning model | More efficient reasoning capabilities[5] | Broader reasoning model availability |
| Sora 2 | 2025 | Video and audio generation | Enhanced video creation with audio[6] | Multimodal generation leadership |
| GPT-5.2 | December 2025 | Professional task optimization | Better at spreadsheets, presentations, image perception[4] | Enhanced enterprise value proposition |
Market Dominance and Financial Performance
User Growth and Market Position:
- 800 million weekly active users (doubled from 400M in February 2025)[7]
- 15.5 million paying subscribers generating approximately $3 billion annually[8]
- Additional $1 billion from API access[9]
- Over 92% of Fortune 500 companies now use OpenAI products or APIs[10]
Developer Ecosystem Growth:
- API business generates ≈$41M monthly revenue from ≈530 billion tokens (sanity-checked in the sketch below)[11]
- 10% monthly growth in API usage between December 2023 and June 2024[12]
- GPT Store reached 3 million custom GPTs with 1,500 daily additions[13]
- OpenAI's share of API-based AI infrastructure now exceeds 50%[14]
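As a rough consistency check, the two reported figures imply a blended price of just under eight cents per thousand tokens. A back-of-envelope sketch, assuming (as the source appears to) that both numbers describe the same monthly window:

```python
# Back-of-envelope: implied blended API price from the reported figures.
# Assumption: the ~$41M revenue and ~530B tokens cover the same month.
monthly_revenue_usd = 41_000_000
monthly_tokens = 530_000_000_000

price_per_1k = monthly_revenue_usd / monthly_tokens * 1_000  # ~$0.077
price_per_1m = price_per_1k * 1_000                          # ~$77.4

print(f"Implied blended price: ${price_per_1k:.3f} per 1K tokens")
print(f"                       ${price_per_1m:.2f} per 1M tokens")
```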
International Expansion Strategy
OpenAI for Countries Initiative:
- Launched partnership program with individual nations for data center capacity[15]
- Focus on data sovereignty and local industry building
- 10 planned country-specific projects
Asia-Pacific Growth:
- APAC region shows highest user growth globally[16]
- ChatGPT usage in APAC grew more than fourfold over 2024
- Regional offices established in Tokyo and Seoul, expanding from the Singapore hub[17]
AGI Timeline and Leadership Confidence
Sam Altman's 2025 Statements
In January 2025, CEO Sam Altman made unprecedentedly confident public statements about AGI development:
"We are now confident we know how to build AGI as we have traditionally understood it... AGI will probably get developed during Trump's term."18
Key Claims:
- AGI defined as AI capable of working as a remote software engineer[19]
- "In 2025, we may see the first AI agents join the workforce"
- Capability to "materially change the output of companies"
- Acknowledgment that "AGI has become a very sloppy term"
Strategic Implications:
- Represents significant acceleration in OpenAI's public AGI timeline
- Suggests internal confidence in current technical trajectory
- May influence competitive dynamics and regulatory responses
- Contrasts with more cautious industry voices
Risk Assessment
| Risk Category | Severity | Likelihood | Timeline | Trend | Evidence |
|---|---|---|---|---|---|
| Capability-Safety Misalignment | High | High | 1-2 years | Worsening | Safety team departures, Superalignment dissolution |
| AGI Race Acceleration | High | High | Immediate | Accelerating | Confident AGI timeline statements, competitive pressure |
| Governance Failure | High | Medium | Ongoing | Stable | Nov 2023 crisis showed board inability to constrain CEO |
| Commercial Override of Safety | High | High | 1-2 years | Worsening | Jan Leike: "Safety culture has taken backseat to shiny products" |
| AGI Deployment Without Alignment | Very High | Medium | 2-3 years | Unknown | o3 shows rapid capability gains, alignment solutions unclear |
Organizational Evolution
Founding Vision vs. Current Reality
| Aspect | 2015 Foundation | 2025 Reality | Change Assessment |
|---|---|---|---|
| Structure | Non-profit | Capped-profit with Microsoft partnership | Major deviation |
| Funding | ≈$1B founder commitment | $13B+ Microsoft investment | 13x scale increase |
| Openness | "Open by default" research publishing | Proprietary models, limited disclosure | Complete reversal |
| Mission Priority | "AGI benefits all humanity" | Product revenue and market leadership | Significant drift |
| Safety Approach | "Safety over competitive advantage" | Racing with safety as constraint | Concerning shift |
| Governance | Independent non-profit board | CEO-aligned board post-November crisis | Weakened oversight |
Key Milestones and Capability Jumps
| Date | Development | Parameters/Scale | Significance | Safety Implications |
|---|---|---|---|---|
| 2018 | GPT-1 | 117M | First transformer LM | Established architecture |
| 2019 | GPT-2 | 1.5B | Initially withheld | Demonstrated misuse concerns |
| 2020 | GPT-3 | 175B | Few-shot learning breakthrough | Sparked scaling race |
| 2022 | InstructGPT/ChatGPT | GPT-3.5 + RLHF | Mainstream AI adoption | RLHF as alignment technique |
| 2023 | GPT-4 | Undisclosed multimodal | Human-level many domains | Dangerous capabilities acknowledged |
| 2024 | o1 reasoning | Advanced chain-of-thought | Mathematical/scientific reasoning | Hidden reasoning, deception risks |
| 2024 | o3 preview | Next-generation reasoning | Near-AGI performance on some tasks | Rapid capability advancement |
| 2025 | o3-mini | Efficient reasoning | Broader reasoning availability | Democratized advanced capabilities |
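The 2022 RLHF milestone rests on a simple training objective: a reward model is fit to human preference pairs with a pairwise (Bradley-Terry) loss, then used as the signal for RL fine-tuning. A minimal sketch of that loss, with toy values standing in for real reward-model outputs:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_rewards: torch.Tensor,
                      rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Pairwise preference (Bradley-Terry) loss used for InstructGPT's
    reward model (Ouyang et al., 2022): minimize
    -log sigmoid(r_chosen - r_rejected), pushing the model to rank
    human-preferred completions higher than dispreferred ones."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy batch of 4 preference pairs; in practice these are scalar
# reward-model outputs over full prompt+completion sequences.
chosen = torch.tensor([1.2, 0.3, 2.1, -0.4])
rejected = torch.tensor([0.8, -0.1, 0.5, -0.2])
print(reward_model_loss(chosen, rejected))  # ~0.50 for this toy batch
```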
Technical Contributions and Evolution
Major Research Breakthroughs
| Innovation | Impact | Adoption | Limitations |
|---|---|---|---|
| GPT Architecture | Established transformer LMs as dominant paradigm | Universal across industry | Scaling may hit physical limits |
| RLHF/InstructGPT | Made LMs helpful, harmless, honest | Standard alignment technique | May not scale to superhuman tasks |
| Scaling Laws | Predictable performance from compute/data | Drove $100B+ industry investment | Unclear if continue to AGI |
| Chain-of-Thought Reasoning | Test-time compute for complex problems | Adopted by Anthropic, Google | Hidden reasoning enables deception |
| Deliberative Alignment | Reasoning-based safety specifications | Used in o-series models[20] | Limited evaluation in practice |
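The Scaling Laws row refers to power-law fits of loss against model parameters N and training tokens D. One published form is the Chinchilla fit L(N, D) = E + A/N^α + B/D^β (Hoffmann et al., 2022); a sketch using their reported constants, with illustrative model configurations:

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss from the Chinchilla fit
    (Hoffmann et al., 2022): L(N, D) = E + A / N**alpha + B / D**beta."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# GPT-3-scale run (175B params, ~300B tokens) vs. a roughly
# compute-matched "Chinchilla-optimal" run (70B params, 1.4T tokens):
print(chinchilla_loss(175e9, 300e9))   # ~2.00
print(chinchilla_loss(70e9, 1.4e12))   # ~1.94 -- smaller model, more data wins
```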
Safety Research Evolution
Current Methodology (2025):
- Deliberative Alignment: Teaching reasoning models human-written safety specifications[21]
- Scalable Evaluations: Automated tests measuring capability proxies[22]
- Cross-Lab Collaboration: Joint evaluations with Anthropic and other labs[23]
- Red Teaming: Human adversarial testing complementing automated evaluations
Safety Framework Assessment:
- Preparedness Framework established capability thresholds and evaluation protocols
- Safety evaluations now include third-party assessments beyond internal teams
- Alignment research continues post-Superalignment dissolution but with reduced visibility
- Integration of safety measures into product development rather than separate research track
Competitive Landscape Analysis
Capability Comparison (Late 2025)
| Company | Latest Model | Key Strengths | Market Position | Competitive Response |
|---|---|---|---|---|
| OpenAI | GPT-5.2, o3-mini | Reasoning (100% AIME 2025), broad capabilities | Market leader (81% share) | Continuous releases, AGI timeline |
| Anthropic | Claude Opus 4.5 | Safety research, coding (80.9% SWE-bench) | Strong challenger (32% enterprise LLM share) | Enterprise coding dominance (42% market share) |
| Google DeepMind | Gemini 2.5 | Research depth, multimodal, integration | Technology leader | Increased deployment urgency |
| Meta | Llama 4 | Open source approach | Alternative paradigm | Democratizing access |
Performance Benchmarks:
- Claude Opus 4.5 leads coding benchmarks (80.9% SWE-bench Verified, 42% enterprise coding share)
- GPT-5.2 leads mathematical reasoning (100% AIME 2025, 40.3% FrontierMath)
- Enterprise LLM market has shifted: Anthropic at 32%, OpenAI at 25% (Menlo Ventures)
- Context length and safety remain key Anthropic differentiators
Developer Ecosystem and Business Strategy
API and Integration Platform
Market Penetration:
- API monthly revenue: ≈$41M from 530 billion tokens (June 2024)[24]
- Gross margins: roughly 75%, declining toward 55% with pricing adjustments[25]
- Azure OpenAI Service: 64% year-over-year growth in adoption[26]
- Enterprise integration across Microsoft Office 365, GitHub Copilot
Developer Adoption:
- GPT Store: 159,000 public GPTs from 3 million total created[27]
- Average of 1,500 new GPTs added daily to the marketplace[28]
- API infrastructure market share exceeding 50% industry-wide
- Integration partnerships with major enterprise software providers
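For concreteness, the enterprise and partner integrations above typically route through the Chat Completions API. A minimal sketch using the official `openai` Python SDK; the model name, prompts, and use case are placeholders, not details from the source:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Minimal chat completion call; swap in whichever model tier a
# deployment targets.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You summarize support tickets."},
        {"role": "user", "content": "Customer reports login loop after 2FA."},
    ],
)
print(response.choices[0].message.content)
```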
Financial and Commercial Dynamics
Revenue and Investment Structure
2024-2025 Financial Performance:
- Projected 2024 revenue: $3.4 billion (ChatGPT subscriptions + API)[29]
- Growth rate: 1,700% year-over-year from product scaling
- Operating losses: $5 billion in 2024 despite revenue growth[30]
- Primary cost drivers: compute infrastructure, talent acquisition, research investment
Microsoft Partnership Impact
| Component | Details | Strategic Implications |
|---|---|---|
| Investment | $13B+ total, 49% profit share (to cap) | Creates commercial pressure for rapid deployment |
| Compute Access | Exclusive Azure partnership | Enables massive model training but creates dependency |
| Product Integration | Bing, Office 365, GitHub Copilot | Drives revenue but requires consumer-ready systems |
| API Monetization | Enterprise and developer access | Success depends on maintaining capability lead |
Governance Crisis Analysis
November 2023 Board Crisis
| Timeline | Event | Stakeholders | Outcome |
|---|---|---|---|
| Nov 17 | Board fires Sam Altman, citing lack of candor | Non-profit board, Ilya Sutskever | Initial dismissal |
| Nov 18-19 | Employee revolt, Microsoft intervention | 500+ employees, Microsoft leadership | Pressure for reversal |
| Nov 21-22 | Altman reinstated, board replaced | New commercial-aligned board | Governance weakened |
Structural Implications:
- Demonstrated that employee and investor loyalty trumps mission-based governance
- Non-profit board cannot meaningfully constrain for-profit operations
- Microsoft partnership creates de facto veto over safety-motivated decisions
- Sets precedent that commercial interests override safety governance
Safety Researcher Exodus (2024)
| Researcher | Role | Departure Date | Stated Reasons | Destination |
|---|---|---|---|---|
| Ilya Sutskever | Co-founder, Chief Scientist | May 2024 | "Personal project" (founding SSI) | Safe Superintelligence Inc. |
| Jan Leike | Superalignment Co-lead | May 2024 | "Safety culture backseat to products"[31] | Anthropic, Head of Alignment |
| John Schulman | Co-founder, PPO inventor | Aug 2024 | "Deepen AI alignment focus" | Anthropic |
| Mira Murati | CTO | Sept 2024 | "Personal exploration" | Thinking Machines Lab (founded 2025) |
Pattern Analysis:
- 75% of co-founders departed within 9 years
- All alignment-focused departures cited safety prioritization concerns
- Exodus correlates with increasing commercial pressure and capability advancement
- Anthropic captured multiple senior OpenAI safety researchers
Current Capability Assessment
Reasoning Models Performance (o1/o3 Series)
| Domain | Capability Level | Benchmark Performance | Risk Assessment |
|---|---|---|---|
| Mathematics | PhD+ | 83% on AIME, IMO medal performance | Advanced problem-solving |
| Programming | Expert | 71.7% on SWE-bench Verified | Code generation/analysis |
| Scientific Reasoning | Graduate+ | High performance on PhD-level physics | Research acceleration potential |
| Strategic Reasoning | Unknown | Chain-of-thought hidden | Deceptive alignment risks |
Key Technical Developments:
- Test-time compute scaling enables reasoning capability improvements (illustrated in the sketch after this list)
- Hidden reasoning processes prevent interpretability and alignment verification
- Performance approaching human expert level across cognitive domains
- Deliberative alignment methodology integrated into training process
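The simplest public form of test-time compute scaling is self-consistency: sample several reasoning chains and majority-vote the final answer (Wang et al., 2022). A sketch, where `sample_answer` is a stand-in for any stochastic model call; o-series models use more elaborate internal mechanisms whose traces OpenAI does not expose:

```python
from collections import Counter
from typing import Callable

def self_consistency(sample_answer: Callable[[str], str],
                     prompt: str, k: int = 16) -> str:
    """Sample k answers independently and return the majority vote.

    Spending more inference compute (larger k) buys accuracy on
    reasoning tasks -- the basic test-time scaling trade-off.
    """
    votes = Counter(sample_answer(prompt) for _ in range(k))
    answer, _count = votes.most_common(1)[0]
    return answer

# Usage with a hypothetical model call:
# best = self_consistency(lambda p: call_model(p, temperature=0.8),
#                         "What is 17 * 24?")
```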
Economic Impact and Industry Transformation
Enterprise Adoption and Integration
Fortune 500 Penetration:
- 92% of Fortune 500 companies actively using OpenAI products or APIs[32]
- Primary use cases: customer service automation, content generation, code assistance
- Integration through Microsoft ecosystem (Office 365, Teams, Azure)
- Custom enterprise solutions and fine-tuning services
Industry Transformation Metrics:
- Sparked $100B+ investment across AI industry following ChatGPT launch
- Developer productivity improvements: 10-40% in coding tasks (GitHub Copilot studies)
- Content creation acceleration across marketing, education, professional services
- Job market evolution with AI-augmented roles replacing traditional functions
International Strategy and Regulatory Engagement
Government Relations and Policy Influence
| Jurisdiction | Engagement Type | OpenAI Position | Policy Impact |
|---|---|---|---|
| US Congress | Altman testimony, lobbying | Self-regulation advocacy | Influenced Senate AI framework |
| EU AI Act | Compliance preparation | Geographic market access | Foundation model regulations apply |
| UK AI Safety | AISI collaboration | Partnership approach | Safety institute cooperation |
| China | No direct engagement | Technology export controls | Limited model access |
Global Expansion Framework
Data Sovereignty Approach:
- OpenAI for Countries program supporting local data centers[33]
- Partnerships for in-country infrastructure development
- Balance between global access and national security concerns
- Custom deployment models for government and enterprise clients
Safety Methodology and Alignment Research
Current Safety Framework (2025)
Evaluation Processes:
- Scalable Evaluations: Automated testing measuring capability proxies[34]
- Deep Dives: Human red-teaming and third-party assessments[35]
- Capability Thresholds: Predetermined criteria triggering additional safety measures (see the illustrative gate after this list)
- Cross-Lab Collaboration: Joint safety evaluations with industry partners
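Reduced to mechanics, a Preparedness-style gate compares evaluation scores against predetermined thresholds and routes threshold-crossers into the deep-dive track. An illustrative harness; the categories, scores, and cutoffs below are invented, not OpenAI's actual values:

```python
from dataclasses import dataclass

@dataclass
class CapabilityEval:
    category: str       # e.g. "cybersecurity", "bio", "autonomy"
    score: float        # aggregate benchmark score in [0, 1]
    threshold: float    # predetermined trigger for extra safety review

def gate_release(evals: list[CapabilityEval]) -> list[str]:
    """Return categories that crossed their threshold and so require
    deep-dive review (red-teaming, third-party assessment) pre-deployment."""
    return [e.category for e in evals if e.score >= e.threshold]

# Invented example values -- not OpenAI's real thresholds.
results = [
    CapabilityEval("cybersecurity", score=0.41, threshold=0.50),
    CapabilityEval("autonomy", score=0.63, threshold=0.60),
]
print(gate_release(results))  # ['autonomy'] -> triggers deep-dive track
```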
Deliberative Alignment Implementation:
- Integration of human-written safety specifications into reasoning models[36] (shape sketched after this list)
- Training models to explicitly reason about safety considerations
- Applied to o-series models with ongoing evaluation
- Represents evolution beyond RLHF toward interpretable safety reasoning
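At inference time, the observable shape of deliberative alignment resembles spec-conditioned reasoning: the model cites a written safety specification and reasons over it before answering. A prompt-level sketch only; the actual method bakes this behavior into training, and the spec text below is invented, not OpenAI's published specification:

```python
# Invented mini-spec for illustration -- not OpenAI's actual model spec.
SAFETY_SPEC = """\
1. Refuse requests that enable serious physical harm.
2. For dual-use topics, answer at a high level without operational detail.
3. When refusing, explain which rule applies."""

def build_deliberative_prompt(user_request: str) -> str:
    # Deliberative alignment trains spec-reasoning into the model; this
    # prompt-level version only illustrates the intended reasoning shape.
    return (
        f"Safety specification:\n{SAFETY_SPEC}\n\n"
        f"User request: {user_request}\n\n"
        "First, reason step by step about which rules (if any) apply. "
        "Then either answer or refuse, citing the relevant rule."
    )

print(build_deliberative_prompt("How do I synthesize compound X?"))
```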
Alignment Research Post-Superalignment
Current Research Directions:
- Scalable oversight methods for superhuman AI systems
- Interpretability research for understanding model reasoning
- Robustness testing across diverse deployment scenarios
- Integration of safety measures into product development cycles
Resource Allocation Concerns:
- Status of the original 20% compute allocation for safety research unclear in the current structure
- Safety research integrated into product teams rather than independent research
- External criticism regarding insufficient dedicated safety resources
- Balance between product development velocity and safety thoroughness
Expert Perspectives and Current Debates
Internal Alignment (Current Leadership)
Sam Altman's Position (2025):
- AGI development inevitable and better led by responsible US companies
- Commercial success enables greater safety research investment
- Rapid deployment with iterative safety improvements preferred over delayed release
- Competitive dynamics require maintaining technological leadership
Technical Leadership Perspective:
- Integration of safety measures into development process rather than separate research
- Emphasis on real-world deployment experience for safety learning
- Collaborative industry approach to safety standards and evaluation
External Safety Community Assessment
Academic and Safety Researcher Views:
- Yoshua Bengio: Concerns about commercial mission drift from original safety focus
- Stuart Russell: Warning about commercial capture of safety research priorities
- Former OpenAI safety researchers: Systematic deprioritization of safety relative to capabilities
Policy and Governance Experts:
- Need for external oversight mechanisms beyond self-regulation
- Concerns about concentration of AGI development in single organization
- Questions about democratic accountability in AGI deployment decisions
Future Trajectories and Critical Decisions
Timeline Projections (Updated 2025)
| Scenario | Probability Estimate | Timeline | Key Indicators |
|---|---|---|---|
| AGI Development | High | 1-3 years | Altman confidence, o3+ performance |
| Regulatory Intervention | Medium-High | 1-2 years | Government AI governance initiatives |
| Safety Breakthrough | Low-Medium | Unknown | Scalable alignment advances |
| Competitive Disruption | Medium | 2-3 years | Open source parity, international advances |
Strategic Decision Points
Immediate (2025):
- AGI timeline communications and expectation management
- Response to increasing regulatory scrutiny and safety criticism
- Resource allocation between reasoning model advancement and safety research
- International expansion pace and partnership selection
Medium-term (2026-2027):
- AGI deployment framework and access policies
- Safety standard establishment and industry coordination
- Relationship management with government oversight bodies
- Competitive response to potential capability disruptions
Key Research Questions
- Can OpenAI maintain safety priorities while pursuing aggressive AGI timelines?
- Will deliberative alignment scale to superintelligent systems with hidden reasoning?
- How will international coordination develop around OpenAI's AGI deployment decisions?
- What governance mechanisms could effectively constrain rapid AGI development?
- Can the developer ecosystem and API strategy support a sustainable business model?
- How will competitive dynamics evolve as multiple labs approach AGI capabilities?
Sources and Resources
Primary Documents
| Source | Type | Key Content | Link |
|---|---|---|---|
| GPT-4 System Card | Technical report | Risk assessment, red teaming results | OpenAI GPT-4 System Card |
| Preparedness Framework | Policy document | Catastrophic risk evaluation framework | OpenAI Preparedness |
| Deliberative Alignment | Research paper | Reasoning-based safety methodology | OpenAI Deliberative Alignment |
| OpenAI for Countries | Policy initiative | International partnership framework | Global Affairs Initiative |
Recent Announcements and Performance
| Source | Type | Key Content | Link |
|---|---|---|---|
| Sora 2 Release | Product announcement | Video and audio generation capabilities | Sora 2 Launch |
| o3-mini Launch | Model release | Latest reasoning model availability | Computerworld Coverage |
| AGI Timeline Interview | Executive statement | Altman's confident AGI predictions | TIME Magazine Interview |
Academic Research
| Paper | Authors | Contribution | Citation |
|---|---|---|---|
| Language Models are Few-Shot Learners | Brown et al. | GPT-3 capabilities demonstration | arXiv:2005.14165 |
| Training language models to follow instructions | Ouyang et al. | InstructGPT/RLHF methodology | arXiv:2203.02155 |
| Weak-to-Strong Generalization | Burns et al. | Superalignment research direction | arXiv:2312.09390 |
| GPT-4 Technical Report | OpenAI (279 contributors) | Official technical documentation | arXiv:2303.08774 |
Footnotes

- ChatGPT Users Statistics (February 2026) – Growth & Usage Data
- OpenAI Statistics 2026: Adoption, Integration & Innovation
- OpenAI lost $5 billion in 2024 (and its losses are increasing)
- The Era of Tailored Intelligence: Charting the Growth and Market Impact of Custom GPTs
- GPT Store Statistics & Facts: Contains 159,000 of the 3 million created GPTs
- How OpenAI's Sam Altman Is Thinking About AGI and Superintelligence in 2025
- Deliberative alignment: reasoning enables safer language models
- Jan Leike departure statement on X/Twitter, May 2024