Longterm Wiki

OpenAI dissolves Superalignment AI safety team


Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: CNBC

This news event is frequently cited as a notable indicator of organizational tensions between safety priorities and product development at OpenAI, and is relevant to discussions of AI lab governance and safety culture.

Metadata

Importance: 72/100 · news article · news

Summary

OpenAI disbanded its Superalignment team in May 2024, less than a year after launching it with a pledge of 20% compute resources toward controlling advanced AI. The dissolution followed the departures of team leaders Ilya Sutskever and Jan Leike, with Leike publicly criticizing OpenAI's safety culture as subordinated to product development.

Key Points

  • OpenAI dissolved its Superalignment team in May 2024, less than a year after announcing it with significant resource commitments.
  • Both team leaders, Ilya Sutskever and Jan Leike, departed days before the team was disbanded.
  • Jan Leike publicly stated that OpenAI's safety culture and processes had 'taken a backseat to shiny products.'
  • OpenAI had originally committed 20% of its compute to the Superalignment initiative over four years.
  • Team members were reassigned to other internal teams rather than continuing the group's dedicated long-term safety mission.

Review

The dissolution of OpenAI's Superalignment team represents a significant setback to the organization's commitment to AI safety research. Launched in 2023 with a pledge to dedicate 20% of the company's computing power to controlling superintelligent AI systems, the team was dismantled less than a year later, signaling a potential shift in OpenAI's strategic priorities and its approach to the existential risks posed by advanced artificial intelligence.

The departures of team leaders Jan Leike and Ilya Sutskever highlight deeper internal conflict over the company's direction. Leike explicitly criticized OpenAI's safety culture, arguing that 'safety culture and processes have taken a backseat to shiny products' and expressing concern about the trajectory of AI development. This points to a growing tension between rapid technological advancement and careful, responsible AI development, with significant implications for the broader AI safety landscape and for how potentially transformative AI technologies are governed.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 31, 2026 · 9 KB

The Wayback Machine - http://web.archive.org/web/20260323043034/https://www.cnbc.com/2024/05/17/openai-superalignment-sutskever-leike.html

 


OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it

Published Fri, May 17 2024 1:29 PM EDT · Updated Sat, May 18 2024 1:49 PM EDT

Hayden Field (@haydenfield)


Key Points

OpenAI has disbanded its team focused on the long-term risks of artificial intelligence, a person familiar with the situation confirmed to CNBC.

The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup.

OpenAI's Superalignment team, announced in 2023, has been working to achieve "scientific and technical breakthroughs to steer and control AI systems much smarter than us."

At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

Sam Altman, CEO of OpenAI, speaks at the Hope Global Forums annual meeting in Atlanta on Dec. 11, 2023.

Dustin Chambers | Bloomberg | Getty Images

OpenAI has disbanded its team focused on

... (truncated, 9 KB total)
Resource ID: 33a4513e1449b55d | Stable ID: N2M1ZDFiMj