Longterm Wiki

Open letter from 13 AI workers

This open letter is a significant public action by AI insiders signaling concerns about whether frontier labs' internal safety cultures match their public commitments, and is relevant to debates about AI governance and corporate accountability.

Metadata

Importance: 62/100 · opinion piece · primary source

Summary

Thirteen current and former employees from OpenAI and other frontier AI labs published an open letter raising concerns about inadequate safety oversight and insufficient whistleblower protections within AI companies. The letter calls for stronger mechanisms enabling employees to report safety concerns without fear of retaliation, and advocates for greater transparency and accountability from AI developers. It highlights a gap between public safety commitments and internal company culture.

Key Points

  • Thirteen current and former AI workers signed an open letter expressing concerns about safety culture and oversight at frontier AI labs including OpenAI.
  • The letter calls for robust whistleblower protections so employees can report safety concerns to regulators and the public without retaliation.
  • Signatories argue that AI companies' internal safety processes lack sufficient independence and that employees face pressure not to raise concerns publicly.
  • The letter advocates for government and regulatory bodies to establish formal channels for AI safety whistleblowing.
  • This action reflects growing tension between AI companies' public safety messaging and employees' firsthand experience of internal safety practices.

Cited by 1 page

Page | Type | Quality
Corporate Influence on AI Policy | Crux | 66.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 10 KB
Open Letter from OpenAI Employees Highlights Concerns Around Oversight and Whistleblower Protections

The Truth at Any Cost. Qui Tam, Compliance and Anti-Corruption News.

by Sophie Luskin · June 10, 2024 · Corporate
 
Following the revelation of OpenAI’s use of restrictive non-disclosure and non-disparagement agreements, a group of thirteen AI workers – eleven current and former OpenAI employees and two current and former Google DeepMind employees – penned an open letter on June 4, underscoring their concerns about the rapid pace of development in the artificial intelligence industry. They argue that the AI industry lacks adequate oversight mechanisms and whistleblower protections for those who speak up. In publishing the letter, the group sought to bring immediate attention and action to their concerns.

Titled “A Right to Warn about Advanced Artificial Intelligence,” the letter emphasizes that the ability of current and former employees of AI companies to blow the whistle is critical to overseeing AI and to ensuring that new technology is developed and deployed in ways that directly benefit the public.

In the letter, the AI workers state, “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”

They further explain that under current internal and governmental oversight regimes, AI companies have “weak obligations to share some of this information with governments, and none with civil society.” Thus, the group does not believe that AI companies can be relied upon to share this information voluntarily.

 Asserting their pivotal role, the AI workers declare that “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.”  

 The AI workers claim, however, that OpenAI and other AI companies prevent this accountability via non-disclosu

... (truncated, 10 KB total)
Resource ID: 194b8a6feedff102 | Stable ID: YzE4MzI3Yj