Longterm Wiki

Critiques of Prominent AI Safety Labs: Redwood Research

blog

Author

Omega.

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: LessWrong

An anonymous critique of Redwood Research's organizational practices and funding relationships within the EA ecosystem, relevant to understanding institutional dynamics in the AI safety field as of 2023.

Forum Post Details

Karma
4
Comments
0
Forum
lesswrong
Forum Tags
AI Alignment Fieldbuilding · Criticisms of The Rationalist Movement · AI · Community

Metadata

Importance: 38/100 · blog post · commentary

Summary

Anonymous technical AI safety researchers critique Redwood Research, arguing the organization has serious structural and research quality flaws despite receiving over $21 million in funding, largely attributable to conflicts of interest among key EA funders and leadership. The post examines Redwood's organizational maturity, research approach, and its ties to central EA institutions, while remaining cautiously optimistic about its capacity to improve.

Key Points

  • Redwood Research received over $21M in funding (~$20M from Open Philanthropy), raising questions about whether COIs between funders and leadership led to outsized support for an immature org.
  • Authors are anonymous technical AI safety researchers who shared the draft with Redwood prior to publication, acknowledging potential bias and professional conflicts.
  • Critique focuses on organizational flaws, research reliability concerns, and Redwood's role as a partial gatekeeper to central EA institutions.
  • Redwood's approach is heavily informed by Paul Christiano (ARC), who sits on its board and originally proposed one of its core projects.
  • Authors suggest Redwood has room to improve and note that many orgs have made successful pivots at similar scales, expressing cautious optimism.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 41 KB

 Critiques of prominent AI safety labs: Redwood Research 

 by Omega. · 17th Apr 2023 · Linkpost for forum.effectivealtruism.org · 26 min read

 In this series, we evaluate AI safety organizations that have received more than $10 million per year in funding. We do not critique MIRI (1) or OpenAI (1, 2, 3), as there have already been several conversations about and critiques of these organizations.

 The authors of this post include two technical AI safety researchers, and others who have spent significant time in the Bay Area community. One technical AI safety researcher is senior (>4 years experience), the other junior. We would like to make our critiques non-anonymously but unfortunately believe this would be professionally unwise. Further, we believe our criticisms stand on their own. Though we have done our best to remain impartial, readers should not assume that we are completely unbiased or don’t have anything to personally or professionally gain from publishing these critiques. We take the benefits and drawbacks of the anonymous nature of our post seriously, and are open to feedback on anything we might have done better.

 The first post in this series covers Redwood Research (Redwood). Redwood is a non-profit started in 2021 working on technical AI safety (TAIS) alignment research. Their approach is heavily informed by the work of Paul Christiano, who runs the Alignment Research Center (ARC) and previously ran the language model alignment team at OpenAI. Paul proposed one of Redwood's original projects and sits on Redwood's board. Redwood has strong connections with central EA leadership and funders, has received significant funding since its inception, recruits almost exclusively from the EA movement, and partly acts as a gatekeeper to central EA institutions.

 We shared a draft of this document with Redwood prior to publication and are grateful for their feedback and corrections (we recommend others reach out similarly). We have also invited them to share their views in the comments of this post.

 We also invite others to share their thoughts openly in the comments if they feel comfortable, or to contribute anonymously via this form. We will add inputs from there to the comments section of this post, but will likely not update the main body of the post as a result (unless comments catch errors in our writing).

 Summary of our views 

 We believe that Redwood has some serious flaws as an org, yet has received a significant amount of funding from a central EA grantmaker (Open Philanthropy). Inadequately checked conflicts of interest (COIs) might be partly responsible for funders giving a relatively immature org lots o

... (truncated, 41 KB total)