Longterm Wiki

Credibility Rating: 3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

This EA Forum topic page serves as a reference hub for the Center on Long-Term Risk (CLR), an organization occupying a distinctive niche in AI safety: it focuses on suffering risks (s-risks) rather than extinction risks. The page is useful for understanding the broader landscape of AI safety organizations and research agendas.

Metadata

Importance: 42/100 · wiki page · reference

Summary

The Center on Long-Term Risk (CLR) is an EA-affiliated research institute focused on mitigating s-risks (suffering risks) from advanced AI, particularly by studying cooperative behavior and conflict prevention between transformative AI systems. Originally founded as the Foundational Research Institute in 2013, CLR operates under the Effective Altruism Foundation and has received significant funding from the Survival and Flourishing Fund.

Key Points

  • CLR specializes in s-risk research — catastrophic scenarios involving extreme suffering at civilizational scale, distinct from x-risk extinction framings.
  • Primary research focus is on encouraging cooperative AI behavior and preventing conflict or defection dynamics between advanced AI systems.
  • Founded in July 2013 as the Foundational Research Institute; adopted its current name, Center on Long-Term Risk, in March 2020 to better reflect its mission.
  • Part of the Effective Altruism Foundation ecosystem and has received over $1.2M from the Survival and Flourishing Fund as of June 2022.
  • Represents a welfare-focused perspective within AI safety, prioritizing reduction of suffering over other existential risk framings.

Cited by 1 page

Page                             Type           Quality
Centre for Long-Term Resilience  Organization   63.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 3 KB
Center on Long-Term Risk - EA Forum 
 
Contributors: Will Aldred, Pablo, MichaelA🔸, Eevee🔹, Leo

The Center on Long-Term Risk (CLR) is a research institute that aims to mitigate s-risks from advanced AI. Its research agenda focuses on encouraging cooperative behavior in and avoiding conflict between transformative AI systems. [1]

 History 

 CLR was founded in July 2013 as the Foundational Research Institute; [2] it adopted its current name in March 2020. [3] CLR is part of the Effective Altruism Foundation.

 Funding 

 As of June 2022, CLR has received over $1.2 million in funding from the Survival and Flourishing Fund. [4]

 ...

 Posts tagged Center on Long-Term Risk

  • 353 · Reducing long-term risks from malevolent actors · David_Althaus, Tobias_Baumann · 6y ago · 45 m read
  • 310 · EAF’s ballot initiative doubled Zurich’s development aid · Jonas_ · 6y ago · 8 m read
  • 193 · Shallow evaluations of longtermist organizations · NunoSempere · 5y ago · 40 m read
  • 176 · 2021 AI Alignment Literature Review and Charity Comparison · Larks · 4y ago · 87 m read
  • 174 · List of EA funding opportunities · MichaelA🔸 · 4y ago · 7 m read
  • 170 · Center on Long-Term Risk: 2023 Fundraiser · stefan.torges · 3y ago · 16 m read
  • 162 · Leadership change at the Center on Long-Term Risk · JesseClifton, Tristan Cook, Mia_Taylor · 1y ago · 3 m read
  • 159 · Ingredients for creating disruptive research teams · stefan.torges · 7y ago · 65 m read
  • 155 · 2020 AI Alignment Literature Review and Charity Comparison · Larks · 5y ago · 82 m read
  • 147 · 2019 AI Alignment Literature Review and Charity Comparison · Larks · 6y ago · 75 m read
  • 137 · Replicating and extending the grabby aliens model · Tristan Cook · 4y ago · 62 m read
  • 130 · Beginner’s guide to reducing s-risks [link-post] · Center on Long-Term Risk · 2y ago · 3 m read
  • 118 · 2018 AI Alignment Literature Review and Charity Comparison · Larks · 7y ago · 75 m read
  • 112 · Takeaways from EAF's Hiring Round · stefan.torges · 7y ago · 20 m read
  • 94 · The optimal timing of spending on AGI safety work; why we should probably be spending more now · Tristan Cook, Guillaume Corlouer · 3y ago · 44 m read

(15 of 61 posts shown)