Longterm Wiki

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Cold Takes

A widely read series by Holden Karnofsky (Open Philanthropy) that helped mainstream longtermist and transformative AI risk arguments within the effective altruism and AI safety communities; available as blog posts, podcast, and PDF.

Metadata

Importance: 78/100 · blog post · analysis

Summary

Holden Karnofsky's 'Most Important Century' series argues that 21st-century AI development could trigger a productivity explosion leading to a galaxy-wide civilization far sooner than expected, making current decisions uniquely consequential for long-run human welfare. The series synthesizes arguments about AI timelines, transformative risk, and the moral weight of shaping humanity's long-term trajectory.

Key Points

  • Advanced AI could cause a 'productivity explosion' dramatically compressing the timeline to a technologically mature, potentially galaxy-spanning civilization.
  • The 21st century is uniquely positioned to initiate and shape this transition, giving current actors outsized influence over the long-run future.
  • The long-run future could be a radical utopia or dystopia—making the current period a critical juncture for steering outcomes.
  • Despite the 'wild' sci-fi framing, the author argues these claims deserve serious empirical scrutiny rather than dismissal.
  • Written by GiveWell and Open Philanthropy co-founder Holden Karnofsky, grounding the argument in effective altruism and longtermist frameworks.

Cited by 1 page

Page              Type    Quality
Holden Karnofsky  Person  40.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 16 KB
The "most important century" blog post series 
 
 The "most important century" series of blog posts argues that the 21st century could be the most important century ever for humanity, via the development of advanced AI systems that could dramatically speed up scientific and technological advancement, getting us more quickly than most people imagine to a deeply unfamiliar future.

 You can get the highlights from the series via:
  • A few-page summary (below)
  • Discussion of the series on The Ezra Klein Show (NYT, 90 minutes) or The 80,000 Hours podcast (2 hours)
 You can read the whole series as:
  • A series of blog posts: I'd suggest starting with the Roadmap. (Each piece links to the next piece at the end.) This is the original format, where it will be easiest to click around, see graphics at full size, etc.
  • An audio series available on most podcast platforms (Spotify, Stitcher, Apple Podcasts, etc.)
  • A single printable PDF.
  • For Kindle, you can buy a Kindle-formatted version for $0.99 (the minimum price they let me set) or download this AZW3 file for free (see instructions for putting this on your Kindle). There's also a free ePub file for other readers.
 
The series in a nutshell

 
I've spent most of my career looking for ways to do as much good as possible, per unit of money or time. I worked on finding evidence-backed charities working on global health and development (co-founding GiveWell), and later moved into philanthropy that takes more risks (co-founding Open Philanthropy).

 
Over the last few years - thanks to general dialogue with the effective altruism community, and extensive research done by Open Philanthropy's Worldview Investigations team - I've become convinced that humanity as a whole faces huge risks and opportunities this century. Better understanding and preparing for these risks and opportunities is where I am now focused.

 
This piece will summarize a series of posts on why I believe we could be in the most important century of all time for humanity. It gives a short summary, key post(s), and sometimes key graphics for 5 basic points:

 

 1. The long-run future is radically unfamiliar. Enough advances in technology could lead to a long-lasting, galaxy-wide civilization that could be a radical utopia, dystopia, or anything in between.
 2. The long-run future could come much faster than we think, due to a possible AI-driven productivity explosion.
 3. The relevant kind of AI looks like it will be developed this century - making this century the one that will initiate, and have the opportunity to shape, a future galaxy-wide civilization.
 4. These claims seem

... (truncated, 16 KB total)
Resource ID: 1a20dfc897a0933a | Stable ID: Zjk2NzY2OT