Longterm Wiki

Cold Takes – Holden Karnofsky's Blog

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Cold Takes

Influential blog by an Open Philanthropy co-CEO; the 'Most Important Century' series is widely read in the AI safety community and provides strategic framing for why AI safety work is urgent.

Metadata

Importance: 62/100 · blog post · homepage

Summary

Cold Takes is the personal blog of Holden Karnofsky, co-CEO of Open Philanthropy, exploring big-picture questions about AI, existential risk, effective altruism, and how to think about the most important challenges of our time. It features in-depth essays on AI timelines, transformative AI scenarios, and philanthropic strategy. The blog is best known for its 'Most Important Century' series, which argues that we may be living at a uniquely pivotal moment in history.

Key Points

  • Hosts the influential 'Most Important Century' series arguing current decades may be uniquely pivotal for humanity's long-term future.
  • Covers AI timelines, transformative AI risk, and the implications of advanced AI from a longtermist perspective.
  • Written by Holden Karnofsky, co-CEO of Open Philanthropy, a major funder of AI safety research.
  • Explores how individuals and philanthropists should prioritize actions given uncertainty about AI development trajectories.
  • Bridges technical AI safety concerns with broader existential risk, policy, and effective altruism considerations.

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| Holden Karnofsky | Person | 40.0 |
| Sharp Left Turn | Risk | 69.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 29 KB
[Subscribe (free)](https://www.cold-takes.com/#subscribe)

# Cold Takes

For audio version, search for "Cold Takes Audio" in your podcast app

[Subscribe](https://www.cold-takes.com/#subscribe)

Latest post: **[Good job opportunities for helping with the most important century](https://www.cold-takes.com/good-job-opportunities-for-helping-with-the-most-important-century/ "Good job opportunities for helping with the most important century")**

## Featured posts

![What does Bing Chat tell us about AI risk?](https://www.cold-takes.com/content/images/size/w750/2023/02/shoggoth-rlhf-1.webp)

### [What does Bing Chat tell us about AI risk?](https://www.cold-takes.com/what-does-bing-chat-tell-us-about-ai-risk/)

### [Jobs that can help with the most important century](https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/)

![Spreading messages to help with the most important century](https://www.cold-takes.com/content/images/size/w750/2023/01/megaphone-emoji-twitter-dimensions.png)

### [Spreading messages to help with the most important century](https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/)

![How we could stumble into AI catastrophe](https://www.cold-takes.com/content/images/size/w750/2023/01/wile-c-coyote-twitter.png)

### [How we could stumble into AI catastrophe](https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/)

### [Transformative AI issues (not just misalignment): an overview](https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/)

![Racing through a minefield: the AI deployment problem](https://www.cold-takes.com/content/images/size/w750/2022/12/racing-through-a-minefield-rectangular.png)

### [Racing through a minefield: the AI deployment problem](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/)

![High-level hopes for AI alignment](https://www.cold-takes.com/content/images/size/w750/2022/12/high-level-hopes-rectangle.png)

### [High-level hopes for AI alignment](https://www.cold-takes.com/high-level-hopes-for-ai-alignment/)

![AI Safety Seems Hard to Measure](https://www.cold-takes.com/content/images/size/w750/2022/12/ai-safety-seems-hard-to-measure-3.png)

### [AI Safety Seems Hard to Measure](https://www.cold-takes.com/ai-safety-seems-hard-to-measure/)

![Why Would AI "Aim" To Defeat Humanity?](https://www.cold-takes.com/content/images/size/w750/2022/11/exmachina.jpeg)

### Why Would AI "Aim" To Defeat Humanity?

[Why Would AI "Aim" To Defeat Humanity?](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humani

... (truncated, 29 KB total)
Resource ID: 859ff786a553505f | Stable ID: MjViMjRmND