Longterm Wiki

Center for AI Safety (CAIS) Blog

web

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Center for AI Safety

CAIS is one of the most prominent AI safety organizations; their blog serves as a hub for both technical research and policy discussion, and is a useful resource for tracking current thinking in the field.

Metadata

Importance: 62/100 · blog post · homepage

Summary

The official blog of the Center for AI Safety (CAIS), a leading AI safety research organization focused on reducing societal-scale risks from advanced AI systems. The blog publishes research updates, policy commentary, and educational content on AI safety topics including existential risk, alignment, and governance.

Key Points

  • Official publication channel for CAIS researchers and affiliated experts on AI safety topics
  • Covers a broad range of AI safety concerns including existential risk, misuse, and structural risks from AI
  • Publishes accessible summaries of technical research alongside policy-relevant commentary
  • CAIS is known for the 'Statement on AI Risk' signed by leading AI researchers, and for benchmarks such as Humanity's Last Exam
  • Content spans technical safety, governance, and societal impact of advanced AI systems

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Center for AI Safety | Organization | 42.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 9 KB
[CAIS 2024 Impact Report](https://safe.ai/work/impact-report/2024)

# CAIS Blog

## Deeper-dive examinations of relevant AI safety topics


[![](https://cdn.prod.website-files.com/63fe96aeda6beab8f87d3023/683bd216e0b90cfb6bc5ef44_Screenshot%202025-05-31%20at%2021.06.52.png)](https://safe.ai/blog/josue-estrada-joins-center-for-ai-safety-as-chief-operating-officer)

[Press Release](https://safe.ai/category/press-release) • Jun 23, 2025 • 2 min read

[**Josué Estrada Joins Center for AI Safety as Chief Operating Officer**](https://safe.ai/blog/josue-estrada-joins-center-for-ai-safety-as-chief-operating-officer)

[![](https://cdn.prod.website-files.com/63fe96aeda6beab8f87d3023/66e7924c328a6871029fd339_HLE.png)](https://safe.ai/blog/humanitys-last-exam)

[AI Risks](https://safe.ai/category/ai-risks) • Sep 15, 2024 • 2 min read

[**Submit Your Toughest Questions for Humanity's Last Exam**](https://safe.ai/blog/humanitys-last-exam)

CAIS and Scale AI are excited to announce the launch of Humanity's Last Exam, a project to measure how close we are to achieving expert-level AI systems by building the world's most difficult public AI benchmark, with questions gathered from experts across all fields. People who submit successful questions will be invited to be coauthors on the dataset paper and have a chance to win money from a $500,000 prize pool.

Written by: Dan Hendrycks, Alexandr Wang

[![](https://cdn.prod.website-files.com/63fe96aeda6beab8f87d3023/66de8717c7c7a0ef98f313d4_forecasting_bot.png)](https://safe.ai/blog/forecasting)

[AI Risks](https://safe.ai/category/ai-risks) • Sep 9, 2024 • 5 min read

[**Superhuman Automated Forecasting**](https://safe.ai/blog/forecasting)

This post describes a superhuman forecasting AI called FiveThirtyNine, which generates probabilistic predictions for any query by retrieving relevant information and reasoning through it. We explain how the system works, its performance compared to human forecasters, and its potential applications in improving decision-making and public discussions.

Written by: Long Phan, Andrew Zeng, Mantas Mazeika, Adam Khoja, Dan Hendrycks

[![](https://cdn.prod.website-files.com/63fe96aeda6beab8f87d3023/663aea231b6e97fc03807e37_65723e14d78d8bd4dc6373ee_AI-risk-meta-card.jpg)](https://safe.ai/blog/ai-safety-ethics-and-society)

[AI Risks](https://safe.ai/category/ai-risks) • May 10, 2024

[**AI Safety, Ethics, and Society**](https://safe.ai/blog/ai-safety-ethics-and-society)

AI Safety, Ethics, and Society is a textbook and online course providing a non-technical introduction to how current AI systems work, why many experts are concerned that continued advances in AI could pose severe societal-scale risks, and how society can manage and mitigate these risks.


... (truncated, 9 KB total)
Resource ID: a27b8d271c27aa02 | Stable ID: NjYzMDVhND