Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Centre for the Study of Existential Risk

CSER is a University of Cambridge research centre focused on existential risks. This page covers its decision theory work relevant to AI alignment, though limited content was available for deeper analysis.

Metadata

Importance: 35/100

Summary

This page describes the Centre for the Study of Existential Risk's (CSER) research program on decision theory as it relates to AI safety and existential risk. It covers conferences and workshops exploring how formal decision-theoretic frameworks can inform the development of safe and beneficial AI systems.

Key Points

  • CSER hosts conferences and workshops connecting decision theory research to AI safety challenges
  • The program explores how decision-theoretic frameworks can help address existential risks from advanced AI
  • It brings together researchers from philosophy, economics, and AI to examine rational agency under uncertainty
  • It investigates the implications of various decision theories for designing AI systems with safe behavior
  • The work is part of CSER's broader interdisciplinary approach to long-term AI risk reduction

Cited by 1 page

Resource ID: ca2d5bccce7ea4be | Stable ID: N2Y1YjZhZT