
Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Future of Humanity Institute

This FHI survey report captures the views of AI safety researchers on timelines and priorities as of 2019, serving as a useful historical benchmark for how expert opinion in the field has shifted over time.

Metadata

Importance: 55/100
Tags: organizational report, dataset

Summary

A survey conducted by the Future of Humanity Institute examining the views of AI safety researchers on key questions, including timelines to transformative AI, prioritization of research areas, and concerns about existential risk. The report aggregates expert opinion to inform the field's direction and resource allocation.

Key Points

  • Surveys AI safety researchers on their probability estimates for timelines to transformative or dangerous AI
  • Captures researcher views on which technical and governance problems are most important to prioritize
  • Provides a snapshot of expert consensus and disagreement within the AI safety community circa 2019
  • Published by FHI, a leading academic research institute for AI safety and existential risk at the University of Oxford
  • Useful for understanding how researcher priorities and beliefs have evolved over time

Cited by 1 page

Page                      Type   Quality
AI Accident Risk Cruxes   Crux   67.0
Resource ID: c64b78e5b157c2c8 | Stable ID: N2FhMTgxYz