Geoffrey Hinton

Person

Comprehensive biographical profile of Geoffrey Hinton documenting his 2023 shift from AI pioneer to safety advocate, including his estimate of a 10–20% risk of AI-caused human extinction within 5–20 years. Covers his media strategy, policy influence, and distinctive "honest uncertainty" approach, but offers limited actionable guidance for prioritization beyond noting his role in legitimizing safety concerns.

Affiliation: Independent
Role: Professor Emeritus, AI Safety Advocate
Known For: Deep learning pioneer; backpropagation; now a vocal advocate on AI risk
Related Organizations: Google DeepMind

Overview

Geoffrey Hinton is widely recognized as one of the "Godfathers of AI" for his foundational contributions to neural networks and deep learning. In May 2023, he made global headlines by leaving Google so that he could speak freely about AI risks, estimating a 10–20% probability of AI causing human extinction within 5–20 years.

Hinton's advocacy carries unique weight because of his role in creating modern AI. The 2012 AlexNet breakthrough, built by his students Alex Krizhevsky and Ilya Sutskever under his supervision, ignited the current AI revolution and led to today's large language models. His shift from AI optimist to vocal safety advocate represents one of the most significant expert opinion changes in the field, influencing public discourse and policy discussions worldwide.

His current focus emphasizes honest uncertainty about solutions while advocating for slower AI development and international coordination. Unlike many safety researchers, Hinton explicitly admits he doesn't know how to solve alignment problems, making his warnings particularly credible to policymakers and the public.

Risk Assessment

| Factor | Assessment | Evidence | Timeline |
|---|---|---|---|
| Extinction Risk | 10–20% probability | Hinton's public estimate | 5–20 years |
| Job Displacement | Very High | Economic disruption inevitable | 2–10 years |
| Autonomous Weapons | Critical concern | AI-powered weapons development | 1–5 years |
| Loss of Control | High uncertainty | Systems already exceed understanding | Ongoing |
| Capability Growth Rate | Faster than expected | Progress exceeded predictions | Accelerating |

Academic Background and Career

| Period | Position | Key Contributions |
|---|---|---|
| 1978 | PhD, University of Edinburgh | Thesis on neural networks and distributed representations |
| 1987–present | Professor (now Emeritus), University of Toronto | Neural networks research |
| 2013–2023 | VP and Engineering Fellow, Google (part-time) | Deep learning applications |
| 2018 | Turing Award | Shared with Yoshua Bengio and Yann LeCun |
| 2024 | Nobel Prize in Physics | Shared with John Hopfield for foundational discoveries enabling machine learning with artificial neural networks |

Revolutionary Technical Contributions

Foundational Algorithms:

  • Backpropagation (1986): With David Rumelhart and Ronald Williams, provided the mathematical foundation for training deep networks (see the first sketch below)
  • Dropout (2012): Regularization technique that prevents overfitting in neural networks (see the second sketch below)
  • Boltzmann Machines: Early probabilistic neural networks for unsupervised learning
  • Capsule Networks: Alternative architecture to convolutional neural networks
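
To make the backpropagation entry concrete, here is a minimal NumPy sketch of the 1986 algorithm on a toy two-layer network. It is an illustration only, not Hinton's code: the XOR task, network width, learning rate, and step count are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden-layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output-layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error toward the input,
    # applying the chain rule at each layer (sigmoid' = s * (1 - s)).
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Gradient step: adjust weights to reduce the squared error.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```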

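The dropout entry can be sketched just as briefly. The snippet below shows "inverted" dropout as it is commonly implemented today, rescaling during training; the keep probability and array shapes are assumptions for illustration (the original 2012 formulation instead scaled weights at test time).

```python
import numpy as np

def dropout(h, keep_prob=0.8, rng=np.random.default_rng(0)):
    """Randomly zero activations during training, rescaling survivors so
    the expected activation matches what the network sees at test time."""
    mask = rng.random(h.shape) < keep_prob  # keep each unit with probability keep_prob
    return h * mask / keep_prob

h_train = dropout(np.ones((4, 8)))  # noisy activations used while training
```
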
The 2012 Breakthrough: AlexNet, built by Alex Krizhevsky and Ilya Sutskever under Hinton's supervision, won the ImageNet competition by an unprecedented margin, demonstrating the superiority of deep learning and triggering the modern AI boom that led to current language models and AI capabilities.

The Pivot to AI Safety (2023)

Resignation from Google

In May 2023, Hinton publicly resigned from Google, stating in The New York Times: "I want to talk about AI safety issues without having to worry about how it interacts with Google's business."

| Motivation | Details | Impact |
|---|---|---|
| Intellectual Freedom | Speak without corporate constraints | Global media attention |
| Moral Responsibility | Felt duty given role in creating AI | Legitimized safety concerns |
| Rapid Progress | Surprised by LLM capabilities | Shifted expert consensus |
| Public Warning | Raise awareness of risks | Influenced policy discussions |

Evolution of Risk Assessment

Hinton's predictions for advanced AI development have shifted dramatically as the field progressed, particularly after the emergence of large language models like ChatGPT. His timeline revisions reflect genuine surprise at the pace of capability improvements, which lends credibility to his warnings: they rest on updated evidence rather than a fixed ideological position.

| Period | Estimate | Reasoning |
|---|---|---|
| Pre-2020 (2019) | 30–50 years to AGI | Hinton's original timeline estimate reflected the conventional wisdom among AI researchers that achieving artificial general intelligence would require multiple decades of steady progress. It was based on the then-current state of neural networks and the anticipated challenges in scaling and architectural improvements. |
| Post-ChatGPT (2023) | 5–20 years to human-level AI | Following the release of ChatGPT and other large language models, Hinton dramatically revised his timeline downward after observing capabilities he had not expected to see for many years. The emergence of sophisticated reasoning, multi-domain knowledge integration, and rapid capability scaling convinced him that progress was accelerating far beyond previous projections. |
| Extinction risk (2023) | 10–20% probability in 5–20 years | Hinton's explicit probability estimate for AI causing human extinction reflects his assessment that we lack adequate solutions to alignment problems while simultaneously developing increasingly powerful systems. It combines his revised timeline for human-level AI with uncertainty about whether we can maintain control over systems that exceed human intelligence. |
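
Hinton has not published a derivation of the 10–20% figure, but an estimate of this kind can be read as a product of conditional probabilities. The decomposition below is purely illustrative; every factor value is assumed for the example and is not sourced from Hinton.

$$
P(\text{extinction}) \approx P(\text{human-level AI in window}) \times P(\text{loss of control} \mid \text{human-level AI}) \times P(\text{extinction} \mid \text{loss of control})
$$

For instance, assumed values of 0.8, 0.4, and 0.5 give 0.8 × 0.4 × 0.5 = 0.16, inside the stated 10–20% band; shifting any factor moves the headline number accordingly.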

Current Risk Perspectives

Core Safety Concerns

Immediate Risks (1-5 years):

  • Disinformation: AI-generated fake content at scale
  • Economic Disruption: Mass job displacement across sectors
  • Autonomous Weapons: Lethal systems without human control
  • Cybersecurity: AI-enhanced attacks on infrastructure

Medium-term Risks (5-15 years):

  • Power Concentration: Control of AI by few actors
  • Democratic Erosion: AI-enabled authoritarian tools
  • Loss of Human Agency: Over-dependence on AI systems
  • Social Instability: Economic and political upheaval

Long-term Risks (10-30 years):

  • Existential Threat: 10–20% probability of human extinction
  • Alignment Failure: AI pursuing misaligned goals
  • Loss of Control: Inability to modify or stop advanced AI
  • Civilizational Transformation: Fundamental changes to human society

Unique Epistemic Position

Unlike many AI safety researchers, Hinton emphasizes:

| Aspect | Hinton's Approach | Contrast with Others |
|---|---|---|
| Solutions | "I don't know how to solve this" | Many propose specific technical fixes |
| Uncertainty | Explicitly acknowledges unknowns | Often more confident in predictions |
| Timelines | Admits rapid capability growth surprised him | Some maintain longer timeline confidence |
| Regulation | Supports without claiming expertise | Technical researchers often skeptical of policy |

Public Advocacy and Impact

Media Engagement Strategy

Since leaving Google, Hinton has systematically raised public awareness through:

Major Media Appearances:

  • CBS 60 Minutes (October 2023) - 15+ million viewers
  • BBC interviews on AI existential risk
  • MIT Technology Review cover story
  • Congressional and parliamentary testimonies

Key Messages in Public Discourse:

  1. "We don't understand these systems" - Even creators lack full comprehension
  2. "Moving too fast" - Need to slow development for safety research
  3. "Both near and far risks matter" - Job loss AND extinction concerns
  4. "International cooperation essential" - Beyond company-level governance

Policy Influence

| Venue | Impact | Key Points |
|---|---|---|
| UK Parliament | AI Safety Summit input | Regulation necessity, international coordination |
| US Congress | Testimony on AI risks | Bipartisan concern, need for oversight |
| EU AI Office | Consultation on AI Act | Technical perspective on capabilities |
| UN Forums | Global governance discussions | Cross-border AI safety coordination |

Effectiveness Metrics

Public Opinion Impact:

  • Pew Research shows 52% of Americans more concerned about AI than excited (up from 38% in 2022)
  • Google search trends show substantial increases in "AI safety" searches following his resignation
  • Media coverage of AI risks increased significantly in the months following his departure from Google

Policy Responses:

  • EU AI Act included stronger provisions, partly in response to expert warnings
  • US AI Safety Institute establishment accelerated
  • UK AISI expanded mandate and funding

Technical vs. Policy Focus

Departure from Technical Research

Unlike safety researchers at MIRI, Anthropic, or ARC, Hinton explicitly avoids proposing technical solutions:

Rationale for Policy Focus:

  • "I'm not working on AI safety research because I don't think I'm good enough at it"
  • Technical solutions require deep engagement with current systems
  • His comparative advantage lies in public credibility and communication
  • Policy interventions may be more tractable than technical alignment

Areas of Technical Uncertainty:

  • How to ensure AI systems remain corrigible
  • Whether interpretability research can keep pace
  • How to detect deceptive alignment or scheming
  • Whether capability control methods will scale

Current State and Trajectory

2024-2025 Activities

Ongoing Advocacy:

  • Regular media appearances maintaining public attention
  • University lectures on AI safety to next generation researchers
  • Policy consultations with government agencies globally
  • Support for AI safety research funding initiatives

Collaboration Networks:

  • Works with Stuart Russell on policy advocacy
  • Supported Future of Humanity Institute research directions (FHI closed April 2024)
  • Collaborates with the Center for AI Safety on public communications
  • Advises Partnership on AI on technical governance

Projected 2025-2028 Influence

| Area | Expected Impact | Key Uncertainties |
|---|---|---|
| Regulatory Policy | High - continued expert testimony | Political feasibility of AI governance |
| Public Opinion | Medium - sustained media presence | Competing narratives about AI benefits |
| Research Funding | High - legitimizes safety research | Balance with capabilities research |
| Industry Practices | Medium - pressure for responsible development | Economic incentives vs. safety measures |

Key Uncertainties and Debates

Internal Consistency Questions

Timeline Uncertainty:

  • Why did estimates change so dramatically (30-50 years to 5-20 years)?
  • How reliable are rapid opinion updates in complex technological domains?
  • What evidence would cause further timeline revisions?

Risk Assessment Methodology:

  • How does Hinton arrive at specific probability estimates (e.g., 10–20% extinction risk)?
  • What empirical evidence supports near-term catastrophic risk claims?
  • How do capability observations translate to safety risk assessments?

Positioning Within Safety Community

Relationship to Technical Research: Hinton's approach differs from researchers focused on specific alignment solutions:

| Technical Researchers | Hinton's Approach |
|---|---|
| Propose specific safety methods | Emphasizes uncertainty about solutions |
| Focus on scalable techniques | Advocates for slowing development |
| Build safety into systems | Calls for external governance |
| Research-first strategy | Policy-first strategy |

Critiques from Safety Researchers:

  • Insufficient engagement with technical safety literature
  • Over-emphasis on extinction scenarios vs. other risks
  • Policy recommendations lack implementation details
  • May distract from technical solution development

Critiques from Capabilities Researchers:

  • Overstates risks based on limited safety research exposure
  • Alarmist framing may harm beneficial AI development
  • Lacks concrete proposals for managing claimed risks
  • Sudden opinion change suggests insufficient prior reflection

Comparative Analysis with Other Prominent Voices

Risk Assessment Spectrum

| Figure | Extinction Risk Estimate | Timeline | Primary Focus |
|---|---|---|---|
| Geoffrey Hinton | 10–20% in 5–20 years | 5–20 years to human-level AI | Public awareness, policy |
| Eliezer Yudkowsky | >90% | 2–10 years | Technical alignment research |
| Dario Amodei | Significant but manageable | 5–15 years | Responsible scaling, safety research |
| Stuart Russell | High without intervention | 10–30 years | AI governance, international cooperation |
| Yann LeCun | Very low | 50+ years | Continued capabilities research |

Communication Strategies

Hinton's Distinctive Approach:

  • Honest Uncertainty: "I don't know" as core message
  • Narrative Arc: Personal journey from optimist to concerned
  • Mainstream Appeal: Avoids technical jargon, emphasizes common sense
  • Institutional Credibility: Leverages academic and industry status

Effectiveness Factors:

  • Cannot be dismissed as anti-technology
  • Changed mind based on evidence, not ideology
  • Emphasizes uncertainty rather than certainty
  • Focuses on raising questions rather than providing answers

Sources and Resources

Academic Publications

| Publication | Year | Significance |
|---|---|---|
| Learning representations by back-propagating errors | 1986 | Foundational backpropagation paper |
| ImageNet Classification with Deep Convolutional Neural Networks | 2012 | AlexNet breakthrough |
| Deep Learning | 2015 | Nature review with LeCun and Bengio |

Recent Media and Policy Engagement

| Source | Date | Topic |
|---|---|---|
| CBS 60 Minutes | October 2023 | AI risks and leaving Google |
| New York Times | May 2023 | Resignation announcement |
| MIT Technology Review | May 2023 | In-depth risk assessment |
| BBC | June 2023 | Global AI governance |

Research Organizations and Networks

| Organization | Relationship | Focus Area |
|---|---|---|
| University of Toronto | Emeritus Professor | Academic research base |
| Vector Institute | Co-founder | Canadian AI research |
| CIFAR | Senior Fellow | AI and society program |
| Partnership on AI | Advisor | Industry collaboration |

Policy and Governance Resources

| Institution | Engagement Type | Policy Impact |
|---|---|---|
| UK Parliament | Expert testimony | AI Safety Summit planning |
| US Congress | House/Senate hearings | AI regulation framework |
| EU Commission | AI Act consultation | Technical risk assessment |
| UN AI Advisory Board | Member participation | Global governance principles |

References

1. Partnership on AI

Partnership on AI (PAI) is a nonprofit coalition of AI researchers, civil society organizations, academics, and companies working to develop best practices, conduct research, and shape policy around responsible AI development. It brings together diverse stakeholders to address challenges including safety, fairness, transparency, and the societal impacts of AI systems. PAI serves as a coordination hub for cross-sector dialogue on AI governance.

★★★☆☆

2. Future of Humanity Institute

The official website of the Future of Humanity Institute (FHI), an Oxford University research center that was foundational in establishing the fields of existential risk research and AI safety. FHI closed on 16 April 2024 after approximately two decades of influential work. The site now serves as an archived record of the institution's history, research agenda, and legacy.

★★★★☆

3. Yann LeCun on AI existential risk

Meta's chief AI scientist Yann LeCun dismisses fears of AI posing existential risks or permanently destroying jobs as "preposterously ridiculous," arguing that safety concerns can be addressed through proper development rather than restricting research. He draws parallels to historical technologies like turbo-jets, which were eventually made safe, and advocates for open AI research. LeCun represents a notable dissenting voice among AI's "godfathers," disagreeing with Hinton and Bengio on AI risk.

★★★★☆

4. Vector Institute

The Vector Institute is a Canadian not-for-profit organization dedicated to advancing AI research, application, and commercialization, with a focus on machine learning, deep learning, privacy, security, and healthcare. It operates across sectors to drive economic growth and has published AI trust and safety principles for organizations. It serves as a hub for academic-industry collaboration in Ontario and across Canada.

5. Geoffrey Hinton's resignation from Google · May 2023

Geoffrey Hinton, a pioneer of deep learning and longtime Google researcher, resigned from Google in May 2023 to speak freely about his concerns that AI could pose serious risks to humanity. Hinton expressed regret about his life's work and warned that AI systems may soon surpass human intelligence in dangerous ways. His departure marked a significant moment in public discourse about AI safety from a highly credentialed insider.

★★★★☆

6. CBS 60 Minutes interview · October 2023

This CBS 60 Minutes interview featured Geoffrey Hinton, the "Godfather of AI," discussing his concerns about AI safety and existential risk after leaving Google. The page is currently unavailable, but the interview covered Hinton's warnings about AI surpassing human intelligence and the potential dangers of advanced AI systems.

7. ImageNet competition · image-net.org

ImageNet is a large-scale image database organized according to the WordNet noun hierarchy, containing hundreds to thousands of images per concept node. It has been foundational to advances in computer vision and deep learning, particularly through its annual competition (ILSVRC), which catalyzed breakthroughs like AlexNet in 2012. The dataset is freely available to researchers for non-commercial use.

8. Center for AI Safety

The Center for AI Safety (CAIS) is a research organization focused on mitigating catastrophic and existential risks from advanced AI systems. It conducts technical research, publishes surveys and statements, and supports field-building efforts across academia and industry. CAIS is notable for its broad coalition-building, including its widely cited statement on AI extinction risk signed by leading researchers.

★★★★☆

9. University of Toronto

The official homepage of the University of Toronto, a leading research university in Canada. U of T has made significant contributions to AI and deep learning research, being the academic home of Geoffrey Hinton and the birthplace of foundational deep learning breakthroughs.

10. Deep Learning · Nature (peer-reviewed) · 2015 · Paper

This Nature article provides a comprehensive overview of deep learning, explaining how computational models with multiple processing layers can learn hierarchical representations of data. The paper highlights that deep learning has dramatically advanced performance in speech recognition, visual object recognition, object detection, drug discovery, and genomics. It describes key techniques including backpropagation for training neural networks, convolutional neural networks (CNNs) for image and audio processing, and recurrent neural networks (RNNs) for sequential data like text and speech.

★★★★★

11. CIFAR

CIFAR is a Canadian research institute that funds and supports interdisciplinary research across multiple domains including AI, neuroscience, and quantum computing. It played a foundational role in the deep learning revolution by funding Geoffrey Hinton, Yann LeCun, and Yoshua Bengio's early work. CIFAR now also engages with AI governance and societal impacts through its AI & Society program.

12. Pew Research Center survey on AI attitudes · August 2023

A Pew Research Center survey from August 2023 documenting increasing American public concern about AI's growing role in daily life. The report finds that more Americans are worried than excited about AI, with majorities expressing unease about its use in various applications. It provides empirical baseline data on public attitudes toward AI across demographic groups.

★★★★☆

13. ImageNet Classification with Deep Convolutional Neural Networks · 2012 · Paper

This landmark 2012 paper by Krizhevsky, Sutskever, and Hinton introduced AlexNet, a deep convolutional neural network that dramatically outperformed prior methods on the ImageNet Large Scale Visual Recognition Challenge. It demonstrated that deep CNNs trained on GPUs could achieve state-of-the-art image classification, catalyzing the modern deep learning revolution. The techniques introduced (ReLU activations, dropout regularization, and GPU training) became foundational to subsequent AI progress.

14. Learning representations by back-propagating errors · Nature (peer-reviewed) · David E. Rumelhart, Geoffrey E. Hinton & Ronald J. Williams · 1986 · Paper

This seminal 1986 Nature paper introduces backpropagation, a learning algorithm for multi-layer neural networks that iteratively adjusts connection weights to minimize the difference between actual and desired outputs. The key innovation is that hidden units in the network automatically learn to represent important features of the task domain through this error-driven learning process. This capability to discover useful internal representations distinguishes backpropagation from earlier methods like the perceptron, enabling neural networks to learn complex, non-linear relationships in data.

★★★★★

15. MIT Technology Review interview · May 2023

A landmark interview with Geoffrey Hinton, one of the "godfathers of deep learning," explaining why he resigned from Google to speak freely about AI risks. Hinton expresses concern that AI systems may develop goals misaligned with human values, that the competitive race between tech companies makes safety harder, and that he now regrets aspects of his life's work.

★★★★☆

Structured Data

12 facts · 2 records

Employed By: University of Toronto (as of 1987)
Role / Title: Professor Emeritus (as of 2023)
Birth Year: 1947

All Facts

People

| Property | Value | As Of |
|---|---|---|
| Role / Title | Professor Emeritus | 2023 |
| Role / Title (earlier) | VP and Engineering Fellow | Mar 2013 |
| Role / Title (earlier) | Professor of Computer Science | 1987 |
| Employed By | Google DeepMind | Mar 2013 |
| Employed By (earlier) | University of Toronto | 1987 |

Biographical

| Property | Value |
|---|---|
| Wikipedia | https://en.wikipedia.org/wiki/Geoffrey_Hinton |
| Google Scholar | https://scholar.google.com/citations?user=JicYPdAAAAAJ |
| Education | PhD in Artificial Intelligence, University of Edinburgh (1978); BA in Experimental Psychology, Cambridge University |
| Notable For | Godfather of deep learning; pioneer of backpropagation, Boltzmann machines, and deep neural networks; Nobel Prize in Physics 2024; Turing Award 2018 |
| Social Media | @geoffreyhinton |
| Birth Year | 1947 |

General

| Property | Value |
|---|---|
| Website | https://www.cs.toronto.edu/~hinton/ |

Career History

| Organization | Title | Start | End |
|---|---|---|---|
| University of Toronto | Professor Emeritus | 1987 | |
| Google DeepMind | VP & Engineering Fellow | 2013-03 | 2023-05 |

Related Wiki Pages

Top Related Pages

Concepts

Existential Risk from AI · Optimistic Alignment Worldview · Large Language Models · Self-Improvement and Recursive Enhancement

Organizations

US AI Safety Institute

Approaches

Pause Advocacy · AI Safety Field Building Analysis

Risks

AI Authoritarian Tools · Scheming

Policy

Safe and Secure Innovation for Frontier Artificial Intelligence Models Act · EU AI Act

Key Debates

AI Accident Risk Cruxes · The Case Against AI Existential Risk · Is Interpretability Sufficient for Safety?

Analysis

AI Risk Warning Signs Model · LAWS Proliferation Model

Historical

Deep Learning Revolution Era · Mainstream Era · Anthropic-Pentagon Standoff (2026) · AI Military Deployment in the 2026 Iran War