
Yann LeCun

Also known as: LeCun

Pioneer of convolutional neural networks (CNNs); Chief AI Scientist at Meta; Turing Award 2018; vocal skeptic of AGI existential risk

Current Role: Chief AI Scientist
Organization: Meta AI (FAIR)
Born: 1960 (age ~65)

Expert Positions (4 topics)

| Topic | View | Estimate | Confidence | Date | Source |
| --- | --- | --- | --- | --- | --- |
| P(doom) | Effectively zero | ~0% | high | Oct 2024 | TechCrunch interview (2024) |
| AGI Timelines | Very long (via current methods, never) | 50+ years | high | 2024 | TIME interview (2024) |
| How Hard Is Alignment? | Solvable through design | Engineering problem, not fundamental | high | 2023 | Debate with Yudkowsky (2023) |
| Takeoff Speed | Hard takeoff impossible | Incremental progress only | high | Oct 2024 | Public statements (2023–2024) |

Policy Positions (1 policy)

| Policy | Position | Status | Reason |
| --- | --- | --- | --- |
| Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) | Oppose | Vetoed | Publicly rebuked SB 1047 supporters the day after Hinton endorsed the bill; argued it would have "apocalyptic consequences on the AI ecosystem" and that "without open-source AI, there is no AI start-up ecosystem and no academic research on large models" |

Education

PhD in Computer Science, Université Pierre et Marie Curie (Paris VI, 1987); Diplôme d'Ingénieur, ESIEE Paris


Overview

Yann André LeCun (born July 8, 1960) is a French-American computer scientist widely recognized as one of the "Godfathers of AI" alongside Geoffrey Hinton and Yoshua Bengio. He received the 2018 Turing Award for his foundational work on deep learning, particularly his development of convolutional neural networks (CNNs) that revolutionized computer vision. He joined Facebook (now Meta) in 2013 to found the company's AI research laboratory, FAIR, and served as its Chief AI Scientist from 2018 until 2025.

Unlike his fellow 2018 Turing laureates, LeCun has remained one of the most vocal and prominent skeptics of AI existential risk claims. While Hinton and Bengio have pivoted toward AI safety advocacy, LeCun has consistently argued that concerns about superintelligent AI posing an existential threat are "premature," "preposterous," and, in his characteristically direct language, "complete B.S." His position represents a significant counterweight in the AI safety debate: his technical credentials are unimpeachable, yet his conclusions differ dramatically from those of many other leading researchers.

In November 2025, LeCun announced his departure from Meta to found Advanced Machine Intelligence (AMI) Labs, a startup focused on developing "world models"—AI systems that understand the physical world rather than merely predicting text tokens. This move reflects his longstanding argument that large language models (LLMs) represent a "dead end" for achieving human-level intelligence.
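
To make the token-prediction vs. world-model contrast concrete, the sketch below compares the two training objectives in PyTorch. It is an illustrative toy only, not LeCun's or Meta's actual code: all module names, sizes, and data are assumptions for the example. The key difference is where the loss lives: an autoregressive LLM is penalized in token space for mispredicting the next token, while a JEPA-style model is penalized in latent space for mispredicting the representation of the next observation.

```python
# Illustrative sketch only: contrasts a next-token (autoregressive) loss with
# a JEPA-style latent-prediction loss. Names, sizes, and random data are
# assumptions for the example, not an actual FAIR/JEPA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, obs_dim = 1000, 64, 32

# (1) Autoregressive LLM objective: predict the next token in a sequence;
# the loss is computed directly in token space.
class TinyAutoregressiveLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                     # tokens: (batch, seq)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)                   # logits: (batch, seq, vocab)

tokens = torch.randint(0, vocab_size, (2, 16))
lm = TinyAutoregressiveLM()
logits = lm(tokens[:, :-1])                        # predict token t+1 from tokens <= t
ar_loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                          tokens[:, 1:].reshape(-1))

# (2) JEPA-style objective: encode observations into embeddings and predict
# the *representation* of the next observation, so the loss lives in latent
# space rather than in raw pixel/token space.
encoder = nn.Sequential(nn.Linear(obs_dim, d_model), nn.ReLU(),
                        nn.Linear(d_model, d_model))
predictor = nn.Linear(d_model, d_model)

obs_now = torch.randn(2, obs_dim)                  # stand-in for current observation
obs_next = torch.randn(2, obs_dim)                 # stand-in for next observation
z_now = encoder(obs_now)
with torch.no_grad():                              # target branch gets no gradient, a common
    z_next = encoder(obs_next)                     # anti-collapse choice (real systems often use EMA)
jepa_loss = F.mse_loss(predictor(z_now), z_next)

print(f"autoregressive loss: {ar_loss.item():.3f}  JEPA-style loss: {jepa_loss.item():.3f}")
```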

Quick Assessment

| Factor | Assessment | Evidence |
| --- | --- | --- |
| Extinction Risk Estimate | Effectively zero | Public statements: "complete B.S." |
| Timeline to Human-Level AI | 50+ years (via current methods: never) | Argues LLMs cannot reach human-level intelligence |
| Position on AI Regulation | Skeptical; opposes most proposals | Opposed SB 1047 and regulatory "doom talk" |
| Open vs. Closed AI | Strong open-source advocate | Led Meta's open Llama releases |
| Technical Focus | World models, JEPA architecture | Alternative to autoregressive LLMs |
| Influence on Policy | Moderate (counterbalances safety advocates) | High-profile opposition to SB 1047 |


Facts (9)

People
Role / Title: Chief AI Scientist
Employed By: Meta AI (FAIR)

General
Website: http://yann.lecun.com

Biographical
Wikipedia: https://en.wikipedia.org/wiki/Yann_LeCun
Google Scholar: https://scholar.google.com/citations?user=WLN3QrAAAAAJ
Birth Year: 1960
Education: PhD in Computer Science, Université Pierre et Marie Curie (Paris VI, 1987); Diplôme d'Ingénieur, ESIEE Paris
Notable For: Pioneer of convolutional neural networks (CNNs); Chief AI Scientist at Meta; Turing Award 2018; vocal skeptic of AGI existential risk
Social Media: @ylecun