Longterm Wiki
Updated 2026-02-11

Meta AI (FAIR)

Frontier Lab

Comprehensive organizational profile of Meta AI covering $66-72B infrastructure investment (2025), LLaMA model family (1B+ downloads), and transition from FAIR research lab to product-focused GenAI team. Documents significant talent exodus (50%+ of LLaMA authors departed), weak safety culture, and aggressive open-source strategy amid racing dynamics toward 2027 AGI timeline.

Type: Frontier Lab
Founded: 2013
Location: Menlo Park, CA
Related policies: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
People: Mark Zuckerberg, Yann LeCun

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Research Impact | A- | PyTorch powers 63% of training models globally; LLaMA downloaded 1B+ times; SAM, DINO, and DINOv2 are foundational computer vision models |
| Capabilities Level | Frontier | LLaMA 4 Scout/Maverick (April 2025) competitive with GPT-4; 10M context window; Meta Superintelligence Labs targeting AGI by 2027 |
| Open Source Strategy | Industry-leading | Most permissive major lab; open weights for the LLaMA family; PyTorch donated to the Linux Foundation (2022) |
| Safety Approach | Weak | Frontier AI Framework (Feb 2025) addresses CBRN risks but no robust safety culture; Chief AI Scientist dismissed existential risk |
| Capital Investment | Massive | $66-72B CapEx (2025); $115-135B projected (2026); Reality Labs cumulative $70B losses since 2020 |
| Talent Retention | Concerning | 50%+ of original LLaMA authors departed within 6 months; FAIR described as "dying a slow death" by former employees |
| Regulatory Stance | Anti-regulation | Lobbied for a 10-year ban on state AI laws; launched a Super PAC to support tech-friendly candidates |

Recent Developments (2025-2026)

Leadership Changes and Organizational Restructuring

A major management shakeup occurred in late 2025 with the departure of AI pioneer Yann LeCun, who left to found Advanced Machine Intelligence (AMI) Labs. LeCun entered fundraising talks valuing AMI at roughly $3.5 billion, seeking to create "world models": AI systems that understand physics and maintain persistent memory. Alex LeBrun, co-founder and CEO of Nabla, was hired as AMI's CEO.

The research function has been consolidated under Meta Superintelligence Labs, led by Alexandr Wang, former Scale AI CEO.

AI Performance Metrics and User Growth

Daily actives generating media within Meta AI tripled year-over-year in Q4 2025, while feed and video ranking improvements delivered a 7% lift in views of organic content. Meta AI reached over 1 billion monthly active users as of Q1 2025, with approximately 40 million daily users and 185 million weekly users. WhatsApp dominates with 630 million active AI users, representing 63% of all Meta AI interactions.

Next-Generation AI Models

Meta is preparing next-generation "Mango" and "Avocado" AI models, both targeting a first-half 2026 release. Mango is designed for multimodal image and video generation, while Avocado is a text-based LLM aimed at improving coding and reasoning capabilities.

Hardware Strategy: Custom AI Chips

Meta has aggressively expanded its MTIA (Meta Training and Inference Accelerator) roadmap. MTIA v3 "Iris" chips are moving into broad deployment across Meta's data centers, delivering a 40-44% reduction in total cost of ownership compared with GPUs. The roadmap also includes MTIA-2, slated for an H1 2026 debut, and MTIA-3 for H2 2026, built on TSMC's 3nm process with advanced packaging.

Reality Labs Restructuring

In January 2026, Meta cut about 10% of the staff working on metaverse-related VR projects, eliminating roughly 1,000 roles, as Reality Labs logged over $70 billion in cumulative losses since late 2020. The restructuring redirects Reality Labs investment away from VR toward AI and wearable devices, with a focus on Ray-Ban Meta smart glasses development.

Organization Details

| Attribute | Value |
|---|---|
| Founded | December 2013 |
| Headquarters | Menlo Park, California |
| Parent Company | Meta Platforms, Inc. |
| Current Leadership | Robert Fergus (FAIR Director, May 2025); Ahmad Al-Dahle (GenAI); Alexandr Wang & Nat Friedman (Meta Superintelligence Labs) |
| Former Leadership | Yann LeCun (2013-2018, Chief AI Scientist until Nov 2025); Jérôme Pesenti (2018-2022); Joelle Pineau (2023-May 2025) |
| Research Locations | Menlo Park, New York City, Paris, London, Montreal, Seattle, Pittsburgh, Tel Aviv |
| Parent Company Employees | ≈78,800 (Q4 2025) |
| Parent Company Revenue | $200.97B (FY 2025) |
| AI Infrastructure Investment | $66-72B (2025); $115-135B projected (2026) |

Overview

Meta AI, originally founded as Facebook Artificial Intelligence Research (FAIR) in December 2013, is the artificial intelligence research division of Meta Platforms. The lab was established through a partnership between Mark Zuckerberg and Yann LeCun, a Turing Award-winning pioneer in deep learning and convolutional neural networks. LeCun served as Chief AI Scientist until his departure in November 2025 to found Advanced Machine Intelligence (AMI), a startup focused on world models.

Meta AI has made foundational contributions to the AI ecosystem, most notably through PyTorch, which now powers approximately 63% of training models and runs over 5 trillion inferences per day across 50 data centers. The lab's open-source LLaMA model family has been downloaded over one billion times, making it a cornerstone of the open-source AI ecosystem. In September 2022, Meta transferred PyTorch governance to an independent foundation under the Linux Foundation.

However, the organization has faced significant internal challenges. More than half of the 14 authors of the original LLaMA research paper departed within six months of publication, with key researchers joining Anthropic, Google DeepMind, Microsoft AI, and startups like Mistral AI. The lab has been described as "dying a slow death" by former employees, with research increasingly deprioritized in favor of product development through the GenAI team.

Meta's AI safety approach remains notably weaker than competitors. The company's Frontier AI Framework published in February 2025 addresses CBRN risks but received criticism for lacking robust evaluation methodologies. The Future of Life Institute's 2025 Winter AI Safety Index found that Meta, like other major AI companies, had no testable plan for maintaining human control over highly capable AI systems. Chief AI Scientist Yann LeCun publicly characterized existential risk concerns as "complete B.S." throughout his tenure.

Risk Assessment

| Risk Category | Assessment | Evidence | Trend |
|---|---|---|---|
| Safety Research Deprioritization | High | FAIR restructured under GenAI (2024); VP of AI Research Joelle Pineau departed; product teams prioritized | Worsening |
| Racing Dynamics Contribution | Medium-High | $66-72B AI investment (2025); AGI-by-2027 timeline; Meta Superintelligence Labs founded June 2025 | Intensifying |
| Open Weights Proliferation | Medium | LLaMA 4 available as open weights; no effective controls post-release; 1B+ downloads | Stable |
| Safety Culture Gap | High | LeCun dismissed existential risk; Frontier Framework criticized as inadequate; human risk reviewers replaced with AI | Worsening |
| Talent Exodus Impact | Medium-High | 50%+ of original LLaMA authors departed; key researchers joined competitors; institutional knowledge lost | Stabilizing |

History and Evolution

```mermaid
flowchart TD
  FOUND[December 2013: FAIR Founded] --> LECUN[Yann LeCun Named Director]
  LECUN --> PARIS[2015: Paris Lab Opens]
  PARIS --> PYTORCH[2017: PyTorch Released]
  PYTORCH --> PESENTI[2018: Jérôme Pesenti Takes Over as VP]
  PESENTI --> FOUNDATION[Sep 2022: PyTorch to Linux Foundation]
  FOUNDATION --> LLAMA1[Feb 2023: LLaMA Released]
  LLAMA1 --> EXODUS[Sep 2023: Mass Researcher Departures]
  EXODUS --> RESTRUCTURE[Jan 2024: FAIR Restructured Under GenAI]
  RESTRUCTURE --> LLAMA2[Jul 2024: LLaMA 3.1 405B Released]
  LLAMA2 --> FRAMEWORK[Feb 2025: Frontier AI Framework Published]
  FRAMEWORK --> PINEAU[Apr 2025: Joelle Pineau Departs]
  PINEAU --> LLAMA4[Apr 2025: LLaMA 4 Released]
  LLAMA4 --> MSL[Jun 2025: Meta Superintelligence Labs Founded]
  MSL --> LECUNDEP[Nov 2025: LeCun Departs for AMI Startup]
  LECUNDEP --> PROMETHEUS[2026: Prometheus Supercluster Launch]

  style FOUND fill:#e6f3ff
  style EXODUS fill:#ffcccc
  style MSL fill:#ffffcc
  style LECUNDEP fill:#ffcccc
  style PROMETHEUS fill:#ccffcc
```

Founding Era (2013-2017)

FAIR was established in December 2013 when Mark Zuckerberg personally attended the NeurIPS conference to recruit top AI talent. Yann LeCun, then a professor at New York University and pioneer of convolutional neural networks, was named the first director. The lab's founding mission emphasized advancing AI through open research for the benefit of all.

The lab expanded rapidly, opening research sites in Paris (2015), Montreal, and London. FAIR established itself as a center for fundamental research in self-supervised learning, generative adversarial networks, computer vision, and natural language processing. The 2017 release of PyTorch marked a watershed moment, providing an open-source framework that would eventually dominate the deep learning ecosystem.

Growth and Influence (2017-2022)

| Year | Key Development | Impact |
|---|---|---|
| 2017 | PyTorch released | Became the dominant ML framework (63% market share by 2025) |
| 2018 | Jérôme Pesenti becomes VP | Shift toward more applied research |
| 2019 | Detectron2 released | State-of-the-art object detection platform |
| 2020 | COVID-19 forecasting tools | Applied AI to pandemic response |
| 2021 | No Language Left Behind | 200-language translation model |
| 2022 | PyTorch Foundation created | Governance transferred to the Linux Foundation |

During this period, Meta invested heavily in AI infrastructure while maintaining an open research philosophy. PyTorch adoption accelerated, with major systems including Tesla Autopilot, Uber's Pyro, ChatGPT, and Hugging Face Transformers building on the framework.

The LLaMA Era and Organizational Turmoil (2023-2025)

The February 2023 release of LLaMA (Large Language Model Meta AI) represented Meta's entry into the foundation model competition. However, the release triggered significant internal tensions over computing resource allocation and research direction.

| Event | Date | Consequence |
|---|---|---|
| LLaMA 1 release | Feb 2023 | 7B-65B parameter models; weights leaked within a week |
| LLaMA 2 release | Jul 2023 | More permissive licensing; Microsoft partnership |
| Mass departures | Sep 2023 | 50%+ of LLaMA paper authors left; Mistral AI founded by departing researchers |
| FAIR restructuring | Jan 2024 | FAIR consolidated under GenAI team; Chris Cox oversight |
| LLaMA 3 release | Apr 2024 | 8B and 70B models; competitive with GPT-4 |
| LLaMA 3.1 release | Jul 2024 | 405B model; 128K context; multilingual |
| LLaMA 4 release | Apr 2025 | Mixture-of-experts; Scout (10M context) and Maverick models |
| Joelle Pineau departure | May 2025 | VP of AI Research joins Cohere as Chief AI Officer |
| LeCun departure | Nov 2025 | Founds AMI startup focused on world models |

Multimodal AI Capabilities

Video and Audio Generation

Meta has made significant advances in multimodal AI capabilities. Movie Gen enables creation of realistic, personalized HD videos up to 16 seconds at 16 FPS, generates 48kHz audio, and provides video editing capabilities. The system was announced for debut on Instagram in 2025 with multimodal generation capabilities; its current rollout status as of early 2026 is unclear.

The company has also open-sourced Perception Encoder Audiovisual (PE-AV), a unified encoder for audio, video, and text trained on over 100 million videos. PE-AV embeds audio, video, audio-video, and text into a single joint space and serves as the core perception engine behind Meta's SAM Audio model.

Computer Vision Breakthroughs

```mermaid
flowchart LR
  subgraph Detection["Object Detection"]
      DETECTRON[Detectron2]
      MASKRCNN[Mask R-CNN]
      RETINANET[RetinaNet]
  end

  subgraph Segmentation["Segmentation"]
      SAM1[SAM - Apr 2023]
      SAM2[SAM 2 - 2024]
      SAM3[SAM Audio - 2025]
  end

  subgraph SelfSupervised["Self-Supervised Learning"]
      DINO1[DINO]
      DINO2[DINOv2 - Apr 2023]
      DINO3[DINOv3 - 2025]
  end

  SAM1 --> SAM2
  SAM2 --> SAM3
  DINO1 --> DINO2
  DINO2 --> DINO3

  DINO2 -.->|Feature extraction| SAM2

  style SAM1 fill:#ccffcc
  style SAM2 fill:#ccffcc
  style DINO2 fill:#ccffcc
```

| Model | Release | Achievement | Recognition |
|---|---|---|---|
| Segment Anything (SAM) | Apr 2023 | Zero-shot segmentation from prompts; 1B+ image-mask dataset | ICCV 2023 Best Paper Honorable Mention |
| SAM 2 | 2024 | First unified model for image and video segmentation | ICLR 2025 Best Paper Honorable Mention |
| DINOv2 | Apr 2023 | Self-supervised learning without labels; trained on 142M diverse images | Universal vision backbone |
| Detectron2 | 2019 | Modular object detection platform | Industry standard |

Consumer AI Products and Partnerships

Ray-Ban Meta Smart Glasses

Meta's partnership with EssilorLuxottica has proven remarkably successful. Ray-Ban Meta glasses revenue tripled year-over-year, contributing to EssilorLuxottica's €14.02 billion first-half sales. EssilorLuxottica is expanding smart glasses production to 10 million annual units by end of 2025, positioning the glasses as potential smartphone successors.

The Ray-Ban Meta Glasses evolved into "AI-First" devices with real-time translation and object recognition capabilities. New Oakley Meta smart glasses were launched in June 2025.

Meta AI Assistant Integration

Meta has begun testing a Meta AI business assistant for advertisers while expanding consumer AI assistant integration across Facebook, Instagram, and WhatsApp. The assistant has reached over 1 billion monthly active users, with WhatsApp the largest platform at 630 million AI users.

International Expansion and Regulatory Compliance

European Launch

Meta AI launched across all 27 EU member states, plus 14 additional European countries and 21 overseas territories. However, the EU version offers a limited feature set due to privacy concerns and GDPR compliance, and at launch it had not been trained on any European user data.

As of May 27, 2025, Meta began using some personal data of European users to train its AI systems, following an initial pause and discussions with the Irish Data Protection Commission. GDPR obligations required Meta to negotiate limits on the scope of data it could use.

Meta Superintelligence Labs and Infrastructure

Prometheus Supercluster

Prometheus is a 1-gigawatt facility slated to begin operations in 2026, part of Meta's roughly $100 billion AI infrastructure investment. It will operate under Meta Superintelligence Labs, led by Alexandr Wang (former Scale AI CEO) and Nat Friedman (ex-GitHub chief).

A larger Hyperion facility is designed to scale up to 5 gigawatts across multiple phases, representing one of the most ambitious AI infrastructure projects globally.

Safety Approach and Evaluation

Frontier AI Framework Assessment

The Future of Life Institute's 2025 Winter AI Safety Index gave Meta a C+ grade reflecting mixed performance across safety domains. While Meta has formalized and published its frontier AI safety framework with clear thresholds and risk modeling mechanisms, the evaluation found significant gaps in safety culture and implementation.

Meta continues red-teaming in areas of public safety and critical infrastructure, evaluating models against risks including cybersecurity, catastrophic risks, and child safety. The company conducts pre-deployment risk assessments, safety evaluations and extensive red teaming, though critics argue these processes lack the rigor of competitors like Anthropic.

Safety Framework Limitations

| Element | Meta | OpenAI | Anthropic |
|---|---|---|---|
| Published | Feb 2025 | Beta 2023; v2 Apr 2025 | Sep 2023; updated May 2025 |
| Risk Thresholds | Moderate/High/Critical | Medium/High/Critical | ASL-2/3/4 |
| CBRN Coverage | Yes | Yes | Yes (ASL-3 active) |
| Autonomous AI Risks | Limited | Yes | Yes |
| External Audit | No | Limited | Third-party review |
| Deployment Decisions | Internal | Internal | Internal + board |

Open Source Philosophy and Ecosystem

Strategic Rationale

Meta's open-source AI strategy differs fundamentally from competitors like OpenAI and Anthropic. As Mark Zuckerberg articulated in July 2024:

"A key difference between Meta and closed model providers is that selling access to AI models isn't our business model."

| Factor | Meta's Position | Closed-Lab Position (OpenAI/Anthropic) |
|---|---|---|
| Business Model | Monetize applications (ads, products) | Monetize model access (API, subscriptions) |
| Competitive Moat | Ecosystem control and standardization | Capability lead and proprietary access |
| Safety Approach | Distributed defense; community refinement | Controlled deployment; centralized monitoring |
| Innovation Model | Widespread iteration and improvement | Internal development with staged release |

PyTorch Ecosystem Success

| Component | Description | Adoption |
|---|---|---|
| PyTorch Core | Dynamic computational graphs, Python-first design | 63% of training models; 70% of AI research |
| TorchVision | Computer vision models and datasets | Standard for CV research |
| TorchText | NLP data processing and models | Widely used in NLP pipelines |
| PyTorch3D | 3D computer vision components | Powers Mesh R-CNN and related research |

The PyTorch Foundation operates with governance from AMD, AWS, Google Cloud, Meta, Microsoft Azure, and Nvidia, ensuring long-term sustainability independent of Meta's strategic decisions.
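The "dynamic computational graphs" credited to PyTorch Core above refer to its define-by-run execution model: the autograd graph is recorded as ordinary Python executes, so native control flow can change the graph from one input to the next. A minimal sketch (variable names are illustrative, not from any Meta codebase):

```python
import torch

# Define-by-run: the graph is built while this code executes,
# so a plain Python `if` decides which operations get recorded.
x = torch.tensor(3.0, requires_grad=True)

y = x * x              # graph node recorded at call time
if y.item() > 4:       # ordinary control flow shapes the graph
    z = y * 2          # this branch is taken for x = 3 (y = 9)
else:
    z = y + 1

z.backward()           # reverse-mode autodiff over the recorded graph
print(x.grad)          # dz/dx = d(2x^2)/dx = 4x = 12 on the branch taken
```

Static-graph frameworks must express such branches inside the graph itself; this define-by-run flexibility is one commonly cited reason PyTorch became the default for research code.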

LLaMA Ecosystem Development

Meta held its first-ever developer conference for LLaMA on April 29, 2025, dubbed "LlamaCon." The event announced the billion download milestone and introduced the "Llama for Startups" support program with Meta team access and funding.

Financial Position and Investment

AI Infrastructure Spending

| Year | Capital Expenditure | Key Investments |
|---|---|---|
| 2024 | $39.2B | Data centers; GPU clusters |
| 2025 | $66-72B | 1 GW AI capacity; expanded data centers |
| 2026 (projected) | $115-135B | Meta Superintelligence Labs; Prometheus supercluster |

The Hyperion data center project, a $27B partnership with Blue Owl Capital, represents one of the largest single AI infrastructure investments.

MTIA Custom Chip Development

Meta's custom chip strategy has accelerated significantly:

| Generation | Timeline | Features | Impact |
|---|---|---|---|
| MTIA v3 "Iris" | 2026 deployment | Broad data center deployment | 40-44% cost reduction vs. GPUs |
| MTIA v4 "Santa Barbara" | 2026-2027 | Enhanced performance | Roadmap component |
| MTIA v5 "Olympus" | 2027-2028 | Advanced capabilities | Roadmap component |
| MTIA v6 "Universal Core" | 2028+ | Next-generation architecture | Roadmap component |

Comparative Analysis

vs. Emerging Competitors

Meta faces increasing competition from newer entrants:

| Dimension | Meta AI | OpenAI | Anthropic | xAI | Character.AI |
|---|---|---|---|---|---|
| Open Source | High (LLaMA) | None (closed) | None (closed) | Limited | None |
| Safety Priority | Low | Medium | High | Low | Medium |
| Existential Risk View | Dismissive | Concerned | Very Concerned | Dismissive | Neutral |
| AGI Timeline | 2027 | 2025-2027 | Uncertain | 2025-2026 | N/A |
| Primary Market | Social/Ads | Enterprise API | Enterprise Safety | Consumer Chat | Consumer Entertainment |

Safety Culture Comparison

The departure of Yann LeCun and his public dismissal of existential risk highlight Meta's weaker safety culture compared with safety-focused labs. LeCun estimated P(doom) at effectively zero, placing him at the extreme optimist end of the expert distribution.

Key Uncertainties and Future Scenarios

Technical Questions

| Question | Optimistic View | Pessimistic View | Resolution Timeline |
|---|---|---|---|
| Can LLMs achieve AGI? | Scaling plus new architectures suffice | Fundamental limitations remain | 2025-2027 |
| Will world models succeed? | LeCun's AMI validates the approach | Distraction from scaling laws | 2026-2028 |
| Can safety be iterated post-release? | Community patches and fine-tuning work | Unrecoverable once released | Per release |

Organizational Questions

| Question | Current Indicator | Concern Level |
|---|---|---|
| Will MSL models remain open? | Zuckerberg has indicated the most powerful models may be kept closed | High |
| Can FAIR recover from the talent exodus? | New leadership appointed | Medium |
| Will safety culture improve? | Human risk reviewers replaced with AI | High |

Scenario Analysis

Optimistic Scenario (25-30% probability):

  • MSL achieves AGI safely with appropriate safeguards developed in parallel
  • Open-source approach enables broader safety research and distributed defense
  • MTIA chips provide competitive advantage while reducing costs
  • Ray-Ban partnership validates AR/AI integration model
  • New leadership rebuilds research culture

Pessimistic Scenario (30-40% probability):

  • Safety culture continues deteriorating as racing dynamics intensify
  • Open weights enable bad actors to remove safeguards from frontier models
  • AGI 2027 timeline proves accurate but without adequate safety measures
  • Talent exodus accelerates; institutional knowledge permanently lost
  • Custom chips fail to compete with Nvidia; infrastructure advantage erodes

Central Scenario (30-40% probability):

  • Meta achieves narrow superintelligence in specific domains
  • Open weights continue for non-frontier models; most capable kept closed
  • Reality Labs pivot to AI-first wearables proves moderately successful
  • Remains competitive but not dominant in AGI race
  • Safety practices improve modestly under regulatory pressure

Sources and Citations

References

This Goodwin Law publication analyzes the legal implications of Meta's use of European user data to train its AI systems, examining compliance with GDPR and related EU data protection frameworks. It likely covers regulatory responses, legitimate interest claims, and the intersection of AI training practices with European privacy law.

Meta AI has open-sourced PE-AV (Perception Encoder Audiovisual), a multimodal encoder that jointly processes audio and visual information, powering their SAM-Audio system and enabling large-scale audiovisual retrieval. The model represents an extension of Meta's Perception Encoder family into the audio-visual domain, designed for robust cross-modal understanding. This release contributes to the open-source multimodal AI ecosystem with implications for how foundation models handle combined sensory inputs.

Meta reportedly laid off approximately 10% of Reality Labs employees as part of a strategic restructuring, signaling a reduced focus on VR hardware and a pivot toward AI development and wearable technologies. This shift reflects broader industry trends of companies reallocating resources from metaverse/VR initiatives toward generative AI capabilities. The move has implications for understanding how major tech firms are prioritizing AI investment over earlier technology bets.

Bloomberg reports that Essilor, the maker of Ray-Ban frames, is positioning Meta's AI-powered smart glasses as a potential successor to smartphones. The article covers industry claims about the trajectory of wearable AI devices and their mainstream adoption potential.

VentureBeat reports on Meta's Movie Gen, an AI video generation model announced in October 2024, capable of creating and editing videos from text prompts. The model is demonstrated by Zuckerberg using it to transform real footage on Instagram, with broader rollout planned for 2025. This positions Meta as a competitor in the growing AI video generation space alongside OpenAI, Google, and others.

Mark Zuckerberg published a manifesto alongside Meta's Llama 3.1 release arguing that open-source AI is the path forward, framing it as a democratizing force against concentrated AI power. The piece captures the intensifying debate between open-source AI advocates and those who favor closed, monitored systems, with significant implications for AI safety and governance.

TrendForce reports that Meta's MTIA-3 AI inference chip is slated for a H2 2026 debut, built on TSMC's 3nm process with GUC handling back-end packaging. The chip features a more complex design than MTIA-2, including extra I/O and an additional SoC, limiting CoWoS packaging yield. This is part of Meta's broader $115–135B 2026 capital spending push into in-house AI ASICs.

This resource appears to be a statistics page about Meta AI user data, but the content is inaccessible due to a bot verification challenge. No substantive information about Meta AI usage metrics could be retrieved.

Meta laid off over 1,000 employees (~10%) from its Reality Labs VR division in January 2026, shutting down multiple VR game studios. The move signals a major strategic retreat from metaverse ambitions just four years after Facebook rebranded to Meta, as Zuckerberg redirects resources toward AI development and talent acquisition.

Meta's official page outlining their vision and ambitions toward developing superintelligent AI systems. The page signals Meta's strategic commitment to pursuing advanced AI capabilities, positioning the company alongside other major labs in the race toward superintelligence. Limited content is available, but the URL itself reflects a significant public-facing declaration of intent from a major AI developer.

Yann LeCun, Meta's Chief AI Scientist, has confirmed he is launching a new startup focused on world models, reportedly seeking a $5 billion valuation. The venture represents LeCun's vision for an alternative path to AI beyond large language models, centered on building systems that can reason about and predict the physical world. This news highlights continued divergence in approaches to advanced AI development among leading researchers.

Meta announced the creation of the PyTorch Foundation under the Linux Foundation umbrella in September 2022, transitioning PyTorch's governance from Meta to a neutral, multi-stakeholder body. The foundation aims to foster open-source AI development and broader community collaboration across industry and academia. Founding members include AMD, Amazon, Google, Meta, Microsoft, and Nvidia.

Finterra Deep Dive - Meta 2026 (markets.chroniclejournal.com)

A financial analysis piece examining Meta's massive 2026 AI investment strategy, framing the company's ~$100B AI spending as a bet on achieving superintelligence-level capabilities. The article explores the business implications of Meta's AI infrastructure buildout and competitive positioning in the emerging superintelligence era.

This Fortune article covers Yann LeCun's departure from Meta to found AMI Labs, an AI startup that has achieved a significant valuation. The piece details LeCun's transition from his Chief AI Scientist role at Meta and the funding/valuation details of his new venture.

This resource appears to be a Meta corporate report from 2026 detailing how AI is driving performance across their platforms and products. As the content was not accessible, the full scope of claims, metrics, or safety-relevant disclosures cannot be verified. It likely covers Meta's AI deployment outcomes, business metrics, and potentially responsible AI commitments.

TokenRing - MTIA Iris Rollout (markets.financialcontent.com)

This article covers Meta's 2026 rollout of its second-generation custom AI chip, MTIA Iris (Meta Training and Inference Accelerator), as part of a broader strategy to reduce dependence on third-party silicon and build internal AI compute infrastructure. The piece discusses Meta's silicon sovereignty ambitions and the competitive implications of custom chip development for large-scale AI deployment.

Reports on Meta's plans to release two next-generation AI models codenamed 'Mango' and 'Avocado' in 2026, representing significant capability upgrades in Meta's AI development roadmap. These models are expected to push the frontier of large language model capabilities, continuing Meta's open-source AI strategy.

Meta is investing approximately $100 billion to build a massive AI supercluster called Prometheus, signaling an unprecedented escalation in compute infrastructure spending by major AI labs. This initiative reflects the intensifying race among tech giants to secure the computational resources needed for frontier AI development. The scale of investment underscores growing concerns about compute concentration and its implications for AI governance.

Mark Zuckerberg announced the creation of Meta Superintelligence Labs, a new organizational unit within Meta focused on achieving superintelligence. The memo signals Meta's explicit strategic pivot toward AGI/superintelligence development, representing a major escalation in the AI capabilities race among frontier labs.

This Fox News article covers Meta's construction of massive AI supercomputing clusters, positioning the company at the forefront of AI infrastructure investment. It highlights the scale of compute resources being deployed and Meta's strategic ambitions in AI development.

Meta celebrates the 10-year anniversary of its Fundamental AI Research (FAIR) lab, highlighting its history of open science, major research contributions, and impact on the AI field. The post reflects on FAIR's founding principles around open collaboration and publishing, and its role in advancing AI capabilities and research culture. It serves as both a retrospective and a statement of Meta's continued commitment to open AI research.

EssilorLuxottica reported that revenue from Ray-Ban Meta smart glasses tripled, signaling strong consumer adoption of AI-integrated wearable technology. This growth reflects increasing mainstream interest in AI-powered augmented reality and always-on computing devices. The commercial success of these glasses marks a significant milestone in the deployment of AI capabilities in consumer hardware.

★★★☆☆
Accurate · 100% · Feb 22, 2026
Revenue from sales of Ray-Ban Meta smart glasses more than tripled year over year, EssilorLuxottica revealed Monday as part of the company's most recent earnings report.

Meta outlines its official approach to developing frontier AI responsibly, covering safety research priorities, red-teaming practices, model evaluations, and governance frameworks. The document describes Meta's commitments to open-source development alongside safety measures, and its stance on balancing capability advancement with risk mitigation. It represents Meta's public positioning on responsible AI development as it pursues large-scale frontier models.


This news article covers Meta AI's expansion into the European Union market, detailing the rollout of Meta's AI assistant across its platforms in EU countries. The launch had previously been delayed due to regulatory concerns around data privacy and compliance with EU law, particularly GDPR.


A 2025 study (the AI Safety Index) assesses the state of AI safety regulation and corporate practices, finding that AI systems face less regulatory oversight than many everyday products. The report highlights the accelerating race toward superintelligence by major tech firms and evaluates how inadequately current governance frameworks address the associated risks.

Not verifiable · 0% · Feb 22, 2026
“AI is also less regulated than sandwiches [in the United States], and there is continued lobbying against binding safety standards in government,” he said.

A Fortune investigation into Meta's Fundamental AI Research (FAIR) lab, examining researcher departures, internal tensions, and questions about the lab's direction and relevance amid Meta's broader AI ambitions. The piece explores whether FAIR can maintain its academic research identity under commercial pressures and Yann LeCun's leadership philosophy.

★★★☆☆
27. Yann LeCun - Wikipedia · Wikipedia · Reference

Wikipedia biography of Yann LeCun, Chief AI Scientist at Meta and Turing Award winner, covering his foundational contributions to deep learning, convolutional neural networks, and his prominent public skepticism toward AGI existential risk narratives. LeCun is a significant voice arguing that current AI architectures are insufficient for human-level intelligence and that AI safety concerns are overstated.

★★★☆☆

This Meta blog post describes how PyTorch serves as the foundational deep learning framework enabling both AI research and large-scale production deployment across Meta's products. It covers PyTorch's design philosophy, its role in bridging research and production workflows, and how it supports Meta's AI infrastructure at scale.

Meta announces Llama 3, their most capable openly available large language model family, featuring 8B and 70B parameter models with improved reasoning, coding, and instruction-following capabilities. The release includes details on training data, architecture improvements, and safety measures implemented before public release. Llama 3 represents a significant milestone in open-weight frontier model development.

★★★★☆

Yann LeCun, AI pioneer and Meta researcher, argues that concerns about AI posing an existential threat to humanity are unfounded, contending that current LLMs lack fundamental capabilities like reasoning, planning, persistent memory, and physical-world understanding. He maintains that LLMs will not lead to AGI and that entirely new approaches are needed for genuine machine intelligence.

★★★☆☆
Not verifiable · 0% · Feb 22, 2026
He elaborated on his opinions in an interview with The Wall Street Journal, where he replied to a question about AI becoming smart enough to pose a threat to humanity by saying, “You’re going to have to pardon my French, but that’s complete B.S.”

Meta's Llama is a family of open-source large language models including Llama 3 and Llama 4 variants, offering multimodal capabilities, extended context windows, and various model sizes for deployment across diverse use cases. The latest Llama 4 models feature native multimodality with early fusion architecture, supporting up to 10M token context windows. Models are freely downloadable and fine-tunable, positioning Llama as a major open-source alternative to proprietary AI systems.

★★★★☆
32. AI Safety Index Winter 2025 · Future of Life Institute

The Future of Life Institute evaluated eight major AI companies across 35 safety indicators, finding widespread deficiencies in risk management and existential safety practices. Even top performers Anthropic and OpenAI received only marginal passing grades, highlighting systemic gaps across the industry in preparedness for advanced AI risks.

★★★☆☆

Meta's blog post introduces Llama Guard 3, a safety classifier model designed to detect unsafe content in LLM inputs and outputs, released alongside Llama 3.1. It outlines Meta's responsible deployment approach including red-teaming, safety evaluations, and open-source safety tooling for the broader AI ecosystem.

★★★★☆

This blog post covers Meta's LlamaCon 2025 conference, highlighting announcements around the Llama open-source AI model ecosystem and Meta's strategic vision for open-source AI development. It discusses new model releases, developer tools, and Meta's positioning in the competitive AI landscape.

★★☆☆☆

This article examines Meta's massive $27 billion investment in AI compute infrastructure and how it is reshaping Wall Street's investment strategies around AI hardware and data centers. It explores how large-scale compute spending by tech giants is creating new financial instruments and investment opportunities. The piece highlights the broader trend of AI infrastructure becoming a major asset class.

★★★☆☆
36. Roman Yampolskiy · arXiv · Severin Field · 2025 · Paper

This paper presents a survey of 111 AI experts examining their familiarity with AI safety concepts and attitudes toward existential risks from AGI. The research reveals that experts cluster into two distinct viewpoints: those who see AI as a controllable tool versus those who view it as an uncontrollable agent, with significant knowledge gaps in fundamental safety concepts. While 78% of experts agreed that technical AI researchers should be concerned about catastrophic risks, only 21% were familiar with 'instrumental convergence,' a core AI safety concept. The findings suggest that experts least concerned about AI safety are also least familiar with key safety concepts, indicating that effective communication requires establishing clear conceptual foundations.

★★★☆☆
Citation verification: 2 verified, 23 unchecked of 30 total

Structured Data

29 facts
Revenue · $200,970,000,000 · as of 2025
Valuation · $3,500,000,000 · as of Dec 2025
Headcount · 78,800 · as of Dec 2025
Founded Date · Dec 2013

All Facts (29)
Organization
Property · Value · As Of
Legal Structure · Division of Meta Platforms, Inc.
Headquarters · Menlo Park, CA
Founded Date · Dec 2013
Financial
Property · Value · As Of
Annual Cash Burn · $70,000,000,000 · Jan 2026
Cumulative Losses · $70 billion · Jan 2026
Infrastructure Investment · $115,000,000,000–$135,000,000,000 · 2026
Earlier values: 2026 · $125 billion; Oct 2025 · $27 billion; 2025 · $66–72 billion; 2025 · $69 billion
Valuation · $3,500,000,000 · Dec 2025
Headcount · 78,800 · Dec 2025
Earlier value: Dec 2024 · 1,500
Revenue · $200,970,000,000 · 2025
Market Share · 63% · 2025
Product
Property · Value · As Of
User Count · 1,000,000,000 · Apr 2025
Earlier values: Mar 2025 · 1 billion; Mar 2025 · 630 million
People
Property · Value · As Of
Founded By · Yann LeCun
General
Property · Value · As Of
Website · https://ai.meta.com/
Other
Property · Value · As Of
Compute Cost · 40–44 · 2026
Parent Headcount · 78,800 · Dec 2025
Parent Revenue · $201.0 billion · 2025
Earlier value: 2024 · $164.5 billion

Related Wiki Pages

Top Related Pages

Analysis

Projecting Compute Spending · US Government Authority Over Commercial AI Infrastructure

Other

Mark Zuckerberg · Llama · Llama 2 · Llama 3 · Llama 3.1 · Llama 3.3

Organizations

Anthropic · xAI · Microsoft · AI Revenue Sources

Risks

AI Development Racing Dynamics · Financial Stability Risks from AI Capital Expenditure · AI Proliferation

Key Debates

The Case Against AI Existential Risk · Open vs Closed Source AI · Is Scaling All You Need?

Concepts

Frontier AI Comparison · Labs Overview