Open vs Closed Source AI

Crux

Summary: Comprehensive analysis of the open vs closed source AI debate, documenting that the open model performance gap narrowed from 8% to 1.7% in 2024, that Llama models had surpassed 1.2 billion downloads by April 2025, and that DeepSeek R1 demonstrated a 90-95% training cost reduction. Research shows fine-tuning can remove safety guardrails in hours, while the NTIA's 2024 report found insufficient evidence to restrict open weights and the EU AI Act exempts non-systemic open models below 10²⁵ FLOPs.

Question: Should frontier AI model weights be released publicly?
Stakes: Balance between safety, innovation, and democratic access
Current trend: Major labs increasingly keeping models closed

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Market Trajectory | Open models closing gap rapidly | Performance difference narrowed from 8% to 1.7% in one year (Stanford HAI 2025) |
| Adoption Scale | 1.2B+ Llama downloads by April 2025 | Meta reports 53% growth in Q1 2025; 50%+ of Fortune 500 experimenting |
| Enterprise Share | Open source declining slightly | 11-13% of enterprise LLM workloads use open models, down from 19% in 2024 (Menlo Ventures) |
| Cost Efficiency | Open dramatically cheaper | DeepSeek R1 runs 20-50x cheaper than comparable closed models; 90-95% training cost reduction |
| Safety Guardrails | Significant vulnerability | Fine-tuning can remove safety training in hours; "uncensored" variants appear within days of release |
| Regulatory Status | Cautiously permissive | NTIA 2024: insufficient evidence to restrict; EU AI Act: exemptions for non-systemic open models |
| Geopolitical Impact | Complicates Western restraint | DeepSeek demonstrates frontier capabilities from China; unilateral restrictions less effective |

One of the most heated debates in AI: Should powerful AI models be released as open source (weights publicly available), or kept closed to prevent misuse? The debate intensified following Meta's Llama releases, Mistral's emergence as a European open-weights champion, and DeepSeek's 2025 disruption demonstrating Chinese open models at the frontier.

Key Links

| Source | Link |
|---|---|
| Official Website | techtarget.com |
| Wikipedia | en.wikipedia.org |

Key Arguments Summary

| Argument | For Open Weights | For Closed Models |
|---|---|---|
| Safety | Enables external scrutiny and vulnerability discovery; "security through transparency" parallels open-source software | Prevents removal of safety guardrails; maintains ability to revoke access; enables monitoring for misuse |
| Innovation | Accelerates research through global collaboration; enables startups and academics to build on frontier work | Controlled deployment allows careful capability assessment before wider release |
| Security | Distributed development reduces single points of failure | Prevents adversaries from accessing and weaponizing capabilities |
| Power Concentration | Prevents AI monopoly by a few corporations; LeCun argues concentration is "a much bigger danger than everything else" | Responsible actors can implement safety measures that open release cannot |
| Accountability | Public weights enable third-party auditing and bias detection | Clear liability chain; developers can update, patch, and control deployment |
| Misuse Potential | Knowledge democratization; misuse happens regardless of openness | RAND research shows weights are theft targets; "uncensored" derivatives appear within days of release |


Stakeholder Positions

| Stakeholder | Position | Key Rationale | Evidence |
|---|---|---|---|
| Meta (Yann LeCun) | Strong open | Power concentration is the real existential risk; open source enables safety through scrutiny | Released Llama 2, Llama 3 (8B-405B parameters) |
| Anthropic (Dario Amodei) | Cautious closed | Irreversibility of release; responsible scaling requires control | Claude models closed; Responsible Scaling Policy |
| OpenAI (Sam Altman) | Closed (shifted) | Safety concerns grew with capabilities; GPT-4 too capable for open release | Shifted from GPT-2 open to GPT-4 closed |
| Mistral AI | Strong open | European AI sovereignty; innovation through openness | Mistral 7B/8x7B/Large released with minimal restrictions |
| DeepSeek (China) | Strategic open | Demonstrates Chinese frontier capabilities; signed AI Safety Commitments alongside 16 Chinese firms | DeepSeek-R1 open weights, though with documented censorship and security issues |
| U.S. Government (NTIA) | Cautiously pro-open | 2024 report found insufficient evidence to restrict open weights; recommends monitoring | Called for research and risk indicators, not immediate restrictions |
| EU Regulators | Risk-based | AI Act applies stricter rules to "foundation models" including open ones | Foundation models face transparency and safety testing requirements |
| Eliezer Yudkowsky | Strongly closed | Open-sourcing powerful AI is existential risk | Public advocacy against any frontier model release |

What's At Stake

Open weights (often called "open source" though technically distinct) means releasing model weights so anyone can download, modify, and run the model locally. Meta clarified in 2024 that Llama models are "open weight" rather than fully open source, as the training data and code remain proprietary. Examples include Llama 2/3, Mistral, Falcon, and DeepSeek-R1. As of April 2025, Llama models alone had been downloaded over 1.2 billion times, with 20,000+ derivative models published on Hugging Face.

Once released, weights cannot be recalled or controlled, and anyone can fine-tune for any purpose—including removing safety features. Research shows that "jailbreak-tuning" can remove essentially all safety training within hours using modest compute (FAR.AI 2024). Within days of Meta releasing Llama 2, "uncensored" versions appeared on Hugging Face with safety guardrails stripped away.
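
To make concrete what "download, modify, and run the model locally" involves, here is a minimal sketch using the Hugging Face transformers library; the checkpoint name is illustrative, and real use additionally requires accepting the model's license and having sufficient local GPU memory.

```python
# Minimal sketch: running an open-weights model locally with Hugging Face transformers.
# The checkpoint name is illustrative; any open-weights model on the Hub works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative open-weights checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Once the weights are on local disk there is no API key, no usage monitoring,
# and no mechanism for the publisher to revoke access or push updates.
inputs = tokenizer("Summarize the open vs closed weights debate.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```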

Closed source means keeping weights proprietary, providing access only via API. Examples include GPT-4, Claude, and Gemini. Labs maintain control and can monitor usage patterns, update models, revoke access for policy violations, and refuse harmful requests. However, this concentrates power in a small number of corporations.
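
By contrast, API-only access means every request passes through infrastructure the provider controls, which is what makes monitoring, refusal, and revocation possible. A minimal sketch using the OpenAI Python client (the model name is illustrative):

```python
# Minimal sketch: closed-model access via a provider API.
# Every call goes through the provider's servers, which can log usage, apply
# policy filters, rate-limit, update the model, or revoke the API key entirely.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative closed model, available only via API
    messages=[{"role": "user", "content": "Summarize the open vs closed weights debate."}],
)
print(response.choices[0].message.content)
```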

Current Landscape (2024-2025)

The landscape shifted dramatically with DeepSeek's January 2025 release of R1, demonstrating that Chinese labs could produce frontier-competitive open models. Before DeepSeek, Meta's Llama family dominated the open-weights ecosystem, with models ranging from 7B to 405B parameters.

Market Statistics (2024-2025)

| Metric | Value | Source | Trend |
|---|---|---|---|
| Open source model downloads | 30,000-60,000 new models/month on Hugging Face | Red Hat | Exponential growth |
| Llama cumulative downloads | 1.2 billion (April 2025) | Meta | +53% in Q1 2025 |
| Enterprise open source share | 11-13% of LLM workloads | Menlo Ventures | Down from 19% in 2024 |
| Performance gap (open vs closed) | 1.7% on Chatbot Arena | Stanford HAI | Narrowed from 8% in Jan 2024 |
| Global AI spending | $17B in 2025 | Menlo Ventures | 3.2x YoY increase from $11.5B |
| DeepSeek R1 training cost | Under $1 million | World Economic Forum | 90-95% below Western frontier models |
| Fortune 500 Llama adoption | 50%+ experimenting | Meta | Including Spotify, AT&T, DoorDash |

Open vs Closed Models

| Name | Openness | Access | Safety | Customization | Cost | Control |
|---|---|---|---|---|---|---|
| GPT-4/4o | Closed | API only | Strong guardrails, monitored | Limited fine-tuning via API | Pay per token | OpenAI maintains full control |
| Claude 3/3.5 | Closed | API only | Constitutional AI, monitored | Limited | Pay per token | Anthropic maintains full control |
| Llama 3.1 405B | Open weights | Download and run locally | Responsible Use Guide (often ignored) | Full fine-tuning possible | Free (need substantial compute) | No control after release |
| Mistral Large 2 | Open weights | Download and run locally | Transparent "no moderation mechanism" | Full fine-tuning possible | Free (need own compute) | No control after release |
| DeepSeek-R1 | Open weights | Download and run locally | Censors Chinese-sensitive topics; security vulnerabilities on political prompts | Full fine-tuning possible | Free (need own compute) | Subject to Chinese regulatory environment |

Key Positions

Positions on Open Source AI

Where different actors stand on releasing model weights

Yann LeCun (Meta): strong-open

Argues open source is essential for safety, innovation, and preventing corporate monopoly. Concentration of AI power is the real risk.

Evidence: Llama releases; public statements
Key claim: Open source makes AI safer, not more dangerous
Confidence: high

Dario Amodei (Anthropic): cautious-closed

Concerned about irreversibility and misuse. Believes responsible scaling means keeping frontier models closed.

Evidence: Claude is closed; Responsible Scaling Policy
Key claim: Can't unring the bell once weights are public
Confidence: high

Sam Altman (OpenAI): closed

Shifted from open (GPT-2) to closed (GPT-4) as capabilities increased. Cites safety concerns.

Evidence: GPT-4 closed; public statements
Key claim: GPT-4 is too dangerous to release openly
Confidence: high

Demis Hassabis (Google DeepMind): mostly-closed

Gemini models are closed, but Google has a history of open research. Pragmatic about risks.

Evidence: Gemini closed; some open research
Key claim: Case-by-case based on capabilities
Confidence: medium

Stability AI: strong-open

Business model and philosophy built on open source. Argues openness is an ethical imperative.

Evidence: Stable Diffusion; open models
Key claim: AI should be accessible to everyone
Confidence: high

Eliezer Yudkowsky: strongly-closed

Believes open-sourcing powerful AI is one of the most dangerous things we could do.

Evidence: Public writing
Key claim: Open-sourcing AGI would be suicide
Confidence: high

Safety Guardrail Effectiveness

Research on open model safety reveals significant challenges in maintaining guardrails once weights are released.

| Factor | Finding | Implication | Source |
|---|---|---|---|
| Guardrail bypass techniques | Emoji smuggling achieves 100% evasion against some guardrails | Even production-grade defenses can be bypassed | arXiv |
| Fine-tuning vulnerability | "Jailbreak-tuning" enables removal of all safety training | Every fine-tunable model has an "evil twin" potential | FAR.AI |
| Open model guardrail scores | Best open model (Phi-4): 84/100; worst (Gemma-3): 57/100 | Wide variance in baseline safety | ADL |
| Larger models more vulnerable | Tested 23 LLMs: larger models more susceptible to poisoning | Capability-safety tradeoff worsens at scale | FAR.AI |
| Time to "uncensored" variants | Hours to days after release | Community rapidly removes restrictions | Hugging Face observations |
| Multilingual guardrails | OpenGuardrails supports 119 languages | Safety coverage possible but not universal | Help Net Security |
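
Part of what the table reflects is architectural: for an API-served model, the provider can enforce a moderation layer on its own servers before any output reaches the user, whereas for downloaded weights any equivalent wrapper runs on the user's machine and can simply be removed. A minimal sketch of provider-side enforcement, assuming a hypothetical classify_request policy check:

```python
# Minimal sketch of a provider-side guardrail around an API-served model.
# classify_request is a hypothetical placeholder for a real moderation model;
# the point is architectural: this check runs on the provider's infrastructure,
# so API users cannot remove it, while users of downloaded weights can strip
# any equivalent wrapper from their local copy.
from typing import Callable

def classify_request(prompt: str) -> bool:
    """Hypothetical policy check; returns True if the prompt is disallowed."""
    disallowed_markers = ["example-disallowed-topic"]  # placeholder policy list
    return any(marker in prompt.lower() for marker in disallowed_markers)

def serve_completion(prompt: str, generate: Callable[[str], str]) -> str:
    """Run generation only if the provider-side policy check passes."""
    if classify_request(prompt):
        return "Request refused by provider policy."
    return generate(prompt)

# Usage with any backend generation function:
# reply = serve_completion(user_prompt, generate=lambda p: call_model_api(p))
```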

Key Cruxes

Key Questions

  • Can safety guardrails be made robust to fine-tuning?
    No - always removable

    Research shows safety training is superficial and easily overridden with fine-tuning.

    Open source will always enable misuse

    Confidence: high
    Yes - with better techniques

    We can make safety intrinsic to the model, not just surface-level training.

    Open source can be safe

    Confidence: low
  • Will open models leak or be recreated anyway?
    Yes - inevitable

    Too many actors with capability and motivation. Secrets don't keep in ML.

    Better to shape open ecosystem than fight it

    Confidence: medium
    No - frontier models stay ahead

    Leading labs maintain advantage. Not everything leaks.

    Keeping closed is feasible and valuable

    Confidence: medium
  • At what capability level does open source become too dangerous?
    Already crossed it

    Current models can assist in serious harms if unconstrained.

    Should stop open sourcing now

    Confidence: low
    Not even close yet

    Current models not capable enough for catastrophic misuse.

    Continue open sourcing current generations

    Confidence: medium
    Dangerous at AGI-level

    When models can autonomously plan and execute complex tasks, open source becomes untenable.

    Have time to decide on framework

    Confidence: high
  • Do the benefits of scrutiny outweigh misuse risks?
    Yes - security through transparency

    More eyes finding and fixing problems. History of open source security.

    Open source is safer

    Confidence: medium
    No - attackers benefit more than defenders

    One attacker can exploit what thousands of defenders miss.

    Closed is safer

    Confidence: medium

Possible Middle Grounds

Several proposals aim to capture benefits of both approaches while mitigating risks:

| Approach | Description | Adoption Status | Effectiveness Estimate |
|---|---|---|---|
| Staged Release | 6-12 month delay after initial deployment before open release | Proposed; not yet implemented at scale | Medium (allows risk monitoring) |
| Structured Access | Weights provided to vetted researchers under agreement | GPT-2 XL initially; some academic partnerships | Medium-High for research |
| Differential Access | Smaller models open, frontier models closed | Current de facto standard | Medium (capability gap narrows) |
| Safety-Contingent Release | Release only if safety evaluations pass thresholds | Anthropic RSP (for deployment, not release) | High if thresholds appropriate |
| Hardware Controls | Release weights but require specialized hardware to run | Not implemented | Low-Medium (hardware becomes accessible) |
| Capability Thresholds | Open below certain compute/parameter thresholds | EU AI Act: 10²⁵ FLOPs as "systemic risk" cutoff | Uncertain (thresholds may become obsolete) |

The International Dimension

The geopolitical calculus shifted dramatically in 2025. DeepSeek's R1 release demonstrated that keeping Western models closed does not prevent capable open models from emerging globally. The market impact was immediate: NVIDIA lost nearly $600 billion in market capitalization in a single day, the largest one-day loss for any U.S. company, and by month's end DeepSeek had overtaken ChatGPT as the most downloaded free app on the Apple App Store in the US.

DeepSeek's Impact: DeepSeek's January 2025 release sent "shockwaves globally" by demonstrating frontier capabilities in an open Chinese model at a fraction of Western costs—reportedly under $1 million in training costs compared to hundreds of millions for comparable Western models. The model runs 20-50x cheaper at inference than OpenAI's comparable offerings. However, NIST/CAISI evaluations found significant issues: DeepSeek models were 12x more susceptible to agent hijacking attacks than U.S. frontier models, and CrowdStrike research showed the model produces insecure code when prompted with politically sensitive terms (Tibet, Uyghurs). Several countries including Italy, Australia, and Taiwan have banned government use of DeepSeek.
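
For a rough sense of what a 20-50x inference price gap means at scale, the arithmetic sketch below uses hypothetical per-token prices and an assumed workload; the absolute numbers are placeholders, and only the ratio is taken from the reporting above.

```python
# Illustrative arithmetic only: the effect of a 20-50x inference price gap.
# Prices and workload below are hypothetical placeholders, not actual list prices.
tokens_per_month = 1_000_000_000          # assumed workload: 1B tokens/month
closed_price_per_mtok = 10.00             # hypothetical closed-model price, $ per 1M tokens
cost_ratio = 30                           # assume a mid-range 30x gap

closed_cost = tokens_per_month / 1e6 * closed_price_per_mtok
open_cost = closed_cost / cost_ratio

print(f"Closed-model API cost: ${closed_cost:,.0f}/month")    # $10,000/month
print(f"Open-model serving cost: ${open_cost:,.0f}/month")    # ~$333/month (ignores own hardware/ops)
```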

If US/Western labs stay closed:

  • May slow dangerous capabilities domestically
  • But China has demonstrated strategic open-sourcing (DeepSeek)
  • Could lose innovation race and talent to more open ecosystems
  • Does not prevent proliferation given global competition

If US/Western labs open source:

  • Loses monitoring capability over deployment
  • But levels playing field globally and enables allies
  • Benefits developing world and academic research
  • May shape global norms through responsible release practices

Coordination problem:

  • Optimal if all major powers coordinate on release thresholds
  • Carnegie research notes emerging convergence on risk frameworks
  • Unilateral Western restraint may simply cede ground to less safety-conscious actors
  • DeepSeek's signing of AI Safety Commitments suggests potential for Chinese engagement

Implications for Different Risks

The open vs closed question has different implications for different risks:

Misuse risks (bioweapons, cyberattacks):

  • Clear case for closed: irreversibility, removal of guardrails
  • Open source dramatically increases risk once capabilities cross danger thresholds
  • However, the March 2024 "ShadowRay" attack on Ray (an open-source AI framework used by Uber, Amazon, OpenAI) showed that open ecosystems create additional attack surfaces

Accident risks (unintended behavior):

  • Mixed: Open source enables external safety research and red-teaming
  • But also enables less careful deployment by actors who may not understand risks
  • Depends on whether scrutiny benefits or proliferation risks dominate

Structural risks (power concentration):

  • Clear case for open: prevents AI monopoly by a few corporations
  • But only if open source is actually accessible (frontier models require substantial compute)
  • LeCun's concern: "a very bad future in which all of our information diet is controlled by a small number of companies"

Race dynamics:

  • Open source may accelerate race (lower barriers to entry)
  • But also may reduce duplicated effort (can build on shared base)
  • DeepSeek's cost-efficient training suggests open release may not slow capability development

Key Policy Developments

U.S. Policy: The NTIA's July 2024 report concluded that evidence is "insufficient to definitively determine either that restrictions on such open-weight models are warranted, or that restrictions will never be appropriate in the future." It recommended monitoring and research rather than immediate restrictions.

California SB-1047: In September 2024, Governor Newsom vetoed this bill, which would have imposed liability requirements on AI developers. The veto cited concerns about stifling innovation without meaningfully improving safety.

EU AI Act: The Act takes a risk-based approach; it entered into force in August 2024, with general-purpose AI (GPAI) model obligations applicable from August 2025. Open-source models receive exemptions from transparency obligations if they use permissive licenses and publicly share architecture information, but models with "systemic risk" (training compute exceeding 10²⁵ FLOPs) face full compliance requirements regardless of openness. France, Germany, and Italy initially opposed applying strict rules to open models, citing innovation concerns.
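
For intuition about where the 10²⁵ FLOPs line sits, a common rule of thumb estimates training compute as roughly 6 × parameters × training tokens; this approximation and the example figures below are illustrative assumptions, not the Act's legal test.

```python
# Rough estimate of training compute against the EU AI Act's 10^25 FLOPs presumption.
# Uses the common approximation C ≈ 6 * N * D (N = parameters, D = training tokens);
# a rule-of-thumb estimate for dense transformers, not the regulation's own methodology.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

# Illustrative example: a 405B-parameter model trained on ~15 trillion tokens.
flops = estimated_training_flops(405e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")                           # ~3.6e+25
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # True
```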

Emerging Consensus: Carnegie Endowment research in July 2024 found it is "no longer accurate to cast decisions about model and weight release as an ideological debate between rigid 'pro-open' and 'anti-open' camps." Instead, different camps have begun to converge on recognizing open release as a "positive and enduring feature of the AI ecosystem, even as it also brings potential risks."

Regulatory Comparison

| Jurisdiction | Policy Stance | Open Model Treatment | Enforcement Status |
|---|---|---|---|
| United States (NTIA) | Cautiously pro-open | No restrictions recommended without clearer risk evidence | Monitoring via AISI |
| EU AI Act | Risk-based | Exemptions for non-systemic models; full rules above 10²⁵ FLOPs | Applicable August 2025 |
| California (SB-1047) | Proposed liability | Would have imposed developer liability; vetoed September 2024 | Not enacted |
| China | Strategic openness | State-backed labs releasing competitive open models (DeepSeek, Qwen) | Active support |
| UK | Light-touch | No specific open model restrictions; voluntary commitments | Monitoring via AISI |

Related Pages

Key Debates: Should We Pause AI Development? · Is Interpretability Sufficient for Safety? · Is Scaling All You Need?
Safety Research: Scalable Oversight
Organizations: Alignment Research Center · US AI Safety Institute
Approaches: AI Safety Cases · AI Governance Coordination Technologies
Risks: Multipolar Trap (AI Development) · AI Authoritarian Tools
Policy: Voluntary AI Safety Commitments · US Executive Order on Safe, Secure, and Trustworthy AI
Analysis: OpenAI Foundation Governance Paradox · Long-Term Benefit Trust (Anthropic)
Models: AI Capability Proliferation Model · International AI Coordination Game Model
Transition Model: Lab Behavior
Labs: GovAI
People: Yoshua Bengio · Stuart Russell