Longterm Wiki
Updated 2026-01-28
Summary

Analyzes model registries as foundational governance infrastructure across US (≥10^26 FLOP threshold), EU (≥10^25 FLOP), and state-level implementations, showing they enable pre-deployment review and incident tracking but don't prevent harm directly. Provides specific implementation recommendations including 30-90 day pre-deployment notification and 72-hour incident reporting, with medium-high confidence that registries improve visibility and incident learning.

Model Registries


Overview

Model registries represent a foundational governance tool for managing risks from advanced AI systems. Like drug registries that enable pharmaceutical regulation or aircraft registries that support aviation safety, AI model registries would create centralized databases containing information about frontier AI systems—their capabilities, training details, deployment contexts, and safety evaluations. This infrastructure provides governments with the visibility necessary to implement more sophisticated AI governance measures.

The policy momentum is significant. The U.S. Executive Order on AI (October 2023) mandated quarterly reporting for models trained above 10^26 FLOP. The EU AI Act requires registration of high-risk AI systems and general-purpose AI models. California's SB 53 (signed September 2025) requires transparency reports and incident reporting for frontier models above 10^26 FLOP. New York's RAISE Act requires incident reporting within 72 hours. These requirements create the skeleton of a registry system, though implementation remains fragmented and early-stage.

The strategic value of model registries lies in their enabling function. A registry alone doesn't prevent harm—but it provides the information foundation for safety requirements, pre-deployment review, incident tracking, and international coordination. Without knowing what models exist and what capabilities they possess, governments cannot effectively regulate AI development. Model registries transform AI governance from reactive to proactive by creating visibility into the development pipeline before deployment.

Current Implementation Landscape

United States

Federal Level: The October 2023 Executive Order directed the Bureau of Industry and Security (BIS) to establish reporting requirements for advanced AI models. Under the proposed rule:

  • Entities must report models trained with >10^26 FLOP
  • Quarterly reporting on training activities
  • Six-month forward-looking projections required
  • Information includes ownership, compute access, safety testing
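
The reporting trigger is an estimate of training compute. One common heuristic (an illustrative assumption here, not part of the rule text) approximates training FLOP as 6 × parameters × training tokens, which can then be compared against the threshold:

```python
# Rough training-compute estimate using the common 6*N*D heuristic.
# The BIS rule does not prescribe this formula; it is an illustrative
# approximation for checking a run against the reporting threshold.

US_FEDERAL_THRESHOLD_FLOP = 1e26  # EO 14110 / proposed BIS reporting trigger


def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute as 6 * N * D."""
    return 6.0 * parameters * training_tokens


def must_report(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute exceeds the US federal threshold."""
    return estimate_training_flop(parameters, training_tokens) > US_FEDERAL_THRESHOLD_FLOP


# Example: a 70B-parameter model trained on 15T tokens is ~6.3e24 FLOP,
# well below the 10^26 reporting line.
flop = estimate_training_flop(70e9, 15e12)
print(f"{flop:.1e} FLOP, report: {must_report(70e9, 15e12)}")
```

The same check generalizes to any compute-based threshold by swapping the constant.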

State Level:

| State | Legislation | Key Requirements | Status |
|---|---|---|---|
| California | SB 53 | Transparency reports for models above 10^26 FLOP; 15-day incident reporting | Enacted Sep 2025; effective Jan 1, 2026 |
| New York | RAISE Act | 72-hour incident reporting; safety protocol publication; civil penalties up to $1M | Enacted 2024 |
| Colorado | SB 24-205 | High-risk AI system registration; algorithmic impact assessments | Enacted May 2024 |

European Union

The EU AI Act (Regulation EU 2024/1689), which entered into force August 1, 2024, establishes the most comprehensive registry requirements to date:

  • General-Purpose AI Models: Registration with EU AI Office if trained above 10^25 FLOP
  • High-Risk AI Systems: Registration in EU database before market placement
  • Systemic Risk Models: Additional transparency and safety requirements
  • Required Information: Technical documentation, compliance evidence, intended use

The EU database will be publicly accessible for high-risk AI systems, with confidential technical documentation available only to regulators. Per Article 49, providers must register themselves and their systems before placing high-risk AI systems on the market. Most high-risk obligations become applicable between August 2026 and August 2027.

China

China has implemented algorithm registration (filing) requirements under a series of measures, including the Interim Measures for Generative AI Services (in force since August 2023):

  • Deep synthesis (deepfake) algorithms must register with CAC
  • Generative AI services require registration before public offering
  • Algorithmic recommendation services subject to separate registry
  • As of November 2025, 611 generative AI services and 306 apps had completed filing
  • Apps must publicly disclose which filed model they use, including filing number
  • Focus on content moderation and political sensitivity

Comparison Table

| Jurisdiction | Compute Threshold | Pre/Post Deployment | Public Access | Penalties |
|---|---|---|---|---|
| US Federal | 10^26 FLOP | Pre + ongoing | Limited (security) | Under development |
| California | 10^26 FLOP | Pre-deployment | Transparency reports public | Up to $1M/violation |
| New York | Scale-based | Pre + incidents | Protocols public | Up to $1M |
| EU | 10^25 FLOP | Pre-market | Partial | Up to 7% revenue |
| China | Any public AI | Pre-deployment | Limited | Service suspension |
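
The compute thresholds above can be encoded as a simple lookup to see which jurisdictions a given training run would trigger (a toy sketch; the thresholds come from the table, everything else is illustrative):

```python
# Compute-based registration thresholds from the comparison table.
# None = no FLOP threshold (the trigger is scale- or service-based instead).
THRESHOLDS_FLOP = {
    "US Federal": 1e26,
    "California": 1e26,
    "New York": None,   # scale-based, not a FLOP threshold
    "EU": 1e25,
    "China": None,      # any publicly offered generative AI service
}


def triggered_jurisdictions(training_flop: float) -> list[str]:
    """Jurisdictions whose compute threshold this training run meets."""
    return [j for j, t in THRESHOLDS_FLOP.items()
            if t is not None and training_flop >= t]


print(triggered_jurisdictions(3e25))  # a 3e25 FLOP run meets only the EU threshold
```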

Strategic Assessment

Benefits of Model Registries

| Benefit | Mechanism | Confidence |
|---|---|---|
| Visibility for governance | Know what exists before regulating | High |
| Incident learning | Track failures across the ecosystem | High |
| Pre-deployment review | Enable safety checks before release | Medium-High |
| International coordination | Common information standards | Medium |
| Enforcement foundation | Can't enforce rules without knowing who to apply them to | High |
| Research ecosystem support | Aggregate data for policy research | Medium |

Limitations and Challenges

| Challenge | Description | Mitigation |
|---|---|---|
| Threshold gaming | Developers structure training to avoid thresholds (research shows model distillation and mixture-of-agents approaches can achieve frontier performance below thresholds) | Multiple thresholds; capability-based triggers |
| Dual-use concerns | Registry information could advantage competitors/adversaries | Tiered access; confidentiality provisions |
| Open-source gap | Registries focus on centralized developers | Post-release monitoring; community registries |
| Enforcement difficulty | Verifying submitted information is accurate | Auditing; whistleblower protections |
| Rapid obsolescence | Thresholds outdated as technology advances | Automatic update mechanisms; sunset provisions |
| International gaps | No global registry; jurisdiction shopping | International coordination (nascent) |

Relationship to Other Governance Tools

Model registries are necessary but not sufficient for AI governance. They enable, but do not replace, safety requirements, pre-deployment evaluations, incident response, and enforcement mechanisms.

Implementation Recommendations

Minimum Viable Registry

For jurisdictions establishing initial AI model registries:

  1. Compute-based threshold: 10^25-10^26 FLOP (adjustable)

  2. Pre-deployment notification: 30-90 days before public release

  3. Required information:

    • Developer identity and contact
    • Training compute and data sources (categorical)
    • Intended use cases and deployment scope
    • Safety evaluation summary
    • Known risks and mitigations
  4. Incident reporting: 72 hours for critical harms

  5. Annual updates: Mandatory refresh of all information

  6. Tiered access: Public summary + confidential technical details

Best Practices from Research

Based on analysis by the Institute for Law & AI:

| Principle | Rationale | Implementation |
|---|---|---|
| Minimal burden | Encourage compliance, reduce resistance | Require only information developers already track |
| Interoperable | Enable international coordination | Align with emerging international standards |
| Updatable | Technology changes faster than regulation | Built-in mechanism for threshold adjustment |
| Complementary | Registry enables other tools, doesn't replace them | Design for integration with safety requirements |
| Proportionate | Different requirements for different risk levels | Tiered obligations based on capability/deployment |

Avoiding Common Pitfalls

Don't:

  • Set thresholds so high only 2-3 models qualify (too narrow)
  • Require disclosure of trade secrets unnecessarily (industry opposition)
  • Create registry without enforcement mechanism (toothless)
  • Assume static thresholds will remain appropriate (obsolescence)
  • Ignore international coordination from the start (jurisdiction shopping)

Future Trajectory

Near-Term (2025-2026)

  • California SB 53 effective January 2026 (transparency reports, incident reporting)
  • EU high-risk AI database operational (August 2026-2027 compliance deadlines)
  • GovAI forecasts 103-306 models exceeding 10^25 FLOP (EU threshold) by 2028
  • 5-10 jurisdictions with some form of registry
  • Initial international coordination discussions

Medium-Term (2027-2030)

  • Potential international registry framework
  • Capability-based triggers supplement compute thresholds
  • Integration with compute monitoring
  • Real-time incident reporting systems
  • Cross-border data sharing agreements

Key Uncertainties

| Question | Optimistic Scenario | Pessimistic Scenario |
|---|---|---|
| International coordination | Common standards, shared database | Fragmented, incompatible systems |
| Enforcement effectiveness | High compliance, meaningful oversight | Widespread evasion, symbolic only |
| Open-source coverage | Community registries, post-release tracking | Unmonitored proliferation |
| Threshold relevance | Adaptive thresholds track real risks | Outdated, easily gamed |

Quick Assessment

| Dimension | Assessment | Notes |
|---|---|---|
| Tractability | High | Active legislation in multiple jurisdictions |
| If AI risk high | High | Essential infrastructure for any governance |
| If AI risk low | Medium | Still useful for transparency and accountability |
| Neglectedness | Low-Medium | Active policy area but implementation gaps |
| Timeline to impact | 1-3 years | Requirements taking effect 2025-2026 |
| Grade | B+ | Foundational but not transformative alone |

Risks Addressed

| Risk | Mechanism | Effectiveness |
|---|---|---|
| Racing Dynamics | Visibility into development timelines | Low-Medium |
| Misuse Risks | Know what capabilities exist | Medium |
| Regulatory arbitrage | Harmonized international requirements | Low (currently) |
| Incident learning gaps | Mandatory reporting creates database | Medium-High |

Complementary Interventions

  • Compute Governance - Hardware-based verification complements software registration
  • Export Controls - Control inputs to models in registry
  • AI Safety Institutes - Institutions to review registered models
  • Responsible Scaling Policies - Industry commitments that registries can verify

Sources

Legislation and Regulation

  • US Executive Order 14110 (October 2023): "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" - Established 10^26 FLOP reporting threshold
  • EU AI Act (2024): Regulation (EU) 2024/1689 - Article 49 covers registration requirements
  • California SB 53 (2025): Transparency in Frontier Artificial Intelligence Act - First US state frontier AI safety law; effective January 2026
  • New York RAISE Act (2024): Responsible AI Safety and Education Act - 72-hour incident reporting

AI Transition Model Context

Model registries improve the AI Transition Model through Civilizational Competence:

| Factor | Parameter | Impact |
|---|---|---|
| Civilizational Competence | Regulatory Capacity | Provides information foundation for any governance interventions |
| Civilizational Competence | Institutional Quality | Enables pre-deployment review and incident learning |
| Civilizational Competence | International Coordination | Common standards facilitate cross-border coordination |

Registries are necessary but not sufficient infrastructure; they enable rather than replace safety requirements, evaluations, and enforcement mechanisms.
