US Executive Order on Safe, Secure, and Trustworthy AI

Executive Order 14110 (October 2023) established compute-based reporting thresholds (10^26 FLOP for general dual-use models, 10^23 FLOP for biological sequence models) and created the US AI Safety Institute (AISI), but was revoked after 15 months with roughly 85% of its requirements completed. The 10^26 threshold was never triggered before revocation; GPT-5, estimated at 3×10^25 FLOP, remained below it, illustrating concerns about threshold obsolescence. For international comparison, the EU AI Act set a threshold 10x lower (10^25 FLOP) and, as legislation, cannot be revoked by executive action.

Introduced: 2023-10-30
Status: Revoked
Author: President Biden
Scope: Federal
Type: Executive Order
Number: 14110
Signed: October 30, 2023
Revoked: January 20, 2025
Duration: 15 months
Implementation at revocation: ~85% of 150 requirements completed
Related policies: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act · EU AI Act · Voluntary AI Safety Commitments
Related concepts: Compute Governance

Overview

Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence, signed by President Biden on October 30, 2023, represented the most comprehensive federal response to AI governance in US history. The 111-page directive established mandatory reporting requirements for frontier AI systems, created new oversight institutions, and addressed both immediate risks like algorithmic bias and long-term catastrophic risks from advanced AI capabilities. According to analysis by Stanford HAI, the order placed 150 specific requirements on over 50 federal entities—making it the most detailed AI policy directive ever issued by any government.

The timing was strategically significant: Biden signed the EO one day before the UK's AI Safety Summit at Bletchley Park (November 1-2, 2023), where the US joined 27 other countries in signing the Bletchley Declaration on AI safety. On the same day the EO was released, G7 leaders announced the Hiroshima AI Process Guiding Principles and a voluntary Code of Conduct for organizations developing advanced AI systems, aligning with the EO on red-teaming and content authentication.

The order was revoked by President Trump on January 20, 2025, within hours of his assuming office. The White House stated that EO 14110 "hindered AI innovation and imposed onerous and unnecessary government control over the development of AI." Stanford HAI tracking showed that approximately 85% of the order's 150 distinct requirements had been completed before revocation.

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Duration | 15 months | Oct 30, 2023 to Jan 20, 2025 |
| Scope | 150+ requirements | Across 50+ federal entities per Stanford HAI |
| Implementation | ≈85% completed | 13/13 management requirements fully implemented per GAO |
| Budget Impact | $10M initial, $47.7M requested | AISI received $10M FY2024; Biden requested +$47.7M for FY2025 |
| Companies Affected | Fewer than 15 | BIS assessment: no more than 15 companies exceeded compute thresholds |
| Enforcement | Weak | No specified penalties; relied on voluntary cooperation |
| Durability | Revoked Day 1 | Executive action vulnerable to administration change |
| Legacy | Partial survival | Final rules (KYC) require formal rulemaking to rescind; AISI → CAISI June 2025 |

For AI safety, the order represented both progress and limitations. It normalized government oversight of frontier AI development and created institutional capacity through the US AI Safety Institute. Yet it primarily focused on transparency and voluntary cooperation rather than mandatory safety requirements or deployment restrictions.

Key Provisions and Mechanisms

Compute-Based Reporting Framework

The order's most innovative feature was its use of computational thresholds to trigger regulatory requirements. Companies training models using more than 10^26 floating-point operations (FLOP) were required to notify the Department of Commerce before and during training, share safety testing results, and provide detailed information about model capabilities, cybersecurity measures, and red-team testing outcomes.
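To make the trigger concrete, here is a minimal sketch of the threshold logic, assuming the common ≈6 × parameters × tokens approximation for dense-transformer training compute (the approximation, the function names, and the example model size are illustrative assumptions, not figures from the order):

```python
GENERAL_THRESHOLD_FLOP = 1e26   # dual-use foundation models
BIO_THRESHOLD_FLOP = 1e23       # models trained primarily on biological sequence data

def training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate dense-transformer training compute: ~6 FLOP per parameter per token."""
    return 6 * n_params * n_tokens

def must_notify_commerce(flop: float, bio_sequence_model: bool = False) -> bool:
    """EO 14110-style reporting trigger: exceed the applicable compute threshold."""
    threshold = BIO_THRESHOLD_FLOP if bio_sequence_model else GENERAL_THRESHOLD_FLOP
    return flop > threshold

# A hypothetical 1T-parameter model trained on 15T tokens:
run = training_flop(1e12, 15e12)   # 9e25 FLOP, just under the general threshold
print(f"{run:.1e} FLOP -> general reporting: {must_notify_commerce(run)}")
print(f"same compute as a bio model: {must_notify_commerce(run, bio_sequence_model=True)}")
```

Even a very large hypothetical run lands just under the general threshold, while the same compute budget applied to a biological sequence model overshoots its threshold by orders of magnitude, which is the asymmetry the table below summarizes.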

Compute Threshold Comparison

| Threshold | Application | Training Cost Estimate | Models Affected |
|---|---|---|---|
| 10^26 FLOP | General dual-use foundation models | $10-100M per training run | Next-gen frontier models (GPT-5 class) |
| 10^23 FLOP | Biological sequence models | ≈$10-100K per training run | Specialized bio-AI tools |
| 10^20 FLOP/s | Computing cluster capacity threshold | N/A | Large data centers |
| GPT-4 (reference) | Estimated at ≈2 × 10^25 FLOP | ≈$100M | Just under general threshold |
| GPT-5 (reference) | Estimated at ≈3 × 10^25 FLOP | ≈$200M+ | Still below threshold |
| GPT-3 (reference) | 3.14 × 10^23 FLOP | ≈$1M | ≈318x below threshold |

A Biden Administration official stated that "the threshold was set such that current models wouldn't be captured but the next generation state-of-the-art models likely would." The Bureau of Industry and Security assessed that no more than 15 companies exceeded the reporting thresholds for models and computing clusters.

No model ever triggered the threshold before revocation. Epoch AI estimated GPT-5 pretraining at approximately 3 × 10^25 FLOP—still below the 10^26 threshold. This reflects a shift in frontier AI development: rather than scaling pre-training compute by orders of magnitude, labs increasingly focus on inference-time compute (reasoning models like OpenAI o1) and algorithmic efficiency improvements. xAI's Colossus data center may have approached 10^26 FLOP for some training runs, but this remains unconfirmed.

The separate 10^23 FLOP threshold for biological sequence models reflected concerns that even smaller models could assist in bioweapon development—approximately 1,000 times less compute than the general threshold, acknowledging that biological design capabilities may emerge at lower scales than general intelligence capabilities.

The compute-based approach offered several advantages over capability-based regulations. FLOP measurements are objective and difficult to manipulate, unlike subjective assessments of AI capabilities. The thresholds also provided predictability for developers. However, the static nature of these numbers created risks of obsolescence as algorithmic efficiency improves—researchers estimated the thresholds could become outdated within 3-5 years. According to Fenwick analysis, algorithmic improvements of approximately 2-3x per year mean a model that would have required 10^26 FLOP in 2023 might achieve equivalent capabilities with 10^25 FLOP by 2026—rendering static thresholds increasingly ineffective.
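A back-of-the-envelope sketch of that erosion, using only the 2-3x/year efficiency rates cited above (the simple exponential decay model is an illustrative assumption):

```python
# How much compute a fixed capability level needs after `years` of algorithmic
# progress, at the ~2-3x/year efficiency-improvement rates cited above.

def equivalent_compute(flop_at_launch: float, rate_per_year: float, years: int) -> float:
    """Compute needed later to match what flop_at_launch bought at year 0."""
    return flop_at_launch / (rate_per_year ** years)

for rate in (2.0, 3.0):
    needed = equivalent_compute(1e26, rate, years=3)   # 2023 -> 2026
    print(f"at {rate}x/year: 1e26-FLOP-class capability needs ~{needed:.1e} FLOP by 2026")
# Prints ~1.3e25 (at 2x) and ~3.7e24 (at 3x): both an order of magnitude or more
# under the static 1e26 trigger, which is the obsolescence problem in miniature.
```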

Institutional Infrastructure Creation

The order established the US AI Safety Institute (AISI) within the National Institute of Standards and Technology, tasked with developing evaluation methodologies, conducting safety assessments, and coordinating with international partners. Unlike purely advisory bodies, AISI had operational responsibilities including direct testing of frontier models and developing technical standards for the broader AI ecosystem.

AISI Timeline and Development

| Date | Event |
|---|---|
| Nov 2023 | AISI announced at NIST, one day after EO 14110 signed |
| Feb 2024 | Elizabeth Kelly appointed as director; AISIC consortium created with 200+ member organizations |
| Mar 2024 | $10M initial budget allocated (vs. $17.7M FY2025 request) |
| May 2024 | NIST Director warns only $1M actually available; "very difficult without additional funding" |
| Aug 2024 | Agreements signed with Anthropic and OpenAI for pre-deployment testing |
| Nov 2024 | First joint evaluation with UK AISI: Claude 3.5 Sonnet assessment |
| Dec 2024 | OpenAI o1 model evaluation published |
| Jan 2025 | EO 14110 revoked; AISI future uncertain |
| Feb 2025 | Elizabeth Kelly resigns as director; NIST layoffs announced |
| Jun 2025 | Renamed to Center for AI Standards and Innovation (CAISI); mission refocused from safety to innovation |

AISI's creation paralleled the UK's AI Safety Institute, with the two signing cooperation agreements and developing shared evaluation frameworks. The November 2024 joint evaluation of Claude 3.5 Sonnet tested biological capabilities, cyber capabilities, software/AI development, and safeguard efficacy—representing the first such government-led assessment of a frontier model.

However, AISI faced significant resource constraints. With only $1-10M in actual funding versus the $17.7M requested, and staffing well below the estimated 200+ personnel needed for full capacity, the institute struggled to match the technical sophistication of private AI laboratories.

Global AI Safety Institute Comparison

| Institute | Established | Budget (Annual) | Staff | Key Activities |
|---|---|---|---|---|
| US AISI/CAISI | Announced Nov 2023; operational Feb 2024 | $10M (FY24); $6M actual spending | ≈50 estimated | Model evaluation; standards development |
| UK AISI | Nov 2023 | £100M (≈$125M) over 3 years | 100+ | Pre-deployment testing; international coordination |
| Japan AISI | Feb 2024 | ¥2B (≈$13M) initial | ≈30 | Standards research; evaluation frameworks |
| Singapore AISI | Feb 2024 | Not disclosed | ≈20 | Testing frameworks; regional coordination |
| Canada AISI | Nov 2024 | C$50M ($37M) pledged | Not disclosed | Launched Nov 2024 at SF summit |
| EU AI Office | Feb 2024 | Part of EC budget | ≈140 | Regulatory enforcement; standards |

The US AISI's $10M budget contrasts sharply with the UK's £100M commitment. NIST Director Laurie Locascio warned in May 2024 that only $1M was actually available, stating it would be "very, very tough" to continue operations without additional funding.

Leadership Transition and Organizational Uncertainty

Elizabeth Kelly, the inaugural AISI director, resigned on February 6, 2025. In her departure announcement, she stated: "I am confident that AISI's future is bright and its mission remains vital to the future of AI innovation." NIST Director Laurie Locascio also departed at the start of 2025 to head the American National Standards Institute (ANSI). Reports emerged that the Trump administration planned to lay off up to 500 NIST staffers, which posed particular risk for AISI as a new organization where most employees remained on probation.

Cloud Compute Governance

The order introduced "Know Your Customer" (KYC) requirements for Infrastructure-as-a-Service (IaaS) providers, mandating that cloud computing companies verify the identities of foreign customers and monitor large training runs. The Bureau of Industry and Security's proposed rule required US IaaS providers to implement Customer Identification Programs (CIPs) including:

  • Collection of customer name, address, payment source, email, telephone, and IP addresses
  • Verification of whether beneficial owners are US persons
  • Reporting to Commerce when foreign customers train large AI models with potential malicious applications
  • Violations subject to civil and criminal penalties under the International Emergency Economic Powers Act
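A minimal sketch of what a CIP record and the associated reporting check might look like, using the data elements listed above (the field and function names are hypothetical; the proposed rule specified data elements and obligations, not an implementation):

```python
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    # Data elements mirror those listed in the proposed rule; names are hypothetical.
    name: str
    address: str
    payment_source: str
    email: str
    telephone: str
    ip_addresses: list[str] = field(default_factory=list)
    beneficial_owners_are_us_persons: bool = False

def must_report_training_run(customer: CustomerRecord, training_flop: float,
                             threshold_flop: float = 1e26) -> bool:
    """Flag large AI training runs by foreign customers for reporting to Commerce."""
    is_foreign = not customer.beneficial_owners_are_us_persons
    return is_foreign and training_flop > threshold_flop
```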

These requirements reflected recognition that compute infrastructure represents a chokepoint in AI development that the US can potentially control. By leveraging American companies' dominance in cloud computing, the order extended US regulatory reach to foreign AI developers who rely on American infrastructure—complementing export controls on AI chips.

```mermaid
flowchart TD
  subgraph EO14110["Executive Order 14110 Framework"]
      COMPUTE[Compute Thresholds<br/>10^26 FLOP general<br/>10^23 FLOP biological] --> REPORT[Mandatory Reporting<br/>to Commerce Dept]
      REPORT --> AISI[US AI Safety Institute<br/>Pre-deployment Testing]
      CLOUD[Cloud KYC Requirements<br/>IaaS Providers] --> FOREIGN[Foreign Customer<br/>Identification]
      FOREIGN --> MONITOR[Training Run<br/>Monitoring]
  end

  subgraph REVOCATION["Trump Administration (Jan 2025)"]
      REVOKE[EO 14110 Revoked] --> REVIEW[Agency Review<br/>of All Actions]
      REVIEW --> CAISI[AISI → CAISI<br/>Safety → Innovation Focus]
      REVIEW --> UNCERTAIN[Status of KYC Rules<br/>Uncertain]
  end

  EO14110 --> REVOCATION

  style COMPUTE fill:#e1f5fe
  style AISI fill:#e1f5fe
  style REVOKE fill:#ffcdd2
  style CAISI fill:#fff3e0
```

The practical implementation faced several challenges. Defining "large training runs" in real-time requires technical sophistication from cloud providers, who must distinguish AI training from other compute-intensive applications. Moreover, determined adversaries might circumvent these requirements by using non-US cloud providers or developing domestic computing capabilities.
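The cluster-capacity side of the framework (the 10^20 FLOP/s threshold in the table above) is the more mechanical test. A sketch, assuming an illustrative ~1 PFLOP/s of dense BF16 throughput per accelerator (a rounded vendor-style figure, not a number from the order):

```python
CLUSTER_THRESHOLD_FLOP_PER_S = 1e20   # EO 14110 computing-cluster threshold

def cluster_capacity_flop_per_s(num_accelerators: int, flop_per_s_each: float) -> float:
    """Theoretical peak capacity: accelerators x per-device throughput."""
    return num_accelerators * flop_per_s_each

per_gpu = 1e15   # ~1 PFLOP/s dense BF16, rounded for illustration
for n in (10_000, 100_000, 200_000):
    cap = cluster_capacity_flop_per_s(n, per_gpu)
    print(f"{n:>7,} accelerators -> {cap:.0e} FLOP/s, "
          f"reportable: {cap >= CLUSTER_THRESHOLD_FLOP_PER_S}")
# At these figures the line falls at roughly 100,000 accelerators, which is why
# only the largest data centers were expected to be covered.
```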

Broader Policy Scope

While the compute thresholds and AISI attracted the most attention, the order addressed AI governance across eight policy areas. Several provisions addressed immediate, non-frontier risks:

| Policy Area | Key Requirements | Lead Agencies | Deadline | Status at Revocation |
|---|---|---|---|---|
| Civil Rights & Bias | Guidance to prevent AI discrimination in housing, benefits, hiring; DOJ coordination on civil rights enforcement | DOJ, HUD, EEOC | 180 days | Guidance issued |
| Healthcare | AI safety program for harm reporting; quality control strategy for AI-enabled medical devices | HHS | 180 days | Framework published |
| Immigration & Talent | Streamlined visa processing for AI researchers; modernized O-1A, EB-1A, EB-2 pathways; updated Schedule A occupations | State, DHS, DOL | 45-180 days | Largely completed |
| Consumer Protection | Encouraged FTC/CFPB to apply existing consumer protection laws to AI; directed new safeguards against AI-related fraud | FTC, CFPB | Ongoing | Some actions taken |
| Government Modernization | Each agency to designate a Chief AI Officer; risk-based approach to generative AI use | OMB, all agencies | 60 days | 13/13 fully implemented |
| Innovation & Competition | NSF to launch NAIRR pilot program within 90 days; measures to attract AI talent | NSF | 90 days | Pilot launched |

The government modernization provisions had notably high completion rates. OMB issued guidance M-24-10 defining "rights-impacting" and "safety-impacting" AI categories—terminology that mirrored the EU AI Act's "high-risk" framework—and all 24 CFO Act agencies designated Chief AI Officers by the deadline. These institutional changes largely survived the revocation, as agencies retained their AI leadership structures at their own discretion.

The order relied on two key legal authorities for its mandatory provisions. The Defense Production Act (DPA) was invoked to compel reporting from companies developing dual-use foundation models above the compute thresholds—an expansive application of Cold War-era industrial mobilization law to commercial AI development. Some legal scholars questioned whether the DPA could legitimately be used for information-gathering rather than production mandates. The International Emergency Economic Powers Act (IEEPA) underpinned the KYC requirements for cloud providers, enabling civil and criminal penalties for violations.

However, the order included no new enforcement mechanisms for most provisions. The BIS proposed rule on developer reporting requirements—published September 2024 with a comment period closing October 11, 2024—was never finalized before revocation, meaning the compute-based reporting regime never achieved full regulatory force. This gap between the order's sweeping scope and its limited enforcement authority was a recurring criticism.

Safety Implications and Risk Assessment

Promising Aspects for AI Safety

The order's most significant safety contribution was establishing the principle that frontier AI development requires government oversight. By creating mandatory reporting requirements and institutional evaluation capacity, it moved beyond purely voluntary industry commitments toward structured accountability. The compute-based thresholds provided objective criteria that avoided subjective judgments about AI capabilities while still capturing systems of genuine concern.

The institutional infrastructure created by the order built long-term capacity for AI governance that could prove crucial as capabilities advance. AISI's technical expertise and evaluation methodologies may yet become essential tools for assessing increasingly powerful systems. The institute's international coordination role also created foundations for global governance frameworks that could address catastrophic risks requiring multilateral cooperation.

The order's breadth across multiple risk categories—from algorithmic bias to national security threats—reflected a sophisticated understanding of AI's diverse impact pathways. By addressing both immediate harms and long-term risks simultaneously, it avoided the false dichotomy between near-term and existential AI safety concerns. The integration of fairness, security, and catastrophic risk considerations within a single framework could prove influential for future governance approaches.

Concerning Limitations

Despite its comprehensive scope, the order lacked mechanisms to actually prevent the development or deployment of dangerous AI systems. The reporting requirements provided visibility but not control, and the order included no authority to pause training runs or restrict model releases on safety grounds. This was a fundamental limitation for addressing catastrophic risks that might emerge from future AI systems.

The voluntary nature of many provisions weakened the order's effectiveness. While the reporting requirements were mandatory, many safety-related provisions relied on industry cooperation rather than enforceable mandates. Companies that chose not to comply faced unclear consequences, undermining the order's credibility as a regulatory framework. The absence of specified penalties or enforcement mechanisms reflected the limited authority available through executive action.

The order's durability was always uncertain given its status as executive action rather than legislation, an uncertainty vindicated by its revocation after 15 months. This political fragility is a significant weakness for any approach to long-term AI risks, which require governance sustained across multiple electoral cycles.

International Comparison of AI Compute Thresholds

| Jurisdiction | Threshold | Scope | Obligations | Status |
|---|---|---|---|---|
| US EO 14110 | 10^26 FLOP | General dual-use models | Report to Commerce; share red-team results | Revoked Jan 2025 |
| US EO 14110 | 10^23 FLOP | Biological sequence models | Same as above | Revoked Jan 2025 |
| EU AI Act | 10^25 FLOP | GPAI with systemic risk | Registration; model evaluation; incident reporting | In force Aug 2025 |
| UK (voluntary) | None specified | Frontier models | Voluntary pre-deployment testing with UK AISI | Active |
| China (proposed) | Not compute-based | Foundation models serving public | Registration; security assessment; content moderation | Partial implementation |

The EU AI Act sets a 10x lower threshold (10^25 vs 10^26 FLOP) than the US EO did, meaning more models face regulatory obligations in Europe. The US threshold was intentionally set high—a Biden Administration official stated it was designed so "current models wouldn't be captured but the next generation state-of-the-art models likely would."
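A tiny sketch of the jurisdictional gap, using the thresholds from the table and the GPT-5 estimate cited earlier (the structure and names are illustrative):

```python
# Thresholds from the comparison table; the revoked US value is kept for contrast.
THRESHOLDS_FLOP = {
    "US EO 14110 (revoked)": 1e26,
    "EU AI Act (GPAI with systemic risk)": 1e25,
}

gpt5_pretraining_estimate = 3e25   # Epoch AI estimate cited above

for regime, threshold in THRESHOLDS_FLOP.items():
    print(f"{regime}: covered = {gpt5_pretraining_estimate > threshold}")
# EU AI Act: covered = True; US EO 14110: covered = False.
# The 10x threshold gap is exactly what separates the two outcomes here.
```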

Revocation and Aftermath

Trump Administration Response

On January 20, 2025, President Trump revoked Executive Order 14110 within hours of assuming office. The White House fact sheet stated that the order "hindered AI innovation and imposed onerous and unnecessary government control over the development of AI."

Policy Paradigm Comparison

| Dimension | Biden EO 14110 | Trump EO 14179 & Subsequent Orders |
|---|---|---|
| Primary framing | Safety and trustworthiness | Innovation and competitiveness |
| Government role | Active oversight and evaluation | Remove barriers; minimize intervention |
| Compute thresholds | 10^26 FLOP triggers mandatory reporting | Revoked; no federal thresholds |
| AISI/CAISI mission | Pre-deployment safety testing | Innovation promotion; national security focus |
| State regulation | Neutral; states develop own frameworks | Aggressive preemption via DOJ litigation |
| International stance | Multilateral safety cooperation | Competitive advantage; refused Paris communique |
| Industry relationship | Mandatory reporting + voluntary testing agreements | Voluntary engagement; "pro-growth" emphasis |

Three days later, on January 23, 2025, Trump signed Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," which:

  • Directed agencies to identify and revise/rescind all EO 14110 actions "inconsistent with enhancing America's leadership in AI"
  • Mandated development of an "action plan" within 180 days to "sustain and enhance America's global AI dominance"
  • Explicitly framed AI development as a matter of national competitiveness over safety
  • Required OMB to revise memoranda M-24-10 and M-24-18 within 60 days

Vice President Vance subsequently stated that "pro-growth AI policies" should be prioritized over safety, and the US refused to sign the February 2025 AI Action Summit communique in Paris.

What Survived the Revocation

The revocation did not automatically repeal everything implemented under EO 14110. Legal analysis indicates:

| Category | Status | Uncertainty |
|---|---|---|
| Completed agency actions | Remain unless specifically reversed | High—under review |
| Final rules (e.g., IaaS KYC) | Require formal rulemaking to rescind | Medium |
| Voluntary industry agreements | Continue unless parties withdraw | Low |
| AISI evaluations completed | Published; cannot be "unreviewed" | None |
| International agreements | Continue; diplomatic relations independent | Low |
| Chief AI Officer designations | Remain at agency discretion | Medium |

The Commerce Department's Framework for AI Diffusion and other final rules may require separate rulemaking processes to revoke, providing some continuity even as the overall framework shifts.

AISI to CAISI Transformation

In June 2025, the US AI Safety Institute was renamed to the Center for AI Standards and Innovation (CAISI) with a fundamentally different mission. According to Commerce Secretary Howard Lutnick: "For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards. CAISI will evaluate and enhance US innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards."

This represents a shift from:

  • Safety evaluation → Innovation promotion
  • Pre-deployment risk assessment → National security focus
  • International safety coordination → Competitive advantage emphasis

The December 2025 NIST announcement of $10M in AI centers (with MITRE) and a planned $10M AI for Resilient Manufacturing Institute suggests resources are being redirected toward manufacturing and cybersecurity applications rather than frontier model safety evaluation.

Trump AI Action Plan (July 2025)

On July 23, 2025, the White House OSTP released "Winning the Race: America's AI Action Plan," outlining 90 federal policy positions across three pillars: accelerating innovation, building American AI infrastructure, and leading in international diplomacy and security. The plan was developed in response to EO 14179's mandate for a 180-day action plan and incorporated over 10,000 public comments.

The plan takes a deregulatory stance, directing OMB to identify and repeal regulations that hinder AI development. It emphasizes national security applications and competitive positioning against China rather than the safety-testing framework of EO 14110. CAISI's role under the plan focuses on cybersecurity, biosecurity, chemical weapons, and countering foreign adversarial influence—areas that align with national defense priorities rather than the broad consumer protection and civil rights mandate of EO 14110.

State Law Preemption Order (December 2025)

On December 11, 2025, President Trump signed a new executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," which directly targets state-level AI regulation. This order represents a significant expansion of federal AI policy beyond simply revoking Biden-era rules.

| Provision | Mechanism | Timeline |
|---|---|---|
| AI Litigation Task Force | DOJ to sue states over AI laws deemed to obstruct federal policy | Immediate |
| Commerce Department evaluation | Identify "onerous" state AI laws for DOJ referral | 90 days |
| FTC policy statement | Clarify FTC Act preemption of state AI disclosure requirements | 90 days |
| Federal funding leverage | Study withholding rural broadband funding from states with unfavorable AI laws | Under review |
| Legislative recommendation | Prepare proposal for uniform federal AI framework | Ongoing |

The order explicitly targets the Colorado AI Act, claiming it "may even force AI models to produce false results in order to avoid a 'differential treatment or impact' on protected groups." At minimum, Commerce must identify state laws requiring AI models to alter "truthful outputs" or compel disclosures "that would violate the First Amendment."

Legal analysts note the executive order cannot itself preempt state law—only Congress or the courts can do so. Until legal challenges are resolved, state AI laws remain enforceable. The order functions as a "pressure-and-positioning instrument" to narrow the practical space for state AI regulation rather than an immediate legal override.

US AI Governance Timeline (2023-2025)

```mermaid
timeline
  title Evolution of US Federal AI Policy
  section Biden Era
      Oct 2023 : EO 14110 signed
               : 150+ requirements
               : Compute thresholds set
      Nov 2023 : AISI founded at NIST
      Feb 2024 : Elizabeth Kelly named director
               : AISIC consortium formed
      Aug 2024 : Anthropic/OpenAI testing agreements
      Nov 2024 : Joint US-UK model evaluation
               : International Safety Network launched
  section Trump Era
      Jan 2025 : EO 14110 revoked
               : EO 14179 signed
      Feb 2025 : Kelly resigns
               : NIST layoffs announced
               : Paris communique refused
      Jun 2025 : AISI renamed CAISI
               : Mission pivots to innovation
      Jul 2025 : AI Action Plan released
               : 90 policy positions
      Dec 2025 : State preemption EO
               : DOJ AI Litigation Task Force
```

Implementation Progress (Pre-Revocation)

Completed Actions (Oct 2023 - Jan 2025)

Stanford HAI's tracker documented approximately 85% completion of the order's 150 distinct requirements before revocation:

| Policy Area | Requirements | Completion Rate | Key Actions |
|---|---|---|---|
| AI Safety & Security | ≈25 | High | AISI created; evaluation agreements signed |
| Civil Rights & Bias | ≈20 | High | Agency guidance issued |
| Consumer Protection | ≈15 | Medium | Standards development ongoing |
| Labor & Workforce | ≈15 | Medium | Reports published |
| Innovation & Competition | ≈20 | High | Research initiatives launched |
| Government Modernization | ≈30 | High | Chief AI Officers designated |
| International Cooperation | ≈15 | High | UK AISI partnership; international network launched |
| Emerging Threats | ≈10 | Medium | Biosecurity framework under development |

Key Accomplishments

Despite its short duration, the order achieved several notable outcomes:

Model Evaluation Precedent: The joint US-UK evaluation of Claude 3.5 Sonnet and OpenAI o1 established government capacity for pre-deployment testing of frontier models—the first such government-led assessments anywhere. The o1 evaluation notably found the model "solved an additional three cryptography-related challenges that no other model completed."

International Network: In November 2024, the US launched the International Network of AI Safety Institutes, establishing formal cooperation with the UK, Canada, Japan, Singapore, and other allies on AI safety research.

Industry Cooperation: Voluntary agreements with Anthropic and OpenAI demonstrated that frontier AI companies would accept government access to pre-release models—a precedent that may persist even after revocation.

Expert and Public Reactions

The order received broadly positive reactions from AI governance researchers and Democratic lawmakers. Stanford HAI experts characterized it as "a good start" but cautioned it was "not enough" without congressional legislation. Representative Don Beyer called it a "comprehensive strategy for responsible innovation." Polling from the AI Policy Institute found 69% of voters supported the EO, including 64% of Republicans.

Critics focused on two concerns. Senator Ted Cruz described it as creating "barriers to innovation disguised as safety measures." Some legal scholars questioned whether the Defense Production Act could legitimately compel disclosures from AI companies, arguing the statute was designed for industrial production requirements rather than information-gathering mandates.

The revocation generated a different debate. Legal analysts from firms including Skadden and Wiley noted that rescinding the order created significant regulatory uncertainty, since many agency actions taken under EO 14110 were already in effect and their post-revocation legal status was unclear.

Key Uncertainties and Future Outlook

The Broader 2024-2025 Regulatory Landscape

The EO 14110 revocation occurred within a rapidly evolving AI policy environment:

| Level | 2023 | 2024 | Change |
|---|---|---|---|
| Federal AI regulations | 25 | 59 | +136% |
| Agencies issuing regulations | 21 | 42 | +100% |
| State AI bills proposed | ≈300 | 629 | +110% |
| State AI bills passed | ≈50 | 131 | +162% |
| Congressional AI bills proposed | ≈100 | 211 | +111% |
| Congressional AI bills passed | 1 | 4 | +300% (from low base) |
| Prior EO compliance (agencies filing inventories) | 53% | Improved | EO drove compliance |

This landscape reveals a core tension: while federal AI governance has fragmented following the EO revocation, state-level activity has accelerated dramatically—a 110% increase in bills proposed and 162% increase in bills passed year-over-year. The December 2025 state preemption order represents an attempt to address this fragmentation by federal assertion rather than federal legislation. According to the Stanford HAI 2025 AI Index, despite receiving over 10,000 public comments on the AI Action Plan, Congress has not passed major AI legislation since the initial AI in Government Act of 2020.

What Happens Next?

With EO 14110 revoked and AISI transformed into CAISI, several key questions remain:

| Question | Optimistic Scenario | Pessimistic Scenario | Current Assessment |
|---|---|---|---|
| Will voluntary industry agreements continue? | Labs maintain AISI relationships independently | Labs reduce cooperation without mandate | Medium uncertainty—depends on lab incentives |
| Will international coordination survive? | UK/EU/allies continue; US rejoins later | US isolation undermines global frameworks | Medium-high—US refused to sign Paris communique |
| Will Congress legislate AI safety? | Bipartisan legislation codifies key provisions | No legislation; state patchwork emerges | High uncertainty—no major bills advancing |
| Will compute thresholds become obsolete? | Future frameworks adopt capability-based triggers | No governance framework adapts | High—3-5 year threshold for obsolescence |
| Will frontier labs face any oversight? | Industry self-governance; state regulations | No meaningful oversight until incident | Medium-high—depends on state action and incidents |

Lessons for AI Governance

The EO 14110 experience offers several lessons for future AI governance efforts:

Executive action fragility: The complete revocation within 15 months demonstrates that executive orders cannot provide durable AI governance. Of the approximately 150 requirements in EO 14110, roughly 85% were completed before revocation—yet all this implementation effort could be unwound by a single signature. Any sustainable framework requires congressional legislation or deeply embedded institutional practices that survive administration changes. For comparison, the EU AI Act took three years to negotiate but cannot be undone by a single executive; amending it requires the full EU legislative process.

Compute thresholds have a shelf life: The 10^26 FLOP threshold, designed to capture "next-generation" models, was never actually triggered before revocation. Researchers estimate such thresholds become outdated within 3-5 years as algorithmic efficiency improves.

Voluntary cooperation is necessary but insufficient: The Anthropic and OpenAI agreements demonstrated frontier labs will cooperate with government oversight—but this cooperation was voluntary and contingent on political conditions that no longer exist.

International coordination requires US participation: The International Network of AI Safety Institutes launched just months before the US pivot away from safety-focused governance. Without sustained US engagement, international safety coordination faces significant headwinds.


Sources

Primary Sources

  • Executive Order 14110 (Federal Register) - Full text of Biden order
  • Executive Order 14179 (Federal Register) - Trump replacement order
  • Executive Order on State Law Preemption (White House) - December 2025 state preemption order
  • Commerce Secretary Statement on CAISI - AISI transformation announcement

Implementation Tracking

  • Stanford HAI Executive Action Tracker - Implementation progress monitoring
  • Stanford HAI AI Index 2025 - Policy landscape analysis
  • NIST AI Safety Institute - AISI resources and evaluations

Analysis

  • Georgetown CSET Analysis - Policy analysis of the revocation
  • Congress.gov CRS Report - Congressional Research Service analysis
  • TechPolicy.Press Analysis - AISI to CAISI renaming implications
  • Epoch AI Notes on GPT-5 Compute - Compute threshold analysis
  • Gibson Dunn State Preemption Analysis - Legal analysis of December 2025 order
  • Skadden: AI Broad Biden Order Withdrawn - Post-revocation legal analysis
  • Wiley: Trump Revokes Biden AI EO - Regulatory uncertainty analysis

References

A legal analysis from Fenwick examining evolving regulatory frameworks that use computational thresholds (e.g., FLOP counts) to define which AI systems trigger oversight requirements. The piece reviews key policy developments at state and federal levels relevant to frontier AI governance.

This Morrison Foerster client alert analyzes Biden's October 2023 AI Executive Order, focusing on its unprecedented direct obligations on private companies to disclose information about powerful AI models (trained with >10^26 FLOPs) and computing clusters to the federal government. It examines the legal basis for these compelled disclosures under the Defense Production Act and the scope of covered models and clusters.

3. NIST AI Safety Institute (NIST · Government)

This NIST page covers the Biden Administration's Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence, issued October 30, 2023. The EO directed NIST and other agencies to develop AI safety standards, guidelines, and evaluation tools. The order was rescinded on January 20, 2025, under the incoming Trump administration.


The U.S. AI Safety Institute (NIST) announced Memoranda of Understanding with Anthropic and OpenAI in August 2024, establishing formal frameworks for pre- and post-deployment access to major AI models. These agreements enable collaborative research on capability evaluations, safety risk assessment, and mitigation methods, representing the first formal government-industry partnerships of this kind in the U.S.


On January 20, 2025, President Trump revoked Biden's landmark 2023 AI Executive Order (EO 14110) as part of a broader rescission of Biden-era actions. The article analyzes the implications for federal AI governance efforts launched under the Biden EO, including OMB guidance on AI risk management and acquisition, and notes that not all Biden AI initiatives were rolled back.

6. Congress.gov CRS Report (US Congress · Government)

This Congressional Research Service report summarizes Biden's Executive Order 14110 on AI, issued October 30, 2023, covering eight major policy areas including AI safety, civil rights, and federal AI governance. It details agency mandates and timelines, serving as a reference for Congress to understand the administration's AI governance framework. The report is a key document for understanding U.S. federal AI policy as of late 2023.

7. Executive Order 14110 (federalregister.gov · Government)

President Biden's landmark Executive Order on AI (October 2023) established comprehensive federal policy for AI safety, security, and trustworthiness. It mandated safety evaluations for frontier AI models, created reporting requirements for large-scale AI training runs, and directed agencies across the federal government to develop AI governance frameworks and standards.

8. UK AI Safety Institute (Wikipedia · Reference)

Wikipedia article covering the UK AI Safety Institute (AISI), a government body established in 2023 to advance AI safety research and evaluation. It provides an overview of the institute's mission, structure, key activities such as frontier model evaluations, and its role in international AI safety coordination. The article serves as a reference point for understanding the UK's institutional approach to governing advanced AI.


Reports on the Biden administration's appointments to lead the AI Safety Institute (AISI) at NIST, while highlighting concerns about limited available funding (~$1M) for the institute's operations. The piece covers the tension between political momentum for AI safety governance and practical resource constraints facing the newly established body.


The U.S. and UK AI Safety Institutes jointly conducted pre-deployment safety evaluations of Anthropic's upgraded Claude 3.5 Sonnet, testing biological capabilities, cyber capabilities, software/AI development, and safeguard efficacy. The evaluation used question answering, agent tasks, qualitative probing, and red teaming to benchmark the model against prior versions and competitors. This represents one of the first formal government-led pre-deployment AI safety evaluations made public.


Skadden law firm provides legal analysis of the withdrawal of President Biden's broad AI executive order, examining the implications for AI governance, compute thresholds, and the US AI Safety Institute. The piece covers how the rescission affects existing regulatory frameworks and what it signals for future AI policy direction under the new administration.

In November 2024, the U.S. Departments of Commerce and State launched the International Network of AI Safety Institutes, uniting ten countries and the EU to advance collaborative AI safety science, share best practices, and coordinate evaluation methodologies. The inaugural San Francisco convening produced a joint mission statement, multilateral testing findings, and over $11 million in synthetic content research funding. The initiative aims to build global scientific consensus on safe AI development while preventing fragmented international governance.


This document examines the use of training compute thresholds as a governance mechanism for regulating advanced AI systems, analyzing how computational resource requirements can serve as proxies for identifying potentially dangerous AI models. It likely addresses methodological considerations for setting appropriate thresholds and their role in AI safety policy frameworks, particularly in the context of US AI Safety Institute initiatives.

Stanford HAI's policy tracker monitors the implementation status of U.S. executive actions related to artificial intelligence, including executive orders and directives. It provides a structured overview of which AI-related federal mandates have been fulfilled, are in progress, or remain pending. This serves as a reference tool for researchers and policymakers tracking the regulatory landscape.

15. Executive Order 14179 (federalregister.gov · Government)

Signed by President Trump in January 2025, this executive order revokes Biden-era AI safety mandates (including EO 14110) and reorients U.S. AI policy toward promoting innovation, economic competitiveness, and minimizing regulatory burdens. It directs agencies to review and rescind rules seen as impeding AI development and instructs the development of a new national AI action plan prioritizing American dominance in AI.

This Mayer Brown legal analysis covers the Bureau of Industry and Security (BIS) proposed rule requiring companies to report the development of advanced AI models and large compute clusters to the US government. The proposal aims to enhance federal oversight of frontier AI development by mandating transparency about capabilities and infrastructure. It represents a significant regulatory step toward monitoring AI progress at the hardware and model level.

The US and UK AI Safety Institutes conducted a joint pre-deployment evaluation of OpenAI's o1 model, assessing its capabilities and risks across three domains including potential for misuse. The evaluation compared o1's performance to reference models and represents an early example of government-led frontier AI safety testing prior to public release.


NIST and MITRE announced a $20 million collaborative research initiative focused on AI cybersecurity, aiming to develop standards, tools, and frameworks to address AI-related security vulnerabilities. The effort reflects growing U.S. government recognition that AI systems introduce novel cybersecurity risks requiring dedicated research infrastructure. This initiative is part of broader federal efforts to ensure AI safety and security through institutional partnerships.

The U.S. Department of Commerce proposed a rule requiring Infrastructure-as-a-Service (IaaS) providers to implement Know Your Customer (KYC) verification for foreign users accessing cloud computing resources above certain thresholds. The rule aims to prevent adversarial actors from using U.S. cloud infrastructure to train advanced AI models. This legal analysis covers the regulatory implications for cloud providers and the AI industry.

This article examines how federal AI guidance affecting employers has shifted under a new administration, likely referring to the reversal of Biden-era AI executive orders and fact sheets by the Trump administration. It covers practical implications for employers navigating evolving federal AI policy and workplace regulations.

21. Georgetown CSET Analysis (CSET Georgetown)

Georgetown CSET analyzes the executive order focused on advancing American AI leadership by reducing regulatory barriers, examining implications for AI governance, safety standards, and the balance between innovation and oversight. The analysis likely evaluates how deregulatory approaches affect compute thresholds, AI safety institutions, and federal AI policy frameworks.

22. November 2024 joint evaluation of Claude 3.5 Sonnet (UK AI Safety Institute · Government)

The UK and US AI Safety Institutes conducted a joint pre-deployment evaluation of Anthropic's upgraded Claude 3.5 Sonnet, assessing biological capabilities, cyber capabilities, software/AI development, and safeguard efficacy. The evaluation used multiple methodologies including red teaming and agent tasks, benchmarking against prior Claude 3.5 Sonnet, GPT-4o, and o1-preview. This represents an early example of government-led pre-deployment safety testing of frontier AI models.


The GAO assessed federal agency compliance with 13 AI management and talent requirements from Executive Order 14110 (October 2023), finding all requirements with March 2024 deadlines were fully implemented by agencies including OMB, OPM, and GSA. Key implementations included establishing the White House AI Council, Chief AI Officer councils, AI talent recruitment plans, and government-wide AI guidance. This represents foundational infrastructure for coordinated federal AI governance.

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence, establishing risk-based regulations for AI systems deployed in the European Union. It categorizes AI into unacceptable risk, high-risk, limited risk, and minimal risk tiers, with corresponding obligations for transparency, safety, and conformity assessment. It also introduces specific rules for general-purpose AI models, including those with systemic risk.

25. Center for AI Standards and Innovation (CAISI) announcement (US Department of Commerce · Government)

The U.S. Commerce Department announced the renaming and restructuring of the AI Safety Institute into the Center for AI Standards and Innovation (CAISI) under Secretary Howard Lutnick. The shift explicitly reframes the mission away from safety-oriented regulation toward pro-innovation voluntary standards, while retaining national security evaluation functions focused on demonstrable risks like cybersecurity and biosecurity. This represents a significant policy realignment in the U.S. government's approach to AI oversight.


The Stanford HAI 2025 AI Index Report's policy chapter tracks the rapid growth of AI-related legislation, national government AI investment strategies, and emerging international frameworks for AI safety collaboration. It provides empirical data on how governments worldwide are responding to AI development through regulatory and institutional mechanisms.


This White House executive action establishes a federal preemption framework for AI policy, aiming to eliminate conflicting state-level AI regulations in favor of a unified national approach. It asserts federal supremacy over AI governance to prevent a patchwork of state laws that could obstruct national AI development and deployment priorities. The order reflects the administration's intent to accelerate AI adoption by reducing regulatory fragmentation.


Related Wiki Pages

Organizations: US AI Safety Institute · Anthropic · OpenAI · Epoch AI · Bureau of Industry and Security · FAR AI

Analysis: AI Regulatory Capacity Threshold Model · Flash Dynamics Threshold Model · Irreversibility Threshold Model

Historical: International AI Safety Summit Series · Anthropic-Pentagon Standoff (2026)

Policy: China AI Regulatory Framework · Executive Order 14179: Removing Barriers to American Leadership in AI · Bletchley Declaration

Other: Joe Biden · Elizabeth Kelly

Concepts: Governance Overview · International Coordination Mechanisms · Model Registries