Americans for Responsible Innovation

Americans for Responsible Innovation (ARI) is a DC-based AI policy group founded in 2023 with backing from EA-aligned donors including Dustin Moskovitz's Open Philanthropy network. ARI focuses on bipartisan AI safety legislation in Congress, particularly around catastrophic risk prevention, compute governance, and responsible AI development standards.

Type: Safety Org

Related people: Brad Carson, Eric Gastfriend, Doug Calidas, Justin Bullock

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Congressional Access | High | Active lobbying and briefing program with bipartisan engagement |
| Funding | Significant | Backed by well-resourced EA-aligned donors |
| Policy Focus | Concentrated | Catastrophic risk prevention, AI safety legislation |
| Bipartisan Reach | High | Engages both parties on AI safety as a bipartisan priority |
| EA Community Ties | Strong | Funded partly through the Open Philanthropy network |
| Track Record | Developing | Founded 2023, still building policy wins |

Organization Details

| Attribute | Details |
|---|---|
| Founded | 2023 |
| Location | Washington, D.C. |
| Structure | Nonprofit policy organization |
| Leadership | Brad Carson (President, former US Congressman); Eric Gastfriend (Executive Director) |
| Key Funders | Inclusive Abundance Action, Omidyar Network, Coefficient Giving; no for-profit corporate or non-US funding accepted |
| Advisory Board | Erik Brynjolfsson (Stanford), Stuart Russell (UC Berkeley CHAI) |
| Website | ari.us |
| Focus Areas | AI safety legislation, catastrophic risk, compute governance, bipartisan AI policy |

Overview

Americans for Responsible Innovation (ARI) emerged in 2023 as part of a wave of new AI policy organizations in Washington, D.C., distinguished by its significant funding from effective altruism-aligned donors and its focus on building bipartisan support for AI safety legislation. In October 2023, Semafor highlighted ARI as one of the "new think tanks influencing AI policy in Washington," noting its connections to the billionaire-backed effective altruism movement.

ARI's approach focuses on direct Congressional engagement, working to convince legislators from both parties that AI safety is a national priority requiring legislative action. Unlike pure research organizations, ARI combines policy analysis with active lobbying and advocacy, aiming to translate AI safety concerns into concrete legislation.

Policy Priorities

Catastrophic Risk Prevention: ARI advocates for legislation that would establish regulatory frameworks specifically targeting the most severe potential risks from advanced AI systems, including bioweapons enablement, critical infrastructure attacks, and loss of human control over AI systems.

Compute Governance: Drawing on research from organizations like GovAI, ARI has advocated for compute-based governance measures including reporting requirements for large training runs and know-your-customer rules for cloud compute providers.

Responsible Development Standards: ARI supports legislative requirements for AI companies to conduct safety evaluations, report dangerous capabilities, and maintain human oversight mechanisms for their most powerful systems.

EA Community Connection

ARI's funding and intellectual roots in the effective altruism community connect it to a broader ecosystem of AI safety organizations, including Open Philanthropy, the Center for AI Safety, and others that share its concern about catastrophic AI risks. This connection gives ARI access to deep technical expertise on AI safety, but it has also drawn scrutiny from critics of the EA movement's influence on AI policy.


Key Dynamics

EA policy bridge: ARI represents one of the most direct channels through which EA-community AI safety concerns are translated into Congressional policy proposals, alongside IAPS and AI Policy Institute.

Bipartisan strategy: ARI's bipartisan framing — emphasizing national security risks to appeal to Republicans and corporate accountability to appeal to Democrats — mirrors broader efforts to make AI governance a non-partisan issue.

Shifting political landscape: The political environment for AI regulation has been volatile since 2023, with industry pushback, changing administrations, and competing policy priorities creating a challenging landscape for organizations advocating new AI legislation.

References

Americans for Responsible Innovation, official website: https://ari.us/

Structured Data

Organization

| Property | Value | As Of |
|---|---|---|
| Founded Date | 2023 | |
| Headquarters | Washington, DC | |
| Legal Structure | 501(c)(4) nonprofit | |

Political

| Property | Value | As Of |
|---|---|---|
| Lobbying Spend | $2.1 million | 2025 |
| Lobbying Spend (earlier value) | $1.1 million | 2024 |

General

| Property | Value |
|---|---|
| Website | https://ari.us/ |

Other

| Property | Value | As Of |
|---|---|---|
| Campaign | Co-organized letter signed by 260 bipartisan state lawmakers from 50 states (130R, 129D, 1I) opposing a federal ban on state AI regulation | 2025 |
| Campaign (earlier value) | Co-led (with ITI) a coalition of 60+ companies, universities, and nonprofits calling on Congress to codify the US AI Safety Institute within NIST | Oct 2024 |
| PAC Fundraising | $91,661 | 2024 |

Related Wiki Pages

Other

Justin Bullock, Stuart Russell

Concepts

Government Orgs Overview, Safety Orgs Overview

Organizations

Open Philanthropy, Center for AI Safety, GovAI, Center for AI Policy, Institute for AI Policy and Strategy, Leading the Future super PAC

Historical

Anthropic-Pentagon Standoff (2026)