Bipartisan national security and defense policy think tank founded in 2007. CNAS has produced the most detailed public research on hardware-enabled mechanisms (HEMs) for AI chip governance, including the "Secure, Governable Chips" working paper, which documents existing chip features from AMD, Apple, Intel, and Nvidia that could support governance functions. Its work sits alongside that of the Wadhwani Center for AI and Advanced Technologies at CSIS, led by Gregory Allen, a leading voice on AI chip export controls and allied coordination for semiconductor governance.
Facts
Organization
Founded Date: 2007
Headquarters: Washington, DC
Legal Structure: 501(c)(3) nonprofit
Biographical
Notable For: Established the policy framework for hardware-enabled AI chip governance through Tim Fist's research; coined 'on-chip governance' terminology
Approaches
- Compute Monitoring: Analyzes two compute monitoring approaches: cloud KYC (implementable in 1-2 years, covers ~60% of frontier training via AWS/Azure/Google) and hardware governance (3-5 year timeline). Cloud KYC targ...
- Hardware-Enabled Governance: RAND analysis identifies attestation-based licensing as the most feasible hardware-enabled governance mechanism, with a 5-10 year timeline, while 100,000+ export-controlled GPUs were smuggled to China in ... (a toy sketch of such a licensing flow follows this list)
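To make "attestation-based licensing" concrete, here is a minimal toy sketch of the flow, written under loud assumptions: a real HEM would use a hardware root of trust and asymmetric signatures rather than shared HMAC keys, and all names below (DEVICE_KEY, issue_license, etc.) are hypothetical illustrations, not any vendor's or RAND's actual design. The chip proves its identity and firmware state to a licensing server, which returns a short-lived permit the chip checks before accepting work.

```python
# Toy attestation-based licensing flow (illustrative only; stdlib HMAC
# stands in for hardware-backed signatures).
import hmac, hashlib, json, time

DEVICE_KEY = b"per-chip secret provisioned at fab"   # stand-in for a hardware key
SERVER_KEY = b"regulator licensing key"              # stand-in for the licensor's key

def attest(nonce: bytes, firmware_hash: bytes) -> bytes:
    """Chip side: MAC over the server's fresh nonce and the measured firmware."""
    return hmac.new(DEVICE_KEY, nonce + firmware_hash, hashlib.sha256).digest()

def issue_license(nonce: bytes, firmware_hash: bytes, report: bytes, ttl_s: int = 3600):
    """Server side: verify the attestation report, then sign a time-limited license."""
    expected = hmac.new(DEVICE_KEY, nonce + firmware_hash, hashlib.sha256).digest()
    if not hmac.compare_digest(report, expected):
        raise PermissionError("attestation failed")
    payload = json.dumps({"expires": time.time() + ttl_s}).encode()
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    return payload, tag

def license_valid(payload: bytes, tag: bytes) -> bool:
    """Chip side: accept workloads only under an unexpired, authentic license."""
    ok = hmac.compare_digest(tag, hmac.new(SERVER_KEY, payload, hashlib.sha256).digest())
    return ok and json.loads(payload)["expires"] > time.time()

nonce = b"fresh-nonce"
fw = hashlib.sha256(b"approved firmware v1").digest()
payload, tag = issue_license(nonce, fw, attest(nonce, fw))
print("workload permitted:", license_valid(payload, tag))
```

The expiry field is what makes this a governance lever: a chip that cannot reach the licensor stops accepting new work once its permit lapses, rather than needing an affirmative kill signal.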
Analysis
- Hardware Mechanisms for International AI Agreements: A comprehensive technical taxonomy of hardware-based AI verification mechanisms (location attestation, TEEs, compute metering, interconnect limits, chip tracking, and remote disablement) mapped to sp...
- AI Safety Defense in Depth Model: Mathematical framework showing independent AI safety layers with 20-60% individual failure rates can achieve 1-3% combined failure, but deceptive alignment creates correlations (ρ=0.4-0.5) that inc... (see the sketch after this list)
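The defense-in-depth arithmetic is easy to reproduce: for independent layers the combined failure probability is the product of per-layer rates, while a shared failure mode (such as deceptive alignment) correlates the layers and pushes the joint rate back up. The sketch below is a toy model, not the linked framework's actual specification; the one-factor Gaussian copula and the specific layer rates are my illustrative choices within the quoted ranges.

```python
# Toy check: independent vs. correlated failure of three safety layers.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p = np.array([0.4, 0.3, 0.25])   # per-layer failure rates, within the 20-60% band
rho = 0.45                        # pairwise latent correlation, matching rho = 0.4-0.5

print(f"independent combined failure: {p.prod():.3f}")   # 0.4 * 0.3 * 0.25 = 0.030

# Correlated case via a one-factor Gaussian copula: layer i fails when its
# latent normal Z_i = sqrt(rho)*F + sqrt(1-rho)*e_i falls below Phi^{-1}(p_i).
n = 1_000_000
F = rng.standard_normal(n)                    # common factor (e.g. deceptive alignment)
E = rng.standard_normal((len(p), n))          # per-layer idiosyncratic noise
Z = np.sqrt(rho) * F + np.sqrt(1 - rho) * E
fails = Z < norm.ppf(p)[:, None]              # marginals stay at p_i by construction
print(f"correlated combined failure: {fails.all(axis=0).mean():.3f}")
```

With these numbers the independent product lands at 3%, inside the quoted 1-3% band, while rho = 0.45 multiplies the joint failure rate severalfold, which is the correlation effect the model card describes.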
Organizations
- Open Philanthropy: Open Philanthropy rebranded to Coefficient Giving in November 2025. See the Coefficient Giving page for current information.
- Future of Humanity Institute: The Future of Humanity Institute (2005-2024) was a pioneering Oxford research center that founded existential risk studies and AI alignment research, growing from 3 to ~50 researchers and receiving...
- Center for AI Safety: CAIS is a nonprofit research organization founded by Dan Hendrycks that has distributed compute grants to researchers, published technical AI safety papers including the representation engineering ...
- Carnegie Endowment for International Peace: A competent reference entry on Carnegie Endowment for International Peace covering its AI governance work, relationship to the AI safety community, and institutional limitations; useful as an organ...
- CSIS Wadhwani Center for AI and Advanced Technologies: The Wadhwani Center for AI and Advanced Technologies at CSIS publishes influential research on AI in national security, military competition, autonomous weapons, and US-China tech rivalry. Establis...
- William and Flora Hewlett Foundation: The Hewlett Foundation is a $14.8 billion philanthropic organization that focuses primarily on AI cybersecurity rather than AI alignment or existential risk, distinguishing it from AI safety-focuse...
Concepts
- AI Chip Governance Supply Chain: Covers AI chip governance supply chain frameworks including U.S. export controls (EAR, FDPR), hardware-enabled governance proposals, key chokepoints (TSMC, ASML, Nvidia), enforcement gaps and smugg...
- Governance-Focused Worldview: This worldview argues governance/coordination is the bottleneck for AI safety (not just technical solutions), estimating 10-30% P(doom) by 2100. Evidence includes: compute export controls reduced H...
- Compute Thresholds: Comprehensive analysis of compute thresholds (EU: 10^25 FLOP, US: 10^26 FLOP) as regulatory triggers for AI governance, documenting that algorithmic efficiency improvements of ~2x every 8-17 months... (see the sketch after this list)
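The threshold-erosion point is worth a back-of-envelope check. If algorithms get ~2x more compute-efficient every 8-17 months (the range quoted above), a fixed FLOP trigger captures progressively more capable models over time. The arithmetic below is my own illustration of that dynamic, not a calculation from the linked page; the 3-year horizon is an arbitrary example.

```python
# Erosion of a fixed FLOP threshold under algorithmic efficiency gains.
threshold_flop = 1e25                      # EU AI Act trigger quoted above
horizon_months = 36                        # illustrative 3-year horizon
for months_per_doubling in (8, 17):        # quoted ~2x every 8-17 months
    doublings = horizon_months / months_per_doubling
    factor = 2 ** doublings
    print(f"2x every {months_per_doubling} mo: a 10^25 raw-FLOP model in 3 years "
          f"performs like ~{factor:.0f}x that compute today "
          f"({threshold_flop * factor:.1e} FLOP-equivalent)")
```

Depending on where the doubling time falls in the quoted range, the same legal threshold corresponds to roughly 4x to 23x more effective compute after three years, which is why static FLOP triggers need periodic revision.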
Key Debates
- AI Risk Critical Uncertainties Model: Identifies 35 high-leverage uncertainties in AI risk across compute (scaling breakdown at 10^26-10^30 FLOP), governance (10% P(US-China treaty by 2030)), and capabilities (autonomous R&D 3 years aw...
- AI Governance and Policy: Comprehensive analysis of AI governance mechanisms estimating 30-50% probability of meaningful regulation by 2027 and 5-25% x-risk reduction potential through coordinated international approaches. ...