AI-Enabled Authoritarian Takeover
Comprehensive analysis documenting how 72% of the global population (5.7 billion people) now lives under autocracy, with AI surveillance deployed in 80+ countries and global internet freedom declining for 15 consecutive years. Evidence suggests AI fundamentally changes authoritarian stability by closing traditional pathways to regime change (revolutions, coups, uprisings) through comprehensive surveillance, predictive policing, and automated enforcement.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Severity | Catastrophic | Billions affected; potentially permanent foreclosure of human freedom |
| Likelihood | Medium-High | 72% of global population already under autocracy; 45 countries autocratizing |
| Timeline | Near-term to ongoing | AI surveillance already deployed in 80+ countries; capabilities advancing rapidly |
| Trend | Worsening | 15 consecutive years of declining global internet freedom; autocracies now outnumber democracies |
| Reversibility | Low | AI tools close traditional pathways for regime change (revolutions, coups, uprisings) |
| Tractability | Medium | Technical countermeasures exist but face adoption barriers; policy responses fragmented |
| Global Coordination | Weak | Export controls limited; authoritarian states actively resist international norms |
Overview
AI could enable authoritarian regimes that are fundamentally more stable and durable than historical autocracies. The concern is not merely that AI enables human rights abuses today—it is that AI-powered authoritarianism might become effectively permanent, closing off pathways for political change that historically enabled transitions to freedom.
2025 Global Surveillance & Authoritarianism Snapshot
| Indicator | 2024 Value | 2025 Value | Change | Source |
|---|---|---|---|---|
| Global AI surveillance market | $1.43 billion | $1.74 billion | +21.5% | Fortune Business Insights |
| Global video surveillance market | $10+ billion | $13.75 billion | +12.1% CAGR | Grand View Research |
| Population affected by internet shutdowns | 4.1 billion | 4.6 billion | +12% | Surfshark |
| Countries with increased censorship | 50 | 60 | +20% | Surfshark |
| Global VPN users | 1.6 billion | 2+ billion | +25% | Security.org |
| Countries with declining internet freedom | 27 | 27 | stable | Freedom House |
| Years of consecutive global internet freedom decline | 14 | 15 | +1 year | Freedom House |
The scale is already staggering: 72% of the global population (5.7 billion people) now lives under autocracy, the highest proportion since 1978. Internet freedom has declined for 15 consecutive years. And 45 countries are currently autocratizing, while only 19 are democratizing—a 2.4:1 ratio favoring authoritarianism. AI surveillance technology has been exported to over 80 countries, with Chinese firms Hikvision, Dahua, and Uniview controlling approximately 60% of China's video surveillance market. Internet shutdowns affected 4.6 billion people (more than half the world's population) in 2025 alone.
Historical autocracies fell through revolutions, coups, popular uprisings, or external pressure. AI surveillance and control technologies may close off these pathways:
- Comprehensive surveillance detects organizing before it becomes effective
- Predictive systems identify dissidents before they act
- Information control prevents coordination among opposition
- Automated enforcement reduces reliance on potentially disloyal human agents
If these tools work as intended, billions could live under repressive regimes indefinitely.
Why This Is a Distinct Risk
This differs from other structural risks:
| Risk | Focus | Key Distinction |
|---|---|---|
| AI-Driven Concentration of Power | Power accumulating in few hands | Could be corporate, state, or AI; not necessarily repressive |
| AI Value Lock-in | General permanence of systems | Could lock in good or bad values; mechanism-agnostic |
| Authoritarian Takeover | Stable political repression | Specifically: loss of human freedom via state coercion |
| Erosion of Human Agency | Gradual loss of human control | May occur without explicit repression |
The specific harm is loss of political freedom at civilizational scale, potentially permanently. Unlike other structural risks, authoritarian takeover involves deliberate, coordinated suppression of dissent by identifiable actors using AI as a tool of control.
Pathways to Authoritarian Takeover
```mermaid
flowchart TD
    subgraph EXISTING[Existing Autocracies]
        A1[AI Surveillance Development] --> A2[Regime Stabilization]
        A2 --> A3[Technology Export]
    end
    subgraph BACKSLIDE[Democratic Backsliding]
        B1[Security Justification] --> B2[Surveillance Expansion]
        B2 --> B3[Institutional Erosion]
    end
    subgraph CAPTURE[Rapid Capture]
        C1[AI-Assisted Coup] --> C2[Control Consolidation]
        C3[Corporate-State Fusion] --> C2
    end
    A3 --> SPREAD[Global Spread]
    B3 --> SPREAD
    C2 --> SPREAD
    SPREAD --> LOCK[Permanent Authoritarianism]
    style LOCK fill:#ffcccc
    style SPREAD fill:#ffddcc
    style A2 fill:#ffe6cc
    style B3 fill:#ffe6cc
    style C2 fill:#ffe6cc
```
State-Led Authoritarianism
Existing authoritarian states develop AI capabilities that make their regimes effectively unchallengeable. China represents the most advanced example, with integrated systems for surveillance, censorship, and predictive policing. These systems spread through technology export (80+ countries have adopted Chinese surveillance technology) and emulation by other regimes.
Key indicators:
- China and Russia rated worst for internet freedom by Freedom House
- 45 countries currently autocratizing (V-Dem 2024)
- Hikvision and Dahua control 34% of global surveillance camera market
Democratic Backsliding
Democratic countries gradually adopt surveillance and control tools for ostensibly legitimate purposes (terrorism, crime, content moderation), eventually enabling authoritarian capture. Half of the 18 countries rated "Free" by Freedom House experienced internet freedom declines from June 2024 to May 2025.
Key indicators:
- V-Dem's 2024 report identifies 25 autocratizing countries that began as democracies
- 9 of 18 "Free" countries experienced internet freedom declines (2024-2025)
- EU AI Act attempts to create guardrails, but implementation varies
AI-Assisted Coup
A small group uses AI capabilities to seize and maintain power that would have been impossible with human-scale surveillance and control. AI lowers the threshold for effective governance by a small elite by automating functions previously requiring large bureaucracies.
Cross-Border Technology Transfer
Through its Digital Silk Road initiative, China has become a major exporter of digital authoritarianism. Leaked documents revealed Chinese company Geedge Networks offering "a commercialized version of the Great Firewall" that governments can install at will.
| Recipient Country | Technology Type | Status (2025) | Freedom House Rating |
|---|---|---|---|
| Pakistan | Great Firewall-style system | Under construction | Not Free (25/100) |
| Ethiopia | Safe City surveillance | Operational | Not Free (20/100) |
| Kazakhstan | Internet control infrastructure | Operational | Not Free (23/100) |
| Myanmar | Censorship systems | Operational; 2025 cybersecurity law | Not Free (10/100) |
| Cambodia | Surveillance infrastructure | Operational | Not Free (24/100) |
| Belarus | Internet control systems | Operational | Not Free (11/100) |
| Bangladesh | Digital authoritarianism tools | Deployed | Partly Free (39/100) |
| Thailand | AI surveillance systems | Deployed | Partly Free (36/100) |
| Philippines | Safe City packages | Deployed | Partly Free (56/100) |
Sources: Freedom House FOTN 2025, Taylor & Francis
Corporate-State Fusion
Technology companies accumulate surveillance capabilities that merge with or capture state functions, creating a new form of authoritarian control. "Safe City" agreements between Huawei and governments—70% in "Partly Free" or "Not Free" countries—exemplify this pathway.
AI Surveillance Technology Market Concentration (2024-2025)
| Company | Country | Global Market Share | Key Products | Entity List Status |
|---|---|---|---|---|
| Hikvision | China | 45% (China IP cameras) | AI cameras, facial recognition | Listed (July 2019) |
| Dahua | China | Part of 60% combined | Smart city systems, biometrics | Listed (July 2019) |
| Uniview | China | Part of 60% combined | AI video analytics | Not listed |
| SenseTime | China | N/A | Facial recognition platform | Listed (December 2021) |
| Megvii | China | N/A | Face++ platform | Listed (October 2019) |
| Yitu | China | N/A | AI recognition systems | Listed (October 2019) |
Combined, Hikvision, Dahua, and Uniview control approximately 60% of China's video surveillance market. Hikvision alone invested RMB 11.864 billion ($1.65 billion) in R&D in 2024, accumulating 10,580+ patents.
Sources: Mordor Intelligence, Atlantic Council
Current Evidence
AI-Enabled Authoritarianism by Region (2025)
| Region | Key Developments | Population Affected | Primary AI Tools |
|---|---|---|---|
| Asia | 10 countries imposed 56 new internet restrictions; China exporting Great Firewall technology | 2+ billion | Facial recognition, predictive policing, content filtering |
| Middle East | Iran using AI for protest surveillance; Saudi facial recognition in Mecca/Medina | 400+ million | Emotion detection, social media monitoring, spyware |
| Eastern Europe | Russia's Sovereign Internet near-complete; YouTube effectively blocked | 200+ million | VPN blocking, content throttling, social media monitoring |
| Africa | Ethiopia, Kenya building Chinese-style systems; Kenya saw largest single-country decline | 500+ million | Safe City packages, mobile surveillance |
| Latin America | Guatemala, Colombia adopting Chinese surveillance; Brazil content moderation pressures | 300+ million | Safe City systems, social media monitoring |
Sources: Freedom House FOTN 2025, ASPI China AI Report, Modern Diplomacy
Global Statistics
| Metric | Value | Source | Year |
|---|---|---|---|
| Population under autocracy | 72% (5.7 billion people) | V-Dem Democracy Report | 2024 |
| Countries autocratizing | 45 (vs. 19 democratizing) | V-Dem Democracy Report | 2024 |
| Consecutive years of declining internet freedom | 15 | Freedom House | 2025 |
| Countries with arrests for online expression | 57 of 72 surveyed (record high) | Freedom House | 2025 |
| Countries receiving Chinese surveillance tech | 80+ | CSIS Big Data China | 2024 |
| Global surveillance camera market (Hikvision + Dahua) | 34% | Biometric Update | 2024 |
| Internet users in "Free" countries | 16% | Freedom House | 2025 |
China: The Leading Case Study
China represents the most advanced deployment of AI-enabled authoritarian control, with integrated systems spanning surveillance, censorship, predictive policing, and information control.
Key systems deployed:
| System | Function | Coverage | AI Capabilities |
|---|---|---|---|
| Sharp Eyes | Urban video surveillance | Nationwide, near-total in Xinjiang/Tibet | Facial recognition, behavior analysis |
| Integrated Joint Operations Platform | Dissent prediction | Xinjiang | Predictive policing, "pre-crime" detection |
| Great Firewall | Internet censorship | Nationwide | Real-time content filtering, VPN blocking |
| Social Credit System | Behavioral compliance | Fragmented pilots; 32M+ travel bans issued | Limited AI currently; expanding per 2024-2025 plan |
Xinjiang as intensive case: The region represents the most intensive deployment of AI-enabled population control:
- 12 million Uyghurs subject to comprehensive surveillance
- Blood samples, biometrics, GPS tracking, and behavioral monitoring collected systematically
- European Parliament study documents AI used to "detect potential dissidents before any concrete act is committed"
Recent developments (2024-2025):
- ASPI report (December 2025) describes China as "the world leader in adopting generative AI" for surveillance and "public opinion management"
- According to ASPI analyst Nathan Attrill: "China is harnessing AI to make its existing systems of control far more efficient and intrusive"
- AI has become "the backbone of a far more pervasive and predictive form of authoritarian control"
- Shanghai district documents detail plans for AI-powered cameras and drones to "automatically discover and intelligently enforce the law," including alerting police to crowd gatherings
- New capabilities include predicting public demonstrations and monitoring prison inmates' moods
- Export of "commercialized version of the Great Firewall" to Pakistan, Ethiopia, Kazakhstan, Myanmar, Cambodia, and Belarus
Russia: Digital Sovereignty as Control
Russia has pursued "digital sovereignty" to enable comprehensive state control over internet access and information flow.
Key developments:
- Sovereign Internet Law (2019): Enables isolation of Russia's internet from global networks
- By December 2024, YouTube traffic in Russia had dropped to 20% of normal levels following sustained throttling, a de facto block
- April 2025: Ban on foreign messaging apps for government bodies and public services
- July 2025: First ban on not just distributing but consuming "extremist materials"
- Freedom House rates Russia among the worst for internet freedom, with the largest 15-year decline recorded
Impact on citizens:
- VPN use reached 41% by 2025, concentrated among young, urban, affluent users
- Number of permanently inaccessible websites in 2024 was 5x higher than in 2022
- VKontakte surpassed YouTube in popularity by April 2025 after systematic blocking
- Roskomnadzor restricted access to 12,600 materials "promoting VPN services" in January-April 2025 alone—twice as many as all of 2024
VPN Adoption as Censorship Resistance (2025)
| Country | VPN Adoption Rate | Regime Type | Primary Censorship Driver |
|---|---|---|---|
| Indonesia | 55% | Flawed democracy | Content filtering (pornography, political dissent) |
| India | 43% | Flawed democracy | Periodic internet shutdowns, content blocks |
| Russia | 41% | Authoritarian | YouTube throttling, social media blocks, news censorship |
| United Arab Emirates | 38% | Authoritarian | VoIP restrictions, social media monitoring |
| Saudi Arabia | 36% | Authoritarian | Religious content, political expression restrictions |
| China | 31% | Authoritarian | Great Firewall, comprehensive platform blocks |
| Global average | 33% | — | Approximately 2 billion users worldwide |
Source: Vocal Media VPN Report 2025, CEPA
Warning Signs in Democracies (2025)
Half of the 18 countries rated "Free" by Freedom House experienced internet freedom declines during June 2024 to May 2025.
| Country | Freedom Score Change | Key Concerns (2024-2025) |
|---|---|---|
| Georgia | −5 points | Largest decline among free countries; election manipulation concerns |
| Germany | −3 points | Declining despite AI Act passage; content moderation pressures |
| United States | −3 points | AI social media surveillance of visa holders; CIVICUS downgrade to "obstructed" |
| Average (Free countries) | −1.2 points | 9 of 18 "Free" countries experienced declines |
Source: Freedom House Freedom on the Net 2025
Middle East: AI as a Tool of Repression (2025)
A 2025 academic study analyzed AI-driven governance in Iran, Saudi Arabia, and the UAE through the TRIAD framework (Technology, Regime Intent, Algorithmic Deployment):
| Country | Primary AI Tools | Key Applications | Notable Cases |
|---|---|---|---|
| Iran | Facial recognition, web traffic analysis, geolocation | Surveillance of women's protests; targeting of activists | Used to identify and arrest women not wearing hijab |
| Saudi Arabia | Facial recognition, crowd monitoring | Deployed in Mecca and Medina for "crowd management" | Creates vast biometric databases of pilgrims |
| UAE | Predictive policing, social media monitoring | Pre-crime identification; dissent suppression | Integrated with national ID systems |
| Egypt | Social media AI, keyword monitoring | Predicting and preemptively suppressing protests | Hashtag and activity analysis |
| Bahrain | Spyware, AI-driven monitoring | Targeting activists with surveillance malware | Arrests based on online activity |
Source: Taylor & Francis Democratization Journal 2025
Specific concerns:
- United States: The Century Foundation rates American democracy at 57/100, a 28% drop in one year—described as "well into authoritarianism." Federal authorities announced AI-assisted social media surveillance of student visa recipients. CIVICUS downgraded U.S. civic freedoms from "narrowed" to "obstructed."
- Germany/EU: Declining Freedom House scores despite AI Act passage
- Weak democracies: Brookings research found that weak democracies "exhibited backsliding—a dismantling of democratic institutions" regardless of whether surveillance technology came from China or the US
- Israel: 2024 Facial Recognition Bill allows military-grade computer vision in domestic public spaces
What Makes AI Different
Previous surveillance and control technologies were limited by human capacity. AI changes this fundamentally:
| Limitation | Pre-AI Reality | AI Capability | Authoritarian Implication |
|---|---|---|---|
| Human attention | Stasi employed 1 informant per 63 citizens | AI can process all communications simultaneously | No "attention gap" for organizing |
| Censorship speed | Human review creates backlogs | Real-time content filtering at scale | Viral content can be blocked before spreading |
| Pattern recognition | Analysts miss subtle signals | AI identifies dissent patterns across millions | "Pre-crime" detection of organizing |
| Enforcement personnel | Requires large, potentially disloyal bureaucracy | Automated enforcement reduces human agents | Fewer points of potential defection |
| Information control | Underground networks persist | AI can map and disrupt networks | Harder to maintain alternative information ecosystems |
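The "attention gap" row above can be made concrete with a toy sketch. This is illustration only: keyword matching stands in for a real content classifier, and every figure (reviewer count, review time, blocked terms) is hypothetical, chosen to show orders of magnitude rather than to model any actual system.

```python
# Toy illustration of the human-review vs. automated-filtering throughput gap.
# Keyword matching stands in for a real content classifier; all figures are
# hypothetical and chosen only to show orders of magnitude.

BLOCKED_TERMS = {"protest", "assembly", "strike"}

def flag_message(text: str) -> bool:
    """Return True if the message contains any blocked term."""
    return bool(set(text.lower().split()) & BLOCKED_TERMS)

def daily_human_capacity(reviewers: int, seconds_per_message: float,
                         hours_per_day: float = 8.0) -> int:
    """Messages a pool of human censors can review per day."""
    return int(reviewers * hours_per_day * 3600 / seconds_per_message)

# 10,000 censors at 10 s/message cover ~28.8 million messages a day;
# an automated filter is bounded only by compute, so coverage can scale
# to the full message volume with no attention gap.
human_capacity = daily_human_capacity(reviewers=10_000, seconds_per_message=10)
flags = [flag_message(m) for m in
         ("meet at the square for the protest", "lunch at noon?")]
```

The point of the sketch is the asymmetry: human capacity grows linearly with headcount, while automated matching grows with compute, which is why viral content can be blocked before it spreads.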
Technical Affordances of AI-Enabled Authoritarianism
Research from Oxford University identifies four technical capabilities that enable AI surveillance to bypass democratic constraints:
| Affordance | Description | Authoritarian Application | Pre-AI Limitation |
|---|---|---|---|
| Population-scale data ingestion | Process data from millions simultaneously | Monitor all citizens, not just known dissidents | Human analysts could only track thousands |
| Black-box inference | Draw conclusions without transparent reasoning | Flag "suspicious" behavior without explainable criteria | Human judgment could be questioned |
| Predictive automation | Forecast future actions from behavioral patterns | "Pre-crime" detection of organizing | Could only respond to completed actions |
| Real-time execution speed | Act on insights within milliseconds | Block content before viral spread; intercept organizing | Response delays allowed information spread |
China's Sharp Eyes program integrates 200+ million AI-enabled cameras into a national monitoring network for "100% coverage."
The Stability Mechanism
Traditional autocracies fell through predictable mechanisms that AI may foreclose:
| Mechanism of Regime Change | How AI Undermines It |
|---|---|
| Popular uprising | Detected and disrupted before critical mass; predictive analytics identify potential leaders |
| Military coup | AI surveillance of military communications; automated monitoring of officer networks |
| Elite defection | Comprehensive monitoring makes coordination among elites visible and risky |
| External pressure | Information control limits external influence; reduced dependence on international integration |
| Economic collapse | AI-optimized resource allocation may improve regime efficiency; surveillance enables rationing enforcement |
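The elite-defection and coup rows above rest on network visibility: contact metadata alone can surface likely coordinators. A minimal sketch, with invented names and degree centrality as a crude stand-in for the far richer analytics a real system would apply:

```python
# Toy sketch of how comprehensive contact metadata makes coordination visible.
# Nodes are people, edges are observed contacts; degree centrality is a crude
# stand-in for real network analytics. All data here is invented.

from collections import defaultdict

contacts = [
    ("officer_a", "officer_b"),
    ("officer_a", "officer_c"),
    ("officer_a", "officer_d"),
    ("officer_b", "officer_c"),
    ("clerk_x", "clerk_y"),
]

def degree_centrality(edges):
    """Count how many observed contacts each node has."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return dict(degree)

def likely_coordinators(edges, threshold=3):
    """Flag nodes whose contact degree meets the threshold."""
    return {n for n, d in degree_centrality(edges).items() if d >= threshold}

# officer_a touches three distinct contacts and is flagged;
# isolated pairs like the two clerks are not.
flagged = likely_coordinators(contacts)
```

Even this trivial heuristic shows why coordination among elites becomes risky under total metadata collection: the act of organizing is itself the signal.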
This creates a "stability trap" where AI-enabled authoritarianism may be self-reinforcing: the more effective it becomes, the harder it is to reverse. The AI surveillance market is growing at 21.3% annually, projected to reach $12.46 billion by 2030, suggesting rapid capability expansion.
Severity Assessment
Why This Could Be Catastrophic
| Factor | Assessment | Notes |
|---|---|---|
| Scale | 5.7 billion people currently under autocracy | 72% of global population; highest since 1978 |
| Duration | Potentially permanent | No clear mechanism for liberation once AI surveillance matures |
| Trajectory foreclosure | High | Locks out future democratic transitions and positive trajectories |
| Spread risk | High | Technology export and emulation accelerating; 80+ countries already have Chinese surveillance tech |
| Irreversibility | Very High | Each year of consolidation makes reversal harder |
Probability Estimates for Authoritarian AI Scenarios
| Scenario | 10-Year Probability | 30-Year Probability | Key Uncertainties |
|---|---|---|---|
| Global autocracy share exceeds 75% | 40-60% | 50-70% | Currently at 72%; trajectory unclear |
| China-style surveillance in 100+ countries | 50-70% | 70-85% | Already at 80+; technology transfer accelerating |
| Major democracy falls to AI-enabled authoritarianism | 15-30% | 30-50% | US democracy score fell 28% in a single year (2025) |
| Effective technical countermeasures emerge | 30-50% | 50-70% | VPN adoption growing but increasingly blocked |
| International coordination halts spread | 10-20% | 20-35% | Current export controls ineffective |
| AI surveillance becomes prohibitively expensive | 5-15% | 10-20% | Costs declining rapidly; $1.74B market growing 21%/year |
Uncertainty Factors
| Factor | Reduces Concern | Increases Concern |
|---|---|---|
| AI effectiveness | May not work as well as feared; gaps persist | Improving rapidly; China claims generative AI approach "expert-level" |
| Human adaptation | Countermeasures emerge; VPN use at 41% in Russia | Adaptation concentrated among educated, urban elites; 2 billion VPN users globally |
| International pressure | May slow adoption | Authoritarian states resist; export controls have limited effect |
| Technical limitations | Current systems have gaps | Gaps narrowing with each generation; 60 countries increased censorship in 2025 |
| Regime incentives | Surveillance is expensive; may overreach | Cost declining 21%/year; benefits to regime stability substantial |
Expert Estimates
The Carnegie Endowment for International Peace warns that AI presents "significant threats to democracies by enabling malicious actors—from political opponents to foreign adversaries—to manipulate public perceptions, disrupt electoral processes, and amplify misinformation."
The National Endowment for Democracy characterizes China's approach as "data-centric authoritarianism" that "could globalize repression" through technology transfer and normative influence.
Relationship to Other Risks
| Connection | Relationship |
|---|---|
| AI Authoritarian Tools | The capabilities that enable this risk |
| AI-Driven Concentration of Power | Often co-occurs; authoritarianism is one form of power concentration |
| AI Value Lock-in | Authoritarian systems may become locked in |
| Erosion of Human Agency | Citizens lose meaningful agency under authoritarianism |
| AI Mass Surveillance | Key enabling capability |
Potential Responses
Responses That Address This Risk
| Response | Mechanism | Effectiveness | Status |
|---|---|---|---|
| US AI Chip Export Controls | Restricts surveillance technology transfer | Medium | 19 Chinese AI companies on Entity List; gaps persist |
| EU AI Act | Bans certain AI surveillance uses | Medium | Passed March 2024; implementation ongoing |
| Privacy-preserving technology | Technical countermeasures (encryption, VPNs) | Low-Medium | Widely used but increasingly blocked |
| Democratic resilience building | Strengthens institutions against capture | Medium-High | Varies by country; requires political will |
| International pressure | Diplomatic and economic costs | Low | Limited effectiveness against major powers |
Technical Countermeasures
| Technology | Function | Current Adoption | Limitations |
|---|---|---|---|
| End-to-end encryption | Protects communications from surveillance | High in democracies | Governments seeking backdoors; metadata still exposed |
| VPNs | Circumvents internet censorship | 36% in Russia (March 2025) | Increasingly blocked; requires technical sophistication |
| Tor/onion routing | Anonymous internet access | Limited | Slow; some countries block entry nodes |
| Decentralized social networks | Resist centralized censorship | Very low | Network effects favor centralized platforms |
| Mesh networks | Communication without central infrastructure | Experimental | Limited range; requires hardware |
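The "metadata still exposed" limitation in the encryption row can be shown with a minimal sketch. SHA-256 hashing stands in for real encryption here, and all names and values are hypothetical; this is an illustration of the concept, not working cryptography.

```python
# Minimal sketch of why end-to-end encryption alone does not defeat
# surveillance: the ciphertext hides content, but routing metadata
# (sender, recipient, time, size) remains observable. SHA-256 hashing
# stands in for real encryption; this is illustration, not crypto.

import hashlib

def send(sender: str, recipient: str, plaintext: str, timestamp: int) -> dict:
    ciphertext = hashlib.sha256(plaintext.encode()).hexdigest()
    return {
        "from": sender,          # visible to network observers
        "to": recipient,         # visible
        "time": timestamp,       # visible
        "size": len(plaintext),  # visible
        "body": ciphertext,      # opaque without the key
    }

def observable_metadata(packet: dict) -> dict:
    """What a network observer sees without breaking the encryption."""
    return {k: v for k, v in packet.items() if k != "body"}

packet = send("activist_1", "activist_2", "meet at 8", timestamp=1_700_000_000)
meta = observable_metadata(packet)
# 'meta' still reveals who talked to whom and when - enough, combined with
# the network-mapping capabilities described earlier, to reconstruct an
# organizing network without reading a single message.
```

This is why the countermeasure table pairs encryption with onion routing and mesh networks: they attack the metadata layer that encryption leaves exposed.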
Policy Approaches
Export controls:
- US has placed 19 Chinese AI facial recognition companies on Entity List (as of mid-2022)
- Bulletin of the Atomic Scientists argues current controls are insufficient; supply chains remain opaque
- OECD (April 2024) called for "trustworthy technology development guided by democratic principles"
Regulatory frameworks:
- EU AI Act bans real-time biometric identification in public spaces (with exceptions)
- Atlantic Council recommends democracies "establish ethical frameworks, mandate transparency, limit how mass surveillance data is used, enshrine privacy protections, and impose clear redlines on government use of AI for social control"
Institutional Measures
Democratic oversight:
- Strong judicial review of surveillance programs
- Independent oversight bodies with technical expertise
- Sunset clauses on emergency surveillance powers
- Whistleblower protections for surveillance abuses
International coordination:
- Summit for Democracy Export Controls and Human Rights Initiative
- Potential for democratic technology alliances
- Support for civil society in autocratizing countries
Core Questions
| Question | Optimistic View | Pessimistic View |
|---|---|---|
| Can AI make authoritarianism permanently stable? | New vulnerabilities will emerge; AI has failure modes | Historical escape routes increasingly foreclosed |
| How quickly will this spread? | Slow adoption; most countries lack infrastructure | 80+ countries already have Chinese tech; accelerating |
| Will democracies resist backsliding? | Strong institutions; public values privacy | Half of "Free" countries declining; security trumps liberty |
| Can technical countermeasures keep pace? | Encryption, VPNs, decentralization work | Governments adapting; countermeasures require sophistication |
| Will international coordination work? | Democratic alliances forming | Authoritarian states resist; export controls leaky |
Key Cruxes
Crux 1: Does AI fundamentally change the stability of authoritarianism?
- If yes: Unprecedented intervention urgency; prevention-focused strategy essential
- If no: Continue traditional democracy support; AI-specific measures less critical
Crux 2: Are democratic backsliding risks comparable to authoritarian consolidation?
- If yes: Focus on domestic surveillance limits in democracies
- If no: Focus on export controls and supporting dissidents abroad
Crux 3: Can technical countermeasures keep pace with surveillance capabilities?
- If yes: Invest heavily in privacy technology development and deployment
- If no: Policy and institutional approaches become primary intervention point
Sources & Resources
Primary Reports
- V-Dem Institute (2024): Democracy Report 2024: Democracy Winning and Losing at the Ballot - Comprehensive democracy statistics showing 72% of the global population under autocracy
- Freedom House (2025): Freedom on the Net 2025: An Uncertain Future for the Global Internet - Documents the 15th consecutive year of declining internet freedom
- European Parliament (2024): AI and Human Rights: Using AI as a Weapon of Repression - Analysis of AI-enabled repression mechanisms
- CSIS Big Data China (2024): The AI-Surveillance Symbiosis in China - Technical analysis of China's surveillance ecosystem
Policy Analysis
- Carnegie Endowment (2024): Can Democracy Survive the Disruptive Power of AI? - Analysis of AI threats to democratic governance
- Brookings (2024): Geopolitical Implications of AI and Digital Surveillance Adoption - Research on surveillance technology and democratic backsliding
- National Endowment for Democracy (2024): Data-Centric Authoritarianism - Analysis of China's technology exports and global repression
- Bulletin of the Atomic Scientists (2024): How AI Surveillance Threatens Democracy Everywhere - Policy recommendations for democratic response
- Atlantic Council (2024): The West, China, and AI Surveillance - Comparative analysis and policy recommendations
Country-Specific Sources
- Freedom House (2024): Russia: Freedom on the Net 2024 Country Report - Detailed analysis of Russia's internet control measures
- DGAP (2024): Deciphering Russia's "Sovereign Internet Law" - Technical analysis of Russia's digital sovereignty approach
- ASPI (2024): China's AI Surveillance Report - Analysis of China's generative AI adoption for surveillance
- CNAS (2024): The Dangers of the Global Spread of China's Digital Authoritarianism - Congressional testimony on technology exports
Academic Research
- Oxford AI Governance Initiative (2025): Toward Resisting AI-Enabled Authoritarianism - Framework for resistance strategies
- Lawfare (2024): The Authoritarian Risks of AI Surveillance - Legal and policy analysis
- Taylor & Francis (2025): From Predicting Dissent to Programming Power: AI-Driven Authoritarian Governance (Arash Beidollahkhani) - TRIAD framework for analyzing AI authoritarianism