AI Authoritarian Tools
Comprehensive analysis documenting AI-enabled authoritarian tools across surveillance (350M+ cameras in China, with 25.9M faces analyzed daily in a single district), censorship (22+ countries mandating AI content removal), and social control (1.16B individuals in the social credit database). Evidence shows Chinese surveillance tech deployed in 100+ countries via the Digital Silk Road; 80% of the global population lives in countries rated "not fully free." Argues AI enables "perfect autocracy" through preemptive suppression—RAND analysis suggests 90%+ detection of organized opposition—with a $300B surveillance market projected by 2028.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Current Scale | 350+ million surveillance cameras in China; 1.16 billion individuals in social credit database | Freedom House 2025; Global Times 2024 |
| Global Spread | 100+ countries using Chinese smart city tech; 47-65 countries with AI surveillance components | ORF 2024 |
| Internet Freedom Decline | 15 consecutive years of decline; 28 of 72 countries deteriorated in 2024-25 | Freedom House 2025 |
| Economic Cost | $7.69 billion lost to internet shutdowns in 2024; 88,000 hours of outages globally | Access Now 2024 |
| Surveillance Market | $300 billion projected by 2028; $5.33 billion China IP camera market in 2025 | Mordor Intelligence |
| Political Repression | Legal consequences for online speech in 55 of 70 countries; 47 countries deploy state commentators | Freedom House 2025 |
| Stability Risk | High—AI enables preemptive suppression of dissent before organization occurs | RAND Corporation; Lawfare |
Overview
Artificial intelligence is fundamentally transforming the tools of authoritarianism, enabling unprecedented capabilities for surveillance, censorship, propaganda, and social control. Unlike traditional autocracies that relied on physical force and limited information, AI-powered authoritarian systems can monitor entire populations in real-time, automatically detect and suppress dissent, and predict opposition before it organizes.
Freedom House's Freedom on the Net reporting shows that internet freedom has declined for 15 consecutive years as of 2025, with AI playing an increasingly central role in digital repression. In 2024-25 alone, conditions deteriorated in 28 of 72 countries assessed. At least 22 countries now mandate that platforms use machine learning to remove political, social, and religious speech deemed undesirable by authorities. China's surveillance state monitors 1.4 billion people through 350+ million cameras with facial recognition, a social credit database covering 1.16 billion individuals, and integrated behavioral analysis platforms.
The core concern extends beyond immediate human rights violations: AI may enable the creation of stable, durable authoritarian regimes that are significantly harder to overthrow than historical autocracies. If comprehensive surveillance can detect organizing before it becomes effective, and predictive systems can identify dissidents early, billions could live under repressive regimes indefinitely—representing a potential civilizational lock-in of oppressive governance.
AI-Enabled Authoritarian Control Ecosystem
```mermaid
flowchart TD
subgraph DATA["Data Collection Layer"]
CAM[Surveillance Cameras<br/>350M+ in China]
FR[Facial Recognition<br/>99.9% accuracy]
DIG[Digital Monitoring<br/>Social media, purchases]
BIO[Biometric Data<br/>Gait, voice, DNA]
end
subgraph PROC["Processing Layer"]
IJOP[Integrated Platform<br/>IJOP in Xinjiang]
CITY[City Brain Systems<br/>Real-time analytics]
PRED[Predictive Policing<br/>Pre-crime detection]
end
subgraph CONTROL["Control Mechanisms"]
CENS[Automated Censorship<br/>Content filtering]
PROP[AI Propaganda<br/>Micro-targeted messaging]
SCORE[Social Credit<br/>1.16B individuals scored]
BLACK[Blacklists<br/>33M+ travel bans]
end
subgraph OUTCOME["Regime Outcomes"]
SUPP[Preemptive Suppression<br/>Dissent detection]
STAB[Regime Stability<br/>Perfect autocracy risk]
end
CAM --> IJOP
FR --> IJOP
DIG --> CITY
BIO --> CITY
IJOP --> PRED
CITY --> PRED
PRED --> CENS
PRED --> PROP
PRED --> SCORE
SCORE --> BLACK
CENS --> SUPP
PROP --> SUPP
BLACK --> SUPP
SUPP --> STAB
style DATA fill:#e3f2fd
style PROC fill:#fff3e0
style CONTROL fill:#ffebee
style OUTCOME fill:#ffcdd2
```

Risk Assessment
| Dimension | Assessment | Notes |
|---|---|---|
| Current Severity | High | 12+ million Uyghurs under comprehensive surveillance (Human Rights Watch); 22+ countries mandating AI content removal |
| Geographic Scope | Expanding rapidly | 100+ countries using Chinese smart city tech; 47-65 countries with AI surveillance (ORF 2024) |
| Technological Maturity | Near-comprehensive | Facial recognition 99.9% accurate; China's cameras analyze 25.9M faces daily in one district alone |
| Population Affected | 4+ billion at risk | 80% of global population lives in countries "not fully free" per Freedom House |
| Economic Infrastructure | $22 billion invested | Digital Silk Road investment 2017-2023; $300B surveillance market by 2028 |
| Stability Risk | Extreme | AI enables preemptive suppression—RAND analysis suggests 90%+ detection of organized opposition |
| Timeline | Accelerating | City Brain 3.0 launched March 2025; integration deepening annually |
| Trend | Worsening | 15 consecutive years of internet freedom decline; 28 countries deteriorated in 2024-25 |
Comprehensive AI Control Systems
Surveillance Infrastructure
Modern AI surveillance operates at unprecedented scale and granularity. China's SenseTime and Megvii systems can identify individuals from crowds in real-time, track movements across cities, and correlate behavior patterns across multiple data sources. Shanghai alone has over 5,000 surveillance cameras per square mile. In one Shanghai district, authorities estimate they capture and analyze 25.9 million faces daily, an average of roughly 18,000 faces per minute. The integration extends far beyond facial recognition:
- Gait analysis identifies individuals from walking patterns, defeating facial coverings
- Voice recognition monitors phone calls and public conversations
- Digital exhaust tracks online behavior, purchases, and location data
- Social network analysis maps relationships and influence patterns
- Predictive modeling flags "pre-crime" indicators and protest likelihood
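The fusion of these layers can be illustrated with a toy weighted-scoring sketch. Everything below (signal names, weights, threshold) is hypothetical; real platforms such as IJOP are far more complex and their internals are not public.

```python
from dataclasses import dataclass, field

# Hypothetical signal weights for illustration only; not any real system's parameters.
WEIGHTS = {"camera_sightings": 0.2, "flagged_keywords": 0.5, "known_contacts": 0.3}

@dataclass
class Profile:
    # Maps a signal name to a normalized value in [0, 1].
    signals: dict = field(default_factory=dict)

def fused_risk(profile: Profile) -> float:
    """Combine independent surveillance signals into a single weighted score."""
    return sum(WEIGHTS.get(name, 0.0) * value
               for name, value in profile.signals.items())

p = Profile(signals={"camera_sightings": 0.4, "flagged_keywords": 0.9, "known_contacts": 0.6})
print(round(fused_risk(p), 2))  # 0.2*0.4 + 0.5*0.9 + 0.3*0.6 = 0.71
```

The point of the sketch is architectural: once heterogeneous feeds are normalized into one score, a single threshold can drive automated consequences across all of them.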
Carnegie Endowment research documents Chinese surveillance technology deployment in over 80 countries, often through "Safe City" infrastructure projects that embed comprehensive monitoring capabilities into urban planning.
Automated Censorship at Scale
AI censorship systems operate with speed and comprehensiveness impossible for human moderators. Oxford Internet Institute research shows these systems can:
- Content filtering: Remove text, images, and videos in milliseconds based on semantic understanding
- Shadow banning: Reduce content visibility without explicit removal
- Keyword evolution: Automatically identify new euphemisms and coded language
- Context analysis: Distinguish between permitted and forbidden uses of identical content
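Why naive character substitutions fail against such filters can be shown with a minimal sketch. The banned terms and normalization rules here are invented for illustration, assuming nothing about any specific deployed system.

```python
import unicodedata

BANNED = {"protest", "assembly"}  # hypothetical blocklist for illustration
# Undo common leetspeak-style substitutions before matching.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    # Fold fullwidth/accented variants to ASCII, lowercase, strip substitutions.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return text.lower().translate(HOMOGLYPHS)

def blocked(message: str) -> bool:
    tokens = normalize(message).split()
    return any(t.strip(".,!?") in BANNED for t in tokens)

print(blocked("Join the pr0test tonight"))    # True: obfuscation is normalized away
print(blocked("Join the gathering tonight"))  # False, until the euphemism is learned
```

This is why the censorship contest centers on "keyword evolution": simple obfuscation is trivially reversed, so evaders must invent genuinely new coded language, which semantic classifiers then try to learn.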
China's Great Firewall 2.0 employs deep packet inspection and machine learning to block VPNs dynamically. Russian SORM systems have evolved to incorporate AI-driven content analysis across platforms.
Personalized Propaganda and Influence Operations
AI enables micro-targeted propaganda that adapts to individual psychological profiles. Stanford Internet Observatory research demonstrates:
- Behavioral targeting: Personalized messaging based on browsing history, social connections, and inferred beliefs
- A/B testing at scale: Real-time optimization of persuasive content
- Deepfake generation: Synthetic media indistinguishable from authentic content
- Emotional manipulation: Content designed to trigger specific psychological responses
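"A/B testing at scale" is essentially a multi-armed bandit problem. A minimal epsilon-greedy sketch, with invented engagement rates, shows how delivery automatically concentrates on whichever message variant performs best; this is a generic illustration, not any documented operation's method.

```python
import random

random.seed(0)

# Hypothetical per-variant engagement rates, unknown to the selector.
TRUE_RATES = {"variant_a": 0.02, "variant_b": 0.08}
counts = {v: 0 for v in TRUE_RATES}
successes = {v: 0 for v in TRUE_RATES}

def pick(eps: float = 0.1) -> str:
    # Explore with probability eps, otherwise exploit the best observed rate.
    if random.random() < eps or not any(counts.values()):
        return random.choice(list(TRUE_RATES))
    return max(counts, key=lambda v: successes[v] / max(counts[v], 1))

for _ in range(5000):
    v = pick()
    counts[v] += 1
    successes[v] += random.random() < TRUE_RATES[v]

print(max(counts, key=counts.get))  # delivery converges on the stronger variant
```

No human ever needs to decide which message "works": the feedback loop discovers it, which is what makes optimization of persuasive content feasible at population scale.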
The Internet Research Agency's operations during the 2016 U.S. elections, documented in the Senate Intelligence Committee report, demonstrated early-stage capabilities; current systems are orders of magnitude more sophisticated.
Social Credit and Behavioral Modification
China's Social Credit System represents the most comprehensive attempt to use AI for population-wide behavioral modification. By the end of 2024, the system had collected credit data on 1.16 billion individuals and 140 million enterprises:
- Comprehensive scoring: Integration of financial, social, and political behavior into unified ratings; 80% of provinces had implemented some version by 2022
- Algorithmic punishment: Automatic restriction of travel, education, and employment based on scores; 26.82 million air tickets and 5.96 million rail tickets denied to blacklisted individuals (as of 2019)
- Predictive intervention: Early identification of "unreliable" individuals before violations occur
- Corporate focus: 33+ million businesses given scores under the Corporate Social Credit System
- Social pressure: Public shaming and peer pressure through score visibility; 4.37 million blacklisted people subsequently fulfilled legal duties
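Mechanically, such a system reduces to event-weighted scoring with threshold-triggered restrictions. The point values and threshold below are loosely modeled on public reporting about municipal pilot schemes and are not actual system parameters.

```python
# Hypothetical event weights and blacklist threshold, for illustration only.
EVENTS = {"late_loan_payment": -50, "jaywalking": -10,
          "volunteering": +20, "court_judgment_unpaid": -200}
BLACKLIST_THRESHOLD = 800  # scores below this trigger automatic restrictions

def score(base: int, history: list[str]) -> int:
    return base + sum(EVENTS[e] for e in history)

def restrictions(s: int) -> list[str]:
    # Algorithmic punishment: no human review between score and consequence.
    return ["air travel ban", "rail travel ban"] if s < BLACKLIST_THRESHOLD else []

s = score(1000, ["late_loan_payment", "court_judgment_unpaid"])
print(s, restrictions(s))  # 750 ['air travel ban', 'rail travel ban']
```

The notable design property is that punishment is a pure function of the score: restrictions attach and detach automatically as the number crosses the threshold, which is what reports of blacklisted individuals "fulfilling legal duties" to regain access describe.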
Sesame Credit pilot programs demonstrated 20-30% improvement in targeted behaviors. However, recent analysis suggests the system is now focused primarily on corporate compliance rather than individual social scores.
Current Deployment and Geographic Spread
Global AI Surveillance Adoption by Region
| Region | Countries with Chinese Surveillance Tech | Key Technologies | Notable Implementations |
|---|---|---|---|
| East Asia | China, Hong Kong (expanding) | Full-spectrum surveillance, City Brain 3.0 | 350M cameras; 5,000 cameras/sq mile in Shanghai |
| Central Asia | Uzbekistan, Kazakhstan, Tajikistan | Safe City systems, facial recognition | Huawei Safe City initiative since 2017 |
| Southeast Asia | Myanmar, Cambodia, Vietnam, Philippines | Internet controls, facial recognition | Myanmar's 2024 VPN blocking; censorship laws |
| Middle East | UAE, Saudi Arabia, Egypt | Smart city infrastructure, AI analytics | 28 of 34 China-led projects involve intelligence ties |
| Africa | 18+ countries including Zimbabwe, Uganda | Safe City programs, facial recognition | 266 Chinese tech projects; social media taxes |
| Latin America | Venezuela, Ecuador, Bolivia, Argentina | Safe City products, surveillance systems | 35 cities using Huawei Safe City |
| Eastern Europe | Russia, Belarus | Internet isolation, content filtering | Signal blocked; YouTube throttled in 2024 |
Sources: ORF; Freedom House 2025; Jamestown Foundation
China's Integrated System
China operates the world's most comprehensive AI-enabled authoritarian system. Human Rights Watch documentation reveals:
- Xinjiang surveillance: 1 camera per 6 residents, mandatory phone app monitoring, DNA collection for 12+ million Uyghurs
- Nationwide expansion: 350+ million cameras with facial recognition capabilities (targeting 600 million)
- Predictive policing: The IJOP system flags "unusual" behavior for investigation; combines CCTV, WiFi, and checkpoint data
- Social credit: 1.16 billion individuals and 140 million enterprises tracked; 6.7 billion credit report inquiries to date
- Travel restrictions: 33+ million air and rail ticket purchases denied to blacklisted individuals as of 2019
- Internet censorship: Real-time blocking of millions of websites and keywords via Great Firewall 2.0
The system's effectiveness is demonstrated by the absence of large-scale protests since implementation, despite historical patterns of periodic unrest. Research suggests fewer people protest when public safety agencies acquire AI surveillance technology.
Russian Digital Authoritarianism
Russia's Sovereign Internet Law creates infrastructure for comprehensive digital control. In 2024-25, authorities dramatically escalated digital isolation:
- Deep packet inspection: Real-time monitoring and filtering of all internet traffic via SORM systems
- Platform blocking: Signal blocked and YouTube throttled in summer 2024; Cloudflare ECH protocol sites restricted
- State-aligned AI: In June 2025, Russia and Belarus announced plans to develop AI built on "fundamental and traditional values"
- Platform compliance: Requirements for data localization and content removal
- Information warfare: State-sponsored disinformation campaigns using AI-generated content; 47 countries now deploy state commentators
- Opposition targeting: Apple and Google's 2021 removal of Navalny's voting app demonstrates platform cooperation under pressure
Global Expansion
Freedom House tracking shows authoritarian technology adoption across regions:
- Middle East: UAE, Saudi Arabia deploying Chinese surveillance systems
- Africa: 18 countries with Chinese-supplied "Safe City" programs
- Latin America: Venezuela, Ecuador implementing social control systems
- Southeast Asia: Myanmar, Cambodia expanding digital monitoring
Export financing through the Belt and Road Initiative often includes surveillance infrastructure, creating long-term technological dependencies.
Digital Authoritarianism Impact by Country (2024-2025)
| Country | Freedom Score Change | Key Developments | Economic Impact |
|---|---|---|---|
| Venezuela | -7 points | Second-largest decline globally; internet controls during July 2024 election | Significant productivity losses |
| Russia | Severe decline | Signal blocked; YouTube throttled; Sovereign Internet deepening | Growing isolation costs |
| Myanmar | Severe decline | New VPN blocking tech in 2024; cybersecurity law January 2025 | $1.62B+ losses (2024) |
| Pakistan | Major decline | Nationwide cellular blackout on election day; social media blocks | $1.62 billion in 2024 |
| China | Sustained low | City Brain 3.0 launched; 350M+ cameras; 1.16B in social credit system | Market leader ($5.3B camera market) |
| Iran | Sustained low | AI deployed to identify hijab violations (2022-ongoing) | International isolation |
| Egypt | Sustained decline | Among worst 15-year declines globally | Investment uncertainty |
| Turkey | Sustained decline | Among worst 15-year declines globally | Tech sector impacts |
Sources: Freedom House 2025; Access Now
Global economic cost of internet shutdowns in 2024: $7.69 billion across approximately 88,000 hours of outages in 39 countries.
The Stability Concern: Perfect Autocracy
Historical autocracies fell through revolution, coups, or external pressure. AI may fundamentally alter these dynamics by creating "perfect autocracy"—regimes with comprehensive information about their populations and the ability to suppress threats before they materialize.
Information Asymmetry
Traditional revolutions required information advantage—knowing something the regime didn't. AI surveillance eliminates this by providing:
- Real-time monitoring: Continuous awareness of population sentiment and activity
- Predictive capabilities: Early warning systems for protest organization
- Network analysis: Identification of influential individuals and communication patterns
- Behavioral prediction: Models forecasting individual likelihood of dissent
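The network-analysis component can be illustrated with a toy degree-centrality computation over a hypothetical communication graph; real systems use richer centrality measures, but the principle is the same.

```python
from collections import Counter

# Invented undirected edges between pseudonymous accounts, for illustration.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("a", "e"),
         ("b", "c"), ("d", "e"), ("f", "a")]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# The most-connected node is the likeliest "key organizer" flag.
hub, connections = degree.most_common(1)[0]
print(hub, connections)  # a 5
```

This is why targeted intervention is efficient: removing a single high-centrality node can fragment a network that would survive the loss of many peripheral members.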
RAND Corporation analysis suggests comprehensive surveillance could detect 90%+ of organized opposition activity before it reaches critical mass.
Preemptive Suppression
Rather than reacting to threats, AI enables prevention through:
- Targeted intervention: Removing key organizers before movements form
- Information manipulation: Flooding communication channels with noise
- Social isolation: Restricting travel, employment, and social connections for dissidents
- Psychological pressure: Demonstrating omnipresent monitoring to discourage resistance
International Implications
Stable AI-enabled authoritarianism could affect global governance by:
- Norm erosion: Legitimizing digital repression as "effective governance"
- Technology export: Spreading control systems to client states
- Democratic pressure: Forcing open societies to compete on efficiency rather than freedom
- Lock-in effects: Creating technological and economic dependencies difficult to reverse
Key Technological Uncertainties
Circumvention vs. Suppression
The ongoing competition between surveillance capabilities and privacy-preserving technologies remains uncertain:
| Technology | Surveillance Capability | Circumvention Tool | Current Balance | Trend |
|---|---|---|---|---|
| Encryption | Deep packet inspection; metadata analysis | Signal Protocol; quantum-resistant protocols | Contested | Surveillance gaining via metadata |
| VPNs | Dynamic blocking; traffic pattern analysis | Obfuscation protocols; decentralized VPNs | Surveillance advantage | Myanmar blocked VPNs in 2024 |
| Anonymity networks | Tor exit node monitoring; traffic correlation | Tor; I2P; mesh networks | Mixed | Russia blocked Tor in 2024 |
| Facial recognition | 99.9% accuracy under optimal conditions | Masks; adversarial makeup; IR LEDs | Strong surveillance advantage | 350M+ cameras deployed |
| Gait analysis | Defeats facial coverings | Limited countermeasures | Surveillance advantage | Rapidly advancing |
| AI content filtering | Real-time semantic analysis; context detection | Euphemisms; coded language; steganography | Contested | AI auto-detects new evasion patterns |
Key insight: Circumvention tools provide temporary advantages, but state-level actors have sustained resources for detection improvements.
- Encryption advancement: Quantum-resistant protocols may preserve private communication
- Anonymization tools: Tor, VPNs, and decentralized networks enable some circumvention
- AI detection: Advanced systems may identify circumvention attempts in real-time
- Cat-and-mouse dynamics: Historical precedent suggests temporary advantages rather than permanent solutions
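The "surveillance gaining via metadata" point is worth making concrete: encryption hides content, not timing. A toy correlation over invented timestamps shows how a monitored connection can be linked to a pseudonymous channel without reading a single message.

```python
# Timestamps (seconds) of posts on a public channel and of sends observed on
# two monitored connections; all values are invented for illustration.
channel_posts = [100, 260, 415, 590, 700]
suspect_sends = [99, 258, 414, 588, 699]
other_sends = [30, 310, 500, 640, 910]

def match_rate(posts, sends, window=5):
    """Fraction of channel posts preceded by a send within `window` seconds."""
    hits = sum(any(abs(p - s) <= window for s in sends) for p in posts)
    return hits / len(posts)

print(match_rate(channel_posts, suspect_sends))  # 1.0 — strong timing correlation
print(match_rate(channel_posts, other_sends))    # 0.0 — no correlation
```

Traffic-correlation attacks on anonymity networks follow the same logic at larger scale, which is why metadata resistance, not just payload encryption, determines the real balance in this contest.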
Electronic Frontier Foundation research indicates circumvention tools face increasing sophistication in detection and blocking.
Technological Dependencies
The durability of AI-enabled authoritarianism may depend on:
- Semiconductor supply chains: Advanced chips required for surveillance infrastructure
- Internet infrastructure: Physical control points for traffic monitoring
- Cloud computing: Centralized vs. distributed processing capabilities
- Energy requirements: Substantial power needs for comprehensive surveillance
Human Factors
AI systems require human operators, creating potential vulnerabilities:
- Operator loyalty: Security forces must remain committed to the regime
- Technical expertise: Maintaining complex systems requires skilled personnel
- Error rates: False positives could create public resentment
- Adaptation: Opposition groups may develop counter-surveillance tactics
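The false-positive concern follows directly from base rates. A quick Bayes calculation, assuming a hypothetical system that is 99.9% accurate in both directions and a population where 1 in 10,000 people is actually involved in organized dissent, shows most flags would still be wrong:

```python
# Assumed illustrative numbers, not measurements of any deployed system.
sensitivity = 0.999      # P(flag | involved)
false_positive = 0.001   # P(flag | not involved)
prevalence = 0.0001      # 1 in 10,000 actually involved

p_flag = sensitivity * prevalence + false_positive * (1 - prevalence)
ppv = sensitivity * prevalence / p_flag  # P(involved | flag), by Bayes' rule
print(round(ppv, 3))  # 0.091 — roughly 9 in 10 flags are false positives
```

Even near-perfect classifiers flood investigators with innocent people when the target behavior is rare, which is the mechanism behind the public-resentment risk noted above.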
Current Trajectory and Projections
Technological Advancement
AI capabilities relevant to authoritarianism are advancing rapidly:
- Accuracy improvements: Facial recognition error rates dropping approximately 50% annually; now exceeding 99.9% accuracy under optimal conditions
- Processing speed: Real-time analysis of millions of faces per day per district; City Brain 3.0 launched March 2025
- Integration capabilities: Unified systems combining CCTV, WiFi, purchase data, location tracking, and biometrics
- Cost reduction: China's surveillance IP camera market at $5.33 billion in 2025, growing 13.3% annually to $9.94 billion by 2030
- Market scale: Global surveillance technology market projected to exceed $300 billion by 2028
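The camera-market figures above are internally consistent, as a quick compound-growth check shows (using the cited 2025 base and growth rate):

```python
# Sanity check: $5.33B in 2025 growing 13.3% per year through 2030.
start, rate, years = 5.33, 0.133, 5
projected = start * (1 + rate) ** years
print(round(projected, 2))  # 9.95 — consistent with the cited $9.94B figure
```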
MIT Technology Review reports facial recognition accuracy exceeding 99.9% under optimal conditions. The number of countries deploying state commentators to manipulate online discussions has doubled in the past decade to 47.
Geographic Expansion
Current trends suggest continued spread of authoritarian AI:
- Technology transfer: Chinese vendors expanding global market share
- Financing mechanisms: Development banks funding surveillance infrastructure
- Technical training: Capacity building for local implementation
- Regulatory frameworks: Legal structures legitimizing digital monitoring
Democratic Responses
Nascent efforts to counter authoritarian AI include:
- Export controls: U.S. and EU restrictions on surveillance technology sales
- Privacy legislation: GDPR and similar frameworks limiting data collection
- Technical assistance: Supporting civil society with circumvention tools
- Diplomatic pressure: Sanctions and international criticism
However, Center for Strategic and International Studies analysis suggests defensive measures lag significantly behind authoritarian capabilities.
Countermeasures and Defensive Strategies
Technical Approaches
- Privacy-preserving technologies: Signal Protocol, Tor, mesh networking
- Decentralized systems: Blockchain-based communication and organization tools
- AI red-teaming: Testing surveillance systems for vulnerabilities
- Open-source intelligence: Monitoring authoritarian technology deployment
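The privacy-preserving technologies listed above rest on public-key key agreement: two parties derive a shared secret without ever transmitting it, so a network observer (including a state surveillance system) learns nothing usable. The sketch below is a toy Diffie-Hellman exchange illustrating only the core idea; the Signal Protocol's actual design (X3DH key agreement plus the Double Ratchet for forward secrecy) is far more elaborate, and the small integer group here is for readability, not security.

```python
# Toy Diffie-Hellman key agreement -- illustrative only, NOT secure.
# Real systems use elliptic curves or large standardized primes (RFC 3526).
import secrets

P = 2**127 - 1  # a Mersenne prime, standing in for real group parameters
G = 5           # generator (toy choice)

def dh_keypair():
    """Generate a secret exponent and the corresponding public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each party transmits only its public value over the monitored channel...
alice_priv, alice_pub = dh_keypair()
bob_priv, bob_pub = dh_keypair()

# ...yet both independently derive the same shared secret, which an
# eavesdropper seeing alice_pub and bob_pub cannot feasibly compute.
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)
assert alice_shared == bob_shared
print("shared secret established")
```

In deployed messengers this shared secret seeds symmetric session keys that are rotated per message, so compromising one key does not expose past traffic (forward secrecy).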
Policy Responses
- Export controls: Bureau of Industry and Security Entity List restrictions
- Sanctions regimes: Targeting surveillance technology companies and users
- International coordination: Freedom Online Coalition diplomatic efforts
- Funding alternatives: Supporting democratic technology development
Civil Society Strategies
- Digital security training: Teaching circumvention and privacy tools
- Documentation: Recording human rights violations enabled by AI
- Advocacy: Raising awareness of surveillance technology impacts
- Legal challenges: Constitutional and human rights litigation
Timeline of AI-Enabled Authoritarianism
Historical Foundation (2010-2015)
- 2012: China begins massive surveillance camera deployment under Skynet project
- 2013: Snowden revelations expose NSA capabilities, spurring global surveillance adoption
- 2014: Xi Jinping consolidates power, accelerates Social Credit System development
- 2015: China publishes draft Cybersecurity Law establishing data localization requirements
System Integration (2016-2020)
- 2016: Internet Research Agency demonstrates industrial-scale social media influence operations during the US election
- 2017: Xinjiang surveillance apparatus reaches full deployment; Digital Silk Road launched
- 2018: China's Social Credit System enters nationwide pilot phase
- 2019: Russia passes Sovereign Internet Law enabling comprehensive filtering; China's social credit blacklists have by this point denied 26.82 million air ticket purchases to sanctioned individuals
- 2020: COVID-19 contact tracing normalizes population surveillance globally
Current Acceleration (2021-2025)
- 2021: Taliban uses facial recognition to hunt former officials
- 2022: Iran deploys AI to identify hijab violations; China launches Global Security Initiative
- 2023: 22+ countries mandate AI-powered content removal; internet shutdowns reach record 283 instances across 39 countries
- 2024: Venezuela suffers second-largest global decline in internet freedom; Russia blocks Signal and throttles YouTube; Myanmar deploys VPN-blocking technology; Hong Kong plans 2,000 new cameras with facial recognition; $7.69B global cost of internet shutdowns
- 2025: Freedom House reports 15th consecutive year of internet freedom decline; China launches City Brain 3.0; Myanmar enacts cybersecurity law restricting anti-censorship tools; China regulates facial recognition (effective June 2025); Russia-Belarus announce "values-based" AI development
Projected Developments (2026-2030)
- Enhanced prediction: AI systems forecasting individual behavior with 95%+ accuracy
- Camera expansion: China targeting 600 million surveillance cameras
- Market growth: Surveillance market exceeding $300 billion by 2028
- Counter-surveillance evolution: Arms race between monitoring and privacy technologies
- Institutional lock-in: Democratic backsliding enabled by "temporary" surveillance measures
Sources & Resources
Primary Research Organizations
- Freedom House - Freedom on the Net Reports
- Carnegie Endowment - Global Expansion of AI Surveillance
- Human Rights Watch - China Surveillance Documentation
- Oxford Internet Institute - Computational Propaganda Research
- Stanford Internet Observatory - Platform Governance
Policy and Technical Analysis
- RAND Corporation - Information Warfare Studies
- Center for Strategic and International Studies - Technology and National Power
- Atlantic Council - Digital Forensic Research Lab
- Electronic Frontier Foundation - Surveillance and Privacy
- Citizen Lab - Digital Rights Research
Government and International Bodies
- U.S. Bureau of Industry and Security - Export Controls
- EU AI Office - Regulation and Oversight
- UK AI Safety Institute - Risk Assessment
- Freedom Online Coalition - Digital Rights
- UN Special Rapporteur on Privacy
References
This Atlantic Council analysis examines China's evolving digital authoritarianism, including the expansion of internet censorship infrastructure, the social credit system, and the integration of AI-powered surveillance tools. It explores how these systems collectively enable unprecedented state control over citizens' online and offline behavior.
This Human Rights Watch report documents China's use of the Integrated Joint Operations Platform (IJOP), a mass surveillance and predictive policing system deployed in Xinjiang to monitor and repress Uyghur Muslims. The report details how algorithmic tools, biometric data collection, and digital tracking are used to flag individuals for detention based on behavioral and religious indicators. It represents a landmark case study of AI-enabled authoritarian surveillance at scale.
This resource appears to be a Council on Foreign Relations backgrounder on China's Social Credit System, but the page currently returns a 404 error and the content is unavailable. The topic would typically cover how China's algorithmic surveillance and scoring system affects citizens and raises concerns about digital authoritarianism.
This 2019 RAND Corporation report systematically analyzes U.S. strategic options for competing with Russia in the context of great-power competition, examining Russia's economic, political, and military vulnerabilities. It evaluates policy options across ideological, economic, geopolitical, and military domains, concluding that economic measures—particularly boosting U.S. energy production and multilateral sanctions—offer the highest likelihood of success with manageable risks, while geopolitical and ideological approaches carry significant escalation risks.
This BBC News article (now returning a 404 error) covered Russia's Sovereign Internet Law, which enabled Russian authorities to isolate the Russian internet (Runet) from the global internet. The law represented a significant milestone in state control over digital infrastructure and information flow.
The Freedom Online Coalition is a partnership of 41 governments coordinating diplomatic efforts to protect Internet freedom and human rights online, including free expression, association, assembly, and privacy. Operating through information sharing, diplomatic coordination, and advocacy, the FOC engages with civil society and the private sector to shape global digital governance norms. Under Switzerland's 2026 chairship, priorities include navigating global digital governance processes and operationalizing multistakeholder approaches.
This Carnegie Endowment study maps the global spread of AI surveillance technologies, identifying at least 75 countries using AI tools for surveillance including smart city platforms, facial recognition, and predictive policing. Chinese firms like Huawei, Hikvision, and ZTE are dominant suppliers, providing systems to both democracies and authoritarian regimes alike.
RAND Corporation's research hub on information operations covers influence operations, propaganda, psychological warfare, and disinformation tactics. It aggregates over 300 research outputs addressing how adversaries like Russia and China conduct malign information campaigns, and how the U.S. and allies can develop countermeasures and policy frameworks.
BIS is the U.S. federal agency responsible for export controls, technology security, and national security investigations related to critical imports. It enforces restrictions on dual-use technologies, conducts Section 232 national security investigations into semiconductors, robotics, and other strategic materials, and prosecutes export control violations.
This Reuters article reported on Apple and Google's removal of Alexei Navalny's tactical voting app from their app stores in Russia during the 2021 parliamentary elections, under pressure from Russian authorities. The incident illustrates how tech platforms can become instruments of state censorship and digital repression. The original article is no longer accessible at this URL.
The Citizen Lab is an interdisciplinary research lab investigating digital threats to democracy, human rights, and global security, including state-sponsored surveillance, spyware misuse, and AI-enabled influence operations. Their work documents cases of forensic tools like Cellebrite being misused against civil society actors and exposes disinformation campaigns by state and private actors. It serves as a primary source for empirical research on digital repression worldwide.
The Oxford Internet Institute's Computational Propaganda project investigates how digital technologies, bots, and algorithmic systems are weaponized to manipulate public opinion and undermine democratic processes. Using computational and social science methods, the project analyzes disinformation campaigns, social media manipulation, and platform dynamics across multiple countries. Their research informs policy responses to coordinated inauthentic behavior and influence operations.
Freedom House's Freedom on the Net is an annual report assessing internet freedom across countries worldwide, tracking government censorship, surveillance, and digital repression. It documents how authoritarian regimes use technology to suppress dissent and control information, providing country-by-country scores and analysis. The reports are a key resource for understanding the global landscape of digital rights and state control over information.
A Council on Foreign Relations backgrounder on China's Belt and Road Initiative, the massive global infrastructure project launched in 2013 by Xi Jinping to expand trade networks across Asia, Africa, and beyond. The piece examines the geopolitical implications, debt trap concerns, and U.S. strategic responses to what some analysts view as a vehicle for Chinese economic and political expansion.
This Human Rights Watch report (now returning a 404 error) documented the mass surveillance infrastructure deployed against Uyghurs and other minorities in Xinjiang, China. It detailed biometric data collection, predictive policing systems, and integrated digital tracking technologies used by the Chinese state. The report served as a key reference for understanding state-deployed AI surveillance at scale.
The Center for Strategic and International Studies (CSIS) Strategic Technologies Program analyzes the intersection of technology, national security, and international competition. It produces policy analysis on topics including AI governance, cybersecurity, and emerging technologies with geopolitical implications. The program informs policymakers and the public on technology strategy and regulation.
This Financial Times article appears to cover the use of digital surveillance technology by authoritarian regimes to suppress human rights and political dissent. The full content could not be accessed; its tags suggest a focus on how technology enables state repression and the monitoring of citizens.
A CSIS analysis examining how emerging strategic technologies shape national power competition, particularly focusing on how states leverage advanced technologies including AI for geopolitical advantage. The piece likely addresses the intersection of technological capabilities, governance frameworks, and great power competition.
The DFRLab is a leading research organization that investigates digital threats, disinformation campaigns, and influence operations through open-source intelligence methods. It works to expose authoritarian information operations, protect democratic institutions, and build resilience against digital manipulation. Its research is relevant to AI safety insofar as AI tools are increasingly used to scale disinformation and digital repression.
Freedom House's 2023 Freedom on the Net report examines how authoritarian governments are deploying AI tools to surveil, censor, and repress citizens across the globe. It documents how AI-powered surveillance and information controls are spreading from pioneer countries to others, threatening human rights and democratic freedoms. The report highlights how these technologies enable more efficient and scalable repression with reduced accountability.
The UN Special Rapporteur on Privacy is a mandate holder appointed by the UN Human Rights Council to address privacy rights globally, including in the digital age. The office investigates surveillance, data protection, and the intersection of emerging technologies with human rights. It produces thematic reports and recommendations relevant to AI governance and digital rights frameworks.
The Tor Project is a nonprofit organization that develops and maintains the Tor anonymity network and browser, enabling private, censorship-resistant internet access. It is widely used by journalists, activists, and citizens in authoritarian regimes to circumvent surveillance and digital repression. Tor provides free, open-source tools that protect users' identities and locations online.
This MIT Technology Review article examines the growing debate over facial recognition technology regulation, highlighting concerns about civil liberties, racial bias, and misuse by governments and law enforcement. It explores legislative efforts and advocacy pushing for stricter controls or outright bans on facial recognition in various contexts.
This Wired article examines China's Sesame Credit system, a private and government-linked social scoring initiative that rates citizens based on financial behavior, social connections, and online activity. It explores how algorithmic scoring can be used to reward or restrict individuals' access to services and opportunities. The piece raises concerns about surveillance, behavioral control, and the normalization of data-driven social governance.
Freedom House's Freedom on the Net 2023 interactive map provides country-by-country assessments of internet freedom, tracking digital repression, censorship, and surveillance globally. It serves as a reference tool for monitoring authoritarian use of technology to control information and suppress dissent. The dataset is widely used in policy and research contexts to understand the erosion of digital rights.
Technical documentation for the Signal Protocol, an end-to-end encryption protocol used to secure private communications. It provides cryptographic specifications for messaging security including key exchange, message encryption, and forward secrecy. This protocol underpins Signal and is widely adopted by other messaging platforms.
This Senate Intelligence Committee report documents how Russia's Internet Research Agency systematically exploited social media platforms to spread disinformation, sow division, and influence American voters during the 2016 election. It provides a detailed government-level analysis of coordinated inauthentic behavior, platform vulnerabilities, and the scale of foreign information operations. The report serves as a key reference for understanding state-sponsored influence campaigns and the governance challenges they pose.
Reuters reports on the US Treasury Department adding SenseTime and other Chinese technology companies to an investment blacklist due to their alleged role in surveillance and human rights abuses against Uyghurs in Xinjiang. The action reflects growing concern about AI-enabled authoritarian repression and the dual-use risks of facial recognition and surveillance technologies. This represents a significant policy intervention at the intersection of AI capabilities and human rights.
The Stanford Internet Observatory is a multidisciplinary research center at Stanford's Freeman Spogli Institute focused on studying the misuse of the internet, including disinformation, influence operations, and digital repression. It conducts empirical research and policy analysis on how online platforms are exploited to undermine democracy, human rights, and security. SIO publishes investigations, tools, and reports used by governments, platforms, and civil society.
The EU AI Office is the European Commission's central body responsible for overseeing and implementing the EU AI Act, particularly for general-purpose AI models. It coordinates AI governance across member states, enforces compliance with AI safety requirements, and supports the development of AI standards and testing methodologies.
The Electronic Frontier Foundation's dedicated resource hub on mass surveillance covers legal, policy, and technical dimensions of government and corporate surveillance programs. It serves as a reference point for civil liberties advocacy, legal challenges, and public education about surveillance threats to privacy and free expression. The page aggregates EFF's ongoing work including litigation, legislative efforts, and technical research.
The UK AI Safety Institute (AISI) is the UK government's dedicated body for evaluating and mitigating risks from advanced AI systems. It conducts technical safety research, develops evaluation frameworks for frontier AI models, and works with international partners to inform global AI governance and policy.
Freedom House's annual Freedom on the Net 2025 report assesses the state of internet freedom globally, documenting trends in government censorship, surveillance, and the fragmentation of the open internet. The report highlights how authoritarian regimes leverage digital controls and how AI is increasingly being used as a tool of repression and information manipulation.
RAND Corporation is a nonprofit research organization providing objective analysis and policy recommendations across a wide range of topics including national security, technology, governance, and emerging risks. It produces influential studies on AI policy, cybersecurity, and global governance challenges. RAND's work is frequently cited by governments and policymakers worldwide.
This Lawfare article examines how AI-powered surveillance technologies can be exploited by authoritarian regimes to monitor, control, and suppress populations. It explores the political and governance risks posed by the proliferation of AI surveillance tools, both domestically and through export to repressive governments.