Longterm Wiki · Updated 2026-03-20
AI Surveillance and US Democratic Erosion

Analysis of how data centralization, oversight dismantlement, and AI capability acquisition by the US government create near-term threats to democratic processes. Documents the Anthropic-Pentagon standoff as a crystallizing moment; current administration actions (100+ targeted opponents, a national citizenship database, Palantir contracts, DOGE AI surveillance of federal workers, gutted oversight boards); legal loopholes enabling warrantless bulk data collection; how AI changes surveillance economics; five threat scenarios for the 2026 midterms with probability estimates; and countervailing forces, including the courts and a Democratic House win favored by betting markets.

Category: Misuse Risk
Severity: High
Likelihood: High
Timeframe: 2026
Maturity: Emerging
Focus: US domestic surveillance and election integrity
Key Trigger: Anthropic-Pentagon standoff (Feb 2026)
Rapidly Developing

This page covers events through early March 2026. The situation is evolving rapidly — the Anthropic-Pentagon standoff lawsuit is pending, data centralization efforts continue, and the 2026 midterm campaign is underway. Update frequency is set to weekly.

Quick Assessment

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Severity | High | Could undermine competitive elections for 330M+ Americans |
| Likelihood | High (infrastructure assembly) / Medium (electoral deployment) | Data centralization and AI monitoring already underway; electoral use uncertain |
| Timeline | Now through November 2026 | Key milestones: citizenship database completion, midterm campaigns |
| Trend | Rapidly worsening | Oversight boards gutted, data silos being merged, AI monitoring expanding |
| Key Trigger | Anthropic-Pentagon standoff (Feb 2026) | Pentagon sought AI analysis of bulk commercial data on Americans — location, browsing, financial records |
| Countervailing Forces | Moderate | Courts pushing back; betting markets favor Democratic House (69-84%); bipartisan resistance emerging |

Overview

Three trends are converging in real time. First, the current administration has demonstrated a pattern of using government power against political opponents — over 100 individuals and organizations targeted through investigations, prosecutions, firings, and retaliatory actions. Second, systematic efforts are centralizing citizen data across federal agencies while dismantling the oversight mechanisms built after Watergate and COINTELPRO. Third, the government is actively pursuing AI-powered analysis capabilities applied to bulk data on American citizens.

The Anthropic-Pentagon standoff of February 2026 crystallized this convergence. When the Pentagon demanded that Anthropic permit Claude to be used for "all lawful purposes" — which, according to reporting by the Atlantic and Axios, specifically included AI analysis of Americans' location data, browsing histories, and financial transactions purchased from data brokers — Anthropic refused and was designated a "supply chain risk to national security." OpenAI signed a replacement deal within 24 hours. The government's willingness to destroy a $380 billion company over surveillance restrictions shows how seriously it is pursuing these capabilities.

This page focuses specifically on the US domestic threat. For the global picture of AI-enabled surveillance, see Mass Surveillance. For the structural risk of AI enabling permanent authoritarianism, see AI-Enabled Authoritarian Takeover.

What's Already Happening

Targeting of Political Opponents

The pattern of using government power against perceived enemies is extensively documented:

  • 100+ individuals and organizations targeted through investigations, prosecutions, firings, security clearance revocations, and retaliatory actions (documented by NPR, Protect Democracy, ABC News).
  • Targets span institutions: Federal Reserve Chair Jerome Powell (criminal investigation), Fed Governor Lisa Cook (prosecution), former Chief of Staff John Kelly (censure and retirement grade reduction), Senator Adam Schiff (fraud investigation), Representative Eric Swalwell (criminal referral), ActBlue (DOJ investigation).
  • Attempted indictment of six members of Congress for making a video advising service members about illegal orders — the grand jury refused to indict, an exceedingly rare outcome.
  • Historical comparison: Nixon-era historian Timothy Naftali described the current targeting as more dangerous for the rule of law than the 1970s, because a compliant Republican Congress allows the administration to go further than Nixon could.

Data Centralization

The administration has pursued aggressive data centralization through multiple channels:

  • Executive Order on Data Sharing (March 2025): Directed agencies to eliminate "data silos" and ensure "unfettered access to comprehensive data from all State programs that receive Federal funding."
  • National Citizenship Data System: DHS and DOGE built a searchable national citizenship data system linking Social Security Administration records, immigration databases, driver's license data, and voter rolls — the first system of its kind. Legal experts called it "a sea change" developed without a transparent public process.
  • Palantir contract: The data-mining firm received contracts to compile government information for immigration enforcement, accessing data from the IRS, DOGE, and other agencies.
  • State data acquisition: USDA demanded names, SSNs, addresses, and dates of birth of tens of millions of SNAP recipients. ICE issued subpoenas for state records. Federal health officials shared Medicaid data from multiple states with DHS.

ACLU senior policy counsel Cody Venzke warned: "Once you build a system that connects every database about an individual across federal and state governments, it's incredibly hard to unwind that system." George Washington University Law Professor Paul Schwartz called it "the demolition of the Watergate-era safeguards that were intended to keep databases separated."

AI Surveillance of Government Workers

DOGE is already using AI to monitor federal employees:

  • EPA surveillance: Trump-appointed officials told EPA managers that DOGE was using AI to monitor Microsoft Teams and other communication platforms for "anti-Trump or anti-Musk language." Managers were told: "Be careful what you say, what you type, and what you do." (Reuters)
  • Job justification analysis: Federal workers' responses to the "what did you accomplish last week" email were fed into LLMs to determine whether their jobs were necessary.
  • Grok deployment: DOGE has "heavily" deployed Musk's Grok AI chatbot as part of government operations.
  • Government ethics expert Kathleen Clark described DOGE's activities as "an abuse of government power to suppress or deter speech that the president of the United States doesn't like."

Dismantling Oversight

Key oversight mechanisms have been gutted or destroyed:

  • Privacy and Civil Liberties Oversight Board (PCLOB): Three Democratic members removed, destroying the quorum needed to conduct oversight. CDT's CEO called it "a brazen effort to destroy an independent watchdog."
  • FBI Foreign Influence Task Force: Dissolved by AG Pam Bondi.
  • State Department Global Engagement Center: Shut down.
  • Foreign Malign Influence Center: Closed.
  • NSA/Cyber Command leadership: Gen. Tim Haugh fired.

Why "All Lawful Purposes" Permits More Than People Assume

The Pentagon's assurance that mass surveillance is illegal provides far less comfort than it appears: the legal framework governing government data collection on Americans contains enormous loopholes.

Section 702 of FISA: Allows warrantless collection of communications of foreigners abroad, but in practice sweeps up vast quantities of American communications because Americans communicate internationally. This "incidental collection" is then searchable by the FBI through warrantless "backdoor searches." A federal court ruled in January 2025 that these backdoor searches ordinarily require a warrant, but the practice continues.

Executive Order 12333: Authorizes intelligence collection occurring outside the US — but because global internet traffic routes through US infrastructure, this enables collection of domestic communications. This framework underpinned many of the surveillance programs revealed by Edward Snowden.

The data broker loophole is arguably the most critical gap. Federal agencies — including the FBI, DHS, ICE, IRS, DEA, DOD, and Secret Service — have purchased vast quantities of Americans' personal data from commercial data brokers without warrants:

  • Senator Ron Wyden confirmed the NSA buys Americans' internet browsing records from data brokers
  • The Defense Intelligence Agency purchased and used location data from Americans' phones
  • Defense contractors purchased location data from Muslim prayer apps, dating apps, and other sources
  • The CDC spent $420,000 on location data to track compliance with COVID movement restrictions
  • A data broker collected location data from apps on 390+ million devices, grouping users into audiences like "Christian church goers" and "wealthy and not healthy"

The government's legal position has been that buying commercially available data doesn't constitute a "search" under the Fourth Amendment, despite the Supreme Court's 2018 Carpenter decision holding that seven or more days of cell-site location data requires a warrant. The Fourth Amendment Is Not For Sale Act has been introduced multiple times to close this loophole but has not passed.

Bottom line: When the Pentagon says "all lawful purposes," the legal aperture encompasses analysis of commercially purchased location data, browsing histories, financial transactions, and social media data — exactly the data types the Pentagon reportedly sought from Anthropic.

How AI Changes the Equation

Traditional surveillance was constrained by human analyst bandwidth. AI fundamentally changes the economics in ways that make this qualitatively different from historical surveillance programs:

Scale: Pre-AI, analyzing the communications, movements, and financial transactions of millions of Americans required an army of analysts. AI reduces the marginal cost of analyzing one additional person toward zero. A system that can process bulk commercial data on 300+ million Americans becomes feasible not just for collection (which already occurs) but for meaningful analysis and pattern detection.

Cross-referencing: AI excels at finding patterns across disparate data sources — connecting location data with financial transactions with social media activity with communication patterns. This transforms individually innocuous data points into comprehensive behavioral profiles.

Predictive capability: AI can identify patterns predictive of future behavior — including political organizing, donation patterns, and activist networks forming. This enables preemptive targeting rather than reactive investigation.

Automated selective enforcement: The current pattern of targeting political opponents requires human prosecutors to identify targets and build cases. AI could automate target identification — flagging every opposition donor, organizer, or activist with any technical legal vulnerability and generating investigative leads at industrial scale.
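The economics argument above can be made concrete with a back-of-envelope comparison. Every number below is an illustrative assumption (analyst throughput, loaded labor cost, tokens per profile, model pricing), not a sourced figure; the point is the rough cost ratio, not the absolute values:

```python
# Back-of-envelope: human analysts vs. AI for profiling a whole population.
# All parameters are illustrative assumptions, not sourced figures.

POPULATION = 300_000_000  # adults with bulk commercial data available

# Human baseline: assume one analyst reviews ~20 dossiers/day,
# 250 working days/year, at ~$400/day loaded cost.
analyst_throughput_per_year = 20 * 250          # 5,000 dossiers/analyst/year
analyst_cost_per_year = 400 * 250               # $100,000/analyst/year
analysts_needed = POPULATION / analyst_throughput_per_year
human_cost = analysts_needed * analyst_cost_per_year

# AI baseline: assume ~50k tokens to summarize one person's bulk data,
# at ~$1 per million tokens of model inference.
tokens_per_profile = 50_000
cost_per_million_tokens = 1.00
ai_cost = POPULATION * tokens_per_profile / 1_000_000 * cost_per_million_tokens

print(f"Analysts needed:     {analysts_needed:,.0f}")
print(f"Human cost per year: ${human_cost:,.0f}")
print(f"AI cost (one pass):  ${ai_cost:,.0f}")
print(f"Cost ratio:          {human_cost / ai_cost:,.0f}x")
```

Even if these assumed parameters are off by an order of magnitude, the human approach requires a workforce the size of a large federal agency, while a full AI pass costs roughly what a small analyst team does — which is the sense in which marginal cost falls "toward zero."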

For quantitative modeling of how surveillance suppresses expression and organizing, see Surveillance Chilling Effects Model.

Historical Precedent

The US has a documented history of surveillance infrastructure being built for legitimate purposes, then extended to political targeting:

COINTELPRO (1956-1971): The FBI targeted civil rights leaders, anti-war activists, and others. Tactics included wiretapping Martin Luther King Jr. (8 wiretaps, 16 bugs), sending fabricated letters urging him to commit suicide, planting informants, using IRS audits against political targets, and spreading disinformation to discredit activists.

Nixon-era abuses: The "enemies list" targeted perceived opponents. A secret IRS program ("Special Services Staff") investigated and harassed political opponents with audits. The Huston Plan proposed expanded domestic surveillance including office break-ins.

Post-9/11 bulk collection: NSA bulk phone metadata collection on virtually all Americans. The PRISM program accessed data from major tech companies. FBI "assessments" allowed investigation without factual predicate of illegal activity.

The pattern is consistent: infrastructure justified for legitimate purposes (national security, counterterrorism, fighting crime) is extended to political targeting. The Church Committee found that a combination of perceived security threats, easy access to damaging personal information, and perceived ineffectiveness of traditional methods led "law enforcers to become law breakers."

Threat Models for 2026

Scenario 1: Chilling Effect (~40-60% probability, already visible)

The most likely scenario doesn't require active deployment against specific individuals. If political organizers, donors, journalists, and activists know (or believe) the government has AI-powered analysis of their personal data, many will self-censor. The administration's public destruction of Anthropic for resisting surveillance — combined with documented targeting of 100+ political opponents — creates a credible deterrent.

The DOGE surveillance of federal workers already demonstrates this mechanism in action: managers told employees to "be careful what you say, what you type, and what you do."

Impact: Reduced opposition organizing, fewer donations to opposition causes, less willingness to participate in activism. Difficult to measure but potentially significant at the margins.

Scenario 2: Selective Investigation and Prosecution at Scale (~25-40%)

Using AI to analyze bulk data, the administration identifies opposition figures with legal vulnerabilities — tax irregularities, regulatory violations, immigration issues, financial anomalies. These leads are used for targeted investigations and prosecutions, continuing the current pattern but at industrial scale.

Impact: Neutralizes opposition leaders and compounds chilling effects. Already happening manually at smaller scale.

Scenario 3: Voter Suppression Through Targeted Disinformation (~20-35%)

AI-generated content, informed by detailed behavioral profiles, is used to suppress opposition voter turnout through micro-targeted messaging designed to demoralize specific demographic groups, create confusion about voting procedures, or manufacture artificial social consensus against opposition candidates.

How this differs from generic disinformation: The combination of centralized citizen data (the infrastructure documented above) with AI targeting creates a qualitatively new capability. Rather than broadcasting propaganda and hoping it reaches persuadable voters, this scenario uses detailed behavioral profiles — built from the very data the administration is centralizing — to identify specific individuals and craft personalized suppression messages.

Operational model:

  1. Targeting: AI analyzes the centralized citizen database cross-referenced with commercial data (browsing histories, social media activity, purchase patterns, location data from data brokers) to identify opposition-leaning voters in competitive districts who are most susceptible to demobilization — particularly low-propensity voters whose participation is uncertain.

  2. Message generation: For each target demographic, AI generates tailored content:

    • For voters anxious about immigration: fake "official" notices about polling place changes in their area
    • For voters skeptical of institutions: cynicism-amplifying content ("both parties are the same — why bother?")
    • For minority communities: AI-translated messages with subtly incorrect voting instructions (wrong dates, wrong ID requirements)
    • For younger voters: AI-generated social media posts from fake local accounts expressing voting nihilism
  3. Delivery: Distributed through AI-generated robocalls (despite the FCC's February 2024 ban — enforcement is reactive), targeted social media ads via shell organizations, AI text messages, and fake local news sites. Concentrated in the 48-72 hours before Election Day when corrections cannot reach affected voters.

Precedent: The 2016 Internet Research Agency operation included targeted demobilization of Black voters — what internal documents called "deterrence" campaigns. Cambridge Analytica similarly ran "deterrence" targeting using Facebook data. Both were crude by AI standards (human-written, template-based). AI-powered operations would be orders of magnitude more personalized and harder to attribute.

Quantitative estimate: A 2026 PNAS study — the most rigorous to date — tracked over 10,000 participants who installed an app capturing every ad they viewed for six weeks before the 2016 election. Participants who saw vote-suppressing Facebook messages were 1.9% less likely to actually vote. Nonwhite voters in nonwhite-majority counties in battleground states were 14.2% less likely to vote compared to white voters in non-battleground areas who didn't see suppression ads. Extrapolated nationally, the researchers estimated approximately 4.7 million people may have been kept from voting — and this was from crude, human-written 2016-era content. AI-personalized suppression at scale would likely produce larger effects.

Enforcement gap: The New Hampshire Biden robocall incident (January 2024, targeting 25,000 Democratic primary voters) demonstrated the basic capability — and the enforcement failure. The FCC fined consultant Steve Kramer $6 million and New Hampshire filed 22 criminal charges (11 felony counts of voter suppression, up to 7 years each, plus 11 counts of impersonating a candidate). But a jury acquitted Kramer on all charges in June 2025 — and he publicly stated he would not pay the FCC fine. The precedent: even in a clear-cut case with an identified perpetrator, criminal enforcement failed. AI-powered operations at scale, potentially routed through shell organizations or foreign actors, would be far harder to prosecute.

Impact: Could measurably reduce turnout in targeted demographics. Research shows people perform only slightly better than chance at identifying AI-generated content.

Scenario 4: Voter Roll Purges via Citizenship Database (~5-15%)

Using the national citizenship data system (which links voter rolls with immigration, Social Security, and other databases), the administration purges eligible voters or creates barriers to registration, particularly in opposition-leaning areas.

The infrastructure already exists: The DHS/DOGE national citizenship data system (described in "Data Centralization" above) links voter rolls with Social Security records, immigration databases, and driver's license data. This is precisely the infrastructure needed for algorithmic voter roll purges — and it is being built right now, not hypothetically.

How AI enables this at scale:

  1. Automated eligibility challenges: Software already exists for this. EagleAI, developed by a Georgia activist and promoted through Trump ally Cleta Mitchell's Election Integrity Network, compiles data from secretary of state records, USPS change-of-address data, obituaries, and property records to auto-generate voter challenge forms. Approximately 100,000 challenges have been filed in Georgia using this tool and similar approaches by a network of about a dozen conservative activists. The Brennan Center warns these tools rely on "datasets not designed to determine voter eligibility," producing "huge numbers of false positives." True AI-powered systems — cross-referencing the national citizenship database — could operate at 100x this scale.

  2. False-positive citizenship matching: Name-matching algorithms have well-documented error rates that disproportionately affect minority communities:

    • Hispanic surnames: patronymic naming patterns create false cross-state matches (e.g., "Maria Garcia" flagged in multiple states)
    • Asian names: transliteration variations between documents (e.g., different romanizations of the same Chinese name)
    • Hyphenated and compound names: mismatched across databases
    • Common names in minority communities: "James Brown," "Jose Rodriguez" generate many false matches
  3. Targeted purge geography: AI could identify which precincts to prioritize for purges based on partisan lean — focusing eligibility challenges on opposition-leaning areas while leaving friendly precincts untouched. This is harder to detect than uniform purges because the mechanism appears neutral (citizenship verification) even as the application is partisan.
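The false-positive problem described above is easy to reproduce. The sketch below uses Python's standard-library difflib with an 80% similarity threshold — echoing the tolerance reportedly used in Florida in 2000, though the vendor's actual algorithm was never published — on invented names:

```python
# Why a loose "80% similar" name-matching rule over-flags voters.
# The names below are invented examples, not real records.
from difflib import SequenceMatcher

def loose_match(a: str, b: str, threshold: float = 0.80) -> bool:
    """Flag two voter-roll names as 'the same person' if similar enough."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

pairs = [
    ("john smith sr", "john smith jr"),          # father and son: flagged as same
    ("mario garcia", "maria garcia"),            # two different people: flagged as same
    ("maria lopez garcia", "maria garcia lopez"),# same person, names reordered: missed
]
for a, b in pairs:
    print(f"{a!r} <-> {b!r}: match={loose_match(a, b)}")
```

The rule flags a father and son, and two different people, as the same voter, while missing the same person under a patronymic reordering — the dual failure mode (false positives plus false negatives) that makes loose string matching unsuitable for eligibility decisions.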

Historical precedent:

  • Florida 2000: The state contracted Database Technologies (DBT) to purge felons from voter rolls before the presidential election. The matching algorithm was set to an extremely broad tolerance — "80% match" on names — producing thousands of false positives disproportionately affecting Black voters. The election was decided by 537 votes.
  • Interstate Crosscheck: The Crosscheck program compared voter rolls across 27 states using only first name, last name, and date of birth. Research by Stanford, Harvard, and Microsoft found a 99% false-positive rate. The program was used to justify purges in multiple states before being shut down due to data security violations and legal challenges.
  • Ohio 2015-2016: The state purged approximately 40,000 voters from rolls in three of the largest counties (Cuyahoga, Franklin, Hamilton) using a "use it or lose it" policy that removed voters who hadn't voted in recent elections. A Reuters analysis found the purges disproportionately affected minority and low-income neighborhoods.
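Crosscheck's extreme false-positive rate follows from simple birthday-problem arithmetic: among the many people who share a common full name, exact date-of-birth collisions are expected by chance alone. The name counts and DOB-space size below are assumptions for illustration, not figures from the Stanford/Harvard/Microsoft research:

```python
# Expected coincidental full-name + date-of-birth collisions among voters,
# birthday-problem style. Parameters are illustrative assumptions.
from math import comb

def expected_collisions(n_same_name: int, dob_space: int) -> float:
    """Expected pairs of *different* people sharing a full name and exact DOB,
    assuming birthdates are uniform over dob_space possible dates."""
    return comb(n_same_name, 2) / dob_space

DOB_SPACE = 365 * 60  # assume ~60 birth-year cohorts in the electorate

for name, count in [("John Smith", 30_000),
                    ("Jose Rodriguez", 15_000),
                    ("a rare name", 50)]:
    print(f"{name:>15}: ~{expected_collisions(count, DOB_SPACE):,.0f} chance matches")
```

Under these assumptions, a name shared by 30,000 voters yields on the order of 20,000 coincidental name-plus-DOB pairs — none of them the same person — which is why a three-field match like Crosscheck's is structurally unsound without additional identifiers such as SSN digits or address history.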

What AI adds: Previous purge operations were limited by the crude matching algorithms available and the human labor required to process challenges. AI enables:

  • Much larger scale (millions of voter records cross-referenced simultaneously)
  • More plausible-seeming justifications (AI generates individualized "evidence" for each challenge)
  • Faster processing (challenges generated and filed in days rather than months)
  • Better targeting (AI identifies which purges will have the most partisan impact)

Legal landscape: The National Voter Registration Act requires states to maintain accurate rolls but prohibits systematic purges within 90 days of a federal election. However, the "citizenship verification" framing may circumvent this restriction — courts have generally allowed citizenship-based challenges even close to elections. The Supreme Court's 2018 Husted v. A. Philip Randolph Institute ruling upheld Ohio's aggressive purge practices, providing legal cover for similar programs.

Impact: Could disenfranchise tens of thousands of eligible voters through false-positive citizenship matches. The combination of the national citizenship database (already under construction) with AI-powered matching and partisan targeting creates a voter suppression mechanism that appears procedurally neutral while producing racially and politically disparate outcomes.

Scenario 5: Comprehensive Digital Authoritarianism (~5-10%)

Full deployment of a China-style AI surveillance apparatus with behavioral monitoring and systematic suppression of opposition organizing. Would represent a fundamental transformation of American governance.

Impact: Would effectively end competitive elections. Extremely unlikely near-term due to institutional, legal, and cultural resistance, but the infrastructure being assembled lowers the barrier over time. See AI-Enabled Authoritarian Takeover for the structural endpoint.

Probability Estimates

| Scenario | Probability | Timeframe |
| --- | --- | --- |
| Administration wants AI for partisan surveillance | ≈90% | Already evident |
| AI surveillance of government employees (already happening) | ≈95% | Current |
| Data centralization creates comprehensive citizen database | ≈75% | 6-18 months |
| AI analysis of bulk commercial data deployed for intelligence | ≈50-60% | 12-24 months |
| AI surveillance materially affects 2026 midterm outcomes | ≈15-30% | November 2026 |
| Measurable chilling effect on opposition organizing | ≈50-65% | Already beginning |
| Courts effectively constrain surveillance deployment | ≈30-40% | Ongoing |
| Democrats win House in 2026 (from betting markets) | ≈69-84% | November 2026 |

These are subjective probability estimates based on available evidence as of March 2026. The novelty of the situation means historical base rates are less informative than usual and uncertainty bands should be wide.

Countervailing Forces

Courts: Federal courts have pushed back on various administration actions. A federal judge found that SSA likely violated privacy laws in giving DOGE access to data. Multiple legal experts have called the Anthropic "supply chain risk" designation "almost surely illegal." However, the judiciary's ability to constrain classified surveillance programs has historically been limited.

Electoral dynamics: As of March 2026, betting markets suggest Democrats have approximately 69-84% probability of winning the House. The leading Polymarket scenario is split government (R Senate, D House) at 43%, followed by Democratic sweep at 40%. Republican retention of both chambers is at only 17-18%. An administration that expects to lose power has less incentive to build permanent surveillance infrastructure — but also greater urgency to use it before losing access.

Civil society: Multiple organizations are actively challenging surveillance overreach through litigation and advocacy: ACLU, EFF, Anthropic's own lawsuit challenging the supply chain designation, and open letters from 330+ Google and OpenAI employees expressing solidarity with Anthropic's position.

Technical and institutional friction: The federal government has a historically poor track record of deploying new technology effectively. DOGE's own track record includes significant errors (Veterans Affairs contract analysis mistakes, Agriculture Department staff terminations during bird flu outbreaks). Building a functioning AI surveillance apparatus is substantially harder than building a data centralization infrastructure.

Bipartisan resistance: Even some conservative voices have criticized the administration's approach. Former Trump AI policy advisor Dean Ball called Hegseth's Anthropic designation "a psychotic power grab" and "almost surely illegal." Conservative activist Catherine Engelbrecht expressed discomfort about data centralization: "Such centralization of data poses a threat to individual freedoms and privacy."

Key Uncertainties

What would increase the risk:

  • Courts declining to intervene on citizenship database or data sharing
  • OpenAI's Pentagon contract terms proving weaker than Anthropic's in practice
  • Administration successfully deploying AI analysis of commercial data before November 2026
  • Additional AI companies capitulating to "all lawful purposes" demands

What would decrease the risk:

  • Anthropic winning its supply chain designation lawsuit
  • Congressional passage of the Fourth Amendment Is Not For Sale Act
  • Democratic House win in 2026 enabling oversight
  • Technical failures or high-profile errors in DOGE AI systems undermining credibility
  • Whistleblower disclosures prompting public backlash

Biggest unknown: Whether the infrastructure being assembled will be used for electoral manipulation, or whether it remains a latent capability that future administrations inherit. Even if the current administration exercises restraint, the infrastructure outlasts any single president — and rebuilding dismantled oversight is harder than destroying it.

See Also

  • Anthropic-Pentagon Standoff (2026) — The specific incident that crystallized the surveillance dispute
  • Mass Surveillance — Global context for AI-enabled surveillance
  • AI-Enabled Authoritarian Takeover — The structural endpoint if these trends continue
  • Authoritarian Tools — AI tools used for political repression globally
  • Surveillance Chilling Effects Model — Quantitative modeling of surveillance impact on behavior
