Expert Predictions on What's at Stake in AI Policy in 2026
Web Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: TechPolicy.Press
A practitioner-oriented policy commentary from consumer advocacy group Public Citizen, useful for tracking the U.S. political and regulatory context around AI governance as of early 2026, particularly the federal-state dynamic and real-world harm accumulation.
Metadata
Summary
Public Citizen advocates J.B. Branch and Ilana Beller survey the AI policy landscape heading into 2026, cataloging real-world AI harms from 2025 and assessing the political and regulatory battles ahead. The piece highlights Congressional inaction at the federal level contrasted with active state-level legislation, and frames key tensions around who controls AI, who bears its costs, and whether democratic institutions can keep pace with rapid deployment.
Key Points
- By end of 2025, AI harms had become concrete and widespread—including child safety failures, deepfakes in elections, and AI-linked mental health crises.
- Congress passed only one AI-related law in 2025 (the TAKE IT DOWN Act on nonconsensual intimate images), while states were more active with bipartisan legislation.
- The Trump administration's approach to AI policy, including executive orders and a deregulatory posture, is expected to shape federal AI governance in 2026.
- Key 2026 battles include who bears liability for AI harms, federal vs. state regulatory authority, and the role of democratic oversight over AI deployment.
- Synthetic media and deepfakes emerged as a significant political and social threat, used by candidates and public figures in the 2025 election cycle.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Short AI Timeline Policy Implications | Analysis | 62.0 |
| EU AI Act | Policy | 55.0 |
Cached Content Preview
Perspective
# Expert Predictions on What’s at Stake in AI Policy in 2026
J.B. Branch, Ilana Beller / Jan 6, 2026
_J.B. Branch is the Big Tech accountability advocate for Public Citizen’s Congress Watch division, and Ilana Beller leads Public Citizen’s state legislative work relating to artificial intelligence._

US President Donald Trump displays a signed executive order as (L-R) Sen. Ted Cruz (R-TX), Commerce Secretary Howard Lutnick and White House AI and crypto czar David Sacks look on in the Oval Office of the White House on December 11, 2025 in Washington, DC. (Photo by Alex Wong/Getty Images)
For years, debates over the regulation of artificial intelligence required a degree of speculation about its potential harms. But even as the technology continues to evolve, it is clear that by the end of 2025 AI had ceased to be an “emerging” policy issue. Real-world harms are accumulating rapidly, putting pressure on lawmakers to answer the concerns of their constituents. The stage is set for important political and legal battles that will play out in 2026 and will define who controls AI, who bears the costs of its harms, and whether democratic governments and regulators can keep pace.
Indeed, some of 2025’s most revealing moments seemed like scripts from the dystopian science fiction series _Black Mirror_. [Leaked Meta documents](https://www.bbc.com/news/articles/c3dpmlvx1k2o) revealed that executives signed off on allowing AI to have “sensual” conversations with children. In Baltimore, an AI-powered security system [mistook a student’s bag of Doritos for a gun](https://www.cnn.com/2025/10/25/us/baltimore-student-chips-ai-gun-detection-hnk), prompting school administrators to summon the police. An [AI-enabled teddy bear was yanked from store shelves](https://abcnews.go.com/Technology/video/ai-toys-sale-suspended-dangerous-messages-kids-127749242) after reports that it discussed sexual topics and encouraged children to harm their parents. Psychiatrists across the United States increasingly [warned](https://www.wsj.com/tech/ai/ai-chatbot-psychosis-link-1abf9d57?gaa_at=eafs&gaa_n=AWEtsqeWjOqGhA8GRIFm08lVN3RgWpSF0AJfCQcCtu3GGVpUJeidZh9-p8AHRp4I_uc%3D&gaa_ts=6957edfb&gaa_sig=XzV_6DKqzygtk5T7ta5GU3O42nocL0IIFSSXfMm9xErhQhzIxGCG6quUCByx2UDttbW8KM3R3wCU50bWm6qe4w%3D%3D) about the growing problem of AI “psychosis,” even as OpenAI was sued for allegedly [coaching a teen](https://www.techpolicy.press/breaking-down-the-lawsuit-against-openai-over-teens-suicide/) to commit suicide.
Last year, AI-generated synthetic media became even more prevalent in the political arena, as the tools to produce it became easier to use. President Donald Trump openly shared AI-generated images and videos to [ridicule opponents](https://www.nytimes.com/interactive/2025/10/21/business/media/trump-ai-truth-social-no-kings.html). In Virginia, a congressional candidate rec
... (truncated, 23 KB total)