Erosion of Human Agency

Comprehensive analysis of AI-driven agency erosion across domains: 42.3% of EU workers under algorithmic management (EWCS 2024), 70%+ of Americans consuming news via social media algorithms, and documented 2-point political polarization shifts from algorithmic exposure (Science 2024). Covers mechanisms from data collection through cognitive dependency, with quantified impacts in employment (75% ATS screening), healthcare (30-40% algorithmic triage), and credit (Black/Brown borrowers 2x+ denial rates).

Severity: Medium-high
Likelihood: High
Timeframe: 2030
Maturity: Neglected
Type: Structural
Status: Already occurring
Related risks: AI-Induced Enfeeblement · AI Mass Surveillance · Sycophancy

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Severity | High | Affects 4B+ social media users; 42.3% of EU workers under algorithmic management (EWCS 2024) |
| Likelihood | High (70-85%) | Already observable across social media, employment, credit, and healthcare domains |
| Timeline | Present-2035 | Human-only tasks projected to drop from 47% to ≈33% by 2030 (McKinsey 2025) |
| Reversibility | Low | Network effects, infrastructure lock-in, and skill atrophy create path dependency |
| Trend | Accelerating | 60% of workers will require retraining by 2027; 44% of skills projected obsolete within 5 years (WEF 2024) |
| Global exposure | 40-60% of employment | IMF estimates 40% of jobs globally and 60% in advanced economies face AI disruption |
| Detection difficulty | High | 67% of users believe AI increases their autonomy while objective measures show a reduction |

Overview

Human agency—the capacity to make meaningful choices that shape one's life—faces systematic erosion as AI systems increasingly mediate, predict, and direct human behavior. Unlike capability loss, erosion of agency concerns losing meaningful control even while retaining technical capabilities.

For comprehensive analysis, see Human Agency, which covers:

  • Five dimensions of agency (information access, cognitive capacity, meaningful alternatives, accountability, exit options)
  • Agency benchmarks by domain (information, employment, finance, politics, relationships)
  • Factors that increase and decrease agency
  • Measurement approaches and current state assessment
  • Trajectory scenarios through 2035

How It Works

Agency erosion operates through multiple reinforcing mechanisms that compound over time. Research from the Centre for International Governance Innovation identifies a core paradox: "AI often creates an illusion of enhanced agency while actually diminishing it."

The Agency Erosion Cycle

Stage 1: Data Collection and Behavioral Profiling

AI systems accumulate detailed behavioral profiles through continuous monitoring. Social media platforms track 2,000-3,000 data points per user (Privacy International), while workplace algorithmic management systems monitor keystrokes, screen time, and communication patterns. This creates fundamental information asymmetry: systems know more about users than users know about themselves.

Stage 2: Algorithmic Mediation of Choices

Once behavioral patterns are established, AI systems increasingly mediate decisions:

| Domain | Mediation Rate | Mechanism | Source |
|---|---|---|---|
| News consumption | 70%+ of Americans via social media | Algorithmic feeds replace editorial curation | Pew Research 2022 |
| Job applications | 75% screened by ATS | Automated filtering before human review | Harvard Business School |
| Credit decisions | 80%+ use algorithmic scoring | Black-box models determine access to capital | Urban Institute 2024 |
| Healthcare triage | 30-40% of hospitals | Risk algorithms prioritize care allocation | Science 2019 |

Stage 3: Preference Shaping and Behavioral Modification

Algorithmic systems don't merely respond to preferences—they actively shape them. A 2025 study in Philosophy & Technology demonstrates "how the absence of reliable failure indicators and the potential for unconscious value shifts can erode domain-specific autonomy both immediately and over time."

Key mechanisms include:

  • Recommendation loops: YouTube's algorithm drives 70% of watch time through recommendations that optimize for engagement, not user welfare
  • Default effects: Opt-out organ donation increases consent from ~15% to 80%+, demonstrating the power of choice architecture
  • Filter bubbles: Users exposed to algorithmically curated content show 2+ point shifts in partisan feeling (Science 2024)
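The recommendation-loop mechanism can be sketched as a toy simulation: an engagement-optimizing recommender gives more extreme content a small scoring bonus, and each consumed item pulls the user's preference toward it. The scoring function, catalog, and every parameter here are illustrative assumptions, not a model of any real platform.

```python
def recommend(preference, catalog, engagement_bias):
    """Score items by relevance to the user's current preference, plus a
    bonus for extremity, which this toy model assumes drives engagement."""
    def score(item):
        relevance = 1.0 - abs(item - preference)
        extremity = abs(item - 0.5) * 2  # 0 at the centre, 1 at the poles
        return relevance + engagement_bias * extremity
    return max(catalog, key=score)

def simulate_drift(engagement_bias, steps=200, learning_rate=0.05):
    """Each consumed item pulls the user's preference slightly toward it."""
    preference = 0.55                      # start slightly off-centre
    catalog = [i / 20 for i in range(21)]  # items spread over [0, 1]
    for _ in range(steps):
        item = recommend(preference, catalog, engagement_bias)
        preference += learning_rate * (item - preference)
    return preference

neutral = simulate_drift(engagement_bias=0.0)  # relevance-only: no drift
biased = simulate_drift(engagement_bias=0.6)   # engagement-weighted: drifts toward the pole
```

With a pure relevance objective the simulated user stays where they started; once the extremity bonus outweighs the relevance gradient, the same update rule walks the preference to the edge of the catalog. The point is structural, not quantitative: optimizing engagement rather than welfare turns the feedback loop into a ratchet.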

Stage 4: Cognitive Dependency and Skill Atrophy

Extended AI reliance produces measurable cognitive effects. Research on generative AI users shows "symptoms such as memory decline, reduced concentration, and diminished analysis depth" (PMC 2024). Users who rely heavily on AI have "fewer opportunities to commit knowledge to memory, organize it logically, and internalize concepts."

Stage 5: Lock-in and Reduced Exit Options

As dependency deepens, switching costs increase. Network effects (social graphs, recommendation histories), data portability barriers, and skill atrophy create structural lock-in. Users increasingly lack both the capability and the practical alternatives to opt out.
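The lock-in dynamic can be made concrete with a minimal sketch, assuming a linear value function and arbitrary weights (all invented for illustration): a user switches platforms only when the alternative's value exceeds the incumbent's by more than the switching cost, and a new entrant starts with none of the user's contacts.

```python
def platform_value(quality, network_share, network_weight=2.0):
    """Perceived value = intrinsic quality + network effects.
    The linear form and the weight are illustrative assumptions."""
    return quality + network_weight * network_share

def would_switch(incumbent_quality, alternative_quality, incumbent_share,
                 switching_cost=0.3):
    """Switch only if the alternative beats the incumbent by more than the
    switching cost; a new entrant holds none of the user's social graph."""
    gain = (platform_value(alternative_quality, 0.0)
            - platform_value(incumbent_quality, incumbent_share))
    return gain > switching_cost

# A better product wins when the incumbent has no network advantage...
open_market = would_switch(1.0, 1.5, incumbent_share=0.0)    # True
# ...but loses once the incumbent holds the user's whole social graph.
locked_in = not would_switch(1.0, 1.5, incumbent_share=1.0)  # True
```

The design point is that the alternative's higher intrinsic quality (1.5 vs. 1.0) is swamped by the incumbent's network term, which is exactly the path dependency described above.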


Risk Assessment

| Dimension | Assessment | Notes |
|---|---|---|
| Severity | High | Threatens democratic governance foundations |
| Likelihood | Medium-high | Already observable in social media, expanding to more domains |
| Timeline | 2-10 years | Critical mass of life domains affected |
| Trend | Accelerating | Increasing AI deployment in decision systems |
| Reversibility | Low | Network effects create strong lock-in |

Current Manifestations

| Domain | Users/Scale | Agency Impact | Evidence |
|---|---|---|---|
| YouTube | 2.7B users | Recommendations drive 70% of watch time | Google Transparency Report |
| Social media | 4B+ users | 13.5% of teen girls report worsened body image from Instagram | WSJ Facebook Files |
| Criminal justice | 1M+ defendants/year | COMPAS affects sentencing with documented racial bias | ProPublica |
| Employment | 75% of large companies | Automated screening with hidden criteria | Reuters |
| Consumer credit | $1.4T annually | Algorithmic lending with persistent discrimination | Berkeley researchers |

Workforce Agency Under Algorithmic Management

The 2024 European Working Conditions Survey (EWCS) found that 42.3% of EU workers are now subject to algorithmic management, with significant variation by country (from 27% in Greece to 70% in Denmark).

| Management Function | Algorithmic Control | Worker Impact | Evidence |
|---|---|---|---|
| Task allocation | 45-60% of gig workers | Reduced discretion over work selection | Annual Reviews 2024 |
| Performance monitoring | Real-time tracking | "Digital panopticon" effects; constant surveillance | EWCS 2024 |
| Schedule optimization | 35-50% of shift workers | Basic needs neglected (food, bathroom breaks) | Swedish transport study |
| Productivity targets | Algorithmic quotas | Increased stress, reduced autonomy | PMC 2024 |

Research using German workplace data found that "specific negative experiences with algorithmic management—such as reduced control, loss of design autonomy, privacy violations, and constant monitoring—are more strongly associated with perceptions of workplace bullying than the mere frequency of algorithmic management usage" (Reimann & Diewald 2024).

Bias and Discrimination in Automated Decisions

| Domain | Bias Finding | Affected Population | Source |
|---|---|---|---|
| Healthcare algorithms | Black patients needed to be "much sicker" to receive the same care recommendations | Millions of patients annually | Science 2019 |
| Hiring AI | Amazon's tool systematically downgraded resumes containing words like "women's" | All female applicants | Reuters 2018 |
| Mortgage lending | Black and Brown borrowers 2x+ more likely to be denied | Millions of loan applicants | Urban Institute 2024 |
| Age discrimination | Workday AI screening lawsuit allowed to proceed (ADEA) | Applicants over 40 | Federal Court 2025 |
| Gender in LLMs | Women associated with "home/family" 4x more often than men | All users of major LLMs | UNESCO 2024 |

Key Erosion Mechanisms

flowchart TD
  subgraph DRIVERS["AI System Drivers"]
      DATA[Data Collection<br/>Behavioral tracking]
      ALGO[Algorithmic Mediation<br/>Recommendation systems]
      PRED[Predictive Modeling<br/>Behavioral forecasting]
  end

  subgraph MECHANISMS["Erosion Mechanisms"]
      ASYM[Information Asymmetry<br/>AI knows more than user]
      CHOICE[Choice Architecture<br/>Nudging and defaults]
      DEPEND[Cognitive Dependency<br/>Skill atrophy]
      SURVEIL[Surveillance<br/>Panopticon effects]
  end

  subgraph IMPACTS["Agency Impacts"]
      AUTO[Reduced Autonomy<br/>Fewer genuine choices]
      MANIP[Preference Manipulation<br/>Shaped desires]
      LOCK[Lock-in Effects<br/>Switching costs]
  end

  DATA --> ASYM
  DATA --> SURVEIL
  ALGO --> CHOICE
  ALGO --> DEPEND
  PRED --> ASYM
  PRED --> MANIP

  ASYM --> AUTO
  CHOICE --> AUTO
  DEPEND --> AUTO
  SURVEIL --> MANIP

  AUTO --> LOCK
  MANIP --> LOCK

  style DRIVERS fill:#e6f3ff
  style MECHANISMS fill:#fff3e6
  style IMPACTS fill:#ffcccc

Information Asymmetry

| AI System Knowledge | Human Knowledge | Impact |
|---|---|---|
| Complete behavioral history | Limited self-awareness | Predictable manipulation |
| Real-time biometric data | Delayed emotional recognition | Micro-targeted influence |
| Social network analysis | Individual perspective | Coordinated shaping |
| Predictive modeling | Retrospective analysis | Anticipatory control |

The Illusion of Enhanced Agency

MIT research found 67% of participants believed AI assistance increased their autonomy, even when objective measures showed reduced decision-making authority. People confuse expanded options with meaningful choice.


Democratic Implications

| Democratic Requirement | AI Impact | Evidence |
|---|---|---|
| Informed deliberation | Filter bubble creation | Pariser 2011 |
| Autonomous preferences | Preference manipulation | Susser et al. |
| Equal participation | Algorithmic amplification bias | Noble 2018 |
| Accountable representation | Opaque influence systems | Pasquale 2015 |

Voter manipulation: the Cambridge Analytica operation, which drew on data from 87 million Facebook users, claimed that personalized political ads could shift vote share by 3-5%, though independent evidence for effects of that size remains contested.

Recent Research on Algorithmic Political Influence

A 2024 field experiment with 1,256 participants during the US presidential campaign found that algorithmically reranking partisan animosity content shifted out-party feelings by more than 2 points on a 100-point scale. This provides causal evidence that algorithmic exposure directly alters political polarization.
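A minimal sketch of the reranking idea is shown below: items are reordered by engagement score minus a penalty on a separately estimated animosity score. The field names, example scores, and penalty weight are invented for illustration; this is not the study's actual method or scoring model.

```python
def rerank(feed, animosity_penalty):
    """Order a feed by engagement score minus a penalty on a (hypothetical)
    per-item animosity score; both scores are assumed given upstream."""
    return sorted(
        feed,
        key=lambda item: item["engagement"] - animosity_penalty * item["animosity"],
        reverse=True,
    )

feed = [
    {"id": "a", "engagement": 0.9, "animosity": 0.8},  # divisive but engaging
    {"id": "b", "engagement": 0.7, "animosity": 0.1},
    {"id": "c", "engagement": 0.5, "animosity": 0.0},
]

engagement_order = [item["id"] for item in rerank(feed, animosity_penalty=0.0)]
reranked_order = [item["id"] for item in rerank(feed, animosity_penalty=1.0)]
# engagement_order == ["a", "b", "c"]; reranked_order == ["b", "c", "a"]
```

Under a pure engagement objective the divisive item tops the feed; with the penalty applied it drops to the bottom, which is the kind of exposure change the field experiment manipulated.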

The EU's Digital Services Act (DSA), fully applicable since February 2024, mandates that very large online platforms assess the risks their systems pose to democratic values and fundamental rights, including civic discourse and freedom of expression.

Researchers have identified that "the individual is therefore deprived of at least some of their political autonomy for the sake of the social media algorithm" (SAGE Journals 2025).


Key Uncertainties

| Uncertainty | Range of Views | Why It Matters |
|---|---|---|
| Net welfare effect | Some argue AI expands effective choice; others that it narrows meaningful autonomy | Determines whether regulatory intervention is warranted |
| Reversibility | Optimists: skills can be relearned; pessimists: cognitive atrophy is cumulative | Affects urgency of intervention |
| Defense-offense balance | Can transparency and user-control tools offset manipulation capabilities? | Shapes policy approach (prohibition vs. empowerment) |
| Measurement | How to operationalize "meaningful agency" vs. "mere choice"? | Without measurement, progress cannot be tracked |
| Individual variation | Are some populations (youth, elderly, low digital literacy) more vulnerable? | Targeted vs. universal protections |
| Technological trajectory | Will agentic AI (2025-2030) dramatically accelerate agency erosion, or will it plateau? | Planning horizons for governance |

Critical Research Questions

  1. Threshold effects: Is there a critical level of algorithmic mediation beyond which recovery becomes impractical?
  2. Intergenerational transmission: Will children raised with AI assistants develop fundamentally different agency capacities?
  3. Collective agency: Can coordinated user action restore agency, or do network effects make individual resistance futile?
  4. Alternative architectures: Are there AI system designs that could enhance rather than erode agency?

Responses That Address This Risk

| Response | Mechanism | Status |
|---|---|---|
| AI Governance | Regulatory frameworks | EU AI Act in force |
| Human-AI Hybrid Systems | Preserve human judgment | Active development |
| Responsible Scaling | Industry self-governance | Expanding adoption |
| Algorithmic transparency | Explainability requirements | US EO 14110 |

See Human Agency for detailed intervention analysis.

Sources

Core Research

  • WSJ Facebook Files
  • MIT: Illusion of enhanced agency
  • Susser et al.: Preference manipulation
  • Autonomy by Design: Preserving Human Autonomy in AI Decision-Support - Philosophy & Technology 2025
  • The Silent Erosion: How AI's Helping Hand Weakens Our Mental Grip - CIGI

Policy and Governance

  • EU AI Act
  • Digital Services Act - European Commission
  • Reranking partisan animosity in algorithmic social media feeds - Science 2024

References

A Pew Research Center analysis summarizing survey data on American public attitudes toward artificial intelligence as of early 2024. The report covers concerns about AI's societal impact, levels of trust in AI systems, and views on government regulation, revealing widespread anxiety alongside limited understanding of AI technologies.

★★★★☆

Partnership on AI (PAI) is a nonprofit coalition of AI researchers, civil society organizations, academics, and companies working to develop best practices, conduct research, and shape policy around responsible AI development. It brings together diverse stakeholders to address challenges including safety, fairness, transparency, and the societal impacts of AI systems. PAI serves as a coordination hub for cross-sector dialogue on AI governance.

★★★☆☆

This page outlines the European Commission's comprehensive policy framework for AI, centered on promoting trustworthy, human-centric AI through the AI Act, AI Continent Action Plan, and Apply AI Strategy. It aims to balance Europe's global AI competitiveness with safety, fundamental rights, and democratic values. Key initiatives include AI Factories, the InvestAI Facility, GenAI4EU, and the Apply AI Alliance.

★★★★☆

This Guardian investigation exposed how Cambridge Analytica harvested personal data from approximately 50 million Facebook users without their consent, using it to build psychographic profiles for targeted political advertising during the 2016 US election. The scandal revealed systemic vulnerabilities in social media data governance and the potential for AI-driven behavioral manipulation at scale. It became a landmark case in debates around data privacy, algorithmic influence, and the weaponization of personal data.

★★★☆☆
5. Future of Humanity Institute

The official website of the Future of Humanity Institute (FHI), an Oxford University research center that was foundational in establishing the fields of existential risk research and AI safety. FHI closed on 16 April 2024 after approximately two decades of influential work. The site now serves as an archived record of the institution's history, research agenda, and legacy.

★★★★☆

Amazon developed an AI hiring tool (2014-2015) that systematically discriminated against female candidates because it was trained on historically male-dominated resume data, teaching itself to prefer male applicants. Despite attempts to patch specific biased terms, Amazon disbanded the project after recognizing the system could develop new discriminatory patterns unpredictably. This case illustrates core challenges in algorithmic fairness, bias from training data, and the difficulty of ensuring interpretability in deployed ML systems.

★★★★☆
7. Study by Berkeley researchers (haas.berkeley.edu)

This resource link points to a UC Berkeley working paper on algorithmic lending, but the page is no longer accessible (404 error). The paper likely examined fairness, bias, and discrimination issues in automated credit and lending algorithms.

8. Google Transparency Report (transparencyreport.google.com)

Google's Transparency Report provides public data on government requests for user information, content removal and moderation actions, security metrics, and privacy-related enforcement across Google's products and services. It aims to foster accountability by disclosing how government and corporate policies affect user privacy, security, and access to information online.

9. Meta's internal research (The Wall Street Journal)

This WSJ article (part of the 'Facebook Files' series) reported on Meta's internal research showing Instagram was aware of harms to teenage mental health, particularly among girls, yet prioritized engagement over user wellbeing. The page is currently a 404, but the original reporting drew on leaked internal documents to expose how the platform suppressed or ignored its own findings.

★★★★☆
10. Research by Helen Toner (CSET Georgetown)

A research publication by Helen Toner at Georgetown's Center for Security and Emerging Technology (CSET) reviewing the state of AI governance in 2024. It likely surveys major policy developments, regulatory trends, and governance challenges that emerged during the year across different jurisdictions and institutions.

★★★★☆
11. AI Now Institute

The AI Now Institute is a leading research center studying the social and political dimensions of artificial intelligence, with a focus on accountability, power structures, and policy interventions. It produces reports, briefings, and analysis examining how AI systems affect labor, civil rights, and democratic governance. The institute advocates for regulatory frameworks that protect public interests from concentrations of corporate AI power.

★★★★☆

Executive Order 14110, signed by President Biden on October 30, 2023, established comprehensive federal directives for AI safety, security, and governance in the United States. It required safety testing and reporting for frontier AI models, directed agencies to address AI risks across sectors including national security and civil rights, and aimed to position the US as a global leader in responsible AI development. The page content is currently unavailable, but the order is a landmark AI governance document.

★★★★☆

This resource links to a Harvard University Press catalog page that returns a 404 error, so the actual content of the book cannot be verified. Based on the citation 'Pasquale 2015' and the existing tags, this likely refers to Frank Pasquale's work on algorithmic systems, autonomy, and manipulation, possibly 'The Black Box Society.' No substantive content is accessible.

Safiya Umoja Noble's 2018 book examines how search engine algorithms, particularly Google's, embed racial and gender biases that systematically disadvantage women of color. Noble argues that the combination of commercial interests, monopolistic market structures, and unexamined design choices produces discriminatory search results that reinforce harmful stereotypes. The work challenges the myth of algorithmic neutrality and calls for structural reform of how discoverability and information access are governed online.

16. Race After Technology (ruhabenjamin.com)

Ruha Benjamin's 2019 book examines how automated systems and algorithmic technologies encode and perpetuate racial bias, introducing the concept of the 'New Jim Code' to describe how discriminatory social hierarchies are embedded in ostensibly neutral technical systems. The book argues that technology is never a neutral extension of society but always reflects and amplifies existing inequalities, while also offering frameworks for resistance and equitable design.

17. Research by addiction specialists (Nature, peer-reviewed · Steven W. Gust · 2015 · Paper)

This study analyzes data from over 120,000 respondents across 126 countries to examine how societal-level factors influence vaccine confidence. The research demonstrates that macro-level trust in science significantly predicts individual vaccine confidence, independent of personal scientific trust. Importantly, the strength of social consensus around trust in science moderates this relationship—in countries with strong consensus about science's trustworthiness, the link between individual trust in science and vaccine confidence is substantially stronger than in countries with weaker consensus.

★★★★★

California's SB-362 (Delete Act), signed into law October 2023, strengthens data broker regulations by requiring registration with the California Privacy Protection Agency and establishing a single accessible mechanism for consumers to request deletion of their personal information from all registered data brokers simultaneously. The law builds on CCPA/CPRA frameworks to give individuals greater practical control over personal data held by commercial data brokers.

19. White House AI Bill of Rights (White House · Government)

This URL previously hosted the White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights, a non-binding framework outlining five principles to protect Americans in the age of AI systems. The page is currently returning a 404 error, suggesting the resource has been moved or removed. The Blueprint covered safe systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives.

★★★★☆

The homepage of the UN Office of the High Commissioner for Human Rights (OHCHR), the principal UN body mandated to promote and protect human rights globally. It publishes press releases, reports, and statements on human rights issues worldwide, including recent coverage of AI and society, racial discrimination, and conflict-related displacement.

The resource URL returned a 404 error and no content could be retrieved. The book 'The Shallows' by Nicholas Carr argues that internet use is reshaping human cognition, reducing capacity for deep reading and sustained concentration in favor of shallow, distracted processing.

22. Partnership on AI framework (Partnership on AI)

The Partnership on AI's Algorithmic Impact Assessment (AIA) framework provides structured guidance for organizations to evaluate potential harms and benefits of AI systems before and during deployment. It offers a systematic approach to identifying affected stakeholders, assessing risks, and establishing accountability mechanisms for algorithmic decision-making systems.

★★★☆☆
23. Democracy and Technology (Cambridge University Press, peer-reviewed)

This Cambridge University Press book examines the complex relationship between democratic governance and emerging technologies, exploring how digital systems and AI affect political participation, power distribution, and civic autonomy. It analyzes risks of technological manipulation of democratic processes and proposes frameworks for maintaining democratic values in an increasingly automated world.

★★★★★

ProPublica's landmark 2016 investigative report exposing racial bias in the COMPAS algorithm used to predict recidivism risk in criminal sentencing. The investigation found that Black defendants were nearly twice as likely as white defendants to be falsely flagged as future criminals, while white defendants were more often incorrectly labeled low risk. This work sparked a major public debate about algorithmic fairness, transparency, and accountability in high-stakes automated decision-making.

The Wall Street Journal's 'Facebook Files' investigation exposes Facebook's secret 'XCheck' (cross-check) system that exempted high-profile users from standard content moderation rules, creating a two-tiered enforcement system. The reporting reveals how Facebook's internal research documented harms caused by its platforms while leadership suppressed or ignored findings. This investigation raises fundamental questions about corporate accountability, algorithmic manipulation, and the gap between public claims and internal practices at major social media platforms.

★★★★☆
26. Cambridge Analytica case study (Nature, peer-reviewed · Metehan Kandemir · 2023 · Paper)
★★★★★

This resource from Trends in Cognitive Sciences appears to examine human agency, autonomy, and susceptibility to manipulation from a cognitive science perspective. The research likely investigates how humans make decisions and maintain or lose autonomous control, with implications for understanding influence and manipulation. The truncated URL and missing content limit full analysis.

This resource from the Behavioral Economics encyclopedia explains 'friction' as intentional obstacles or resistance introduced into decision-making processes to slow impulsive choices. It covers how friction can be used as a design tool to nudge behavior, with research suggesting it can reduce impulsive decisions by approximately 15%. The concept has implications for both beneficial choice architecture and potential manipulation.

The Center for AI Safety (CAIS) is a research organization focused on mitigating catastrophic and existential risks from advanced AI systems. It conducts technical research, publishes surveys and statements, and supports field-building efforts across academia and industry. CAIS is notable for its broad coalition-building, including its widely-cited statement on AI extinction risk signed by leading researchers.

★★★★☆

This UK government collection page documents the implementation of the Online Safety Act, which establishes legal duties for online platforms to protect users from harmful content. It covers regulatory frameworks, codes of practice, and Ofcom's enforcement role in making the internet safer, particularly for children and vulnerable users.

★★★★☆
31. Research by Rudin and Radin, 2019 (Nature, peer-reviewed · Paper)

Rudin and Radin argue that black box ML models are inappropriate for high-stakes decisions in healthcare, criminal justice, and similar domains, and that post-hoc explanation methods are an insufficient remedy. They advocate instead for designing inherently interpretable models from the outset, distinguishing sharply between explainability and true interpretability.

★★★★★

YouTube's 2024 Creator Economy Report examines the scale and economic impact of content creators on the platform, highlighting trends in monetization, audience engagement, and the growing professionalization of creator-driven media. The report provides data on how creators are building livelihoods and the platform's role in supporting this ecosystem.

EUR-Lex is the official portal for accessing European Union law, including regulations, directives, and legislative documents. It serves as the authoritative repository for EU legal texts relevant to AI governance, digital regulation, and policy frameworks such as the EU AI Act.

★★★★☆

A JAMA Oncology study examining AI system performance in a medical diagnostic context, finding that the AI performed poorly compared to human clinicians. The study raises questions about the readiness of AI systems for high-stakes medical deployment and the risks of over-relying on automated systems in critical healthcare decisions.

The UK government's foundational AI regulatory framework document outlining a principles-based, pro-innovation approach to AI governance. It establishes five cross-sectoral principles—safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress—to guide responsible AI development and deployment without imposing rigid legislation initially.

★★★★☆

This Pew Research Center survey examines how American teenagers use social media and technology platforms in 2022, highlighting shifts in platform popularity, usage frequency, and teen attitudes toward social media's impact on their lives. It provides empirical data on how platforms like TikTok, YouTube, and Instagram dominate teen attention, raising concerns about autonomy and manipulation by algorithmic systems.

★★★★☆
37. Nudge: The Final Edition (yalebooks.yale.edu)

The updated and expanded edition of the landmark book by Richard Thaler and Cass Sunstein introducing 'libertarian paternalism' and the concept of 'nudging'—designing choice architectures that guide people toward better decisions while preserving freedom of choice. The final edition incorporates new research and addresses critiques, extending the framework to modern challenges including digital platforms and AI-driven recommendation systems. It remains the foundational text for understanding how environmental design and defaults shape human behavior.

Eli Pariser's 'The Filter Bubble' examines how personalization algorithms on platforms like Google and Facebook create invisible information silos, showing users only content that reinforces their existing beliefs. This limits exposure to diverse perspectives and undermines informed democratic participation. The concept is foundational for understanding algorithmic manipulation of human attention and epistemic autonomy.

This McKinsey article from 2016 appears to cover strategies for retailers to adapt to rapidly changing consumer behaviors and expectations. The content is inaccessible due to an access denial error, so a full summary cannot be provided.

★★★☆☆

Project Texas is TikTok's initiative to address U.S. national security concerns by storing American user data domestically and restricting access by ByteDance employees in China. The project outlines technical and organizational commitments to protect user data sovereignty and limit foreign influence over content moderation and recommendation algorithms. It represents a major platform governance effort responding to regulatory pressure over AI-driven content systems.

This MIT study by Cass Sunstein and colleagues examines how algorithmic systems and AI can manipulate human decision-making, potentially undermining individual autonomy and rational agency. The paper explores the ethical and governance implications of AI-driven persuasion and nudging, raising concerns about the boundary between legitimate influence and manipulation. It provides a framework for evaluating when algorithmic interventions cross into ethically problematic territory.

★★★☆☆

This paper develops a philosophical account of manipulation, arguing it involves covert influence that bypasses rational agency. The authors analyze how digital technologies enable novel and pervasive forms of manipulation, with implications for autonomy, privacy, and democratic governance. They propose a framework for identifying and addressing online manipulation.

★★★☆☆
43. McKinsey State of AI 2025 (McKinsey & Company)

McKinsey's annual survey-based report tracking enterprise AI adoption, investment trends, and organizational practices across industries. It provides data on how companies are deploying AI, where value is being generated, and emerging risks and governance challenges associated with scaling AI systems.

★★★☆☆

This CIGI article examines how increasing reliance on AI tools may gradually erode human cognitive abilities, critical thinking, and mental autonomy. It explores the psychological and societal risks of cognitive offloading to AI systems, arguing that convenience-driven AI adoption could undermine human agency and reasoning capacity over time.

45. Obermeyer et al. (Science, peer-reviewed · Z. Obermeyer, Brian W. Powers, C. Vogeli & S. Mullainathan · 2019 · Paper)

Obermeyer et al. (2019) demonstrate significant racial bias in a widely used commercial health algorithm that affects millions of patients. The bias arises because the algorithm uses health care costs as a proxy for health needs, but due to unequal access to care, less money is spent on Black patients with equivalent health needs. This causes the algorithm to systematically underestimate illness severity in Black patients—at the same risk score, Black patients are considerably sicker than White patients. The authors show that reformulating the algorithm to directly predict health needs rather than costs eliminates this racial bias and would increase the percentage of Black patients identified for additional care from 17.7% to 46.5%.

★★★★★
46. EU Digital Services Act (European Union)

The Digital Services Act (DSA) is binding EU legislation establishing accountability and transparency rules for digital platforms operating in Europe, covering social media, marketplaces, and app stores. It introduces protections including content moderation transparency, minor safeguards, algorithmic feed controls, and ad transparency requirements. The DSA represents a major regulatory framework shaping how AI-driven platforms operate and moderate content at scale.

★★★★☆

Related Wiki Pages

Top Related Pages

Approaches

Responsible Scaling Policies

Analysis

Automation Bias Cascade Model · Economic Disruption Structural Model · Autonomous Cyber Attack Timeline · Preference Manipulation Drift Model

Risks

AI Flash Dynamics · Automation Bias (AI Systems) · AI-Driven Economic Disruption · AI-Enabled Authoritarian Takeover · AI Preference Manipulation · AI-Induced Cyber Psychosis

Concepts

Structural Overview · Persuasion and Social Manipulation