# Center for Democracy and Technology
Founded in 1994, CDT is one of the oldest and most established digital rights organizations working on AI policy. Its AI Governance Lab focuses on algorithmic accountability, automated decision-making in government, AI in education, and workers' rights. Drawing on decades of technology policy expertise, CDT exerts direct influence on US and EU regulatory processes.
## Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Policy Influence | Very High | Decades of relationships with Congress, FTC, and EU institutions |
| Institutional Credibility | Very High | 30+ year track record on technology policy, bipartisan respect |
| AI Focus | Growing | AI Governance Lab is expanding, building on broader digital rights expertise |
| Civil Liberties Lens | Very Strong | AI work firmly grounded in rights-based framework |
| Regulatory Engagement | Very High | Regular participation in rulemakings, legislative drafting, and litigation |
| Technical Capacity | High | Combines technical analysis with legal and policy expertise |
## Organization Details
| Attribute | Details |
|---|---|
| Founded | 1994 |
| Location | Washington, D.C.; Brussels |
| Structure | Nonprofit public interest organization |
| Staff | ≈60 employees (2024) |
| CEO | Alexandra Reeve Givens |
| AI Governance Lab Director | Miranda Bogen (formerly Meta Responsible AI, formerly Upturn) |
| Annual Revenue | ≈$10.5M (2024; 56% foundations, 24% corporate, 10% events) |
| Website | cdt.org |
| Focus Areas | AI governance, digital privacy, free expression, surveillance, government transparency |
## Overview
The Center for Democracy and Technology (CDT) was founded in 1994, during the early commercial internet era, to promote democratic values and civil liberties in the design and governance of digital technologies. Over three decades, CDT has built deep relationships with Congress, the Federal Trade Commission, the European Commission, and other regulatory bodies, becoming one of the most trusted voices on technology policy in Washington.
CDT's AI work builds on this long institutional history of digital rights advocacy. The organization's AI Governance Lab applies CDT's rights-based framework to the specific challenges of artificial intelligence, focusing on how AI systems affect civil liberties, democratic processes, and vulnerable populations.
## AI Governance Lab
CDT's AI Governance Lab focuses on several key areas:
**Algorithmic Accountability**: Research and advocacy for transparency and accountability requirements for AI systems used in consequential decision-making, including hiring, lending, housing, and criminal justice.

**Government Use of AI**: Monitoring and analyzing how federal, state, and local government agencies use AI for decision-making, benefits administration, surveillance, and law enforcement, with advocacy for transparency and oversight requirements.

**AI in Education**: Research on AI tools in educational settings, including concerns about student privacy, algorithmic grading, surveillance, and equitable access to AI-enhanced education.

**Workers' Rights and AI**: Analysis of how AI-driven workplace surveillance, algorithmic management, and automated hiring affect workers, with advocacy for labor protections in an increasingly automated workplace.

**EU AI Engagement**: Through its Brussels office, CDT engages directly with EU AI Act implementation, digital services regulation, and other European AI governance frameworks.
## Legislative and Regulatory Impact
CDT's AI policy impact comes through multiple channels:
- **Congressional testimony**: CDT staff regularly testify before House and Senate committees on AI-related issues
- **FTC engagement**: CDT participates in FTC proceedings on AI and algorithmic fairness
- **Amicus briefs**: CDT files legal briefs in cases involving AI and civil liberties
- **Coalition building**: CDT leads and participates in coalitions of civil society organizations advocating for AI governance
## Key Dynamics
**Rights-based approach**: CDT's AI work is grounded in civil liberties and democratic accountability, distinguishing it from both the consequentialist framing of AI safety organizations like CAIS and the industry-led approach of Partnership on AI.

**Institutional longevity**: CDT's 30+ year track record gives it credibility and relationships that newer AI policy organizations lack, particularly with Congressional offices and regulatory agencies that have worked with CDT on internet, privacy, and surveillance issues for decades.

**Present harms focus**: Like AI Now Institute, CDT focuses primarily on present-day AI harms and governance rather than long-term existential risk, though its work on government AI use and algorithmic accountability has implications for AI governance at any capability level.