AI Now Institute
AI Now Institute at NYU is a pioneer in researching the social and political implications of AI. Founded by Meredith Whittaker and Kate Crawford in 2017, it produces influential annual reports on AI industry power dynamics, has driven policy action on facial recognition bans, and shapes debate on algorithmic accountability and workers' rights in the age of AI.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Policy Influence | Very High | Research cited in federal and state AI legislation, facial recognition moratorium campaigns |
| Research Impact | Very High | Annual reports widely cited in media and policy circles, foundational framing on AI harms |
| Industry Critique | Very High | Strongest academic voice on AI industry concentration and corporate power |
| Public Profile | High | Regular media presence, Meredith Whittaker is Signal president and prominent AI policy voice |
| Independence | Very High | University-affiliated, philanthropically funded, no industry funding |
| Thematic Focus | Concentrated | Algorithmic bias, surveillance, labor, corporate power, rights-based AI governance |
Organization Details
| Attribute | Details |
|---|---|
| Founded | 2017 |
| Location | New York University, New York City |
| Structure | University-affiliated research institute |
| Key Founders | Meredith Whittaker (now President of Signal), Kate Crawford (USC Annenberg) |
| Executive Director | Amba Kak |
| Industry Funding | Explicitly refuses corporate/tech industry funding (since 2022) |
| Website | ainowinstitute.org |
| Focus Areas | Algorithmic bias, surveillance, labor and automation, corporate AI power, rights-based governance |
Overview
The AI Now Institute was founded in 2017 at New York University as one of the first academic research institutes dedicated to studying the social implications of artificial intelligence. Co-founded by Meredith Whittaker (a former Google researcher who later became president of Signal) and Kate Crawford (a leading scholar on AI and society), AI Now quickly became one of the most influential voices in the AI ethics and governance space.
AI Now occupies a distinctive position in the AI policy landscape: rather than focusing on long-term existential risk or technical alignment, it emphasizes the present-day harms of AI systems — algorithmic discrimination, surveillance expansion, labor displacement, and the concentration of power in a small number of technology companies. This "here and now" framing has been enormously influential in shaping how policymakers, journalists, and civil society understand AI risks.
Annual Reports
AI Now's annual reports have become essential reading for anyone tracking AI governance. Each report maps the current state of AI industry power, identifies emerging risks, and proposes actionable policy responses. Key themes across reports include:
- Corporate concentration: Documenting how a handful of companies control AI infrastructure, data, and talent
- Algorithmic accountability: Proposing frameworks for auditing and regulating automated decision-making systems
- Surveillance: Mapping the expansion of facial recognition, predictive policing, and workplace monitoring
- Labor impacts: Researching how AI affects workers through automation, algorithmic management, and the gig economy
Policy Impact
AI Now's research has contributed to concrete policy outcomes:
- Facial recognition moratoriums: Their advocacy and research supported city-level bans on government facial recognition use in San Francisco, Oakland, and other cities
- Algorithmic accountability legislation: Their frameworks influenced proposed federal legislation on algorithmic impact assessments
- Worker surveillance: Research on workplace monitoring has informed labor policy discussions and union organizing strategies
- EU AI Act: Their analysis of high-risk AI systems influenced European regulatory frameworks
Key People and Network
Meredith Whittaker remains closely associated with AI Now despite her role as Signal president, and continues to be one of the most prominent public voices on AI governance globally. Kate Crawford's book Atlas of AI (2021) extended AI Now's analysis into a broader critique of AI's material infrastructure and power dynamics. The Institute maintains close ties to civil liberties organizations, labor unions, and other rights-based advocacy groups.
Key Dynamics
Rights-based framing: AI Now's approach is distinctly rights-based, emphasizing civil liberties, labor rights, and democratic accountability rather than utilitarian risk frameworks. This positions them differently from the Center for AI Safety or GovAI, which focus more on catastrophic and existential risk.
Industry critique: AI Now is notable for its willingness to directly critique major technology companies, including questioning the effectiveness of industry self-regulation and of voluntary AI safety commitments coordinated through bodies such as the Partnership on AI.
Complementarity with safety community: While AI Now and the AI safety community sometimes differ on priorities (present harms vs. future risks), there is increasing convergence on issues like corporate power concentration, the need for independent auditing, and regulatory infrastructure for AI governance.