Longterm Wiki

MacArthur Foundation - Technology Public Interest Strategy

web

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: MacArthur Foundation

This is the strategic overview page for a major philanthropic funder in AI governance; useful for understanding funding priorities and institutional approaches to public-interest AI oversight.

Metadata

Importance: 42/100 · organizational report · homepage

Summary

The MacArthur Foundation's Technology in the Public Interest program aims to strengthen democratic oversight of AI through evaluation, auditing, and accountability mechanisms centered on public interest. The strategy operates through three grantmaking pillars: building research foundations for AI accountability, advancing AI policy and regulation, and supporting networks in high-stakes sectors. It connects to broader initiatives including Humanity AI and Current AI.

Key Points

  • Overarching goal is to strengthen democratic oversight of AI via evaluation, auditing, and accountability mechanisms that center public interest considerations.
  • Three grantmaking approaches: research base for AI accountability, development/enforcement of AI laws and regulations, and networks in high-stakes sectors.
  • Emphasizes that accountability mechanisms should be grounded in experiences of communities most vulnerable to AI harms.
  • Supports organizations working on AI evaluations/audits, policy frameworks, capacity building, and responsible deployment in healthcare, education, and finance.
  • Connected to MacArthur's Humanity AI commitment ($500M) and Current AI initiative, with openness to emerging issue areas.

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| MacArthur Foundation | Organization | 65.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 9 KB

# Technology in the Public Interest

Strengthening oversight and innovation in the governance of artificial intelligence.

- [Overview](https://www.macfound.org/programs/field-support/technology-public-interest/)
- [Strategy](https://www.macfound.org/programs/field-support/technology-public-interest/strategy)

- [Our Strategy](https://www.macfound.org/programs/field-support/technology-public-interest/strategy#our-strategy)
- [Why We Support](https://www.macfound.org/programs/field-support/technology-public-interest/strategy#why-we-support)
- [Expected Outcomes](https://www.macfound.org/programs/field-support/technology-public-interest/strategy#eligibility)
- [Funding Priorities](https://www.macfound.org/programs/field-support/technology-public-interest/strategy#funding-priorities)
- [Evaluation for Learning](https://www.macfound.org/programs/field-support/technology-public-interest/strategy#evaluation-learning)

### **Our Strategy**

The overarching goal of Technology in the Public Interest is to strengthen democratic oversight and innovation in the governance of artificial intelligence (AI) through the development, adoption, and enforcement of evaluation, auditing, and accountability mechanisms that center public interest considerations.

Our strategy seeks to ensure that AI systems are governed in ways that uphold democratic values, such as transparency, accountability, and public engagement. We emphasize the need for meaningful oversight that measures how AI systems impact people, communities, and society. To that end, we believe that accountability mechanisms should be grounded in the experiences of communities most vulnerable to AI harms. By doing so, we can help ensure the benefits of AI are widely shared.

The goal is advanced through three interrelated grantmaking approaches:

- **Strengthen the research base for public interest-focused AI evaluations, auditing, and accountability in ways that inform policy, practice, and public knowledge.** AI evaluations broadly assess a system’s performance, examining factors such as societal impact, safety, and risk. AI audits, in turn, rigorously review systems, usually against defined standards or regulations, to verify compliance and accountability. We support organizations that create and implement evaluation and audit methods and advance other types of accountability measures for AI systems.
- **Advance the development, implementation, and enforcement of AI laws, policies, and regulations that center public interest considerations.** This includes the development of policy and regulatory frameworks and analysis, capacity building, education, and new ideas for governing AI in the public interest.
- **Support networks that ensure AI deployment in high-stakes sectors pr

... (truncated, 9 KB total)
Resource ID: 5bbe726dda0c03c4 | Stable ID: ZjUwMzc4ZG