MacArthur Foundation - Prioritizing Safety and Rights in AI
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: MacArthur Foundation
A philanthropic perspective from the MacArthur Foundation on AI governance risks, useful for understanding how major funders frame civil society priorities around AI safety and national security policy debates.
Metadata
Summary
Eric Sears of the MacArthur Foundation argues that the dominant 'AI arms race' framing between the U.S. and China, driven by corporate and government interests, risks sidelining critical human rights and safety oversight. He warns that without proper safeguards, AI deployed in national security and surveillance could erode democracy and enable authoritarian outcomes. The Foundation is funding civil society organizations working at the intersection of AI, national security, and geopolitics to build more human-centered governance frameworks.
Key Points
- The 'AI arms race' narrative between the U.S. and China benefits corporations and investors but risks marginalizing safety and rights-based oversight mechanisms.
- AI systems used in national security, surveillance, and warfare can produce misleading outputs with profound consequences if deployed without proper oversight.
- Growing ties between Silicon Valley and Washington create conflicts of interest that may shape AI regulation in favor of industry over the public interest.
- The MacArthur Foundation calls for broader expertise, stronger civil society networks, global perspectives, and a shared human-centered lexicon in AI governance.
- There is a real risk that existing safety and rights-based AI regulations will be swept aside during periods of geopolitical instability and rapid decision-making.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| MacArthur Foundation | Organization | 65.0 |
Cached Content Preview
Prioritizing Safety and Rights in AI Technology
January 30, 2025
Perspectives
Technology in the Public Interest
Eric Sears, Director, Technology in the Public Interest, shares insights on what is needed to ensure AI governance centers human rights, community, and safety.
Artificial intelligence (AI) has become a significant animating force of national security and geopolitics. Over the last few years, a dangerous “AI arms race” narrative has taken center stage, whereby the United States and China are in competition to achieve increasingly advanced AI as a means towards global superiority. Leading AI companies and technology executives in the U.S. often embrace this narrative, and with good reason: they and their investors stand to significantly profit as advanced AI systems are industry driven and owned.
While deep connections between Silicon Valley and Washington, DC, have long existed, ties are growing closer than ever as technology companies and their leaders pursue valuable government contracts and seek to shape AI policy and regulation in their interest. The military, intelligence agencies, and law enforcement are increasingly integrating AI technologies into surveillance, threat assessment, warfare, and for other purposes. This ensures AI will play an even more active role in shaping foreign and domestic policy moving forward.
Yet we know that AI systems can provide misleading and inaccurate information that could have profound national security implications. Moreover, without proper oversight and protections, AI technologies raise a range of security and human rights risks that could erode, not advance, democracy and national security. The specter of AI working toward authoritarian ends looms large.
Without proper oversight and protections, AI technologies raise a range of security and human rights risks that could erode, not advance, democracy and national security.
Throughout 2024, the intersection of AI, national security, and geopolitics became increasingly complex. There has been some progress in establishing safeguards to help ensure AI systems used for national security purposes uphold democratic values and include safety and rights-based considerations. However, we are entering what could be a time of deep geopolitical instability, fueled by precipitous decision-making. In such a context, there is a real risk that rules and regulation governing AI, that are m
... (truncated, 5 KB total)