
Elizabeth Kelly: Leading America's AI Safety Institute (TIME 100 Most Influential in AI 2024)


Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: TIME

Profile of the director of the U.S. AI Safety Institute, relevant for understanding the institutional landscape of AI safety governance and federal efforts to evaluate and manage risks from advanced AI systems.

Metadata

Importance: 45/100
Type: news article
Tags: news

Summary

A TIME profile of Elizabeth Kelly, who leads the U.S. AI Safety Institute (AISI) at NIST, highlighting her role in shaping federal AI safety policy and evaluation frameworks. The article likely covers her background, priorities, and vision for responsible AI development and government oversight.

Key Points

  • Elizabeth Kelly directs the U.S. AI Safety Institute, the primary federal body focused on AI safety evaluation and standards
  • AISI plays a central role in coordinating government efforts to assess risks from advanced AI systems
  • Her leadership reflects growing U.S. government commitment to institutionalizing AI safety practices
  • The profile situates AI safety governance within broader political and policy debates about AI regulation
  • Kelly represents a key figure bridging technical AI safety concerns and federal policy implementation

Cited by 2 pages

Page                          Type          Quality
US AI Safety Institute        Organization  91.0
AI Safety Institutes (AISIs)  Policy        69.0

Cached Content Preview

HTTP 200 | Fetched Mar 15, 2026 | 6 KB
Elizabeth Kelly: The 100 Most Influential People in AI 2024 | TIME

By Tharin Pillay. Pillay is an editorial fellow at TIME.

Photo-Illustration by TIME (Source: Courtesy of Elizabeth Kelly)

Last November, at the world’s first AI Safety Summit, both the British and American governments announced the formation of AI safety institutes, intended to test cutting-edge AI systems for potential risks.

Appointed to lead America’s AI Safety Institute (AISI) was Elizabeth Kelly, a lawyer by training, who has been pivotal in shaping the Biden administration’s approach to technology policy. She was one of the lead drafters of Biden’s AI executive order.

 As new and increasingly capable AI models continue to be released, Kelly and her team, which includes top computer scientists, ethicists, and anthropologists, are on the frontlines helping the government discern which systems are safe. The U.S. AISI’s vision is ambitious: to advance the science, practice, and adoption of AI safety, addressing a range of risks in the process. “Our view is that safety enables trust, which enables adoption, which enables innovation,” Kelly says.


 TIME spoke with Kelly to understand how she’s approaching the task ahead. 

 (This interview has been condensed and edited for clarity.) 

 TIME: Part of the institute's job is to address what exactly ‘AI safety’ means. What does the term mean to you? 

Elizabeth Kelly: I'd say that we take a really broad view of AI safety. We are focused on the risks of AI, both in the here and now, in terms of potential harms to privacy, synthetic content harms, and what that means for millions of people; as well as the risks to national security and to public safety, as outlined in the executive order.

 We're very much aware that the risks are evolving as AI evolves. We view our job as advancing the science of AI safety, and a large part of that is really understanding the capabilities that are emerging, the risks that are emerging, and keeping our eye on the horizon.

 Your strategy document states that “safety is key to unlocking innovation.” Do you think that safety and innovation are in tension? 

 I really don't. Our view is that safety enables trust, which enables adoption, which enables innovation. And we all have a shared goal in making sure that we're able to fully enjoy the transformative benefits that AI offers, like individualized education, new drug discovery and development, early detection of illnesses—I could go on. But to make sure that that happens, we really need to ensure that we are seeing safe AI development. Enabling safe innovation is our North Star here at the Safety Institute.

 There’s been prior reporting about NIST’s offices being in pretty rough shape—things like mold, leaks, and pests. Do you have what you need to get your work done at AISI? 

 We absolutely have the resources that we need to fulfill our mandate under the executive order and to really start our wor

... (truncated, 6 KB total)
Resource ID: 0694bc71bc9daac0 | Stable ID: OGQ5NDJiZD