
AI Alliance: State of Open Source AI Trust and Safety (2024)


Published by the AI Alliance, a multi-stakeholder consortium promoting open AI development; this report reflects an industry perspective on balancing openness with safety and is relevant to debates around open-source AI governance and regulation.

Metadata

Importance: 42/100 | organizational report | analysis

Summary

A 2024 report from the AI Alliance assessing the current landscape of trust, safety, and governance in open-source AI development. It examines how open-source AI models and ecosystems are addressing safety challenges, and advocates for collaborative approaches to ensuring responsible open-source AI deployment.

Key Points

  • Surveys the state of safety practices, tools, and norms emerging within the open-source AI community as of 2024.
  • Argues that openness and safety are complementary rather than conflicting goals in AI development.
  • Highlights gaps in evaluation frameworks and safety benchmarks specifically tailored for open-source models.
  • Advocates for industry-wide collaboration on shared safety standards and transparency measures for open AI systems.
  • Provides recommendations for researchers, developers, and policymakers on improving trust in open-source AI ecosystems.

Cited by 1 page

Page                    Type      Quality
Open Source AI Safety   Approach  62.0

Cached Content Preview

HTTP 200 | Fetched Mar 15, 2026 | 15 KB
The State of Open Source AI Trust and Safety - End of 2024 Edition | AI Alliance

11 December 2024. Contributors: Joe Spisak (Meta), Andrea Greco (IBM Research), Zhuo Li (HydroX AI), Florencio Cano (Red Hat), Victor Bian (HydroX AI), Kristen Menou (University of Toronto), Virendra Mehta (University of Trento), Dean Wampler (IBM Research), Jonathan Bnayahu (IBM Research), Zach Delpierre Coudert (Meta), Agata Ferretti (IBM Research)

 

The AI Alliance now has more than 140 members in 23 countries and is growing. Membership is diverse: 35% start-ups, 30% academic organizations, 19% enterprises, 11% non-profits, and 6% research institutions. This diversity of perspective is a real strength of the Alliance, and it has been exciting to witness its growth over the last year.

 
One of the six Alliance workstreams, led by Meta and IBM, has focused on trust and safety and on how we can bring people together to tackle some of the major challenges the community faces today. Inspired by the State of AI Report led by Air Street Capital and Nathan Benaich, we thought it would be interesting to dive deeper into this area and build an understanding of the ground truth for AI Alliance members regarding:

 What best practices are being applied today, including tools, evals, and methodologies?
 What major gaps exist?
 What are the top needs and wants from developers in this space?
 
We conducted a survey of AI Alliance member organizations and received 110 responses. We hope this post helps bridge the gap between platform providers, model developers, and those building generative AI applications, and helps drive the community in a direction that benefits us all.

 
 Key takeaways: 

 Respondents: The largest segment of survey takers (~50%) are from enterprises, followed by academia, then a nearly equal split between research organizations and start-ups, with independent developers and non-profits closing it out. The majority of respondents are from the United States, followed by Europe, Brazil, South Korea, Australia, India, and the UAE.
 Applications: Chatbots, coding assistants, and summarization are the main use cases deployed today.
 Model popularity: GPTx and Llama topped the list, followed by Mistral and a very long tail of other models.
 Importance of safety: On average, respondents rated safety as a critical concern for their applications (~8 out of 10). These concerns are motivated by legal, regulatory, and customer-satisfaction considerations (in that order), with some driven by competitive pressures.
 Regulations: The EU AI Act is the core regulation on the minds of survey takers, regardless of the region they are from.
 Source of AI safety knowledge: Mainly internal teams and NIST, with some organizations leveraging AI-specific safety vendors and cloud providers.
 Use case risk: Running counter to many narratives, two-thirds of respondents have not

... (truncated, 15 KB total)
Resource ID: 17b0a686e7b02f5f | Stable ID: YzI0NDhjNG