Reworked - Can We Trust Tech Companies to Regulate Generative AI?
A 2023 journalistic piece questioning the effectiveness of industry-led AI governance, relevant to debates about self-regulation versus independent oversight of frontier AI models.
Metadata
Importance: 38/100 | Type: news article | Topic: news
Summary
This article examines the Frontier Model Forum (FMF), an industry self-regulatory body created by Microsoft, Anthropic, Google, and OpenAI. Experts argue that profit-driven companies cannot effectively self-regulate AI safety and that independent oversight with international governmental leadership is essential. The piece highlights the gap between AI development pace and governmental regulatory capacity.
Key Points
- The Frontier Model Forum was created by major AI companies to ensure the safe and responsible development of frontier AI models through industry self-regulation.
- Expert Andrew Rogoyski compares industry self-regulation to "putting the foxes in charge of the chicken coop," arguing the AI industry is too immature to self-regulate.
- Critics argue that independent bodies, not suppliers, should set safety standards, conduct audits, and enforce accountability.
- Governments are described as significantly behind the pace of AI development, with regulatory gaps in data protection, copyright, and corporate accountability.
- Rogoyski suggests the FMF should commission independent studies from academia and safety agencies to gain credibility.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Frontier Model Forum | Organization | 58.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 15, 2026 | 14 KB
Can the Frontier Model Forum Really Regulate Generative AI?
By David Barry | August 16, 2023 | Information Management
The Frontier Model Forum was created by tech companies to try to regulate the development of generative AI. But can these companies do this effectively? Running parallel to the development of generative AI and its use in the workplace is another discussion that is becoming increasingly important for enterprises: What will be the impact of new AI technologies on business and on society as a whole?
As concerns surrounding the technology and how it may be used in the future grow, a group of tech giants — Microsoft, Anthropic, Google and OpenAI — have gotten together to create the Frontier Model Forum (FMF), an industry body focused on ensuring the safe and responsible development of frontier AI models.
A look into what that means and where we go from here.
Frontier Models
The Frontier Model Forum defines frontier models as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks.
Following its creation, the Forum's next task is to establish an advisory board to help guide its strategy and priorities, along with institutional arrangements, including a charter, governance and funding, with a working group and executive board to lead these initiatives.
In a statement announcing the creation of the Forum, Anna Makanju, vice president of global affairs with OpenAI, said that it is essential for the safe development of these models that those who are developing them are working from a common base.
“It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” she said, stressing the urgency of the work and underlining the Forum's unique positioning to act quickly to advance the state of AI safety.
Can Generative AI Self-regulate?
While the Forum is a needed step, the question is: Is it realistic to expect the very companies that are developing these technologies to effectively regulate their development? As the market develops and competition to produce the most effective models intensifies, is it not likely that these companies will push beyond whatever boundaries the Forum creates?
The Frontier Model Forum is laudable in its aims but isn’t by any means the whole answer to safety concerns on AI, Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey told Reworked. “The AI industry isn’t mature enough to be allowed to self-regulate,” he said, likening the effort to "putting the foxes in charge of the chicken coop."
... (truncated, 14 KB total)
Resource ID: 2181aa136128f378 | Stable ID: NTM1NTE5MT