Longterm Wiki

October 2023 code of conduct

web

This is an official G7 Hiroshima Process document released in October 2023, representing a significant intergovernmental effort to establish voluntary AI safety and governance norms for frontier AI developers ahead of formal regulation.

Metadata

Importance: 72/100 | guidance document | primary source

Summary

The Hiroshima Process International Code of Conduct, developed under the G7 Hiroshima AI Process, provides voluntary guidance for organizations developing advanced AI systems including foundation models and generative AI. It outlines specific actions organizations should take across the AI lifecycle—covering risk identification, mitigation, transparency, and accountability—grounded in a risk-based approach and building on OECD AI Principles.

Key Points

  • Voluntary code calling on AI developers to identify, evaluate, and mitigate risks throughout the AI lifecycle using methods such as red-teaming and independent testing.
  • Emphasizes respect for rule of law, human rights, democracy, and non-discrimination in the design, development, and deployment of advanced AI systems.
  • Encourages organizations to establish internal AI governance structures, self-assessment mechanisms, and to support development of external monitoring tools.
  • States that organizations should not develop or deploy advanced AI in ways that undermine democratic values, are particularly harmful to individuals or communities, facilitate terrorism, promote criminal misuse, or pose substantial safety or security risks.
  • Designed as a living document to be updated via inclusive multistakeholder consultations as AI technology and governance approaches evolve.

Cited by 1 page

  • Failed and Stalled AI Proposals (Analysis), quality 63.0

Cached Content Preview

HTTP 200 | Fetched Mar 20, 2026 | 17 KB
# Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems

On the basis of the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI systems, the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems (henceforth "advanced AI systems").

Organizations should follow these actions in line with a risk-based approach.

Organizations that may endorse this Code of Conduct may include, among others, entities from academia, civil society, the private sector, and/or the public sector.

This non-exhaustive list of actions is discussed and elaborated as a living document to build on the existing OECD AI Principles in response to the recent developments in advanced AI systems and is meant to help seize the benefits and address the risks and challenges brought by these technologies. Organizations should apply these actions to all stages of the lifecycle to cover, when and as applicable, the design, development, deployment and use of advanced AI systems.

This document will be reviewed and updated as necessary, including through ongoing inclusive multistakeholder consultations, in order to ensure it remains fit for purpose and responsive to this rapidly evolving technology.

Different jurisdictions may take their own unique approaches to implementing these actions in different ways.

We call on organizations in consultation with other relevant stakeholders to follow these actions, in line with a risk-based approach, while governments develop more enduring and/or detailed governance and regulatory approaches. We also commit to develop proposals, in consultation with the OECD, GPAI and other stakeholders, to introduce monitoring tools and mechanisms to help organizations stay accountable for the implementation of these actions. We encourage organizations to support the development of effective monitoring mechanisms, which we may explore to develop, by contributing best practices.

In addition, we encourage organizations to set up internal AI governance structures and policies, including self-assessment mechanisms, to facilitate a responsible and accountable approach to implementation of these actions and in AI development.

While harnessing the opportunities of innovation, organizations should respect the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and human-centricity, in the design, development and deployment of advanced AI systems.

Organizations should not develop or deploy advanced AI systems in ways that undermine democratic values, are particularly harmful to individuals or communities, facilitate terrorism, promote criminal misuse, or pose substantial ri

... (truncated, 17 KB total)
Resource ID: 4bbbfcce32a89a06 | Stable ID: NjI0N2YyND