Longterm Wiki

OpenAI Preparedness Framework

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OpenAI

This is OpenAI's official Preparedness Framework page, relevant to discussions of frontier AI governance, deployment safety standards, and how leading labs operationalize risk thresholds before releasing powerful models.

Metadata

Importance: 72/100. Tags: organizational report, reference

Summary

OpenAI's Preparedness initiative outlines a framework for tracking, evaluating, and mitigating catastrophic risks from frontier AI models. It establishes risk thresholds across categories like cybersecurity, CBRN threats, and persuasion, and defines safety standards that must be met before model deployment.

Key Points

  • Defines a structured risk assessment framework categorizing frontier model risks into domains: CBRN, cybersecurity, persuasion/deception, and model autonomy.
  • Establishes 'safety levels' (low, medium, high, critical) for each risk category with deployment thresholds tied to those levels.
  • Mandates pre-deployment and ongoing evaluations ('evals') of frontier models to track risk profiles as capabilities scale.
  • Creates a Preparedness team responsible for conducting red-teaming, evaluations, and maintaining the safety case for model releases.
  • Represents OpenAI's institutional commitment to structured risk governance alongside its broader safety policies.
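The gating structure the key points describe can be sketched as a minimal Python example. The category names follow the framework's four tracked domains, but the level ordering, the dictionary shape, and the "medium or below" deployment threshold are illustrative assumptions based on the framework's public description, not OpenAI's actual implementation.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """Ordered safety levels, so thresholds can be compared numerically."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four tracked risk categories named in the framework.
CATEGORIES = ["cbrn", "cybersecurity", "persuasion", "model_autonomy"]

def can_deploy(post_mitigation: dict) -> bool:
    """Hypothetical deployment gate: every category's post-mitigation
    risk must be 'medium' or lower. Missing categories default to
    CRITICAL, i.e. unevaluated risks block deployment."""
    return all(
        post_mitigation.get(c, RiskLevel.CRITICAL) <= RiskLevel.MEDIUM
        for c in CATEGORIES
    )
```

For instance, a model scored `HIGH` on cybersecurity post-mitigation would fail this gate even if every other category is `LOW`, which matches the framework's idea that deployment thresholds apply per category rather than in aggregate.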

Cited by 6 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 6 KB
Safety & responsibility | OpenAI

Safety

# Safety at every step

We believe in AI’s potential to make life better for everyone, which means making it safe for everyone.

## Teach

We start by teaching our AI right from wrong, filtering harmful content and responding with empathy.

## Test

We conduct internal evaluations and work with experts to test real-world scenarios, enhancing our safeguards.

## Share

We use real-world feedback to help make our AI safer and more helpful.

## Safety doesn’t stop

Building safe AI isn’t one and done. Every day is a chance to make things better. And every step helps anticipate, evaluate, and prevent risk.

Teach

  • Filter data
  • OpenAI Policies
  • Human values

Test

  • Red teaming
  • System cards
  • Preparedness evals

Share

  • Safety committees
  • Alpha / Beta
  • GA
  • Feedback

## How we think about safety and alignment

[Learn more](https://openai.com/safety/how-we-think-about-safety-alignment/)

## Protecting people where it matters most

We work with industry leaders and policymakers to reduce harm and protect people across critical areas.

#### Child safety

#### Private information

#### Deep fakes

#### Bias

#### Elections

## Learn more

[Introducing parental controls (Product, Sep 29, 2025)](https://openai.com/index/introducing-parental-controls/)


... (truncated, 6 KB total)
Resource ID: 90a03954db3c77d5 | Stable ID: NDI1NDdlNz