OpenAI Preparedness Framework
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
This is OpenAI's official Preparedness Framework page, relevant to discussions of frontier AI governance, deployment safety standards, and how leading labs operationalize risk thresholds before releasing powerful models.
Metadata
Importance: 72/100 · organizational report · reference
Summary
OpenAI's Preparedness initiative outlines a framework for tracking, evaluating, and mitigating catastrophic risks from frontier AI models. It establishes risk thresholds across categories like cybersecurity, CBRN threats, and persuasion, and defines safety standards that must be met before model deployment.
Key Points
- Defines a structured risk assessment framework categorizing frontier model risks into domains: CBRN, cybersecurity, persuasion/deception, and model autonomy.
- Establishes 'safety levels' (low, medium, high, critical) for each risk category, with deployment thresholds tied to those levels (see the sketch after this list).
- Mandates pre-deployment and ongoing evaluations ('evals') of frontier models to track risk profiles as capabilities scale.
- Creates a Preparedness team responsible for conducting red-teaming and evaluations and for maintaining the safety case for model releases.
- Represents OpenAI's institutional commitment to structured risk governance alongside its broader safety policies.
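To make the gating logic concrete, here is a minimal sketch (not OpenAI's code) of an evals-based deployment gate, assuming the framework's stated rule that a model may be deployed only if its post-mitigation risk is 'medium' or lower in every tracked category. The names `RiskLevel`, `CATEGORIES`, and `can_deploy` are illustrative, not drawn from the framework itself.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """The framework's four safety levels, ordered so they can be compared."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

# Risk categories tracked by the framework (illustrative identifiers).
CATEGORIES = ("cbrn", "cybersecurity", "persuasion", "model_autonomy")

def can_deploy(post_mitigation_scores: dict[str, RiskLevel]) -> bool:
    """Deployment gate: every tracked category must score MEDIUM or below
    after mitigations (assumed deployment threshold)."""
    assert set(post_mitigation_scores) == set(CATEGORIES), "score every category"
    return all(score <= RiskLevel.MEDIUM
               for score in post_mitigation_scores.values())

# Example: a model evaluated as high-risk in cybersecurity is blocked.
scores = {
    "cbrn": RiskLevel.LOW,
    "cybersecurity": RiskLevel.HIGH,
    "persuasion": RiskLevel.MEDIUM,
    "model_autonomy": RiskLevel.LOW,
}
print(can_deploy(scores))  # False: cybersecurity exceeds the MEDIUM threshold
```

Using an `IntEnum` keeps the levels ordinal, so the threshold check is a simple comparison rather than a lookup table; the key point the sketch captures is that the gate is conjunctive across categories, so a single above-threshold score blocks deployment.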
Cited by 6 pages
| Page | Type | Quality |
|---|---|---|
| AI Capability Threshold Model | Analysis | 72.0 |
| Instrumental Convergence Framework | Analysis | 60.0 |
| Corporate AI Safety Responses | Approach | 68.0 |
| Evals-Based Deployment Gates | Approach | 66.0 |
| Third-Party Model Auditing | Approach | 64.0 |
| Voluntary AI Safety Commitments | Policy | 91.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 6 KB
Safety & responsibility | OpenAI
# Safety at every step
We believe in AI’s potential to make life better for everyone, which means making it safe for everyone.
## Teach
We start by teaching our AI right from wrong, filtering harmful content and responding with empathy.
## Test
We conduct internal evaluations and work with experts to test real-world scenarios, enhancing our safeguards.
## Share
We use real-world feedback to help make our AI safer and more helpful.
## Safety doesn’t stop
Building safe AI isn’t one and done. Every day is a chance to make things better. And every step helps anticipate, evaluate, and prevent risk.
- Teach: filter data, OpenAI policies, human values
- Test: red teaming, system cards, preparedness evals
- Share: safety committees, alpha/beta, general availability (GA), feedback
## How we think about safety and alignment
[Learn more](https://openai.com/safety/how-we-think-about-safety-alignment/)
## Protecting people where it matters most
We work with industry leaders and policymakers to reduce harm and protect people across critical areas.

#### Child safety
#### Private information
#### Deep fakes
#### Bias
#### Elections
## Learn more

[Introducing parental controls](https://openai.com/index/introducing-parental-controls/) (Product, Sep 29, 2025)
Resource ID: 90a03954db3c77d5 | Stable ID: NDI1NDdlNz