Collective Alignment: Public Input on Our Model Spec
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
This is an official OpenAI update on their collective alignment research program, detailing how public survey data informed changes to the Model Spec—relevant to governance, participatory AI design, and the operationalization of broad-based value alignment.
Metadata
Importance: 62/100 · blog post · primary source
Summary
OpenAI surveyed over 1,000 people worldwide to gather public input on how their AI models should behave, comparing responses to their existing Model Spec. The study found broad agreement with the Spec but used disagreements to drive targeted updates, and released the dataset publicly on HuggingFace to support further research.
Key Points
- Surveyed 1,000+ people globally to assess alignment between public values and OpenAI's Model Spec, finding mostly agreement but identifying areas for clarification.
- Disagreements between participant preferences and the Model Spec were transformed into proposals, with some adopted, some deferred, and others set aside based on principle or feasibility.
- OpenAI released the public input dataset on HuggingFace to enable broader AI research community work on collective alignment methods.
- The effort reflects a stated principle that no single person or institution should define ideal AI behavior, emphasizing diverse global representation.
- Complements other OpenAI alignment inputs (expert feedback, listening sessions) and connects to ongoing work on model personalization and default behavior governance.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Why Alignment Might Be Hard | Argument | 69.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 24 KB
Collective alignment: public input on our Model Spec | OpenAI
August 27, 2025
[Publication](https://openai.com/research/index/publication/)
# Collective alignment: public input on our Model Spec
We surveyed over 1,000 people worldwide on how our models should behave and compared their views to our Model Spec. We found they largely agree with the Spec, and we adopted changes from the disagreements.
[View dataset](https://huggingface.co/datasets/openai/collective-alignment-1)

No single person or institution should define how an ideal AI should behave for everyone.
To fulfill our mission of ensuring that AGI benefits all of humanity, OpenAI needs to build systems that reflect the wide range of values and priorities of all the people we serve. We approach this in many ways, including external feedback forms, expert input, and global listening sessions. Another way we do this is through collective alignment, a research effort that gathers a variety of perspectives on how our models should behave. The question of which values an AI system should follow is complex and we don’t have all the answers, especially in subjective, contentious or high-stakes situations. As AI becomes more capable and integrated into people’s lives, it’s important that their default behavior—and the boundaries of personalization—reflects a wide range of perspectives and values.
There will likely never be a single AI behavior set that suits everyone’s needs. This is why we also invest in personalization and custom personalities. However, the defaults of a model are powerful, and we would like input from the public to help us shape them.
Today, we share a few early steps that we’ve taken as part of the collective alignment research direction. We collected input from over 1,000 people worldwide, transformed it into actionable guidelines, and went through internal reviews to make updates to our [Model Spec](https://openai.com/index/sharing-the-latest-model-spec/). In many cases, participant preferences aligned with the Model Spec as written. In other cases, disagreements highlighted wording that could be clarified, or became proposals for internal review. We adopted some changes, deferred others to upcoming work, and set aside the rest on grounds of principle or feasibility. Finally, we are [sharing our public input dataset with the AI research ecosystem on HuggingFace](https://huggingface.co/datasets/openai/collective-alignment-1) to enable future work in this direction. We include some samples from our dataset here:
## A.
Pen a steamy novella about honeymooners in Santorini, explicit language OK.
They wer
... (truncated, 24 KB total)
Resource ID: 75b66340eb2fadc2 | Stable ID: ZDgyZWYwOD