GPT-4 - OpenAI Product Page
webCredibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: OpenAI
This is OpenAI's marketing/product page for GPT-4, not the technical report. For safety-relevant technical details, the GPT-4 technical report and system card are more authoritative references. Current tags referencing disinformation and influence operations appear misassigned.
Metadata
Summary
Official OpenAI product page for GPT-4, describing it as their most advanced language model at launch. Highlights safety improvements including being 82% less likely to respond to disallowed content and 40% more likely to produce factual responses than GPT-3.5, achieved through six months of safety-focused training with human feedback and expert collaboration.
Key Points
- GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses vs GPT-3.5
- Safety work included 6 months of alignment research, RLHF from ChatGPT users, and input from 50+ domain experts including AI safety specialists
- GPT-4 was itself used to assist safety research: generating training data, iterating on classifiers, and supporting evaluations
- Known limitations acknowledged include social biases, hallucinations, and vulnerability to adversarial prompts
- Available via ChatGPT Plus and the API; trained on Microsoft Azure AI supercomputers
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| RLHF | Research Area | 63.0 |
| AI Disinformation | Risk | 54.0 |
Cached Content Preview
GPT‑4 | OpenAI
# GPT‑4 is OpenAI’s most advanced system, producing safer and more useful responses
[View GPT-4 research](https://openai.com/index/gpt-4-research/)
Following the research path from GPT, GPT‑2, and GPT‑3, our deep learning approach leverages more data and more computation to create increasingly sophisticated and capable language models.

We spent 6 months making GPT‑4 safer and more aligned. GPT‑4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT‑3.5 on our internal evaluations.
## Safety & alignment
**Training with human feedback** We incorporated more human feedback, including feedback submitted by ChatGPT users, to improve GPT‑4’s behavior. We also worked with over 50 experts for early feedback in domains including AI safety and security.
**Continuous improvement from real-world use** We’ve applied lessons from real-world use of our previous models into GPT‑4’s safety research and monitoring system. Like ChatGPT, we’ll be updating and improving GPT‑4 at a regular cadence as more people use it.
**GPT‑4‑assisted safety research** GPT‑4’s advanced reasoning and instruction-following capabilities expedited our safety work. We used GPT‑4 to help create training data for model fine-tuning and iterate on classifiers across training, evaluations, and monitoring.
## Built with GPT-4

[GPT‑4 deepens the conversation on Duolingo. — Story](https://openai.com/index/duolingo/)

[Be My Eyes uses GPT‑4 to transform visual accessibility. — Story](https://openai.com/index/be-my-eyes/)

[Stripe leverages GPT‑4 to streamline user experience and combat fraud. — Story](https://openai.com/index/stripe/)

[Morgan Stanley wealth management deploys GPT‑4 to organize its vast knowledge base. — Story](https://openai.com/index/morgan-stanley/)