GPQA Diamond
Category: Reasoning
Graduate-level Google-Proof Q&A, Diamond subset: extremely difficult multiple-choice questions in physics, chemistry, and biology that even domain experts struggle with.
Models tested: 34
Best score: 91.3%
Median score: 62.5%
Scoring: accuracy
Introduced: 2023-11
Maintainer: David Rein et al.
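GPQA Diamond is scored by plain accuracy: the fraction of multiple-choice questions where the model's chosen option matches the answer key. A minimal sketch of that metric (function and variable names are illustrative, not the official evaluation harness):

```python
def accuracy(predictions, answers):
    """Fraction of questions where the predicted letter matches the key."""
    if len(predictions) != len(answers):
        raise ValueError("predictions and answers must align")
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Example: 5 questions, 4 answered correctly.
preds = ["A", "C", "B", "D", "A"]
key = ["A", "C", "B", "D", "B"]
print(f"{accuracy(preds, key):.1%}")  # prints "80.0%"
```

Leaderboard scores below are this value over the 198 Diamond questions, so small percentage gaps between adjacent models correspond to only a handful of questions.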
Leaderboard (34 models)
| # | Model | Developer | Score |
|---|---|---|---|
| 🥇 | Claude Opus 4.6 | Anthropic | 91.3% |
| 🥈 | Claude Opus 4.5 | Anthropic | 87% |
| 🥉 | Gemini 2.5 Pro | Google DeepMind | 84% |
| 4 | o3 | OpenAI | 83.3% |
| 5 | Gemini 2.5 Flash | Google DeepMind | 82.8% |
| 6 | o4-mini | OpenAI | 81.4% |
| 7 | Grok-3 | xAI | 80% |
| 8 | o3-mini | OpenAI | 79.7% |
| 9 | o1 | OpenAI | 79.2% |
| 10 | o1-preview | OpenAI | 78% |
| 11 | Claude Opus 4 | Anthropic | 74.1% |
| 12 | Claude Sonnet 4.6 | Anthropic | 74.1% |
| 13 | DeepSeek R1 | DeepSeek | 71.5% |
| 14 | Claude Sonnet 4 | Anthropic | 70.3% |
| 15 | Llama 4 Maverick | Meta AI (FAIR) | 69.8% |
| 16 | Claude 3.7 Sonnet | Anthropic | 68% |
| 17 | Claude 3.5 Sonnet | Anthropic | 65% |
| 18 | o1-mini | OpenAI | 60% |
| 19 | DeepSeek V3 | DeepSeek | 59.1% |
| 20 | Llama 4 Scout | Meta AI (FAIR) | 57.2% |
| 21 | Gemini 2.0 Flash | Google DeepMind | 57% |
| 22 | GPT-4.1 | OpenAI | 56.4% |
| 23 | Grok-2 | xAI | 56.4% |
| 24 | GPT-4o | OpenAI | 53.6% |
| 25 | Llama 3.1 | Meta AI (FAIR) | 50.7% |
| 26 | Claude 3 Opus | Anthropic | 50.4% |
| 27 | GPT-4 Turbo | OpenAI | 49.3% |
| 28 | Llama 3.3 | Meta AI (FAIR) | 49.2% |
| 29 | Mistral Large 2 | Mistral AI | 43.9% |
| 30 | Claude 3.5 Haiku | Anthropic | 41.6% |
| 31 | GPT-4o mini | OpenAI | 39.8% |
| 32 | Llama 3 | Meta AI (FAIR) | 39.5% |
| 33 | GPT-4 | OpenAI | 35.7% |
| 34 | Gemini 1.0 Ultra | Google DeepMind | 35.4% |