Longterm Wiki

Philip Tetlock (Forecasting Pioneer)

philip-tetlock (E434)
Path: /knowledge-base/people/philip-tetlock/
Page Metadata
{
  "id": "philip-tetlock",
  "numericId": null,
  "path": "/knowledge-base/people/philip-tetlock/",
  "filePath": "knowledge-base/people/philip-tetlock.mdx",
  "title": "Philip Tetlock (Forecasting Pioneer)",
  "quality": 73,
  "importance": 75,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-02-01",
  "llmSummary": "Philip Tetlock is a psychologist who revolutionized forecasting research by demonstrating that expert predictions often perform no better than chance, while identifying systematic methods and 'superforecasters' who achieve superior accuracy. His work has significant implications for AI safety and existential risk assessment, though faces challenges when applied to long-term, low-probability events with limited feedback loops.",
  "structuredSummary": null,
  "description": "Psychologist and forecasting researcher who pioneered the science of superforecasting through the Good Judgment Project, demonstrating that systematic forecasting methods can outperform expert predictions and intelligence analysts.",
  "ratings": {
    "novelty": 6,
    "rigor": 8,
    "actionability": 7,
    "completeness": 8
  },
  "category": "people",
  "subcategory": null,
  "clusters": [
    "epistemics"
  ],
  "metrics": {
    "wordCount": 3622,
    "tableCount": 2,
    "diagramCount": 0,
    "internalLinks": 11,
    "externalLinks": 100,
    "footnoteCount": 98,
    "bulletRatio": 0.03,
    "sectionCount": 23,
    "hasOverview": true,
    "structuralScore": 12
  },
  "suggestedQuality": 80,
  "updateFrequency": 45,
  "evergreen": true,
  "wordCount": 3622,
  "unconvertedLinks": [
    {
      "text": "AI Risk Surveys | AI Impacts Wiki",
      "url": "https://wiki.aiimpacts.org/uncategorized/ai_risk_surveys",
      "resourceId": "e4357694019bb5f5",
      "resourceTitle": "AI Impacts: Surveys of AI Risk Experts"
    }
  ],
  "unconvertedLinkCount": 1,
  "convertedLinkCount": 0,
  "backlinkCount": 0,
  "redundancy": {
    "maxSimilarity": 19,
    "similarPages": [
      {
        "id": "good-judgment",
        "title": "Good Judgment (Forecasting)",
        "path": "/knowledge-base/organizations/good-judgment/",
        "similarity": 19
      },
      {
        "id": "fri",
        "title": "Forecasting Research Institute",
        "path": "/knowledge-base/organizations/fri/",
        "similarity": 16
      },
      {
        "id": "robin-hanson",
        "title": "Robin Hanson",
        "path": "/knowledge-base/people/robin-hanson/",
        "similarity": 16
      },
      {
        "id": "futuresearch",
        "title": "FutureSearch",
        "path": "/knowledge-base/organizations/futuresearch/",
        "similarity": 15
      },
      {
        "id": "rethink-priorities",
        "title": "Rethink Priorities",
        "path": "/knowledge-base/organizations/rethink-priorities/",
        "similarity": 15
      }
    ]
  }
}
Entity Data
{
  "id": "philip-tetlock",
  "type": "person",
  "title": "Philip Tetlock (Forecasting Pioneer)",
  "description": "Psychologist and forecasting researcher who pioneered the science of superforecasting through the Good Judgment Project, demonstrating that systematic forecasting methods can outperform expert predictions and intelligence analysts.",
  "tags": [
    "forecasting",
    "superforecasting",
    "prediction-accuracy",
    "decision-making",
    "calibration"
  ],
  "relatedEntries": [
    {
      "id": "good-judgment",
      "type": "organization"
    },
    {
      "id": "fri",
      "type": "organization"
    },
    {
      "id": "metaculus",
      "type": "organization"
    }
  ],
  "sources": [],
  "lastUpdated": "2026-02",
  "customFields": []
}
Canonical Facts (0)

No facts for this entity

External Links

No external links

Backlinks (0)

No backlinks

Frontmatter
{
  "title": "Philip Tetlock (Forecasting Pioneer)",
  "description": "Psychologist and forecasting researcher who pioneered the science of superforecasting through the Good Judgment Project, demonstrating that systematic forecasting methods can outperform expert predictions and intelligence analysts.",
  "importance": 75,
  "lastEdited": "2026-02-01",
  "update_frequency": 45,
  "sidebar": {
    "order": 55
  },
  "ratings": {
    "novelty": 6,
    "rigor": 8,
    "actionability": 7,
    "completeness": 8
  },
  "quality": 73,
  "llmSummary": "Philip Tetlock is a psychologist who revolutionized forecasting research by demonstrating that expert predictions often perform no better than chance, while identifying systematic methods and 'superforecasters' who achieve superior accuracy. His work has significant implications for AI safety and existential risk assessment, though faces challenges when applied to long-term, low-probability events with limited feedback loops.",
  "clusters": [
    "epistemics"
  ],
  "entityType": "person"
}
Raw MDX Source
---
title: "Philip Tetlock (Forecasting Pioneer)"
description: Psychologist and forecasting researcher who pioneered the science
  of superforecasting through the Good Judgment Project, demonstrating that
  systematic forecasting methods can outperform expert predictions and
  intelligence analysts.
importance: 75
lastEdited: "2026-02-01"
update_frequency: 45
sidebar:
  order: 55
ratings:
  novelty: 6
  rigor: 8
  actionability: 7
  completeness: 8
quality: 73
llmSummary: Philip Tetlock is a psychologist who revolutionized forecasting
  research by demonstrating that expert predictions often perform no better than
  chance, while identifying systematic methods and 'superforecasters' who
  achieve superior accuracy. His work has significant implications for AI safety
  and existential risk assessment, though faces challenges when applied to
  long-term, low-probability events with limited feedback loops.
clusters: ["epistemics"]
entityType: person
---
import {EntityLink, KeyPeople, KeyQuestions, Section} from '@components/wiki';

## Quick Assessment

| Aspect | Assessment |
|--------|-----------|
| **Primary Achievement** | Pioneered forecasting tournaments demonstrating that systematic methods outperform expert intuition; identified "superforecasters" with superior accuracy |
| **Key Publications** | *Expert Political Judgment* (2005), *Superforecasting* (2015) |
| **Institutional Affiliation** | Leonore Annenberg University Professor at University of Pennsylvania (Wharton and Psychology) |
| **Major Projects** | <EntityLink id="E532">Good Judgment</EntityLink> Project (IARPA tournament winner 2011-2015), <EntityLink id="E147">Forecasting Research Institute</EntityLink> |
| **Influence on AI Safety** | Methods applied to existential risk assessment; adversarial collaboration on AI forecasting; EA community adoption of forecasting practices |
| **Key Finding** | Most expert predictions perform no better than chance; "fox-like" integrative thinkers outperform "hedgehog" theorists |

## Key Links

| Source | Link |
|--------|------|
| Wikiquote | [en.wikiquote.org](https://en.wikiquote.org/wiki/Philip_Tetlock) |
| Wikipedia | [en.wikipedia.org](https://en.wikipedia.org/wiki/Philip_E._Tetlock) |

## Overview

Philip E. Tetlock is a Canadian-born psychologist who revolutionized the study of forecasting accuracy through decades of research demonstrating that expert predictions on political and economic events are often no better than random chance, while identifying systematic methods to achieve superior forecasting performance[^1][^2]. As the Leonore Annenberg University Professor at the University of Pennsylvania with cross-appointments at the Wharton School and School of Arts and Sciences, Tetlock has authored over 200 peer-reviewed articles and nine books examining judgment, decision-making, and prediction accuracy[^3][^4].

Tetlock's most influential work emerged from forecasting tournaments he initiated during the Cold War era through the National Academy of Sciences Committee on Nuclear War Prevention, analyzing over 82,000 predictions from 284 experts between 1984 and 2003[^5][^6]. This research culminated in his landmark 2005 book *Expert Political Judgment*, which documented that experts with access to classified information performed no better than Berkeley undergraduates or "dart-throwing chimpanzees" on long-range forecasts[^7][^8]. However, Tetlock also identified a minority of superior forecasters—"foxes" who integrate diverse perspectives rather than "hedgehogs" who apply single theories—leading to his co-founding of the Good Judgment Project with Barbara Mellers and Don Moore[^9].

The Good Judgment Project won a four-year IARPA-sponsored forecasting tournament (2011-2015) involving thousands of forecasters making over one million predictions on geopolitical events[^10][^11]. The project identified "superforecasters"—ordinary citizens who were roughly 60-85% more accurate than average forecasters and who outperformed intelligence analysts with access to classified information[^12][^13]. This work established systematic methods for improving prediction accuracy, including training protocols, team dynamics, and aggregation algorithms that have influenced intelligence agencies, forecasting platforms like <EntityLink id="E199">Metaculus</EntityLink>, and the effective altruism community's approach to decision-making under uncertainty[^14][^15].

## History and Academic Career

### Education and Early Career

Tetlock was born in Toronto, Canada, and grew up in Winnipeg and Vancouver[^16]. He received his B.A. in psychology from the University of British Columbia in 1975, followed by an M.A. in 1976 working with Peter Suedfeld on content analysis of diplomatic communications[^17][^18]. He completed his Ph.D. in psychology at Yale University in 1979 under the supervision of Phoebe C. Ellsworth[^19].

From 1979 to 1995, Tetlock served on the psychology faculty at the University of California, Berkeley, directing the Institute of Personality and Social Research from 1988 to 1995[^20]. He then held the Harold E. Burtt Endowed Chair in Psychology and Political Science at Ohio State University (1996-2001) before returning to Berkeley as the Mitchell Endowed Chair at the Haas School of Business (2001-2010)[^21][^22]. In December 2010, he was appointed Leonore Annenberg University Professor of Democracy and Citizenship at the University of Pennsylvania, becoming a Penn Integrates Knowledge (PIK) Professor with joint appointments in Psychology, Management, and the Annenberg School for Communication[^23][^24].

### Origins of Forecasting Research

Tetlock's forecasting research originated from his work on the National Academy of Sciences Committee for the Prevention of Nuclear War in the early 1980s during Cold War tensions[^25]. He became concerned that public debate on nuclear policy relied heavily on vague, unverifiable predictions that could not be systematically evaluated[^26]. This led him to create the first of his forecasting tournaments to test expert predictions scientifically[^27].

Between 1984 and 2003, Tetlock conducted small-scale forecasting tournaments with 284 experts—including government officials, professors, and journalists spanning ideologies from Marxists to free-market advocates—on geopolitical outcomes[^28][^29]. These experts made predictions about events such as the Soviet Union's collapse, the future of apartheid in South Africa, and Middle East peace prospects. The results formed the empirical basis for his 2005 book *Expert Political Judgment: How Good Is It? How Can We Know?*, published by Princeton University Press[^30].

### Good Judgment Project

The publication of *Expert Political Judgment* directly influenced U.S. intelligence agencies to create a four-year geopolitical forecasting tournament sponsored by IARPA (Intelligence Advanced Research Projects Activity)[^31]. From 2011 to 2015, Tetlock co-led the winning team—the Good Judgment Project—with his spouse Barbara Mellers and UC Berkeley colleague Don Moore[^32][^33]. The multidisciplinary team included experts in statistics, computer science, economics, psychology, and political science[^34].

The project involved thousands of forecasters making over one million predictions on geopolitical questions[^35]. It identified "superforecasters"—high-performing individuals who consistently outperformed both average forecasters and professional intelligence analysts with access to classified information[^36]. According to analysis of the project's results, superforecasters were approximately 60-85% more accurate than average forecasters and demonstrated the ability to distinguish 10-15 degrees of uncertainty while maintaining calibration across hundreds of events[^37][^38].
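
Forecasts in the tournament were scored with the Brier score, and calibration was assessed by comparing stated probabilities with how often events actually occurred. The sketch below uses made-up numbers to show how these two quantities are computed; it is an illustration of the scoring idea, not project code.

```python
# Illustrative sketch (toy numbers, not Good Judgment Project data):
# Brier scoring of binary-outcome forecasts plus a crude calibration check.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes, bins=5):
    """For each probability bin, compare the mean forecast with the observed frequency.
    A well-calibrated forecaster has the two roughly equal in every bin."""
    rows = []
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        cell = [(p, o) for p, o in zip(forecasts, outcomes)
                if lo <= p < hi or (b == bins - 1 and p == 1.0)]
        if cell:
            mean_p = sum(p for p, _ in cell) / len(cell)
            freq = sum(o for _, o in cell) / len(cell)
            rows.append((round(mean_p, 2), round(freq, 2), len(cell)))
    return rows

# Hypothetical forecasts on five resolved yes/no questions
superforecaster    = [0.90, 0.15, 0.70, 0.05, 0.80]
average_forecaster = [0.60, 0.40, 0.55, 0.30, 0.60]
outcomes = [1, 0, 1, 0, 1]

print(brier_score(superforecaster, outcomes))     # ≈ 0.03
print(brier_score(average_forecaster, outcomes))  # ≈ 0.15
```

Reported "X% more accurate" figures are typically relative reductions in Brier score against a benchmark group, so the toy numbers above would correspond to a large relative improvement.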

The Good Judgment Project's success led to the founding of Good Judgment Inc., a consultancy co-founded by Tetlock that offers bespoke forecasting services, workshops for private clients, and the Good Judgment Open platform for crowd-based forecasts[^39][^40]. The project's methods have been adapted for use by U.S. intelligence agencies and inspired forecasting platforms including Metaculus and INFER-Public[^41].

## Research Contributions

### The Fox-Hedgehog Distinction

One of Tetlock's most influential conceptual contributions is the distinction between "fox-like" and "hedgehog-like" thinkers, inspired by Isaiah Berlin's essay "The Hedgehog and the Fox"[^42]. Hedgehogs organize their thinking around a single grand theory or ideology and make bold, confident predictions. Foxes, by contrast, are modest, self-critical thinkers who draw on diverse perspectives and remain skeptical of grand theories[^43].

Tetlock's research demonstrated that fox-like forecasters consistently outperformed hedgehog forecasters, particularly on long-range forecasts[^44]. Foxes showed greater willingness to update their beliefs in response to evidence and were more accurate across a wider range of prediction domains[^45]. However, early critiques noted that while foxes outperformed hedgehogs, they still only modestly exceeded simple benchmarks like extrapolation algorithms, rather than achieving substantial superiority over baseline models[^46].

### Superforecasting Methodology

The Good Judgment Project identified specific attributes and practices associated with superior forecasting performance. Superforecasters typically exhibit:

- **Probabilistic thinking**: Ability to think in granular probabilities rather than binary yes/no predictions
- **Active open-mindedness**: Willingness to consider alternative hypotheses and update beliefs based on evidence
- **Intellectual humility**: Recognition of uncertainty and limits of their knowledge
- **Pattern recognition**: Skill at identifying relevant historical analogies
- **Team collaboration**: Ability to productively combine perspectives with other forecasters
- **Regular practice**: Consistent engagement with forecasting questions to refine judgment[^47][^48]

Tetlock's research demonstrated that forecasting accuracy could be improved through training programs focusing on these cognitive habits, team structures that facilitate information sharing, and aggregation algorithms that appropriately weight the judgments of top performers[^49][^50]. The project developed techniques including extremizing weighted averages (adjusting crowd predictions to account for shared information) and Bayesian question clusters (breaking complex forecasts into component questions)[^51][^52].
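
The project's full aggregation pipeline is more elaborate, but the sketch below illustrates the two ideas named above: up-weighting forecasters with better track records, then extremizing the weighted average because forecasters share much of the same information, which makes simple averages under-confident. The weights and the extremizing exponent `a` here are illustrative assumptions, not published parameters.

```python
# Illustrative sketch of weighted aggregation plus extremization.
# Weights and the extremizing parameter are assumptions for illustration,
# not the Good Judgment Project's published values.

def weighted_mean(probs, weights):
    """Crowd estimate that up-weights forecasters with better track records."""
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

def extremize(p, a=2.5):
    """Push an aggregate probability toward 0 or 1 to correct for the shared
    information that makes simple averages under-confident."""
    return p ** a / (p ** a + (1 - p) ** a)

# Hypothetical forecasts on one question; weights reflect past accuracy
forecasts = [0.70, 0.65, 0.80, 0.55]
weights   = [2.0, 1.0, 3.0, 1.0]

crowd = weighted_mean(forecasts, weights)   # ≈ 0.71
aggregate = extremize(crowd)                # ≈ 0.91, a bolder aggregate forecast
```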

### Accountability and Judgment

Beyond forecasting accuracy, Tetlock has extensively researched how accountability affects judgment and decision-making. His 2006 paper "Conflicts of Interest and the Case of Auditor Independence: Moral Seduction and Strategic Issue Cycling" (co-authored with Don Moore, Lloyd Tanlu, and Max Bazerman) analyzed how conflicts of interest in auditing contributed to scandals like Enron and WorldCom[^53][^54]. The paper introduced "moral seduction theory"—the concept that professionals can become unaware of moral compromise from conflicts of interest at a micro level—and "issue-cycle theory" explaining how such conflicts persist at a macro level in major accounting firms[^55].

Tetlock has warned that accountability mechanisms can degrade into "bureaucratic rituals" or "Potemkin villages"—symbolic facades designed to deflect critics rather than genuinely improve decision-making[^56]. His work emphasizes that outcome accountability requires careful, calibrated implementation through controlled evaluation rather than simple demands to "hold rascals accountable"[^57].

## Application to Existential Risk and AI Safety

### Forecasting Research Institute and X-Risk

In 2022, Tetlock became President and Chief Scientist of the Forecasting Research Institute (FRI), which received over \$6 million in funding from <EntityLink id="E552">Coefficient Giving</EntityLink> for developing forecasting techniques applicable to global catastrophic and existential risks[^58][^59]. From June to October 2022, FRI ran an "Existential Risk Persuasion Tournament" in which 169 participants—80 subject-matter experts and 89 superforecasters—estimated the probability of global catastrophe (death of at least 10% of humanity) and of human extinction (fewer than 1,000 humans surviving) by 2030, 2050, and 2100[^60].

Tetlock has acknowledged significant challenges in applying forecasting methods to existential risks, including the lack of feedback loops for learning from errors on long-term predictions, the difficulty of recruiting sufficient expertise, and the potential for information hazards when discussing specific risk scenarios[^61]. His recent research explores "hybrid persuasion-forecasting tournaments" that combine expert argumentation with probabilistic forecasting to improve judgments about low-probability, high-impact events[^62].

### AI Forecasting Work

Tetlock has engaged directly with <EntityLink id="E608">AI governance</EntityLink> concerns through multiple initiatives. He conducted a survey of 135 AI safety and governance researchers on advanced AI risks with Ezra Karger and others[^63]. More recently, his team conducted a two-month intensive adversarial collaboration focused on identifying short-term "cruxes"—key questions about AI that could be resolved by 2030—to explore the limits of how disagreements about AI risks can be resolved through structured debate[^64].

His 2025 research published in *ACM Transactions on Interactive Intelligent Systems* found that <EntityLink id="E186">large language model</EntityLink> assistants can improve human forecasting accuracy, raising questions about both AI capabilities in prediction tasks and the potential role of AI systems in risk assessment[^65]. This work suggests that <EntityLink id="E9">AI-augmented forecasting</EntityLink>—combining human judgment with machine learning—may offer advantages over either approach alone for certain types of predictions[^66].
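
The aggregation schemes in that literature vary; a minimal sketch of the hybrid idea, assuming a simple fixed mixing weight, is to combine a human crowd forecast with a model-generated forecast on the log-odds scale. The 70/30 weighting below is purely illustrative, not a value from Tetlock's published studies.

```python
# Illustrative sketch of combining a human crowd forecast with a model's
# forecast in log-odds space. The mixing weight is an assumption for
# illustration, not an empirically estimated parameter.
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def combine(human_p, model_p, human_weight=0.7):
    """Weighted average of the two forecasts on the log-odds scale, which
    behaves better near 0 and 1 than averaging raw probabilities."""
    z = human_weight * logit(human_p) + (1 - human_weight) * logit(model_p)
    return inv_logit(z)

print(combine(0.80, 0.60))  # ≈ 0.75
```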

### Influence on Effective Altruism

Tetlock has become a prominent figure in the effective altruism (EA) community, with "Tetlock-style judgmental forecasting" notably more popular within EA than in broader contexts[^67]. <EntityLink id="E552">Coefficient Giving</EntityLink> has directly supported forecasting infrastructure influenced by Tetlock's research, funding FRI, Metaculus, and INFER (a program supporting forecasting use by U.S. policymakers)[^68]. Founders Pledge has evaluated Tetlock's forecasting research on existential risk as high-impact work suitable for philanthropic support[^69].

Tetlock has participated in multiple <EntityLink id="E525">EA Global</EntityLink> conferences through fireside chats and Q&A sessions, discussing topics including prediction algorithms, long-term future considerations, epistemic modesty, and belief updating mechanics[^70][^71]. His work on identifying cognitive biases, tracking prediction accuracy, and conducting systematic post-mortems provides methodological tools relevant to assessing low-probability, high-impact scenarios central to EA priorities[^72].

## Criticisms and Limitations

### Methodological Concerns

Critics have raised several concerns about the scope and interpretation of Tetlock's forecasting research. While fox-like forecasters outperform hedgehog forecasters, early analyses noted that foxes still only modestly exceed simple benchmarks like extrapolation algorithms, raising questions about whether the framework sufficiently distinguishes skill from noise[^73][^74]. Hedgehogs performed worse than basic models—in some tests, slightly below random chance—but the practical significance of foxes' advantage over simple algorithms remains debated[^75].

Tetlock's research confronts inherent challenges in evaluating predictions, including the role of exogenous shocks and missing variables that can undermine even sound analyses, giving undue credit to improbable theories[^76]. Arbitrary time frames for prediction windows (such as 5 versus 10 years for Soviet collapse predictions) can distort evaluations of forecaster accuracy[^77]. Domains involving high combinatorial complexity—such as AI risk debates or complex simulations—reveal blind spots even in skilled forecasters, as the number of relevant variables exceeds human cognitive capacity[^78].

A persistent limitation identified by Tetlock himself is that experts without regular accuracy feedback struggle to convert causal knowledge into probabilistic forecasts[^79]. This challenge is particularly acute for long-term existential risk forecasts, where feedback loops for learning from errors may not exist until after catastrophic outcomes[^80].

### Misinterpretation and Misuse

Tetlock has expressed frustration that his research has been misinterpreted and misused to justify dismissing <EntityLink id="E132">expert opinion</EntityLink> entirely, rather than improving forecasting practices[^81]. He particularly criticized how political figures like Michael Gove cited *Expert Political Judgment* to justify ignoring expert consensus on Brexit consequences, characterizing this as a "dangerous misreading" of his findings[^82]. Tetlock emphasized that "it's not that I'm saying that the experts are going to be right, but I would say completely ignoring them is dangerous"[^83].

Populist "know-nothingism" represents a misreading of Tetlock's work, which demonstrates problems with expert forecasting—including systematic overconfidence and reluctance to change minds—without implying that expert opinion should be completely discounted[^84]. His more recent work, including *Superforecasting*, emphasizes that forecasting accuracy can be improved through better methodology and training, rather than arguing that prediction is fundamentally impossible[^85].

### Accountability Mechanisms

Tetlock's proposals for improving forecaster accountability face significant practical challenges. Implementing respected arbiters to evaluate pundit accuracy encounters difficulties ensuring perceived fairness amid partisan divisions[^86]. Process accountability—requiring forecasters to document their reasoning and methods—can degrade into bureaucratic rituals or symbolic facades ("Potemkin villages") rather than genuine improvement, as observed in domains from <EntityLink id="E598">public education</EntityLink> to intelligence analysis[^87]. Outcome accountability, while valuable, requires complex and calibrated implementation through controlled evaluation rather than simple demands for accountability[^88].

### Scope Limitations

Forecasters are valued for multiple purposes beyond pure accuracy, including ideological comfort, entertainment value, and regret minimization (such as in pandemic preparedness)[^89]. Fox-like thinking helps navigate these conflicting values but isn't solely about predictive performance. Tetlock acknowledges that forecasting serves multiple social functions, and that the temptation exists for activists to exaggerate risks (framing certainty as group commitment) or for ideological groups to exclude those expressing doubt[^90].

Some critics argue that Tetlock's findings about expert underperformance, while methodologically sound for short and medium-term forecasts, have been inappropriately extrapolated to long-range planning domains. Tetlock himself has expressed skepticism about very long-term forecasts (such as IPCC projections to 2100), noting that wide estimate spreads and the lack of feedback mechanisms limit the applicability of his methods to century-scale predictions[^91][^92].

## Recent Developments

Tetlock continues active research and institutional involvement in forecasting. In January 2026, he was appointed to the Board of Directors of ForecastEx, Interactive Brokers' prediction market platform, where his expertise in forecasting and decision-making under uncertainty aligns with the platform's mission to help market participants trade probabilities of future outcomes[^93][^94].

Recent publications include "AI-Augmented predictions: LLM assistants improve human forecasting accuracy" (2025) in *ACM Transactions on Interactive Intelligent Systems*, "Subjective-probability forecasts of existential risk: Initial Results from a hybrid persuasion-forecasting tournament" (2025) in the *International Journal of Forecasting*, and "Long-range subjective-probability forecasts of slow-motion variables in world politics: Exploring limits on expert judgment" (2024) in *Futures and Foresight Science*[^95][^96].

According to the *Financial Times* in October 2025, superforecasters associated with the Good Judgment Project proved 30% more accurate on average than futures markets and continued to beat market predictions on Federal Reserve decisions, demonstrating the continued relevance of Tetlock's forecasting methods[^97]. Tetlock received significant media attention throughout 2024-2025, with appearances and coverage in outlets including the *Financial Times*, *Bloomberg*, *Forbes*, *Newsweek*, *The Guardian*, and *Times Radio*[^98].

## Key Uncertainties

Several important questions remain about the scope and applicability of Tetlock's forecasting methods:

**Scalability to existential risks**: How well do forecasting techniques validated on short and medium-term geopolitical questions transfer to low-probability, high-impact scenarios with limited historical precedent? The lack of feedback loops for century-scale predictions presents fundamental challenges for evaluating and improving long-term forecasts.

**AI augmentation limits**: As large language models achieve forecasting accuracy comparable to human forecasters, what is the optimal division of labor between human and machine intelligence in prediction tasks? Recent research suggests hybrid approaches may be superior, but the specific conditions favoring human versus AI forecasting remain unclear.

**Institutional adoption barriers**: Despite demonstrated accuracy improvements, why have forecasting tournaments and superforecaster methods seen limited adoption outside intelligence agencies and specialized platforms? Organizational resistance, incentive misalignment, and the multiple non-accuracy functions that expert predictions serve may present barriers beyond methodological validation.

**Long-term forecast calibration**: Can any systematic methods achieve meaningful calibration for predictions extending decades or centuries into the future, or are such forecasts inherently limited by irreducible uncertainty and the absence of feedback mechanisms for learning?

**Information hazards in risk assessment**: How should forecasting tournaments balance the value of detailed, specific predictions about existential risks against the potential for such forecasts to provide roadmaps for malicious actors or create self-fulfilling prophecies?

## Sources

[^1]: [Philip E. Tetlock, PhD | Annenberg School for Communication at the University of Pennsylvania](https://www.asc.upenn.edu/people/faculty/philip-tetlock-phd)
[^2]: [Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab](https://thedecisionlab.com/thinkers/political-science/philip-tetlock)
[^3]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^4]: [Philip Tetlock | Alliance for Decision Education](https://alliancefordecisioneducation.org/learn/about-the-alliance/team/phil-tetlock/)
[^5]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^6]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^7]: [Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab](https://thedecisionlab.com/thinkers/political-science/philip-tetlock)
[^8]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^9]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^10]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^11]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^12]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^13]: [How to win at forecasting - Philip Tetlock | Edge.org](https://www.edge.org/conversation/philip_tetlock-how-to-win-at-forecasting)
[^14]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^15]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^16]: [Philip E. Tetlock - Wikipedia](https://en.wikipedia.org/wiki/Philip_E._Tetlock)
[^17]: [Philip E. Tetlock - Wikipedia](https://en.wikipedia.org/wiki/Philip_E._Tetlock)
[^18]: [Philip Tetlock wins Grawemeyer Award (2008)](http://grawemeyer.org/2008-philip-tetlock/)
[^19]: [Philip E. Tetlock - Wikipedia](https://en.wikipedia.org/wiki/Philip_E._Tetlock)
[^20]: [Philip E. Tetlock - Wikipedia](https://en.wikipedia.org/wiki/Philip_E._Tetlock)
[^21]: [Philip E. Tetlock - Wikipedia](https://en.wikipedia.org/wiki/Philip_E._Tetlock)
[^22]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^23]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^24]: [Philip E. Tetlock, PhD | Annenberg School for Communication at the University of Pennsylvania](https://www.asc.upenn.edu/people/faculty/philip-tetlock-phd)
[^25]: [Philip E. Tetlock - Wikipedia](https://en.wikipedia.org/wiki/Philip_E._Tetlock)
[^26]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^27]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^28]: [Philip E. Tetlock - Wikipedia](https://en.wikipedia.org/wiki/Philip_E._Tetlock)
[^29]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^30]: [Philip E. Tetlock - Wikipedia](https://en.wikipedia.org/wiki/Philip_E._Tetlock)
[^31]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^32]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^33]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^34]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^35]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^36]: [Philip Tetlock - PIK Professors - University of Pennsylvania](https://pikprofessors.upenn.edu/meet-the-professors/philip-tetlock)
[^37]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^38]: [How to win at forecasting - Philip Tetlock | Edge.org](https://www.edge.org/conversation/philip_tetlock-how-to-win-at-forecasting)
[^39]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^40]: [Good Judgment - About](https://goodjudgment.com/about/)
[^41]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^42]: [How to win at forecasting - Philip Tetlock | Edge.org](https://www.edge.org/conversation/philip_tetlock-how-to-win-at-forecasting)
[^43]: [Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab](https://thedecisionlab.com/thinkers/political-science/philip-tetlock)
[^44]: [How to win at forecasting - Philip Tetlock | Edge.org](https://www.edge.org/conversation/philip_tetlock-how-to-win-at-forecasting)
[^45]: [Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab](https://thedecisionlab.com/thinkers/political-science/philip-tetlock)
[^46]: [Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound](https://www.cato-unbound.org/2011/07/11/dan-gardner-philip-tetlock/overcoming-our-aversion-acknowledging-our-ignorance)
[^47]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^48]: [Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab](https://thedecisionlab.com/thinkers/political-science/philip-tetlock)
[^49]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^50]: [Evidence on good forecasting practices from the Good Judgment Project | AI Impacts](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/)
[^51]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^52]: [How to win at forecasting - Philip Tetlock | Edge.org](https://www.edge.org/conversation/philip_tetlock-how-to-win-at-forecasting)
[^53]: [Conflicts of Interest and the Case of Auditor Independence (PDF)](https://faculty.wharton.upenn.edu/wp-content/uploads/2012/04/Tetlock_2006-auditorsmooreetalpiece.pdf)
[^54]: [Conflicts of Interest and the Case of Auditor Independence | Semantic Scholar](https://www.semanticscholar.org/paper/Conflicts-Of-Interest-And-The-Case-Of-Auditor-Moral-Moore-Tetlock/b6bda07b7f487fb7c51cf142c1fb79837f17d4ee)
[^55]: [Conflicts of Interest and the Case of Auditor Independence | Semantic Scholar](https://www.semanticscholar.org/paper/Conflicts-Of-Interest-And-The-Case-Of-Auditor-Moral-Moore-Tetlock/b6bda07b7f487fb7c51cf142c1fb79837f17d4ee)
[^56]: [Evaluating Intelligence: A Competent Authority | National Academies](https://www.nationalacademies.org/read/13062/chapter/16)
[^57]: [Evaluating Intelligence: A Competent Authority | National Academies](https://www.nationalacademies.org/read/13062/chapter/16)
[^58]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^59]: [New Coefficient Giving Grantmaking Program: Forecasting | EA Forum](https://forum.nunosempere.com/posts/ziSEnEg4j8nFvhcni/new-open-philanthropy-grantmaking-program-forecasting)
[^60]: [Philip E. Tetlock - Wikipedia](https://en.wikipedia.org/wiki/Philip_E._Tetlock)
[^61]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^62]: [Philip Tetlock Faculty Page | University of Pennsylvania Psychology](https://psychology.sas.upenn.edu/node/418)
[^63]: [AI Risk Surveys | AI Impacts Wiki](https://wiki.aiimpacts.org/uncategorized/ai_risk_surveys)
[^64]: [Adversarial Collaboration on AI Risk | Wiley Online Library](https://onlinelibrary.wiley.com/doi/10.1111/risa.70023)
[^65]: [Philip Tetlock Faculty Page | University of Pennsylvania Psychology](https://psychology.sas.upenn.edu/node/418)
[^66]: [Philip Tetlock Faculty Page | Wharton School](https://mgmt.wharton.upenn.edu/profile/tetlock/)
[^67]: [Why is EA so enthusiastic about forecasting? | EA Forum](https://forum.effectivealtruism.org/posts/hRoyBc6aCyrvQ9g3p/why-is-ea-so-enthusiastic-about-forecasting)
[^68]: [Why is EA so enthusiastic about forecasting? | EA Forum](https://forum.effectivealtruism.org/posts/hRoyBc6aCyrvQ9g3p/why-is-ea-so-enthusiastic-about-forecasting)
[^69]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^70]: [Philip Tetlock Fireside Chat | EA Forum](https://forum.effectivealtruism.org/posts/Df68zNGwpvL4pkpDG/philip-tetlock-fireside-chat)
[^71]: [Interview with Prof Tetlock on epistemic modesty | EA Forum](https://forum.effectivealtruism.org/posts/nAccbCPkLatXNBKdZ/interview-with-prof-tetlock-on-epistemic-modesty-predicting)
[^72]: [Prof. Philip Tetlock's Forecasting Research | Founders Pledge](https://www.founderspledge.com/research/prof-philip-tetlocks-forecasting-research)
[^73]: [Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound](https://www.cato-unbound.org/2011/07/11/dan-gardner-philip-tetlock/overcoming-our-aversion-acknowledging-our-ignorance)
[^74]: [Philip Tetlock: Forecaster, author, and renowned social psychologist | The Decision Lab](https://thedecisionlab.com/thinkers/political-science/philip-tetlock)
[^75]: [Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound](https://www.cato-unbound.org/2011/07/11/dan-gardner-philip-tetlock/overcoming-our-aversion-acknowledging-our-ignorance)
[^76]: [Evaluating Intelligence: A Competent Authority | National Academies](https://www.nationalacademies.org/read/13062/chapter/16)
[^77]: [Evaluating Intelligence: A Competent Authority | National Academies](https://www.nationalacademies.org/read/13062/chapter/16)
[^78]: [Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/philip-tetlock-forecasting-research/)
[^79]: [Adversarial Collaboration on AI Risk | Wiley Online Library](https://onlinelibrary.wiley.com/doi/10.1111/risa.70023)
[^80]: [Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/philip-tetlock-forecasting-research/)
[^81]: [Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/prof-tetlock-predicting-the-future/)
[^82]: [Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/prof-tetlock-predicting-the-future/)
[^83]: [Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/prof-tetlock-predicting-the-future/)
[^84]: [Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/prof-tetlock-predicting-the-future/)
[^85]: [Philip Tetlock interview | Conversations with Tyler](https://conversationswithtyler.com/episodes/philip-e-tetlock/)
[^86]: [Overcoming Our Aversion to Acknowledging Our Ignorance | Cato Unbound](https://www.cato-unbound.org/2011/07/11/dan-gardner-philip-tetlock/overcoming-our-aversion-acknowledging-our-ignorance)
[^87]: [Evaluating Intelligence: A Competent Authority | National Academies](https://www.nationalacademies.org/read/13062/chapter/16)
[^88]: [Evaluating Intelligence: A Competent Authority | National Academies](https://www.nationalacademies.org/read/13062/chapter/16)
[^89]: [Philip Tetlock interview | Conversations with Tyler](https://conversationswithtyler.com/episodes/philip-e-tetlock/)
[^90]: [Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/philip-tetlock-forecasting-research/)
[^91]: [Fireside chat with Philip Tetlock | Effective Altruism](https://www.effectivealtruism.org/articles/fireside-chat-with-philip-tetlock)
[^92]: [Philip Tetlock on forecasting and existential risks | 80,000 Hours Podcast](https://80000hours.org/podcast/episodes/philip-tetlock-forecasting-research/)
[^93]: [ForecastEx Appoints Philip Tetlock to Board | Business Wire](https://www.businesswire.com/news/home/20260122202508/en/ForecastEx-Appoints-Renowned-Forecaster-Dr.-Philip-Tetlock-to-Board-of-Directors)
[^94]: [ForecastEx Appoints Philip Tetlock to Board | Barchart](https://www.barchart.com/story/news/37175732/forecastex-appoints-renowned-forecaster-dr-philip-tetlock-to-board-of-directors)
[^95]: [Philip Tetlock Faculty Page | University of Pennsylvania Psychology](https://psychology.sas.upenn.edu/node/418)
[^96]: [Philip Tetlock Faculty Page | University of Pennsylvania Psychology](https://psychology.sas.upenn.edu/node/418)
[^97]: [Good Judgment Press & News](https://goodjudgment.com/about/press-news/)
[^98]: [Good Judgment Press & News](https://goodjudgment.com/about/press-news/)