Longterm Wiki

Vidur Kapur

vidur-kapur (E580)
Path: /knowledge-base/people/vidur-kapur/
Page Metadata
{
  "id": "vidur-kapur",
  "numericId": null,
  "path": "/knowledge-base/people/vidur-kapur/",
  "filePath": "knowledge-base/people/vidur-kapur.mdx",
  "title": "Vidur Kapur",
  "quality": 38,
  "importance": 45,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-02-01",
  "llmSummary": "Vidur Kapur is a superforecaster and AI policy researcher involved in multiple forecasting organizations and the Sentinel early warning system, contributing to AI risk assessment and EA Forum discussions. While he appears to be a competent practitioner in forecasting and risk assessment, his individual contributions lack documented track records or major novel insights.",
  "structuredSummary": null,
  "description": "Superforecaster and AI policy researcher involved in existential risk forecasting, early warning systems for global catastrophes, and effective altruism community discussions",
  "ratings": {
    "novelty": 3,
    "rigor": 4,
    "actionability": 2,
    "completeness": 6
  },
  "category": "people",
  "subcategory": null,
  "clusters": [
    "ai-safety",
    "epistemics"
  ],
  "metrics": {
    "wordCount": 1538,
    "tableCount": 2,
    "diagramCount": 0,
    "internalLinks": 17,
    "externalLinks": 27,
    "footnoteCount": 25,
    "bulletRatio": 0.06,
    "sectionCount": 15,
    "hasOverview": true,
    "structuralScore": 12
  },
  "suggestedQuality": 80,
  "updateFrequency": 45,
  "evergreen": true,
  "wordCount": 1538,
  "unconvertedLinks": [],
  "unconvertedLinkCount": 0,
  "convertedLinkCount": 0,
  "backlinkCount": 0,
  "redundancy": {
    "maxSimilarity": 16,
    "similarPages": [
      {
        "id": "ai-impacts",
        "title": "AI Impacts",
        "path": "/knowledge-base/organizations/ai-impacts/",
        "similarity": 16
      },
      {
        "id": "arb-research",
        "title": "Arb Research",
        "path": "/knowledge-base/organizations/arb-research/",
        "similarity": 15
      },
      {
        "id": "nuno-sempere",
        "title": "Nuño Sempere",
        "path": "/knowledge-base/people/nuno-sempere/",
        "similarity": 15
      },
      {
        "id": "vipul-naik",
        "title": "Vipul Naik",
        "path": "/knowledge-base/people/vipul-naik/",
        "similarity": 15
      },
      {
        "id": "futuresearch",
        "title": "FutureSearch",
        "path": "/knowledge-base/organizations/futuresearch/",
        "similarity": 14
      }
    ]
  }
}
Entity Data
{
  "id": "vidur-kapur",
  "type": "person",
  "title": "Vidur Kapur",
  "description": "Vidur Kapur is a superforecaster and AI policy researcher involved in multiple forecasting organizations and the Sentinel early warning system, contributing to AI risk assessment and EA Forum discussions. While he appears to be a competent practitioner in forecasting and risk assessment, his individ",
  "tags": [],
  "relatedEntries": [],
  "sources": [],
  "lastUpdated": "2026-02",
  "customFields": []
}
Canonical Facts (0)

No facts for this entity

External Links

No external links

Backlinks (0)

No backlinks

Frontmatter
{
  "title": "Vidur Kapur",
  "description": "Superforecaster and AI policy researcher involved in existential risk forecasting, early warning systems for global catastrophes, and effective altruism community discussions",
  "importance": 45,
  "lastEdited": "2026-02-01",
  "update_frequency": 45,
  "sidebar": {
    "order": 50
  },
  "ratings": {
    "novelty": 3,
    "rigor": 4,
    "actionability": 2,
    "completeness": 6
  },
  "quality": 38,
  "llmSummary": "Vidur Kapur is a superforecaster and AI policy researcher involved in multiple forecasting organizations and the Sentinel early warning system, contributing to AI risk assessment and EA Forum discussions. While he appears to be a competent practitioner in forecasting and risk assessment, his individual contributions lack documented track records or major novel insights.",
  "clusters": [
    "ai-safety",
    "epistemics"
  ],
  "entityType": "person"
}
Raw MDX Source
---
title: Vidur Kapur
description: Superforecaster and AI policy researcher involved in existential
  risk forecasting, early warning systems for global catastrophes, and effective
  altruism community discussions
importance: 45
lastEdited: "2026-02-01"
update_frequency: 45
sidebar:
  order: 50
ratings:
  novelty: 3
  rigor: 4
  actionability: 2
  completeness: 6
quality: 38
llmSummary: Vidur Kapur is a superforecaster and AI policy researcher involved
  in multiple forecasting organizations and the Sentinel early warning system,
  contributing to AI risk assessment and EA Forum discussions. While he appears
  to be a competent practitioner in forecasting and risk assessment, his
  individual contributions lack documented track records or major novel
  insights.
clusters: ["ai-safety","epistemics"]
entityType: person
---
import {EntityLink, KeyPeople, KeyQuestions, Section} from '@components/wiki';

## Quick Assessment

| Attribute | Assessment |
|-----------|------------|
| **Primary Role** | Superforecaster and AI Policy Researcher |
| **Key Affiliations** | <EntityLink id="E532">Good Judgment</EntityLink>, <EntityLink id="E569">Swift Centre</EntityLink>, <EntityLink id="E560">Samotsvety</EntityLink>, RAND, <EntityLink id="E426">ControlAI</EntityLink>, <EntityLink id="E566">Sentinel</EntityLink> |
| **Focus Areas** | AI existential risk, biological risks, geopolitical forecasting, global catastrophe early detection |
| **Notable Contributions** | Sentinel early warning system, AI x-risk forecasting, EA Forum discussions on utilitarianism and bias |
| **Education** | London School of Economics (LSE), University of Chicago |
| **Community Presence** | Active on EA Forum; limited <EntityLink id="E538">LessWrong</EntityLink> presence |

## Key Links

| Source | Link |
|--------|------|
| EA Forum Profile | [forum.effectivealtruism.org](https://forum.effectivealtruism.org/users/vidur_kapur) |
| Sentinel Blog | [blog.sentinel-team.org](https://blog.sentinel-team.org/) |

## Overview

Vidur Kapur is a superforecaster and AI policy researcher affiliated with multiple forecasting organizations including <EntityLink id="E532">Good Judgment</EntityLink>, <EntityLink id="E569">Swift Centre</EntityLink>, <EntityLink id="E560">Samotsvety</EntityLink>, and RAND, while also working as an AI Policy Researcher at <EntityLink id="E426">ControlAI</EntityLink>[^1]. His work focuses on existential risks from AI, biological threats, and geopolitical instability, with particular emphasis on early detection and rapid response to global catastrophes.

Kapur is a key member of the <EntityLink id="E566">Sentinel</EntityLink> team, a project dedicated to early detection and response for global catastrophes[^2]. The project emphasizes rapid foresight on timescales of days to weeks, which it frames as increasingly critical as AI capabilities are integrated into society and potential dangers escalate. His forecasting work spans AI timelines, catastrophic risk scenarios, and geopolitical events, and the organizations he contributes to collectively shape risk assessment in the effective altruism and AI safety communities.

Beyond forecasting, Kapur maintains an active presence on the Effective Altruism Forum, where he contributes posts and commentary on utilitarianism, political bias, ethics, and cause prioritization[^3]. His dual focus on technical forecasting and philosophical engagement positions him as a bridge between quantitative risk assessment and broader EA community discussions.

## Background and Education

Kapur attended the London School of Economics (LSE) and the University of Chicago[^4]. His educational background in economics and social sciences informed his later work in forecasting and policy analysis. Before entering the forecasting and AI policy space, he faced personal challenges related to cultural alienation and identity, which he has addressed publicly through comedy and personal essays[^5].

His transition into forecasting and effective altruism appears to have occurred in the mid-2010s, with early engagement in <EntityLink id="E518">Center for Applied Rationality</EntityLink> (CFAR) workshops on applied rationality and AI safety around 2014[^6]. These programs, aimed at machine learning researchers and <EntityLink id="E202">MIRI</EntityLink> Summer Fellows, focused on <EntityLink id="E297">technical AI safety research</EntityLink> and strategy.

## Forecasting Work

### Superforecasting Organizations

Kapur works with several prominent forecasting platforms and organizations. As a superforecaster with Good Judgment, he contributes to probabilistic forecasting on geopolitical and technological developments[^7]. His work with Samotsvety, a group of elite forecasters, has included high-stakes AI risk assessments. Notably, Samotsvety estimated a 30.5% risk of AI catastrophe killing the vast majority of humanity by 2200[^8].
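
Samotsvety's headline figures are aggregates of individual forecasters' probabilities, and the geometric mean of odds is an aggregation method commonly discussed in that community. A minimal sketch of that aggregation step, with made-up individual probabilities rather than actual Samotsvety inputs:

```python
import math

def geometric_mean_of_odds(probabilities):
    """Aggregate individual probability forecasts via the geometric mean of their odds."""
    odds = [p / (1 - p) for p in probabilities]
    gm_odds = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return gm_odds / (1 + gm_odds)

# Illustrative individual forecasts, not actual Samotsvety inputs
individual = [0.20, 0.35, 0.30, 0.40, 0.25]
print(f"Aggregate: {geometric_mean_of_odds(individual):.1%}")  # ~29.5%
```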

His involvement with RAND and a hedge fund (not named in available sources) extends his forecasting work into policy and financial domains[^9].

### Sentinel Project

The Sentinel project represents Kapur's most prominent contribution to existential risk mitigation. Sentinel focuses on early detection and response for global catastrophes, emphasizing rapid observation-orientation cycles to identify emerging threats[^10]. Following a November 2024 fundraise, the project improved its content quality and distribution, with Kapur listed as a key team member[^11].

Sentinel's work includes forecasting AI developments and geopolitical risks. For example, Kapur participated in Sentinel Minutes podcast episodes discussing <EntityLink id="E218">OpenAI</EntityLink>'s plans for an automated research intern by September 2026 and a true AI researcher by March 2028[^12]. He has also contributed forecasts on drone attacks, regime change scenarios (such as Iranian political instability), and AI model capability timelines[^13].
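
The "doubling time" framing behind these capability forecasts reduces to simple exponential arithmetic; a hedged sketch with placeholder numbers (the six-month doubling time and 24-month horizon are illustrative, not Sentinel's published figures):

```python
def capability_multiplier(doubling_time_months: float, horizon_months: float) -> float:
    """Growth factor implied by a constant doubling time: 2^(horizon / doubling_time)."""
    return 2 ** (horizon_months / doubling_time_months)

# Placeholder values, not Sentinel's published figures
print(capability_multiplier(doubling_time_months=6, horizon_months=24))  # 16.0
```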

In a podcast on Iranian regime change, Kapur estimated a 10% probability of regime change by the end of August and 15% by year-end, drawing on base rates along with internal and external factors such as economic suffering, corruption, and public disillusionment[^14]. This work exemplifies Sentinel's approach of combining geopolitical analysis with structured forecasting methodologies.
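
The base-rate step in such a forecast is mechanical: a historical annual rate is rescaled to the question's time window before case-specific factors are layered on. A minimal sketch under assumed numbers (the 4% annual rate and the adjustment multiplier are illustrative, not figures from the podcast):

```python
def window_probability(annual_rate: float, months: float) -> float:
    """Rescale an annual base rate to a shorter window, assuming a constant hazard rate."""
    return 1 - (1 - annual_rate) ** (months / 12)

# Illustrative: a 4% annual base rate of regime change, rescaled to a six-month window,
# then crudely adjusted upward for case-specific factors (economic strain, disillusionment).
base = window_probability(annual_rate=0.04, months=6)  # ~2.0%
adjusted = min(1.0, base * 5)                          # ~10.1%
print(f"base: {base:.1%}, adjusted: {adjusted:.1%}")
```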

## AI Safety and Existential Risk

Kapur's engagement with AI safety encompasses both technical forecasting and conceptual analysis of catastrophic scenarios. He provided comments on a draft post ranking AI existential risk scenarios by "embarrassment level," discussing how predictive models might enable catastrophic terrorist attacks due to insufficient safeguards and offense-biased scientific understanding[^15]. This commentary reflects concern about dual-use AI capabilities and the asymmetry between offensive and defensive applications.

His forecasting work directly addresses AI timelines and capability milestones. Sentinel's projections on automated AI researchers and model doubling times contribute to the broader AI safety community's understanding of when transformative AI capabilities might emerge[^16]. Kapur's role bridges the gap between abstract x-risk scenarios and concrete near-term forecasts that can inform policy and research prioritization.

Kapur's connection to the rationalist and effective altruist communities positions him within networks focused on <EntityLink id="E439">AI alignment</EntityLink> and safety. His participation in CFAR workshops and engagement with MIRI-related training on <EntityLink id="E631">technical AI safety</EntityLink> research demonstrate early involvement in the movement's capacity-building efforts[^17].

## Effective Altruism Forum Contributions

Kapur maintains an active profile on the EA Forum, contributing posts, comments, and "Quick Takes" on various topics[^18]. His contributions often challenge conventional EA thinking or introduce methodological refinements.

### "EAs are not perfect utilitarians"

In a notable post, Kapur argued that EA participants have diverse motivations beyond pure utilitarianism, including prestige, loyalty, novelty-seeking, and burnout avoidance[^19]. He urged caution against over-rationalizing non-utilitarian choices, suggesting that acknowledging human limitations leads to more realistic and sustainable engagement with effective altruism. Community responses noted that this aligns with utilitarian self-awareness: a perfect utilitarian would account for human cognitive and motivational constraints.

### Political Debiasing

Kapur co-authored a post on political debiasing and the Political Bias Test, addressing criticisms such as ceiling effects and the ease of gaming bias assessments[^20]. The work discussed pre-tests on Amazon Mechanical Turk and strategies for inferring bias from belief patterns. This methodological contribution reflects his interest in improving how the EA community identifies and corrects for cognitive biases.

### Other Engagements

Kapur has commented on EA's public perception, noting that while the community welcomes internal criticism, this may not be apparent to outsiders[^21]. He has also engaged in discussions on estimating the expected value of humanity's future, recommending analyses by Michael Dickens and *What We Owe the Future*, while acknowledging uncertainties such as s-risks and the trajectory of human altruism[^22]. His "Quick Takes" include critiques of evidence quality in animal welfare interventions, for example questioning the evidence on electric stunning for shrimp welfare[^23].

## Community Reception

Within the EA Forum, Kapur is viewed as a constructive contributor whose posts spark discussion on practical EA limitations and methodological improvements[^24]. His work on political debiasing has drawn methodological feedback, with commenters refining ideas around gaming effects and pre-test design[^25]. No major controversies or criticisms have been noted; interactions focus on collaborative refinement of ideas rather than adversarial debate.

His forecasting work, particularly with Sentinel, has been positively received in EA contexts focused on existential risk. The project's emphasis on rapid foresight aligns with growing concerns about AI acceleration and the need for early warning systems.

## Key Uncertainties

Several aspects of Kapur's work and influence remain unclear from available sources:

- **Quantitative impact**: Specific forecasting track records, accuracy metrics, or Brier scores (see the sketch after this list) are not publicly documented, making it difficult to assess his predictive performance relative to other superforecasters.
- **Funding and compensation**: No information is available about personal funding, grants received, or compensation from forecasting organizations or ControlAI.
- **Policy influence**: The extent to which Kapur's forecasts inform actual policy decisions or institutional strategies at organizations like RAND or hedge funds is not specified.
- **LessWrong presence**: Despite engagement with rationalist community institutions like CFAR and MIRI, Kapur has limited documented activity on LessWrong, suggesting either focused engagement on EA Forum or undocumented contributions.
- **Relationship to other forecasters**: Collaborative dynamics with other prominent superforecasters (e.g., Samotsvety members) and specific division of labor within Sentinel are not detailed.
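
For reference, the Brier score mentioned in the first point is straightforward to compute once forecasts resolve; a minimal sketch with made-up forecasts and outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up forecasts and resolutions, purely for illustration
forecasts = [0.9, 0.2, 0.6, 0.1]
outcomes = [1, 0, 1, 0]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.055
```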

## Sources

[^1]: [Sentinel: Early Detection and Response for Global Catastrophes](https://forum.effectivealtruism.org/posts/GAFwc5jGFjqSetJiu/sentinel-early-detection-and-response-for-global)
[^2]: [Sentinel: Early Detection and Response for Global Catastrophes](https://forum.effectivealtruism.org/posts/GAFwc5jGFjqSetJiu/sentinel-early-detection-and-response-for-global)
[^3]: [Vidur Kapur - EA Forum Profile](https://forum.effectivealtruism.org/users/vidur_kapur)
[^4]: [Vidur Kapur: Closets and Comedy](https://sharanyamanivannan.in/2010/10/02/vidur-kapur-closets-and-comedy/)
[^5]: [The Process of Coming Out - New Indian Express, Oct 2, 2010](https://www.newindianexpress.com/magazine/2010/Oct/02/the-process-of-coming-out-191818.html)
[^6]: [Four Free CFAR Programs on Applied Rationality and AI Safety](https://forum.effectivealtruism.org/posts/Y3WTCaM6CmPjCufbe/four-free-cfar-programs-on-applied-rationality-and-ai-safety)
[^7]: [License India Expo - Vidur Kapur Profile](https://www.licenseindia.com/expo/vidur-kapur.php)
[^8]: [Technical AI Safety Crisis and Security Research Report](https://taisc.org/report)
[^9]: [License India Expo - Vidur Kapur Profile](https://www.licenseindia.com/expo/vidur-kapur.php)
[^10]: [Sentinel: Early Detection and Response for Global Catastrophes](https://forum.effectivealtruism.org/posts/GAFwc5jGFjqSetJiu/sentinel-early-detection-and-response-for-global)
[^11]: [Sentinel: Early Detection and Response for Global Catastrophes](https://forum.effectivealtruism.org/posts/GAFwc5jGFjqSetJiu/sentinel-early-detection-and-response-for-global)
[^12]: [Sentinel Minutes Podcast - Spotify](https://open.spotify.com/show/1OfAdHIbYbun1YUtoBME2j)
[^13]: [Forecasts for Drone Attacks and AI Model Doubling Times - Sentinel Blog](https://blog.sentinel-team.org/p/forecasts-for-drone-attacks-and-ai)
[^14]: [Iranian Regime Change: Unpacking Broad Forecasts - Sentinel Blog](https://blog.sentinel-team.org/p/iranian-regime-change-unpacking-broad)
[^15]: [AI X-Risk Approximately Ordered by Embarrassment - Alignment Forum](https://www.alignmentforum.org/posts/mSF4KTxAGRG3EHmhb/ai-x-risk-approximately-ordered-by-embarrassment)
[^16]: [Forecasts for Drone Attacks and AI Model Doubling Times - Sentinel Blog](https://blog.sentinel-team.org/p/forecasts-for-drone-attacks-and-ai)
[^17]: [Four Free CFAR Programs on Applied Rationality and AI Safety](https://forum.effectivealtruism.org/posts/Y3WTCaM6CmPjCufbe/four-free-cfar-programs-on-applied-rationality-and-ai-safety)
[^18]: [Vidur Kapur - EA Forum Profile](https://forum.effectivealtruism.org/users/vidur_kapur)
[^19]: [EAs Are Not Perfect Utilitarians - EA Forum](https://forum.effectivealtruism.org/posts/QopY2jA22LpCJrtfp/eas-are-not-perfect-utilitarians)
[^20]: [Political Debiasing and the Political Bias Test - EA Forum, Sep 11, 2015](https://forum.effectivealtruism.org/posts/yPkiBNW49NZvGvJ3q/political-debiasing-and-the-political-bias-test)
[^21]: [EA's Image Problem - Comment by Vidur Kapur](https://ea.greaterwrong.com/posts/5Mc8vDzDtobvpyjqQ/ea-s-image-problem/comment/8fdoETg992ioWGyh4)
[^22]: [Can We Estimate the Expected Value of Human's Future Life? - EA Forum](https://forum.effectivealtruism.org/posts/tYsop2rx2HNj5rFdd/can-we-estimate-the-expected-value-of-human-s-future-life-in)
[^23]: [Vidur Kapur's Quick Takes - EA Forum](https://forum.effectivealtruism.org/posts/kErKS7sht6PHeLLyb/vidur-kapur-s-quick-takes)
[^24]: [EAs Are Not Perfect Utilitarians - EA Forum](https://forum.effectivealtruism.org/posts/QopY2jA22LpCJrtfp/eas-are-not-perfect-utilitarians)
[^25]: [Political Debiasing and the Political Bias Test - EA Forum, Sep 11, 2015](https://forum.effectivealtruism.org/posts/yPkiBNW49NZvGvJ3q/political-debiasing-and-the-political-bias-test)