Longterm Wiki

Nuño Sempere

nuno-sempere (E579)
Path: /knowledge-base/people/nuno-sempere/
Page Metadata
{
  "id": "nuno-sempere",
  "numericId": null,
  "path": "/knowledge-base/people/nuno-sempere/",
  "filePath": "knowledge-base/people/nuno-sempere.mdx",
  "title": "Nuño Sempere",
  "quality": 50,
  "importance": 45,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-02-01",
  "llmSummary": "Nuño Sempere is a Spanish superforecaster who co-founded the highly successful Samotsvety forecasting group and now runs Sentinel for global catastrophe early warning, while being known for skeptical views on high AI existential risk estimates and critical perspectives on EA institutions. The article provides comprehensive coverage of his work, achievements, and controversial positions within the rationalist/EA community.",
  "structuredSummary": null,
  "description": "Spanish forecaster and researcher who co-founded Samotsvety Forecasting (winning CSET-Foretell by an 'obscene margin') and founded Sentinel, a non-profit for global catastrophic risk early warning. Known for superforecasting expertise, AI timelines analysis, and critical perspectives on Effective Altruism.",
  "ratings": {
    "novelty": 4,
    "rigor": 6,
    "actionability": 3,
    "completeness": 7
  },
  "category": "people",
  "subcategory": null,
  "clusters": [
    "epistemics"
  ],
  "metrics": {
    "wordCount": 2901,
    "tableCount": 3,
    "diagramCount": 0,
    "internalLinks": 16,
    "externalLinks": 31,
    "footnoteCount": 28,
    "bulletRatio": 0.07,
    "sectionCount": 19,
    "hasOverview": true,
    "structuralScore": 13
  },
  "suggestedQuality": 87,
  "updateFrequency": 45,
  "evergreen": true,
  "wordCount": 2901,
  "unconvertedLinks": [
    {
      "text": "Samotsvety - Track Record",
      "url": "https://samotsvety.org/track-record/",
      "resourceId": "c7b435dfad2f7ca2",
      "resourceTitle": "Samotsvety Track Record"
    }
  ],
  "unconvertedLinkCount": 1,
  "convertedLinkCount": 0,
  "backlinkCount": 0,
  "redundancy": {
    "maxSimilarity": 17,
    "similarPages": [
      {
        "id": "samotsvety",
        "title": "Samotsvety",
        "path": "/knowledge-base/organizations/samotsvety/",
        "similarity": 17
      },
      {
        "id": "good-judgment",
        "title": "Good Judgment (Forecasting)",
        "path": "/knowledge-base/organizations/good-judgment/",
        "similarity": 16
      },
      {
        "id": "rethink-priorities",
        "title": "Rethink Priorities",
        "path": "/knowledge-base/organizations/rethink-priorities/",
        "similarity": 16
      },
      {
        "id": "vidur-kapur",
        "title": "Vidur Kapur",
        "path": "/knowledge-base/people/vidur-kapur/",
        "similarity": 15
      },
      {
        "id": "vipul-naik",
        "title": "Vipul Naik",
        "path": "/knowledge-base/people/vipul-naik/",
        "similarity": 15
      }
    ]
  }
}
Entity Data
{
  "id": "nuno-sempere",
  "type": "person",
  "title": "Nuño Sempere",
  "description": "Nuño Sempere is a Spanish superforecaster who co-founded the highly successful Samotsvety forecasting group and now runs Sentinel for global catastrophe early warning, while being known for skeptical views on high AI existential risk estimates and critical perspectives on EA institutions. The articl",
  "tags": [],
  "relatedEntries": [],
  "sources": [],
  "lastUpdated": "2026-02",
  "customFields": []
}
Canonical Facts (0)

No facts for this entity

External Links

No external links

Backlinks (0)

No backlinks

Frontmatter
{
  "title": "Nuño Sempere",
  "description": "Spanish forecaster and researcher who co-founded Samotsvety Forecasting (winning CSET-Foretell by an 'obscene margin') and founded Sentinel, a non-profit for global catastrophic risk early warning. Known for superforecasting expertise, AI timelines analysis, and critical perspectives on Effective Altruism.",
  "importance": 45,
  "lastEdited": "2026-02-01",
  "update_frequency": 45,
  "sidebar": {
    "order": 50
  },
  "ratings": {
    "novelty": 4,
    "rigor": 6,
    "actionability": 3,
    "completeness": 7
  },
  "quality": 50,
  "llmSummary": "Nuño Sempere is a Spanish superforecaster who co-founded the highly successful Samotsvety forecasting group and now runs Sentinel for global catastrophe early warning, while being known for skeptical views on high AI existential risk estimates and critical perspectives on EA institutions. The article provides comprehensive coverage of his work, achievements, and controversial positions within the rationalist/EA community.",
  "balanceFlags": [
    "no-criticism-section"
  ],
  "clusters": [
    "epistemics"
  ],
  "entityType": "person"
}
Raw MDX Source
---
title: Nuño Sempere
description: Spanish forecaster and researcher who co-founded Samotsvety
  Forecasting (winning CSET-Foretell by an 'obscene margin') and founded
  Sentinel, a non-profit for global catastrophic risk early warning. Known for
  superforecasting expertise, AI timelines analysis, and critical perspectives
  on Effective Altruism.
importance: 45
lastEdited: "2026-02-01"
update_frequency: 45
sidebar:
  order: 50
ratings:
  novelty: 4
  rigor: 6
  actionability: 3
  completeness: 7
quality: 50
llmSummary: Nuño Sempere is a Spanish superforecaster who co-founded the highly
  successful Samotsvety forecasting group and now runs Sentinel for global
  catastrophe early warning, while being known for skeptical views on high AI
  existential risk estimates and critical perspectives on EA institutions. The
  article provides comprehensive coverage of his work, achievements, and
  controversial positions within the rationalist/EA community.
balanceFlags:
  - no-criticism-section
clusters: ["epistemics"]
entityType: person
---
import {EntityLink, KeyPeople, KeyQuestions, Section} from '@components/wiki';

## Quick Assessment

| Dimension | Assessment |
|-----------|------------|
| **Primary Role** | Superforecaster, researcher, entrepreneur |
| **Key Organizations** | Co-founder of Samotsvety Forecasting; founder of Sentinel; runs Shapley Maximizers OÜ consultancy |
| **Notable Achievements** | Samotsvety won <EntityLink id="E524">CSET</EntityLink>-Foretell competition by approximately 2x margin over second place; ranked 2nd all-time on INFER platform |
| **Core Expertise** | Forecasting methodology, AI timelines, quantified uncertainty, risk assessment |
| **Criticisms** | Communication style described as "bellicose" and sometimes unproductive; skeptical of high AI existential risk estimates |
| **Current Focus** | Building <EntityLink id="E566">Sentinel</EntityLink> as early-warning system for global catastrophes; independent consulting |

## Key Links

| Source | Link |
|--------|------|
| Personal Website | [nunosempere.com](https://nunosempere.com) |
| Alignment Forum | [alignmentforum.org](https://www.alignmentforum.org/users/nunosempere) |
| EA Forum | [forum.effectivealtruism.org](https://forum.effectivealtruism.org/users/nunosempere) |

## Overview

Nuño Sempere (born October 10, 1998) is a Spanish forecaster, researcher, and entrepreneur known for exceptional performance in forecasting competitions and critical analysis of existential risk estimates. He co-founded <EntityLink id="E560">Samotsvety</EntityLink>, a forecasting group that won the CSET-Foretell competition "by an absolutely obscene margin, around twice as good as the next-best team in terms of the relative Brier score," by one account.[^1] Scott Alexander has described Samotsvety members as "some of the best superforecasters in the world."[^2]

Sempere currently leads Sentinel, a non-profit organization focused on early detection and response to global catastrophes including pandemics, wars, and financial crises that could kill over one million people.[^3] The organization processes millions of news items through automated scrapers to identify emerging risks and publishes weekly "Sentinel minutes" providing curated analysis of global catastrophic risks. He also runs Shapley Maximizers OÜ, a consultancy specializing in "niche estimation, evaluation, and impact auditing" for value-producing organizations.[^4]

Beyond his operational work, Sempere has become a prominent voice critiquing aspects of the Effective Altruism community and questioning high existential risk estimates from AI. His 2023 "skepticism braindump" challenged AI doom probabilities around 80% by 2070, arguing they may reflect "selection effects, social pressures, and methodological issues" within the rationalist and EA communities.[^5] He has consulted with major AI labs and institutions, managed teams of 10-20 forecasters, and contributed significantly to forecasting methodology research.[^6]

## Background and Early Career

Sempere studied Mathematics and Philosophy but dropped out due to dissatisfaction with the educational system's inefficiency.[^7] He subsequently pursued development economics and maintained interests in Spanish poetry and literature, having previously written a popular Spanish literature blog.

His forecasting career began on prediction platforms including <EntityLink id="E532">Good Judgment</EntityLink> Open and CSET-Foretell.[^7] Around 2020, he met fellow forecaster Misha Yagudin at a summer fellowship at Oxford's <EntityLink id="E140">Future of Humanity Institute</EntityLink>, where both developed their forecasting expertise.[^8] In 2020, Sempere served as a Future of Humanity Institute Summer Research Fellow and received a grant from the Long Term Future Fund to conduct "independent research on forecasting and optimal paths to improve the long-term."[^7]

During this period, Sempere worked at the Quantified Uncertainty Research Institute (QURI) on longtermism, forecasting, and quantification research.[^7] At QURI, he programmed <EntityLink id="E200">Metaforecast</EntityLink>.org, a search tool aggregating predictions from multiple forecasting platforms, which he continues to maintain. He also published a Forecasting Newsletter that accumulated thousands of subscribers before he discontinued it as the opportunity cost of his time grew.[^7]

Sempere has been involved in organizing the European Summer Program on Rationality during multiple years (2017, 2018, 2019, 2020, and 2022).[^7] He spent time in the Bahamas as part of the FTX EA Fellowship.[^7]

## Samotsvety Forecasting

Sempere is a founding member of Samotsvety, a forecasting collective that has achieved extraordinary success in competitions. The group won the CSET-Foretell forecasting competition by performing approximately twice as well as the second-place team in terms of relative Brier score.[^2]

Samotsvety's track record includes multiple first-place finishes on the INFER/CSET-Foretell platform:

- **2020**: 1st place with relative Brier score of -0.912 versus -0.062 for second place; Samotsvety members ranked 5th, 6th, and 7th individually[^9]
- **2021**: 1st place with score of -3.259 versus -0.889 for second place and -0.267 for Pro Forecasters; members ranked 1st, 2nd, 4th, and 5th individually[^9]
- **2022**: 1st place despite reduced participation[^9]

As of early 2024, Samotsvety team members held the 1st, 2nd, 3rd, and 4th positions all-time on INFER rankings.[^9] The team also placed 4th on the Insight Prediction leaderboard as of September 2022, notably due to a correct bet on the Russian invasion of Ukraine.[^9]

Sempere's personal forecasting achievements include ranking in the top 5 during INFER's first season, achieving 2nd best performance in the second season, and holding the 2nd place all-time position as of February 2024.[^9] On Good Judgment Open, his Brier score of 0.206 compared favorably to the median of 0.29 (a ratio of 0.71).[^9]
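
For readers unfamiliar with the scoring rule behind these numbers, the sketch below (in Python, purely illustrative and not Samotsvety's or INFER's actual scoring code) shows how a Brier score is computed and how a "relative" variant can be derived; the exact normalization INFER uses is assumed here to be the difference from the platform median.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.
    0 is perfect; a constant 50% forecast scores 0.25; 1 is maximally wrong."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example forecasts -- not Sempere's actual predictions.
my_forecasts = [0.9, 0.2, 0.7, 0.1]
outcomes     = [1,   0,   1,   0]

mine = brier_score(my_forecasts, outcomes)   # 0.0375
platform_median = 0.29                       # e.g. the Good Judgment Open median cited above

# One common convention for a "relative" score is the difference from the median
# forecaster, so that more negative is better -- consistent with the negative
# competition scores listed above.
relative = mine - platform_median
ratio = mine / platform_median               # the 0.206 / 0.29 ≈ 0.71 figure uses this form
print(mine, relative, round(ratio, 2))
```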

## Sentinel: Global Catastrophe Early Warning

In 2024, Sempere founded Sentinel, described as a "free early-warning system for global catastrophes" that could kill over one million people.[^3] The organization focuses on high-impact catastrophes including pandemics, wars, and major financial crises, using automated scrapers to process millions of news items while maintaining a foresight team for large-scale and existential risks.[^3]
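
Sentinel's public write-ups describe this pipeline only at a high level. As a purely hypothetical sketch of what scraping news feeds and surfacing candidate risk items might look like, with every feed URL, keyword, and filtering rule invented for illustration:

```python
import feedparser  # widely used RSS/Atom parsing library

# Hypothetical configuration -- Sentinel's actual sources and triage criteria are not public.
FEEDS = ["https://example.org/world-news.rss"]
RISK_KEYWORDS = {"outbreak", "mobilization", "default", "reactor", "ceasefire"}

def candidate_risk_items(feeds, keywords):
    """Yield news items whose titles mention a catastrophe-related keyword.
    A production system would add deduplication, model-based triage, and human review."""
    for url in feeds:
        for entry in feedparser.parse(url).entries:
            title = entry.get("title", "")
            if any(k in title.lower() for k in keywords):
                yield {"title": title, "link": entry.get("link")}

if __name__ == "__main__":
    for item in candidate_risk_items(FEEDS, RISK_KEYWORDS):
        print(item["title"], "->", item["link"])
```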

In late 2024, Sempere brought on Rai Sur as cofounder and CTO.[^10] Sur previously founded and served as CTO of the crypto fintech startup Alongside (now Universal), which raised over \$13 million from a16z and remained operational for at least five years.[^6] Sur designed systems that secured \$6 million of assets in smart contracts with no security breaches and has experience managing large budgets and multiple employees.[^6]

Sentinel's foresight team includes Lisa (surname redacted), <EntityLink id="E580">Vidur Kapur</EntityLink>, Tolga Bilge, Leif Sigrúnsson, and an anonymous expert geopolitics forecaster.[^6] The organization publishes weekly "Sentinel minutes" that have gained traction within the risk assessment community, with supporters noting they've become a primary news source for major global developments.[^10]

In Q4 2024/Q1 2025, Sentinel sought additional funding to transition to full-time operations, incorporate as a US non-profit, expand the foresight and reserve teams, increase operational capacity, and establish an emergency response fund.[^10] The organization received early funding via <EntityLink id="E547">Manifund</EntityLink>, which Sempere described as "useful to not have money be a bottleneck."[^11] Sentinel is supported by Impact Ops for operations, which enabled the organization to register as a 501(c)(3).[^6]

Recent forecasts from Sentinel's team in early 2025 included a 73% probability (50-90% range) that the US would carry out an attack on Venezuelan territory before the end of 2025, and a 47% probability (20-70% range) that Nicolás Maduro would remain President of Venezuela through March 2026.[^12]

## Shapley Maximizers Consultancy

Sempere founded and runs Shapley Maximizers OÜ, a consultancy registered in Estonia on May 3, 2023.[^4] The company's mission is "niche estimation, evaluation, and impact auditing for value-producing people/organizations to add clarity and improve prioritization via forecasting and judgment."[^4]

Core competencies include research on forecasting incentives, AI progress, <EntityLink id="E228">prediction markets</EntityLink>, and scoring rules, as well as project evaluations. Notable work includes an evaluation of the EA Wiki that received praise for rigor.[^4] Sempere has consulted with major AI labs and other large institutions, and has managed teams of 10-20 forecasters in various contexts.[^6]

Financial data for Shapley Maximizers OÜ shows:

| Metric | 2025 Forecast | Change vs. Prior Year |
|--------|---------------|----------------------|
| Turnover | €6,115 | -91% |
| Average monthly turnover | €510 | N/A |
| Total profit | €97,274 | N/A |
| Net profit | €2,846 | N/A |
| Balance sheet size | €97,283 | +3% |
| Profit margin | 47% | -12% |

The company has a reputation score of 640 and a credit score of 0.01.[^13]

Sempere has stated that he established Shapley Maximizers as a "very profitable" consultancy which he used to bootstrap initial funding for Sentinel, and is now winding it down as he focuses on Sentinel.[^14]

## Research Contributions

Sempere has produced significant research on forecasting methodology, AI assessment, and effective altruism evaluation.

### Forecasting Methodology

Key research outputs include work on incentive problems in forecasting, prediction market design, and technological discontinuities.[^4] He co-authored a paper with Alex Lawsen on "Alignment Problems With Current Forecasting Platforms," published on arXiv in June 2021.[^15] He has also explored practical limitations of forecasting methodologies and their application to AI progress.

In 2023, Sempere created approximately 700 AI safety forecasting questions for <EntityLink id="E552">Coefficient Giving</EntityLink> as part of work with the <EntityLink id="E514">Arb Research</EntityLink> team, along with documents on operationalizing FLOPs (floating point operations) and resolution councils.[^16] He authored "Hurdles of using forecasting as a tool for making sense of AI progress," commissioned by Coefficient Giving, outlining challenges in AI forecasting.[^16]

### Evaluation and Cost-Effectiveness

Sempere conducted a project through the Quantified Uncertainty Research Institute on valuing research works by eliciting comparisons from EA researchers.[^17] This work revealed significant disagreements among researchers about research value, sometimes spanning several orders of magnitude.
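
To make the elicitation format concrete, here is a toy aggregation in which researchers state how many times more valuable each work is than a shared reference work and the ratios are combined by geometric mean. This is a simplified sketch with invented numbers; the actual QURI project elicited full distributions over relative values rather than point ratios.

```python
from math import exp, log
from statistics import mean

# Hypothetical judgments: each researcher gives the value of a work as a multiple
# of a shared reference work (1.0 = as valuable as the reference).
judgments = {
    "work_A": [3.0, 10.0, 0.5],   # disagreement spanning more than an order of magnitude
    "work_B": [0.2, 0.1, 0.3],
}

def aggregate(ratios):
    """Geometric mean of relative-value ratios, the natural average for multiplicative scales."""
    return exp(mean(log(r) for r in ratios))

for work, ratios in judgments.items():
    spread = max(ratios) / min(ratios)
    print(f"{work}: aggregate ≈ {aggregate(ratios):.2f}, researcher spread {spread:.0f}x")
```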

He has performed evaluations of various organizations and grant programs:

- **EA Wiki**: External evaluation praised for thoroughness[^4]
- **Long-Term Future Fund**: Analysis of 2018-2019 grantees (23 grants totaling \$803,650), finding 26% more successful than expected (\$178,500), 22% as expected (\$147,250), with 5 grants (\$195,000) not evaluated due to conflicts of interest[^18]
- **Longtermist organizations**: Shallow evaluations of organizations including ALLFED, APPGFG, CSER, CSET, and FLI[^19]

In 2021, Sempere developed cost-effectiveness models for AI safety, noted as a rare quantitative effort comparable to GiveWell-style analysis, though he highlighted issues like long feedback loops and field sensitivity that make such analysis challenging.[^20]
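
As an illustration of what such a GiveWell-style quantitative model involves, the following is a minimal Monte Carlo sketch with entirely made-up parameters; it is not Sempere's model, only the general pattern of propagating uncertain inputs through a simple chain of estimates.

```python
import math
import random

def lognormal_from_90ci(low, high):
    """Sample a lognormal whose central ~90% interval is roughly (low, high),
    a common convention in Fermi-style estimation."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return random.lognormvariate(mu, sigma)

def value_per_dollar():
    # All inputs are invented for illustration only.
    cost = lognormal_from_90ci(1e6, 1e7)              # programme cost in dollars
    risk_reduction = lognormal_from_90ci(1e-5, 1e-3)  # absolute reduction in catastrophe probability
    value_if_averted = 1e9                            # stylised value of avoiding the catastrophe
    return risk_reduction * value_if_averted / cost

samples = sorted(value_per_dollar() for _ in range(10_000))
print("median value per dollar:", round(samples[len(samples) // 2], 3))
print("90% interval:", round(samples[500], 3), "to", round(samples[9500], 3))
```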

### AI Timelines and Risk Assessment

Sempere has conducted significant work on forecasting the arrival of human-level AI systems and has explored the limitations of current approaches.[^21] He edited "A Gentle Introduction to Risk Frameworks Beyond Forecasting," written by Nathaniel Cooke, which covers how risk scholars conceptualize disaster risks, Normal Accident Theory, and the methods professionals use to study the future.[^22]

## Skepticism Toward High AI Existential Risk Estimates

In January 2023, Sempere published "My highly personal skepticism braindump on existential risk from artificial intelligence," critiquing AI doom probabilities around 80% by 2070.[^5] He framed these high estimates as potentially "overhyped due to selection effects, social pressures, and methodological issues," while acknowledging his views as "highly personal" and reactive to rationalist/EA worldviews from 2016-2019.[^5] He noted the document has "significant weaknesses" including verbalization-to-rationalization risks and mixing obvious with obscure points.[^5]

Sempere's main arguments include:

**Selection Effects**: He argues that high existential risk estimates may arise from communities selecting for alarmism. For example, he claims that CFAR (<EntityLink id="E518">Center for Applied Rationality</EntityLink>) "fetishized the end of the world" to justify its importance, injecting "doomy narratives" into the community.[^5]

**Conjunctiveness and Imperfect Concepts**: He expresses skepticism about long conjunctive chains (multiple failures needed for doom scenarios) applied to near-term AI, arguing they rely on "in-the-limit" superintelligence assumptions not action-guiding for current systems. He critiques Nate Soares' rebuttal to Joe Carlsmith's <EntityLink id="E226">power-seeking AI</EntityLink> report as potentially biased under social pressure.[^23]

**Social Dynamics**: Sempere describes feeling "uneasy with pressure to dismiss counterarguments probabilistically," characterizing <EntityLink id="E202">MIRI</EntityLink> and CFAR as "one-sided doomers without paid counterpoints."[^23]

**Forecasting Context**: As part of Samotsvety, Sempere contributes to forecasts that tend toward lower AI risk estimates compared to some other forecasting groups.[^23]

Responses to his skepticism have been mixed. Some commenters view it as healthy sanity-checking of "gung-ho advocates," while others have pushed back on specific claims, arguing for instance that MIRI does provide non-doom arguments and that disagreements rooted in priors should not be dismissed.[^23] Sempere's own framing acknowledges the document's methodological limitations.

## Effective Altruism Community Engagement and Criticism

Sempere has been an active and increasingly critical voice within the Effective Altruism community. He was formerly a prolific contributor to the EA Forum and <EntityLink id="E538">LessWrong</EntityLink>, though he now primarily posts on his personal site (nunosempere.com/blog) due to dissatisfaction with EA Forum changes.[^14]

### Critiques of EA Institutions

In March 2024, Sempere published "Unflattering aspects of Effective Altruism," outlining several concerns:[^24]

**EA Forum Stewardship**: He criticized the forum's shift toward "catering to marginal/newbie users" with more introductory content, drawing an unflattering comparison to Reddit. He questioned whether recent changes justify \$2 million per year in funding and 6-8 full-time staff.[^25] Sempere argued that expansion during the FTX era was a "bad judgment call" now requiring downsizing, and expressed concern about moderation against "disagreeable voices."[^25]

**Leadership Accountability**: He claimed EA leaders like <EntityLink id="E156">Holden Karnofsky</EntityLink> at Coefficient Giving prioritize philosophy and funders over community input, citing instances of leaders ignoring comments.[^24]

**Philosophical Seduction**: Sempere argued that while EA ideas are appealing, they can lead to ineffective projects, suggesting the philosophy sometimes masks poor execution.[^24]

**Coefficient Giving's Criminal Justice Reform**: He analyzed approximately \$200 million donated by Coefficient Giving to criminal justice reform from 2013-2021, questioning the sincerity and effectiveness of this focus area.[^26] He suggested that "politics is the mind-killer" may have led to degraded reasoning, principal-agent problems, and motivated reasoning. Specific grants he critiqued included \$2.5 million to Color Of Change Education Fund (approximately 50% of their one-year budget) and \$10,000 to Photo Patch Foundation, which he compared unfavorably to deworming interventions costing \$0.35-\$0.97 per treatment.[^26]

### Communication Style Concerns

Multiple community members have described Sempere's communication style as "bellicose" and sometimes unproductive for fostering dialogue on charged topics.[^27] For example, his phrasing "I disagree with the EA Forum's approach to life" (later softened) caused confusion among readers.[^27] Critics also note that he sometimes mixes obvious points with controversial ones, a pattern he himself acknowledges risks rationalization rather than genuine verbalization of concerns.[^5]

However, defenders emphasize that his critiques often identify real issues even if the framing could be improved. Some argue that the "inferential distance" between Sempere and other community members contributes to misunderstandings, and that his substantive points about feedback loops in "EA machinery" merit serious consideration.[^27]

### Alternative Infrastructure

Sempere created his own "soothing frontend" for the EA Forum (forum.nunosempere.com) that loads in approximately 0.5 seconds versus roughly 5 seconds for the official site, and excludes certain users whose posts he considers to have a low signal-to-noise ratio.[^28] He remains subscribed to the EA Forum RSS feed and skims posts, but has largely moved his own writing to his personal blog.[^14]

## Funding and Financial Information

Sempere has received funding from various sources within the EA and forecasting communities:

- **Long Term Future Fund**: Received a grant of undisclosed amount for "independent research on forecasting and optimal paths to improve the long-term" (2020)[^7]
- **Coefficient Giving**: Received funding for AI forecasting documents and question creation (2023)[^16]
- **Manifund**: Received early funding for Sentinel via Manifund's regranting program, which he described as "psychologically motivating" and ensuring "money wouldn't be a bottleneck"[^11]

Through Manifund, Sempere has directed grants to projects including Riesgos Catastróficos Globales (focusing on catastrophic risks in Spanish-speaking communities) and has considered funding for APART research and forecasting experimentation.[^11]

## Key Uncertainties

Several aspects of Sempere's work and impact remain uncertain or subject to debate:

**Sentinel's Long-term Viability**: While Sentinel has established infrastructure for processing news and publishing weekly analyses, it remains unclear whether the organization can sustain operations long-term and whether its early-warning approach will prove effective at preventing or mitigating large-scale catastrophes.

**AI Risk Assessment Accuracy**: Sempere's skepticism toward high AI existential risk estimates positions him against some prominent voices in the AI safety community. The accuracy of his position versus higher-doom-probability forecasts remains highly uncertain and depends on the development trajectory of AI systems.

**Community Influence Trade-offs**: Sempere's direct, critical communication style has made him a polarizing figure. While some value his willingness to challenge consensus views, others argue his approach hinders productive dialogue. The net impact of his communication style on community epistemics remains debatable.

**Forecasting Methodology Limitations**: While Sempere has achieved exceptional results in forecasting competitions, the applicability of these skills to long-term, low-probability catastrophic risks (where feedback is sparse or nonexistent) remains an open question that he himself has explored in his research on forecasting limitations.

**EA Critique Accuracy**: The extent to which Sempere's criticisms of EA institutions accurately identify problems versus reflect idiosyncratic preferences or incomplete information is contested within the community. His critiques of Coefficient Giving's criminal justice reform funding, for instance, involve complex cause prioritization questions without clear empirical resolution.

## Sources

[^1]: [Alethios Substack - With Nuño Sempere: Superforecasting](https://alethios.substack.com/p/with-nuno-sempere-superforecasting)
[^2]: [EA Forum - Sentinel: Early Detection and Response for Global Catastrophes](https://forum.effectivealtruism.org/posts/GAFwc5jGFjqSetJiu/sentinel-early-detection-and-response-for-global)
[^3]: [YouTube - With Nuño Sempere: Superforecasting and global catastrophic risks](https://www.youtube.com/watch?v=zmsBiFwRlXc)
[^4]: [Nuño Sempere - Consulting](https://nunosempere.com/consulting/)
[^5]: [Nuño Sempere - My highly personal skepticism braindump on existential risk from artificial intelligence](https://nunosempere.com/blog/2023/01/23/my-highly-personal-skepticism-braindump-on-existential-risk/)
[^6]: [EA Forum - Sentinel: Early Detection and Response for Global Catastrophes](https://forum.effectivealtruism.org/posts/GAFwc5jGFjqSetJiu/sentinel-early-detection-and-response-for-global)
[^7]: [LessWrong - NunoSempere user profile](https://www.lesswrong.com/users/nunosempere)
[^8]: [NASDAQ - A Look at Samotsvety Forecasting: One of the World's Best Predictors of the Future](https://www.nasdaq.com/articles/a-look-at-samotsvety-forecasting-one-of-the-worlds-best-predictors-of-the-future)
[^9]: [Samotsvety - Track Record](https://samotsvety.org/track-record/)
[^10]: [Manifund - Fund Sentinel for Q4 2024](https://manifund.org/projects/fund-sentinel-for-q4-2024)
[^11]: [Manifund - NunoSempere](https://manifund.org/NunoSempere)
[^12]: [Sentinel Blog - Rising China-Japan tensions, Iran developments](https://blog.sentinel-team.org/p/rising-china-japan-tensions-iran)
[^13]: [Inforegister - Shapley Maximizers OÜ](https://www.inforegister.ee/en/2329278-ID/)
[^14]: [EA Forum - NunoSempere user profile](https://forum.effectivealtruism.org/users/nunosempere)
[^15]: [Semantic Scholar - Alignment Problems With Current Forecasting Platforms](https://www.semanticscholar.org/paper/Alignment-Problems-With-Current-Forecasting-Sempere-Lawsen/53ba622a964bbc6bc2f674d7d4127aff886ff686)
[^16]: [GitHub - NunoSempere/clarivoyance](https://github.com/NunoSempere/clarivoyance)
[^17]: [EA Forum - Valuing research works by eliciting comparisons from EA researchers](https://forum.effectivealtruism.org/posts/hrdxf5qdKmCZNWTvs/valuing-research-works-by-eliciting-comparisons-from-ea)
[^18]: [EA Forum - 2018-2019 Long-Term Future Fund grantees: How did they do?](https://forum.effectivealtruism.org/posts/Ps8ecFPBzSrkLC6ip/2018-2019-long-term-future-fund-grantees-how-did-they-do)
[^19]: [Nuño Sempere - Shallow evaluations of longtermist organizations](https://nunosempere.com/blog/2021/06/24/shallow-evaluations-of-longtermist-organizations/)
[^20]: [EA Forum - Is there an AI Safety GiveWell?](https://forum.nunosempere.com/posts/7JM3PHepa87KB2d5Z/is-there-an-ai-safety-givewell)
[^21]: [Epoch AI - Direct Approach Review - Nuño Sempere](https://epoch.ai/files/direct-approach-review-nuno-sempere.pdf)
[^22]: [LessWrong - A Gentle Introduction to Risk Frameworks Beyond Forecasting](https://www.lesswrong.com/posts/ae3HecTe2uKscabPe/a-gentle-introduction-to-risk-frameworks-beyond-forecasting)
[^23]: [EA Forum - My highly personal skepticism braindump on existential risk](https://forum.effectivealtruism.org/posts/L6ZmggEJw8ri4KB8X/my-highly-personal-skepticism-braindump-on-existential-risk)
[^24]: [Nuño Sempere - Unflattering aspects of EA](https://nunosempere.com/blog/2024/03/05/unflattering-aspects-of-ea/)
[^25]: [Nuño Sempere - EA Forum Stewardship](https://nunosempere.com/blog/2023/10/15/ea-forum-stewardship/)
[^26]: [Nuño Sempere - Coefficient Giving's Criminal Justice Reform bet](https://nunosempere.com/blog/2022/06/16/criminal-justice/)
[^27]: [EA Forum - Unflattering aspects of Effective Altruism (discussion)](https://forum.effectivealtruism.org/posts/coWvsGuJPyiqBdrhC/unflattering-aspects-of-effective-altruism)
[^28]: [Nuño Sempere Forum - Alternative EA Forum frontend](https://forum.nunosempere.com)