This is a comprehensive overview of Polymarket as a prediction market platform, covering its history, mechanics, and accuracy, but it has minimal relevance to AI safety beyond brief mentions in the EA/forecasting section. While well documented, it primarily serves as general reference material on a prediction market platform rather than as AI safety analysis.
- ForecastBench (Project): ForecastBench is a dynamic, contamination-free benchmark with 1,000 continuously-updated questions comparing LLM forecasting to superforecasters. GPT-4.5 achieves 0.101 Brier score vs 0.081 for sup... (Quality: 53/100)
- Metaforecast (Project): Metaforecast is a forecast aggregation platform combining 2,100+ questions from 10+ sources (Metaculus, Manifold, Polymarket, etc.) with daily updates via automated scraping. Created by QURI, it pr... (Quality: 35/100)
- Anthropic IPO (Analysis): Anthropic is actively preparing for a potential 2026 IPO with concrete steps like hiring Wilson Sonsini and conducting bank discussions, though timeline uncertainty remains with prediction markets ... (Quality: 65/100)
- Anthropic (Funder) (Analysis): Comprehensive model of EA-aligned philanthropic capital at Anthropic. At $380B valuation (Series G, Feb 2026, $30B raised): $27-76B risk-adjusted EA capital expected. Total funding raised exceeds $... (Quality: 65/100)
Other
- Nuño Sempere (Person): Nuño Sempere is a Spanish superforecaster who co-founded the highly successful Samotsvety forecasting group and now runs Sentinel for global catastrophe early warning, while being known for skeptic... (Quality: 50/100)
- Robin Hanson (Person): Comprehensive biographical entry on Robin Hanson covering his contributions to prediction markets, futarchy governance, and skeptical AI safety positions. The page provides valuable context on a si... (Quality: 53/100)
- Vitalik Buterin (Person): Co-founder of Ethereum and major philanthropic donor. Has given over $1.5B to charity including MIRI, SENS Research Foundation, GiveDirectly, and various EA-aligned organizations.
Organizations
- Founders Fund (Organization): Founders Fund is a $17B contrarian VC firm that has backed major AI companies like OpenAI and DeepMind but shows no explicit focus on AI safety or alignment research, instead emphasizing rapid capa... (Quality: 50/100)
- Arb Research (Organization): Arb Research is a small AI safety consulting firm that produces methodologically rigorous research and evaluations, particularly known for their AI Safety Camp impact assessment and forecasting wor... (Quality: 50/100)
- Metaculus (Organization): Metaculus is a reputation-based forecasting platform with 1M+ predictions showing AGI probability at 25% by 2027 and 50% by 2031 (down from 50 years away in 2020). Analysis finds good short-term ca... (Quality: 50/100)
- Manifund (Organization): Manifund is a $2M+ annual charitable regranting platform (founded 2022) that provides fast grants (<1 week) to AI safety projects through expert regrantors ($50K-400K budgets), fiscal sponsorship, ... (Quality: 50/100)
- Manifold (Prediction Market) (Organization): Manifold is a play-money prediction market with millions of predictions and ~2,000 peak daily users, showing AGI by 2030 at ~60% vs Metaculus ~45%. Platform scored Brier 0.0342 on 2024 election (vs... (Quality: 43/100)
- LessWrong (Organization): LessWrong is a rationality-focused community blog founded in 2009 that has influenced AI safety discourse, receiving $5M+ in funding and serving as the origin point for ~31% of EA survey respondent... (Quality: 44/100)
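Several entries above quote Brier scores (e.g. ForecastBench's 0.101 vs 0.081, Manifold's 0.0342). For reference, the Brier score for binary questions is the mean squared error between forecast probabilities and 0/1 outcomes; lower is better, and always forecasting 50% scores 0.25. A minimal sketch (the example numbers are illustrative, not data from the platforms above):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities (in [0, 1])
    and realized binary outcomes (0 or 1). Lower is better;
    uniform 0.5 forecasts score 0.25."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must have equal length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative: a confident, mostly correct forecaster scores near 0.
score = brier_score([0.9, 0.2, 0.7], [1, 0, 1])  # (0.01 + 0.04 + 0.09) / 3
```

This is why the quoted scores are meaningful only relative to a baseline: 0.0342 on election markets reflects both skill and the fact that many of those questions resolved as expected.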