Bridgewater AIA Labs launched a $2B AI-driven macro fund in July 2024 that returned 11.9% in 2025. The fund combines proprietary ML models with LLMs from OpenAI, Anthropic, and Perplexity, wrapped in multi-layer guardrails that reduced error rates from 8% to 1.6%. The division has minimal AI safety relevance, focusing on financial applications rather than safety research.
- ForecastBench (Project): a dynamic, contamination-free benchmark with 1,000 continuously updated questions comparing LLM forecasting to superforecasters. GPT-4.5 achieves a 0.101 Brier score vs 0.081 for superforecasters. (Quality: 53/100)
- Authoritarian Tools Diffusion Model (Analysis): analyzes how AI surveillance technologies diffuse to authoritarian regimes through commercial sales, development assistance, joint ventures, reverse engineering, and illicit acquisition. (Quality: 62/100)
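For reference on the Brier scores cited above: a Brier score is the mean squared error between predicted probabilities and realized binary outcomes, so lower is better and 0 is perfect. A minimal sketch of the computation (the forecasts and outcomes below are illustrative, not ForecastBench data):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    assert len(probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Illustrative forecasts only (not real benchmark data):
forecasts = [0.9, 0.2, 0.7, 0.4]  # predicted probability the event occurs
outcomes = [1, 0, 1, 1]           # what actually happened

print(brier_score(forecasts, outcomes))
```

On this toy data the score is 0.125; a forecaster averaging 0.081 across many questions (as the superforecasters above do) is assigning sharper, better-calibrated probabilities than one averaging 0.101.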
Concepts
- Large Language Models (Capability): comprehensive analysis of LLM capabilities showing rapid progress from GPT-2 (1.5B parameters, 2019) to GPT-5 and Gemini 2.5 (2025), with training costs growing 2.4x annually and projected to excee… (Quality: 60/100)
- Labs Overview: high-level overview of ~8 frontier AI labs covering founding dates, key models, safety approaches, and organizational structures, with brief sections on competitive dynamics, safety commitments, an… (Quality: 36/100)
- Agentic AI (Capability): analysis of agentic AI capabilities and deployment challenges, documenting industry forecasts (40% of enterprise apps by 2026, $199B market by 2034) alongside implementation difficulties (40%+ proj…) (Quality: 68/100)
- Scientific Research Capabilities (Capability): comprehensive survey of AI scientific research capabilities across biology, chemistry, materials science, and automated research, documenting key benchmarks (AlphaFold's 214M structures, GNoME's 2.…) (Quality: 68/100)
Organizations
- OpenAI (Organization): comprehensive organizational profile of OpenAI documenting evolution from 2015 non-profit to Public Benefit Corporation, with detailed analysis of governance crisis and 2024-2025 ownership restructuri… (Quality: 62/100)
- Machine Intelligence Research Institute (Organization): comprehensive organizational history documenting MIRI's trajectory from pioneering AI safety research (2000-2020) to policy advocacy after acknowledging research failure, with detailed financial da… (Quality: 50/100)
- Metaculus (Organization): Metaculus is a reputation-based forecasting platform with 1M+ predictions showing AGI probability at 25% by 2027 and 50% by 2031 (down from 50 years away in 2020). Analysis finds good short-term ca… (Quality: 50/100)
- LessWrong (Organization): LessWrong is a rationality-focused community blog founded in 2009 that has influenced AI safety discourse, receiving $5M+ in funding and serving as the origin point for ~31% of EA survey respondent… (Quality: 44/100)
- Good Judgment (Forecasting) (Organization): Good Judgment Inc. is a commercial forecasting organization that emerged from successful IARPA research, demonstrating that trained 'superforecasters' can outperform intelligence analysts and predi… (Quality: 50/100)
- Schmidt Futures (Organization): Schmidt Futures is a major philanthropic initiative founded by Eric Schmidt that has committed substantial funding to AI safety research ($135M across AI2050 and AI Safety Science programs) while a… (Quality: 60/100)
Other
- Scalable Oversight (Research Area): process supervision achieves 78.2% accuracy on MATH benchmarks (vs 72.4% outcome-based) and is deployed in OpenAI's o1 models, while debate shows 60-80% accuracy on factual questions with +4% impro… (Quality: 68/100)