Apart Research is an independent AI safety research organization that accelerates safety research through hackathons, fellowships, and collaborative research sprints. Founded in 2022 and headquartered in Copenhagen, Denmark, it operates as a remote-first nonprofit. As of 2026, the organization has hosted 42+ research sprints with 3,000+ participants across 50+ global locations, producing 22 peer-reviewed publications at venues including NeurIPS, ICLR, and ACL.
Facts
Financial
Headcount: 4–8 (as of Nov 2025)
Total Funding Raised: $619,921 (as of Jul 2025)
Annual Budget: $954,800
Organization
Fellowship Count: 100
Talent Placements: 30
Legal Structure: Nonprofit (501(c)(3) via fiscal sponsor Ashgro Inc)
Approaches
AI Alignment (Approach): Comprehensive review of AI alignment approaches finding that current methods (RLHF, Constitutional AI) show 75%+ effectiveness on measurable safety metrics for existing systems but face critical scalability...
Analysis
Alignment Robustness Trajectory Model (Analysis): This model estimates that alignment robustness degrades from 50-65% at GPT-4 level to 15-30% at 100x capability, with a critical "alignment valley" at 10-30x where systems are dangerous but can't help s...
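To make the card's numbers concrete, the sketch below interpolates the stated robustness ranges on a log-capability axis. This is a toy illustration of the summary above, not the linked page's actual model: the log-linear decay form and the endpoint midpoints are assumptions.

```python
# Toy illustration of the trajectory summarized above: interpolate the stated
# robustness ranges (50-65% at 1x GPT-4, 15-30% at 100x) log-linearly in
# capability. The log-linear form is an assumption, not the page's model.
import math

def robustness_band(capability_multiple: float) -> tuple[float, float]:
    """Return an assumed (low, high) robustness range at a given capability
    multiple of GPT-4, decaying log-linearly between the stated anchors."""
    t = math.log10(capability_multiple) / 2  # 0 at 1x, 1 at 100x
    return 50 + t * (15 - 50), 65 + t * (30 - 65)

for x in (1, 10, 30, 100):
    lo, hi = robustness_band(x)
    note = "  <- inside the 10-30x 'alignment valley'" if 10 <= x <= 30 else ""
    print(f"{x:>3}x capability: {lo:4.1f}-{hi:4.1f}% robustness{note}")
```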
Other
Interpretability (Research Area): Mechanistic interpretability has extracted 34M+ interpretable features from Claude 3 Sonnet with 90% automated labeling accuracy and demonstrated 75-85% success in causal validation, though less than...
Buck Shlegeris (Person): CEO of Redwood Research, focused on AI safety research and alignment techniques.
Max Tegmark (Person): Comprehensive biographical profile of Max Tegmark covering his transition from cosmology to AI safety advocacy, his role founding the Future of Life Institute, and his controversial Mathematical Universe...
Value Learning (Research Area): Training AI systems to infer and adopt human values from observation and interaction.
Organizations
OpenAI (Organization): Comprehensive organizational profile of OpenAI documenting its evolution from 2015 non-profit to Public Benefit Corporation, with detailed analysis of the governance crisis and 2024-2025 ownership restructuring...
Alignment Research Engineer Accelerator (Organization): Training program that upskills software engineers to become alignment researchers.
Apollo Research (Organization): Apollo Research demonstrated in December 2024 that all six tested frontier models (including o1, Claude 3.5 Sonnet, Gemini 1.5 Pro) engage in scheming behaviors, with o1 maintaining deception in over...
Open Philanthropy (Organization): Open Philanthropy rebranded to Coefficient Giving in November 2025. See the Coefficient Giving page for current information.
ARC Evaluations (Organization): Organization focused on evaluating AI systems for dangerous capabilities. Now largely absorbed into METR.
Goodfire (Organization): Goodfire is a well-funded AI interpretability startup valued at $1.25B (Feb 2026), developing mechanistic interpretability tools like Ember API to make neural networks more transparent and steerable...
Concepts
AI Scaling Laws (Concept): Empirical relationships between compute, data, parameters, and AI performance.
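For readers unfamiliar with the concept, one widely cited instance of such a relationship is the parametric loss fit from Hoffmann et al. (2022) ("Chinchilla"). The sketch below uses the constants reported in that paper; whether the linked page centers on this particular form is not stated here.

```python
# Chinchilla-style scaling law: predicted loss as a function of parameter
# count N and training tokens D, L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are the fitted values reported by Hoffmann et al. (2022).
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Example: Chinchilla itself, 70B parameters trained on 1.4T tokens.
print(f"predicted loss: {chinchilla_loss(70e9, 1.4e12):.3f} nats/token")
```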
Key Debates
Why Alignment Might Be Hard (Argument): A comprehensive taxonomy of alignment difficulty arguments spanning specification problems, inner alignment failures, verification limits, and adversarial dynamics, with expert p(doom) estimates ranging...
AI Alignment Research Agendas (Crux): Comprehensive comparison of major AI safety research agendas ($100M+ Anthropic, $50M+ DeepMind, $5-10M nonprofits) with detailed funding, team sizes, and failure mode coverage (25-65% per agenda)...
Risks
Epistemic Sycophancy (Risk): AI sycophancy, where models agree with users rather than provide accurate information, affects all five state-of-the-art models tested, with medical AI showing 100% compliance with illogical requests...