FAR AI
far-ai (E138)
Path: /knowledge-base/organizations/far-ai/
Page Metadata
{
"id": "far-ai",
"numericId": null,
"path": "/knowledge-base/organizations/far-ai/",
"filePath": "knowledge-base/organizations/far-ai.mdx",
"title": "FAR AI",
"quality": 32,
"importance": 32,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-01-31",
"llmSummary": "FAR AI (FAR.AI) is a 2022-founded AI safety research nonprofit led by CEO Adam Gleave and COO Karl Berzins. The organization focuses on technical AI safety research and coordination to ensure safety techniques are adopted. Their research has been cited in Congress and won best paper awards.",
"structuredSummary": null,
"description": "AI safety research nonprofit founded in 2022 by Adam Gleave and Karl Berzins, focusing on making AI systems safe through technical research and coordination",
"ratings": {
"novelty": 2.5,
"rigor": 3,
"actionability": 2,
"completeness": 5
},
"category": "organizations",
"subcategory": "safety-orgs",
"clusters": [
"ai-safety",
"community"
],
"metrics": {
"wordCount": 1359,
"tableCount": 14,
"diagramCount": 0,
"internalLinks": 13,
"externalLinks": 6,
"footnoteCount": 0,
"bulletRatio": 0.09,
"sectionCount": 33,
"hasOverview": true,
"structuralScore": 13
},
"suggestedQuality": 87,
"updateFrequency": 21,
"evergreen": true,
"wordCount": 1359,
"unconvertedLinks": [
{
"text": "FAR.AI",
"url": "https://www.far.ai/",
"resourceId": "9199f43edaf3a03b",
"resourceTitle": "FAR AI"
}
],
"unconvertedLinkCount": 1,
"convertedLinkCount": 1,
"backlinkCount": 0,
"redundancy": {
"maxSimilarity": 16,
"similarPages": [
{
"id": "arc",
"title": "ARC (Alignment Research Center)",
"path": "/knowledge-base/organizations/arc/",
"similarity": 16
},
{
"id": "safety-research-value",
"title": "Expected Value of AI Safety Research",
"path": "/knowledge-base/models/safety-research-value/",
"similarity": 14
},
{
"id": "chai",
"title": "CHAI (Center for Human-Compatible AI)",
"path": "/knowledge-base/organizations/chai/",
"similarity": 14
},
{
"id": "cais",
"title": "CAIS (Center for AI Safety)",
"path": "/knowledge-base/organizations/cais/",
"similarity": 13
},
{
"id": "conjecture",
"title": "Conjecture",
"path": "/knowledge-base/organizations/conjecture/",
"similarity": 13
}
]
}
}
Entity Data
{
"id": "far-ai",
"type": "organization",
"title": "FAR AI",
"description": "FAR AI (Forecasting AI Research) is an AI safety research organization founded in 2023 with a focus on adversarial robustness, model evaluation, and alignment research. The organization was co-founded by Dan Hendrycks, a prominent AI safety researcher known for his work on benchmarks, robustness, and AI risk.",
"tags": [
"adversarial-robustness",
"ml-safety",
"benchmarking",
"natural-abstractions",
"evaluation",
"mmlu",
"out-of-distribution-detection",
"safety-evaluations",
"empirical-research",
"academic-ai-safety"
],
"relatedEntries": [
{
"id": "dan-hendrycks",
"type": "researcher"
},
{
"id": "adversarial-robustness",
"type": "safety-agenda"
},
{
"id": "natural-abstractions",
"type": "concepts"
},
{
"id": "benchmarking",
"type": "safety-agenda"
},
{
"id": "metr",
"type": "organization"
},
{
"id": "apollo-research",
"type": "organization"
}
],
"sources": [
{
"title": "FAR AI Website",
"url": "https://far.ai"
},
{
"title": "Dan Hendrycks Google Scholar",
"url": "https://scholar.google.com/citations?user=VUnTdTkAAAAJ"
},
{
"title": "MMLU Paper",
"url": "https://arxiv.org/abs/2009.03300"
},
{
"title": "Natural Abstractions Research",
"url": "https://www.alignmentforum.org/tag/natural-abstraction"
},
{
"title": "Dan Hendrycks on X-risk",
"url": "https://arxiv.org/abs/2306.12001"
}
],
"lastUpdated": "2025-12",
"website": "https://far.ai",
"customFields": []
}
Canonical Facts (0)
No facts for this entity
External Links
{
"eaForum": "https://forum.effectivealtruism.org/topics/far-ai"
}
Backlinks (0)
No backlinks
Frontmatter
{
"title": "FAR AI",
"description": "AI safety research nonprofit founded in 2022 by Adam Gleave and Karl Berzins, focusing on making AI systems safe through technical research and coordination",
"sidebar": {
"order": 16
},
"quality": 32,
"llmSummary": "FAR AI (FAR.AI) is a 2022-founded AI safety research nonprofit led by CEO Adam Gleave and COO Karl Berzins. The organization focuses on technical AI safety research and coordination to ensure safety techniques are adopted. Their research has been cited in Congress and won best paper awards.",
"lastEdited": "2026-01-31",
"importance": 32,
"update_frequency": 21,
"ratings": {
"novelty": 2.5,
"rigor": 3,
"actionability": 2,
"completeness": 5
},
"clusters": [
"ai-safety",
"community"
],
"subcategory": "safety-orgs",
"entityType": "organization"
}
Raw MDX Source
---
title: FAR AI
description: AI safety research nonprofit founded in 2022 by Adam Gleave and Karl Berzins, focusing on making AI systems safe through technical research and coordination
sidebar:
  order: 16
quality: 32
llmSummary: FAR AI (FAR.AI) is a 2022-founded AI safety research nonprofit led by CEO Adam Gleave and COO Karl Berzins. The organization focuses on technical AI safety research and coordination to ensure safety techniques are adopted. Their research has been cited in Congress and won best paper awards.
lastEdited: "2026-01-31"
importance: 32
update_frequency: 21
ratings:
  novelty: 2.5
  rigor: 3
  actionability: 2
  completeness: 5
clusters:
  - ai-safety
  - community
subcategory: safety-orgs
entityType: organization
---
import {DataInfoBox, KeyPeople, KeyQuestions, Section, R, EntityLink, DataExternalLinks} from '@components/wiki';
<DataExternalLinks pageId="far-ai" />
<DataInfoBox entityId="E138" />
## Overview
FAR AI (FAR.AI, standing for Frontier AI Research) is an AI safety research nonprofit founded in July 2022 by Adam Gleave (CEO) and Karl Berzins (COO). Adam Gleave completed his PhD in AI at UC Berkeley, advised by <EntityLink id="E290">Stuart Russell</EntityLink>. The organization focuses on technical innovation to make AI systems safe and coordination to ensure these safety techniques are adopted.
FAR AI's research has been cited in Congress, featured in major media outlets, and won best paper awards at academic venues. The organization aims to bridge academic AI safety research with real-world impact through both technical research and policy engagement.
Beyond in-house research, FAR AI supports the broader field through ecosystem work such as grant-making, workshops, and the FAR.Labs co-working space in Berkeley.
## Risk Assessment
| Risk Category | Assessment | Evidence | Timeline |
|---------------|------------|----------|----------|
| Academic Pace vs. Safety Urgency | Medium | Publication timelines may lag behind rapid AI development | Ongoing |
| Limited Scope Impact | Low-Medium | Robustness research may not directly solve alignment problems | 2-5 years |
| Funding Sustainability | Low | Strong EA backing and academic credentials | Stable |
| Talent Competition | Medium | Competing with labs for top ML researchers | Ongoing |
## Key Research Areas
### Adversarial Robustness
| Research Focus | Approach | Safety Connection | Publications |
|----------------|----------|-------------------|--------------|
| <EntityLink id="E583">Adversarial Training</EntityLink> | Training models to resist adversarial examples | Robust systems prerequisite for alignment | Multiple top-tier venues |
| Certified Defenses | Mathematical guarantees against attacks | Worst-case safety assurances | NeurIPS, ICML papers |
| Robustness Evaluation | Comprehensive testing against adversarial inputs | Identifying failure modes | Benchmark development |
| Distribution Shift | Performance under novel conditions | Real-world deployment safety | ICLR, AISTATS |
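To make the adversarial training row concrete, the sketch below shows a standard FGSM-style training step as described in the broader robustness literature. It is a generic illustration rather than FAR AI's actual code; `model`, `x`, `y`, `optimizer`, and `epsilon` are placeholder assumptions, and inputs are assumed to be scaled to [0, 1].

```python
# Minimal sketch of adversarial training (FGSM inner step); a generic
# illustration of the technique, not FAR AI's implementation.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=8 / 255):
    # 1. Craft an adversarial example by perturbing the input along the
    #    sign of the loss gradient (fast gradient sign method).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()  # assumes inputs in [0, 1]

    # 2. Update the model on the perturbed input so it learns to resist
    #    the same attack at the next step.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```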
### Research Programs
FAR AI operates through several key programs:
| Program | Purpose | Impact | Details |
|---------|---------|--------|---------|
| FAR.Labs | Co-working space | 40+ members | Berkeley-based AI safety research hub |
| Grant-making | Fund external research | Academic partnerships | Early-stage safety research funding |
| Events & Workshops | Convene stakeholders | 1,000+ attendees | Industry, policy, academic coordination |
| In-house Research | Technical safety work | 30+ papers published | Robustness, interpretability, alignment |
### Natural Abstractions Research
| Research Question | Hypothesis | Implications | Status |
|-------------------|------------|--------------|--------|
| Universal Concepts | Intelligent systems discover same abstractions | Shared conceptual basis for alignment | Theoretical development |
| Neural Network Learning | Do NNs learn natural abstractions? | Interpretability foundations | Empirical investigation |
| Alignment Verification | Can we verify shared concepts? | Communication with AI systems | Early research |
| Mathematical Universality | Math/physics as natural abstractions | Foundation for value alignment | Ongoing |
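One common way to probe the "do neural networks learn the same abstractions?" question empirically is to compare the internal representations of independently trained models with a similarity measure such as linear centered kernel alignment (CKA); high similarity across training runs is weak evidence for shared abstractions. The sketch below is a generic illustration of that idea, not FAR AI's specific methodology.

```python
# Generic sketch: linear CKA between two models' activations on the same inputs.
# Illustrative only; not FAR AI's methodology.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X, Y: (n_samples, n_features) activation matrices from two models."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2   # overlap between the two representation spaces
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

# Toy check: a rotated copy of the same activations scores ~1.0.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(512, 128))
rotation, _ = np.linalg.qr(rng.normal(size=(128, 128)))
acts_b = acts_a @ rotation
print(round(linear_cka(acts_a, acts_b), 3))  # ~1.0
```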
## Current State & Trajectory
### 2024 Research Progress
**Publications**: Continuing high-impact academic publications in adversarial robustness and safety evaluation
**Team Growth**: Expanding research team with ML and safety expertise
**Collaborations**: Active partnerships with academic institutions and safety organizations
### Key Metrics
| Metric | Current | Status |
|--------|---------|--------|
| Research Papers | 30+ published | Cited in Congress |
| FAR.Labs Members | 40+ | Berkeley-based |
| Events Hosted | 10+ | 1,000+ attendees |
| Research Focus | Robustness, interpretability, evaluation, alignment | Active |
## Strategic Position Analysis
### Organizational Comparisons
| Organization | Focus | Overlap | Differentiation |
|--------------|-------|---------|----------------|
| <EntityLink id="E22">Anthropic</EntityLink> | <EntityLink id="E451">Constitutional AI</EntityLink>, scaling | Safety research | Academic publication, no model development |
| <EntityLink id="E25">ARC</EntityLink> | Alignment research | Theoretical alignment | Empirical ML approach |
| <EntityLink id="E201">METR</EntityLink> | Model evaluation | Safety assessment | Robustness specialization |
| Academic Labs | ML research | Technical methods | Safety mission-focused |
### Unique Value Proposition
- **Academic Credibility**: Publishing at top ML venues (NeurIPS, ICML, ICLR)
- **Bridge Function**: Connecting mainstream ML with <EntityLink id="E56">AI safety concerns</EntityLink>
- **Empirical Rigor**: High-quality experimental methodology
- **Benchmark Expertise**: Proven track record in evaluation design
## Research Impact Assessment
### Citation Analysis
| Publication Type | Citations Range | h-index Contribution | Field Impact |
|------------------|-----------------|---------------------|--------------|
| Benchmark Papers | 500-2000+ | High | Field-defining |
| Robustness Research | 50-300 | Medium-High | Methodological advances |
| Safety Evaluations | 20-100 | Medium | Growing influence |
| Theory Papers | 10-50 | Variable | Long-term potential |
### Adoption & Influence
**Research Impact**: FAR AI research cited in Congress and featured in major media
**Collaboration**: Active partnerships with academic institutions and AI labs
**Community Building**: FAR.Labs hosts 40+ researchers working on AI safety
## Key Uncertainties
### Theoretical Questions
- **Natural Abstractions Validity**: Will the theory prove foundational for alignment?
- **Robustness-Alignment Connection**: How directly does adversarial robustness translate to value alignment?
- **Scaling Dynamics**: Will current approaches work for more capable systems?
### Organizational Uncertainties
- **Research Timeline**: Can academic publication pace match AI development speed?
- **Scope Evolution**: Will FAR AI expand beyond current focus areas?
- **Policy Engagement**: How involved will the organization become in governance discussions?
### Field-Wide Cruxes
| Uncertainty | FAR AI Position | Alternative Views | Resolution Timeline |
|-------------|-----------------|-------------------|-------------------|
| Value of robustness for alignment | High correlation | Limited connection | 2-3 years |
| Natural abstractions importance | Foundational | Speculative theory | 5+ years |
| Academic vs. applied research | Balance needed | Industry focus | Ongoing |
| Benchmark gaming concerns | Manageable with good design | Fundamental limitation | 1-2 years |
## Funding & Sustainability
### Current Funding Model
| Source Type | Estimated % | Advantages | Risks |
|-------------|-------------|------------|-------|
| EA Foundations | 70-80% | Mission alignment | Concentration risk |
| Government Grants | 10-15% | Credibility | Bureaucratic constraints |
| Private Donations | 10-15% | Flexibility | Sustainability questions |
### Financial Sustainability
**Strengths**: Strong academic credentials attract diverse funding
**Challenges**: Competition with higher-paying industry positions
**Outlook**: Stable given <EntityLink id="E265">growing AI safety investment</EntityLink>
## Criticisms & Responses
### Academic Pace Criticism
**Concern**: Academic publishing too slow for AI safety urgency
**Response**: Rigorous evaluation methodology benefits long-term safety
**Mitigation**: Faster preprint sharing, direct collaboration with labs
### Limited Scope Concerns
**Concern**: Robustness research doesn't address core alignment difficulties
**Response**: Robustness is necessary foundation for aligned systems
**Evidence**: Integration of robustness with value alignment research
### Theoretical Speculation
**Concern**: Natural abstractions theory lacks empirical support
**Response**: Theory guides empirical research program
**Timeline**: 5-year research program to test key hypotheses
## Future Directions
### Research Roadmap
| Timeline | Research Focus | Expected Outputs | Success Metrics |
|----------|----------------|------------------|------------------|
| 2024-2025 | Adversarial robustness scaling | Benchmarks, methods | Lab adoption |
| 2025-2026 | Natural abstractions empirical tests | Theory validation | Academic impact |
| 2026-2027 | Alignment-robustness integration | Unified framework | Safety improvements |
| 2027+ | Policy and governance engagement | Recommendations | Regulatory influence |
### Expansion Opportunities
- **International Collaboration**: Partnerships with European and Asian institutions
- **Policy Research**: <EntityLink id="E608">AI governance</EntityLink> applications of robustness insights
- **Educational Initiatives**: Training next generation of safety researchers
- **Tool Development**: Open-source safety evaluation platforms
## Sources & Resources
### Primary Sources
| Source Type | Links | Content |
|-------------|-------|---------|
| Organization Website | [FAR.AI](https://www.far.ai/) | Mission, team, research |
| About Page | [About FAR.AI](https://www.far.ai/about) | Founders, team |
| Research | [FAR.AI Research](https://www.far.ai/research) | Publications, papers |
### Key Research Areas
| Area | Focus | Impact |
|------|-------|--------|
| Robustness | Adversarial robustness, safety under distribution shift | Foundation for safe deployment |
| Interpretability | Understanding model internals | Alignment verification |
| Model Evaluation | Safety assessment methods | Industry adoption |
| Alignment | Technical alignment research | Long-term safety |
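As a concrete illustration of the robustness and model-evaluation rows above (a minimal sketch under assumed inputs, not FAR AI's evaluation tooling), safety under distribution shift is often summarized by comparing accuracy on clean inputs against accuracy on shifted inputs:

```python
# Minimal sketch: summarize robustness as the accuracy drop under distribution
# shift. `predict`, `clean`, `shifted`, and `labels` are placeholder assumptions.
from typing import Callable, Sequence

def accuracy(predict: Callable, inputs: Sequence, labels: Sequence) -> float:
    correct = sum(int(predict(x) == y) for x, y in zip(inputs, labels))
    return correct / len(labels)

def robustness_report(predict: Callable, clean: Sequence,
                      shifted: Sequence, labels: Sequence) -> dict:
    clean_acc = accuracy(predict, clean, labels)
    shifted_acc = accuracy(predict, shifted, labels)
    return {
        "clean_accuracy": clean_acc,
        "shifted_accuracy": shifted_acc,
        "relative_drop": (clean_acc - shifted_acc) / max(clean_acc, 1e-9),
    }
```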
### Related Organizations
| Organization | Relationship | Collaboration Type |
|--------------|-------------|-------------------|
| <R id="ec227634629c8d2e">UC Berkeley</R> | Academic affiliation | Research collaboration |
| <EntityLink id="E57">CHAI</EntityLink> | Safety research | Joint projects |
| <EntityLink id="E202">MIRI</EntityLink> | Theoretical alignment | Natural abstractions |
| <EntityLink id="E24">Apollo Research</EntityLink> | Evaluation methods | Benchmark development |
### Additional Resources
| Resource Type | Description | Access |
|---------------|-------------|--------|
| FAR.Labs | Berkeley co-working space | [FAR.Labs](https://www.far.ai/labs) |
| Events | Workshops and seminars | [Events](https://www.far.ai/events) |
| Blog | Research updates | [What's New](https://far.ai/post/2023-12-far-overview/) |