Longterm Wiki

Alignment Research Center - Wikipedia

reference

Credibility Rating

3/5 — Good

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

ARC is a key organization in the AI safety landscape; its ARC Evals team conducts pre-deployment capability evaluations for frontier AI labs including Anthropic and OpenAI, making it directly relevant to deployment safety and governance discussions.

Metadata

Importance: 55/100 · wiki page · reference

Summary

Wikipedia overview of the Alignment Research Center (ARC), a nonprofit AI safety research organization founded in April 2021 by Paul Christiano. ARC focuses on developing scalable alignment methods, evaluating dangerous AI capabilities, and ensuring that advanced AI systems are safe and beneficial. It has expanded from theoretical work into empirical research, industry collaborations, and policy engagement.

Key Points

  • Founded in April 2021 by former OpenAI researcher Paul Christiano, based in Berkeley, California.
  • Develops scalable methods for training AI systems to behave honestly and helpfully, analyzing how alignment techniques could break down.
  • ARC Evals (started by Beth Barnes) focuses on evaluating capabilities and alignment of advanced AI models, including red-teaming frontier models.
  • Funded primarily by Open Philanthropy; notably returned a $1.25M FTX Foundation grant after the FTX collapse on ethical grounds.
  • Expanding scope from theoretical alignment research into empirical work, industry partnerships, and AI policy engagement.

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 19 KB
# Alignment Research Center


AI safety research organization

Not to be confused with [Arc Institute](https://en.wikipedia.org/wiki/Arc_Institute "Arc Institute").

| Formation | April 2021 |
| Founder | [Paul Christiano](https://en.wikipedia.org/wiki/Paul_Christiano_(researcher) "Paul Christiano (researcher)") |
| Type | [Nonprofit](https://en.wikipedia.org/wiki/Nonprofit "Nonprofit") [research institute](https://en.wikipedia.org/wiki/Research_institute "Research institute") |
| Legal status | [501(c)(3)](https://en.wikipedia.org/wiki/501(c)(3) "501(c)(3)") [tax exempt](https://en.wikipedia.org/wiki/Tax_exempt "Tax exempt") [charity](https://en.wikipedia.org/wiki/Charitable_organization "Charitable organization") |
| Purpose | [AI alignment](https://en.wikipedia.org/wiki/AI_alignment "AI alignment") and [safety research](https://en.wikipedia.org/wiki/AI_safety "AI safety") |
| Location | [Berkeley, California](https://en.wikipedia.org/wiki/Berkeley,_California "Berkeley, California") |
| Website | [alignment.org](https://www.alignment.org/) |


The **Alignment Research Center** ( **ARC**) is a [nonprofit](https://en.wikipedia.org/wiki/Nonprofit_organization "Nonprofit organization") [research institute](https://en.wikipedia.org/wiki/Research_institute "Research institute") based in [Berkeley, California](https://en.wikipedia.org/wiki/Berkeley,_California "Berkeley, California"), dedicated to the [alignment](https://en.wikipedia.org/wiki/AI_alignment "AI alignment") of advanced [artificial intelligence](https://en.wikipedia.org/wiki/Artificial_intelligence "Artificial intelligence") with human values and priorities.[\[1\]](https://en.wikipedia.org/wiki/Alignment_Research_Center#cite_note-1) Established by former [OpenAI](https://en.wikipedia.org/wiki/OpenAI "OpenAI") researcher [Paul Christiano](https://en.wikipedia.org/wiki/Paul_Christiano_(researcher) "Paul Christiano (researcher)"), ARC focuses on recognizing and comprehending the potentially harmful capabilities of present-day AI models.[\[2\]](https://en.wikipedia.org/wiki/Alignment_Research_Center#cite_note-2)[\[3\]](https://en.wikipedia.org/wiki/Alignment_Research_Center#cite_note-3)

## Details

ARC's mission is to ensure that powerful machine learning systems of the future are designed and developed safely and for the benefit of humanity. It was founded in April 2021 by Paul Christiano and other researchers focused on the theoretical challenges of AI alignment.[\[4\]](https://en.wikipedia.org/wiki/Alignment_Research_Center#cite_note-4) They attempt to develop scalable methods for training AI systems to behave honestly and helpfully. A key part of their methodology is considering how proposed alignment techniques might break down or be circumvented as systems become more advanced.[\[5\]](https://en.wikipedia.org/wiki/Alignment_Research_Center#cite_note-5) ARC has been expanding from theoretical work into empiri

... (truncated, 19 KB total)