Longterm Wiki

ARENA – AI Safety Education

web
arena.education

ARENA is a widely recommended starting point for technically skilled individuals looking to enter AI safety research, and its curriculum materials are frequently referenced in AI safety onboarding guides and reading lists.

Metadata

Importance: 62/100 · homepage · educational

Summary

ARENA (Alignment Research Engineer Accelerator) is an educational program designed to train technical AI safety researchers by providing structured curriculum covering mechanistic interpretability, reinforcement learning, and other core alignment topics. It aims to build the pipeline of competent AI safety engineers by offering hands-on, project-based learning. The program serves as a key field-building initiative to address the talent gap in technical AI safety research.

Key Points

  • Provides structured technical curriculum covering mechanistic interpretability, RL from human feedback, and transformer architecture fundamentals
  • Designed to accelerate entry of capable engineers and researchers into AI safety roles at leading organizations
  • Offers both in-person cohort programs and self-study materials freely accessible online
  • Targets individuals with strong ML/software backgrounds seeking to transition into dedicated AI safety research
  • Serves as a pipeline initiative supported by the broader EA and AI safety community to address researcher shortfall

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 5 KB

# We are ARENA.

_ALIGNMENT RESEARCH ENGINEER ACCELERATOR_

## We aim to provide talented individuals with the skills, community, and confidence to contribute directly to technical AI safety.

![Group of people in an office, smiling and making peace signs, sitting at desks with computers.](https://images.squarespace-cdn.com/content/v1/67e146e032bcbc72c7a584bf/1742816998124-GAUHYAG2BALPA6CU393O/DSC02843.JPG)

### ARENA 8.0: May 25 – Jun 26 2026

**We are delighted to announce that our programme’s eighth iteration, ARENA 8.0, will be taking place in summer 2026!**

**Applications to ARENA 8.0 are now closed.**

ARENA 8.0 will run in person at the [London Initiative for Safe AI (LISA)](https://www.safeai.org.uk/) from May 25th – June 26th. We will cover the costs of travel to/from London, any visa expenses, and accommodation for the full duration of the programme, and will provide meals, drinks and snacks on programme days. We aim to ensure that finances do not present a barrier to anyone’s participation in ARENA.

If you’re interested in applying to future iterations of ARENA but won’t be able to attend ARENA 8.0, please fill in this [**expression of interest form**](https://airtable.com/appZIMMH3ywSxS0A9/pag8nBiAteuzm7vGt/form) to ensure you’re kept up-to-date for when we open applications for future iterations.

[**For more information about the programme, see our announcement post on LessWrong.**](https://www.lesswrong.com/posts/DuPjTxacYNrDy64qq/arena-8-0-call-for-applicants)

### Programme Details:

- #### How do we operate?

ARENA provides in-person programming bootcamps, equipping people with the skills, community and confidence to contribute to technical AI safety. We run 2–3 bootcamps each year, each lasting 4–5 weeks. We’re based at the [London Initiative for Safe AI (LISA)](https://www.safeai.org.uk/) in Shoreditch, London.

For those who wish to use our materials – whether to learn independently or to run their own AI safety courses – [our curriculum is available online.](https://www.arena.education/curriculum)

We are grateful to be funded by [Coefficient Giving](https://coefficientgiving.org/).

- #### Who should apply to ARENA?

There’s no single profile that we look for at ARENA; in recent iterations, successful applicants have come from diverse academic and professional backgrounds. We intend to keep it this way – this diversity makes our bootcamps a more enriching learning experience for all.

When assessing applications to our programme, we like to see:

  - Applicants who genuinely care about AI safety and making the future development of AI go well;
  - Applicants who are able to code well in Python, and have some knowledge of the maths needed for modern AI (linear algebra, calculus, probability);
  - A solid understanding of how you might best

... (truncated, 5 KB total)
Resource ID: a1298425a282f519 | Stable ID: YTZmYzk5Mj