Longterm Wiki

Anthropic Careers

web

Credibility Rating

4/5 — High

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Anthropic

This is Anthropic's recruitment portal; relevant to those interested in joining the AI safety field or understanding how a leading safety-focused lab articulates its culture and principles to prospective employees.

Metadata

Importance: 18/100 · homepage

Summary

Anthropic's careers page outlines the company's mission to build safe and beneficial AI, highlighting their guiding principles and inviting researchers, engineers, and builders to join their work on Claude and AI safety. The page emphasizes a 'race to the top' on safety culture and a commitment to ensuring powerful AI benefits humanity.

Key Points

  • Anthropic seeks researchers, engineers, and builders to work on Claude and AI safety challenges with real stakes.
  • Core principles include acting for global good, holding both risks and benefits of AI in mind, and prioritizing safety-first development.
  • The company aims to set the industry bar for AI safety and inspire competitors to follow suit.
  • Culture emphasizes high trust, low ego, empirical problem-solving, and simplicity over sophistication.
  • Anthropic defines 'users' broadly to include customers, policymakers, employees, and all people impacted by its technology.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 11 KB
# Shape how AI meets the world

Anthropic builds Claude—AI designed to be helpful, honest, and harmless. We're researchers, engineers, and builders from a range of disciplines, working to make sure powerful AI goes well for everyone. If you're drawn to hard problems with real stakes, we'd like to meet you.

[Explore open roles](https://www.anthropic.com/careers/jobs)

![Video thumbnail](https://www.anthropic.com/_next/image?url=https%3A%2F%2Fcdn.sanity.io%2Fimages%2F4zrzovbb%2Fwebsite%2Fdaa9ebf989c197cae5ef84a639302083713a611e-1760x988.jpg&w=3840&q=75)

### Building Anthropic

Our co-founders discuss the origins of Anthropic, the “race to the top” in AI development, and where AI technology will go from here.

## Principles that guide how we show up for each other and our mission

![ ](https://www-cdn.anthropic.com/images/4zrzovbb/website/b68cbb43d7c8f56f0b14cc867e8d4d74445f78b0-1000x1000.svg)

### Act for the global good

We strive to make decisions that maximize positive outcomes for humanity in the long run. This means we’re willing to be very bold in the actions we take to ensure our technology is a robustly positive force for good. We take seriously the task of safely guiding the world through a technological revolution that has the potential to change the course of human history, and are committed to helping make this transition go well.

![Stylized hand balancing on geometric scale with counterweighted elements](https://www-cdn.anthropic.com/images/4zrzovbb/website/39db33950eb113e504a5b9fc56db490a64673e96-1000x1000.svg)

### Hold light and shade

AI has the potential to pose unprecedented risks to humanity if things go badly. It also has the potential to create unprecedented benefits for humanity if things go well. We need shade to understand and protect against the potential for bad outcomes. We need light to realize the good outcomes.

![Hand holding large heart representing care and compassion](https://www-cdn.anthropic.com/images/4zrzovbb/website/3da76509c888ac18be74e3e9dc0752c66d1a8202-1000x1000.svg)

### Be good to our users

At Anthropic, we define “users” broadly. Users are our customers, policy-makers, Ants, and anyone impacted by the technology we build or the actions we take. We cultivate generosity and kindness in all our interactions—with each other, with our users, and with the world at large. Going above and beyond for each other, our customers, and all of the people affected by our technology is meeting expectations.

![Hand with protective shield and network node in cybersecurity symbol design](https://www-cdn.anthropic.com/images/4zrzovbb/website/b1ce510c468b2920d4f8f61c17a50906801f939a-1000x1000.svg)

### Ignite a race to the top on safety

As a safety-first company, we believe that building reliable, trustworthy, and secure systems is our collective responsibility—and the market agrees. We work to inspire a ‘race to the top’ dynamic in which AI developers must compete to build the safest and most secure AI systems. We want

... (truncated, 11 KB total)
Resource ID: 4d2d026d3cca4d9d | Stable ID: OGZlY2FkMm