
Anthropic Long-Term Benefit Trust

Type: Lab

Comprehensive reference page on Anthropic's LTBT governance mechanism, covering legal structure, trustee composition, amendment processes, and criticisms. It concludes that the Trust's actual enforcement power is uncertain, with the key red flag being that trustees had filled only one of three available board seats as of late-2024 analyses. Useful reference for AI governance researchers.

Related organizations: Anthropic · Centre for Effective Altruism
Related people: Paul Christiano · Jason Matheny · Neil Buddy Shah · Richard Fontaine · Mariano-Florentino Cuéllar · Dario Amodei · Daniela Amodei

Quick Assessment

| Dimension | Assessment |
|---|---|
| What it is | Independent body of five trustees with authority to elect a growing portion (ultimately a majority) of Anthropic's board |
| Key innovation | Creates a "different kind of stockholder" insulated from financial incentives to balance public benefit with profit |
| Current power | Can appoint up to 3 of 5 board members, though had appointed only 1 as of late 2025 |
| Timeline | Designed to reach majority board control within four years of establishment, or earlier upon fundraising milestones (authority for a majority reached by November 2024) |
| Main limitation | Can be amended by stockholder supermajority, creating a potential override mechanism |
| Status | Experimental governance structure in operation since 2023 |

| Source | Link |
|---|---|
| Official Website | anthropic.com |
| Wikipedia | en.wikipedia.org |

Overview

The Anthropic Long-Term Benefit Trust is an independent governance mechanism established by Anthropic to align corporate decision-making with the long-term benefit of humanity alongside traditional stockholder interests. The Trust comprises five financially disinterested trustees who hold special Class T Common Stock, granting them authority to elect and remove an increasing portion of Anthropic's board of directors—ultimately a majority within four years of establishment.12

Paired with Anthropic's Delaware Public Benefit Corporation status, the LTBT represents an experimental approach to addressing what the company characterizes as "unprecedentedly large externalities" from AI development, including national security risks, economic disruption, and fundamental threats to humanity.1 The structure is designed to insulate key governance decisions from short-term profit pressures at "key junctures where we expect the consequences of our decisions to reach far beyond Anthropic."1

The Trust operates as what Anthropic calls "a different kind of stockholder," creating accountability mechanisms independent of financial returns while maintaining a working relationship with company leadership through consultation requirements and information-sharing agreements.12 However, the structure has faced significant criticism within the AI safety community regarding its actual enforcement power and the company's decision not to publish the full Trust Agreement.34

History and Development

Origins and Motivation

The Long-Term Benefit Trust emerged from concerns among Anthropic's founders, including siblings Daniela Amodei (President) and Dario Amodei (CEO), about the lack of external constraints on AI development comparable to those governing other powerful technologies.15 The founders believed that while AI safety aligned with long-term profitability, the potential for extreme events and catastrophic risks required governance mechanisms that could appropriately weigh public interests against commercial pressures.1

An earlier version called the "Long-Term Benefit Committee" was outlined in Anthropic's Series A investment documents in 2021, but its activation was delayed to allow refinement into the current LTBT structure.12 This delay enabled what Anthropic describes as a year-long search process and legal "red-teaming" to improve the governance framework.1

The LTBT is organized as a Delaware "purpose trust"—a trust managed for achieving a purpose rather than benefiting specific beneficiaries.2 This legal form allows the Trust to pursue the mission of "responsibly develop[ing] and maintain[ing] advanced AI for the long-term benefit of humanity" without being constrained by traditional beneficiary-focused trust law.2

At the close of Anthropic's Series C funding round, the company amended its corporate charter to create Class T Common Stock held exclusively by the Trust.12 This special class of shares grants trustees the power to elect directors according to a phased timeline: initially one of five board members, increasing to two, and eventually three (a majority) based on time and fundraising milestones.12

Timeline of Key Events

  • 2021: Anthropic founded as Delaware Public Benefit Corporation; Long-Term Benefit Committee outlined in Series A documents15
  • 2021-2022: Year-long trustee search and legal structure refinement1
  • May 2023: LTBT formally launched with initial five trustees16
  • December 2023: Jason Matheny stepped down to avoid conflicts with RAND Corporation policy work3
  • April 2024: Paul Christiano stepped down to become Head of AI Safety at U.S. AI Safety Institute37
  • July 2024: Trust representation scheduled to increase to two of five board members3
  • November 2024: Trust representation scheduled to increase to three of five board members3
  • June 2025: Richard Fontaine appointed as trustee7
  • January 2026: Mariano-Florentino Cuéllar appointed as trustee; founding trustees Kanika Bahl and Zach Robinson concluded their terms78

Governance Structure and Powers

Trustee Composition and Independence

The LTBT comprises five voting trustees selected for expertise in AI safety, national security, public policy, and social enterprise.12 Trustees are explicitly insulated from financial interests in Anthropic—they hold no equity and receive no compensation tied to company performance.15 This financial disinterest is central to the Trust's design, intended to ensure decisions appropriately balance public benefit against profit maximization.1

Initial trustees were appointed by Anthropic's board, but subsequent trustees are selected by vote of existing trustees, with consultation requirements ensuring company input.12 Trustees serve only one-year terms, a design choice intended to enable frequent reevaluation while maintaining continuity of oversight.2

Current and Former Trustees

| Name | Role/Expertise | Status | Notes |
|---|---|---|---|
| Neil Buddy Shah | CEO, Clinton Health Access Initiative | Current (Chair) | Initial trustee, still serving as of early 202615 |
| Kanika Bahl | CEO & President, Evidence Action | Concluded founding term Jan 2026 | Initial trustee; Evidence Action is a GiveWell top charity17 |
| Zach Robinson | CEO, Centre for Effective Altruism | Concluded founding term Jan 2026 | Initial trustee17 |
| Richard Fontaine | CEO, Center for a New American Security | Appointed June 2025 | National security expert7 |
| Mariano-Florentino Cuéllar | Former California Supreme Court Justice | Appointed Jan 2026 | Global AI governance expert78 |
| Paul Christiano | Founder, Alignment Research Center | Departed April 2024 | Left to join U.S. AI Safety Institute137 |
| Jason Matheny | CEO, RAND Corporation | Departed December 2023 | Left to avoid conflicts with RAND policy work13 |

Board Appointment Powers

The Class T shares grant trustees authority to elect an increasing number of Anthropic's board members according to a phased schedule. The Trust was designed to elect one director initially, increasing to two and eventually three (a majority of five) within four years or upon certain fundraising milestones.12

Critically, despite gaining authority to appoint up to three directors by November 2024, the Trust had appointed only one board member as of analyses conducted in late 2024.3 This gap between potential and exercised power has contributed to skepticism about the Trust's effectiveness.34
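The phased schedule can be illustrated with a toy model. The step-up dates below are taken from the publicly reported milestones; the actual Trust Agreement ties each step to time and fundraising conditions whose exact terms are not public, so this is a sketch, not the real mechanism.

```python
from datetime import date

# Publicly reported step-up points for the number of directors the
# Class T shares entitle the Trust to elect (assumed dates; the Trust
# Agreement's precise time/fundraising triggers are undisclosed).
PHASE_SCHEDULE = [
    (date(2023, 5, 1), 1),   # at launch (May 2023): one of five directors
    (date(2024, 7, 1), 2),   # reported step-up to two of five
    (date(2024, 11, 1), 3),  # reported step-up to three of five (a majority)
]

def ltbt_seats(on: date) -> int:
    """Board seats the Trust is entitled to elect on a given date."""
    seats = 0
    for start, n in PHASE_SCHEDULE:
        if on >= start:
            seats = n
    return seats
```

Under this sketch, `ltbt_seats(date(2025, 1, 1))` returns 3, which is the basis of the criticism above: entitlement to three seats, only one exercised.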

The certificate of incorporation also grants trustees advance notice of "certain key actions by the board that may materially affect the business of the company or its organization," though the specific threshold for such notice is not publicly disclosed.2

Information Access and Resources

Under a carefully structured agreement, trustees hold broad power to request "any information or resources that are reasonably appropriate to the accomplishment of the Trust's purpose."2 However, Anthropic may withhold information or resources for specified reasons, including preserving confidential customer information or avoiding "clearly unreasonable expense or effort that manifestly exceeds the benefit to be gained by the Trust."2

This balance reflects the tension between trustee independence and operational practicality, giving trustees substantial but not unlimited access to company information and decision-making processes.2

Amendment and Enforcement Mechanisms

Amendment Processes

The Trust Agreement, certificate of incorporation, and key agreements between Trust and company use harmonized amendment processes that balance durability with flexibility.12 Amendments can occur through:

  1. Consent of voting trustees and stockholders2
  2. Consent of voting trustees and company directors (prior to trustees gaining majority board control)2
  3. Supermajority of stockholders (without trustee consent)2

The third mechanism—stockholder supermajority amendment without trustee consent—operates as what Anthropic describes as a "failsafe against the actions of the Voting Trustees" that "safeguards the interests of stockholders."2 The required supermajority percentage increases over time to reflect accumulating experience and the growing need for commitment as AI technology becomes more powerful.2
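The three amendment routes can be sketched as a simple decision function. This is purely illustrative: the supermajority threshold is undisclosed (and rises over time), and the real consent rules in the Trust Agreement are more detailed than three booleans.

```python
def amendment_passes(
    trustee_consent: bool,
    stockholder_consent: bool,
    director_consent: bool,
    stockholder_vote_fraction: float,
    supermajority_threshold: float,   # undisclosed; increases over time
    trustees_hold_board_majority: bool,
) -> bool:
    """Toy model of the three publicly described amendment routes."""
    # Route 1: voting trustees plus stockholders.
    if trustee_consent and stockholder_consent:
        return True
    # Route 2: voting trustees plus directors, available only before
    # trustees gain majority board control.
    if trustee_consent and director_consent and not trustees_hold_board_majority:
        return True
    # Route 3: stockholder supermajority "failsafe", no trustee consent.
    return stockholder_vote_fraction >= supermajority_threshold
```

Route 3 is the one critics focus on: in this sketch, `amendment_passes(False, False, False, 0.8, 0.75, True)` is true, i.e. a sufficiently large stockholder bloc can amend the Trust over trustee objections.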

Enforcement Authority

As permitted by Delaware's purpose trust statute, the Trust Agreement authorizes enforcement by both the company and "groups of the company's stockholders who have held a sufficient percentage of the company's equity for a sufficient period of time."2 Notably, this enforcement structure does not grant trustees themselves the power to enforce the Trust Agreement—a design choice that has drawn criticism for potentially undermining trustee independence.34

Integration with Responsible Scaling Policy

The LTBT is designed to work alongside Anthropic's Responsible Scaling Policy (RSP), which establishes AI Safety Levels (ASL) modeled on biosafety standards.910 The RSP framework pauses training or deployment of powerful models if safety measures lag behind capabilities, with evaluation intervals and safety margins designed to incentivize alignment research progress.9

According to Anthropic, the Trust can "ensure that the organizational leadership is incentivized to carefully evaluate future models for catastrophic risks or ensure they have nation-state level security, rather than prioritizing being the first to market above all other objectives."1 The extent to which trustees actually receive substantive input on RSP decisions versus pro forma consultation remains unclear from public documentation.9

Criticisms and Concerns

Questions About Actual Power

The most substantial criticism of the LTBT centers on whether it provides meaningful oversight or represents what one analysis characterizes as a "powerless" governance mechanism.34 Key concerns include:

Enforcement structure: The Trust can be enforced by stockholders holding "a sufficient percentage of the company's equity for a sufficient period of time" rather than by trustees themselves, suggesting trustees lack independent enforcement authority.24 If trustees make decisions stockholders oppose, stockholders—not trustees—hold the legal power to enforce or challenge those decisions.4

Supermajority amendment: Stockholders can amend the Trust and its powers by supermajority vote without trustee consent.24 Critics note this could be easily achieved if a small number of major investors (such as Amazon and Google, who have made substantial investments in Anthropic) control large share percentages.4

Exercised versus potential power: Despite having authority to appoint three of five board members by November 2024, the Trust had only appointed one director as of late 2024.3 This suggests either trustees are choosing not to exercise their full authority or face constraints not apparent in public documentation.3

Transparency and Documentation

Anthropic has declined to publish the full Trust Agreement, limiting independent assessment of the Trust's actual authority.34 Critics within the AI safety community view this opacity as evidence that the governance mechanism is weaker than Anthropic's public positioning suggests.34 The company's characterization of the LTBT as "an experiment" and "an early iteration that we will build on" may reflect genuine uncertainty about effectiveness rather than confidence in the current design.1

Governance Friction and Trade-offs

The LTBT introduces potential friction between trustees and company leadership in balancing mission integrity against operational agility in a competitive AI development landscape.7 Some analyses suggest this tension could cause delays in partnerships, funding decisions, or deployment timelines, creating trade-offs between safety oversight and commercial viability.7

Counterarguments and Defenses

Not all community analysis accepts the "powerless" framing. Some observers argue the evidence suggests the Trust has significant powers to appoint board members, with the key question being the magnitude of constraints rather than their complete absence.11 One commenter estimated the probability of the Trust being trivially overridable by simple majority shareholders at less than 5%.11

Anthropic's own framing emphasizes that the Trust is not intended to intervene in "day-to-day decisions" or "ordinary commercial strategy," but rather to address "extreme events and the need to handle them with humanity's interests in mind."1 By this standard, the Trust's effectiveness should be judged by its influence at critical decision points rather than ongoing operations.1

Comparison with Other AI Governance Structures

OpenAI Foundation Model

For detailed analysis of OpenAI's governance structure, see OpenAI Foundation.

The LTBT shares conceptual similarities with OpenAI's earlier nonprofit-controlled structure, where a nonprofit foundation held control over a for-profit subsidiary to balance mission and profit motives.12 However, OpenAI's governance crisis in November 2023—when the nonprofit board briefly removed CEO Sam Altman before reversing course under investor pressure—raised questions about whether mission-focused governance can withstand commercial pressures in practice.12

The LTBT attempts to address this challenge through phased power accumulation, financial disinterest of trustees, and supermajority stockholder failsafe provisions. Whether this design proves more durable than OpenAI's structure remains an open empirical question.1

Public Benefit Corporation Baseline

The LTBT builds on Anthropic's Delaware Public Benefit Corporation status, which already grants directors legal authority to balance public benefit with stockholder returns.12 Some critics question whether the Trust adds meaningful constraint beyond what PBC status already provides, particularly given that PBC directors can consider but are not strictly bound by public benefit considerations.6

Anthropic's position is that while PBC status provides "legal latitude," it does not create direct accountability mechanisms or align director incentives with public interests—gaps the LTBT is designed to fill.1

Effective Altruism Connections

Several initial trustees had connections to the effective altruism movement, reflecting Anthropic's origins within EA-adjacent AI safety communities.3 Paul Christiano, founder of the Alignment Research Center, was an initial trustee before departing to join the U.S. AI Safety Institute.13 Zach Robinson, initially Interim CEO of Effective Ventures US and later CEO of the Centre for Effective Altruism, represented another direct EA connection.13

The transition in 2024-2026 from trustees with explicit EA ties (Christiano, Robinson) to figures like Richard Fontaine (national security expert) and Mariano-Florentino Cuéllar (global AI governance) has been characterized as a shift from "ideologically driven" to "operationally focused" trustees amid geopolitical and regulatory challenges.7 Whether this represents intentional diversification or coincidental turnover remains unclear from public information.

Key Uncertainties

Several fundamental questions about the LTBT remain unresolved:

Actual enforcement power: Can trustees meaningfully override stockholder preferences on critical decisions, or does the stockholder supermajority amendment provision render the Trust ultimately subordinate to investor interests?34

Exercise of authority: Why has the Trust appointed only one board member despite having authority for three by late 2024?3 Does this reflect strategic choice, informal constraints, or evidence of limited practical power?

Critical decision-making: What constitutes the "key junctures" and "extreme events" where trustees are expected to intervene?1 Without public examples of trustee influence on major decisions, effectiveness remains speculative.

Amendment thresholds: What specific supermajority percentages are required to amend the Trust at different time points?2 These details could determine whether small numbers of large investors effectively control amendment power.

Information access: What information has the company withheld from trustees under the "clearly unreasonable expense" provision, and have trustees challenged such withholding?2

Long-term durability: Will the Trust maintain independence and effectiveness as Anthropic grows, faces competitive pressures, or pursues additional funding that dilutes existing stockholders?

Anthropic explicitly acknowledges the experimental nature of the LTBT, stating it is "an early iteration that we will build on" and emphasizing the company's empiricist approach to observing how the structure functions in practice.1 The ultimate test will be whether the Trust demonstrates meaningful influence on consequential AI development and deployment decisions in the years ahead.

Sources

Footnotes

  1. Anthropic: The Long-Term Benefit Trust
  2. Harvard Law School Forum on Corporate Governance: Anthropic Long-Term Benefit Trust
  3. LessWrong: Maybe Anthropic's Long-Term Benefit Trust is Powerless
  4. EA Forum: Maybe Anthropic's Long-Term Benefit Trust is Powerless
  5. Wikipedia: Anthropic
  6. The Stakehold: The Anthropic Long-Term Benefit Trust
  7. Citation rc-3786
  8. Anthropic: Mariano-Florentino Long-Term Benefit Trust
  9. LessWrong: Anthropic's Responsible Scaling Policy and Long-Term Benefit Trust
  10. Alignment Forum: Anthropic's Responsible Scaling Policy and Long-Term Benefit Trust
  11. EA Forum comment thread
  12. Harvard Law Review: Amoral Drift in AI Corporate Governance


Structured Data

66 factsView in FactBase →
Founded Date
May 2023

All Facts

66
Organization
PropertyValueAs OfSource
CountryUnited States
HeadquartersSan Francisco, CA
Parent OrganizationAnthropic
Founded DateMay 2023
Legal StructureDelaware purpose trust (Del. Code tit. 12 §3556)
Financial
PropertyValueAs OfSource
Equity Stake0%
Biographical

| Property | Value |
| --- | --- |
| Notable For | First major AI company to establish an independent governance trust with board-majority appointment power. Designed to prioritize humanity's long-term interests over short-term financial pressures. |

Earlier values:
- Mar 2026: Trustee composition shifted from AI safety/EA backgrounds (Christiano, Robinson) to national security and global governance expertise (Fontaine, Cuéllar). Two of five trustee seats remain vacant as of March 2026.
- Trustees are self-perpetuating: new trustees are elected by existing trustees, in consultation with Anthropic's directors and CEO. Trustees serve one-year terms with regular peer re-evaluation.
- The full Trust Agreement has never been published, limiting independent assessment. The Certificate of Incorporation is publicly available via Delaware filing but contains less detail than the Trust Agreement.
General

| Property | Value |
| --- | --- |
| Website | https://www.anthropic.com/news/the-long-term-benefit-trust |
Other

| Property | Value | As Of |
| --- | --- | --- |
| Trustee Count | 3 of 5 voting seats filled | Mar 2026 |
| Amendment Threshold | 85% of voting power | Nov 2025 |
| Board Seats Controlled (seats the Trust is entitled to appoint) | 3 of 5 | Nov 2024 |

Amendment Threshold history:
- Nov 2025: 85% of voting power
- Sep 2023: 75% of voting power

Board Seats Controlled history:
- Nov 2024: 3 of 5
- Jul 2024: 2 of 5
- Sep 2023: 1 of 5

Key Events:
- Feb 2026: Chris Liddell appointed to Anthropic board
- Jan 2026: Mariano-Florentino Cuéllar appointed as trustee
- Jan 2026: Zachary Robinson concludes founding term
- Jan 2026: Kanika Bahl concludes founding term
- Jun 2025: Richard Fontaine appointed as trustee
- May 2025: Reed Hastings appointed to Anthropic board by LTBT
- May 2024: Jay Kreps appointed to Anthropic board by LTBT (first trust-elected director)
- Apr 2024: Paul Christiano departs as trustee
- Dec 2023: Jason Matheny departs as trustee
- Sep 2023: Long-Term Benefit Trust publicly announced with five founding trustees
- Sep 2023: Neil Buddy Shah appointed founding trustee (Chair)
- Sep 2023: Jason Matheny appointed founding trustee
- Sep 2023: Kanika Bahl appointed founding trustee
- Sep 2023: Paul Christiano appointed founding trustee
- Sep 2023: Zachary Robinson appointed founding trustee
- May 2023: Trust formally established at Series C close; charter amended to create Class T stock

Related Wiki Pages

Top Related Pages

Safety Research

Anthropic Core Views

Approaches

Responsible Scaling Policies · Constitutional AI

Analysis

Anthropic (Funder) · Anthropic IPO · Frontier Lab Cost Structure

Other

Dario Amodei · Paul Christiano · Daniela Amodei · Jason Matheny · Neil Buddy Shah · Richard Fontaine

Organizations

OpenAI · Alignment Research Center · RAND Corporation

Concepts

EA Shareholder Diversification from Anthropic

Historical

Mainstream Era