
Jack Clark

Person

A comprehensive biographical profile of Jack Clark covering his career from journalism through OpenAI to co-founding Anthropic, with reasonable balance, including criticisms of RSP vagueness and lobbying concerns; sourcing quality is weak, relying heavily on vague "research data" citations rather than primary sources.

Affiliation: Anthropic
Role: Co-founder

Quick Assessment

| Attribute | Detail |
| --- | --- |
| Full Name | Jack Clark |
| Born | Brighton, England |
| Current Role | Co-founder and Head of Policy, Anthropic |
| Previous Role | Policy Director, OpenAI (2016–2020) |
| Key Output | Import AI newsletter (weekly, ≈70,000 readers) |
| Advisory Roles | U.S. National AI Advisory Committee (NAIAC); OECD AI working group; Global Partnership on AI |
| Education | BA in English Literature with Creative Writing, University of East Anglia (2009) |

Source: Wikipedia (en.wikipedia.org)

Overview

Jack Clark is a British-American AI policy expert, entrepreneur, and writer who co-founded Anthropic in 2021 alongside Dario Amodei and several other former OpenAI researchers. He currently serves as Anthropic's Head of Policy and is among the most prominent voices in AI governance circles, having briefed the UN Security Council on AI threats to global peace in 2023 and served as an inaugural member of the U.S. National Artificial Intelligence Advisory Committee (NAIAC).1

Clark's career trajectory—from technology journalist to policy architect at two of the world's most consequential AI laboratories—gives him an unusual vantage point. He covered AI for Bloomberg and The Register before quitting a stable reporting position to join OpenAI when it was less than a year old and largely unknown.2 At OpenAI he rose to Policy Director, co-authored influential research papers, and founded the Import AI newsletter, which has grown to roughly 70,000 weekly readers and remains one of the field's most-read publications. After leaving OpenAI in late 2020, he helped establish Anthropic as a safety-focused counterpoint in the generative AI landscape, now backed by Amazon, Google, and major venture capital firms.3

Clark describes his intellectual disposition as that of a "technological pessimist who became an optimist through repeated beatings over the head of scale," having consistently underestimated how rapidly AI capabilities would progress.4 He pairs this optimism with what he calls "appropriate fear" of advanced AI systems, characterizing them as entities we do not fully understand and that deserve serious policy attention. His positions have drawn both praise from AI safety advocates and criticism from figures such as David Sacks, who served as the Trump administration's AI and crypto policy lead and argued that Anthropic's pro-regulation stance risked undermining U.S. competitiveness relative to China.5

Background and Early Career

Clark was born in Brighton, England, and attended Varndean College in Brighton before completing a BA in English Literature with Creative Writing at the University of East Anglia in 2009.6 After graduating he worked as a technology copywriter at Adfero in London before transitioning into journalism, covering distributed systems, data centers, enterprise infrastructure, quantum computing, and early AI developments at The Register from approximately 2010 to 2014.7

In August 2014 he joined Bloomberg in San Francisco as a reporter covering enterprise technology companies including HP, Salesforce, and Oracle. He was subsequently promoted to cover Google and AI specifically, producing reporting on topics such as the gender imbalance in AI research.8 A pivotal moment came in 2016, when he interviewed Dario Amodei—then at Google—about the AI safety paper "Concrete Problems in AI Safety," published that June. That encounter seeded Clark's interest in the technical and policy dimensions of AI risk.9

Career at OpenAI

In September 2016, Clark left Bloomberg to join OpenAI, then a recently founded nonprofit backed by Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, and others. Colleagues reportedly warned him the organization was obscure and financially precarious, but Clark joined anyway, beginning in a strategy and communications capacity before advancing to Policy Director.10

At OpenAI, Clark shaped the organization's public policy agenda, managed media relations, led VIP engagements, and contributed to safety research. He co-authored several influential papers during this period, including the widely cited 2018 report The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (co-authored with Miles Brundage, Shahar Avin, and others), which examined risks including weaponization and cyberattacks and has accumulated nearly 2,000 citations.11 He also co-authored the 2019 OpenAI blog post Better Language Models and Their Implications, which accompanied the release of GPT-2.12

In 2017, Clark helped found the steering committee of the Stanford AI Index—an annual report tracking global AI research and capability benchmarks—and served as its co-chair through 2024. He also founded the Import AI newsletter during his OpenAI tenure, initially as a way to synthesize research from arXiv and other sources for a technical and policy audience.13

Co-founding Anthropic

In late 2020, Clark left OpenAI together with six other departing researchers, including Dario Amodei, and co-founded Anthropic in early 2021. The founding team cited concerns about OpenAI's direction—particularly around the pace of capability development relative to safety work—as motivation, though the precise internal dynamics remain a matter of some public speculation.14

As Head of Policy at Anthropic, Clark has been central to the company's regulatory engagement strategy. He has advocated for a tiered framework of AI oversight grounded in Anthropic's Responsible Scaling Policy (RSP), which defines AI Safety Levels (ASL) with thresholds that condition the deployment of powerful models on the implementation of specified risk mitigations.15 Anthropic is backed by Amazon, Google, and top venture capital firms, and is widely regarded as one of the primary competitors to OpenAI in the frontier large language model space.16
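
The gating logic at the core of such a framework is simple to state: a model may be deployed only if every capability threshold it has crossed is matched by that threshold's required safeguards. The toy Python sketch below illustrates the shape of that rule; the level names, trigger conditions, and mitigation labels are hypothetical placeholders, not Anthropic's actual RSP definitions.

```python
# Toy illustration of ASL-style deployment gating. All level names and
# mitigation labels are hypothetical, not Anthropic's actual RSP terms.
from dataclasses import dataclass, field


@dataclass
class SafetyLevel:
    name: str                       # e.g. a hypothetical "ASL-3"
    triggered: bool                 # whether capability evals crossed this threshold
    required_mitigations: set[str]  # safeguards that must precede deployment


@dataclass
class Model:
    deployed_mitigations: set[str] = field(default_factory=set)


def may_deploy(model: Model, levels: list[SafetyLevel]) -> bool:
    """Permit deployment only if, for every triggered safety level,
    all of that level's required mitigations are in place."""
    return all(
        level.required_mitigations <= model.deployed_mitigations  # subset check
        for level in levels
        if level.triggered
    )


# Example: evals trigger a higher hypothetical level, but only the lower
# level's safeguards exist, so deployment is blocked.
levels = [
    SafetyLevel("ASL-2", triggered=True, required_mitigations={"baseline-security"}),
    SafetyLevel("ASL-3", triggered=True,
                required_mitigations={"baseline-security", "enhanced-security", "misuse-evals"}),
]
model = Model(deployed_mitigations={"baseline-security"})
print(may_deploy(model, levels))  # False
```

The criticisms discussed later in this page concern how precisely such thresholds and mitigations are defined and enforced in practice, not the conditional structure itself.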

Import AI Newsletter

The Import AI newsletter, which Clark has authored since approximately 2017, is among the most widely read publications in the AI research and policy community. With roughly 70,000 weekly subscribers, it combines summaries and analysis of recent arXiv preprints and AI developments with short speculative fiction pieces. Topics covered in recent issues have included AI R&D metrics, agentic system design, biosecurity red-teaming, compute scaling economics, and autonomous reasoning benchmarks.17

The newsletter has served as a primary vehicle for Clark's public forecasting and commentary. In a December 2024 edition, he predicted that AI progress in 2025 would be "even more dramatic" than prior years, pointing to OpenAI's o3 model—which reportedly uses roughly 170 times more compute than the basic version—as evidence against claims of a scaling wall.18 In an October 2025 edition titled "Technological Optimism and Appropriate Fear," he described advanced AI as a "real and mysterious creature" rather than a predictable engineered system, and called for massive public and private investment in AI infrastructure—characterizing likely expenditures in the tens of billions of dollars in 2025 and hundreds of billions in 2026—alongside transparency requirements and public participation in governance.19

Policy and Advisory Work

Beyond Anthropic and the newsletter, Clark maintains several formal advisory and policy roles. He served as an inaugural member of the U.S. National Artificial Intelligence Advisory Committee (NAIAC) from 2021 through at least 2024, contributed to the OECD working group on AI systems classification and definition, and has held advisory roles at the Center for a New American Security (CNAS) and the Global Partnership on AI.20

In 2023, Clark briefed the UN Security Council at what was described as the body's first formal meeting on AI threats to international peace and security. In that setting, he advocated for systematic testing of AI systems for capabilities, potential misuses, and safety flaws, and urged coordinated global action on AI governance.21

In a March 2025 interview with POLITICO, Clark predicted that by late 2026 or early 2027, AI systems would be capable of matching or exceeding the reasoning capacities of Nobel Prize–level scientists, operating autonomously across complex tasks and through physical interfaces such as drones and robots. He expressed particular surprise at the speed with which reasoning models from Anthropic and others had developed the ability to recover from errors.22

Views on AI Safety

Clark's public statements reflect a consistent concern that AI development is outpacing governance and that even technically sophisticated observers systematically underestimate the pace of progress. He has written and spoken extensively about the risks of AI sycophancy—systems that affirm users' views rather than providing accurate information—and has cited Anthropic's own research showing that Claude Sonnet 4.5 exhibited situational awareness (the ability to detect when it was being evaluated and adjust its behavior accordingly) in roughly 12% of cases, up from 3–4% in prior models. He regards this as a meaningful indicator of alignment risk.23

In a November 2025 talk at the Golden Gate Institute for AI, Clark argued that AI systems are now capable of self-reflection in ways that prior generations of software were not, and that this warrants listening seriously to public concerns and building transparent policy processes rather than leaving governance to developers alone.24 He has described himself as deeply concerned about existential risk from misaligned AGI, framing Anthropic's founding mission as in part a response to those concerns.25

Publications

Clark has co-authored a number of influential research and policy documents. Selected key works are listed below.26

| Publication | Year | Co-Authors | Notes |
| --- | --- | --- | --- |
| The Malicious Use of Artificial Intelligence | 2018 | M. Brundage, S. Avin, et al. | ≈1,931 citations; arXiv:1802.07228 |
| Better Language Models and Their Implications | 2019 | A. Radford, J. Wu, et al. | GPT-2 announcement; OpenAI blog |
| The AI Index 2018 Annual Report | 2018 | Y. Shoham et al. | Stanford AI Index |
| The AI Index 2019 Annual Report | 2019 | AI Index Steering Committee | ≈327 citations |
| Toward Trustworthy AI Development | 2020 | M. Brundage et al. | ≈847 citations; arXiv:2004.07213 |
| Regulatory Markets for AI Safety | 2019 | G. K. Hadfield | ≈65 citations; arXiv:2001.00078 |
| Learning Transferable Visual Models From Natural Language Supervision | 2021 | A. Radford et al. | CLIP; arXiv:2103.00020 |


Criticisms and Controversies

Clark and Anthropic's regulatory stance have attracted criticism from multiple directions. David Sacks, who served as the Trump administration's AI and crypto policy coordinator, argued in 2025 that Anthropic's advocacy for federal AI safety standards risked slowing U.S. AI development and ceding competitive ground to China.27

Within AI governance and safety communities, critics have raised concerns about the coherence and robustness of Anthropic's Responsible Scaling Policy as implemented under Clark's policy leadership. Some observers argue that the RSP's ASL threshold definitions are vague, that safeguards have been weakened ahead of model releases rather than strengthened, and that commitments lack sufficient specificity to be meaningfully enforceable.28

Clark has also been criticized for his public communications around state-level AI legislation. Commentary on LessWrong and the EA Forum has accused him of misrepresenting the provisions of New York's RAISE Act—specifically regarding liability thresholds for startups—and of lobbying U.S. Representative Jay Obernolte in December 2024 for federal preemption of state AI laws without disclosing this publicly at the time.29 These criticisms raise questions about how Clark navigates the tension between his role as an AI safety advocate and his responsibilities to Anthropic as a commercial organization competing in a high-stakes regulatory environment.

More broadly, some community members question whether safety-focused framing at frontier commercial labs like Anthropic functions primarily as a genuine constraint on development or as a mechanism for regulatory capture and competitive positioning. Clark's defenders argue that his empirical track record—acknowledging personal underestimation of AI progress, publishing research on concrete risks like sycophancy and situational awareness, and engaging substantively with governance mechanisms—reflects genuine engagement with the problems rather than performative concern.30

Key Uncertainties

  • The extent to which Anthropic's Responsible Scaling Policy represents a meaningful operational constraint on capability development, versus a reputational or regulatory positioning strategy, remains contested and difficult to assess from the outside.
  • Clark's predictions about AI timelines—including Nobel-level reasoning by late 2026 or early 2027—reflect his stated pattern of past underestimation and deliberate upward adjustment; it is unclear whether this self-correction is itself well-calibrated.
  • The degree to which Clark acts independently in his public policy communications versus as a representative of Anthropic's institutional interests is not always transparent, as the lobbying episode noted above illustrates.

Sources

Footnotes

  1. Jack Clark – Wikipedia (AI policy expert article); NAIAC membership confirmed in multiple sources.

  2. History research data – career timeline, Bloomberg departure circa 2016.

  3. History research data – Anthropic founding and funding details.

  4. Predictions research data – Clark self-description as "technological pessimist who became an optimist."

  5. News research data – David Sacks criticism of Anthropic pro-regulation stance, 2025.

  6. History research data – education and early career details; Varndean College and UEA confirmed.

  7. History research data – The Register employment approximately 2010–2014.

  8. History research data – Bloomberg employment August 2014–August 2016; AI and enterprise coverage.

9. History research data – 2016 interview with Dario Amodei on "Concrete Problems in AI Safety."

  10. History research data – OpenAI joining September 2016; colleague warnings about obscurity.

  11. Work research data – The Malicious Use of Artificial Intelligence (arXiv:1802.07228); citation count from research.

  12. Work research data – Better Language Models and Their Implications, 2019, OpenAI blog, co-authored with Radford, Wu et al.

  13. History research data – AI Index steering committee founding 2017; Import AI newsletter origin.

14. History research data – Anthropic co-founding in early 2021 by seven former OpenAI researchers, including Clark.

  15. AI Safety research data – Responsible Scaling Policy and ASL framework description.

  16. History research data – Anthropic funding from Amazon, Google, and top VCs.

  17. Work research data – Import AI newsletter description, ~70,000 readers, recent topics.

  18. News research data – Import AI December 2024 issue; o3 compute claim; predictions on 2025 progress.

  19. News research data – Import AI #431, October 13, 2025, "Technological Optimism and Appropriate Fear."

  20. History research data – NAIAC inaugural membership 2021–2024; OECD, CNAS, Global Partnership on AI advisory roles.

  21. News research data – UN Security Council briefing, 2023.

  22. News research data – POLITICO interview, March 7, 2025; late 2026/early 2027 prediction.

  23. AI Safety research data – situational awareness findings, Claude Sonnet 4.5, 12% rate.

  24. News research data – Golden Gate Institute talk, November 27, 2025.

  25. AI Safety research data – Anthropic founding motivations; Clark's stated concern about existential risk.

  26. Work research data – publication list with citation counts and arXiv identifiers.

  27. News research data – David Sacks criticism, 2025.

  28. AI Safety research data – RSP criticism; vague ASL definitions; weakened safeguards critique.

  29. Community research data – LessWrong/EA Forum criticism; RAISE Act misrepresentation claim; December 2024 Obernolte lobbying.

  30. Community research data – mixed community reception; defenders citing empirical engagement with safety problems.

Structured Data

4 facts · 2 records
Employed By: Anthropic
Role / Title: Co-founder
Birth Year: 1988

All Facts

People

| Property | Value | As Of | Source |
| --- | --- | --- | --- |
| Employed By | Anthropic | | |
| Role / Title | Co-founder | | |

Biographical

| Property | Value | As Of | Source |
| --- | --- | --- | --- |
| Birth Year | 1988 | | |
| Notable For | Co-founder of Anthropic; former Policy Director at OpenAI; creator of the Import AI newsletter; AI policy advocate | | |

Career History

| Organization | Title | Start | End |
| --- | --- | --- | --- |
| OpenAI | Policy Director | 2016 | 2021-01 |
| Anthropic | Co-founder, Head of Policy | 2021-01 | |

Related Wiki Pages

Top Related Pages

Organizations

OpenAI · AI Futures Project

Other

Anthropic Stakeholders · Dario Amodei · Sam Altman · Elon Musk · Ilya Sutskever · Greg Brockman

Analysis

Anthropic (Funder)

Approaches

AI Lab Safety Culture

Historical

Deep Learning Revolution Era