Future of Life Institute
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Future of Life Institute
A funding opportunity from the Future of Life Institute for PhD students; useful for researchers seeking support for existential risk and AI safety work, or for understanding how FLI channels resources into the field.
Metadata
Importance: 42/100 · homepage · reference
Summary
The Future of Life Institute offers PhD fellowships to support graduate students working on reducing existential and catastrophic risks from advanced technologies, including AI. The program funds researchers tackling long-term safety challenges to help build the field of existential risk reduction.
Key Points
- Provides funding for PhD students researching existential and global catastrophic risks, with a focus on AI safety and related fields
- Aims to grow the pipeline of researchers dedicated to ensuring transformative technologies benefit humanity
- Supports both technical and governance-oriented research directions relevant to long-term safety
- Part of FLI's broader mission to steer transformative technology toward positive outcomes
Review
The Future of Life Institute's Vitalik Buterin PhD Fellowship is a targeted intervention against potential existential risks from advanced artificial intelligence. By providing comprehensive financial support (a $40,000 annual stipend, tuition coverage, and research expenses) to PhD students, the program aims to cultivate a dedicated research community focused on understanding and mitigating catastrophic AI scenarios. The fellowship is distinctive in its rigorous definition of 'AI existential safety research', which goes beyond traditional AI ethics to specifically analyze ways AI could permanently curtail human potential. By supporting technical research on interpretability, verification, objective alignment, and systemic risk assessment, the program takes a proactive stance in developing frameworks and methodologies to prevent existential threats from emerging AI technologies. The fellowship also carries an unusual ethical commitment: fellows who, within two years of completing the fellowship, take a job at a company perceived as racing toward potentially risky AGI development must donate half of their compensation to charity.
Cached Content Preview
HTTP 200 · Fetched Apr 5, 2026 · 12 KB
Technical PhD Fellowships - Future of Life Institute
The Vitalik Buterin PhD Fellowship in AI Existential Safety is for PhD students who plan to work on AI existential safety research, or for existing PhD students who would not otherwise have funding to work on AI existential safety research.
Status: Closed for submissions
Deadline: November 21, 2025
Fellows receive:
Tuition and fees for 5 years of their PhD, with extension funding possible.
$40,000 annual stipend at universities in the US, UK and Canada.
A $10,000 fund that can be used for research-related expenses such as travel and computing.
Invitations to virtual and in-person events where they will be able to interact with other researchers in the field.
Applicants who are short-listed for the Fellowship will be reimbursed for this year's application fees for up to 5 PhD programs.
See below for the definition of 'AI Existential Safety research' and additional eligibility criteria.
Questions about the fellowship or application process not answered on this page should be directed to grants@futureoflife.org
The Vitalik Buterin Fellowships in AI Existential Safety are run in partnership with the Beneficial AI Foundation (BAIF).
FLI offers Buterin Fellowships in pursuit of a vibrant AI existential safety research community free from financial conflicts of interest.
Anyone awarded a fellowship will need to confirm the following: "I am aware of FLI's assessment that moving from a Buterin Fellowship to working (even on a safety team) for a company that is
a) racing to build AGI/ASI, and
b) not pushing for strong binding AI regulation
is a net negative for humanity. I therefore agree that, if I accept a Buterin Fellowship and take a job at any such company (including Anthropic, Google DeepMind, Meta, OpenAI, or xAI) within 2 years of completing my Buterin Fellowship, I will donate half of my gross compensation each month to a charity mutually agreeable to me and FLI, including half of any stock options or bonuses."
Grant winners
People who have been awarded grants within this grant program:
- Jared Moore, Stanford University (Class of 2025)
- Samyak Jain, UC Berkeley (Class of 2025)
- Luke Bailey, Stanford University (Class of 2024)
- Angira Sharma, Oxford University (Class of 2024)
- Yawen Duan, University of Cambridge (Class of 2023)
- Caspar Oesterheld, Carnegie Mellon University (Class of 2023)
- Kayo Yin, UC Berkeley (Class of 2023)
- Johannes Treutlein, UC Berkeley (Class of 2022)
- Erik Jenner, UC Berkeley (Class of 2022)
- Erik Jones, UC Berkeley (Class of 2022)
- Stephen Casper, Massachusetts Institute of Technology (Class of 2022)
- Xin Cynthia Chen, ETH Zurich (Class of 2022)
- Usman Anwar, University of Cambridge (Class of 2022)
- Zhij
... (truncated, 12 KB total)
Resource ID:
10a6c63f6de5ab6a | Stable ID: YmFhOGE3NW