# EA Global
EA Global is a series of selective conferences organized by the Centre for Effective Altruism that connects committed EA practitioners to collaborate on global challenges, with AI safety becoming increasingly prominent (53% of 2024 survey respondents identified it as most pressing). The conferences serve as networking hubs for the EA community but face criticism for insularity, potential neglect of systemic change, and exclusion of Global South voices.
## Quick Assessment
| Dimension | Assessment |
|---|---|
| Type | Conference series |
| Organizer | Centre for Effective Altruism |
| Target Audience | Students, professionals, policymakers, and researchers with solid EA understanding |
| Format | 2-3 day events with talks, workshops, discussions, and networking |
| Selectivity | Curated applications reviewed within two weeks |
| Key Topics | Global health, animal welfare, AI safety, biosecurity, existential risks |
| First Event | 2013 (as Effective Altruism Summit) |
| Recent Scale | Hundreds to thousands of attendees per event |
## Key Links
| Source | Link |
|---|---|
| Official Website | effectivealtruism.org |
| Wikipedia | en.wikipedia.org |
| EA Forum | forum.effectivealtruism.org |
## Overview
EA Global (EAG) is a series of selective conferences organized by the Centre for Effective Altruism (CEA) that connects people committed to effective altruism principles to collaborate on addressing global challenges.1 These weekend-long events target students, professionals, policymakers, and researchers with a solid understanding of EA ideas who are taking significant actions based on them, such as working on EA-inspired projects, pursuing EA-aligned careers, or volunteering for EA organizations.2
The conferences serve as a hub for the EA community to engage in talks, workshops, and discussions on evidence-based approaches to high-impact cause areas including global health and development, animal welfare, long-term risks like AI safety and biosecurity, and other pressing global challenges.3 Attendees describe EA Global as a place to meet "extremely committed, compassionate" individuals focused on long-term impact evaluation, with many finding co-founders for high-impact charities at these events.4
EA Global conferences are run centrally by CEA's Events team, which handles content selection, admissions, and production. Applications are curated and reviewed within two weeks, with approved applicants required to purchase tickets or make donations.5 The events have grown significantly since their inception, with most talks recorded and transcripts made available on the EA Forum.6
## History

### Founding and Early Years (2013-2015)
The first EA Global event was held in 2013 under the name "Effective Altruism Summit."7 This initial conference emerged from the broader context of the effective altruism movement's development, which had begun in the late 2000s with the founding of organizations like Giving What We Can (2009) and GiveWell (2007).8
By 2015, EA Global had expanded to three major events across different continents. The largest of these took place at Google's Mountain View campus and featured prominent speakers including Elon Musk, Stuart J. Russell, and William MacAskill.9 These early conferences covered topics spanning global poverty, animal advocacy, career workshops, and local chapter development. According to MacAskill, the events demonstrated improved coordination and diversity within the movement.10
The 2015 conferences also included events in Oxford and Melbourne, establishing EA Global's presence across North America, Europe, and Australia. This geographic expansion reflected the movement's growing international reach and CEA's commitment to building a global community around effective altruism principles.11
### Growth and Development (2016-2023)
Following the successful 2015 conferences, EA Global became an annual series with multiple events per year. The conferences maintained their focus on connecting EA community members for high-level discussions while expanding their scope to address emerging priorities within the movement.12
During this period, CEA also introduced EAGx conferences as a complement to the main EA Global series. Unlike the centrally-organized EA Global events, EAGx conferences are community-led with CEA providing support and funding, including tools like event apps. The relationship between EA Global and EAGx is often compared to TED and TEDx—EA Global serves as the flagship, globally-focused event with experienced EA practitioners, while EAGx events target broader, more regional audiences and are more welcoming to newcomers exploring EA ideas.13
The conferences increasingly emphasized networking and one-on-one meetings as core components of the experience. CEA introduced apps like Swapcard to facilitate meeting scheduling and networking among attendees.14 This reflected a growing recognition that interpersonal connections and career development were as valuable as formal presentations for many participants.
### Recent Events (2024-2026)
Recent EA Global conferences have continued to emphasize cross-cause collaboration and direct engagement with experienced practitioners. EA Global: Boston 2024, held November 1-3 at the Hynes Convention Center, exemplified this approach with its focus on one-on-one meetings and discussions spanning multiple cause areas.15
Upcoming events include EA Global: New York City 2025 (with applications closing September 28, 2025), EA Global: San Francisco (February 13-15, 2026), EA Global: London (May 29-31, 2026), and EA Global: New York City (October 16-18, 2026).16 These events continue to focus on the full range of EA cause areas including farmed animal welfare, global health and development, biosecurity, and AI safety.17
The conferences have maintained their selective admission process, requiring applicants to demonstrate solid understanding of EA principles and significant engagement with EA-aligned work. Typical schedules include a Friday evening reception followed by full days on Saturday and Sunday, concluding around 6pm on Sunday.18
## Conference Structure and Activities

### Format and Schedule
EA Global events typically span three days, beginning with a Friday evening reception and continuing through full days on Saturday and Sunday.19 The conferences combine multiple formats including keynote talks, panel discussions, workshops, one-on-one meetings, and social activities.20
The events emphasize networking and direct engagement as much as formal presentations. Attendees can schedule one-on-one meetings through apps like Swapcard, allowing for targeted conversations about specific projects, career paths, or research questions.21 This structure reflects the conferences' dual purpose of knowledge sharing and community building.
Most talks at EA Global are recorded and made available online with transcripts posted to the EA Forum, extending the reach of presentations beyond attendees.22 Topics have included technical discussions of AI alignment by researchers like Rohin Shah, strategic considerations about AI safety approaches by speakers like Robert Miles, and broader discussions of cause prioritization and movement strategy.23
### Target Audience and Selectivity
EA Global targets a selective audience of people who have moved beyond basic familiarity with EA concepts to taking concrete actions based on EA principles. CEA's Events team reviews applications to ensure attendees are either professionally engaged in EA-aligned work or planning serious career moves in this direction.24
This selectivity serves to maintain the conferences as venues for substantive discussions among committed practitioners rather than introductory events. Applicants describe being drawn to EA Global as an opportunity to meet other "extremely committed, compassionate" individuals working on similar challenges, with many specifically seeking potential co-founders or collaborators for high-impact projects.25
For those newer to EA or seeking less selective events, the EAGx conference series provides more accessible alternatives with broader regional audiences and lower barriers to entry.26
## Relationship to the Effective Altruism Movement

### Community Building and Networking
EA Global serves as a primary gathering point for the global EA community, which CEA estimates includes approximately 10,000 engaged individuals across 70 countries.27 The conferences facilitate connections that have led to the founding of numerous EA organizations and initiatives.
Attendees consistently cite networking as one of the most valuable aspects of EA Global. Many report finding co-founders for high-impact charities at these events, often through programs like the Charity Entrepreneurship Incubation Program that help translate conference connections into concrete projects.28 The emphasis on one-on-one meetings and small-group discussions creates opportunities for these kinds of productive collaborations.
The conferences also play a role in career development within the EA ecosystem. They connect people working on or planning EA-aligned projects with advisors, funders, and potential employers.29 This has contributed to career changes impacting an estimated 80 million hours of work directed toward high-impact causes.30
### Cause Area Integration
EA Global conferences address the full spectrum of cause areas within effective altruism, including global health and development, animal welfare, existential risks, AI safety, biosecurity, and meta-EA work.31 This cross-cause integration reflects EA's commitment to cause neutrality and evidence-based prioritization rather than predetermined focus areas.
Recent community surveys indicate shifting priorities within EA. In the 2024 Annual Impact Report, 53% of respondents selected AI safety as the most pressing cause area, followed by climate change (12%), animal welfare (9%), and global health (8%).32 However, EA Global maintains its broad focus across causes, with speakers and workshops representing diverse perspectives on impact.
This diversity sometimes creates tension within the community. Some participants have noted that certain cause areas like global poverty, which were central to early EA and are supported by robust randomized controlled trial evidence, receive less attention in conference discussions than more speculative priorities like AI safety or longtermism.33 EA Global attempts to balance these competing priorities while maintaining space for evidence-based discussions across all major cause areas.
### Research and Evidence Base
EA Global conferences emphasize research grounded in systematic evaluation of interventions. Presentations like Karolina Sarek's "How to do research that matters" at EA Global: London 2019 advocate for preset research processes, clear goals, and combining different types of evidence such as cost-effectiveness analyses and expert opinions.34
The conferences showcase research from organizations like GiveWell, which has used extensive randomized controlled trials to evaluate global health interventions. For example, Cochrane reviews of insecticide-treated bednet distribution have found statistically significant child mortality reductions of 5.53 deaths averted per 1,000 children treated per year.35 This evidence base has influenced hundreds of millions of dollars in charitable donations facilitated through EA evaluators.36
However, EA Global also features discussions of cause areas where evidence is less established, including AI safety, biosecurity, and longtermism. This creates ongoing debates within the community about appropriate weighting of strong empirical evidence versus theoretical arguments about expected value in areas with high uncertainty.37
## AI Safety Focus
AI safety has become an increasingly prominent topic at EA Global conferences, particularly in recent years as the effective altruism community has shifted significant attention toward existential risks from advanced AI.38 The 2024 EA community survey found that 53% of respondents identified AI safety as the most pressing cause area, making it the clear priority within the movement.39
### AI Safety Content and Speakers
EA Global conferences regularly feature technical presentations on AI alignment and safety. Rohin Shah has presented on AI alignment progress, covering high-level existential risk arguments, specific risks including human unpreparedness for advanced AI and potential for AI deception, and technical approaches like impact regularization and oracle AI designs.40 Robert Miles has spoken about alignment plans and the trade-offs between inaction risk (catastrophe from others developing unaligned AI) and accident risk (problems arising from one's own AI development), as well as the potential for using aligned AI for security hardening.41
These presentations reflect broader debates within the AI safety community about appropriate strategies and the current state of progress. Speakers acknowledge ongoing disagreement about specific sub-questions and debate whether current research outputs are useful or potentially distracting from core challenges.42
### Career Development and Networking
EA Global serves as a key venue for AI safety career development and talent coordination. The conferences connect people interested in AI safety work with organizations like Anthropic, OpenAI, DeepMind, and independent research organizations including Redwood Research, FAR AI, ARC, and Timaeus.43
However, this creates challenges as these organizations compete for talent while pursuing different approaches to AI safety. Recent EA Forum discussions note that organizations like METR compete with frontier AI labs for the same talent pool, complicating efforts to build independent safety evaluation capacity.44
### Integration with Broader EA Priorities
The prominence of AI safety at EA Global has created some tension within the community. Critics argue that the focus on AI safety and longtermism diverts resources from more evidence-based interventions in areas like global health, where robust randomized controlled trials demonstrate clear impact.45 Some have characterized this as a shift from empirical "old EA" centered on GiveWell-style charity evaluation toward more speculative philosophical debates.46
EA Forum discussions reflect ongoing efforts to address this tension. Some participants have proposed dedicated AI risk-branded events separate from EA Global to attract talent from fields like information security and policy who may be unfamiliar with or put off by the EA brand.47 However, EA Global has maintained its cross-cause approach while accommodating the community's increasing prioritization of AI safety.
## Criticisms and Concerns

### Systemic Change and Institutional Reform
A persistent criticism of both the effective altruism movement and EA Global is the alleged neglect of institutional and systemic change in favor of direct interventions. Critics argue that EA unjustifiably focuses on individual aid measures like cash transfers or mosquito net distribution rather than addressing root causes such as global capitalist structures or political reform.48 This critique extends to EA Global conferences, which critics characterize as emphasizing technical interventions within existing systems rather than transformative institutional change.
Philosophers such as Lisa Herzog and Amia Srinivasan have pressed versions of this institutional critique. Defenders respond that EA's core commitments do not inherently preclude systemic change, and that expected value estimation can apply to institutional interventions as well as direct aid.49 However, critics maintain that EA's practical focus remains heavily weighted toward measurable direct interventions.
### Community Culture and Insularity
Internal EA Forum discussions have identified significant cultural weaknesses within the EA community that are reflected at EA Global conferences. These include groupthink, "holier than thou" attitudes, excessive focus on internal community dynamics rather than external impact, and insularity.50 The conferences are described as sometimes creating echo chambers where participants reinforce existing views rather than engaging with outside perspectives.
Specific concerns include EA Global being "not media-savvy" and failing to engage effectively with the broader world beyond the EA community.51 Some participants note that the events can feel insular, with attendees too focused on the EA community itself as a status symbol rather than on actual impact. The selectivity of EA Global, while intended to ensure substantive discussions, may contribute to this insularity by limiting exposure to diverse perspectives.
### Prioritization and Funding Concerns
Critics have raised concerns about how EA Global reflects and reinforces potentially problematic prioritization within the effective altruism movement. The emphasis on longtermism and existential risks has been criticized for diverting resources from immediate suffering, with critics arguing that prioritizing the productivity of rich countries over aid to the Global South reflects a bias toward plutocrats and tech elites.52
The 2024 EA community survey showed global health receiving interest from 36% of respondents but only 8% selecting it as most pressing, while AI safety dominated at 53%.53 Critics note that global poverty still leads actual funding flows through organizations like Coefficient Giving and Giving What We Can, yet elite discussions at conferences like EA Global disproportionately favor speculative cause areas over empirically validated interventions.54
Some have criticized specific funding decisions highlighted at EA Global events, such as Coefficient Giving's $55 million allocation to the Center for Security and Emerging Technology (CSET), which was characterized as misaligned with transformative AI priorities.55 More broadly, critics argue that EA Global lacks a central hub for discussing funding needs across EA organizations, leading to scattered information and missed opportunities.56
### Scandal and Ethical Concerns
The association between effective altruism and Sam Bankman-Fried's fraud has raised questions about EA's culture and values. Bankman-Fried, who was sentenced to 25 years in prison, was closely connected to the EA movement, and CEA received $13.9 million in grants from the FTX Future Fund.57 Critics argue this reveals "deeper rot" within EA, with the movement's emphasis on world-historical importance potentially justifying immoral actions by inflating adherents' sense of their own significance.58
Defenders counter that scandals involving outliers like Bankman-Fried do not reveal fundamental problems with EA principles, and that the movement's focus on evidence and transparency distinguishes it from ideological movements that have justified harmful actions.59 However, the controversy has prompted ongoing discussions within the EA community about ethics, culture, and accountability that are reflected in EA Global discussions.
### Exclusion of Local Voices
Critics from the Global South have argued that EA Global and the broader EA movement perpetuate problematic dynamics by excluding local voices and lived experiences. In regions like Uganda's Busoga, EA interventions such as $100 business grants or bednet distribution are criticized as disposable handouts that fail to catalyze self-sustaining change.60 The conferences themselves, while international in scope, are primarily organized and attended by people from Western countries, with no African-led organizations among EA's top-rated charities despite EA's focus on African poverty.61
This critique extends to the structure of EA Global itself, which requires financial resources to attend and emphasizes Western academic and professional norms in its application process and event format. Critics argue this perpetuates global inequities in knowledge production and decision-making about interventions affecting the Global South.62
## Funding and Organizational Support
EA Global conferences are funded and organized by the Centre for Effective Altruism, which provides comprehensive support including content selection, speaker recruitment, venue logistics, admissions processing, and production.63 Approved applicants are required to purchase tickets or make donations, providing additional funding for the events.64
While specific budget figures for EA Global are not publicly detailed, the conferences are part of CEA's broader portfolio of community-building work. CEA has provided grants for community building since 2018 and launched a university groups accelerator in 2022.65 The organization's Events team handles multiple conferences per year across different continents, representing a significant operational undertaking.
Recent developments in EA funding include Coefficient Giving's launch of the Abundance and Growth Fund with $120 million over three years for economic growth and scientific progress, as well as updated programs for effective giving and careers.66 GiveWell has moved over $1 billion to effective charities and saved an estimated 150,000+ lives through malaria prevention and other global health interventions.67
## Recent Developments and Future Directions

### 2024-2025 Activities
EA Global: Boston 2024, held November 1-3, exemplified recent conference priorities with its emphasis on cross-cause collaboration and networking through one-on-one meetings.68 The event continued the tradition of bringing together practitioners across global health, animal welfare, AI safety, and other cause areas for substantive discussions about evidence-based impact.
The 2024 Annual Impact Report showed continued growth in EA community engagement, with 34% of surveyed community members changing donation destinations due to EA principles and 18% donating more than 10% of income to EA causes.69 The conferences play a central role in this community development, facilitating the connections and career changes that translate EA principles into action.
### Upcoming Events and Plans
Multiple EA Global conferences are scheduled for 2025-2026, including events in New York City (September 2025 and October 2026), San Francisco (February 2026), and London (May 2026).70 These events will continue to address the full range of EA cause areas while adapting to the community's evolving priorities and the changing landscape of global challenges.
EAGx events are also expanding, with EAGxAustralasia 2025 scheduled for November 28-30, 2025 in Melbourne as the ninth annual EA conference in Australia.71 This growth in regional events reflects efforts to make EA Global's benefits more accessible while maintaining the flagship conferences' focus on experienced practitioners.
### Organizational Updates
Recent organizational developments in the EA ecosystem include new initiatives from major funders and research organizations. Coefficient Giving launched an RFP for effective giving initiatives with an application deadline of April 20, 2025, and GiveWell has created resources responding to US government foreign assistance policy changes.72 These developments will likely influence discussions at upcoming EA Global conferences.
Organizations represented at EA Global continue to expand their work. Anima International opened speaker registration for the 10th Conference on Animal Rights in Europe (CARE) 2025, and the Fish Welfare Initiative published new monitoring and evaluation plans.73 The Happier Lives Institute released research finding that top EA-recommended charities are approximately 1,000 times more impactful than the least effective charities when measured in Wellbeing-Years (WELLBYs).74
## Key Uncertainties

- Long-term impact measurement: While EA Global facilitates numerous connections and career changes, the ultimate impact of these networking effects on global welfare remains difficult to quantify precisely.
- Optimal balance between cause areas: The appropriate distribution of attention and resources between well-evidenced interventions (global health) and more speculative but potentially high-impact causes (AI safety, longtermism) continues to be debated within the community.
- Scalability and accessibility: It remains unclear whether EA Global's selective model or broader EAGx events are more effective for movement growth and whether current formats adequately include diverse global perspectives.
- Community culture sustainability: Whether EA Global and similar events can address concerns about insularity, groupthink, and exclusion of non-Western voices while maintaining substantive discussions among committed practitioners is uncertain.
- Integration with mainstream institutions: The extent to which EA Global should remain a community-focused event versus seeking broader influence in mainstream policy, philanthropy, and research institutions is an open question.
## Sources

### Footnotes

- What is EA Global? - Effective Altruism
- EA Global: New York City 2025 - Effective Altruism
- What is EA Global? - Effective Altruism
- What is EA Global? - Effective Altruism
- What is EA Global? - Effective Altruism
- Effective Altruism Global - EA Forum
- Effective Altruism Global - Wikipedia
- History of Effective Altruism - EA Forum
- Effective Altruism Global - Wikipedia
- Effective Altruism Global - Wikipedia
- Effective Altruism Global - Wikipedia
- Effective Altruism Global - Wikipedia
- What is EA Global? - Effective Altruism
- What is EA Global? - Effective Altruism
- EA Global: Boston 2024 - YouTube
- EA Global Events - Effective Altruism
- EA Global: New York City 2025 - Effective Altruism
- EA Global: San Francisco 2026 - Effective Altruism
- EA Global: San Francisco 2026 - Effective Altruism
- What is EA Global? - Effective Altruism
- What is EA Global? - Effective Altruism
- Effective Altruism Global - EA Forum
- Rohin Shah: AI Alignment Progress - YouTube
- Citation rc-b484
- What is EA Global? - Effective Altruism
- What is EA Global? - Effective Altruism
- Effective Altruism: Not as Bad as You Think - James Ozden
- What is EA Global? - Effective Altruism
- What is EA Global? - Effective Altruism
- History of Effective Altruism - effektivaltruism.org
- EA Global: New York City 2025 - Effective Altruism
- Citation rc-b280
- EA and Global Poverty: Let's Gather Evidence - EA Forum
- How to do research that matters - Karolina Sarek - YouTube
- Cause Profile: Global Health and Development - Effective Altruism
- Effective Altruism - Wikipedia
- EA and Global Poverty: Let's Gather Evidence - EA Forum
- Annual Impact Report 2024 - Effective Altruism Sweden
- Rohin Shah: AI Alignment Progress - YouTube
- Robert Miles: Alignment Plans - YouTube
- EA and Global Poverty: Let's Gather Evidence - EA Forum
- EA and Global Poverty: Let's Gather Evidence - EA Forum
- Blueprints for AI Safety Conferences - EA Forum
- The Institutional Critique of Effective Altruism - Wharton
- The Institutional Critique of Effective Altruism - Wharton
- Anonymous Answers: Flaws of EA Community - 80,000 Hours
- Anonymous Answers: Flaws of EA Community - 80,000 Hours
- Why Effective Altruism and Longtermism Are Toxic Ideologies - Current Affairs
- Annual Impact Report 2024 - Effective Altruism Sweden
- EA and Global Poverty: Let's Gather Evidence - EA Forum
- Anonymous Answers: Flaws of EA Community - 80,000 Hours
- Anonymous Answers: Flaws of EA Community - 80,000 Hours
- Effective Altruism - Wikipedia
- The Problem with Effective Altruism - Persuasion Community
- Effective Altruism: Not as Bad as You Think - James Ozden
- Effective Altruism is Worse for the Poor - Dear Humanity
- Effective Altruism: Not as Bad as You Think - James Ozden
- Evidence-based altruism or scientific imperialism? - PMC
- What is EA Global? - Effective Altruism
- What is EA Global? - Effective Altruism
- History - Centre for Effective Altruism
- EA Organization Updates: April 2025 - EA Forum
- Effective Altruism - Wikipedia
- EA Global: Boston 2024 - YouTube
- Annual Impact Report 2024 - Effective Altruism Sweden
- EA Global Events - Effective Altruism
- EAGxAustralasia 2025 - Effective Altruism Australia
- EA Organization Updates: April 2025 - EA Forum
- EA Organization Updates: April 2025 - EA Forum
- EA Organization Updates: April 2025 - EA Forum
Unsupported by the source: EA Global: New York City 2025 (with applications closing September 28, 2025); EA Global: San Francisco (February 13-15, 2026); and the statement that these events continue to focus on the full range of EA cause areas, including farmed animal welfare, global health and development, biosecurity, and AI safety.
“EA GLOBAL: LONDON 29–31 May 2026 Apply now -> EA GLOBAL: NEW YORK CITY 16–18 Oct 2026 Apply now ->”
Wrong dates: the claim mentions an EA Global conference in New York City in September 2025, but the source only mentions one in October 2026; the claim mentions an EA Global conference in San Francisco in February 2026, but the source mentions an EA Summit in Helsinki in February 2026. Overclaim: the source does not explicitly state that these events will continue to address the full range of EA cause areas while adapting to the community's evolving priorities and the changing landscape of global challenges.
Rohin Shah, a prominent AI safety researcher, presents an overview of progress in AI alignment research, covering key challenges, research directions, and developments in the field. The talk likely surveys technical alignment approaches and assesses how the field has evolved.
A chronological history of the Centre for Effective Altruism (CEA) from its founding in 2011 through its major milestones, including the incubation and spin-off of key EA organizations, the coining of the term 'effective altruism,' and the development of community infrastructure like EA Global conferences and EA Funds. The page documents how CEA evolved from an umbrella organization for Giving What We Can and 80,000 Hours into a broader community-building institution.
“We award our first round of Community Building Grants and second round of Effective Altruism Grants.”
Nathan Young investigates whether global poverty has been deprioritized within EA relative to AI safety and animal welfare, presenting survey and funding data suggesting it remains highly valued while theorizing that elite EA discourse may underrepresent it. He argues for global poverty's importance both intrinsically and as a coalition-building tool, while acknowledging that short AI timelines could rationally shift priorities.
“I was honestly quite surprised that very little of the conversation was about how to measurably help the world’s poor. Everyone I talked to was now focusing on things like AI Safety and Wild Animal Welfare.”
Flagged as unsupported and as a misleading paraphrase of the source.
“Even more surprising was the fact that the most popular arguments weren’t based on measurable evidence, like GiveWell, but based on philosophical arguments and thought experiments.”
“In the 2020 survey , no cause area had a higher average rating (I'm eyeballing this graph) or a higher % of near top + top priority ratings. In 2020, global development was considered the highest priority by EAs in general.”
The source does not contain the 2024 EA community survey results, the percentages for global health interest and selection as most pressing, or the AI safety dominance percentage. It also does not mention elite discussions at conferences like EA Global disproportionately favoring speculative cause areas over empirically validated interventions.
EA Global San Francisco 2026 is a conference organized by the Centre for Effective Altruism, scheduled for February 13-15, 2026 at the Hilton Union Square. The event brings together effective altruism community members for talks, workshops, and networking focused on the latest research and coordination on global projects.
“The event will begin with an opening reception on Friday (but you can arrive at any time during the weekend). Saturday and Sunday will have full-day schedules with content finishing at around 6pm on Sunday evening.”
“The event will begin with an opening reception on Friday (but you can arrive at any time during the weekend). Saturday and Sunday will have full-day schedules with content finishing at around 6pm on Sunday evening. The agenda includes: Talks and discussions on the latest ideas at the frontiers of the EA movement (past examples available here ) Workshops to help improve your thinking and execution Meetups, networking, and social activities”
EA Global (EAG) is a series of international conferences organized by the Centre for Effective Altruism that brings together professionals, researchers, and community members to collaborate on high-impact global challenges. Events span flagship EA Global conferences in major cities and regional EAGx summits, covering topics including AI safety, biosecurity, global health, and existential risk.
“EA Global and EAGx events are designed to help members of the effective altruism community make progress on the world's pressing problems by sharing new thinking and research as well as starting or coordinating on important projects.”
The source does not mention connecting people with advisors, funders, and potential employers. The source does not mention career changes impacting an estimated 80 million hours of work directed toward high-impact causes.
“Each application is reviewed by the CEA team, or relevant EAGx team. We aim to respond within two weeks.”
The claim that 'most talks recorded and transcripts made available on the EA Forum' is not explicitly stated in the source, which mentions 'Watch past talks' with links to videos and 'Read and discuss on the EA Forum' but does not confirm that most talks are recorded and transcribed. The claim that approved applicants are 'required to purchase tickets or make donations' is slightly misleading: the source states that approved applicants can 'either purchase a ticket or consider making a donation in order to register for the event', and 'consider making a donation' implies a donation is not strictly required.
“Both EA Global and EAGx conferences typically last around 2-3 days.”
The source does not mention a Friday evening reception. The source does not mention one-on-one meetings.
A collection of anonymous candid critiques from EA community members identifying structural and cultural flaws within the effective altruism movement. Topics range from groupthink and status dynamics around longtermism to insularity, poor hiring practices, and insufficient engagement with the outside world. The piece serves as internal self-reflection on how the EA community could improve.
“Not media savvy enough EAs should try to be more media savvy. This applies to avoiding misconceptions around topics, earning-to-give etc. But EAs should also recognise the importance of telling a good story.”
“I think Open Philanthropy putting $55 million into something [CSET] that is not even focused on transformative AI, let alone AGI was not a good idea considering all the other GCR reduction opportunities there are. There are really large funding gaps both for existing and EA-aligned organisations yet to be funded. When a group gets funded, it also doesn’t mean they were able to get full funding. It can also be challenging to learn about all the different EA organisations as there’s no central hub. Lists are very scattered and it can be challenging for the community to learn about them all and what their needs are.”
“These include groupthink, "holier than thou" attitudes, excessive focus on internal community dynamics rather than external impact, and insularity.”
EA Organization Updates: April 2025 - EA Forum · Toby Tremlett & Dane Valerie · 2025 · Blog post
A monthly compilation of job opportunities, fellowship programs, and organizational updates from the effective altruism community for April 2025. Features time-sensitive opportunities in AI alignment research (MATS, CLR), AI economics (Stripe Fellowship), and career development for EA professionals, alongside updates from major EA organizations like 80,000 Hours, Open Philanthropy, and GiveWell.
“Open Philanthropy has announced the launch of their new Abundance and Growth Fund , which will spend at least $120 million over the next three years to accelerate economic growth and boost scientific and technological progress while lowering the cost of living.”
The claim mentions Coefficient Giving's launch of the Abundance and Growth Fund, but the source attributes the fund to Open Philanthropy. The claim states that GiveWell has saved an estimated 150,000+ lives, but this is not mentioned in the source.
“Open Philanthropy has launched a request for proposals for effective giving initiatives to gather information on opportunities they might be overlooking. Apply by April 20th.”
The RFP is from Open Philanthropy, not Coefficient Giving. The source gives the application deadline only as April 20, without specifying the year 2025. The claim mentions EA Global conferences generally, but the source only mentions EA Global NYC and EAGxPrague 2025.
“Anima International has opened the speakers’ registration for the 10th Conference on Animal Rights in Europe (CARE) 2025.”
The EA Forum's AI safety topic page aggregates community discussions, research posts, and quick takes on reducing existential risks from advanced AI. It serves as a living index of community thinking spanning technical safety, policy, capacity-building, and emerging concerns like superpersuasive AI and evaluation saturation.
“Some of it is directly useful, some of it is indirectly useful (e.g. negative results, datasets, open-source models, position pieces etc.), and some is not useful and/or a distraction.”
“This means hiring and paying for staff that might otherwise work at frontier AI labs, requiring us to compete with labs directly for talent.”
A comprehensive EA Forum wiki article tracing the origins and development of the effective altruism movement from its philosophical precursors (Singer, Bostrom) through 2022, covering key periods of crystallization, maturation, and shifts toward longtermism and existential risk focus. It documents the movement's institutional development, strategic evolution from earning-to-give to direct work, and response to the FTX crisis.
“2013 saw the first EA Summit, a 7-day event in the San Francisco Bay Area organized by Leverage Research and attended by staff from the Center for Applied Rationality , the High Impact Network, GiveWell , The Life You Can Save , 80,000 Hours , Giving What We Can , Effective Animal Altruism, and the Machine Intelligence Research Institute (MIRI).”
This paper examines the 'institutional critique' of Effective Altruism, which argues that EA's focus on individual charitable giving neglects systemic and structural change. It analyzes whether EA's framework adequately addresses the role of institutions, political action, and collective coordination in solving large-scale problems including existential risks.
Robert Miles presents an overview of various AI alignment plans and approaches, surveying the landscape of proposed technical solutions to the alignment problem. The talk likely covers different schools of thought on how to ensure AI systems remain safe and beneficial as capabilities scale.
EA Global New York City 2025 is a conference organized by the Centre for Effective Altruism bringing together practitioners, researchers, and funders focused on high-impact cause areas including AI safety, global health, and existential risk. The event serves as a networking and knowledge-sharing hub for the effective altruism community.
“EA Global: New York City 2025 will focus on the full range of causes related to effective altruism , including farmed animal welfare, global health and development, biosecurity, AI safety, and more.”
The claim lists multiple EA Global events with dates and locations, but the source only mentions EA Global: New York City 2025; EA Global: San Francisco (February 13-15, 2026), EA Global: London (May 29-31, 2026), and EA Global: New York City (October 16-18, 2026) are unsupported. The source confirms the September 28, 2025 application deadline but specifies the time as 11:59pm ET. Wrong dates: the claim states EA Global: New York City 2025 will be held October 16-18, 2026, but the source gives October 10-12, 2025.
“EA Global: New York City 2025 will focus on the full range of causes related to effective altruism , including farmed animal welfare, global health and development, biosecurity, AI safety, and more.”
The source mentions 'meta-EA work' but not in the context of cause areas within effective altruism. The claim that cross-cause integration reflects EA's commitment to cause neutrality and evidence-based prioritization is an overclaim. The source does not explicitly state this.
“EA Global is mostly aimed at people who have a solid understanding of the core ideas of EA and who are taking significant actions based on those ideas. Many EA Global attendees are already professionally working on effective-altruism-inspired projects or figuring out how best to work on such projects.”
Effective Altruism Global (EAG) is a recurring conference series central to the EA movement, featuring prominent speakers from AI safety, philosophy, and related fields. The EA Forum topic page aggregates discussions, speaker lists, and community discourse around conference logistics, admissions, and content. It serves as a hub for accessing recorded talks and transcripts from past events.
“Most EA Global talks are recorded, and many have full transcripts.”
Unsupported by the source: the claim that applications are reviewed within two weeks, and the claim that approved applicants are required to purchase tickets or make donations.
“Most EA Global talks are recorded, and many have full transcripts.”
The source only mentions that many talks have full transcripts, not that they are posted to the EA Forum. The source does not mention specific topics discussed or speakers like Rohin Shah or Robert Miles.
A critical essay arguing that Effective Altruism and longtermism are ideologically harmful frameworks that distort moral priorities, concentrate power among elites, and obscure present-day injustices in favor of speculative future concerns. The piece contends these movements provide philosophical cover for the wealthy to avoid structural change while feeling virtuous. It represents a left-leaning critique of EA's utilitarian calculus and longtermism's focus on existential risk.
“In fact, you’ve cited longtermist arguments that it’s actually more moral to care about people in the West than in the Global South because they’re the people who can go on and do all the good.”
Wikipedia article covering Effective Altruism Global, the flagship conference series organized by the Centre for Effective Altruism. The conferences bring together researchers, practitioners, and donors focused on high-impact cause areas including AI safety and existential risk. EAG serves as a key networking and coordination hub for the EA and AI safety communities.
Karolina Sarek presents a framework for conducting high-impact research, focusing on how researchers can maximize the real-world significance of their work. The talk addresses prioritization, research selection, and translating findings into meaningful outcomes, likely drawing on her experience in EA and policy-relevant research contexts.
James Özden defends Effective Altruism against post-FTX criticism, acknowledging legitimate concerns about donor power concentration and systemic change neglect while arguing that EA's concrete impacts—such as GiveWell directing $1B+ to save 150,000+ lives—are frequently overlooked by critics. He advocates for a pluralistic approach that appreciates EA's contributions without dismissing valid critiques.
“This is definitely an ongoing topic of discussion within the community, with some concrete reform ideas being presented (albeit none yet actually implemented).”
The source does not mention that the EA movement's focus on evidence and transparency distinguishes it from ideological movements that have justified harmful actions. The source does not mention that the controversy has prompted ongoing discussions within the EA community about ethics, culture, and accountability that are reflected in EA Global discussions.
“Tangibly, it’s a burgeoning intellectual movement with close to 10,000 engaged folks globally, across 70 countries (although mostly in the EU and US).”
The source does not mention EA Global specifically, only Effective Altruism (EA) in general. The source does not mention CEA (Centre for Effective Altruism) specifically.
This Effective Altruism cause profile examines global health and development as a priority cause area, evaluating its importance, tractability, and neglectedness. It outlines why interventions like malaria prevention, deworming, and cash transfers can save lives and improve wellbeing at low cost. The profile serves as an introduction for people considering how to direct charitable resources effectively.
“Lengeler’s (2004) review, which considers more studies and looks at a broader range of outcomes, finds a statistically significant effect on child mortality, summarised as “5.53 deaths averted per 1000 children treated per year.””
The claim states 'hundreds of millions of dollars in charitable donations facilitated through EA evaluators', but the source does not mention EA evaluators. The source states '5.53 deaths averted per 1000 children treated per year', while the claim states '5.53 deaths averted per 1,000 children treated per year'.
This is the 2024 annual impact report from Effective Altruism Sweden, documenting the organization's activities, outreach, and outcomes over the year. It likely covers community building efforts, funding allocation, and progress toward EA goals including AI safety awareness in Sweden.
A critique arguing that Effective Altruism (EA) may inadvertently harm or deprioritize the interests of the global poor, examining structural and philosophical flaws in the EA movement's approach to poverty alleviation and resource allocation. The piece challenges EA's focus on longtermism and existential risk as potentially diverting resources from immediate human suffering.
“If you visited a truly impoverished country like Uganda, you will quickly notice that many of the things that effective altruists call “effective” — from mosquito nets, to $100 business grants that are provided to groups of 3 people — are the same short-term, disposable solutions that have not only kept their recipients in abject poverty, but also, they are the very kind of solutions that often disappear the same day their proponents exit.”
The claim mentions conferences primarily organized and attended by people from Western countries, but the source does not explicitly mention conferences. The claim states that there are no African-led organizations among EA's top-rated charities, but the source only states that none of the charities operating in Africa and labeled 'most effective' by the EA movement are African.
A critical analysis of the Effective Altruism movement, examining its philosophical foundations, institutional failures, and potential blind spots. The piece argues that EA's utilitarian framework and technocratic approach may lead to problematic prioritization decisions and a concentration of influence among a small elite.
“Both Marxism and effective altruism claim that they have figured out the true way to make the world a better place. And both Marxism and effective altruism flatter their adherents into thinking that this gives them a key role in a movement that will prove to have world-historical importance. Taken together, these two beliefs provide a powerful justification for immoral action.”
The article states that the FTX Future Fund gave $30 million to a UK charity, not $13.9 million to CEA. It does not explicitly state, in the claim's terms, that critics argue the scandal reveals 'deeper rot' within EA, but it does mention that opponents of effective altruism argue that the immoral actions of its most prominent advocate(s) reveal a deeper rot at the core of the philosophy.
This paper critically examines the tension between evidence-based altruism (as embodied by the Effective Altruism movement) and concerns about scientific imperialism, questioning whether well-intentioned, data-driven philanthropic and research interventions can impose Western frameworks on global populations. It explores the ethical dimensions of prioritizing measurable impact in international aid and research. The paper raises questions about whose values and methodologies define 'effectiveness' in global humanitarian efforts.
“However, current knowledge frameworks and practices in global health ‘privilege dominant groups, thus diverging from plurality’.”
This post proposes a theory of change for AI safety conferences, arguing they are essential infrastructure for a fast-moving field. It outlines four event formats—online conferences, regional one-day summits, three-day conferences, and high-profile bridge-building events—each designed to foster community, coordinate efforts, and build external legitimacy for the AI safety field.
“But having an explicitly AI risk-branded conference would allow us to expand the scope of the conference to those who are deeply engaged with AI Safety, but aren’t familiar with or might be put off by the EA brand.”
EAGxAustralasia 2025 is a regional Effective Altruism conference organized by Effective Altruism Australia, bringing together individuals interested in using evidence and reason to do the most good. The event serves as a networking and learning opportunity for the EA and related AI safety communities in the Australasian region.
“28th – 30th November in Melbourne we’ll be holding EAGxAustralasia 2025. This will be the ninth annual EA Global conference in Australia!”
Wikipedia's comprehensive overview of Effective Altruism (EA), a philosophical and social movement that uses evidence and reasoning to determine the most effective ways to benefit others. The article covers EA's history, core principles, major cause areas (including global poverty, animal welfare, and existential risk), and prominent organizations and figures. It also addresses criticisms and controversies surrounding the movement.
This page provides an overview of the history and development of the Effective Altruism (EA) movement, tracing its intellectual origins and organizational growth. It covers key milestones, influential figures, and the evolution of EA's focus areas including global health, animal welfare, and existential risk reduction. The resource serves as an introductory reference for understanding how EA came to intersect with AI safety concerns.
“Over 3,000 people have followed 80,000 Hours’ advice and changed careers to make a bigger difference. That equates to roughly 80 million more hours spent working on important problems.”
Misleading paraphrase: the claim implies that the connections facilitated by EA organizations directly led to the career changes and the associated hours of work, while the source only says that 3,000 people followed 80,000 Hours' advice and changed careers, resulting in 80 million more hours spent working on important problems. Wording mismatch: the claim states '80 million hours of work directed toward high-impact causes', while the source states '80 million more hours spent working on important problems'.