Our Mistakes - givingwhatwecan.org
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Giving What We Can
This page is tangentially relevant to AI safety as an example of institutional transparency practices that could inform how AI safety organizations handle mistakes and maintain public accountability.
Metadata
Importance: 18/100 · organizational report · reference
Summary
Giving What We Can maintains a public record of organizational mistakes, errors in judgment, and lessons learned, demonstrating a commitment to transparency and accountability in the effective altruism community. This page serves as an example of institutional epistemic humility and honest self-assessment practices.
Key Points
- Documents past organizational errors and misjudgments by Giving What We Can
- Demonstrates a culture of transparency and accountability within EA-aligned organizations
- Provides a model for how nonprofits can publicly acknowledge and learn from mistakes
- Relevant to discussions of organizational integrity and trust-building in cause-prioritization work
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Giving What We Can | Organization | 62.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 15, 2026 · 8 KB
Our mistakes · Giving What We Can
As Giving What We Can has grown and changed as an organisation, we recognise that our work has sometimes fallen short of the standards we have for ourselves. We believe we have made significant strides toward our current goals of professionalism, collaboration, and excellent implementation of our projects. But we also want to give context on some of our mistakes, both resolved and unresolved.
Here are some reasons why we have a page documenting our mistakes:
To acknowledge ways our mistakes have affected others.
To share information about problems you may have observed with our work.
To help you assess whether you think our corrective measures are adequate.
When you evaluate us as an organisation, we recommend using this page, but also looking directly at what we've produced, rather than just taking our word for things.
This is not an exhaustive list of every problem with our work. It mostly covers mistakes we made that affected outside stakeholders. We don't list all the ways our projects were inefficient or suboptimal. This page includes mistakes made between 2016 and 2020, when Giving What We Can was functioning as a part of the Centre for Effective Altruism's core operations.
Please contact us if you know of other items that should be listed here, or other ways we could improve.
Last updated: April 2023.
Inaccurate reporting of a significant recurring donation (discovered in 2023)
What happened?
During our 2023 impact evaluation, we confirmed that a very large recurring donation from 2014 to 2023 was not accurately reported. The total difference between the amount reported and the amount donated was $96 million. Although this donation was not factored into any of our impact evaluations, it resulted in an incorrect representation of the total amount donated by our community, as seen on our front page.
How have we improved?
Upon discovering the error, we removed the recurring donation and corrected the past donations so they now reflect the amount given. We also provided a summary of the potential errors and limitations of our databases in our 2023 impact evaluation. We now have a system that alerts us to very large reported donations so that we can more quickly follow up on any that appear inaccurate.
Problems with the quality, consistency, and transparency of our content and recommendations (2020–2022)
What happened?
There were several issues with our recommendations during this period:
Our inclusion criteria informing our recommended charities were not always clear. In 2021, we provided only a vague indication of how we generated our recommendations, and though we provided further detail in 2022, there was still some missing information (specifically, it wasn't clear that we only used Founders Pledge and EA Funds to guide our recommendations for "safeguarding the long-term future").
There were also a few cases where
... (truncated, 8 KB total)
Resource ID: 162f33a4098b82d6 | Stable ID: OTc2YTFhZW