Longterm Wiki

Some Background on Our Views Regarding Advanced Artificial Intelligence


Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

A foundational public statement from Open Philanthropy, one of the largest funders of AI safety work, explaining the philosophical and strategic rationale behind its AI safety grantmaking priorities.

Metadata

Importance: 72/100 · blog post · primary source

Summary

Open Philanthropy articulates its foundational concerns about transformative and potentially dangerous advanced AI, and explains why it prioritizes AI safety funding. The post describes the organization's belief that advanced AI could be among the most transformative and potentially catastrophic technologies in human history, and outlines its approach to reducing those risks.

Key Points

  • Open Philanthropy views advanced AI as potentially one of the most transformative and dangerous technologies humanity will develop, warranting significant philanthropic attention.
  • The post explains the organization's concern about both misaligned AI systems and misuse of AI by humans to gain disproportionate power.
  • Open Philanthropy acknowledges deep uncertainty about AI timelines and risks but argues the potential scale of harm justifies prioritizing safety work now.
  • The document outlines the organization's strategy for grantmaking in technical AI safety, policy, and field-building efforts.
  • It reflects an explicit commitment to avoiding 'lock-in' of any single set of values, including those of Open Philanthropy itself.

Cited by 1 page

Page          Type     Quality
AI Timelines  Concept  95.0

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 169 KB
Some Background on Our Views Regarding Advanced Artificial Intelligence | Coefficient Giving

May 6, 2016
Some Background on Our Views Regarding Advanced Artificial Intelligence
By Holden Karnofsky

We’re planning to make potential risks from advanced artificial intelligence a major priority in 2016. A future post will discuss why; this post gives some background.

Summary: I first give our definition of “transformative artificial intelligence,” our term for a type of potential advanced artificial intelligence we find particularly relevant for our purposes. Roughly and conceptually, transformative AI refers to potential future AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution. I also provide (below) a more detailed definition.

The concept of “transformative AI” has some overlap with concepts put forth by others, such as “superintelligence” and “artificial general intelligence.” However, “transformative AI” is intended to be a more inclusive term, leaving open the possibility of AI systems that count as “transformative” despite lacking many abilities humans have.

I then discuss the question of whether, and when, we might expect transformative AI to be developed. This question has many properties (long timelines, relatively vague concepts, lack of detailed public analysis) I associate with developments that are nearly impossible to forecast, and I don’t think it is possible to make high-certainty forecasts on the matter. With that said, I am comfortable saying that I think there is a nontrivial likelihood (at least 10% with moderate robustness, and at least 1% with high robustness) of transformative AI within the next 20 years. I can’t feasibly share all of the information that goes into this view, but I try to outline the general process I have followed to reach it.
Finally, I briefly discuss whether there are other potential future developments that seem to have similar potential for impact on similar timescales to transformative AI, in order to put our interest in AI in context. The ideas in this post overlap with some arguments made by others, but I think it is important to lay out the specific views on these issues that I endorse.

Note that this post is confined in scope to the above topics; it does not, for example, discuss potential risks associated with AI or potential measures for reducing them. I will discuss the latter topics more in the future.

1. Defining “transformative artificial intelligence” (transformative AI)

There are many ways to classify potential advanced AI systems. For our purposes, we prefer to focus in on the particular classifications that are most relevant to AI’s potential impact on the world, while putting aside many debates that don’t relate to this (for example, whether and when an AI system might have human-like

... (truncated, 169 KB total)
Resource ID: b48da969219b8ead | Stable ID: MzQyOTFkZT