Longterm Wiki

PauseAI / No AGI Protest @ OpenAI San Francisco - February 12th, 2024


This page documents a grassroots AI safety protest event; relevant to understanding public advocacy tactics around AI governance and the OpenAI-Pentagon controversy of early 2024.

Metadata

Importance: 22/100 · blog post · primary source

Summary

Documentation of a February 2024 protest at OpenAI's San Francisco headquarters organized by PauseAI and No AGI, demanding OpenAI halt AGI development and end its military contracts with the Pentagon. The protest highlighted OpenAI's deletion of policy language prohibiting military use and its subsequent Pentagon partnership as evidence of eroding safety commitments.

Key Points

  • Protesters demanded OpenAI stop working with the Pentagon after the company quietly removed policy language prohibiting military/warfare use in January 2024.
  • The protest cited over 33,000 signatories of the Pause AI letter and warnings from hundreds of scientists about AGI posing existential risks.
  • Sam Altman's own statements were used against OpenAI, noting he said brakes should be applied when AI improves in ways "we don't fully understand."
  • The event was a coalition between PauseAI and external group No AGI; PauseAI later clarified it has no affiliation with No AGI/StopAI following security threats.
  • Media coverage included Bloomberg, VentureBeat, and ReadWrite, indicating moderate mainstream visibility for AI safety protest movements.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Pause Advocacy | Approach | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 4 KB
# PauseAI / No AGI Protest @ OpenAI San Francisco - February 12th, 2024

- Where: San Francisco, OpenAI HQ
- When: 12th of February 2024, 16:30 - 18:00
- [Facebook event](https://fb.me/e/78BzWmaaj)
- [Website](https://openaiprotest.com/)

Other international locations / times:

- UK (exact location TBD) / 4:00 pm GMT

## Why we are protesting OpenAI

OpenAI is trying to build an AI smarter than humans.
Hundreds of scientists are warning that this could cause the end of humanity.
This is why over 33,000 people have signed the Pause letter, urging AI companies like OpenAI to halt their advancements.
Even Sam Altman himself, the CEO of OpenAI, has said that we should hit the brakes [“if AI models are improving in ways that we don’t fully understand”](https://time.com/6288584/openai-sam-altman-full-interview/).
In a different interview, Sam mentioned that predicting capabilities is a [“fun guessing game”](https://www.ft.com/content/dd9ba2f6-f509-42f0-8e97-4271c7b84ded) for OpenAI employees.
In other words: even OpenAI does not understand how their models improve.
The time to hit the brakes is _now_.

## Join us and tell OpenAI “Stop working with the Pentagon!”

On January 10th, without any announcement, OpenAI deleted the language in its usage policy\* that had stated that OpenAI doesn’t allow its models to be used for “activities that have a high chance of causing harm” such as “military and warfare”. Then, on January 17th, TIME reported that OpenAI would be taking the Pentagon as a client. On 2/12, we will demand that OpenAI end its relationship with the Pentagon and not take any military clients. If their ethical and safety boundaries can be revised out of convenience, they cannot be trusted.

AI is rapidly becoming more powerful, far faster than virtually any AI scientist has predicted. Billions are being poured into AI capabilities, and the results are staggering. New models are outperforming humans in many domains. As capabilities increase, so do the risks. Scientists are even warning that AI might end up destroying humanity.

According to their charter, “OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at all economically valuable work—benefits all of humanity.” But many humans value their work and find meaning in it, and hence do not want their jobs to be done by an AGI instead. What protest co-organizer Sam Kirchner of No AGI calls “the Psychological Threat” applies even if AGI doesn’t kill us.

## Contact

- Holly Elmore, PauseAI US Director ( [Twitter](https://twitter.com/ilex_ulmus))
- Sam Kirchner, No AGI ( [Twitter](https://twitter.com/No_AGI_))

## Media Coverage

- [Bloomberg](https://www.bloomberg.com/news/newsletters/2024-02-13/ai-protest-at-openai-hq-in-san-francisco-focuses-on-military-work)
- [ReadWrite](https://readwrite.com/stop-working-with-pentagon-openai-staff-face-protests/)
- [VentureBeat](https://venturebeat.com/ai/protesters-ga

... (truncated, 4 KB total)