
LLVM project adopts 'human in the loop' policy following AI-driven nuisance contributions - DevClass

A concrete real-world example of an open-source project enacting formal AI governance policy due to operational harms from unsupervised AI-generated contributions; relevant to discussions of human oversight, AI deployment norms, and institutional responses to AI misuse.

Metadata

Importance: 52/100 · news article · news

Summary

The LLVM open-source compiler project implemented a 'human in the loop' policy requiring human oversight for AI-assisted code contributions after experiencing a wave of low-quality, AI-generated pull requests that burdened maintainers. The policy reflects growing concerns in open-source communities about AI tools generating superficially plausible but unhelpful or incorrect contributions at scale. This represents an early practical governance response to AI-generated content flooding collaborative software projects.

Key Points

  • LLVM maintainers were overwhelmed by AI-generated contributions that appeared legitimate but were low-quality or introduced subtle bugs, consuming significant review time.
  • The new policy mandates that a human must take responsibility for any AI-assisted contribution, ensuring accountability before submission.
  • This is one of the first major open-source projects to formally codify AI contribution governance in response to real operational disruption.
  • The case illustrates a broader challenge: AI tools can generate high volumes of plausible-looking but low-value work that degrades collaborative infrastructure.
  • The policy highlights the need for human oversight mechanisms ('human in the loop') as AI coding assistants become more widely used in software development.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 12 KB


[AI/ML](https://www.devclass.com/tag/ai-ml)

# LLVM project adopts "human in the loop" policy following AI-driven nuisance contributions

[Tim Anderson](https://www.devclass.com/author/tim-anderson)

Published Wed 21 Jan 2026 // 14:55 UTC

The LLVM compiler project has adopted a new policy banning code contributions submitted by AI agents without human approval, as well as AI-assisted contributions that the contributor has not reviewed and understood.

The policy is required because of the increasing number of “LLM \[large language model\] assisted nuisance contributions to the project,” [according](https://github.com/llvm/llvm-project/commit/18695b27d565) to the documentation update. The new policy follows a [debate on the matter](https://discourse.llvm.org/t/rfc-llvm-ai-tool-policy-start-small-no-slop/88476) which highlights issues with AI-assisted code.


LLVM is among the most critical open source projects, and its decisions may influence others facing similar problems. The cURL project recently [closed its bug bounty program](https://www.theregister.com/2026/01/21/curl_ends_bug_bounty/) following pressure on maintainers caused by low-quality AI submissions. Other projects to propose or adopt AI policies include Fedora Linux, Gentoo Linux, Rust, and QEMU; in most cases these are stricter than the one adopted by LLVM.

The LLVM project’s AI policy is summarized as permitting AI assistance provided there is a human in the loop. This means not just glancing over the code: the contributor must review all of it and be able to answer questions about it without referring back to the AI that generated it. In addition, contributors should label contributions that contain substantial AI-generated content. Agents that submit contributions without human approval are therefore forbidden.

There is also a ban on the use of AI tools for GitHub issues marked “good first issue.” These are typically non-urgent issues intended as learning opportunities for new contributors, and using AI to resolve them wastes that opportunity.


Some in the community regard the new policy as too permissive. “An overly permissive AI contribution policy betrays the duty of care we have to our users,” [said one](https://discourse.llvm.org/t/rfc-llvm-ai-tool-policy-human-in-the-loop/89159/16), while another [said](https://discourse.llvm.org/t/our-ai-policy-vs-code-of-conduct-and-vs-reality/88300/7) “I’m vastly in favor of changing our AI policy to just disallow it.”

The policy identifies a core issue: that use of AI “shifts effort from the implementor to the reviewer.” Maintainer time is a scarce resource, and contributions must be worth more than the time it takes to review them.

Copyright is another issue, and the policy states that AI systems raise unanswered questions around copyright. Contrib

... (truncated, 12 KB total)
Resource ID: bb2d7b4ce7d570cf | Stable ID: YTAyYTI0OT