Harvard Law Review: Amoral Drift in AI Corporate Governance
Source: harvardlawreview.org
Published in the Harvard Law Review, this piece offers a legal and corporate governance lens on AI safety failures, relevant for those studying institutional design, regulatory approaches, and why AI companies may fail to self-govern effectively.
Metadata
Importance: 72/100 · journal article · analysis
Summary
This Harvard Law Review article examines how AI companies exhibit "amoral drift" — a structural tendency to deprioritize ethical considerations as commercial pressures intensify — and analyzes why current corporate governance mechanisms fail to constrain that drift. It argues that existing legal and organizational structures are insufficient to keep AI development aligned with public interests.
Key Points
- AI companies face structural incentives that systematically erode ethical commitments over time, a phenomenon termed "amoral drift"
- Standard corporate governance mechanisms (boards, fiduciary duties, internal ethics teams) are inadequate to constrain AI-specific harms
- The article critiques high-profile governance failures at major AI labs as symptoms of deeper structural problems, not isolated incidents
- Proposes legal and regulatory reforms to create binding accountability mechanisms for AI developers
- Argues that voluntary safety commitments without enforceable legal backstops are insufficient to govern transformative AI systems
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Anthropic Long-Term Benefit Trust | Organization | 70.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 98 KB
[Download](https://harvardlawreview.org/wp-content/uploads/2025/03/138-Harv.-L.-Rev.-1633-1.pdf)
ChatGPT’s debut in November of 2022 set off a race in Silicon Valley to develop and monetize artificial intelligence (AI).[1](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-1) Within a few months, Microsoft invested $10 billion in OpenAI, the company behind ChatGPT.[2](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-2) Anthropic, a competitor of OpenAI, raised similarly impressive amounts of money from companies and investors hoping to participate in the AI revolution.[3](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-3)
Well before ChatGPT emerged, commentators warned of the risks advanced AI might pose.[4](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-4) Observers who predict existential threats to humanity from superintelligent AI point to the difficulty of precisely controlling it.[5](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-5) They reason that superintelligent AI might pursue a human-directed goal without balancing that goal against general human values.[6](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-6) For example, with access to enough tools, a superintelligent AI instructed to maximize paperclip production might end up “converting . . . large chunks of the observable universe into paperclips.”[7](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-7) Alternatively, a superintelligent AI might develop its own unexpected goals — goals that do not necessarily account for human wellbeing.[8](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-8) The proposed solution to these types of existential AI risks is “AI alignment”: the challenging task of ensuring that the values of an AI align with human values.[9](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-9) Critics contend that AI startups are moving far faster than AI alignment research can keep pace, at great risk to humanity.[10](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-10)
Even if these existential risks sound far-fetched, AI certainly does present a challenge to existing legal and social frameworks. Companies have already demonstrated that AI can learn from and reflect human racial and gender biases.[11](https://harvardlawreview.org/print/vol-138/amoral-drift-in-ai-corporate-governance/#footnote-ref-11)
... (truncated, 98 KB total)
Resource ID: ab0dc9abee0cef4d | Stable ID: Mjc1ZTljNz