Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Examines the public-policy implications of reinforcement learning systems, proposing a typology of risks arising from RL design choices and introducing Reward Reports: living documents that demarcate design choices for proposed RL deployments in domains such as energy infrastructure, social media recommender systems, and transportation.
Paper Details
Metadata
Abstract
In the long term, reinforcement learning (RL) is considered by many AI theorists to be the most promising path to artificial general intelligence. This places RL practitioners in a position to design systems that have never existed before and lack prior documentation in law and policy. Public agencies could intervene on complex dynamics that were previously too opaque to deliberate about, and long-held policy ambitions would finally be made tractable. In this whitepaper we illustrate this potential and how it might be technically enacted in the domains of energy infrastructure, social media recommender systems, and transportation. Alongside these unprecedented interventions come new forms of risk that exacerbate the harms already generated by standard machine learning tools. We correspondingly present a new typology of risks arising from RL design choices, falling under four categories: scoping the horizon, defining rewards, pruning information, and training multiple agents. Rather than allowing RL systems to unilaterally reshape human domains, policymakers need new mechanisms for the rule of reason, foreseeability, and interoperability that match the risks these systems pose. We argue that criteria for these choices may be drawn from emerging subfields within antitrust, tort, and administrative law. It will then be possible for courts, federal and state agencies, and non-governmental organizations to play more active roles in RL specification and evaluation. Building on the "model cards" and "datasheets" frameworks proposed by Mitchell et al. and Gebru et al., we argue the need for Reward Reports for AI systems. Reward Reports are living documents for proposed RL deployments that demarcate design choices.
Summary
This whitepaper examines the policy and safety implications of reinforcement learning systems, which are increasingly viewed as a path toward artificial general intelligence. The authors identify four categories of risks inherent to RL design choices: scoping the horizon, defining rewards, pruning information, and training multiple agents. They propose that policymakers need new governance mechanisms drawing from antitrust, tort, and administrative law to manage these risks, and introduce 'Reward Reports'—living documents that would transparently document RL design choices for proposed deployments across domains like energy infrastructure, social media, and transportation.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Yoshua Bengio | Person | 39.0 |
Cached Content Preview
**arXiv:2202.05716** (cs)
\[Submitted on 11 Feb 2022\]
# Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems
Authors: [Thomas Krendl Gilbert](https://arxiv.org/search/cs?searchtype=author&query=Gilbert,+T+K), [Sarah Dean](https://arxiv.org/search/cs?searchtype=author&query=Dean,+S), [Tom Zick](https://arxiv.org/search/cs?searchtype=author&query=Zick,+T), [Nathan Lambert](https://arxiv.org/search/cs?searchtype=author&query=Lambert,+N)
[View PDF](https://arxiv.org/pdf/2202.05716)
| | |
| --- | --- |
| Comments: | 60 pages |
| Subjects: | Machine Learning (cs.LG); Computers and Society (cs.CY) |
| Cite as: | [arXiv:2202.05716](https://arxiv.org/abs/2202.05716) \[cs.LG\] |
| | (or [arXiv:2202.05716v1](https://arxiv.org/abs/2202.05716v1) \[cs.LG\] for this version) |
| | [https://doi.org/10.48550/arXiv.2202.05716](https://doi.org/10.48550/arXiv.2202.05716) (arXiv-issued DOI via DataCite) |
| Journal reference: | Center for Long Term Cybersecuri |