Long-Term Planning and Situational Awareness in OpenAI Five - ADS
ui.adsabs.harvard.edu/abs/2019arXiv191206721R/abstract
A 2019 OpenAI paper probing the internal representations of OpenAI Five; relevant to mechanistic interpretability and understanding emergent planning in large-scale RL agents, which has implications for AI safety research on situational awareness.
Metadata
Importance: 52/100 · arXiv preprint · primary source
Summary
This paper investigates how OpenAI Five's model-free deep reinforcement learning agent develops internal representations of game knowledge over the course of training, introducing a technique to extract plans and subgoals from the agent's hidden states. The authors find that the agent exhibits situational awareness and evidence of planning minutes in advance, and qualitatively analyze these predictions during the April 2019 matches against DotA 2 world champions OG.
Key Points
- Studies distributed representations learned by OpenAI Five to understand how game knowledge emerges during training in a model-free RL system.
- Introduces a general technique to learn a model from an agent's hidden states to identify the formation of plans and subgoals without explicit hierarchical structure.
- Finds evidence that the agent plans toward subgoals minutes before executing them, suggesting emergent long-horizon reasoning.
- Demonstrates that the agent learns situational similarity across actions, providing insight into the black-box nature of high-dimensional RL agents.
- Relevant to interpretability research: shows that post-hoc probing of internal representations can reveal structured planning in opaque RL systems.
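A minimal sketch of what such a hidden-state probe might look like. This uses synthetic data in place of OpenAI Five's actual LSTM states; the dimensions, the binary "subgoal executed soon" label, and the linear logistic probe are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 64-dim "hidden states" and binary labels marking
# whether a subgoal (e.g., "take tower") is executed within the next N minutes.
n, d = 2000, 64
true_w = rng.normal(size=d)                      # planted linear signal
hidden_states = rng.normal(size=(n, d))
logits = hidden_states @ true_w
subgoal_labels = (logits + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Linear probe: logistic regression fit by plain gradient descent.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(hidden_states @ w + b)))   # sigmoid predictions
    grad_w = hidden_states.T @ (p - subgoal_labels) / n  # cross-entropy gradient
    grad_b = (p - subgoal_labels).mean()
    w -= lr * grad_w
    b -= lr * grad_b

preds = (1.0 / (1.0 + np.exp(-(hidden_states @ w + b)))) > 0.5
accuracy = (preds == subgoal_labels.astype(bool)).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

If a simple probe like this predicts a subgoal well above chance from hidden states recorded minutes before execution, that is the kind of evidence the paper reads as emergent planning; the real work probes a trained agent's states rather than synthetic vectors.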
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Deep Learning Revolution Era | Historical | 44.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 22, 2026 · 2 KB
Long-Term Planning and Situational Awareness in OpenAI Five
Raiman, Jonathan; Zhang, Susan; Wolski, Filip

Abstract: Understanding how knowledge about the world is represented within model-free deep reinforcement learning methods is a major challenge given the black box nature of its learning process within high-dimensional observation and action spaces. AlphaStar and OpenAI Five have shown that agents can be trained without any explicit hierarchical macro-actions to reach superhuman skill in games that require taking thousands of actions before reaching the final goal. Assessing the agent's plans and game understanding becomes challenging given the lack of hierarchy or explicit representations of macro-actions in these models, coupled with the incomprehensible nature of the internal representations. In this paper, we study the distributed representations learned by OpenAI Five to investigate how game knowledge is gradually obtained over the course of training. We also introduce a general technique for learning a model from the agent's hidden states to identify the formation of plans and subgoals. We show that the agent can learn situational similarity across actions, and find evidence of planning towards accomplishing subgoals minutes before they are executed. We perform a qualitative analysis of these predictions during the games against the DotA 2 world champions OG in April 2019.

Publication: arXiv e-prints
Pub Date: December 2019
DOI: 10.48550/arXiv.1912.06721
arXiv: arXiv:1912.06721
Bibcode: 2019arXiv191206721R
Keywords: Computer Science - Computation and Language; Computer Science - Machine Learning
Full text sources: Preprint
Resource ID:
0fa324567bde555e | Stable ID: Y2VlYjc2Nj