Longterm Wiki

Pay Attention to How You Drive (Wang et al.)

paper

Authors

Sean J. Wang·Honghao Zhu·Aaron M. Johnson

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A paper on safe and adaptive model-based reinforcement learning for autonomous off-road driving. It combines a System Identification Transformer and an Adaptive Dynamics Model with a risk-aware MPPI controller, and is relevant to AI safety through its treatment of safe behavior under uncertain dynamics and sim-to-real transfer.

Paper Details

Citations: 9 (1 influential)
Year: 2023

Metadata

arXiv preprint · primary source

Abstract

Autonomous off-road driving is challenging as risky actions taken by the robot may lead to catastrophic damage. As such, developing controllers in simulation is often desirable as it provides a safer and more economical alternative. However, accurately modeling robot dynamics is difficult due to the complex robot dynamics and terrain interactions in unstructured environments. Domain randomization addresses this problem by randomizing simulation dynamics parameters, however this approach sacrifices performance for robustness leading to policies that are sub-optimal for any target dynamics. We introduce a novel model-based reinforcement learning approach that aims to balance robustness with adaptability. Our approach trains a System Identification Transformer (SIT) and an Adaptive Dynamics Model (ADM) under a variety of simulated dynamics. The SIT uses attention mechanisms to distill state-transition observations from the target system into a context vector, which provides an abstraction for its target dynamics. Conditioned on this, the ADM probabilistically models the system's dynamics. Online, we use a Risk-Aware Model Predictive Path Integral controller (MPPI) to safely control the robot under its current understanding of the dynamics. We demonstrate in simulation as well as in multiple real-world environments that this approach enables safer behaviors upon initialization and becomes less conservative (i.e. faster) as its understanding of the target system dynamics improves with more observations. In particular, our approach results in an approximately 41% improvement in lap-time over the non-adaptive baseline while remaining safe across different environments.
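The abstract's core mechanism — attention distilling a set of state-transition observations into a single context vector — can be sketched very loosely in plain NumPy. This is an untrained, single-head toy with hypothetical names, not the paper's SIT architecture; in the actual method the projections and query are learned end-to-end:

```python
import numpy as np

def attention_context(transitions, d_ctx=8, seed=0):
    """Toy attention pooling: compress N transition feature vectors
    into one fixed-size context vector summarizing the dynamics.
    Weights are random here purely for illustration (untrained)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(transitions, dtype=float)          # (N, d_in) features
    d_in = X.shape[1]
    Wk = rng.normal(size=(d_in, d_ctx)) / np.sqrt(d_in)  # key projection
    Wv = rng.normal(size=(d_in, d_ctx)) / np.sqrt(d_in)  # value projection
    q = rng.normal(size=d_ctx)            # query; learned in practice
    K, V = X @ Wk, X @ Wv                 # (N, d_ctx) keys and values
    scores = K @ q / np.sqrt(d_ctx)       # scaled dot-product attention
    w = np.exp(scores - scores.max())     # softmax over transitions
    w /= w.sum()
    return w @ V                          # (d_ctx,) context vector
```

A downstream dynamics model conditioned on this context vector can then specialize its predictions to the target system, which is the role the ADM plays in the paper.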

Summary

This paper presents a model-based reinforcement learning approach for autonomous off-road driving that balances robustness with adaptability. The method combines a System Identification Transformer (SIT) that learns context vectors representing target dynamics and an Adaptive Dynamics Model (ADM) that probabilistically models system behavior, controlled online by a Risk-Aware Model Predictive Path Integral (MPPI) controller. The approach addresses the limitation of domain randomization by enabling safe initial behavior while becoming progressively less conservative as it gathers more observations about the target system, achieving approximately 41% improvement in lap-time over non-adaptive baselines across simulation and real-world environments.
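The online control step can be sketched as a minimal risk-aware MPPI loop: sample perturbed action sequences, roll each out through a probabilistic dynamics model, penalize uncertain outcomes, and return an exponentially cost-weighted average of the first actions. This is an illustrative NumPy sketch with assumed interfaces (`dynamics` returning a `(mean, std)` pair, a scalar `cost_fn`, the `risk_alpha` pessimism weight), not the paper's implementation:

```python
import numpy as np

def mppi_control(state, dynamics, cost_fn, horizon=15, n_samples=256,
                 lam=1.0, noise_std=0.5, risk_alpha=0.9, seed=0):
    """Risk-aware MPPI sketch: roll out sampled action sequences through
    a probabilistic model, using a pessimistic (mean + alpha * std) state
    estimate so high-uncertainty regions incur higher cost."""
    rng = np.random.default_rng(seed)
    action_dim = 2  # e.g. [throttle, steering]
    noise = rng.normal(0.0, noise_std, size=(n_samples, horizon, action_dim))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        s = np.array(state, dtype=float)
        for t in range(horizon):
            mean, std = dynamics(s, noise[k, t])  # probabilistic prediction
            s = mean + risk_alpha * std           # pessimistic rollout state
            costs[k] += cost_fn(s)
    # softmax-style weighting of sampled sequences by exponentiated cost
    weights = np.exp(-(costs - costs.min()) / lam)
    weights /= weights.sum()
    # weighted average of the first action across all sampled sequences
    return np.tensordot(weights, noise[:, 0, :], axes=1)
```

As the dynamics model's uncertainty shrinks with more target-system observations, the pessimistic term contributes less, which mirrors the paper's observation that the controller becomes less conservative (faster) over time.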

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Governance Coordination Technologies | Approach | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 47 KB
# Pay Attention to How You Drive: Safe and Adaptive Model-Based Reinforcement Learning for Off-Road Driving

Sean J. Wang, Honghao Zhu, and Aaron M. Johnson
All authors are with the Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
sjw2@andrew.cmu.edu, honghaoz@andrew.cmu.edu, amj2@andrew.cmu.edu


###### Index Terms:

model-based reinforcement learning, robust control, adaptive control, sim2real

## I Introduction

Autonomous off-road driving has the potential to revolutionize applications such as environmental monitoring, planetary exploration, and agricultural automation by enabling robots to reach remote and challenging terrains \[ [1](https://ar5iv.labs.arxiv.org/html/2310.08674#bib.bib1 ""), [2](https://ar5iv.labs.arxiv.org/html/2310.08674#bib.bib2 ""), [3](https://ar5iv.labs.arxiv.org/html/2310.08674#bib.bib3 ""), [4](https://ar5iv.labs.arxiv.org/html/2310.08674#bib.bib4 "")\]. However, developing autonomous controllers for off-road driving can be challenging due to the dangerous nature of driving over uneven, unpredictable, and unstructured terrains. Inappropriate or misjudged actions can cause substantial damage to the robot, requiring expensive and time-intensive recovery and repair efforts.

Consequently, simulation has beco

... (truncated, 47 KB total)
Resource ID: e1037aade20094ee | Stable ID: NDlmMWJlNz