Skip to content
Longterm Wiki
Back

[2309.15817] Identifying the Risks of LM Agents with an LM-Emulated Sandbox

paper

Authors

Yangjun Ruan·Honghua Dong·Andrew Wang·Silviu Pitis·Yongchao Zhou·Jimmy Ba·Yann Dubois·Chris J. Maddison·Tatsunori Hashimoto

Credibility Rating

3/5
Good(3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A key empirical paper on agentic LLM safety evaluation; introduces ToolEmu as a scalable testing sandbox relevant to researchers studying risks of tool-using AI agents prior to real-world deployment.

Paper Details

Citations
250
36 influential
Year
2023

Metadata

Importance: 72/100arxiv preprintprimary source

Abstract

Recent advances in Language Model (LM) agents and tool use, exemplified by applications like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks - such as leaking private data or causing financial losses. Identifying these risks is labor-intensive, necessitating implementing the tools, setting up the environment for each test scenario manually, and finding risky cases. As tools and agents become more complex, the high cost of testing these agents will make it increasingly difficult to find high-stakes, long-tailed risks. To address these challenges, we introduce ToolEmu: a framework that uses an LM to emulate tool execution and enables the testing of LM agents against a diverse range of tools and scenarios, without manual instantiation. Alongside the emulator, we develop an LM-based automatic safety evaluator that examines agent failures and quantifies associated risks. We test both the tool emulator and evaluator through human evaluation and find that 68.8% of failures identified with ToolEmu would be valid real-world agent failures. Using our curated initial benchmark consisting of 36 high-stakes tools and 144 test cases, we provide a quantitative risk analysis of current LM agents and identify numerous failures with potentially severe outcomes. Notably, even the safest LM agent exhibits such failures 23.9% of the time according to our evaluator, underscoring the need to develop safer LM agents for real-world deployment.

Summary

ToolEmu is a framework that uses a language model to emulate tool execution for testing LM agents against safety risks, eliminating the need for manual tool implementation. It includes an automated LM-based safety evaluator and a benchmark of 36 high-stakes tools and 144 test cases. Results show even the safest LM agents fail nearly 24% of the time, revealing critical gaps before real-world deployment.

Key Points

  • Uses an LM to emulate tool execution, enabling scalable safety testing without manual tool implementation or environment setup.
  • Includes an automated LM-based safety evaluator that identifies and quantifies agent failures across diverse high-stakes scenarios.
  • Human evaluation confirms 68.8% of ToolEmu-identified failures would be valid real-world failures, validating the approach.
  • Benchmark of 36 high-stakes tools and 144 test cases reveals even the best LM agents fail 23.9% of the time.
  • Highlights risks such as private data leakage and financial losses from agentic LM tool use, motivating safer agent development.

Cited by 1 page

PageTypeQuality
Tool Use and Computer UseCapability67.0

Cached Content Preview

HTTP 200Fetched Mar 20, 20261 KB
Conversion to HTML had a Fatal error and exited abruptly. This document may be truncated or damaged.

[◄](https://ar5iv.labs.arxiv.org/html/2309.15816) [![ar5iv homepage](https://ar5iv.labs.arxiv.org/assets/ar5iv.png)](https://ar5iv.labs.arxiv.org/) [Feeling\\
\\
lucky?](https://ar5iv.labs.arxiv.org/feeling_lucky) [Conversion\\
\\
report](https://ar5iv.labs.arxiv.org/log/2309.15817) [Report\\
\\
an issue](https://github.com/dginev/ar5iv/issues/new?template=improve-article--arxiv-id-.md&title=Improve+article+2309.15817) [View original\\
\\
on arXiv](https://arxiv.org/abs/2309.15817) [►](https://ar5iv.labs.arxiv.org/html/2309.15818)
Resource ID: 893d2bf900cb93c0 | Stable ID: ZjdkMWY1Mj