
Improving Factuality and Reasoning in Language Models through Multiagent Debate

paper

Authors

Yilun Du · Shuang Li · Antonio Torralba · Joshua B. Tenenbaum · Igor Mordatch

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

This paper proposes using multiagent debate among LLM instances to improve factuality and reasoning quality, directly addressing AI safety concerns about hallucination and error correction through emergent collaborative mechanisms.

Paper Details

Citations: 1,410 (190 influential)
Year: 2023

Metadata

arXiv preprint · primary source

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities in language generation, understanding, and few-shot learning in recent years. An extensive body of work has explored how their performance may be further improved through the tools of prompting, ranging from verification, self-consistency, or intermediate scratchpads. In this paper, we present a complementary approach to improve language responses where multiple language model instances propose and debate their individual responses and reasoning processes over multiple rounds to arrive at a common final answer. Our findings indicate that this approach significantly enhances mathematical and strategic reasoning across a number of tasks. We also demonstrate that our approach improves the factual validity of generated content, reducing fallacious answers and hallucinations that contemporary models are prone to. Our approach may be directly applied to existing black-box models and uses identical procedure and prompts for all tasks we investigate. Overall, our findings suggest that such "society of minds" approach has the potential to significantly advance the capabilities of LLMs and pave the way for further breakthroughs in language generation and understanding.

Summary

This paper proposes a multiagent debate approach where multiple LLM instances propose different responses and iteratively debate their reasoning over multiple rounds to reach a consensus answer. The method is applied to existing black-box models without task-specific modifications. The authors demonstrate that this "society of minds" approach significantly improves performance on mathematical and strategic reasoning tasks, while also enhancing factual validity by reducing hallucinations and fallacious answers. The technique represents a complementary approach to existing prompting methods like self-consistency and chain-of-thought reasoning.
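Because the procedure treats the model as a black box and uses the same prompts for every task, the debate loop is straightforward to reproduce. The sketch below is a minimal illustration, not the authors' code: `query_model` is a hypothetical stand-in for whatever LLM API is available, and the prompt wording is paraphrased from the paper's description rather than copied from it.

```python
from typing import Callable

def multiagent_debate(
    question: str,
    query_model: Callable[[str], str],  # hypothetical black-box LLM call: prompt -> text
    num_agents: int = 3,
    num_rounds: int = 2,
) -> list[str]:
    """Run a simple propose-then-debate loop and return each agent's final answer."""
    # Round 0: each agent independently proposes an answer with its reasoning.
    answers = [
        query_model(f"Answer the following question, showing your reasoning:\n{question}")
        for _ in range(num_agents)
    ]

    # Debate rounds: each agent reads the other agents' answers and revises its own.
    for _ in range(num_rounds):
        revised = []
        for i in range(num_agents):
            others = "\n\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (
                f"Question: {question}\n\n"
                f"These are answers from other agents:\n{others}\n\n"
                "Using these responses as additional advice, give an updated "
                "answer with your reasoning."
            )
            revised.append(query_model(prompt))
        answers = revised

    return answers  # a single consensus answer can be taken, e.g., by majority vote
```

A single final answer is then extracted from the last round (for short-answer tasks, e.g., by majority vote over the agents' responses); the number of agents and debate rounds trades accuracy against the cost of additional model calls.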

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Scalable Oversight | Research Area | 68.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 60 KB
# Improving Factuality and Reasoning in Language Models through Multiagent Debate

Yilun Du, MIT CSAIL, yilundu@mit.edu

Shuang Li, MIT CSAIL, lishuang@mit.edu

Antonio Torralba, MIT CSAIL, torralba@mit.edu

Joshua B. Tenenbaum, MIT CSAIL, BCS, CBMM, jbt@mit.edu

Igor Mordatch, Google Brain, imordatch@google.com

###### Abstract

Large language models (LLMs) have demonstrated remarkable capabilities in language generation, understanding, and few-shot learning in recent years. An extensive body of work has explored how their performance may be further improved through the tools of prompting, ranging from verification, self-consistency, or intermediate scratchpads.
In this paper, we present a complementary approach to improve language responses
where multiple language model instances propose and debate their individual responses and reasoning processes over multiple rounds to arrive at a common final answer.
Our findings indicate that this approach significantly enhances mathematical and strategic reasoning across a number of tasks. We also demonstrate that our approach improves the factual validity of generated content, reducing fallacious answers and hallucinations that contemporary models are prone to. Our approach may be directly applied to existing black-box models and uses identical procedure and prompts for all tasks we investigate.
Overall, our findings suggest that such "society of minds" approach has the potential to significantly advance the capabilities of LLMs and pave the way for further breakthroughs in language generation and understanding. Project website at [https://composable-models.github.io/llm_debate/](https://composable-models.github.io/llm_debate/).

## 1 Introduction

Large language models (LLMs) have demonstrated remarkable language generation, understanding, and few-shot learning capabilities in recent years. These methods are trained on a massive corpus of text on the internet, where the quality and accuracy of extracted natural language may not be ensured. Thus, current models may suffer from confidently hallucinating facts or making implausible jumps in chains of reasoning. An extensive body of recent work has focused on improving factual accuracy and reasoning in language models. These range from prompting models with few or zero-shot chain-of-thought demonstrations, use of verification, self-consistency, or intermediate scratchpads.

We note that these techniques are applied over a single model instance. Instead, we propose a complementary approach inspired by _The Society of Mind_ [[19](https://ar5iv.labs.arxiv.org/html/2305.14325#bib.bib19)] and multi-agent settings, where multiple language model instances (or agents) individually propose and jointly debate their responses and reasoning processes to arrive at a single common answer. More specifically, given a query, multiple instances of a language model first generate individual candidate answers to a query. Then each individual model instance reads a

... (truncated, 60 KB total)
Resource ID: 992190a4815d67ed | Stable ID: ZWFmOTA5NG