Debate, Deliberate, Decide (D3): Cost-Aware Adversarial Framework
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Presents D3, a multi-agent adversarial framework that uses structured debate among role-specialized agents to improve the reliability and interpretability of LLM evaluation, addressing AI safety concerns about consistent and transparent model assessment.
Paper Details
Metadata
Abstract
The evaluation of Large Language Models (LLMs) remains challenging due to inconsistency, bias, and the absence of transparent decision criteria in automated judging. We present Debate, Deliberate, Decide (D3), a cost-aware, adversarial multi-agent framework that orchestrates structured debate among role-specialized agents (advocates, a judge, and an optional jury) to produce reliable and interpretable evaluations. D3 instantiates two complementary protocols: (1) Multi-Advocate One-Round Evaluation (MORE), which elicits k parallel defenses per answer to amplify signal via diverse advocacy, and (2) Single-Advocate Multi-Round Evaluation (SAMRE) with budgeted stopping, which iteratively refines arguments under an explicit token budget and convergence checks. We develop a probabilistic model of score gaps that (i) characterizes reliability and convergence under iterative debate and (ii) explains the separation gains from parallel advocacy. Under mild assumptions, the posterior distribution of the round-r gap concentrates around the true difference and the probability of mis-ranking vanishes; moreover, aggregating across k advocates provably increases expected score separation. We complement theory with a rigorous experimental suite across MT-Bench, AlignBench, and AUTO-J, showing state-of-the-art agreement with human judgments (accuracy and Cohen's kappa), reduced positional and verbosity biases via anonymization and role diversification, and a favorable cost-accuracy frontier enabled by budgeted stopping. Ablations and qualitative analyses isolate the contributions of debate, aggregation, and anonymity. Together, these results establish D3 as a principled, practical recipe for reliable, interpretable, and cost-aware LLM evaluation.
Summary
This paper introduces D3 (Debate, Deliberate, Decide), a multi-agent adversarial framework for evaluating Large Language Models with improved reliability and interpretability. The framework uses role-specialized agents (advocates, judges, and optional juries) to conduct structured debates, offering two complementary protocols: MORE for parallel advocacy and SAMRE for iterative refinement with token budgets. The authors provide theoretical analysis showing that debate converges to true score differences and parallel advocacy increases score separation, validated through experiments on MT-Bench, AlignBench, and AUTO-J that demonstrate state-of-the-art agreement with human judgments while reducing biases and enabling cost-efficient evaluation.
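The two protocols summarized above can be sketched as a simple orchestration loop. Everything below is illustrative: the agent calls are deterministic stubs (a real system would query LLMs for defenses and judgments), and the function names, per-round token cost, and convergence threshold are assumptions, not the paper's implementation.

```python
import statistics

def advocate(answer: str, seed: int) -> str:
    # Stub: a real advocate agent would generate a defense of `answer`.
    return f"defense-{seed} of {answer!r}"

def judge(defenses_a, defenses_b) -> float:
    # Stub: a real judge would score the debate; returns a gap where
    # positive values favor answer A.
    return 0.1 * (len(defenses_a) - len(defenses_b)) + 0.3

def more(answer_a: str, answer_b: str, k: int = 3) -> float:
    """Multi-Advocate One-Round Evaluation: k parallel defenses per answer."""
    defenses_a = [advocate(answer_a, i) for i in range(k)]
    defenses_b = [advocate(answer_b, i) for i in range(k)]
    return judge(defenses_a, defenses_b)

def samre(answer_a: str, answer_b: str,
          token_budget: int = 1000, eps: float = 0.05):
    """Single-Advocate Multi-Round Evaluation with budgeted stopping."""
    gaps, spent = [], 0
    round_cost = 200  # assumed per-round token cost
    while spent + round_cost <= token_budget:
        defenses_a = [advocate(answer_a, len(gaps))]
        defenses_b = [advocate(answer_b, len(gaps))]
        gaps.append(judge(defenses_a, defenses_b))
        spent += round_cost
        # Convergence check: stop early once the gap estimate stabilizes.
        if len(gaps) >= 2 and abs(gaps[-1] - gaps[-2]) < eps:
            break
    return statistics.mean(gaps), spent
```

The budgeted-stopping loop is what yields the cost-accuracy trade-off the paper highlights: rounds stop as soon as the score gap stabilizes or the token budget is exhausted, whichever comes first.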
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Scalable Oversight | Research Area | 68.0 |
Cached Content Preview
# Adversarial Multi-Agent Evaluation of Large Language Models through Iterative Debates
Chaithanya Bandi
Martian
chaithanya@withmartian.com
Abir Harrasse
EMINES, UM6P
Abir@withmartian.com
###### Abstract
This paper explores optimal architectures for evaluating the outputs of large language models (LLMs) using LLMs themselves. We propose a novel framework that interprets LLMs as advocates within an ensemble of interacting agents, allowing them to defend their answers and reach conclusions through a judge and jury system. This approach offers a more dynamic and comprehensive evaluation process compared to traditional human-based assessments or automated metrics. We discuss the motivation behind this framework, its key components, and comparative advantages. We also present a probabilistic model to evaluate the error reduction achieved by iterative advocate systems. Finally, we outline experiments to validate the effectiveness of multi-advocate architectures and discuss future research directions.
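The paper's probabilistic claim about error reduction can be illustrated with a toy Monte Carlo simulation: if each of k advocates yields a noisy score with true gap delta, averaging across advocates shrinks the variance of the estimated gap, so the probability of mis-ranking the truly better answer falls as k grows. The parameter values and the Gaussian noise model below are assumptions for illustration, not the paper's model.

```python
import random

def misrank_rate(k: int, delta: float = 0.5, sigma: float = 1.0,
                 trials: int = 20000, seed: int = 0) -> float:
    """Estimate P(estimated gap <= 0) when the true gap is delta > 0."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        # Average the noisy gap reported by each of the k advocates.
        gap = sum(rng.gauss(delta, sigma) for _ in range(k)) / k
        if gap <= 0:  # the truly better answer would be ranked lower
            errors += 1
    return errors / trials

# The variance of the averaged gap scales as sigma^2 / k, so the
# mis-ranking rate drops monotonically (in expectation) as k grows.
```

This mirrors the qualitative result stated in the abstract: aggregating across k advocates increases expected score separation relative to the noise, driving the mis-ranking probability toward zero.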
## 1 Introduction
The rapid advancement of large language models (LLMs) has revolutionized the field of natural language processing, enabling the development of increasingly sophisticated AI systems capable of generating human-like text, engaging in dialogue, and performing complex language tasks \[ [5](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib5 "")\]. As these models grow in size and capability, the challenge of accurately evaluating their performance and aligning their outputs with human preferences has become increasingly critical \[ [3](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib3 ""), [15](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib15 ""), [49](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib49 "")\].
Traditional evaluation methods, such as human assessments and automated metrics, often struggle to capture the nuances and complexities of LLM outputs, leading to a gap between model performance and user expectations \[ [7](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib7 ""), [17](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib17 ""), [24](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib24 "")\]. Human evaluations are time-consuming, expensive, and prone to inconsistency and bias \[ [12](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib12 ""), [27](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib27 "")\], while automated metrics frequently fail to align with human judgments, particularly in open-ended generation tasks \[ [29](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib29 ""), [13](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib13 ""), [22](https://ar5iv.labs.arxiv.org/html/2410.04663#bib.bib22 "")\].
To address these challenges, we propose a novel framework for evaluating LLM outputs using LLMs themselves as interacting agents in a courtroom-inspired, multi-agent system. Our approach draws inspiration from various fields, including decision theory, economics, psychology, le
... (truncated, 86 KB total)