Longterm Wiki

2024 study at the CHI Conference

web

A peer-reviewed CHI 2024 paper providing empirical user research on how LLM-powered conversational search shapes selective exposure and opinion polarization, relevant to alignment researchers studying how sycophantic, opinion-reinforcing AI behavior manifests in real-world human-AI interaction.

Metadata

Importance: 45/100 · conference paper · primary source

Summary

A 2024 CHI Conference study examining whether LLM-powered conversational search increases selective exposure compared with conventional search, and how LLMs with opinion biases that reinforce or challenge the user's view change that effect. Across two experiments, participants engaged in more biased information querying with conversational search, and an opinionated LLM reinforcing their views exacerbated the bias. The findings carry implications for the design and governance of LLM-based search systems.

Key Points

  • Compares selective exposure in LLM-powered conversational search against conventional search across two controlled experiments (an illustrative measure is sketched below)
  • Investigates how LLMs with opinion biases that reinforce or challenge the user's view change the extent of biased information querying
  • Presented at CHI 2024, a top-tier human-computer interaction venue, lending methodological credibility
  • Bridges alignment concerns (sycophancy and echo chambers) with practical UX implications for deployed AI systems
  • Provides empirical grounding for design and policy recommendations governing LLM-powered search systems
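
The paper's headline comparison is between how often participants seek attitude-consistent information under each search condition. As a minimal sketch of how such a comparison could be operationalized, assuming per-query stance labels and hypothetical per-participant scores (this is not the paper's actual metric or analysis code):

```python
from statistics import mean

def selective_exposure_score(query_stances: list[str], user_stance: str) -> float:
    """Fraction of a participant's queries whose stance matches their prior view.

    query_stances: stance label for each query, e.g. "pro" or "con".
    user_stance:   the participant's pre-study opinion label.
    Illustrative measure only; the paper defines its own measures.
    """
    if not query_stances:
        return 0.0
    matches = sum(1 for stance in query_stances if stance == user_stance)
    return matches / len(query_stances)

# Hypothetical per-participant scores under the two search conditions.
conventional_search = [0.55, 0.60, 0.50, 0.58]
conversational_search = [0.70, 0.75, 0.68, 0.72]

# A higher mean indicates more attitude-consistent (biased) querying,
# mirroring the direction of the paper's reported finding.
print(f"conventional search:   {mean(conventional_search):.2f}")
print(f"conversational search: {mean(conversational_search):.2f}")
```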

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Epistemic Sycophancy | Risk | 60.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB

## Abstract

Large language models (LLMs) powered conversational search systems have already been used by hundreds of millions of people, and are believed to bring many benefits over conventional search. However, while decades of research and public discourse interrogated the risk of search systems in increasing selective exposure and creating echo chambers—limiting exposure to diverse opinions and leading to opinion polarization, little is known about such a risk of LLM-powered conversational search. We conduct two experiments to investigate: 1) whether and how LLM-powered conversational search increases selective exposure compared to conventional search; 2) whether and how LLMs with opinion biases that either reinforce or challenge the user’s view change the effect. Overall, we found that participants engaged in more biased information querying with LLM-powered conversational search, and an opinionated LLM reinforcing their views exacerbated this bias. These results present critical implications for the development of LLMs and conversational search systems, and the policy governing these technologies.

## 1 Introduction

Exposure to diverse viewpoints is essential for critical thinking, balanced views, and informed decision-making, and at a collective level, preventing opinion polarization or even dangerous radicalization. However, such ideals are often challenging to achieve because people have a natural tendency of selective exposure \[[16](https://dl.acm.org/doi/10.1145/3613904.3642459#Bib0016)\], or confirmation bias \[[63](https://dl.acm.org/doi/10.1145/3613904.3642459#Bib0063)\], favoring consonant information and avoiding dissonant information. In the last two decades, much research and public disco

... (truncated, 98 KB total)
Resource ID: b7b6e436dc9cbce9 | Stable ID: NGQxMjdmMW