Longterm Wiki

Big Data & Society (2025)

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: SAGE Journals

Empirical social science study relevant to AI safety concerns around LLM hallucination, user over-trust, and the epistemic harms of deploying AI systems in information-seeking contexts without adequate verification mechanisms.

Metadata

Importance: 52/100 · journal article · primary source

Summary

This experimental study investigates how ChatGPT can produce a 'Chat-Chamber' effect, in which LLMs provide incorrect but proattitudinal information that users fail to verify. Using a two-phase experiment comparing ChatGPT 3.5 and Google Search on LGBTQIA+ representation data in India and Ireland, the researchers found that hallucinated, belief-confirming outputs went largely unverified and thereby reinforced participants' existing beliefs.

Key Points

  • Introduces the 'Chat-Chamber' concept: LLMs generating incorrect but belief-confirming information that users accept without cross-checking
  • Experimental design used ChatGPT 3.5 vs. Google Search to compare information retrieval on LGBTQIA+ elected representatives in India and Ireland
  • LLM hallucinations are not random errors but can be systematically proattitudinal, amplifying echo-chamber and filter-bubble dynamics
  • Semi-structured interviews revealed users rarely verify ChatGPT outputs, increasing susceptibility to reinforced misinformation
  • Findings raise concerns about AI deployment in information-seeking contexts where epistemic accuracy and user trust are critical

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Epistemic Sycophancy | Risk | 60.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB

[Big Data & Society](https://journals.sagepub.com/home/BDS)

[Impact Factor: **5.9** / 5-Year Impact Factor: **9.0**](https://journals.sagepub.com/metrics/bds)


Contents

- [Abstract](https://journals.sagepub.com/doi/10.1177/20539517241306345#abstract)
- [Introduction](https://journals.sagepub.com/doi/10.1177/20539517241306345#sec-1)
- [Previous work](https://journals.sagepub.com/doi/10.1177/20539517241306345#sec-2)
- [Objectives](https://journals.sagepub.com/doi/10.1177/20539517241306345#sec-3)
- [Data and methods](https://journals.sagepub.com/doi/10.1177/20539517241306345#sec-4)
- [Results](https://journals.sagepub.com/doi/10.1177/20539517241306345#sec-5)
- [Discussion](https://journals.sagepub.com/doi/10.1177/20539517241306345#sec-6)
- [Conclusion](https://journals.sagepub.com/doi/10.1177/20539517241306345#sec-7)
- [Declaration of conflicting interests](https://journals.sagepub.com/doi/10.1177/20539517241306345#conflict)
- [Funding](https://journals.sagepub.com/doi/10.1177/20539517241306345#funding)
- [ORCID iD](https://journals.sagepub.com/doi/10.1177/20539517241306345#orcid)
- [References](https://journals.sagepub.com/doi/10.1177/20539517241306345#bibliography)


## Abstract

This study investigates the potential for ChatGPT to trigger a media effect that sits at the intersection of echo-chamber communication and filter bubbles. We devised a two-phase, two-stage experimental design with ChatGPT 3.5 (treatment group) and Google search engine (control group) by asking participants to find out how many LGBTQIA+ individuals served as elected representatives in India (first phase) and Ireland (second phase). The similar trajectories of legal reforms observed in these countries, and their small number of LGBTQIA+ elected representatives, allowed us to identify the fault lines in ChatGPT's creation of knowledge and information around LGBTQ issues. We followed the experimental study with semi-structured interviews to identify whether the chatbot reinforced previously held beliefs and whether users cross-checked the information provided by ChatGPT. Our results show that Large Language Models may provide incorrect but proattitudinal information that remains unchecked and unverified by the users, an effect we refer to as Chat-Chamber. We conclude with a discussion of our findings and recommendations for future research in the area.

## Introduction

The public release of OpenAI's generative artificial intelligence chatbot ChatGPT in November 2022 was met with wall-to-wall coverage in the press and reports that it cou

... (truncated, 98 KB total)
Resource ID: 43803a2e241204fc | Stable ID: ODRiYTNkYT