Detecting Disinformation and Astroturfing in Social Media: Bot Detection Methodologies and Challenges
frontiersin.org/journals/sociology/articles/10.3389/fsoc....
Relevant to AI safety discussions around misuse of generative AI for influence operations; provides sociological framing for bot detection and disinformation challenges that intersect with AI deployment governance concerns.
Metadata
Importance: 42/100 · journal article · analysis
Summary
This Frontiers in Sociology article examines methodologies for detecting coordinated inauthentic behavior, astroturfing campaigns, and automated bots in online social media environments. It analyzes the technical and sociological dimensions of disinformation spread, evaluating detection approaches and their limitations in identifying AI-generated or bot-driven influence operations.
Key Points
- Surveys current bot-detection techniques used to identify automated accounts and coordinated inauthentic behavior on social platforms
- Examines astroturfing as a form of manufactured grassroots consensus that undermines public discourse and democratic processes
- Discusses the arms race between increasingly sophisticated AI-generated disinformation and detection countermeasures
- Highlights sociological impacts of automated disinformation campaigns on public opinion and trust in institutions
- Identifies gaps in current detection methods, particularly as generative AI lowers barriers to producing convincing synthetic content
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Powered Consensus Manufacturing | Risk | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 64 KB
## CONCEPTUAL ANALYSIS article
Front. Sociol., 10 May 2023
Sec. Media Governance and the Public Sphere
Volume 8 - 2023 \| [https://doi.org/10.3389/fsoc.2023.1150753](https://doi.org/10.3389/fsoc.2023.1150753)
# Disinformation a problem for democracy: profiling and risks of consensus manipulation
- [Francesco Pira\*](https://loop.frontiersin.org/people/2184045)
- Department of Ancient and Modern Civilizations, University of Messina, Messina, Italy
## Abstract
The aim of this article is to analyze how, in the post-pandemic era of technological wars, platformization and the opacity that characterizes it can generate manipulative effects on the dynamics of consensus building. We are now in the era of the self-informative program: the hierarchical dimension of sources has vanished in parallel with the collapse of the authority, credibility, and trustworthiness of classical sources. The user now creates his or her own informative program, which gives rise to a new relationship between digital individuals. With this framework in mind, I intend to analyze the narrative of this post-pandemic phase proposed by mainstream media, using the tool of the fake news hexagon, to verify the impact and spread of fake news through social networks, where emotionalism, hate speech, and polarization are accentuated. The definition of the fake news hexagon was, in fact, the starting point for studying, through a predefined method, the dynamics of fake news proliferation in order to activate correct identification and blocking tools, in line with the Digital Transformation Institute's manifesto. [1](https://www.frontiersin.org/journals/sociology/articles/10.3389/fsoc.2023.1150753/full#fn0001) Platforms drive the process of identity construction within containers that adapt to the demands of individuals, flattening web search results, which follow the principle of confirmation bias. We witness an increasing lack of recognition of the other: the individual moves away from commitment, sacrifice, and the pursuit of a higher collective good. It becomes evident that, in the face of the collapse of authority, as this new dimension takes hold, the understanding of reality and the construction of public identity can no longer rest on the ability to decipher messages alone. Media and social multidimensionality necessitate the development of new interpretive processes.
## 1\. Introduction
In a previous study, we considered how digital society could offer an ideal framework to stimulate the growth of social capital (Pira, [2021](https://www.frontiersin.org/journals/sociology/articles/10.3389/fsoc.2023.1150753/full#B33)), if individuals were able to equip themselves with interpretive tools that would allow them to contribute to the creation of social capital. Building on the conclusions of that work, the ai
... (truncated, 64 KB total)
Resource ID: a5e16c1dcb586ab8 | Stable ID: MzZmOGQ1NT