Longterm Wiki

Stanford Persuasive Technology Lab

Web: captology.stanford.edu/

Relevant to AI safety discussions around manipulation, deceptive alignment, and the ethics of AI systems designed to influence human behavior; provides academic grounding for concerns about persuasive AI.

Metadata

Importance: 52/100

Summary

The Stanford Persuasive Technology Lab studies how computers, mobile phones, and digital technology can be designed to change people's attitudes and behaviors. Founded by B.J. Fogg, it pioneered the field of 'captology' (computers as persuasive technologies), examining both beneficial applications and manipulation risks. Its research is foundational for understanding how AI-driven systems can exploit psychological vulnerabilities.

Key Points

  • Coined the term 'captology' to describe the study of computers as persuasive technologies influencing human behavior and attitudes
  • Research directly informs understanding of how recommendation algorithms, social media, and AI systems can manipulate users
  • Founded by B.J. Fogg, whose behavior model underlies many modern persuasive design patterns used in tech products
  • Explores ethical boundaries between legitimate persuasion and manipulation, relevant to AI alignment and deployment concerns
  • Alumni have gone on to design persuasive systems at major tech companies, raising questions about responsible application
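Fogg's behavior model, mentioned above, holds that a behavior occurs only when motivation, ability, and a prompt converge at the same moment (often summarized as B = MAP). A minimal sketch of that idea follows; the numeric thresholds and the product form are illustrative assumptions for this example, not Fogg's published formulation:

```python
# Illustrative sketch of the Fogg Behavior Model (B = MAP):
# a behavior occurs when a prompt arrives while the combination of
# motivation and ability clears an "action line". The product form
# and the 0.25 threshold below are assumptions chosen for illustration.

def behavior_occurs(motivation: float, ability: float, prompt: bool,
                    action_line: float = 0.25) -> bool:
    """motivation and ability are normalized to [0, 1]."""
    if not prompt:
        # Without a prompt, no behavior occurs regardless of M and A.
        return False
    return motivation * ability >= action_line

# High motivation can compensate for low ability, and vice versa:
print(behavior_occurs(0.9, 0.3, prompt=True))   # True  (0.27 >= 0.25)
print(behavior_occurs(0.9, 0.3, prompt=False))  # False (no prompt)
print(behavior_occurs(0.2, 0.2, prompt=True))   # False (0.04 < 0.25)
```

This framing also illustrates why persuasive design focuses so heavily on well-timed prompts (notifications, nudges): a prompt is the one factor a system controls directly and can deliver precisely when motivation and ability are high.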

Cited by 1 page

Page | Type | Quality
AI-Induced Cyber Psychosis | Risk | 37.0