Center for Humane Technology
Web: humanetech.com · humanetech.com/research
CHT bridges near-term AI harms (manipulation, mental health) and broader AI safety concerns; useful for governance and societal-impact perspectives, though more policy/advocacy focused than technical AI safety research.
Metadata
Importance: 42/100 · homepage
Summary
The Center for Humane Technology (CHT) researches the societal harms of persuasive technology, social media, and AI systems, advocating for design and policy reforms that prioritize human well-being over engagement metrics. Their work examines how technology platforms exploit psychological vulnerabilities, contributing to polarization, mental health crises, and erosion of democratic norms. They produce reports, frameworks, and public education materials aimed at policymakers, technologists, and the general public.
Key Points
- Investigates how algorithmic recommendation systems and persuasive design manipulate user behavior and amplify harmful content
- Examines the societal risks of AI and social media, including polarization, misinformation, and mental health degradation
- Produces policy recommendations and frameworks for technology governance and responsible AI deployment
- Founded by former tech insiders (including Tristan Harris), bringing an industry perspective to tech critique
- Connects near-term harms from current AI/tech systems to longer-term risks of increasingly powerful AI
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Induced Cyber Psychosis | Risk | 37.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 0 KB
### Sorry, the page can't be found The page you were looking for appears to have been moved, deleted or does not exist.
Resource ID: aefa1c5f656ee68c | Stable ID: ZGIyMmIyM2