Longterm Wiki

Author

Credibility Rating

3/5 · Good

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

Written in August 2021, this post is a notable early example of near-term AI scenario planning on the Alignment Forum, offering a grounded narrative counterpart to more abstract takeoff and timeline discussions; it is useful for understanding how forecasters conceptualized AI trajectories before GPT-4 and subsequent developments.

Metadata

Importance: 58/100 · blog post · analysis

Summary

A speculative future history written in 2021 projecting AI development trajectories from 2022 to 2026, using a conditional narrative methodology where each year's scenario grounds the next. The post explores plausible near-term harms like AI-driven propaganda and societal disruption, alongside questions about AI consciousness and agency, arguing that concrete vignettes usefully complement abstract timeline discussions.

Key Points

  • Uses a conditional methodology: each year's scenario is written first, then subsequent years are conditioned on it to maintain internal consistency.
  • Highlights near-term AI misuse risks including scaling LLMs for persuasion, propaganda, and disinformation campaigns.
  • Explores emerging societal problems from AI deployment even absent catastrophic safety failures, suggesting harms can accrue gradually.
  • Raises speculative questions about AI consciousness and agency as capabilities scale, framing them as live concerns within the timeline.
  • Argues concrete narrative vignettes complement abstract discussions of AI timelines and takeoff dynamics by making scenarios more legible.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Short AI Timeline Policy Implications | Analysis | 62.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 79 KB
[What 2026 looks like](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#) · 20 min read

Contents:

- [2022](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#2022)
- [2023](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#2023)
- [2024](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#2024)
- [That isn’t to say these AIs aren’t causing problems. Massive models are being fine-tuned to persuade/propagandize.](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#That_isn_t_to_say_these_AIs_aren_t_causing_problems__Massive_models_are_being_fine_tuned_to_persuade_propagandize__)
- [2025](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#2025)
- [2026](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#2026)
- [What about all that AI-powered propaganda mentioned earlier?](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#What_about_all_that_AI_powered_propaganda_mentioned_earlier__)
- [Now let’s talk about the development of chatbot class consciousness.](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#Now_let_s_talk_about_the_development_of_chatbot_class_consciousness_)

Sequence: [AI Timelines](https://www.alignmentforum.org/s/5Eg2urmQjA4ZNcezy)

Tags: [AI Timelines](https://www.alignmentforum.org/w/ai-timelines) · [Forecasting & Prediction](https://www.alignmentforum.org/w/forecasting-and-prediction) · [Forecasts (Specific Predictions)](https://www.alignmentforum.org/w/forecasts-specific-predictions) · [AI Persuasion](https://www.alignmentforum.org/w/ai-persuasion) · [AI Takeoff](https://www.alignmentforum.org/w/ai-takeoff) · [Fuzzies](https://www.alignmentforum.org/w/fuzzies) · [AI](https://www.alignmentforum.org/w/ai) · [Curated](https://www.alignmentforum.org/recommendations)

Karma: 124
# [What 2026 looks like](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like)

by [Daniel Kokotajlo](https://www.alignmentforum.org/users/daniel-kokotajlo?from=post_header)

6th Aug 2021

[169 comments](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#comments)

[Review by Daniel Kokotajlo](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like#a4y2hqLmsxQr4YBgf)

This was written for the [Vignettes Workshop](https://www.alignmentforum.org/posts/jusSrXEAsiqehBsmh/vignettes-workshop-ai-impacts).[\[1\]](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like-daniel-s-median-future?commentId=wL7FSxbsJs5EEZZEj) The goal is to write out a **detailed**

... (truncated, 79 KB total)
Resource ID: 9904be321ab63a9a | Stable ID: MmJkYzRiMz