Longterm Wiki

Elicit Blog - AI for Human Reasoning

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Elicit

Elicit is a tool developed with AI safety considerations in mind, used by researchers including those in the AI safety community to assist with literature review and evidence synthesis; this blog post explains the product philosophy.

Metadata

Importance: 35/100 · blog post · homepage

Summary

This Elicit blog post outlines the vision for using AI to augment and improve human reasoning and research workflows, particularly in scientific literature review and evidence synthesis. It describes how tools like Elicit aim to help researchers find, summarize, and evaluate evidence more reliably. The post frames AI assistance as a complement to human judgment rather than a replacement.

Key Points

  • Elicit is an AI research assistant designed to augment human reasoning rather than automate it away entirely.
  • The tool focuses on scientific literature review, helping users find relevant papers and extract structured information.
  • A core goal is improving the quality and reliability of evidence synthesis in research contexts.
  • The post reflects alignment-adjacent thinking about keeping humans in the loop during AI-assisted reasoning tasks.
  • Elicit positions itself as a tool for epistemic improvement, reducing cognitive biases in research workflows.

Cited by 1 page

Page                          Type          Quality
Elicit (AI Research Tool)     Organization  63.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 13 KB
Will We Get Wise Enough Fast Enough? - Elicit
 Will We Get Wise Enough Fast Enough?

 On AI for Human Reasoning

 Andreas Stuhlmüller

 Cofounder & CEO

 Mission

Nov 26, 2025

 1 min read

This post is based on a talk I gave at The Curve in October 2025. You can watch it below or at this link. I gave this talk to an audience that generally expects AI to become powerful quite soon. If you're skeptical of that timeline, I'd still encourage you to engage with the core question: even with today's AI tools, can we build systems that help humans reason better—and can we do it fast enough to matter?

 

 Two Visions

 Let me start with two visions. Imagine you have a question about the future—say, "What would happen if California tried to secede from the US?"

 You type it into an AI system. The system operationalizes the question in a few different ways, checks with you, and then shows you a sequence of forecasts and conditional forecasts. Here are the different trajectories: some lead to no real change, some lead to federal concessions, most (maybe 80%) lead to California still being part of the US. You can zoom into intermediate steps if you want. Everything is laid out clearly. And this works reliably for any question you have about the future.
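The trajectories described above can be thought of as a small tree of conditional forecasts. A minimal sketch, assuming a hypothetical `Outcome` data model; all branch labels and probabilities are illustrative, chosen only to loosely match the "maybe 80%" figure in the text, and are not a real Elicit feature:

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    """One branch of a forecast tree: a label and P(outcome | parent)."""
    label: str
    probability: float
    children: list["Outcome"] = field(default_factory=list)

# Illustrative trajectories for the secession question (made-up numbers).
question = Outcome("CA attempts secession", 1.0, [
    Outcome("no real change", 0.50),
    Outcome("federal concessions", 0.30),
    Outcome("secession succeeds", 0.20),
])

# Marginal probability that California remains part of the US:
# every branch except the (hypothetical) successful-secession one.
p_still_in_us = sum(o.probability for o in question.children
                    if o.label != "secession succeeds")
print(f"P(still in US) = {p_still_in_us:.2f}")  # P(still in US) = 0.80
```

Zooming into intermediate steps, as the post describes, would correspond to expanding a branch's `children` into further conditional forecasts.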

 Wouldn't that be nice?

 Here's a different vision. You're on the platform formerly known as Twitter, and you're unsure how reliable different people are. But next to every person, you see a reliability indicator: "This person is pretty truthful overall, tends to overhype particular AI developments, sometimes overestimates progress—based on 300 claims we've found and checked." You can look into the details if you want. For any person you care about, you can figure out how truthful they tend to be over time.
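The reliability indicator above could be aggregated from per-claim verdicts roughly like this. A minimal sketch under assumed data: the verdict categories, the 0.7 threshold, and the `reliability_summary` helper are all hypothetical, not a description of any real system:

```python
from collections import Counter

def reliability_summary(verdicts: list[str]) -> str:
    """Summarize checked claims; each verdict is 'true', 'overhyped', or 'false'."""
    counts = Counter(verdicts)
    n = len(verdicts)
    truth_rate = counts["true"] / n
    # Illustrative threshold for the headline label.
    label = "pretty truthful" if truth_rate >= 0.7 else "mixed reliability"
    return (f"{label} overall; {counts['overhyped']} overhyped and "
            f"{counts['false']} false out of {n} checked claims")

# e.g. 300 checked claims, mostly accurate, matching the vignette above:
verdicts = ["true"] * 240 + ["overhyped"] * 45 + ["false"] * 15
print(reliability_summary(verdicts))
```

"Look into the details" would then mean drilling down from this summary to the individual checked claims behind each count.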

 These are two specific visions I really like. But they're part of a bigger dream.

 The Dream

 The bigger dream of AI for human reasoning is something like this:

  • Get really good at predicting and exploring the future
  • Get really good at aligning on what's true as a society
  • Build tools for better-advised decision-making
  • Enable easy negotiation and trusted agreements
  • Support good governance and oversight

 In some sense, if you could do all of these, you're kind of done. That's more or less the whole problem.

 The important thing about this dream: this is not a dream about superintelligence . A lot of these tools you can build today with current AI. Many people are doing so.

 I could stop here and say: "Look, it's a beautiful dream, we're building it, it's great." But let me zoom into some details, because there's a question I keep asking myself: Will AI make us wise enough fast enough? 

 That has two parts: the "fast enough" part and the "wise enough" part.

 Fast Enough

 Let me start with "fast enough" since it's the easier component.

 A lot of people proba

... (truncated, 13 KB total)
Resource ID: 18d1cf20d9789716 | Stable ID: YzdiMzE1ZW