Longterm Wiki

Externalizing Reasoning

Interpretability · emerging

Ensuring AI systems' chain-of-thought reasoning faithfully reflects their actual internal computations.

Organizations: 3
Grants: 3
Total Funding: $258K
Cluster: Interpretability
Parent Area: Interpretability

Tags: interpretability · chain-of-thought · faithfulness

Grants (3)

| Name | Recipient | Amount | Funder | Date |
|---|---|---|---|---|
| Meridian — Avoiding Encoded Reasoning in LLMs | Meridian | $245K | Coefficient Giving | 2025-06 |
| Mechanistic Interpretability research for unfaithful chain-of-thought (1 month) | Iván Arcuschin Moreno | $11K | Manifund | 2024-11-19 |
| Unprompted Unfaithful Chain of Thought Dataset Project | Robert Krzyzanowski | $2K | Manifund | 2025-01-04 |

Funding by Funder

| Funder | Grants | Total Amount |
|---|---|---|
| Coefficient Giving | 1 | $245K |
| Manifund | 2 | $13K |