Longterm Wiki

Research Publications

web
conjecture.dev · conjecture.dev/research

This is the research index page for Conjecture, a London-based AI safety startup. It is useful for tracking their evolving CoEm agenda and their published cross-organizational alignment discussions, though the individual linked papers carry more depth.

Metadata

Importance: 45/100 · homepage

Summary

Conjecture's research hub presents their primary safety agenda centered on Cognitive Emulation (CoEm), an AI architecture designed to bound system capabilities and make reasoning interpretable and controllable. Rather than directly solving alignment for AGI, they propose building predictably boundable intermediate systems as a simpler near-term step. The page indexes key publications including their foundational CoEm proposal, a roadmap for 'Cognitive Software,' and cross-organizational alignment discussions.

Key Points

  • Cognitive Emulation (CoEm) is Conjecture's core safety proposal: build predictably boundable AI systems rather than attempting direct AGI alignment.
  • The agenda positions CoEm as a simpler, more tractable intermediate step toward full alignment solutions, not a final answer.
  • Conjecture critiques current AI scaling paradigms as lacking any known safety techniques, framing this as a critical and underappreciated risk.
  • Research includes cross-organizational dialogue, e.g., a published discussion between Paul Christiano (ARC) and Conjecture's Gabriel Alfour on alignment cruxes.
  • Founded by EleutherAI alumni including Connor Leahy, Sid Black, and Gabriel Alfour, with notable VC backing from AI and tech figures.

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| Conjecture | Organization | 37.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 9 KB
[Product](https://www.conjecture.dev/product)

[Research](https://www.conjecture.dev/research)

[About Us](https://www.conjecture.dev/about)

[Contact](https://www.conjecture.dev/contact)

## Ensuring a good future with advanced AI systems

Our research agenda focuses on building Cognitive Emulation - an AI architecture that bounds systems' capabilities and makes them reason in ways that humans can understand and control.

### Cognitive Emulation Articles

![](https://framerusercontent.com/images/hYIIOCGBWXC8itzDdHJtA7N8czo.png)

Oct 19, 2023

#### An Introduction to Cognitive Emulation

All human labor, from writing an email to designing a spaceship, is built on cognition. Somewhere along the way, _human intuition_ is used in a _cognitive algorithm_ to form a meaningful idea or perform a meaningful task. Over time, these ideas and actions accrue in communication, formulating plans, researching opportunities, and building solutions. Society is entirely composed of these patterns, and Cognitive Emulation is built to learn and _emulate_ them.

[Learn More](https://www.conjecture.dev/cognitive-emulation)

![](https://framerusercontent.com/images/zSuyWVG1hI5U1y1RmPs5clb7Q.png)

Dec 2, 2024

#### Conjecture: A Roadmap for Cognitive Software and A Humanist Future of AI

An overview of Conjecture's approach to "Cognitive Software," and our build path towards a good future.

[Learn More](https://www.conjecture.dev/research/conjecture-a-roadmap-for-cognitive-software-and-a-humanist-future-of-ai)

![Cognitive Emulation: A Naive AI Safety Proposal](https://framerusercontent.com/images/Z8ovddqBQ3MY1BiCxaWiAVibsNM.png)

Feb 25, 2023

#### Cognitive Emulation: A Naive AI Safety Proposal

This post serves as a signpost for Conjecture's new primary safety proposal and research direction, which we call Cognitive Emulation (or "CoEm"). The goal of the CoEm agenda is to build predictably boundable systems, not directly aligned AGIs. We believe the former to be a far simpler and more useful step towards a full alignment solution.

[Learn More](https://www.conjecture.dev/research/cognitive-emulation-a-naive-ai-safety-proposal)

![conjecture logo](https://framerusercontent.com/images/tFMsWtPIbJSUs3zqu75NESsOdpE.png)

Apr 8, 2022

#### We Are Conjecture, A New Alignment Research Startup

Conjecture is a new alignment startup founded by Connor Leahy, Sid Black, and Gabriel Alfour, which aims to scale alignment research. We have VC backing from, among others, Nat Friedman, Daniel Gross, Patrick and John Collison, Arthur Breitman, Andrej Karpathy, and Sam Bankman-Fried. Our founders and early staff are mostly EleutherAI alumni and previously independent researchers like Adam Shimi. We are located in London.

[Learn More](https://www.conjecture.dev/research/we-are-conjecture-a-new-alignment-research-startup)

### Alignment Articles

![](https://framerusercontent.com/images/AuLvIKp1ovfBv4fzzWubZykPRA.png)

Oct 21, 2023

#### Alignment

Today, one paradigm dominates the AI industry: 

... (truncated, 9 KB total)
Resource ID: 296aaf722d89ca8c | Stable ID: ODdhMTRjZG