Longterm Wiki

Conjecture - AI Safety Research Blog

Web: conjecture.dev

Conjecture is a UK-based AI safety company pursuing the cognitive emulation research agenda; their blog is a primary source for understanding CoEm and related technical safety work.

Metadata

Importance: 45/100 · Tags: blog post, homepage

Summary

Conjecture is an AI safety research company focused on cognitive emulation (CoEm) as an approach to building aligned AI systems. Their blog covers technical AI safety research, interpretability, and alignment strategies with a particular emphasis on making AI systems that reason more like humans in interpretable ways.

Key Points

  • Conjecture develops Cognitive Emulation (CoEm), an approach to alignment that aims to build AI systems mimicking human cognitive processes
  • Research focus includes interpretability, understanding AI internals, and developing safer training paradigms
  • The company takes a commercial approach to AI safety, attempting to make safety-focused AI economically viable
  • Blog covers both technical research and broader strategic thinking about AI risk and alignment
  • Conjecture was founded by members of EleutherAI and the broader AI research community

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| Conjecture | Organization | 37.0 |
| Survival and Flourishing Fund | Organization | 59.0 |

7 FactBase facts citing this source

| Entity | Property | Value | As Of |
| --- | --- | --- | --- |
| Conjecture | Founded Date | Mar 2022 | |
| Conjecture | Total Funding Raised | $25 million | Dec 2022 |
| Conjecture | Founded By | sid_CrXoCsIucX, sid_kLIpOZtU1n, sid_n0d7I3OAej | |
| Conjecture | Headquarters | London, UK | |
| Conjecture | Legal Structure | Private company | |
| Connor Leahy | Employed By | 0u4J70VqFY | Mar 2022 |
| Connor Leahy | Role / Title | CEO & Co-founder, Conjecture | Mar 2022 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 5 KB
[Product](https://www.conjecture.dev/product)

[Research](https://www.conjecture.dev/research)

[About Us](https://www.conjecture.dev/about)

[Contact](https://www.conjecture.dev/contact)

# Redefining AI Safety

Building a new AI architecture to ensure the controllable, safe development of advanced AI technology.

Request a Demo

[Learn More](https://www.conjecture.dev/research)

###### Navigating Complexities

### The challenge of AI Safety

[Learn More](https://www.conjecture.dev/alignment)

Unpredictable

AI systems generate hallucinations and inadvertently leak sensitive information, compromising their reliability.

Incoherent

AI responses are inconsistent in their outputs and reasoning, creating challenges to effective interaction.

Inept

Systems fail on basic tasks, raising significant obstacles to building reliable automation.

Uninterpretable

AI's inner workings remain opaque, making it difficult to trust the accuracy of its outputs and to debug it when it is incorrect.

###### Revolutionizing AI Deployment

### Introducing Cognitive Emulation

Building and deploying AI systems that are both powerful and safe is a major challenge in the current AI paradigm. So we are building Cognitive Emulation: a different vision for powerful AI systems, designed to follow the same trusted reasoning processes we do.

[Learn More](https://www.conjecture.dev/cognitive-emulation)

Trained for specific tasks

Build AI by component

Automate real workflows

Solve more complex problems

Capability vs. Safety

##### Scaling and the Control Problem

The AI industry is racing to scale ever-larger models without considering the risks. While capabilities advance at a rapid pace, safety lags far behind. The current imbalance underscores the urgency for innovative solutions like Cognitive Emulation as an alternative to the grave risks associated with scaling.


Amplified Risks

##### Scaling Exacerbates the Dangers


Amplifying these models through scaling only makes it harder

... (truncated, 5 KB total)
Resource ID: b7aa1f2c839b5ee8 | Stable ID: Mjg0ZTRmMG