Longterm Wiki

AI Safety and Entrepreneurship - Alignment Forum

blog

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

Useful for AI safety researchers or entrepreneurs exploring how to found new organizations or ventures; provides a structured overview of the support ecosystem available within the AI safety community as of version 1.3.0.

Metadata

Importance: 42/100 · wiki page · reference

Summary

A curated directory compiled by Chris Leong on the Alignment Forum cataloging entrepreneurial support infrastructure for AI safety work, including incubation programs, venture capital sources, and organizational support services. It serves as a practical guide for founders and researchers looking to start AI safety organizations or ventures, including options for pre-idea founders.

Key Points

  • Catalogs incubation programs specifically designed for AI safety startups, including those open to founders without a defined idea yet
  • Lists venture capital and funding sources tailored to AI safety organizations and research ventures
  • Covers organizational support services that can assist new AI safety ventures in getting established
  • Aggregates relevant articles and discussions on building AI safety startups and organizations
  • Serves as a living reference for the AI safety entrepreneurship ecosystem, versioned and updated over time

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Lionheart Ventures | Organization | 50.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 15 KB

[Chris\_Leong](https://www.alignmentforum.org/users/chris_leong)

Contents:

  • [Incubation Programs:](https://www.alignmentforum.org/w/ai-safety-and-entrepreneurship?version=1.3.0#Incubation_Programs_)
  • [AIS Friendly General Program:](https://www.alignmentforum.org/w/ai-safety-and-entrepreneurship?version=1.3.0#AIS_Friendly_General_Program_)
  • [VC:](https://www.alignmentforum.org/w/ai-safety-and-entrepreneurship?version=1.3.0#VC_)
  • [Organisational Support:](https://www.alignmentforum.org/w/ai-safety-and-entrepreneurship?version=1.3.0#Organisational_Support_)
  • [Other:](https://www.alignmentforum.org/w/ai-safety-and-entrepreneurship?version=1.3.0#Other_)
  • [Dustbin - Things that previously existed](https://www.alignmentforum.org/w/ai-safety-and-entrepreneurship?version=1.3.0#Dustbin___Things_that_previously_existed)

[You are viewing version 1.3.0 of this page. Click here to view the latest version.](https://www.alignmentforum.org/w/ai-safety-and-entrepreneurship?lens=main)

# AI Safety & Entrepreneurship


Edited by [Chris\_Leong](https://www.alignmentforum.org/users/chris_leong), last updated 27th Nov 2025



### **Articles**:

[There should be more AI safety organisations](https://www.alignmentforum.org/posts/MhudbfBNQcMxBBvj8/there-should-be-more-ai-safety-orgs) (Marius Hobbhahn, 2023-09-21)

[Why does the AI Safety Community need help founding projects?](https://www.lesswrong.com/posts/tPjAgWpsQrveFECWP/ryan-kidd-s-shortform?commentId=JDcp5AhWwk9ZCv59r) (Ryan Kidd, 2024-07-12)

[AI Assurance Tech Report](https://www.aiat.report/) (2024)

[AI Safety as a YC Startup](https://www.lesswrong.com/posts/QxJFjqT6oFY3jo47s/ai-safety-as-a-yc-startup-1) (Lukas Peterson, 2025-01-08)

[Alignment can be the ‘clean energy’ of AI](https://www.lesswrong.com/posts/irxuoCTKdufEdskSk/alignment-can-be-the-clean-energy-of-ai) (Cameron Berg, Judd Rosenblatt, and Trent Hodgeson, 2025-02-22)

[AI Tools for Existential Security](https://forum.effectivealtruism.org/posts/6m6Q7jgB8iwXGYhXw/ai-tools-for-existential-security) (Lizka Vaintrob and Owen Cotton-Barratt, 2025-03-14)

[What makes an AI Startup "Net Positive" for Safety](https://www.lesswrong.com/posts/o3sEHE8cqQ5hcqgkG/what-makes-an-ai-startup-net-positive-for-safety) (Jacques Thibodeau, 2025-04-18)

[AI Safety Undervalues Founders](https://www.lesswrong.com/posts/yw9B5jQazBKGLjize/ai-safety-undervalues-founders) (Ryan Kidd, 2025-11-16)

* * *

### **Incubation Programs:**

[Catalyze Impact](https://www.catalyze-impact.org/): Catalyze is an AI Safety incubator helping individuals and teams start new AI 

... (truncated, 15 KB total)
Resource ID: a2df73f882754394 | Stable ID: NTk1OWZlYz