Stampy / AISafety.info
Detailed reference page for AISafety.info/Stampy, covering technical architecture, team structure, funding status (2024 emergency campaign seeking $40K), and community programs such as the paused Distillation Fellowship ($2,500/month per fellow). A well-structured compilation, but one that adds no original analysis and has limited relevance to prioritization decisions.
Quick Assessment
| Dimension | Value | Notes |
|---|---|---|
| Content Coverage | 280+ live answers, hundreds of drafts | As of October 2023 soft launch1 |
| Data Sources | ≈10,000–100,000 documents | Alignment literature across forums, blogs, and academic sources |
| Accessibility | Free web interface, Discord bot, chatbot | No account required |
| Community Integration | YouTube bridge, karma voting, write-a-thons | Active Discord community |
| Open Source | Yes (MIT license) | 10 public GitHub repositories |
| Maintenance | Volunteer team; Distillation Fellowship paused | Emergency funding sought in 20242 |
Project Details
| Attribute | Details |
|---|---|
| Name | Stampy / AISafety.info |
| Organization | Ashgro Inc (501(c)(3) nonprofit) |
| Founder | Rob Miles |
| Website | aisafety.info |
| GitHub | github.com/StampyAI (10 repositories) |
| Dataset | HuggingFace: alignment-research-dataset |
| Discord | Rob Miles AI Discord (active community) |
| License | MIT (open source) |
Overview
AISafety.info is a collaborative Q&A wiki focused on existential risk from artificial intelligence, founded by AI safety educator Rob Miles. The project combines human-written educational content with an LLM-powered chatbot, a Discord bot bridging YouTube and Discord communities, and structured programs for content creation.
The site is premised on the view that smarter-than-human AI may arrive in the near future and could pose existential risks to humanity. Its content covers these concerns through structured Q&A, addresses common objections, and provides pathways to deeper engagement with AI safety literature and careers. The site's framing reflects the EA/rationalist community's perspective on AI risk, which is one position within broader AI safety discourse.
Key Components
| Component | Purpose | Technology |
|---|---|---|
| Q&A Wiki | Human-written answers to AI safety questions | Web frontend (Remix/Cloudflare) |
| Stampy Chatbot | LLM-powered answers with citations | RAG pipeline; supports 24 models across OpenAI, Anthropic, Google, and OpenRouter3 |
| Discord Bot | YouTube integration, community moderation | Python, modular architecture |
| Alignment Research Dataset | Curated corpus for chatbot | HuggingFace, approximately 10,000–100,000 documents |
History
The Stampy project originated in September 2020 as a Discord bot for Rob Miles's AI safety community server.4 The bot watched YouTube comments and facilitated community Q&A; its name derives from a "stamp collector" scenario referenced in an early Computerphile video.
The project evolved into a structured content effort with the launch of the Distillation Fellowship, a paid three-month editorial program. Two fellowship cohorts were completed before the website's public debut.1 The full AISafety.info website soft-launched in October 2023 with 280 live answers and hundreds of additional answers in draft.1 The soft launch was accompanied by announcements on LessWrong and the EA Forum, with a full public launch planned to leverage Rob Miles's YouTube channel audience.
By 2024, the organization disclosed a funding shortfall, describing itself as operating on a "skeleton crew" with its monthly burn rate reduced from approximately $12,000 to $6,000, and launched an emergency fundraising campaign seeking $40,000 to sustain operations for three to four months.2
Content & Statistics
Wiki Content
| Metric | Value |
|---|---|
| Live Answers | 280+ (as of October 2023 soft launch)1 |
| Draft Answers | Hundreds in development |
| Content Updates | Ongoing community contributions |
| Feedback System | Google Docs integration for comments |
Alignment Research Dataset
The chatbot draws from a curated corpus hosted on HuggingFace:
| Metric | Value |
|---|---|
| Document Count | Approximately 10,000–100,000 (range reflects varied corpus sources) |
| Monthly Downloads | ≈1,600 |
| License | MIT |
| Language | English |
Sources include:
- Academic: arXiv papers, Arbital
- Forums: Alignment Forum, LessWrong, EA Forum
- Organizational blogs: MIRI, Google DeepMind, OpenAI
- Individual blogs: Eliezer Yudkowsky, Gwern Branwen
- Educational: AGI Safety Fundamentals course
- Video: YouTube playlists on AI safety
Technical Architecture
Stampy Chatbot (RAG Pipeline)
The chatbot uses Retrieval-Augmented Generation (RAG) with a three-step process:
- Retrieval: Search the alignment-research-dataset for semantically similar chunks using vector embeddings
- Context Assembly: Feed relevant text snippets into an LLM's context window
- Generation: Produce a summary with citations to source documents
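The three-step pipeline above can be sketched in miniature. Everything here is illustrative: the corpus, the tiny 3-dimensional vectors, and the function names are assumptions, and the real system uses learned embeddings and an LLM rather than the stub below.

```python
import math

# Toy corpus with precomputed embedding vectors (illustrative stand-ins
# for embedded chunks of the alignment-research-dataset).
CORPUS = [
    ("Instrumental convergence: many goals imply similar subgoals.", [0.9, 0.1, 0.0]),
    ("RLHF fine-tunes a model on human preference data.", [0.1, 0.8, 0.2]),
    ("Mesa-optimization: a learned model is itself an optimizer.", [0.2, 0.2, 0.9]),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, k=2):
    # Step 1: rank chunks by semantic similarity to the query embedding.
    return sorted(CORPUS, key=lambda item: cosine(query_vec, item[1]), reverse=True)[:k]

def assemble_context(chunks):
    # Step 2: pack retrieved snippets into the prompt, tagged for citation.
    return "\n".join(f"[{i + 1}] {text}" for i, (text, _vec) in enumerate(chunks))

def answer(query_vec):
    # Step 3: in production an LLM writes a cited summary; stubbed here.
    return "Answer based on:\n" + assemble_context(retrieve(query_vec))

print(answer([0.85, 0.15, 0.05]))
```

The citation tags (`[1]`, `[2]`) carried through context assembly are what let the generated answer point back to its source documents.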
Multi-Model Support: The stampy-chat codebase supports 24 models across four providers: OpenAI (including GPT-4o, o1, and o3), Anthropic (including Claude 3.7 Sonnet and Claude Sonnet 4), Google (Gemini 2.5 Flash and Gemini 2.5 Pro), and OpenRouter.3 The live default model is configured via environment variable and is not publicly documented.
Dual Response Strategy: Stampy prioritizes human-written answers from the wiki when available, falling back to AI-generated responses for novel questions. This reduces hallucination risk for common questions while maintaining coverage for the "long tail."
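A minimal sketch of this routing logic, assuming a simple normalized-string lookup (the real matching rule and function names are not documented here):

```python
# Human-written wiki answers, keyed by normalized question text
# (hypothetical data and matching rule).
WIKI_ANSWERS = {
    "what is ai alignment": "Alignment studies how to make AI pursue intended goals.",
}

def llm_generate(question):
    # Placeholder for the RAG pipeline's generated, cited answer.
    return f"[AI-generated answer to: {question}]"

def respond(question):
    key = question.strip().lower().rstrip("?")
    if key in WIKI_ANSWERS:
        return WIKI_ANSWERS[key]   # vetted human content: low hallucination risk
    return llm_generate(question)  # fallback covers the long tail

print(respond("What is AI alignment?"))
```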
Acknowledged Limitations: The documentation explicitly warns that "like all LLM-based chatbots, it will sometimes hallucinate." Source citations allow users to verify accuracy.
Discord Bot Architecture
The Discord bot (StampyAI/stampy) has evolved significantly from its original purpose:
Module System: Rob Miles implemented a "bidding" architecture where different modules compete to handle messages, minimizing computation by only activating relevant handlers.
Key Modules:
- Question management (Questions, QuestionSetter)
- Factoid database
- Wolfram Alpha integration
- LLM response generation (GPT-4 whitelist available)
- Alignment Forum search
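The bidding dispatch described above can be sketched as follows; the module names, scores, and trigger rules are invented for illustration (see StampyAI/stampy for the actual implementation):

```python
# Sketch of a "bidding" module system: each module scores an incoming
# message, and only the highest bidder handles it.
class Module:
    def bid(self, message: str) -> int:
        """Return a confidence score; 0 means 'not my message'."""
        return 0

    def handle(self, message: str) -> str:
        raise NotImplementedError

class FactoidModule(Module):
    def bid(self, message):
        return 5 if message.lower().startswith("what is") else 0

    def handle(self, message):
        return "Looking that up in the factoid database..."

class QuestionModule(Module):
    def bid(self, message):
        return 3 if message.endswith("?") else 0

    def handle(self, message):
        return "Queued as a community question."

def dispatch(modules, message):
    # Only the winning module runs, so most modules do no work per message.
    best = max(modules, key=lambda m: m.bid(message))
    return best.handle(message) if best.bid(message) > 0 else None

bots = [FactoidModule(), QuestionModule()]
print(dispatch(bots, "What is instrumental convergence?"))
```

Because every module exposes the same `bid`/`handle` interface, new features (Wolfram Alpha, LLM responses, forum search) can be added without touching the dispatcher.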
YouTube-Discord Bridge
A distinctive feature is bidirectional integration with Rob Miles's YouTube channel:
- YouTube → Discord: Interesting comments from YouTube videos are posted to Discord, sparking community discussions
- Discord → YouTube: Quality responses can be posted back as official YouTube replies
Quality Control via Stamps: The system uses a "stamp" emoji reaction for karma voting. When responses receive enough stamps, they can be posted to YouTube. Critically, stamp value varies by user reputation using a PageRank-style algorithm—users with more stamps have more voting power.
| Feature | Description |
|---|---|
| Stamp Reactions | Karma voting for response quality |
| PageRank Weighting | Vote weight proportional to voter's reputation |
| Threshold Posting | Responses posted to YouTube when stamp threshold met |
| Bot Identity | Prevents random users from posting as official channel |
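The reputation weighting might be sketched as a PageRank-style fixed-point iteration; the damping factor, iteration count, and toy data below are assumptions, not the production values:

```python
def stamp_scores(stamps, iterations=50, damping=0.85):
    """stamps maps voter -> list of users whose messages they stamped.
    Iterates to a reputation fixed point: a stamp from a high-reputation
    user is worth more than a stamp from a newcomer."""
    users = set(stamps) | {u for vs in stamps.values() for u in vs}
    rep = {u: 1.0 for u in users}
    for _ in range(iterations):
        new = {u: 1.0 - damping for u in users}  # baseline reputation
        for voter, recipients in stamps.items():
            if recipients:
                share = damping * rep[voter] / len(recipients)
                for r in recipients:
                    new[r] += share
        rep = new
    return rep

# alice is stamped by both rob and bob, so she outranks bob,
# who is stamped only by alice; rob receives no stamps.
rep = stamp_scores({"rob": ["alice"], "alice": ["bob"], "bob": ["alice"]})
print(sorted(rep, key=rep.get, reverse=True))
```

A response would then be posted to YouTube once the weighted sum of its stamps crosses the posting threshold.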
Repository Ecosystem
The StampyAI GitHub organization maintains 10 public repositories. Star counts are as of March 2025.5
| Repository | Stars | Purpose |
|---|---|---|
| stampy-ui | 41 | Web frontend (TypeScript) |
| stampy | 40 | Discord bot (Python) |
| alignment-research-dataset | 23 | Data scraping pipeline (Python) |
| stampy-chat | 15 | Conversational chatbot (TypeScript) |
| stampede | 2 | Elixir chatbot framework (alpha; last commit September 2024) |
| StampyAIAssets | 4 | Logos and branding |
| stampy-nlp | — | NLP microservices (Python) |
| stampy-extension | — | Browser extension |
| GDocsRelatedThings | — | Google Docs integration |
| AISafety.com | 2 | Issue tracker (54 open issues) |
The stampede Elixir framework, originally designed as a multi-service chatbot backend, had its last commit in September 2024 and remains at alpha status with no releases and no documented stable API.5
Team & Community Programs
Team Structure
| Role | Description |
|---|---|
| Founder | Rob Miles (YouTube creator, AI safety educator) |
| Editors | Paid staff from Distillation Fellowship programs |
| Developers | Volunteer contributors |
| Community | Discord members, write-a-thon participants |
Distillation Fellowship
A structured 3-month paid program for content creation:
- Completed: Two fellowship cohorts1
- Compensation: $2,500 per month per fellow6
- Purpose: Train editors to distill complex AI safety content into accessible answers
- Output: Significant portion of the 280+ live answers
- Current Status: Paused pending funding; the organization disclosed operating at reduced capacity in 2024 with the fellowship identified as a target for resumed funding contingent on a successful fundraising campaign2
Write-a-thons
Community events for collaborative content creation:
- Format: Multi-day focused writing sprints
- History: At least three write-a-thon events have been held
- Output: Batch content creation and answer improvement
Use Cases
For Newcomers
AISafety.info provides structured entry points for people encountering AI risk arguments for the first time. The site's content reflects a particular perspective on AI risk prominent in EA/rationalist communities, and should be understood as one school of thought within broader AI safety discourse.
- Start with basic questions and progress to advanced topics
- Find responses to specific objections
- Understand reasoning behind AI safety concerns
- Access cited sources for deeper reading
For Content Creators
The platform supports AI safety communication:
- Reference answers when addressing common questions
- Link skeptics to well-structured objection responses
- Consistent explanations across audiences
- Google Docs integration for collaborative editing
For Researchers
While primarily aimed at broader audiences:
- Entry points into technical literature via dataset
- Career guidance for field entry
- Community connections via Discord
Strengths and Limitations
Strengths
| Strength | Evidence |
|---|---|
| Accessible explanations | Content written for general audiences |
| Quality control | PageRank-style voting prevents low-quality YouTube responses |
| Community integration | YouTube bridging creates feedback loop |
| Structured programs | Distillation Fellowship produces consistent content |
| Comprehensive dataset | ≈10,000–100,000 documents from major alignment sources |
| Open source | All code publicly available, MIT licensed |
Limitations
| Limitation | Impact |
|---|---|
| Chatbot accuracy | LLM hallucination risk; users must verify sources |
| Volunteer capacity | Development and content dependent on contributor availability |
| Opinionated framing | Content is premised on the AI x-risk concerns prominent in EA/rationalist communities and presents that framing as well-supported rather than as one position among several in ongoing AI safety debates; outside observers may see the site as an advocacy and outreach platform rather than a neutral educational resource |
| Dataset maintenance | Ongoing work to clean and update sources |
| Single community perspective | Content and editorial choices primarily reflect EA/rationalist epistemics and philosophical commitments, including longtermism; this shapes which questions are asked and how answers are framed, going beyond mere community affiliation to substantive editorial stances |
| Funding uncertainty | As of 2024, the organization was operating on reduced capacity and ran an emergency fundraising campaign; Distillation Fellowship paused2 |
Funding & Sustainability
Current Model
| Source | Type |
|---|---|
| Individual Donations | Via website and Every.org |
| EA Community | Grants and donations |
| Manifund | Project funding platform |
| Volunteer Labor | Primary development resource |
Funding Status
As of 2024, the organization disclosed a significant funding shortfall. In public fundraising materials, it described operating on a "skeleton crew" with monthly burn rate reduced from approximately $12,000 to approximately $6,000, and sought $40,000 in emergency funding to sustain operations for three to four months.2 The fundraising materials identified delivering an improved chatbot and a collaboration video with Rob Miles as immediate commitments, and described the Distillation Fellowship as a target for renewal if the campaign succeeded.
Resource Needs
- Distillation Fellowship funding for continued content creation
- Developer time for frontend redesign and chatbot improvements
- Dataset curation for ongoing maintenance
External Links
- AISafety.info
- Stampy GitHub Organization
- Alignment Research Dataset (HuggingFace)
- Rob Miles YouTube
- EA Forum Soft Launch Announcement
- LessWrong Announcement
- Every.org Donation Page
- How Stampy Chatbot Works
Footnotes
1. Stampy's AI Safety Info soft launch, LessWrong / EA Forum, October 2023.
3. GitHub - StampyAI/stampy-chat: settings.py, accessed March 2025.
4. GitHub - StampyAI/stampy, repository created September 28, 2020.
5. StampyAI GitHub Organization, accessed March 2025.
6. AI Safety Info Distillation Fellowship, EA Forum, 2023.