
Jailbreak Research

Status: Active

Research on prompt injection, jailbreak attacks, and defenses for language-model safety filters.

Organizations: 4
Grants: 2
Total Funding: $50K
Cluster: Evaluation
Parent Area: AI Evaluations

Tags: evaluations, adversarial, prompt-injection

Grants (2)

| Name | Recipient | Amount | Funder | Date |
|------|-----------|--------|--------|------|
| 4-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program | Aaquib Syed | $30K | Long-Term Future Fund (LTFF) | 2024-01 |
| ETH Zurich — Research on Prompt Injection Attacks | ETH Zürich | $20K | Coefficient Giving | 2025-06 |

Funding by Funder

| Funder | Grants | Total Amount |
|--------|--------|--------------|
| Long-Term Future Fund (LTFF) | 1 | $30K |
| Coefficient Giving | 1 | $20K |