Jailbreak Research
Status: active
Research on prompt injection, jailbreaking attacks, and defenses for language model safety filters.
Organizations: 4
Grants: 2
Total Funding: $50K
Cluster: Evaluation
Parent Area: AI Evaluations
Tags: evaluations, adversarial, prompt-injection
Grants (2)
| Name | Recipient | Amount | Funder | Date |
|---|---|---|---|---|
| Four-month stipend to study refusals and jailbreaks in chat LLMs under Neel Nanda as part of the MATS 5.0 extension program | Aaquib Syed | $30K | Long-Term Future Fund (LTFF) | 2024-01 |
| ETH Zurich — Research on Prompt Injection Attacks | ETH Zürich | $20K | Coefficient Giving | 2025-06 |
Funding by Funder
| Funder | Grants | Total Amount |
|---|---|---|
| Long-Term Future Fund (LTFF) | 1 | $30K |
| Coefficient Giving | 1 | $20K |