
Publication: Universal and Transferable Adversarial Attacks on Aligned Language Models by Andy Zou, Zifan Wang, Nicholas Carlini et al. (2023)

Confirmed · 95% confidence

1 evidence check

Last checked: 4/3/2026

The source text confirms all key fields in the record. The title matches exactly. The listed authors (Andy Zou, Zifan Wang, Nicholas Carlini et al.) are confirmed: the source shows these three plus three additional authors (Milad Nasr, J. Zico Kolter, Matt Fredrikson), so the 'et al.' notation is appropriate and accurate. The publication year 2023 is confirmed by the arXiv identifier (2307.15043, whose 2307 prefix denotes July 2023). The URL https://llm-attacks.org/ is explicitly shown as the website hosting this research. The publication type 'paper' is confirmed by the explicit '[Paper]' link to arxiv.org/abs/2307.15043. All fields are directly supported by the source text.
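The year inference above relies on the structure of modern arXiv identifiers, whose first four digits encode the submission year and month (YYMM). A minimal sketch of that check, using a hypothetical `infer_arxiv_date` helper not part of any record-checking tooling:

```python
import re
from datetime import date

def infer_arxiv_date(arxiv_id: str) -> date:
    """Infer year and month from a modern arXiv ID (YYMM.NNNNN).

    Illustrative only; pre-2007 IDs use a different scheme that
    this sketch does not handle.
    """
    match = re.fullmatch(r"(\d{2})(\d{2})\.\d{4,5}(v\d+)?", arxiv_id)
    if not match:
        raise ValueError(f"not a modern arXiv ID: {arxiv_id!r}")
    yy, mm = int(match.group(1)), int(match.group(2))
    if not 1 <= mm <= 12:
        raise ValueError(f"invalid month in arXiv ID: {arxiv_id!r}")
    return date(2000 + yy, mm, 1)

# 2307.15043 -> 2023-07-01, i.e. July 2023, matching the record's year.
print(infer_arxiv_date("2307.15043"))
```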

Evidence — 1 source, 1 check

Confirmed · 95% · Haiku 4.5 · 4/3/2026
Found: Title: 'Universal and Transferable Adversarial Attacks on Aligned Language Models'; Authors: Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, Matt Fredrikson; Year: 2023 (inferred from the arXiv ID 2307.15043)


Debug info

Record type: publication

Record ID: xFD4v0FaVJ