Longterm Wiki

Scalable watermarking for identifying large language model outputs

paper

Authors

Sumanth Dathathri·Abigail See·Sumedh Ghaisas·Po-Sen Huang·Rob McAdam·Johannes Welbl·Vandana Bachani·Alex Kaskasoli·Robert Stanforth·Tatiana Matejovicova·Jamie Hayes·Nidhi Vyas·Majd Al Merey·Jonah Brown-Cohen·Rudy Bunel·Borja Balle·Taylan Cemgil·Zahra Ahmed·Kitty Stacpoole·Ilia Shumailov·Ciprian Baetu·Sven Gowal·Demis Hassabis·Pushmeet Kohli

Credibility Rating

5/5
Gold (5)

Gold standard. Rigorous peer review, high editorial standards, and strong institutional reputation.

Rating inherited from publication venue: Nature

Presents SynthID-Text, a practical watermarking technique for identifying LLM-generated text in production systems, addressing AI safety concerns around detecting synthetic content and potential misuse of generated text.

Paper Details

Citations: 83
Year: 2024
Methodology: peer-reviewed
Categories: Nature

Metadata

journal article · primary source

Summary

This paper introduces SynthID-Text, a production-ready watermarking scheme for identifying text generated by large language models. The method preserves text quality while enabling high detection accuracy with minimal computational overhead, requiring only modifications to the sampling procedure without affecting model training. The authors demonstrate the scheme's effectiveness through evaluations across multiple LLMs and a large-scale live experiment with nearly 20 million Gemini responses, showing improved detectability compared to existing methods while maintaining text quality and model capabilities.
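The generation-time mechanism can be illustrated with a toy version of tournament sampling, the approach SynthID-Text is publicly described as using: sample several candidate tokens from the LLM's distribution, then run pairwise "tournaments" decided by a keyed pseudorandom function, so the chosen token statistically favors high g-values. Everything below (the `g_value` hash construction, the layer count, the context window, the scoring rule) is a simplified assumption for illustration, not the paper's implementation:

```python
import hashlib
import random


def g_value(key: int, context: tuple, token: int, layer: int) -> int:
    """Pseudorandom watermarking function g in {0, 1}, keyed on the secret
    watermark key, the recent context and the candidate token (illustrative)."""
    h = hashlib.sha256(f"{key}|{context}|{token}|{layer}".encode()).digest()
    return h[0] & 1


def tournament_sample(probs: dict, context: tuple, key: int,
                      layers: int = 3, rng=random) -> int:
    """Illustrative tournament sampling: draw 2**layers candidates from the
    model's distribution, then in each layer pit candidates pairwise and keep
    the one with the higher g-value (ties broken uniformly at random)."""
    tokens, weights = list(probs), list(probs.values())
    candidates = rng.choices(tokens, weights=weights, k=2 ** layers)
    for layer in range(layers):
        winners = []
        for a, b in zip(candidates[::2], candidates[1::2]):
            ga = g_value(key, context, a, layer)
            gb = g_value(key, context, b, layer)
            if ga != gb:
                winners.append(a if ga > gb else b)
            else:
                winners.append(rng.choice([a, b]))
        candidates = winners
    return candidates[0]


def score(text_tokens: list, key: int, layers: int = 3, window: int = 4) -> float:
    """Detection sketch: mean g-value over all tokens and layers.
    Watermarked text skews well above the 0.5 expected for random text."""
    total = count = 0
    for i, tok in enumerate(text_tokens):
        context = tuple(text_tokens[max(0, i - window):i])
        for layer in range(layers):
            total += g_value(key, context, tok, layer)
            count += 1
    return total / max(count, 1)
```

Note that `score` touches only the token sequence and the watermark key, mirroring the summary's point that detection does not require running the underlying LLM.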

Cited by 2 pages

Page | Type | Quality
--- | --- | ---
AI Content Authentication | Approach | 58.0
AI Model Steganography | Risk | 91.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 71 KB
Scalable watermarking for identifying large language model outputs | Nature 
Subjects: Computer science · Information technology

 Abstract

Large language models (LLMs) have enabled the generation of high-quality synthetic text, often indistinguishable from human-written content, at a scale that can markedly affect the nature of the information ecosystem [1-3]. Watermarking can help identify synthetic text and limit accidental or deliberate misuse [4], but has not been adopted in production systems owing to stringent quality, detectability and computational efficiency requirements. Here we describe SynthID-Text, a production-ready text watermarking scheme that preserves text quality and enables high detection accuracy, with minimal latency overhead. SynthID-Text does not affect LLM training and modifies only the sampling procedure; watermark detection is computationally efficient, without using the underlying LLM. To enable watermarking at scale, we develop an algorithm integrating watermarking with speculative sampling, an efficiency technique frequently used in production systems [5]. Evaluations across multiple LLMs empirically show that SynthID-Text provides improved detectability over comparable methods, and standard benchmarks and human side-by-side ratings indicate no change in LLM capabilities. To demonstrate the feasibility of watermarking in large-scale-production systems, we conducted a live experiment that assessed feedback from nearly 20 million Gemini [6] responses, again confirming the preservation of text quality. We hope that the availability of SynthID-Text [7] will facilitate further development of watermarking and responsible use of LLM systems.
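The abstract's claim that "watermark detection is computationally efficient, without using the underlying LLM" can be made concrete with a generic frequentist detector. As a hedged sketch: if, under the null hypothesis of unwatermarked text, each binary g-value behaves like a fair coin, a one-sided z-test on the mean g-value yields a p-value. This is an illustrative baseline only; the paper's actual detector uses its own, more powerful scoring function:

```python
from math import erfc, sqrt


def watermark_pvalue(mean_g: float, n: int, null_mean: float = 0.5) -> float:
    """One-sided z-test for watermark presence (illustrative baseline).

    Under the null (unwatermarked text) each of the n binary g-values is an
    independent fair coin, so their mean is approximately N(0.5, 0.25/n).
    Returns the probability of seeing a mean this high by chance."""
    z = (mean_g - null_mean) / sqrt(0.25 / n)
    return 0.5 * erfc(z / sqrt(2))
```

For example, a mean g-value of 0.75 over 600 observations (200 tokens, 3 tournament layers) is dozens of standard deviations above the null, so the p-value is astronomically small, while a mean of exactly 0.5 gives p = 0.5, i.e. no evidence of a watermark.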

... (truncated, 71 KB total)
Resource ID: a01e51407f492f11 | Stable ID: ZTliOTI2Yz