[2310.07298] Beyond Memorization: Violating Privacy Via Inference with Large Language Models
Paper Author
Yair Oppenheim
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Research paper investigating privacy vulnerabilities in large language models via inference attacks. It demonstrates that LLMs can infer personal attributes from text, posing risks beyond simple training-data memorization, and informs safety considerations for LLM deployment.
Paper Details
Citations
193
17 influential
Year
2023
Metadata
arXiv preprint | primary source
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Powered Investigation | Capability | 40.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 15, 2026 | 0 KB
[2310.07298] Untitled Document. Conversion to HTML had a fatal error and exited abruptly; this document may be truncated or damaged.
Resource ID:
a5bb602c9990122e | Stable ID: N2I3OWQ3Nz