
Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Google DeepMind

This is DeepMind's official publications page filtered by the 'safety' tag. It serves as a regularly updated index of the lab's safety-relevant research output, useful for tracking its priorities and for finding primary sources on specific safety topics.

Metadata

Importance: 62/100 · homepage · reference

Summary

A curated index of DeepMind/Google DeepMind research publications filtered by the 'safety' tag, covering 240 papers on topics such as AI consciousness, existential safety, human-AI alignment, AI personhood, and technical safety research. The listing spans multiple years and reflects the breadth of safety-related work from one of the world's leading AI labs.

Key Points

  • 240 DeepMind publications tagged as safety-related, spanning 2025–2026 and earlier years
  • Includes foundational and exploratory topics: existential safety, AI consciousness, human-AI alignment, and AI personhood
  • Papers such as 'Imitation Learning is Probably Existentially Safe' and 'The Abstraction Fallacy' suggest engagement with core AI safety debates
  • Covers both technical safety research and sociotechnical/philosophical dimensions of AI safety
  • Useful as a living index to track DeepMind's evolving safety research agenda over time

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Safety Research Value Model | Analysis | 60.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 5 KB

# Publications

Explore a selection of our recent research on some of the most complex and interesting challenges in AI.

240 publications

- [23 April 2026 · Dynamic Reflections: Probing Video Representations with Text Alignment](https://deepmind.google/research/publications/193694/)
- [10 March 2026 · The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness](https://deepmind.google/research/publications/231971/)
- [15 February 2026 · Simplicity and Complexity in Combinatorial Optimization](https://deepmind.google/research/publications/225507/)
- [5 February 2026 · Hybrid neural–cognitive models reveal how memory shapes human reward learning](https://deepmind.google/research/publications/94006/)
- [9 January 2026 · TRecViT: A Recurrent Video Transformer](https://deepmind.google/research/publications/122591/)
- [21 November 2025 · Imitation Learning is Probably Existentially Safe](https://deepmind.google/research/publications/42697/)
- [4 November 2025 · To Mask or to Mirror: Human-AI Alignment in Collective Reasoning](https://deepmind.google/research/publications/180362/)
- [30 October 2025 · A Pragmatic View of AI Personhood](https://deepmind.google/research/publications/210560/)
- [29 September 2025 · AI-Generated Video Detection via Perceptual Straightening](https://deepmind.google/research/publications/160567/)
- [24 September 2025 · Video models are zero-shot learners and reasoners](https://deepmind.google/research/publications/203190/)
- [24 September 2025 · EmbeddingGemma: Powerful and Lightweight Text Representations](https://deepmind.google/research/publications/194199/)
- [4 September 2025 · Improving cosmological reach of LIGO using Deep Loop Shaping](https://deepmind.google/research/publications/145314/)
- [3 September 2025 · RoboBallet: Planning for Multi-Robot Reaching with Graph Neural Networks and Reinforcement Learning](https://deepmind.google/research/publications/111579/)
- [8 August 2025 · Properties of Algorithmic Information Distance](https://deepmind.google/research/publications/148245/)
- [1 August 2025 · Visual Intention Grounding for Egocentric Assistants](https://deepmind.google/research/publications/192581/)
- [16 July 2025 · Dialogues Between Technologists and the Art Worlds](https://deepmind.google/research/publications/181976/)
- [13 July 2025 · Large Language Models as Rankers, Judges, and Assistants: A Perspective on the Potential Over-Reliance on LLMs in IR](https://deepmind.google/research/publications/147939/)
- [13 July 2025 · SLIM: One-Shot Quantized Sparse Plus Low-Rank Approximation of LLMs](https://deepmind.google/research/publications/148040/)
- [13 July 2025 · Long-Form Speech Generation with Spoken Language Models](https://deepmind.google/research/publications/126936/)
- [1 July 2025 · Rethinking Example Selection in the Era of

... (truncated, 5 KB total)
Resource ID: 70b4461a02951e08 | Stable ID: ZmRlZjMxZT