entity
Rohin Shah
Metadata
| Source Table | entities |
| Source ID | rohin-shah |
| Entity Type | person |
| Description | Rohin Shah is a research scientist at Google DeepMind working on AI alignment. He previously wrote the influential Alignment Newsletter summarizing AI safety research. His work focuses on reward learning, value alignment, and understanding the alignment problem from both technical and conceptual per… |
| Wiki ID | E1297 |
| Children | 2 total (2 facts) |
| Created | Apr 9, 2026, 4:27 PM |
| Updated | Apr 9, 2026, 4:27 PM |
| Synced | Apr 9, 2026, 4:27 PM |
Record Data
id | rohin-shah |
wikiId | E1297 |
stableId | Rohin Shah(person) |
entityType | person |
title | Rohin Shah |
description | Rohin Shah is a research scientist at Google DeepMind working on AI alignment. He previously wrote the influential Alignment Newsletter summarizing AI safety research. His work focuses on reward learning, value alignment, and understanding the alignment problem from both technical and conceptual per… |
website | — |
tags | [ "ai" ] |
clusters | — |
status | — |
lastUpdated | — |
customFields | [ { "label": "Role", "value": "Research Scientist" } ] |
relatedEntries | — |
metadata | { "expertRole": "Research Scientist", "affiliation": "deepmind" } |