Longterm Wiki

Gwern Branwen - Footnote 34

Status: Contradicted (40% confidence)

1 evidence check

Last checked: 4/3/2026

The source only mentions Gwern's discussion of China as a "fast follower" in AI races. It does not mention his exploration of whether AI automation accelerates capabilities faster than safety, his questioning of whether automated alignment research would precede takeoff, or meta-alignment failures in RL where agents pursue corrupted utility functions, eroding the original values.

Evidence — 1 source, 1 check

Contradicted (40%) · Haiku 4.5 · 4/3/2026
Found: In discussions on the Alignment Forum and EA Forum, Gwern has explored whether AI automation speeds capabilities over safety, questioned if automated alignment research precedes takeoff, and discussed


Debug info

Record type: citation

Record ID: page:gwern:fn34