## Quick Assessment
| Dimension | Assessment |
|---|---|
| Primary Role | CEO and co-founder of Safe Superintelligence Inc. (SSI); formerly Chief Scientist at OpenAI (2015–2024) |
| Key Contributions | Co-author of AlexNet (2012), sequence-to-sequence learning (2014), and the dropout paper (2014); led research at OpenAI on the GPT series, DALL-E, and RLHF |
| Key Publications | "ImageNet Classification with Deep Convolutional Neural Networks" (2012); "Sequence to Sequence Learning with Neural Networks" (2014); "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" (2014) |
| Institutional Affiliation | Safe Superintelligence Inc. (SSI) |
| Influence on AI Safety | Co-led OpenAI's Superalignment team (2023–2024); founded SSI (2024) with the explicit mission of building safe superintelligence; has made public statements on the inadequacy of current AI generalization and the limits of scaling |
## Overview
Ilya Sutskever (born December 8, 1986) is an Israeli-Canadian computer scientist and a co-founder of Safe Superintelligence Inc. (SSI). He was previously a co-founder and Chief Scientist of OpenAI, where he worked from 2015 to 2024. Before OpenAI, he conducted research at Google Brain and completed his PhD under Geoffrey Hinton at the University of Toronto. His research contributions include co-authorship of AlexNet (2012), which demonstrated the effectiveness of deep convolutional neural networks for large-scale image classification; the sequence-to-sequence learning framework (2014), which shaped modern natural language processing; and the foundational dropout regularization paper (2014).
Sutskever co-led OpenAI's Superalignment team from its founding in July 2023 until his departure from the company in May 2024. He then co-founded SSI in June 2024 with Daniel Gross and Daniel Levy, with the stated mission of pursuing safe superintelligence as a single technical goal, without concurrent product development or commercial obligations. SSI raised over $3 billion across multiple funding rounds through April 2025, reaching a reported valuation of $32 billion.
In a November 2025 interview, Sutskever argued that the current period represents a transition away from scaling-dominated progress toward a new phase requiring fundamental research breakthroughs, describing current large language model generalization as "jagged" and "dramatically worse than people."