Ilya Sutskever

Palo Alto & Tel Aviv

I think about intelligence—what it is, how it works, and how to build it safely. Co-founded OpenAI, now building Safe Superintelligence Inc. The future is going to be very interesting.

Education

University of Toronto 2007 – 2013

PhD in Computer Science (Machine Learning)

Advisor: Geoffrey Hinton. Thesis: Training Recurrent Neural Networks.

University of Toronto 2005 – 2007

MSc in Computer Science

University of Toronto 2002 – 2005

BSc in Mathematics

Open University of Israel 2000 – 2002

Early studies

Started university coursework as a teenager in Jerusalem.

Experience

Co-founder, Safe Superintelligence Inc. 2024 – present

Palo Alto & Tel Aviv

One goal: safe superintelligence. No distractions, no products, just the most important technical problem of our time.

Co-founder & Chief Scientist, OpenAI 2015 – 2024

San Francisco

Built the research team from the ground up, led work on the GPT series and scaling laws, and co-led the Superalignment initiative.

Research Scientist, Google Brain 2013 – 2015

Mountain View

Co-developed sequence-to-sequence learning, contributed to TensorFlow, and worked on AlphaGo.

Postdoctoral Researcher, Stanford University 2012

Stanford

Brief postdoc with Andrew Ng before joining DNNResearch.

Projects

AlexNet (2012)

With Alex Krizhevsky and Geoff Hinton. Won the ImageNet challenge by a landslide and changed everything. Deep learning went from academic curiosity to the future of AI overnight.

Sequence-to-Sequence Learning

The encoder-decoder framework behind neural machine translation. Showed that deep nets could map arbitrary input sequences to output sequences. Combined with attention, it paved the way for transformers and LLMs.
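
To make the idea concrete, here is a minimal PyTorch sketch of the encoder-decoder pattern, not the original implementation: an LSTM encoder compresses the source sequence into a hidden state, and an LSTM decoder generates the target sequence from it. Vocabulary and dimension sizes are arbitrary placeholders chosen for illustration.

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        """Toy encoder-decoder sketch; illustration only."""
        def __init__(self, src_vocab=1000, tgt_vocab=1000, dim=256):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, dim)
            self.tgt_emb = nn.Embedding(tgt_vocab, dim)
            self.encoder = nn.LSTM(dim, dim, batch_first=True)
            self.decoder = nn.LSTM(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, tgt_vocab)

        def forward(self, src_ids, tgt_ids):
            # Encode the source sequence into a fixed-size hidden state.
            _, state = self.encoder(self.src_emb(src_ids))
            # Decode the target sequence conditioned on that state.
            dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
            return self.out(dec_out)  # logits over the target vocabulary

    model = Seq2Seq()
    src = torch.randint(0, 1000, (2, 7))  # two source sequences of length 7
    tgt = torch.randint(0, 1000, (2, 5))  # two target prefixes of length 5
    print(model(src, tgt).shape)          # torch.Size([2, 5, 1000])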

GPT & Scaling Laws

Bigger models trained on more data get predictably better. This simple insight drove the creation of GPT-2, GPT-3, GPT-4, and the modern AI boom.
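
A rough sense of what "predictably better" means: scaling-law studies fit test loss to a smooth power law in model size, so you can extrapolate before training the big model. The constants in the sketch below are made-up placeholders, not fitted values from any real study.

    # Illustrative power-law scaling curve; constants are placeholders.
    def predicted_loss(n_params, a=8.0, alpha=0.08, irreducible=1.7):
        # Loss falls smoothly as models grow: L(N) ~ a * N**(-alpha) + c
        return a * n_params ** (-alpha) + irreducible

    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.2f}")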

Superalignment

How do you ensure AI systems smarter than humans remain aligned with human values? The most important unsolved problem in the field.