DeepMind’s AI can now play all 57 Atari games—but it’s still not versatile enough

Atari game cartridges. Flickr / Digital Game Museum

The news: An artificial intelligence called Agent57 has learned to play all 57 Atari video games in the Arcade Learning Environment, a collection of classic games that researchers use to test the limits of their deep-learning models. Developed by DeepMind, Agent57 uses a single deep reinforcement learning algorithm to achieve superhuman levels of play, even in games that previous AIs have struggled with. Being able to learn 57 different tasks makes Agent57 more versatile than previous game-playing AIs.

What’s in a game? Games are a great way to test AIs. They provide a variety of challenges that force an AI to come up with a range of strategies and yet still have a clear measure of success—a score—to train against. But four Atari games in particular have proved tough to beat. In Montezuma’s Revenge and Pitfall, an AI must try a lot of different strategies before hitting on a winning one. And in Solaris and Skiing there can be long waits between action and reward, making it hard for an AI to learn which moves earn the best payoff. 

Meta-mind: To meet these challenges, Agent57 brings together multiple improvements that DeepMind has made to its Deep Q-Network, the AI that first beat a handful of Atari games back in 2013. These include a form of memory that lets the agent base decisions on things it has previously seen in the game, and reward systems that encourage it to explore its options more fully before settling on a strategy. These various techniques are managed by a meta-controller, which balances the trade-off between pressing ahead with a particular strategy and doing more exploration.
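A meta-controller of this kind can be pictured as a simple multi-armed bandit: before each episode it picks which exploration setting to play with, based on the scores those settings have earned so far. The Python sketch below is illustrative only and assumes a lot: the (beta, gamma) policy settings, the run_episode stub, and the fake scores are placeholders for the example, not DeepMind's implementation.

import math
import random

# Hypothetical family of exploration settings the controller can choose from:
# beta scales an exploration bonus, gamma is the reward discount.
POLICIES = [
    {"beta": 0.0, "gamma": 0.997},   # purely exploitative
    {"beta": 0.1, "gamma": 0.99},    # mild exploration bonus
    {"beta": 0.3, "gamma": 0.97},    # strong exploration bonus
]

counts = [0] * len(POLICIES)          # episodes played with each setting
mean_returns = [0.0] * len(POLICIES)  # running average score per setting

def select_policy(t, c=1.0):
    """Upper-confidence-bound choice: try each setting once, then trade off
    observed return against uncertainty."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(
        range(len(POLICIES)),
        key=lambda i: mean_returns[i] + c * math.sqrt(math.log(t + 1) / counts[i]),
    )

def run_episode(policy):
    """Placeholder for playing one full game episode with the chosen settings;
    returns a fake score purely for illustration."""
    return random.gauss(10 * policy["beta"], 1.0)

for t in range(100):
    i = select_policy(t)
    score = run_episode(POLICIES[i])
    counts[i] += 1
    mean_returns[i] += (score - mean_returns[i]) / counts[i]  # incremental mean

The point of the sketch is only the division of labor: the bandit decides how much to explore, while the underlying learning algorithm handles how to play.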

Why it matters: For all their success, the best deep-learning models we have today are not very versatile. Most tend to be good at one thing and one thing only. Training an AI to excel at more than one task is one of the biggest open challenges in deep learning. The ability to learn 57 different tasks makes Agent57 more versatile than previous game-playing AIs, but (and this often gets missed) it still cannot learn to play more than one game at a time. It needs to retrain for each new game, even though it can use the same algorithm to do so. In this way Agent57 is similar to AlphaZero, DeepMind's deep reinforcement learning algorithm that can learn to play chess, Go, and shogi, but again, not all at once. True versatility, which comes so easily to a human infant, is still far beyond AIs' reach.
