Google’s AI wins Space Invaders, proves “human-level control”

Chris Burns - Feb 25, 2015, 3:59pm CST

A new study published this week suggests that artificial intelligence can now achieve “human-level control.” The team of researchers comes from Google’s DeepMind, where they’re using Space Invaders – the video game – to show that truly human-like artificial intelligence isn’t far off. The machine learns to play the video game, learns to win at the video game, and dominates all humans at the game humanity created to help us defend our planet against the alien hordes.

It’s like The Last Starfighter – only this time, Google has created a computer that can defeat the invaders. Researchers at DeepMind suggest that Reinforcement Learning is one key to optimizing artificial intelligence.

Reinforcement Learning allows a system to generalize past experience to new situations. Just like humans and other animals do every day, this computer is able to see the game, learn what works and what doesn’t work, and move forward to win.
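To make that concrete, here is a minimal sketch of tabular Q-learning, the classic reinforcement learning update that DeepMind’s deep Q-network builds on. The toy “corridor” environment, reward values, and hyperparameters here are our own illustration, not anything from the paper: the agent starts in the middle of a five-cell corridor and earns a reward for reaching the right end.

```python
import random

N_STATES = 5          # cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    state = 2
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we've learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = max(0, min(N_STATES - 1, state + action))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # The Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best action from the next state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy points right in every cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told the rules; it only sees rewards, which is exactly the “learn what works and what doesn’t” loop described above – just in a space vastly simpler than an Atari screen.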

In the past, artificial intelligence could only succeed at reinforcement learning in low-dimensional state spaces – like a simple maze. Here the system achieves something far greater.

Here a deep Q-network, as they call it, is able to receive “only the pixels and the game score as inputs” and output results that “surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester.”
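The shape of that computation can be sketched in a few lines. This is emphatically not DeepMind’s model – the real system is a convolutional network trained on preprocessed 84×84 game frames – but a single random linear layer is enough to show the interface: raw pixels in, one Q-value per joystick action out.

```python
import numpy as np

N_ACTIONS = 6                      # e.g. a handful of Atari joystick actions
rng = np.random.default_rng(0)

# A grayscale "screen" standing in for a real game frame.
frame = rng.integers(0, 256, size=(84, 84))

# An untrained linear stand-in for the deep network's learned weights.
weights = rng.normal(0, 0.01, size=(84 * 84, N_ACTIONS))

q_values = frame.reshape(-1) @ weights   # one score per possible action
action = int(np.argmax(q_values))        # play the best-looking move

print(q_values.shape)                    # (6,)
```

Training replaces those random weights with ones that make the highest Q-value correspond to the action that actually maximizes the game score – that is the part the paper contributes.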

As good as a professional human.

This system was tested across a set of 49 games, all using the same algorithm, network architecture, and hyperparameters.

Above you’ll see DeepMind in action. The footage comes from the scientific journal Nature – the same place where the research paper in question is published.

According to the researchers, the deep Q-network has now been able to attain mastery at 22 games – including Space Invaders.

At this time, the computer is only able to master games where progress is made in very tiny increments. If you’re playing a game that requires long-term strategy, this DeepMind software isn’t quite up to the task – yet.

The team is working on expanding the system’s memory span. This would allow the artificial intelligence to parse more complex games and ultimately learn in a way that allows it to grow.

You can view the full paper, “Human-level control through deep reinforcement learning,” in the journal Nature 518, 529–533 (26 February 2015), doi:10.1038/nature14236. The paper has a long list of authors and contributors.

The author list reads as follows: Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis.
