Deep networks are pretty shallow

A new artificial neural network can play Space Invaders as well as the best human players. (CC Image from http://www.flickr.com).

There has been a lot of interest lately in Google DeepMind – an artificial intelligence (AI) company owned by Google. DeepMind have created a deep neural network (an artificial brain) that can teach itself how to play Atari video games, and it can play some of them as well as the best human players.

As it learns to play, it initially just makes random moves and loses the game quickly, like a baby or a small child playing for the first time. But over time it learns to associate certain conditions on the screen with success or failure.

In Space Invaders, for example, it soon learns that being directly under a missile fired by an attacking alien spaceship is a bad place to be, since it quickly leads to losing the game. Conversely, shooting the mothership that zooms across the top of the screen is good, since it instantly boosts the score.

Learning to play like this is a significant feat. The neural network is not pre-programmed to know what the game is. All it can do is look at the screen, move the game character, and get its score. Until now, no computer could learn on its own to do something this complicated.

DeepMind have done this by using a technique called deep learning. Deep learning is a recent invention and is still being developed and improved. There are two major components needed to make it work:

1. A large neural network constructed in a hierarchy of many layers of artificial neurons, where each layer is connected to the next. The job of this network is to find recurring patterns in its input (in this case the input is the moving image on the screen).

2. Connections from the network to the output that moves the game character. These connections are updated when the network learns from its score – if it makes a move that increases its score, then it learns to make that move again next time it sees the same (or a similar) input pattern on the screen.
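The learning rule in point 2 is a form of reinforcement learning; DeepMind's system combines it with the layered network from point 1 (a method known as deep Q-learning). As an illustration only, here is a minimal tabular sketch of that score-driven update – the states, actions, and rewards are invented stand-ins for screen patterns, joystick moves, and score changes:

```python
import random
from collections import defaultdict

ALPHA = 0.1   # learning rate: how strongly one new score adjusts old estimates
GAMMA = 0.9   # discount: how much anticipated future score counts

# q[(state, action)] estimates the score that follows from taking
# `action` when the screen shows `state`; all estimates start at zero.
q = defaultdict(float)

def choose_action(state, actions, epsilon=0.1):
    """Mostly pick the best-known move, sometimes move at random --
    the 'random moves at first' behaviour described above."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state, actions):
    """If a move increased the score, raise its estimate so it is
    chosen again next time the same (or a similar) pattern appears."""
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

# Toy experience: dodging a missile keeps the game alive (+1),
# shooting the mothership instantly boosts the score (+10).
actions = ["left", "right", "shoot"]
update("under_missile", "left", 1, "safe", actions)
update("mothership_above", "shoot", 10, "safe", actions)
```

In DeepMind's actual system the table `q` is replaced by the deep network itself, which is what lets it generalise to screen patterns it has never seen in exactly that form before.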

One game it plays well is Space Invaders; one it plays poorly is Ms. Pac-Man. The difference between these games tells us a lot about the limitations of deep learning, and how far we still have to go before we can build truly intelligent artificial brains.

The difference that matters is not that the games look very different – rather it is that Space Invaders doesn’t require the neural network to ‘plan ahead’ very far. In Space Invaders, everything it needs to know is on the screen in front of it. If there is an incoming missile, dodge it. If there is an alien above, shoot it.

On the other hand, Ms. Pac-Man requires the neural network to plan ahead – for example, don't go down this path if a ghost could enter from the other end, since the character could get trapped with nowhere to turn.

To do this requires forward thinking – working out where a path leads, how far from the other end the ghosts are, which direction they are heading, and so on. Forward thinking is something that people do very well, but that even our best state-of-the-art AI still fails at.
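Forward thinking can be pictured as a depth-limited search: simulate a few moves into the future and pick the move whose best reachable outcome scores highest. The corridor, ghost speed, and scores below are invented purely for illustration – this is not how Ms. Pac-Man or DeepMind's network actually works:

```python
def lookahead(state, depth, simulate, legal_moves, score):
    """Pick the move whose best simulated future scores highest,
    instead of reacting only to the current screen."""
    if depth == 0 or not legal_moves(state):
        return score(state), None
    best_value, best_move = float("-inf"), None
    for move in legal_moves(state):
        value, _ = lookahead(simulate(state, move), depth - 1,
                             simulate, legal_moves, score)
        if value > best_value:
            best_value, best_move = value, move
    return best_value, best_move

# Toy corridor: squares 0..4, with a safe junction at square 0 and a
# dead end at square 4. The character starts at square 2; a fast ghost
# enters from the right and closes in two squares per step.
def simulate(state, move):
    pos, ghost = state
    return (pos + move, ghost - 2)

def legal_moves(state):
    pos, ghost = state
    if pos == 0 or pos >= ghost:          # escaped or caught: game over
        return []
    return [m for m in (-1, +1) if 0 <= pos + m <= 4]

def score(state):
    pos, ghost = state
    if pos >= ghost:
        return -100                       # trapped by the ghost
    if pos == 0:
        return 10                         # reached the safe junction
    return pos                            # pellets lie to the right

start = (2, 6)
_, greedy_move = lookahead(start, 1, simulate, legal_moves, score)   # +1: walks toward the pellets
_, planned_move = lookahead(start, 3, simulate, legal_moves, score)  # -1: sees the trap and retreats
```

Note that even this toy planner needs a model of how the world will evolve (the `simulate` function). The network described above has no such model of the game – it only reacts to the current screen – which is part of why planning ahead is so hard for it.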

There are many other differences between deep learning and brains, but the inability to plan very far into the future is the biggest, and will likely be the hardest to overcome – mainly because we have very little idea how our brains do this!

So even though deep learning is very impressive, there is still a long road ahead before we can make truly smart computers.

