Will some form of sleeping be essential in artificial intelligence?

There have been several high-profile papers published on sleep in the last few years. We now know that sleep performs many functions – physiological (cleaning out waste), dynamical (rebalancing synaptic connections) and functional (memory consolidation). Let’s tackle them one by one.

Physiological. During the day, metabolic processes cause the buildup of waste products in the brain. While you sleep, special waste disposal channels open up throughout the brain and fluid is flushed through to drain out the waste. This process must interfere with brain function somehow since it does not occur while you are awake. AI is not likely to need sleep for this reason.

Dynamical. Cortical brain activity is poised near what is known in physics as a ‘critical’ or ‘phase change’ point. This allows brain activity to be chaotic. Chaotic activity is required for the brain to process information flexibly and dynamically, rather than being a simple bunch of reflexes. In any chaotic dynamical system, the balance of the interactions between elements (neurons in this case) has to be finely tuned, or the chaotic state is lost. In the brain, the loss of chaos results in disrupted thinking, which may explain why people end up hallucinating and becoming psychotic under extreme sleep deprivation. During sleep, connections between neurons are rebalanced, keeping the brain tuned to the critical dynamical regime. In a computer, rebalancing of simulated neural networks can happen on the fly, since it is a simple renormalisation step. The brain can’t do vector renormalisation directly, so it needs to go offline and do it indirectly – this seems to occur via the slow wave oscillations of deep (non-REM) sleep. So an AI based on neural nets will need synaptic renormalisation, but can probably fudge it using vector maths rather than needing to go offline.
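
The on-the-fly rebalancing mentioned above can be sketched in a few lines. This is only an illustration: the weight matrix, its size, and the target norm are toy assumptions, not a model of any real network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "synaptic" weight matrix: 5 neurons, 8 inputs each.
weights = rng.normal(size=(5, 8))

def renormalise(w, target_norm=1.0):
    """Rescale each neuron's incoming weight vector to a fixed norm.

    A computer can apply this after every learning step; the brain has
    no direct equivalent, which (per the argument above) is one reason
    it rebalances synapses offline during sleep.
    """
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    return target_norm * w / norms

weights = renormalise(weights)
print(np.linalg.norm(weights, axis=1))  # each row now has norm 1.0
```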

Functional. Memories of events that happen to you are stored initially in a brain structure called the hippocampus. The important memories are transferred gradually to the cortex over days, weeks or even months, and the rest are forgotten. This transfer is a delicate process of inserting new memories into the cortical neural system without perturbing (too much) the information already there. Neural networks are prone to ‘catastrophic forgetting’, in which new information added to a network tends to overwrite and destroy the information that was already stored – and the brain appears to suffer from the same problem. This is why we need the hippocampus and why we don’t store new memories directly in the cortex. Adding new memories only works by gently nudging the synaptic connections that store them towards their new values, interleaved with periods of nudging back in the other direction to retain existing memories. This process also happens during sleep. Because of the delicacy of the process, the cortex can’t be awake and processing sensory information at the same time as it is trying to incorporate new memories. There are no shortcuts here – going offline is the only option, so an AI will almost certainly need to sleep for this reason.

So the answer is Yes, an AI will need to sleep, not for all the reasons a biological brain does, but for at least some of them.

Tiny insect brains beat our best computers

The brains of dragonflies are exceptionally powerful computers. [Image: CC Flickr]

We don’t normally think of insects as very intelligent creatures. But in fact we are learning that insect brains are absolute wonders of fast, efficient, powerful computation.

The most powerful supercomputers we have today are probably close to being as powerful as a tiny insect brain. But the computers weigh millions of times more and use billions of times more power!

And that’s not the biggest hurdle. Even if our computers are powerful enough, we still have no idea how to program them to make them do the amazing things that even ‘simple’ insects are capable of.

Here is one example: The brains of many flying insects contain neurons (brain cells) that are able to ‘lock on’ to other flying targets. We call these neurons ‘small target motion detectors’ (STMDs).

Dragonflies, for example, are able to hunt down smaller flying insects and literally snatch them out of the sky. Despite having tiny brains, they do this with lightning speed in cluttered environments like thick vegetation without losing track of their targets, or getting distracted by shadows, or crashing into anything.

A team of scientists in Australia are studying the dragonfly brain to try to understand how dragonflies accomplish these amazing feats. They are starting to uncover some of the principles that the STMD neurons use to lock on to a target:

  • If there are multiple targets (i.e. several STMD neurons responding simultaneously) the best target is selected and the other neurons are temporarily shut down.
  • Once a target is selected, the STMD neuron that is tracking it starts responding even more strongly over time. This makes it easier for the dragonfly to ignore distractions, and also to track the target if it is temporarily obscured (e.g. by flying behind something).
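
The two principles above can be sketched as a toy winner-take-all step followed by a facilitation loop. The target names, response values, growth factor and cap are all invented for illustration, not taken from the dragonfly work.

```python
# Hypothetical STMD-like units and their current response strengths.
responses = {"target_a": 0.6, "target_b": 0.9, "target_c": 0.4}

# Principle 1: select the strongest response and shut the others down.
winner = max(responses, key=responses.get)
for name in responses:
    if name != winner:
        responses[name] = 0.0

# Principle 2: facilitation -- the winning unit's response grows over
# time, helping the lock survive distractions and brief occlusions.
for _ in range(5):
    responses[winner] = min(1.5, responses[winner] * 1.1)  # capped growth

print(winner, round(responses[winner], 3))  # target_b 1.449
```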

So far, the scientists have tested these theories only in computer simulations. Even so, they have discovered some interesting facts – for example, the length of time an STMD neuron continues to fire strongly controls how long the dragonfly will keep chasing a target that has disappeared, and the dragonfly brain seems to use the optimal setting for catching its prey!

Next, the scientists are going to translate their findings into a robot to test their theories in the real world. They admit, however, that they are still a long way from understanding everything about how the dragonfly brain tracks flying targets, homes in, and catches them.

So when it comes to vision and movement, even minuscule insect brains outclass our best computers.

Routing information through the brain

Brain signals continuously shift and change. (CC Image from http://www.flickr.com/photos/sosico/8285020035)

One big missing piece in the puzzle of how brains work is how information is routed through the brain. The brain contains an enormous number of connections between neurons (up to 1 quadrillion – that’s 1,000,000,000,000,000!). With connection numbers like this, you might think that every part of the brain must be connected to just about every other part.

But when we monitor brain activity, what we see is that different parts of the brain somehow manage to totally disconnect from others at certain times. What it means to disconnect is that brain activity in one region becomes completely independent of activity elsewhere. This disconnection happens despite the actual physical connections still being present.

An article published 3 days ago in the prestigious journal Science has given us some clues about how these connections and disconnections may occur, by looking at a part of the brain called the hippocampus. The hippocampus is involved in language (in humans), in the formation of memories (in all mammals) and in our sense of location and direction.

With all these functions, the hippocampus is connected to many different parts of the brain. The scientists recorded from the hippocampus of rats while they performed different tasks, like searching for food (which rats like, especially when they find it!) or having to run through a wide open space (which rats don’t like as it makes them anxious).

What the scientists found was that different neurons in the hippocampus became more active in the different situations. What was most intriguing was that the neurons that became more active in any given situation all tended to connect to only certain other parts of the brain.

So when the hippocampus needed to send information to one part of the brain, it used mostly those neurons that connected to that particular part. For sending information to a different part of the brain, it would activate a different set of neurons.
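
This routing-by-subpopulation idea can be sketched very simply: tag each neuron with the region it projects to, and ‘route’ a message by activating only the matching neurons. The neuron list and region names below are hypothetical, chosen just to illustrate the principle.

```python
# Hypothetical hippocampal output neurons, each wired to one region.
neurons = [
    {"id": 0, "projects_to": "prefrontal"},
    {"id": 1, "projects_to": "amygdala"},
    {"id": 2, "projects_to": "prefrontal"},
    {"id": 3, "projects_to": "septum"},
]

def route(message, destination):
    # "Routing" = activating only the neurons that project to the
    # destination region; the physical wiring never changes.
    active = [n["id"] for n in neurons if n["projects_to"] == destination]
    return {"message": message, "via_neurons": active}

print(route("food located", "prefrontal"))  # routed via neurons 0 and 2
```

The open question in the post remains open in the sketch too: nothing here explains *how* the right subset gets chosen in the first place.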

This solves a small part of the puzzle of how the brain controls where information goes. The big question now is – How does the hippocampus activate just those neurons that have the right connections?

Since all the information starts in the hippocampus, somehow the hippocampus itself is making the decision about where the information needs to go, and then preferentially activating the appropriate neurons. HOW? No-one knows!

Using a different set of neurons with different connections to different parts of the brain, as discussed in this article, is just the last stage of a complex routing process that is somehow occurring in the brain. The BIG question – How does the brain choose and activate the right neurons in the first place? – still needs to be answered!


Deep networks are pretty shallow

A new artificial neural network can play Space Invaders as well as the best human players. (CC Image from http://www.flickr.com).

There has been a lot of interest lately generated by Google DeepMind – an AI (artificial intelligence) company owned by Google. DeepMind have created a deep neural network (an artificial brain) that can teach itself how to play Atari video games. Some of these games it can play as well as the best human players.

As it learns to play, it initially just makes random moves and loses the game quickly, like a baby or a small child playing for the first time. But over time it learns to associate certain conditions on the screen with success or failure.

In Space Invaders, for example, it soon understands that being directly under a missile fired from an attacking alien spaceship is a bad place to be, since it quickly results in losing the game. Conversely, shooting the mothership that zooms across the top of the screen is good, since it instantly boosts the score.

Learning to play like this is a significant feat. The neural network is not pre-programmed to know what the game is. All it can do is look at the screen, move the game character, and get its score. Until now, no computer could learn on its own to do something this complicated.

DeepMind have done this by using a technique called deep learning. Deep learning is a recent invention and is still being developed and improved. There are 2 major components needed to make it work:

1. A large neural network constructed in a hierarchy of many layers of artificial neurons, where each layer is connected to the next. The job of this network is to find recurring patterns in its input (in this case the input is the moving image on the screen).

2. Connections from the network to the output that moves the game character. These connections are updated when the network learns from its score – if it makes a move that increases its score, then it learns to make that move again next time it sees the same (or a similar) input pattern on the screen.
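
A heavily simplified sketch of these two components follows. The one-pixel ‘game’, the reward rule, and all the sizes are made up for illustration; this is not DeepMind’s actual method (which trains a deep Q-network on real game frames), just the bare idea of a feature layer plus score-driven output learning.

```python
import numpy as np

rng = np.random.default_rng(2)

W1 = rng.normal(scale=0.1, size=(16, 8))   # component 1: "screen" -> features
W2 = np.zeros((8, 2))                      # component 2: features -> actions

def act(screen):
    hidden = np.maximum(0, screen @ W1)    # one hidden feature layer (ReLU)
    return hidden, hidden @ W2             # per-action scores

# Made-up game: action 1 scores a point whenever pixel 0 is bright.
for _ in range(200):
    screen = rng.random(16)
    hidden, scores = act(screen)
    # Mostly exploit the best-looking action, sometimes explore randomly.
    action = int(rng.integers(2)) if rng.random() < 0.2 else int(scores.argmax())
    reward = 1.0 if (action == 1 and screen[0] > 0.5) else 0.0
    # Reinforce the taken action in proportion to the reward received.
    W2[:, action] += 0.1 * reward * hidden

hidden, scores = act(np.ones(16))
print(int(scores.argmax()))  # the network now prefers the rewarded action
```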

One game it plays well is Space Invaders; one it plays poorly is Ms Pac Man. The difference between these games tells us a lot about the limitations of deep learning, and how far we still have to go to make truly intelligent artificial brains.

The difference that matters is not that the games look very different – rather it is that Space Invaders doesn’t require the neural network to ‘plan ahead’ very far. In Space Invaders, everything it needs to know is on the screen in front of it. If there is an incoming missile, dodge it. If there is an alien above, shoot it.

On the other hand, Pac Man requires the neural network to plan ahead – for example, don’t go down this path if a ghost could enter from the other end, since the character could get trapped with nowhere to turn.

To do this requires forward thinking – working out where a path leads, how far from the other end the ghosts are, which direction they are heading, and so on. Forward thinking is something that people do very well, but that even our best state-of-the-art AI still fails at.

There are many other differences between deep learning and brains, but the inability to plan very far into the future is the biggest, and will likely be the hardest to overcome – mainly because we have very little idea how our brains do this!

So even though deep learning is very impressive, there is still a long road ahead before we can make truly smart computers.


Zapping your brain can make you more creative

Want to get creative? Zap your brain! [CC Image from http://www.flickr.com/photos/leogistic/]

Scientists have discovered that applying mild electric currents to the brain can actually cause you to think more creatively.

Your brain uses minute electric currents to communicate between neurons. These electric currents occur in waves throughout the brain. These waves can be measured by placing electrodes on your scalp.

As your brain does different things, your brain waves change – the number of waves each second (the frequency of the oscillations) varies from less than one per second up to several hundred per second.

Very slow waves, called delta waves (a few oscillations per second or less), occur when you are in deep sleep. Faster waves, called alpha waves (around 10 per second), are associated with relaxation, day-dreaming and creativity. Beta waves (up to about 35 per second) and gamma waves (35 and above) occur when you are concentrating.
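
These bands can be summarised in a small lookup. The boundaries are approximate conventions rather than exact cut-offs, and the theta band between delta and alpha is a standard addition that this post doesn’t mention.

```python
# Approximate EEG band upper boundaries (Hz); theta is our addition.
bands = [
    ("delta", 4),
    ("theta", 8),
    ("alpha", 13),
    ("beta", 35),
    ("gamma", float("inf")),
]

def band_of(frequency_hz):
    # Return the first band whose upper boundary the frequency is below.
    for name, upper in bands:
        if frequency_hz < upper:
            return name

print(band_of(10))  # alpha
print(band_of(50))  # gamma
```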

Now, scientists have discovered that applying mild electric currents to the brain at the same speed as alpha waves can actually cause you to think more creatively. When the same currents are applied at gamma speed, creativity doesn’t improve.

This is an astonishing result, because it means that it is not just the electric current that causes the increase in creativity. It is the electric current at the correct oscillation speed.

It also means that alpha waves are not just a side-effect of relaxed, creative thinking – instead, alpha waves somehow actually cause your brain to be creative.

Neuroscientists often debate whether brain oscillations do anything important. Evidence and ideas have been emerging for many years that oscillations are intimately involved in brain function. This latest study is another strong piece of evidence that supports these ideas.

Here is a link to a scientific summary (called an abstract) of the research:

Functional role of frontal alpha oscillations in creativity


Common Brain Myths (or are they real?)

The previous post listed some startling but true facts about the brain. This post does the opposite – let’s put some common brain myths to the test.

Many people believe these myths. Many others are quick to point out their serious flaws. However, like many myths, most of these brain myths have at least a grain of truth behind them.

As an example, let’s take a look at what must be the most common brain myth out there:

Myth #1: You only use 10% of your brain.

Why it’s mostly untrue. From the many different types of brain scans we can do, it’s very clear that most parts of our brain are in use almost continuously. The brain is a finely tuned machine, and it wouldn’t make sense to have bits lying around taking up valuable space and not being used.

Grain of truth #1: In any brain part that is being used, usually only a very small number of brain cells (neurons) are active at any given time. The rest really are doing nothing at any particular moment. So in a sense, at any given moment we are only using a tiny portion of our brain! But the neurons that are active change rapidly from moment to moment, and any single neuron doesn’t stay inactive for very long. So over a significant period of time (anywhere from a few minutes to an hour or two), it is safe to say that pretty much every single neuron in our brain is used.

Grain of truth #2: Our brains are made up of thousands of interconnected components that interact in very complex ways. Some parts of our brain actually inhibit other parts. For example, there is a condition called savant syndrome, where people with some forms of mental disability or brain damage nevertheless exhibit extraordinary abilities in music, memory, art or other fields. Savant syndrome seems to be caused by the failure of other parts of the brain to inhibit the parts that produce these amazing talents. In fact, savant-like abilities can be induced in healthy people by temporarily shutting down these other brain parts! (We can temporarily shut down parts of our brains using powerful electro-magnets – strange but true!!) So it seems we may all have savants inside us – they are just being inhibited by other parts of our brain.

Grain of truth #3: There is another brain condition called alien hand syndrome, where a different inhibition mechanism is damaged. People with alien hand syndrome find that one of their hands will often do its own thing, like picking up a cup that is in front of them and trying to pour the contents into their mouth, or picking up a pencil and starting to write. Parts of our brain are ready at any moment to perform common actions like these, but they are held back by inhibition from other parts, and are usually only released when they are needed. So these inhibited parts are temporarily unused portions of our brain (though they will usually spring into action when needed).

Grain of truth #4: Finally, recent research has shown that people who learn new skills faster are actually able to do so by shutting down competing parts of their brains! By shutting down these parts, they let the skill-learning parts of their brains do what they do best, which is learn the skill, without interference from the other parts that often have a tendency to ‘over-think’ the problem.
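
Grain of truth #1 is easy to check with a quick simulation: even if only a tiny fraction of neurons fire at any instant, almost every neuron gets used across many moments. The neuron count, the ~2% active fraction, and the number of moments below are illustrative assumptions, not measured values.

```python
import numpy as np

rng = np.random.default_rng(3)

n_neurons = 10_000
active_fraction = 0.02          # assume ~2% of neurons active at any instant

used = np.zeros(n_neurons, dtype=bool)
for moment in range(500):       # the active set changes every moment
    active = rng.random(n_neurons) < active_fraction
    used |= active              # remember every neuron that has ever fired

print(used.mean())  # fraction of neurons used at least once: nearly all
```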

So there you have it. This myth is mostly untrue, but it does have some grains of truth. In particular, we all may have some extraordinary talents hidden inside us that are being suppressed by more mundane parts of our brains!


Some amazing brain facts

Let’s kick off this blog with some amazing but true information about the brain.

1. Your brain is about 60% fat! [1]  The fat makes up the electrical insulation around the nerve fibres (called axons).

2. Nerve impulses (spikes) travel through insulated axons at more than 400 km/h [2] (that’s more than 250 mph)!

3. If you lined up all the axons in your brain end-to-end they would go around the earth 4 times [3] (160,000 km or 100,000 miles)!

4. Your brain contains about 100,000,000,000 (100 billion) neurons and up to 1,000,000,000,000,000 (a quadrillion) connections between them!

5. Neurons are tiny. You can fit about 100 of them (arranged in a 10×10 square) inside the smallest dot you can see on a standard computer screen, like the dot here ⇒ .

6. Even so, if you laid all your neurons out side by side, they would make a line that stretched for 1000 km (600 miles)!

7. And even though your brain makes up only 2% of your body weight, it uses 20% of your blood and 20% of your oxygen [4]. By weight, it’s the hardest working part of your body (more than your muscles!).

8. Every time you learn something new (e.g. a new skill or even just a simple memory) it changes the structure of your brain.

9. Parts of your brain that you use a lot get more connected to the rest of your brain (and maybe bigger too), and parts that you don’t use lose connections. Use it or lose it!

10. Brain waves are real – they can be recorded using electrodes placed on your scalp. They change as your brain does different things.

11. When you dream, your brain disconnects from your body (your body is paralysed!) so that you don’t act out your dreamed actions.

12. It’s very hard to tickle yourself since your brain distinguishes between your own touch and somebody else’s.

13. The power your brain constantly uses is about 20 Watts – enough to power a light-bulb continuously for your entire life!
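
Fact 13 is easy to sanity-check with back-of-envelope arithmetic. The 80-year lifespan figure is our assumption for illustration.

```python
# Total energy used by a 20 W brain over an assumed 80-year lifespan.
watts = 20
seconds = 80 * 365.25 * 24 * 3600   # seconds in 80 years
kwh = watts * seconds / 3.6e6       # joules -> kilowatt-hours

print(round(kwh))  # about 14,000 kWh -- a 20 W bulb lit for a lifetime
```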

References

[1] Chang CY, Ke DS, Chen JY (2009). “Essential fatty acids and human brain”. Acta Neurologica Taiwanica.

[2] Hursh JB (1939). “Conduction velocity and diameter of nerve fibers”. American Journal of Physiology 127: 131–139.

[3] Marner L, Nyengaard JR, Tang Y, Pakkenberg B (2003). “Marked loss of myelinated nerve fibers in the human brain with age”. Journal of Comparative Neurology 462: 144–152.

[4] Hartline DK, Colman DR (2007). “Rapid conduction and the evolution of giant axons and myelinated fibers”. Current Biology 17(1).