Monday, November 26, 2012

Gary Marcus - The Brain in the Machine

Writing for The New Yorker, NYU Professor of Psychology Gary Marcus [author of Kluge: The Haphazard Evolution of the Human Mind (2009), The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought (2004), and The Algebraic Mind: Integrating Connectionism and Cognitive Science (2003)] takes a critical look at IBM's new simulation of a macaque monkey brain (code-named Compass), which runs on 96 of the world's fastest computers. It is the largest brain simulation ever constructed, and yet it cannot even begin to approximate a primate brain in any real sense.

As Marcus points out:
For more than twenty-five years, scientists have known the exact wiring diagram of the three hundred and two neurons in the C. elegans roundworm, but in at least half a dozen attempts nobody has yet succeeded in building a computer simulation that can accurately capture the complexities of the simple worm’s nervous system.
This is exactly why I think Ray Kurzweil is far too confident that a computer brain capable of passing the Turing Test will be built by 2029. The brain requires a body, and that is a whole level of complexity that computers will never be able to simulate.

The Brain in the Machine

[Illustration: brain-simulation-compass.jpg]
Half a trillion neurons, a hundred trillion synapses. I.B.M. has just announced the world’s grandest simulation of a brain, all running on a collection of ninety-six of the world’s fastest computers. The project is code-named Compass, and its initial goal is to simulate the brain of the macaque monkey (commonly used in laboratory studies of neuroscience). In sheer scale, it’s far more ambitious than anything previously attempted, simulating several times as many neurons as the roughly eighty-six billion in a human brain. Science News Daily called it a “cognitive milestone,” and Popular Science said that I.B.M.’s “cognitive computing program… just hit a major high.” Are full-scale simulations of human brains imminent, as some media accounts seem to suggest?

Compass is part of a long-standing effort known as neuromorphic engineering, an approach to building computers championed in the nineteen-eighties by the Caltech engineer Carver Mead. The premise behind Mead’s approach is that brains and computers are fundamentally different, and that the best way to build smart machines is to build computers that work more like brains. Of course, brains aren’t better than machines at every type of thinking (no rational person would build a calculator by emulating the brain, for instance, when ordinary silicon is far more accurate), but we are still better than machines at many important tasks, including common sense, understanding natural language, and interpreting complex images. Whereas traditional computers largely work in serial (one step after another), neuromorphic systems work in parallel, and draw their inspiration as much as possible from the human brain. Where typical computers are described in terms of elements borrowed from classical logic (like “AND” gates and “OR” gates), neuromorphic devices are described in terms of neurons, dendrites, and axons.
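[To make the neurons-instead-of-gates idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplified neuron models commonly used in large-scale brain simulations. The parameter values and the constant input current below are arbitrary choices for illustration, not the ones used in Compass.]

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the membrane-voltage trace and spike times of one model neuron."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # The membrane voltage leaks back toward rest and is driven by the input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:           # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset                # reset after the spike
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant drive strong enough to make the neuron fire periodically.
current = np.full(200, 1.5)
trace, spikes = simulate_lif(current)
print(len(spikes), "spikes; first few at t =", spikes[:3])
```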

In some ways, neuromorphic engineering, especially its application to neuroscience, harkens back to an older idea, introduced by the French mathematician and astronomer Pierre-Simon Laplace (1749-1827), who helped set the stage for the theory of scientific determinism. Laplace famously conjectured:
An intellect which at a certain moment [could] know all forces that set nature in motion, and all positions of all items of which nature is composed, [could] embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Much as Laplace imagined that we could, given sufficient data and calculation, predict (or emulate) the world, a growing crew of neuroscientists and engineers imagine that the key to artificial intelligence is building machines that emulate human brains, neuron by neuron. Ray Kurzweil, for instance, (whose new book I reviewed last week) has, quite literally, bet on the neuromorphic engineers, wagering twenty thousand dollars that machines could pass the Turing Test by 2029 by using simulations built on detailed brain data (that he anticipates will be collected by nanobots). The neuroscientist Henry Markram (who collaborates with I.B.M. engineers) has gone even further, betting his entire career on “whole brain emulation.”

Others, including myself, are less sanguine. At present, we still know too little about how individual neurons work to know how to put them together into viable networks. For more than twenty-five years, scientists have known the exact wiring diagram of the three hundred and two neurons in the C. elegans roundworm, but in at least half a dozen attempts nobody has yet succeeded in building a computer simulation that can accurately capture the complexities of the simple worm’s nervous system. As the N.Y.U. neuroscientist Tony Movshon notes, “Merely knowing the connectional architecture of a nervous system is not enough to deduce its function.” One also needs to know the signals flowing among the elements of neural circuits, because the same circuit can perform many different functions under different circumstances. By extension, building a device whose wiring diagram mimics the brain (e.g., Markram’s Blue Brain) does not guarantee that such a device can simulate the brain in any useful way.
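[A toy illustration of Movshon’s point: in the sketch below, the wiring diagram (which neurons connect to which) is held fixed, but changing a single gain parameter, a crude stand-in for the signals and neuromodulation flowing through the circuit, sends the very same network to completely different behavior. The network, gains, and numbers are invented for illustration and do not model any real circuit.]

```python
import numpy as np

# Fixed "wiring diagram": neuron j projects to neuron i wherever adjacency[i, j] = 1.
adjacency = np.array([[0, 1, 1],
                      [1, 0, 1],
                      [1, 1, 0]], dtype=float)

def run_network(gain, steps=50):
    """Simple firing-rate network: identical connections, different synaptic gain."""
    rates = np.array([0.5, 0.1, 0.0])
    for _ in range(steps):
        drive = gain * (adjacency @ rates)   # input each neuron receives
        rates = np.tanh(drive)               # saturating response
    return rates

print("low gain  ->", np.round(run_network(gain=0.3), 3))   # activity dies out
print("high gain ->", np.round(run_network(gain=2.0), 3))   # sustained firing
```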

One guess as to why scientists have struggled to build adequate brain models is that neurons—which form the basic unit of Compass—are too coarse a grain for capturing computation. It might be like trying to understand the weather by looking at clouds, without having an understanding of the mechanics of the water droplets within. For example, as the Rutgers scientist Randy Gallistel has argued, the common theory that equates memory with the connections between neurons may be wrong; smaller molecules like RNA might be a significantly more energy-efficient substrate for memory. If Gallistel’s speculation is correct, building a neural network that focusses on neurons while ignoring RNA is unlikely to ever succeed.

Either way, whether or not the engineers are starting with the correct basic units, the real issue is that we still know too little about how the brain is organized to understand how neurons build genuine intelligence. I.B.M.’s Compass has more neurons than any system previously built, but it still doesn’t do anything with all those neurons. The short report published on the new system is full of vital statistics—how many neurons, how fast they run—but there’s not a single experiment to test the system’s cognitive capacities. It’s sort of like having the biggest set of Lego blocks in town without a clue of what to make out of them. The real art is not in buying the Legos but in knowing how to put them together. Until we have a deeper understanding of the brain, giant arrays of idealized neurons will tell us less than we might have hoped. Simply simulating individual neurons without knowing more about how the brain works at the circuit level is like throwing Legos in a pile and hoping that they create a castle; what we really need are directions for creating the castle, but these can only come from psychologists and neuroscientists working together closely to try to understand the kinds of circuits out of which minds are made.

Carver Mead, as it happens, was also one of the early champions of Moore’s Law, the idea that computers are rapidly increasing in power, doubling every eighteen to twenty-four months. I.B.M.’s latest success is a testament to that law; even ten years ago, a machine of Compass’s scope was almost inconceivable. But what’s not doubling every eighteen to twenty-four months is our understanding of how the brain actually works, of the computations and circuits that underlie neural function. In debates about when artificial intelligence will come, many writers emphasize how cheap computation has become. But what I.B.M. shows is that you can have all the processing power in the world, but until you know how to put it all together, you still won’t have anything nearly as smart as the human brain.

Illustration by John Ritter.
