Machine Understanding



October 16, 2009

Bottoms Up for Machine Understanding

The first paragraph of "Towards a Mathematical Theory of Cortical Micro-circuits" [Dileep George & Jeff Hawkins, PLoS Computational Biology, October 2009, Volume 5, Issue 10] states:

Understanding the computational and information processing roles of cortical circuitry is one of the outstanding problems in neuroscience. ... the data are not sufficient to derive a computational theory in a purely bottom-up fashion.

My own cortex, probably not wanting to give itself a headache by proceeding too rapidly into what looks like a dense and difficult paper, immediately drifted off into thoughts on deriving a computational theory in a purely bottom-up fashion.

The closer we get to the physical bottom, the easier it is to see how such a project might work. Suppose we could model an entire human brain at the molecular level. We imagine a scanner that can tell us, for a living person who we admit is intelligent and conscious, where each molecule is, to a sufficient degree of exactitude. We would also have a computational system for the rules of molecular physics, and appropriate inputs and outputs.

Unless you believe that the human mind is not material (a dualist or idealist philosophical view), such a molecular-detail model should run like a human brain. At first it should think (and talk and act, to the extent the outputs allow) exactly like the person whose brain was scanned.

However, that does not mean scientists would understand how the brain works, or how a computational machine could exhibit understanding. Reproducing a phenomenon and understanding a phenomenon are not the same thing. The advantage of such a molecular computational brain model would be that we could run experiments on it in a way that could not be done on human beings or even on other mammals. We could start inputting and tracing data flows. We could interrupt and view the model in detail at any time. We could change parameters and isolate subsystems. Perhaps, further in the future, such a model could even be constructed without having to scan a human brain for the initial state.

At present, for a bottom-up approach that might actually be workable in less than a decade, we would probably want a neuron-by-neuron model (probably including all the non-neural supporting cells in the brain as well). However, a lot of new issues arise even at this seemingly low level, even if we presume we have some way to scan into the model all of the often-complicated axon-to-dendrite paths and synapses. If learning is indeed based on synapse strength (the Hebbian hypothesis), we would need both a good model of synapse dynamics and a detailed original state of the synapses. That would require modeling the synapses themselves at the molecular level, or perhaps one level up, at some molecular-aggregate level. Since we have no way to capture an adult's learned synaptic state, in effect it would not be possible to model an adult brain that has already exhibited intelligence and understanding. We would need to start with a baby brain and hope that it would go through a pattern of neural information development similar to that of a human child.
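The Hebbian hypothesis mentioned above is often summarized as "cells that fire together wire together." As a toy illustration only (not the detailed synapse-dynamics model the paragraph calls for, and not anything from the George & Hawkins paper), the core update rule can be sketched in a few lines: each weight grows in proportion to the correlation of presynaptic and postsynaptic activity.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """One Hebbian step: strengthen each synapse in proportion to the
    product of its presynaptic and postsynaptic activity
    (delta_w[i, j] = lr * post[i] * pre[j])."""
    return w + lr * np.outer(post, pre)

# Toy network: 3 presynaptic neurons feeding 2 postsynaptic neurons.
rng = np.random.default_rng(0)
w = np.zeros((2, 3))
for _ in range(100):
    pre = rng.random(3)             # presynaptic firing rates in [0, 1)
    post = w @ pre + rng.random(2)  # postsynaptic response plus noise
    w = hebbian_update(w, pre, post)

print(w.shape)  # (2, 3)
```

Note what the toy leaves out: real synaptic plasticity needs normalization or decay terms (pure Hebbian growth is unbounded), timing dependence, and, as the paragraph argues, a measured initial state, which is exactly the hard part.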

A complete neural-level model would be much easier to test intelligence hypotheses on than a molecular-level model, though it would not in itself indicate that we understand how humans understand the world. By running the model in two parallel instances (with identical starting states), but with selected malfunctions, we could probably isolate most of the sub-systems required for intelligence. This should help us build a comprehensible picture of how intelligence operates once it is established, and of how it can be constructed by neural circuits from whatever the raw ingredients of intelligence turn out to be.
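The two-parallel-instances idea is essentially a lesion (ablation) experiment. A minimal sketch, using a made-up two-layer toy network rather than any real brain model: run an intact copy and an identical copy with one "malfunctioning" (silenced) hidden unit, and read off that unit's contribution from how the outputs diverge.

```python
import numpy as np

def forward(x, w1, w2, lesion_mask=None):
    """Tiny two-layer network. lesion_mask zeroes out selected hidden
    units, standing in for a 'selected malfunction' in one instance."""
    h = np.maximum(0.0, w1 @ x)   # hidden layer (ReLU)
    if lesion_mask is not None:
        h = h * lesion_mask
    return w2 @ h

# Deterministic toy weights so the comparison is easy to follow.
w1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, 1.0, 1.0]])
w2 = np.ones((2, 4))
x = np.array([1.0, 2.0, 3.0])

intact = forward(x, w1, w2)
mask = np.ones(4)
mask[3] = 0.0  # "malfunction": silence hidden unit 3
lesioned = forward(x, w1, w2, mask)

print(intact, lesioned)  # [12. 12.] [6. 6.]  -- the gap is unit 3's contribution
```

In a real neural-level model the same logic applies, just at vastly greater scale: identical starting states guarantee that any behavioral divergence is attributable to the deliberately introduced malfunction.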

Despite our lack of such complete bottom-up models, I don't think it is too early to try to reconstruct how the brain works, or how to make machines intelligent. The paper outlines the HTM (Hierarchical Temporal Memory) approach to this subject. HTM was based on a great deal of prior work in neuroscience and in modeling neural aggregates with computers. Often in science, success has come from the combination of bottom-up and top-down approaches. Biological species, and fossil species, were long catalogued and studied before Darwin's theory of evolution revealed the connections between species. Darwin did not invent the concept of evolution of species, or of inherited traits, which many scientists already believed were shown by the fossil record and living organisms. He added the concept of natural selection, and suddenly evolution made sense. The whole picture popped into view in a way that any intelligent human being could see.