March 9, 2010

Evaluating HTMs, Part 6: Why Time is Necessary for Learning

See also Part 1, Part 2, Part 3, Part 4, Part 5

"Hierarchical Temporary Memory, Concepts, Theory, and Terminology " by Hawkins and George, Section 5, Why is Time Necessary to Learn? clarifies the role of temporal sequences and temporal pattern points in both learning and recognition (inference) by HTMs.

The authors use a good example, a cut versus an uncut watermelon, to distinguish between pattern matching algorithms and the way HTMs learn to recognize patterns created by objects (causes, in HTM vocabulary). Any real-world animal, when viewed, presents an almost infinite number of different visual representations. If you use a type of animal, say horses instead of a particular horse, the data is even more divergent, and pattern matching does not work well. But allow an HTM to view an animal or set of animals over time, and it will build up the ability to recognize the animal from different viewpoints: front, back, profile, or against most sorts of backgrounds. Doing that requires data presented over time, and data that is close sequentially should be similar but not identical. Early data might be of a horse, head on, far away, which gradually resolves to a horse viewed close up. So over time the HTM can capture the totality of the horse.
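
To make that concrete for myself, here is a toy sketch (my own illustration, nothing like Numenta's actual implementation) of the basic idea: patterns that appear close together in time get grouped as one cause, so many different snapshots of a horse end up in the same group. The function name, the co-occurrence threshold, and the toy frame labels are all made up for this example.

```python
from collections import defaultdict

def temporal_groups(sequence, window=1, min_count=2):
    """Group pattern IDs that co-occur within `window` steps of each other."""
    # Count how often each pair of patterns appears close together in time.
    links = defaultdict(int)
    for i, a in enumerate(sequence):
        for b in sequence[i + 1:i + 1 + window]:
            if a != b:
                links[frozenset((a, b))] += 1

    # Union-find: patterns joined by a frequent-enough temporal link share a group.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for pair, count in links.items():
        if count >= min_count:
            a, b = tuple(pair)
            union(a, b)

    groups = defaultdict(set)
    for p in sequence:
        groups[find(p)].add(p)
    return list(groups.values())


# Views of a horse moving nearer and farther, then a tree swaying: time ties
# the horse views together and the tree views together.
frames = ["horse_far", "horse_mid", "horse_near", "horse_mid", "horse_far",
          "horse_mid", "horse_near",
          "tree", "tree_windy", "tree", "tree_windy"]
print(temporal_groups(frames))
# -> the three horse views fall in one group, the two tree views in another
```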

Combining recognition of causes with names given by an outside source is also considered. No amount of viewing a horse will tell an HTM that humans call the thing "horse." You can do "supervised learning" with an HTM, training it to associate a name with a cause by imposing states on the top level of the HTM hierarchy. But it should be a simple extension to have a vocabulary-learning HTM and an object-learning HTM in a hierarchy, with a learn-to-name-the-object HTM on top.
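
Here is a toy sketch of that supervised top node idea, again my own illustration rather than anything from the paper: the top level's current cause is paired with a label imposed from outside during training, and the node simply learns which cause co-occurs with which name.

```python
from collections import Counter, defaultdict

class NamingNode:
    def __init__(self):
        # counts[cause][label] = how often this cause was seen with this label
        self.counts = defaultdict(Counter)

    def train(self, top_level_cause, imposed_label):
        """During supervised training an external teacher supplies the label."""
        self.counts[top_level_cause][imposed_label] += 1

    def name(self, top_level_cause):
        """At inference time, return the most frequently associated label."""
        labels = self.counts.get(top_level_cause)
        return labels.most_common(1)[0][0] if labels else None

node = NamingNode()
for _ in range(5):
    node.train(top_level_cause=7, imposed_label="horse")   # cause 7 = the learned horse cause
node.train(top_level_cause=7, imposed_label="pony")        # an occasional mislabel
print(node.name(7))   # -> "horse"
```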

Once an HTM has learned to recognize images (or other types of data) presented over time, it can also recognize static images (or data). The authors say "The Belief Propagation techniques of the hierarchy will try to resolve the ambiguity of which sequences are active." I am not clear on that. It seems to me that static temporal patterns happen often enough in the real world that some temporal pattern points will represent static causes. If the horse stands still in the real world, it would generate such temporal patterns. As the data goes up the hierarchy it tends to filter out ambiguity and stabilize causes, so a leaping horse should resolve to the same cause as a frozen image of a leaping horse at some point high enough in the hierarchy.
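
To illustrate what I mean by stabilizing, here is a toy sketch (my own, not the paper's belief propagation math) in which each level blends new evidence with its previous belief. A frozen frame repeated over time and a varying sequence of views of the same cause settle on the same answer; the inertia parameter and the example beliefs are made up.

```python
def update_belief(prior, evidence, inertia=0.7):
    """Blend the previous belief with new evidence and renormalize."""
    blended = {c: inertia * prior.get(c, 0.0) + (1 - inertia) * p
               for c, p in evidence.items()}
    total = sum(blended.values())
    return {c: p / total for c, p in blended.items()}

# Varying views of a leaping horse versus one frozen frame of the same leap.
moving = [{"horse": 0.6, "deer": 0.4}, {"horse": 0.8, "deer": 0.2},
          {"horse": 0.7, "deer": 0.3}, {"horse": 0.9, "deer": 0.1}]
frozen = [{"horse": 0.7, "deer": 0.3}] * 4

for frames in (moving, frozen):
    belief = {"horse": 0.5, "deer": 0.5}
    for f in frames:
        belief = update_belief(belief, f)
    print(belief)  # both runs end up dominated by "horse"
```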

Section 6 is a sort of frequently asked questions part of the paper. I'm not sure whether I'll cover all of it, or in what order, but I do want to go back to Section 3.3 on belief propagation before closing out this series.