Years ago, I read On Intelligence by Jeff Hawkins as one of my first introductions to brain-derived artificial intelligence.
Far from the statistical black boxes, he had the ambition to explain a surprising pattern seen in the neocortex and to extract an algorithm from it.
There are many observed patterns in the neocortex, the most scrutinized blob of fat there is, and most of them are functional divisions; we have long known that the brain is split into functional areas, first from localized brain damage and, more recently, from the study of synesthesia and from imaging techniques that display live activation patterns in the brain.
But what J. Hawkins presents in this book is something the average person has never heard of before:
a pattern that is not localized but repeats all across the neocortex, a pattern that can be observed in every centralized nervous system but has a specificity in humans…
And this well-sold feature is nothing less than the layering of the neocortex, which really makes sense.
Our skin is layered too, with different functions orchestrated at different depths. That is also why a 1st degree burn is not as bad as a 3rd degree burn: damaging only your epidermis won't even hurt, as it doesn't reach your nerves or anything really alive.
And you can probably expect a similar importance in the variation between neural layers. Except the neocortex doesn't have 3 layers like the skin, but up to 6, packed in a really thin sheet.
This is the big announcement: we have 6 similarly organized layers all across our functional areas. Except for the motor parts, which have 4, and other mammals, which also have only 4; we are the 6-neural-layer monkey!
He then states the importance of pyramidal neurons in this organization, without mentioning the functions of glial cells at all, and ends up with a hierarchical representation where the hippocampus sits at the top.
One point he insists on the most: those divisions are functionally organized in volumes, or “cortical columns”, each carrying a processing unit and communicating with other cortical columns, which explains functional localization. The extracted algorithm idea was named “Hierarchical Temporal Memory” and hastily led to Numenta, which produced white papers, open-sourced its code, and offered an online machine learning algorithm that detects irregularities in a stream, with a decent but not exceptional success rate.
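Numenta's actual HTM is far more involved, but for a point of comparison, here is what a much simpler online irregularity detector on a stream can look like: a rolling z-score that flags values deviating strongly from recent history. All names and parameters here are mine, purely for illustration.

```python
# NOT HTM: a baseline online anomaly detector for comparison.
# Flags a value as irregular when it deviates strongly from a rolling mean.
import math
from collections import deque

def anomalies(stream, window=50, threshold=3.0):
    """Yield (index, value) pairs that look irregular given recent history."""
    recent = deque(maxlen=window)
    for t, x in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            std = math.sqrt(sum((v - mean) ** 2 for v in recent) / window)
            # the max() guard handles a perfectly flat history (std == 0)
            if abs(x - mean) > threshold * max(std, 1e-8):
                yield t, x
        recent.append(x)

# a flat signal with one spike at t = 120
signal = [0.0] * 120 + [10.0] + [0.0] * 30
print(list(anomalies(signal)))  # → [(120, 10.0)]
```

Unlike HTM, this detector has no notion of learned temporal sequences; it only looks at local statistics, which is exactly the kind of gap Numenta's approach was meant to fill.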
I wouldn’t depict the brain that way myself today, but back then it left me with a deep impression and the will to clarify as much as I could.
As I tried the neural approach, I painfully learnt that we really lack functional studies of complex behavior emerging from compounds of “neural elements” to go further in this direction.
Now that I have much more perspective on this book, here are other open questions I would like to hear about.
One of the first things that stunned me is the absence of architectural plasticity. If hierarchical patterns are supposed to form from the “blank” neural sheet of a newborn, cortical columns are not enough to define functionally which layer requires which nodes and how the gates are shaped.
Similarly, I have never seen a machine learning algorithm optimize both its nodes and its edges; we usually approximate the size of the hidden layers (the number of hidden features) to make it good enough for learning to happen. The brain seems to rely more on time precision, dynamic hierarchy and redundancy.
What do you think we lose in that mitigated approach? Do we already have algorithms that optimize both nodes and edges in a learning network? How could we account for the hierarchical approach?
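One existing answer to that question is neuroevolution: NEAT (NeuroEvolution of Augmenting Topologies, Stanley & Miikkulainen) evolves both connection weights and network topology. Below is a toy sketch of that idea only, not real NEAT (no crossover, no speciation, just hill climbing with structural mutations); every name and constant is my own choice for illustration.

```python
# Toy "optimize both nodes and edges" sketch, loosely inspired by NEAT.
# Structural mutations: perturb a weight, add an edge, or add a node
# by splitting an existing edge in two.
import copy
import math
import random

random.seed(0)

XOR = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
       ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Genome:
    """A tiny feed-forward graph whose nodes AND edges can both mutate."""
    def __init__(self):
        self.inputs = [0, 1]
        self.order = [2]            # non-input nodes in evaluation order; 2 = output
        self.next_id = 3
        self.edges = {(0, 2): random.uniform(-1, 1),
                      (1, 2): random.uniform(-1, 1)}

    def forward(self, x):
        act = {0: x[0], 1: x[1]}
        for node in self.order:
            total = sum(w * act.get(src, 0.0)
                        for (src, dst), w in self.edges.items() if dst == node)
            act[node] = sigmoid(total)
        return act[2]

    def mutate(self):
        r = random.random()
        if r < 0.6:
            # perturb one existing weight
            key = random.choice(list(self.edges))
            self.edges[key] += random.gauss(0.0, 0.5)
        elif r < 0.8:
            # add an edge; sources are restricted to earlier nodes to stay acyclic
            i = random.randrange(len(self.order))
            dst = self.order[i]
            src = random.choice(self.inputs + self.order[:i])
            self.edges.setdefault((src, dst), random.uniform(-1, 1))
        else:
            # add a node by splitting an edge (the NEAT-style structural mutation)
            a, b = random.choice(list(self.edges))
            w = self.edges.pop((a, b))
            n, self.next_id = self.next_id, self.next_id + 1
            self.order.insert(self.order.index(b), n)
            self.edges[(a, n)] = 1.0
            self.edges[(n, b)] = w

def error(g):
    return sum((g.forward(x) - y) ** 2 for x, y in XOR)

best = Genome()
init_err = error(best)
for _ in range(3000):
    cand = copy.deepcopy(best)
    cand.mutate()
    if error(cand) < error(best):
        best = cand

print(f"nodes: {len(best.inputs) + len(best.order)}, "
      f"edges: {len(best.edges)}, error: {error(best):.3f} (started at {init_err:.3f})")
```

Even this crude version shows the trade-off the question points at: structural search is slow and noisy compared to gradient descent on a fixed architecture, which is presumably part of what we give up in the usual "pick the layer sizes up front" approach.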
I am not sure I would consider the hippocampus as the top of the cognitive hierarchy.
In neurogenesis, we observe cortices being built on top of one another. We also have a bootstrap period, youth, required to progressively develop and integrate structured data processes. As such, the physical world is both the first processing system we build upon and our reference for testing hypotheses.
Would it be correct to consider that the hard world acts at both ends of that hierarchy, making it more of a cognition cycle?
Regarding the importance of pyramidal neurons, they are numerous especially in complex sensing areas like those related to vision, audition or even spatial representation. The nature of those signals is vibration, divided into much more familiar categories like a color or the pitch of a note.
And there are some regularities, as those senses can be represented in a computer with 3 dimensions. For instance, we can describe each of them with 3 perceptive dimensions:
sound: amplitude, frequency and duration
color: luminosity, hue and saturation
pain: intensity, location and spread
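Among the three, the color triple is already a standard computer representation: HSL. Python's standard library even ships the conversion, which gives a quick sanity check of the dimensions listed above (colorsys calls luminosity "lightness" and orders the triple H, L, S):

```python
import colorsys

# Pure red in RGB; colorsys returns (hue, lightness, saturation), all in [0, 1]
h, l, s = colorsys.rgb_to_hls(1.0, 0.0, 0.0)
print(h, l, s)  # → 0.0 0.5 1.0
```

So pure red is hue 0, medium lightness, full saturation: the same three perceptive axes as above, just formalized.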
Could this way of perceiving our world be caused by pyramidal neurons? Does it also account for part of our subjectivity? Isn’t there a similarity between those 3 terms across senses?