neurons to become highly excited. This is easy to see for an ordinary VQ codebook. Imagine a
probability density function in a high-dimensional input space (the raw input to the region).
The feature detector responses can be represented as points spread out in a roughly equiprobable
manner within this data cloud (at least before projection into their low-dimensional subspaces)
(Kohonen, 1995). Thus, given any specific input, we can choose to highly excite a roughly fixed
number of the highest-precedence feature detector vectors that lie closest in angle to that input
vector.
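A rough illustrative sketch of this angular selection, written in Python with numpy (the codebook size, the value of k, and the function name excite_by_angle are hypothetical choices, not part of the theory):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: each row is one feature detector's weight vector,
# spread roughly equiprobably through the occupied part of the input space.
n_detectors, dim = 500, 64
codebook = rng.standard_normal((n_detectors, dim))

def excite_by_angle(x, codebook, k=20):
    # Cosine similarity between the input and every detector vector;
    # the k most co-aligned (closest in angle) detectors become highly excited.
    cos = (codebook @ x) / (np.linalg.norm(codebook, axis=1) * np.linalg.norm(x))
    return np.argsort(cos)[-k:][::-1]

x = rng.standard_normal(dim)            # a specific external input
excited = excite_by_angle(x, codebook)  # a roughly fixed-size excited set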
In effect, if we imagine a rising externally supplied operation control signal (effectively
supplied to all of the feature detector neurons that have not been shut down by the precedence
principle), as the sum of the control signal and each neuron's excitation level (due to the external
inputs) climbs, the most highly excited neurons will cross their fixed ''thresholds'' first and ''fire''
(there are many more details than this, but this general idea is hypothesized to be correct). If the rate
of rise of the operate signal is constant, a roughly fixed number of uninhibited feature detector
neurons will begin ''firing'' before local inhibition from these ''early winners'' prevents any further
winners from arising. This leaves a set of active neurons of roughly fixed size. The theory
presumes that such fixed sets will, by means of their coactivity and the mutually excitatory
connections that develop between them, tend to become established and stabilized as the internal
feature attractor circuit connections gradually form. Each such neuron group, once adjusted and
stabilized as an attractor state of the module over many such trials, becomes one of the symbols
in the lexicon.
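A caricature of this rising operate signal, again as a hypothetical Python sketch: every neuron shares the same fixed threshold, the control signal rises at a constant rate, and firing stops once the first k winners appear (standing in for local inhibition). The particular numbers are arbitrary.

import numpy as np

rng = np.random.default_rng(1)

n = 200
excitation = rng.random(n)          # input-driven excitation of each detector
inhibited = rng.random(n) < 0.3     # detectors shut down by the precedence principle
threshold = 1.5                     # fixed firing threshold, identical for all neurons
k = 15                              # winners admitted before local inhibition takes hold

fired = []
control, dt = 0.0, 0.01             # operate signal rising at a constant rate
while len(fired) < k:
    control += dt
    # Any uninhibited neuron whose summed drive now exceeds threshold fires;
    # the most excited neurons necessarily cross first.
    crossing = np.flatnonzero(~inhibited & (control + excitation >= threshold))
    for i in sorted(crossing, key=lambda j: -excitation[j]):
        if i not in fired:
            fired.append(i)
        if len(fired) >= k:
            break                   # the early winners' inhibition stops further firing

Because all neurons share the same threshold and rate of rise, the fired set is exactly the k most excited uninhibited neurons, i.e., a k-winners-take-all operation.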
Each final symbol can be viewed as a localized ''cloud'' in the VQ external input
representation space, composed of a roughly uniform number of nearby, coactive feature detector
responses (imagine a VQ where there is not one winning vector, but many). Together, these
clouds cover the entire portion of the space in which the external inputs are seen. Portions of the
VQ space with higher input vector probability density values automatically have denser clouds.
Portions with lower density have more diffuse clouds. Yet, each cloud is represented by roughly the
same number of vectors (neurons). These clouds are the symbols. In effect, the symbols form a
Voronoi-like partitioning of the occupied portion of the external input representation space
(Kohonen, 1984, 1995); except that the symbol cloud partitions are not disjoint, but overlap
somewhat.
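The contrast between a classical Voronoi assignment and overlapping symbol clouds can be sketched in the same hypothetical style: assigning an input only to its single nearest codebook vector yields disjoint Voronoi cells, whereas assigning it to its k nearest vectors yields clouds that overlap for nearby inputs.

import numpy as np

rng = np.random.default_rng(2)

codebook = rng.standard_normal((100, 8))     # hypothetical codebook vectors
x = rng.standard_normal(8)                   # an external input
y = x + 0.05 * rng.standard_normal(8)        # a nearby input

def nearest(v, codebook):
    # Classical VQ: the single closest codebook vector (a disjoint Voronoi cell).
    return int(np.argmin(np.linalg.norm(codebook - v, axis=1)))

def cloud(v, codebook, k=10):
    # Symbol-style assignment: the k closest codebook vectors (an overlapping cloud).
    return set(np.argsort(np.linalg.norm(codebook - v, axis=1))[:k])

print(nearest(x, codebook), nearest(y, codebook))      # may differ at a cell boundary
print(len(cloud(x, codebook) & cloud(y, codebook)))    # nearby inputs share most of a cloud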
Information theorists have not spent much time considering the notion of having a cloud
of ''winning vectors'' (i.e., what this theory would term a symbol) as the outcome of the
operation of a vector quantizer. The idea has always been to allow only the single VQ codebook
vector closest to the ''input'' to win. From a theoretical perspective, the reason clouds of
points are needed in the brain is that the connections which define the ''input'' to the module
(whether they be sensory inputs arriving via thalamus, knowledge links arriving from other portions
of cortex, or yet other inputs) only connect (randomly) to a sparse sampling of the feature vectors.
As mentioned above, this causes the feature detector neurons' vectors to essentially lie in relatively
low-dimensional random subspaces of the VQ codebook space. Thus, to comprehensively characterize
the input (i.e., to avoid significant information loss), a number of such ''individually incomplete''
but mutually complementary feature representations are needed. So, only a cloud will do. Of course,
the beauty of a cloud is that it is exactly what the stable states of a feature attractor neuronal
module must be in order to achieve the necessary confabulation ''winner-take-all'' dynamics.
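The role of sparse random connectivity can likewise be illustrated with a hypothetical sketch (the fan-in, population sizes, and cloud size below are arbitrary): one detector sees only a low-dimensional random slice of the input, but a cloud of coactive detectors jointly samples nearly every input component.

import numpy as np

rng = np.random.default_rng(3)

dim = 100                     # dimensionality of the raw input to the region
n_detectors = 500
fan_in = 10                   # each detector randomly samples only a few input components

# Random sparse connectivity: row i lists which input components detector i sees.
connections = np.array([rng.choice(dim, size=fan_in, replace=False)
                        for _ in range(n_detectors)])

# One detector alone spans only a 10-dimensional random subspace of the input ...
print(len(set(connections[0])))                 # 10 of 100 components

# ... but a cloud of coactive detectors jointly covers nearly the whole input.
cloud = rng.choice(n_detectors, size=60, replace=False)
print(len(set(connections[cloud].ravel())))     # close to 100 components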
A subtle point the theory makes is that the organization of a feature attractor module is
dependent upon which input data source is available first. This first-available source (whether
from sensory inputs supplied through thalamus or active symbol inputs from other modules) drives
development of the symbols. Once development has finished, the symbols are largely frozen
(although they sometimes can change later due to symbol disuse and new symbols can be added
in response to persistent changes in the input information environment). Since almost all aspects of
cognition are hierarchical, once a module is frozen, other modules begin using its assumed fact