If the representations are sparse, the vast majority of cells will become inactive
quickly. The network design must ensure that cells become inactive only if they are
not needed for further computation.
Ordered update does not require global communication. If integrate-and-fire
neurons are used as processing elements, those cells that receive a salient stimulus
that fits their receptive field will fire earlier than cells that get a suboptimal stimulus.
The firing cells trigger their neighbors via excitatory links if the neighboring cells
are already close enough to the firing threshold. This leads to an avalanche effect
that produces a fast traveling wave of activity. The wave propagates actively until it either collides with a wave coming from the opposite direction or reaches cells whose activity is too low to be triggered. If the cells have approximately the same refractory time, all cells that participated in the wave become sensitized synchronously for a new trigger event.
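The wave dynamics described above can be illustrated with a toy one-dimensional chain of integrate-and-fire cells that excite their immediate neighbors. This is only a minimal sketch; the function name and all parameter values are illustrative and not taken from the thesis:

```python
import numpy as np

def simulate_wave(n_cells=20, threshold=1.0, lateral_weight=0.6,
                  stimulus_cell=0, stimulus=1.2, steps=30):
    """Toy 1-D integrate-and-fire chain: a salient stimulus at one cell
    triggers an avalanche that travels through near-threshold neighbors."""
    # All cells start close to the firing threshold;
    # only the stimulated cell is pushed above it.
    potential = np.full(n_cells, 0.5)
    potential[stimulus_cell] += stimulus
    fired_at = np.full(n_cells, -1)  # update step at which each cell fired
    for t in range(steps):
        firing = (potential >= threshold) & (fired_at < 0)
        if not firing.any():
            break                    # wave died out (or never started)
        fired_at[np.where(firing)[0]] = t
        # Each firing cell sends lateral excitation to its two neighbors.
        excitation = np.zeros(n_cells)
        for i in np.where(firing)[0]:
            if i > 0:
                excitation[i - 1] += lateral_weight
            if i < n_cells - 1:
                excitation[i + 1] += lateral_weight
        potential[firing] = 0.0      # reset; fired cells do not refire here
        potential += excitation
    return fired_at

wave = simulate_wave()  # wave[i] is the update step at which cell i fired
```

With these parameters the wave advances one cell per step, so `wave` contains 0, 1, ..., 19; with `stimulus=0.0` no cell reaches threshold and no wave starts.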
The ordered update of cells effectively converts the recurrent lateral connectivity
into a feed-forward network where the graph structure depends on the relative ac-
tivities. In [23] I applied the activity-driven update to a binarization network similar
to the one described in the previous section. I demonstrated that binarization using
activity-driven update improved ZIP code recognition performance compared to
the buffered update mode.
Although activity-driven update offers some advantages over buffered update, it will not be used in the remainder of the thesis. The reason for this decision is the computational cost of implementing activity-driven update on a serial machine. However, if the basic processing elements were chosen to
be integrate-and-fire neurons implemented with an event-based simulator, activity-
driven update would occur naturally. In this case, the ordering would be done using
a priority queue for the events.
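Such event-based ordering can be sketched with Python's heapq module, which implements the priority queue. The function name and the (time, cell, charge) event format below are hypothetical conventions chosen for this illustration:

```python
import heapq

def event_driven_update(initial_events, weights, threshold=1.0):
    """Event-based activity-driven update: events are processed in
    temporal order via a priority queue (min-heap keyed on time).
    `weights` maps a firing cell to (neighbor, delay, weight) triples."""
    queue = list(initial_events)   # events: (time, cell, charge)
    heapq.heapify(queue)
    potential = {}                 # membrane potential per cell
    fired = {}                     # cell -> time at which it fired
    while queue:
        t, cell, charge = heapq.heappop(queue)
        if cell in fired:
            continue               # cell already fired; ignore late input
        potential[cell] = potential.get(cell, 0.0) + charge
        if potential[cell] >= threshold:
            fired[cell] = t
            # A firing cell schedules delayed excitation of its neighbors.
            for neighbor, delay, w in weights.get(cell, []):
                heapq.heappush(queue, (t + delay, neighbor, w))
    return fired

# A chain of three cells; a suprathreshold stimulus at cell 0 triggers
# a wave that reaches cell 2 after two unit delays.
weights = {0: [(1, 1.0, 1.0)], 1: [(2, 1.0, 1.0)]}
fired = event_driven_update([(0.0, 0, 1.5)], weights)
# fired == {0: 0.0, 1: 1.0, 2: 2.0}
```

The heap guarantees that cells receiving the most salient (earliest) input fire first, so the ordering emerges from the event times themselves rather than from any global synchronization.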
4.3.4 Invariant Feature Extraction
The last example of this chapter demonstrates that invariant feature extraction is
possible in the Neural Abstraction Pyramid architecture. In Section 2.1, we saw
that the ventral stream of the human visual system extracts features which are in-
creasingly invariant to object transformations. One example of the supporting neu-
robiological evidence was published by Ito et al. [108]. They found invariance of
neural responses to object size and position in the inferotemporal cortex (IT). Such
invariance is useful for object recognition since transformations, such as translation,
rotation and scaling, do not alter the identity of an object.
However, perfect invariance to such transformations is not desirable. If rotation invariance were perfect, the digits 6 and 9 could not be distinguished, and if scale invariance were perfect, a model car could not be distinguished from its full-size original. There is neurobiological evidence that the human visual system implements only limited invariance, and only for familiar stimuli. For example, Logothetis et
al. [146] found view-tuned cells in area IT that responded to complex stimuli and
showed limited invariance to several transformations. Nazir and O'Regan [165]
and Dill and Fahle [53] found evidence against invariance for random dot patterns.