Fig. 3.18. Iterative pattern completion proposed by Seung [209]: (a) architecture of the net-
work (two layers are connected by vertical feedback loops); (b) learned receptive fields (to-
pographic feature map); (c) iterative pattern completion (images adapted from [209]).
sketched in Figure 3.17(a). For patterns with continuous variability, such discrete
attractors may not be appropriate. Seung [209] proposed to represent continuous
pattern manifolds with attractive manifolds of fixed points, continuous attractors, as
illustrated in Figure 3.17(b). These attractors are parameterized by the instantiation
or pose descriptors of the object. All instantiations have similar low energy, such
that a change in pose can be achieved without much effort. When confronted with
an incomplete pattern, the network dynamics quickly evolves towards the closest ob-
ject representation. The incomplete pattern is thus projected orthogonally onto
the manifold and thereby completed to a pattern with the same pose.
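As a rough illustration of this completion-by-projection idea, the following sketch (a hypothetical NumPy example, not Seung's model) parameterizes a one-dimensional manifold of activity bumps by their position, the pose, and completes a partially occluded pattern with the closest manifold point, where closeness is measured only on the observed entries.

import numpy as np

def bump(pose, n=64, width=4.0):
    # pattern on the manifold for a given pose (here: the bump position)
    x = np.arange(n)
    return np.exp(-0.5 * ((x - pose) / width) ** 2)

def complete(observed, mask, n=64):
    # project onto the manifold: find the pose whose pattern best matches
    # the observed entries, then fill in the missing ones from that pattern
    poses = np.linspace(0.0, n - 1, 200)
    errors = [np.sum(mask * (observed - bump(p, n)) ** 2) for p in poses]
    closest = bump(poses[int(np.argmin(errors))], n)
    return np.where(mask > 0, observed, closest)

n = 64
original = bump(30.0, n)
mask = np.ones(n)
mask[25:35] = 0.0                            # occlude part of the pattern
completed = complete(original * mask, mask, n)
print(np.abs(completed - original).max())    # small residual error

The attractor network reaches an equivalent result through its dynamics rather than through an explicit search over poses.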
Seung suggested using a neural network with vertical feedback to learn such
continuous attractors, as shown in Figure 3.18(a). The network consists of two
16 × 16 sheets of neurons that compute a weighted sum of their inputs, followed by
a rectification nonlinearity. Both layers are connected by 5 × 5 local receptive fields.
The sensory layer is initialized with the incomplete pattern, and the network is
trained to reconstruct the original pattern after two iterations. The incomplete
patterns are normalized images of the handwritten digit two, degraded by setting
a 9 × 9 patch at a random location to zero. By training with gradient descent on the
completion error, the weights shown in Figure 3.18(b) emerge. They form a topographic
map of localized oriented features. Figure 3.18(c) illustrates the reconstruction pro-
cess for an example. One can see that the network is indeed able to fill in the missing
image parts. Note that this is not as difficult as it seems, since the network knows
a priori that the target image will be a normalized digit of class two.
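A loose sketch of such a completion network is given below, assuming PyTorch and random placeholder images in place of the digit data; the shared-weight convolutions, the unrolled feedback loop, and the training loop are simplifications for illustration, not the original implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CompletionNet(nn.Module):
    # two 16x16 sheets coupled by 5x5 receptive fields with rectification;
    # the feedback loop is unrolled for a fixed number of iterations
    def __init__(self):
        super().__init__()
        self.up = nn.Conv2d(1, 1, kernel_size=5, padding=2)    # bottom-up
        self.down = nn.Conv2d(1, 1, kernel_size=5, padding=2)  # top-down feedback

    def forward(self, x, iterations=2):
        for _ in range(iterations):
            hidden = F.relu(self.up(x))      # upper sheet
            x = F.relu(self.down(hidden))    # completed pattern in lower sheet
        return x

net = CompletionNet()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

targets = torch.rand(32, 1, 16, 16)          # placeholder for digit-two images
occluded = targets.clone()
occluded[:, :, 4:13, 4:13] = 0.0             # 9x9 patch set to zero

for step in range(200):
    optimizer.zero_grad()
    loss = F.mse_loss(net(occluded), targets)   # completion error
    loss.backward()
    optimizer.step()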
Somato-Dendritic Interactions Integrating Top-Down and Bottom-Up Signals.
Siegel et al. [210] proposed a model that involves vertical feedback between two
areas, as sketched in Figure 3.19(a). Both areas are reciprocally connected by ex-
citatory axons. The excitatory neurons have two sites of synaptic integration. The
apical dendrite integrates top-down influences, while bottom-up projections termi-
nate in the basal dendritic tree. The areas also contain inhibitory neurons that project
to all excitatory neurons.
Each area is modeled as a one-dimensional array. Both are connected by local
retinotopic links. The neurons are simulated using a conductance-based model with
active sodium and potassium conductances for spike generation. Synaptic conduc-
tances are implemented for glutamatergic and two types of GABAergic transmission.
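A highly simplified sketch of this somato-dendritic integration is given below: a leaky integrate-and-fire toy in NumPy, not the conductance-based model of Siegel et al., with purely illustrative parameters. It shows how a bottom-up drive that is subthreshold on its own can be pushed over threshold once top-down input arrives at the apical compartment.

import numpy as np

def simulate(bottom_up, top_down, g_inh=0.05, dt=0.1, v_th=1.0):
    # one excitatory cell with two sites of synaptic integration: basal
    # (bottom-up) drive acts on the soma, apical (top-down) drive is
    # integrated in a separate compartment that couples into the soma
    v_soma, v_apical, spikes = 0.0, 0.0, []
    for t, (g_basal, g_apical) in enumerate(zip(bottom_up, top_down)):
        v_apical += dt * (-v_apical + g_apical)
        v_soma += dt * (-v_soma + g_basal + 0.5 * v_apical - g_inh * v_soma)
        if v_soma >= v_th:                   # spike and reset
            spikes.append(t * dt)
            v_soma = 0.0
    return spikes

steps = 1000
bottom_up = 0.8 * np.ones(steps)                         # subthreshold on its own
top_down = np.where(np.arange(steps) >= 500, 0.6, 0.0)   # feedback arrives later
print(simulate(bottom_up, top_down))                     # spikes only after feedback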