[Fig. 4.13, panel contents: the feature arrays and the potentials of their contributing projections]
I:     framed input with mean 0.5
F:     input 4.031·I - 0.8;   lateral 0.387·F + 0.141·S_FB;   backward: inverse of the forward projection from Edges
B:     input 4.031·I + 0.75;  lateral 0.413·B + 0.105·S_FB;   backward: inverse of the forward projection from Edges
S_FB ← F + B
Fig. 4.13. ZIP code binarization - Layer 0 features. The image is represented in terms of
foreground (F) and background (B) features. The activities of the feature arrays as well as
the potentials of the contributing projections are shown. The weight-templates are scaled such
that the weight with the largest magnitude is drawn in black or in white.
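Read as pseudocode, the projection potentials in Fig. 4.13 amount to a simple update rule for the Layer 0 features. The following NumPy sketch is an illustration of the data flow, not the exact computation: it collapses the spatial weight templates to single scalar weights, substitutes a rectified saturating function for the output nonlinearity of Section 4.2.4, and assumes the backward contributions from the Edges features are supplied as precomputed arrays bwd_F and bwd_B.

```python
import numpy as np

def f_psig(x, beta=2.0):
    # Stand-in for the output nonlinearity of Section 4.2.4:
    # zero for negative inputs, approaching one for large inputs.
    return np.maximum(0.0, np.tanh(beta * x))

def update_layer0(I, F, B, S_FB, bwd_F, bwd_B):
    # Potentials of the contributing projections, as listed in Fig. 4.13
    # (spatial weight templates reduced to scalar weights).
    pot_F = (4.031 * I - 0.8) + (0.387 * F + 0.141 * S_FB) + bwd_F
    pot_B = (4.031 * I + 0.75) + (0.413 * B + 0.105 * S_FB) + bwd_B
    F_new = f_psig(pot_F)      # excitatory outputs pass through f_psig
    B_new = f_psig(pot_B)
    S_FB_new = F_new + B_new   # inhibitory sum feature: S_FB <- F + B
    return F_new, B_new, S_FB_new
```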
The resolution of the layers decreases from 240 × 96 to 120 × 48 to 60 × 24 hy-
percolumns, as the image patch corresponding to a single hypercolumn increases
from 1 × 1 to 2 × 2 to 4 × 4 pixels. All three layers are surrounded by a three-pixel
wide border that is updated using wrap-around copying. The transfer functions of
all projection units are linear, as are the transfer functions of the inhibitory output
units. In contrast, the output units of the excitatory features have a transfer function
f_psig (β = 2; see Section 4.2.4) that is zero for negative inputs and approaches one for large inputs. Hence, inhibition grows faster than excitation as the network's activity increases. All bias values in the network are zero, and the projection weights
to the output units are one if not noted otherwise. Input projections and forward
projections to excitatory features as well as projections to inhibitory features are
computed with direct access to avoid unnecessary delays. Lateral projections and
backward projections to excitatory features need buffered access since they receive
input from features that are updated later.
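Two of these implementation details can be made concrete in a few lines. The sketch below, in NumPy, shows the wrap-around border and the distinction between direct and buffered access; the array names are illustrative and the update order is only indicated in the comments.

```python
import numpy as np

BORDER = 3  # three-pixel border filled by wrap-around copying

def with_wrap_border(a):
    # Lateral projections near the boundary read wrapped-around values,
    # giving the feature array a toroidal topology.
    return np.pad(a, BORDER, mode="wrap")

# Layer resolutions in hypercolumns; a hypercolumn covers 1x1, 2x2,
# and 4x4 image pixels on Layers 0, 1, and 2, respectively.
layer_shapes = [(96, 240), (48, 120), (24, 60)]

F = np.zeros(layer_shapes[0])
F_buffered = F.copy()  # previous-iteration copy for buffered access

# Direct access: read F as already updated in the current sweep.
# Buffered access: read F_buffered, because the source feature is
# updated later in the same sweep.
padded = with_wrap_border(F_buffered)
assert padded.shape == (96 + 2 * BORDER, 240 + 2 * BORDER)
```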
The separation of excitatory and inhibitory features forces the network designer
to use specific excitatory and unspecific inhibitory projections. This, together with
the nonnegative transfer function of the excitatory output units, makes the activity
of most feature arrays sparse. The design of the network's connectivity is motivated
by the Gestalt principles discussed in Chapter 1. In particular, the principle of good continuation plays an important role in grouping aligned features into objects. In the
following, the design of the individual layers is described in more detail.
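The sparsifying effect of combining specific excitation, unspecific inhibition, and a nonnegative output function can be seen in a toy computation. All numbers below are hypothetical and chosen only for illustration; they are not parameters of the network.

```python
import numpy as np

rng = np.random.default_rng(0)
drive = rng.normal(0.0, 1.0, size=100)  # specific excitatory input
# Unspecific inhibition pools the rectified activity of all units.
inhibition = 2.0 * np.mean(np.maximum(drive, 0.0))
# The nonnegative output function clips units below the pooled
# inhibition to zero, so only a minority of units remain active.
activity = np.maximum(0.0, np.tanh(2.0 * (drive - inhibition)))
print(f"{np.mean(activity > 0):.0%} of units active")
```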