recognition [173]. In addition, small recurrent neuro-controllers [175] have been
designed that solve non-trivial control tasks. In recent years, it has been realized
that Pearl's belief propagation algorithm [177] can be applied to graphical proba-
bility models that contain loops [76]. These message-passing schemes have been
used successfully for the decoding of error-correcting codes [155]. Last, but not
least, recurrence has been successfully applied to combinatorial optimization prob-
lems [217].
The concepts of attractors and energy functions have been central to the theory
of recurrent neural networks. Hopfield [101] investigated symmetrically connected
networks with binary units that were asynchronously updated. He showed that each
update does not increase the energy function $E = -\frac{1}{2}\sum_{i,j} w_{ij} S_i S_j$, where $S_k$ is the
state of unit $k$ and $w_{ij}$ is the symmetric weight connecting units $i$ and $j$. This yields monotonic
convergence of the network's state towards an attractor that has a locally minimal
value of the energy E .
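A minimal sketch of this dynamics (in Python; the weight matrix, network size, and random update schedule below are illustrative assumptions, not taken from the text) shows why asynchronous updates cannot increase $E$ when the weights are symmetric and the self-connections are zero:

```python
import numpy as np

def energy(S, W):
    """Hopfield energy E = -1/2 * sum_ij w_ij S_i S_j."""
    return -0.5 * S @ W @ S

def update_unit(S, W, k):
    """Asynchronous update: align unit k with its local field."""
    S = S.copy()
    S[k] = 1 if W[k] @ S >= 0 else -1
    return S

rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W = 0.5 * (W + W.T)            # symmetric weights ...
np.fill_diagonal(W, 0.0)       # ... with zero self-connections
S = rng.choice(np.array([-1, 1]), size=n)

for _ in range(100):
    S_next = update_unit(S, W, rng.integers(n))
    assert energy(S_next, W) <= energy(S, W) + 1e-12  # E never increases
    S = S_next
```

Each update flips a unit only towards the sign of its local field, so the resulting energy change $\Delta E = -(S_k^{\text{new}} - S_k^{\text{old}})\sum_j w_{kj} S_j$ is never positive.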
The deterministic Hopfield network might get trapped in local minima of the en-
ergy function. To avoid this, stochastic neural units have been introduced. This leads
to the Boltzmann machine, which samples the states of the network according to their
Boltzmann probability distribution [1]. To adapt the distribution of the visible units
of a Boltzmann machine to a desired distribution, a simple learning algorithm [2] is
available. It performs gradient descent on the Kullback-Leibler divergence between the two
distributions. Although learning is slow, hidden units allow Boltzmann machines to capture
the higher-order statistics of a data distribution.
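Both ingredients, Boltzmann sampling and the correlation-based weight update, can be sketched as follows (the function names, the temperature parameter `T`, and the learning rate are hypothetical choices for illustration):

```python
import numpy as np

def gibbs_sweep(S, W, T, rng):
    """One stochastic sweep: resample every unit from its Boltzmann
    conditional given the current states of the other units."""
    for k in range(len(S)):
        h = W[k] @ S - W[k, k] * S[k]              # local field on unit k
        p_on = 1.0 / (1.0 + np.exp(-2.0 * h / T))  # P(S_k = +1)
        S[k] = 1 if rng.random() < p_on else -1
    return S

def learning_step(W, corr_clamped, corr_free, lr=0.05):
    """Gradient step on the divergence between data and model:
    dw_ij is proportional to <S_i S_j>_clamped - <S_i S_j>_free."""
    return W + lr * (corr_clamped - corr_free)
```

Here `corr_clamped` and `corr_free` stand for the pairwise statistics $\langle S_i S_j \rangle$, estimated by sampling with the visible units clamped to the data and by running the network freely, respectively.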
Because fully connected recurrent networks have too many free parameters to
be applicable to image processing tasks, the following sections review models with
specific types of recurrent connectivity: lateral interactions, vertical feedback,
and the combination of both.
3.2.1 Models with Lateral Interactions
Lateral interactions are the easiest to realize in the cortex, since they
require only short links between neighboring neurons within a feature map. Hence,
it is likely that the neurons of the visual system are arranged such that the
most intense interactions can be realized through lateral links. Lateral interactions
have also been used in some image processing algorithms.
For instance, the compatibility between a recognized primitive and its neighbor-
hood is the basis for relaxation labeling [195] techniques. The compatibilities define
constraints for the interpretation of image patches which are satisfied iteratively us-
ing stochastic label updates. Relaxation labeling has been applied to edge linking
and to segmentation problems.
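As an illustration, one common deterministic variant of such an update can be sketched as follows (the array layout and the compatibility matrix are hypothetical; the stochastic update rules referred to above differ in detail):

```python
import numpy as np

def relaxation_step(P, R, neighbors):
    """One relaxation-labeling update in the style of Rosenfeld et al.
    P[i, l]      : current probability of label l at node i
    R[l, m]      : compatibility of label l with label m at a neighbor
    neighbors[i] : list of the nodes adjacent to node i
    """
    support = np.zeros_like(P)
    for i in range(P.shape[0]):
        for j in neighbors[i]:
            support[i] += R @ P[j]   # support from neighbor j for each label at i
    P_new = np.clip(P * (1.0 + support), 0.0, None)
    return P_new / P_new.sum(axis=1, keepdims=True)  # renormalize per node
```

Labels that are compatible with a node's neighborhood gain probability with each iteration, while incompatible labels are suppressed, until the labeling settles into a consistent interpretation.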
Another example of the use of lateral interactions in image processing is
anisotropic diffusion [178]. Here, the image is smoothed by a diffusion process that
depends on the local intensity gradient. Thus, smoothing occurs tangential to an
edge, but not in the direction orthogonal to the edge. Anisotropic diffusion is a ro-
bust procedure to estimate a piecewise constant image from a noisy input image.
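A minimal sketch of one explicit diffusion step, assuming the Perona-Malik formulation with a Gaussian edge-stopping function (`kappa`, the step size `dt`, and the wrap-around border handling are illustrative simplifications):

```python
import numpy as np

def diffusion_step(I, kappa=10.0, dt=0.2):
    """One explicit anisotropic-diffusion step: the conductance g decays
    with the local intensity difference, so smoothing stops at edges."""
    # Intensity differences to the four nearest neighbors (np.roll
    # wraps around at the image border for simplicity).
    dN = np.roll(I, -1, axis=0) - I
    dS = np.roll(I,  1, axis=0) - I
    dE = np.roll(I, -1, axis=1) - I
    dW = np.roll(I,  1, axis=1) - I

    def g(d):
        """Edge-stopping conductance: near 1 in flat regions, near 0 at edges."""
        return np.exp(-(d / kappa) ** 2)

    return I + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```

Repeated application smooths homogeneous regions towards piecewise constant intensities, while large differences across edges keep the conductance, and hence the flow, near zero.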