Fig. 9.19. Noise removal and contrast enhancement. The activities of the network's outputs are shown over time (columns: Input, iterations 3, 6, and 12, Target). The stable network outputs approximate the targets.
The contributions from the input projections to the output units are mainly excitatory. They look like a low-pass copy of the input and contain the background level as well as the noise.
The lateral contributions are mostly excitatory. Lines excite themselves and their surroundings, while a wider neighborhood of the lines is weakly inhibited.
Most interesting are the contributions via the backward projections. They vary with the background level: images with a darker background receive more inhibition than brighter ones. The inhibition is distributed quite uniformly, indicating that the higher layers maintain a global estimate of the background level. Exceptions are the lines and the image borders. The network has learned that lines are more probable in the center of the image than at its border.
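A rough way to picture this analysis is to split the net input of each output unit into the parts arriving via the forward, lateral, and backward projections and inspect each part separately. The following sketch assumes simple linear projections; the weight matrices and function names are hypothetical and only illustrate the decomposition, not the network used in the book.

import numpy as np

def split_contributions(x_input, h_same, h_higher, W_fwd, W_lat, W_bwd):
    # Forward part: driven by the (noisy, low-contrast) input image.
    c_fwd = W_fwd @ x_input
    # Lateral part: units of the same layer supporting each other (line continuity).
    c_lat = W_lat @ h_same
    # Backward part: feedback from higher layers, e.g. the background-level estimate.
    c_bwd = W_bwd @ h_higher
    return c_fwd, c_lat, c_bwd, c_fwd + c_lat + c_bwd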
Figure 9.21 shows the network's performance over time for the entire dataset. The output error falls rapidly to a level below that of the occlusion experiment and remains low, indicating that occlusion is a more severe degradation than low contrast combined with noise. As before, generalization is good, and the network converges to an attractor that represents the reconstruction.
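The convergence to an attractor can be pictured as repeatedly applying the network's update until the output no longer changes, while recording the output error at every iteration. The sketch below is only illustrative; step_fn is a hypothetical stand-in for one pass through the recurrent network and is not part of the original implementation.

import numpy as np

def run_to_attractor(step_fn, x, target, n_iter=12, tol=1e-4):
    # step_fn: hypothetical update mapping (input, previous output) -> next output.
    out = np.zeros_like(target)
    errors = []
    for _ in range(n_iter):
        new_out = step_fn(x, out)
        # Track how well the current reconstruction matches the clean target.
        errors.append(float(np.mean((new_out - target) ** 2)))
        # Stop once the output has settled into an (approximate) fixed point.
        if np.max(np.abs(new_out - out)) < tol:
            out = new_out
            break
        out = new_out
    return out, errors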
9.5 Reconstruction from a Sequence of Degraded Digits
The Neural Abstraction Pyramid architecture is not restricted to the reconstruction of static images from static inputs. Since the networks are recurrent, they can also integrate information over time. Thus, if image sequences are available, they can be used to improve the reconstruction quality.
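One way to realize such temporal integration is to keep the recurrent state across frames instead of resetting it, so that evidence gathered from earlier frames carries over to later ones. The sketch below, with a hypothetical step_fn standing in for one network update, is only meant to illustrate this idea.

import numpy as np

def reconstruct_sequence(step_fn, frames, updates_per_frame=2):
    # frames: degraded input images of equal shape, presented one after another.
    state = np.zeros_like(frames[0])
    outputs = []
    for frame in frames:
        for _ in range(updates_per_frame):
            # The state is not reset, so information accumulates over the sequence.
            state = step_fn(frame, state)
        outputs.append(state.copy())
    return outputs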
It has been demonstrated by other researchers that video material can be reconstructed with higher quality when neighboring frames are taken into account than when each frame is processed in isolation.