ficients within a generalized multiply-accumulate context. It should be noted, however, that even this linear method is then mapped into digital logic for computation.
In these terms, linear models may be perceived as restricted subsets within a
logical framework. Hence, a nonlinear solution to the same problem is a more general result that will be at least as good as the linear solution, provided that other conditions are met. One of the most important conditions is that sufficient training data is available.
So why do linear solutions remain so common? There are a number of reasons.
The first is familiarity. Engineers and signal processors are trained in linear techniques and are reluctant to depart from the security of these familiar solutions unless the subsequent improvements are great.
Also, the superposition properties of linear models make parameter estimation straightforward. This means that a small number of examples of system behavior may be used to infer performance across a range of conditions. In theory, a linear system may be completely described by observing the same number of training examples as the rank of the system. In practice, even allowing for the system observations to be noisy, the model may be fully characterized with only a small amount of over-determination. Also, if the linear system model is extended by adding extra parameters, only a linear increase in the number of training examples is required.
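The rank argument above can be sketched in a few lines of Python. The 3-tap FIR filter, its coefficients, and the three training windows below are hypothetical, chosen only to show that a noise-free linear system is fully identified from as many observations as it has parameters:

```python
# Minimal sketch (hypothetical filter and data): a linear 3-tap filter
# y = w0*x[n] + w1*x[n-1] + w2*x[n-2] is completely determined by three
# linearly independent observations, i.e. as many examples as its rank.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

true_w = [0.5, -0.25, 0.125]                 # "unknown" filter to identify
windows = [[1, 0, 0], [2, 1, 0], [1, 3, 2]]  # three full-rank input windows
outputs = [sum(w * x for w, x in zip(true_w, win)) for win in windows]

est_w = solve(windows, outputs)              # three observations suffice
```

With noisy observations one would instead use a few extra examples and a least-squares fit, which is the small amount of over-determination mentioned above.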
The situation is much more complex for nonlinear systems. The task is to seek the optimal logical mapping from all possible mappings. No simple superposition properties exist, and in the most general unconstrained design case, every combination of input variables must be observed a sufficient number of times in order to estimate the conditional probabilities of the output. Extending the system model by adding more parameters leads to a rapid increase in the size of the required training set. This contrasts sharply with the linear problem, where one need only estimate the autocorrelation matrix, which is a much smaller set of values than the conditional probabilities.
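The exhaustive-observation requirement can be illustrated with a short sketch. The XOR-style target mapping and the sample count are hypothetical; the point is that the estimate is built per input combination, and adding one more binary input doubles the number of combinations that must be observed:

```python
# Minimal sketch (hypothetical mapping and data): estimating the optimal
# logical mapping of n binary inputs by tallying output frequencies for
# every one of the 2**n input combinations.

from collections import defaultdict
from itertools import product
import random

random.seed(0)
n = 3                                   # binary inputs -> 2**n combinations

def target(x):
    # hypothetical "unknown" nonlinear mapping (XOR-like); no linear
    # superposition property holds for it
    return x[0] ^ (x[1] & x[2])

# Tally output frequencies for each observed input combination.
counts = defaultdict(lambda: [0, 0])    # pattern -> [count of y=0, count of y=1]
for _ in range(200):
    x = tuple(random.randint(0, 1) for _ in range(n))
    y = target(x)                       # noise-free here; real data would be noisy
    counts[x][y] += 1

# The estimated mapping picks the majority output for each combination.
estimated = {x: int(c[1] > c[0]) for x, c in counts.items()}

# Every one of the 2**n patterns must appear in training to define the mapping.
unseen = [x for x in product((0, 1), repeat=n) if x not in counts]
```

Adding one more input variable to this model doubles the number of patterns to 2**(n+1), whereas extending a linear model by one parameter adds only a handful of autocorrelation values.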
For logical mappings containing a large number of variables, the required training set may be impossibly large. It may well be that even after observing a huge set of training examples, some combinations have not been observed, or have been observed an insufficient number of times to make a statistically accurate estimate of their conditional probabilities.
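A small numerical illustration makes the scale of the problem concrete. The sizes here are hypothetical: with 16 binary inputs there are 65,536 input combinations, and even 10,000 random training examples leave most of them unobserved:

```python
# Minimal sketch (hypothetical sizes): even a large training set covers only
# a small fraction of the input combinations of a 16-variable logical mapping,
# so most conditional probabilities cannot be estimated at all.

import random

random.seed(1)
n_vars = 16                     # binary inputs -> 2**16 = 65536 combinations
n_samples = 10_000              # a "huge" training set, still far too small

seen = {tuple(random.randint(0, 1) for _ in range(n_vars))
        for _ in range(n_samples)}

coverage = len(seen) / 2 ** n_vars   # fraction of combinations ever observed
```

Under these assumptions roughly one combination in seven is ever seen, and repeated observations of any single combination, needed for a statistically accurate probability estimate, are rarer still.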
In the face of these estimation difficulties, it is not surprising that linear methods remain popular. Also, in many problems, such as circuit analysis and audio applications, linear solutions are quite satisfactory. These systems are inherently linear, with their steady-state and transient behavior completely modeled by products of sinusoids and decaying exponentials. Other systems make much use of Gaussian noise models, and these sit naturally in a linear context. In these cases there is no need to look any further; the linear model is satisfactory.
However, these linear approaches that work so well for many problems are not
necessarily as useful for image processing applications. The 2D nature of image
processing problems combined with human visual perception often requires more
involved decisions than is the case in 1D signal processing. For example, the tasks