on the construction of algorithms for efficient determination of Boolean formulae from examples. 63
While the focus in computational learning theory has mostly been on the complexity of learning and the design of efficient inference algorithms, very similar types of problems have been studied in nonlinear signal processing, specifically, in optimal filter design. 64-70 Indeed, the specific roles of computational learning 77 and pattern recognition theory 78 in nonlinear signal processing have been studied. Inference typically involves designing an estimator from some predefined class of estimators that minimizes the error of estimation among all estimators in the class. These predefined classes, or constraints, play an important role in filter design. For example, as discussed above, stack filters are represented by the class of monotone Boolean functions. Although it would seem that imposing such constraints can only degrade performance (larger error) relative to the optimal filter with no imposed constraints, constraining may have certain advantages. These include prior knowledge of the degradation process (or, in the case of gene regulatory networks, knowledge of the likely class of functions, such as canalizing functions), tractability of the filter design, and precision of the estimation procedure by which the optimal filter is estimated from observations. For example, we often know that a certain class of filters will provide a very good sub-optimal filter while lessening the data requirements for its estimation.
To quantify the advantage (or disadvantage) of filter constraint, suppose data from a sample of size n are used to design an estimate ψ_n of the optimal filter ψ_opt. The error, ε_n, of ψ_n cannot be less than the error, ε_opt, of the optimal filter. The corresponding design cost is Δ_n = ε_n − ε_opt, and the error of the designed filter is decomposed as ε_n = ε_opt + Δ_n. Hence, the expected error of the designed filter is E[ε_n] = ε_opt + E[Δ_n].
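As a concrete illustration, the following Python sketch (not from the text; the signal model, noise probability, window size, and sample sizes are all hypothetical) designs a 3-point binary window filter by taking the majority of the clean center value observed for each window pattern. A very large design sample serves as a proxy for ψ_opt, a small sample yields ψ_n, and the difference of their errors is the design cost Δ_n.

```python
# A minimal sketch illustrating eps_n = eps_opt + Delta_n for a toy
# 3-point binary window filter. All model parameters are hypothetical.
import itertools
import random

random.seed(0)

P_FLIP = 0.15   # hypothetical salt-and-pepper noise probability
WINDOW = 3

def make_pair(length):
    """Generate a correlated clean binary signal and its noisy observation."""
    clean = [random.randint(0, 1)]
    for _ in range(length - 1):
        clean.append(clean[-1] if random.random() < 0.8 else 1 - clean[-1])
    noisy = [b ^ (random.random() < P_FLIP) for b in clean]
    return clean, noisy

def windows(noisy):
    """Yield (center index, window pattern) pairs."""
    for i in range(len(noisy) - WINDOW + 1):
        yield i + WINDOW // 2, tuple(noisy[i:i + WINDOW])

def design_filter(n_samples):
    """Estimate the filter by majority vote of the clean center value per pattern."""
    counts = {w: [0, 0] for w in itertools.product([0, 1], repeat=WINDOW)}
    clean, noisy = make_pair(n_samples)
    for center, w in windows(noisy):
        counts[w][clean[center]] += 1
    return {w: int(c[1] >= c[0]) for w, c in counts.items()}

def error(psi, n_test=200_000):
    """Estimate the filter's error on an independent test signal."""
    clean, noisy = make_pair(n_test)
    wrong = sum(psi[w] != clean[center] for center, w in windows(noisy))
    return wrong / (n_test - WINDOW + 1)

psi_opt = design_filter(2_000_000)   # proxy for the optimal filter over the window
eps_opt = error(psi_opt)

n = 200                              # small design sample
eps_n = error(design_filter(n))
delta_n = eps_n - eps_opt            # design cost Delta_n

print(f"eps_opt = {eps_opt:.4f}")
print(f"eps_n   = {eps_n:.4f} = eps_opt + Delta_n, with Delta_n = {delta_n:.4f}")
```

With these (assumed) settings the small-sample filter typically shows a visibly larger error, and Δ_n shrinks as n grows, in line with the decomposition above.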
The essential problem for nonlinear filtering is that satisfactory filtering often requires large windows, especially for images, and it is often impossible to obtain large enough samples to sufficiently reduce E[Δ_n]. Thus, optimization is constrained to some subclass C of filters. If ψ_C is an optimal filter in C with error ε_C and design error Δ_n,C, then ε_C ≥ ε_opt and E[Δ_n,C] ≤ E[Δ_n]. The error of a designed constrained filter, ψ_n,C, possesses the decomposition ε_n,C = ε_C + Δ_n,C. The cost of constraint is given by Δ_C = ε_C − ε_opt. Hence, ε_n,C = ε_opt + Δ_C + Δ_n,C, and

E[ε_n,C] = ε_opt + Δ_C + E[Δ_n,C].    (13.3)

Constraint is statistically beneficial if and only if E[ε_n,C] ≤ E[ε_n], which is true if and only if

Δ_C ≤ E[Δ_n] − E[Δ_n,C].    (13.4)
The savings in design error must exceed the cost of constraint.
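The trade-off expressed by Eqs. (13.3) and (13.4) can be checked with a few lines of arithmetic. The numbers below are hypothetical, chosen only to show how the comparison works.

```python
# Hypothetical error figures illustrating Eqs. (13.3)-(13.4); none come from the text.
eps_opt    = 0.08   # error of the unconstrained optimal filter (assumed)
delta_C    = 0.02   # cost of constraint: eps_C - eps_opt (assumed)
E_delta_n  = 0.06   # expected design cost in the unconstrained class (assumed)
E_delta_nC = 0.01   # expected design cost in the constrained class C (assumed)

E_eps_n  = eps_opt + E_delta_n                # expected error of the unconstrained design
E_eps_nC = eps_opt + delta_C + E_delta_nC     # Eq. (13.3)

print(f"E[eps_n]   = {E_eps_n:.3f}")          # 0.140
print(f"E[eps_n,C] = {E_eps_nC:.3f}")         # 0.110
# Eq. (13.4): constraint helps iff its cost is covered by the savings in design error.
print("constraint beneficial:", delta_C <= E_delta_n - E_delta_nC)   # True
```

Here the constrained class sacrifices 0.02 in optimal error but saves 0.05 in expected design cost, so for this sample size designing within C gives the lower expected error.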