where $\diamond$ is the replication operator. Note that although the discussion presented in Section 2.2 restricted the replication weights to positive integer values, the filter can easily incorporate real-valued (positive and negative) weights utilizing the procedures in Reference 16.
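For illustration with positive integer weights, the replication operator simply repeats a sample the indicated number of times, e.g., $3 \diamond x_1 = x_1, x_1, x_1$, so that $\mathrm{MED}[\,3 \diamond x_1,\ 1 \diamond x_2,\ 2 \diamond x_3\,] = \mathrm{MED}[\,x_1, x_1, x_1, x_2, x_3, x_3\,]$.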
The weighted median filter can be generalized to the fuzzy weighted me-
dian filter by simply replacing the crisp observation samples with their fuzzy
counterparts,
$$\mathrm{FWMED}[\mathbf{x}] = \mathrm{MED}\big[\,w_1 \diamond \tilde{x}_1,\ w_2 \diamond \tilde{x}_2,\ \ldots,\ w_N \diamond \tilde{x}_N\,\big]. \tag{2.46}$$
As the fuzzy order statistics and the fuzzy spatial order samples constitute
the same set, this definition of the fuzzy weighted median is equivalent to
first weighting the (crisp) spatial order samples and then selecting the fuzzy
median from this expanded set, i.e.,
$$\mathrm{FWMED}[\mathbf{x}] = \mathrm{MED}\big[\,w_1 \diamond \tilde{x}_1,\ w_2 \diamond \tilde{x}_2,\ \ldots,\ w_N \diamond \tilde{x}_N\,\big] \tag{2.47}$$
$$\phantom{\mathrm{FWMED}[\mathbf{x}]} = \mathrm{FMED}\big[\,w_1 \diamond x_1,\ w_2 \diamond x_2,\ \ldots,\ w_N \diamond x_N\,\big]. \tag{2.48}$$
Under both equivalent definitions, the fuzzy weighted median is able to exploit spatial correlations, through spatial weighting; limit the influence of outliers, through ranking and median selection; and introduce selected weighted averaging of similarly valued samples, through the use of fuzzy samples.
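To make the two-step reading of Equations 2.47 and 2.48 concrete, the following is a minimal Python sketch. The Gaussian affinity used to form the fuzzy samples and the helper names (fuzzy_samples, fwmed) are assumptions of the sketch rather than the chapter's definitions, and the replication step is restricted to positive integer weights.

```python
import numpy as np

def fuzzy_samples(x, gamma):
    """Fuzzy counterpart of each crisp sample: an affinity-weighted average
    of similarly valued samples.

    Assumes the Gaussian affinity R[j, i] = exp(-(x[j] - x[i])**2 / gamma);
    the chapter's exact membership function may differ."""
    x = np.asarray(x, dtype=float)
    R = np.exp(-(x[:, None] - x[None, :]) ** 2 / gamma)   # pairwise affinities
    return R @ x / R.sum(axis=1)          # x_tilde[j] = sum_i R[j,i]*x[i] / sum_i R[j,i]

def fwmed(x, w, gamma):
    """Fuzzy weighted median as in Eqs. (2.46)/(2.47): replicate each fuzzy
    sample w[i] times and take the ordinary median.  Positive integer weights
    only in this sketch; np.median averages the two central values when the
    total weight is even."""
    x_tilde = fuzzy_samples(x, gamma)
    expanded = np.repeat(x_tilde, w)      # w_i "diamond" x_tilde_i (replication)
    return np.median(expanded)
```

For example, fwmed([1, 9, 2, 3, 50], [1, 2, 1, 2, 1], gamma=4.0) returns roughly 2.3: the outlier 50 is rejected by the median selection, while the similarly valued small samples are averaged through the affinities.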
2.4.2.3 Optimization
The optimization of the fuzzy weighted median filter follows an approach similar to that presented for the class of affine filters. In the fuzzy weighted median filter case, however, the error criterion chosen is the mean absolute error (MAE). This is because the MAE criterion arises naturally out of the ML development under the Laplacian assumption. Additionally, median-type filters are typically applied to signals with heavy-tailed distributions. Utilizing the MSE for such signals tends to overemphasize the influence of outliers in the optimization procedure, and thus lower-power error criteria are typically employed.
Consider first the optimization of the fuzzy order statistics, from which the
optimization of the fuzzy median follows directly. Under the MAE criterion, the cost to be minimized is $J_d(\gamma) = E\big(\,|d - \hat{d}\,|\,\big)$, where the optimization of the $j$th order statistic is achieved by setting $\hat{d} = \tilde{x}_{(j)}$. Differentiating this cost with respect to $\gamma$, substituting into a stochastic gradient-based algorithm, and replacing the expectation operator with instantaneous estimates yields
$$\gamma(n+1) = \gamma(n) - \mu_\gamma \left.\frac{\partial J(\gamma)}{\partial \gamma}\right|_{\gamma = \gamma(n)} \tag{2.49}$$
$$\phantom{\gamma(n+1)} = \gamma(n) + \mu_\gamma\, \frac{\operatorname{sgn}\!\big(d(n) - \tilde{x}_{(j)}(n)\big)}{\gamma^{2}(n)} \times \frac{\displaystyle\sum_{i=1}^{N} R_{i,(j)}(n)\,\big(x_i(n) - x_{(j)}(n)\big)^{2}\big(x_i(n) - \tilde{x}_{(j)}(n)\big)}{\displaystyle\sum_{i=1}^{N} R_{i,(j)}(n)}, \tag{2.50}$$
where $\mu_\gamma$ is the step size and, as before, a positivity constraint is placed on $\gamma$.
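A minimal sketch of this update is given below. It continues the assumptions of the earlier sketch, in particular the Gaussian affinity $R_{i,(j)}(n) = \exp\!\big(-(x_i(n) - x_{(j)}(n))^2/\gamma(n)\big)$, under which the gradient takes the form in Equation 2.50; the function name adapt_gamma, the step size, and the positivity floor are illustrative.

```python
import numpy as np

def adapt_gamma(x, d, j, gamma, mu_gamma=1e-3, eps=1e-6):
    """One stochastic-gradient update of the spread gamma for the j-th fuzzy
    order statistic, in the spirit of Eqs. (2.49)-(2.50).

    Assumes the Gaussian affinity R_i = exp(-(x_i - x_(j))**2 / gamma); a
    different membership function changes the gradient factor below.
    Here j is a zero-based index into the sorted window."""
    x = np.asarray(x, dtype=float)
    x_j = np.sort(x)[j]                          # crisp j-th order statistic x_(j)
    R = np.exp(-(x - x_j) ** 2 / gamma)          # affinities R_{i,(j)}(n)
    x_j_tilde = np.dot(R, x) / R.sum()           # fuzzy order statistic estimate
    # Instantaneous gradient of the fuzzy order statistic with respect to gamma
    grad = np.dot(R * (x - x_j) ** 2, x - x_j_tilde) / (gamma ** 2 * R.sum())
    gamma_new = gamma + mu_gamma * np.sign(d - x_j_tilde) * grad
    return max(gamma_new, eps)                   # positivity constraint on gamma
```

Iterating this update over a training sequence of observation windows and desired samples adapts $\gamma$ toward the minimum-MAE spread for the $j$th fuzzy order statistic.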