[Figure 5.1: (a) analyzed image; (b) one- and two-dimensional autocovariance ("1d autocov", "2d autocov") plotted against the lag |τ| from 0 to 300.]
Figure 5.1
One- and two-dimensional autocovariance coefficients (b) of the gray-scale
128 × 128 Lena image (a) after normalization to variance 1. Clearly, using local
structure in both directions (2-D autocov) guarantees that for small τ, higher
powers of the autocorrelations are present than by rearranging the data into a
vector (1-D autocov), thereby losing information about the second dimension.
In ICA algorithms in which i.i.d. samples are assumed, this choice does not
influence the result. However, in time-structure-based algorithms such
as AMUSE and SOBI, the results can vary greatly depending on the choice
of this mapping.
The advantage of using multidimensional autocovariances lies in the
fact that the multidimensional structure of the data set can be used
more explicitly. For example, if row concatenation is used to construct
s(t) from the images, horizontal lines in the image will make only trivial
contributions to the autocovariances. Figure 5.1 shows the one- and two-
dimensional autocovariance of the Lena image for varying τ (respectively
(τ1, τ2)) after normalization of the image to variance 1. Clearly, the two-
dimensional autocovariance does not decay as quickly with increasing
radius as the one-dimensional autocovariance. Only at multiples of the
image height is the one-dimensional autocovariance significantly high
(i.e., captures image structure).
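To make the comparison concrete, the following is a minimal NumPy sketch (not from the text) that computes both quantities on a synthetic 128 × 128 stand-in for the Lena image, normalized to variance 1 as in Figure 5.1. The function names and the smoothed-noise test image are illustrative assumptions, not part of the original material.

```python
import numpy as np

def autocov_1d(x, tau):
    """Autocovariance of a zero-mean, unit-variance 1-D signal at lag tau."""
    if tau == 0:
        return np.mean(x * x)
    return np.mean(x[:-tau] * x[tau:])

def autocov_2d(img, tau1, tau2):
    """Autocovariance of a zero-mean, unit-variance image at lag (tau1, tau2)."""
    h, w = img.shape
    return np.mean(img[:h - tau1, :w - tau2] * img[tau1:, tau2:])

# Stand-in for the 128 x 128 Lena image: noise smoothed in both directions
# (purely illustrative; any natural image could be used instead).
rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
kernel = np.ones((9, 9)) / 81.0
img = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))

# Normalize to zero mean and unit variance, as in Figure 5.1.
img = (img - img.mean()) / img.std()

x = img.reshape(-1)  # row concatenation of the image into a 1-D signal
for tau in (1, 2, 4, 8, 16, 32, 64):
    c1 = autocov_1d(x, tau)
    c2 = autocov_2d(img, tau, tau)  # diagonal lag (tau, tau)
    print(f"tau={tau:3d}  1-D autocov={c1:+.3f}  2-D autocov={c2:+.3f}")

# The 1-D autocovariance becomes significant again near multiples of the
# image side length (here 128), where the shifted samples line up with
# neighboring pixels in the other image direction.
print("tau=128  1-D autocov =", round(autocov_1d(x, 128), 3))
```

Under these assumptions, the 2-D autocovariance stays comparatively large for small lags, while the 1-D autocovariance of the concatenated vector decays quickly except near multiples of the image side length, mirroring the behavior shown in Figure 5.1(b).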