transform,

\[
\mathcal{F}\{g(x)\} = G(\omega) = \exp\!\left(-\frac{\omega^{2}}{2\,(1/\sigma)^{2}}\right)
\tag{9.12}
\]
meaning that the result is still a Gaussian, albeit with an inverted σ and a different constant factor. It is a function that has a “minimal width”, even after Fourier transformation. To clarify what this “minimal width” means, we need to be precise about what width means. To that end we first recall that, for any function (not just a Gaussian) f(x), the graph of f(ax) is a shrunk version of the graph of f(x) with regard to the x-axis if a is real and 1 < a. Conversely, it is dilated if 0 < a < 1. Second, if f(x), F(ω) is a Fourier transform pair, then f(ax), (1/|a|)F(ω/a), with a being real and nonzero, is also a Fourier transform pair; the amplitude factor 1/|a| does not affect the widths discussed below. In other words, if f shrinks with regard to the x-axis, its Fourier transform will dilate with regard to ω.
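As a quick numerical illustration (a sketch of my own, not from the book, assuming NumPy and the transform convention F(ω) = ∫ f(x) e^{−iωx} dx; the helper name ft is mine), the following Python snippet approximates the transform of a Gaussian of width σ by a Riemann sum and confirms that the result is again a Gaussian of width roughly 1/σ:

```python
import numpy as np

def ft(f, x, w):
    """Approximate F(w) = integral f(x) exp(-i*w*x) dx by a Riemann sum."""
    dx = x[1] - x[0]
    return np.array([np.sum(f * np.exp(-1j * wi * x)) * dx for wi in w])

sigma = 2.0
x = np.linspace(-20.0, 20.0, 4001)        # sample grid covering +/- 10 sigma
w = np.linspace(-4.0, 4.0, 801)           # frequency grid
g = np.exp(-x**2 / (2.0 * sigma**2))      # Gaussian of width sigma
G = np.abs(ft(g, x, w))                   # magnitude of its Fourier transform

# For a Gaussian, |G| falls to exp(-1/2) of its peak at |w| = 1/sigma.
half = w[G >= G.max() * np.exp(-0.5)]
print("measured width of G:", half.max(), " expected 1/sigma:", 1.0 / sigma)
```

Re-running the snippet with a smaller σ makes g narrower and the measured width of G correspondingly larger, in line with the scaling argument above.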
If f is the uniform distribution, e.g., the characteristic function in Fig. 6.3 (left), we know how to interpret the width; we just measure the distance between the two “ends” of the graph, much as we would measure the length of a brick. The problem with the widths of general functions is that they may not have clearly distinguishable ends; a not-so-farfetched example is the Gaussian family. Therefore one needs to bring further precision to what is meant by the width of a function.
Assuming that the function is integrable, one way to measure the width of a function f(x) is via the square root of its variance,
\[
\Delta(f) = \left( \int_{-\infty}^{\infty} (x - m_0)^{2} \, \tilde{f}(x) \, dx \right)^{1/2}
\tag{9.13}
\]
where

\[
\tilde{f}(x) = \frac{|f(x)|}{\int_{-\infty}^{\infty} |f(x)| \, dx}
\tag{9.14}
\]
and

\[
m_0 = \int_{-\infty}^{\infty} x \, \tilde{f}(x) \, dx
\tag{9.15}
\]
The “˜” applied to f sees to it that f̃ becomes a probability distribution function, obtained by taking the magnitude of f and normalizing its area to 1. Because f can be negative or complex in some applications, the magnitude operator is needed to guarantee that the result is nonnegative. After this “conversion”, the integral in Eq. (9.13) measures the ordinary variance of a probability distribution function.
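As an illustration (my own minimal sketch, not from the book; the helper name width and the grid choices are assumptions), Eqs. (9.13)–(9.15) can be evaluated numerically by replacing the integrals with Riemann sums:

```python
import numpy as np

def width(f, x):
    """Delta(f) of Eq. (9.13): std. dev. of the area-normalized magnitude of f."""
    dx = x[1] - x[0]
    p = np.abs(f) / (np.abs(f).sum() * dx)          # Eq. (9.14): area-normalized |f|
    m0 = np.sum(x * p) * dx                         # Eq. (9.15): mean of the distribution
    return np.sqrt(np.sum((x - m0)**2 * p) * dx)    # Eq. (9.13): sqrt of the variance

x = np.linspace(-20.0, 20.0, 4001)
sigma = 1.5
print(width(np.exp(-x**2 / (2.0 * sigma**2)), x))   # approximately sigma for a Gaussian
```

For a Gaussian, this width coincides with its σ, which is why σ is commonly quoted as the width of a Gaussian.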
We can see by inspection that Δ[f(αx)] and Δ[F(ω/α)] are inversely proportional to each other, to the effect that their product equals a constant γ that is independent of α:

\[
\gamma = \Delta[f(\alpha x)] \cdot \Delta\!\left[F\!\left(\frac{\omega}{\alpha}\right)\right]
\tag{9.16}
\]
The exact value of γ depends on f, and thereby also on F. As a direct consequence of the Cauchy–Bunyakovsky–Schwarz inequality, it can be shown that

\[
1 \le \gamma = \Delta(f) \cdot \Delta(F)
\tag{9.17}
\]
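The following sketch (again my own, under the same assumptions as the snippets above: NumPy, the e^{−iωx} transform convention, and my helper names ft and width) evaluates the product in Eq. (9.16) for a Gaussian at several scalings α; the printed value is essentially the same for every α:

```python
import numpy as np

def ft(f, x, w):
    """Approximate F(w) = integral f(x) exp(-i*w*x) dx by a Riemann sum."""
    dx = x[1] - x[0]
    return np.array([np.sum(f * np.exp(-1j * wi * x)) * dx for wi in w])

def width(f, x):
    """Delta of Eqs. (9.13)-(9.15): std. dev. of the area-normalized magnitude of f."""
    dx = x[1] - x[0]
    p = np.abs(f) / (np.abs(f).sum() * dx)
    m0 = np.sum(x * p) * dx
    return np.sqrt(np.sum((x - m0)**2 * p) * dx)

x = np.linspace(-15.0, 15.0, 3001)
w = np.linspace(-15.0, 15.0, 3001)

for alpha in (0.5, 1.0, 2.0):
    g = np.exp(-(alpha * x)**2 / 2.0)               # Gaussian f(alpha x), with sigma = 1
    gamma = width(g, x) * width(ft(g, x, w), w)     # Delta[f(alpha x)] * Delta[F(w/alpha)]
    print(alpha, gamma)                             # about the same value (close to 1) for every alpha
```

According to Eq. (9.17), replacing the Gaussian by another integrable function can only increase this product, which is the sense in which the Gaussian has “minimal width”.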
 