[Figure 12.1. FFT flow graph: a four-point FFT with inputs x₀–x₃, intermediate nodes A–D, twiddle factors e^(-jπ/2), e^(-jπ), and e^(-j3π/2), and outputs X₀–X₃.]
From the flow graph, it is apparent that X₀ = A + B = [x₀ + x₂] + [x₁ + x₃], which is the same result. The order of computations would be to compute the pairs {A, C} and {B, D} in the first stage. The next stage would be to compute {X₀, X₂} and {X₁, X₃}.
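To make that two-stage ordering concrete, here is a minimal sketch in plain Python (no libraries) of the four-point FFT computed in exactly that order, using the node names A, B, C, and D from Figure 12.1; the function name fft4 is only for illustration.

def fft4(x0, x1, x2, x3):
    # Stage 1: two butterflies on inputs spaced two apart.
    # The twiddle factor e^(-j*pi) is simply -1.
    A = x0 + x2
    C = x0 - x2          # x0 + x2 * e^(-j*pi)
    B = x1 + x3
    D = x1 - x3          # x1 + x3 * e^(-j*pi)

    # Stage 2: two butterflies on the stage-1 results.
    # The twiddle factor e^(-j*pi/2) is -j; e^(-j*3*pi/2) is +j.
    X0 = A + B
    X2 = A - B           # A + B * e^(-j*pi)
    X1 = C - 1j * D      # C + D * e^(-j*pi/2)
    X3 = C + 1j * D      # C + D * e^(-j*3*pi/2)
    return X0, X1, X2, X3

# Quick check against the DFT definition: an impulse input
# produces a flat spectrum (all outputs equal to 1).
print(fft4(1, 0, 0, 0))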
These stages (our example above has two stages) are composed of “butterflies”. Each butterfly has two complex inputs and two complex outputs, and involves one or two complex multiplications and two complex additions. In the first stage, two butterflies compute the pairs {A, C} and {B, D}; in the second stage, two butterflies compute the pairs {X₀, X₂} and {X₁, X₃}. The complex exponentials multiplying the data path are known as “twiddle factors”. For larger values of N, they are simply sine and cosine values that are usually precomputed and stored in a table.
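As a sketch of this general building block (not taken from the text, and assuming the common radix-2 formulation in which the two butterfly outputs share a single twiddle factor), a butterfly and a precomputed twiddle table might look like this in Python:

import cmath

def butterfly(a, b, w):
    # Two complex inputs (a, b), one twiddle factor w,
    # one complex multiplication and two complex additions.
    t = w * b
    return a + t, a - t

def twiddle_table(N):
    # Twiddle factors e^(-j*2*pi*k/N), i.e. cosine and sine values,
    # computed once and stored in a table for an N-point FFT.
    return [cmath.exp(-2j * cmath.pi * k / N) for k in range(N // 2)]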
Now we can see why the FFT is effective in reducing the number of computations. Each time the FFT doubles in size (N increases by a factor of two), we need to add only one more stage. Thus, a four-point FFT requires two stages, an eight-point FFT requires three stages, a 16-point FFT requires four stages, and so on. The number of computations required for each stage is proportional to N, and the required number of stages is equal to log₂N. Therefore, the FFT computational load grows as N log₂N, whereas the DFT computational load grows as N².
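To illustrate these growth rates (proportionality constants are omitted, so the numbers are only relative), a few transform sizes can be tabulated directly:

import math

for N in (4, 16, 256, 4096):
    fft_ops = N * math.log2(N)   # FFT work grows as N log2(N)
    dft_ops = N ** 2             # direct DFT work grows as N^2
    print(f"N = {N:5d}   FFT ~ {fft_ops:9.0f}   DFT ~ {dft_ops:10.0f}")

Even at N = 256 the gap is already large (roughly 2,000 versus 65,536), and it widens rapidly as N grows.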