Digital Signal Processing Reference
In-Depth Information
Figure 5.5. Applying yet another set of coefficients to these pixels, yielding a 16:1 downscaling for this video stream.
5.2 Implementing Video Scaling
You will recall that video scaling is mathematically equivalent
to digital filtering since we are multiplying a pixel value by
a coefficient and then summing up all the product terms. The
implementation is thus very similar to the implementation of two
1-D filters.
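To make the filtering view concrete, here is a minimal sketch of one output pixel of a 4-tap vertical scaler. The coefficient and pixel values are illustrative only, not taken from the figure:

```python
# Sketch: video scaling as digital filtering - multiply each pixel by
# its coefficient, then sum all the product terms (multiply-accumulate).
coeffs = [0.125, 0.375, 0.375, 0.125]   # hypothetical 4-tap kernel, sums to 1
pixels = [100, 120, 140, 160]           # one pixel from each of 4 stored lines

# One output pixel of the vertical (1-D) filter stage
out = sum(c * p for c, p in zip(coeffs, pixels))
```

In hardware, each `c * p` term maps to one multiplier and the summation to an adder tree, which is why the resource count scales with the number of taps.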
Let's stay with the example we used in Figure 5.5.
First we will need to store the four lines of video. While we are working on only four pixels, one from each line, in practice the entire video line will have to be stored on-chip. This will account for an appreciable amount of memory. For example, in a 1080p video frame, each video line means 1920 pixels with, for example, each pixel requiring 24 bits, i.e., 1920 × 24 = 46 Kbits or 5.7 KB per line. In the video processing context, this is called the line buffer.
If you are using a 9-tap filter in the vertical dimension, you will need 9 line buffer memories.
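The arithmetic behind the line-buffer sizing can be checked directly; the figures below come from the 1080p example in the text (the text rounds 5.6 KB up to 5.7 KB):

```python
# Line-buffer sizing for the 1080p example
pixels_per_line = 1920   # active pixels in one 1080p video line
bits_per_pixel = 24      # e.g. 8 bits per color channel
taps_vertical = 9        # a 9-tap vertical filter needs 9 line buffers

bits_per_line = pixels_per_line * bits_per_pixel     # 46,080 bits, ~46 Kbits
kbytes_per_line = bits_per_line / 8 / 1024           # ~5.6 KB per line buffer
total_kbits = taps_vertical * bits_per_line / 1024   # ~405 Kbits of on-chip memory
```

For a 9-tap vertical filter this works out to roughly 405 Kbits of on-chip memory for line buffering alone, which is why line-buffer count drives the memory budget of a scaler.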
Figure 5.6 shows the resources required to implement a filter. In general, you will need a memory for each video line store as well as memory for storing the coefficient set. You will also need multipliers to generate the products of coefficients and pixels, and finally an adder to sum the products.
One way to implement this 2-D scaler (i.e. 2-D filter) is by cascading two 1-D filters, as shown in Figure 5.7; this is an implementation published by Altera for their FPGAs.
The implementation in Figure 5.7 consists of two stages, one for each 1-D filter. In the first stage, the vertical lines of pixels are fed into line delay buffers and then fed to an array of parallel multipliers.
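The two-stage cascade can be sketched in software as a separable filter: a vertical 1-D pass fed from the line buffers, followed by a horizontal 1-D pass on the vertical results. This is an assumed structure for illustration, not Altera's actual implementation:

```python
# Sketch of a separable 2-D scaler as two cascaded 1-D filters.
def fir_1d(samples, coeffs):
    """Multiply-accumulate over one tap window."""
    return sum(c * s for c, s in zip(coeffs, samples))

def scale_2d(frame, v_coeffs, h_coeffs):
    """frame: list of pixel rows. Returns one output per valid tap position."""
    nv, nh = len(v_coeffs), len(h_coeffs)
    out = []
    for r in range(len(frame) - nv + 1):
        row_out = []
        for c in range(len(frame[0]) - nh + 1):
            # Stage 1: vertical filter over nv lines (the line-buffer stage)
            vert = [fir_1d([frame[r + i][c + j] for i in range(nv)], v_coeffs)
                    for j in range(nh)]
            # Stage 2: horizontal filter over the vertical results
            row_out.append(fir_1d(vert, h_coeffs))
        out.append(row_out)
    return out
```

With normalized coefficients (each set summing to 1), a constant-valued input region passes through unchanged, which is a quick sanity check on any scaler kernel.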