Due to the limited precision of the FFT-based intermediate results, equation (13.11) contains small numerical errors. Moreover, some caution concerning the above theoretical gain is appropriate: FFT routines generally have a larger overhead and use more memory than direct convolution. Also, the convolution is real-valued, whereas the frequency-domain multiplication and the FFTs require complex operations, so the speed-up factor should be reduced by a factor of 4 to 6.
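To make the technique concrete, the following MATLAB sketch computes a full correlation surface via zero-padded FFTs. It illustrates only the general correlation theorem and is not a reproduction of the masked form of equation (13.11); the function name fftCrossCorr is ours.

    % Minimal sketch of FFT-based correlation of two real images A and B
    % (not the exact masked form of equation (13.11)).
    function R = fftCrossCorr(A, B)
        [hA, wA] = size(A);
        [hB, wB] = size(B);
        H = hA + hB - 1;                   % pad to the full linear size
        W = wA + wB - 1;                   % to avoid circular wrap-around
        FA = fft2(A, H, W);                % zero-padded 2-D FFTs
        FB = fft2(B, H, W);
        R  = real(ifft2(FA .* conj(FB)));  % correlation theorem; the imaginary
                                           % part is numerical noise here
    end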
Example
Figure 13.3 shows an experiment in which two images of size w = 1600, h = 1200 with marked ROIs (see 13.3(a) and 13.3(b)) are stitched together using the above algorithm. The 2-D autocorrelation R_δ is calculated by equation (13.11), using FFTs. The result is a complex-valued matrix; however, the sum over the imaginary components is about 10^15 times lower than the sum over the real ones, so it stems from numerical errors and is discarded. The autocorrelation is displayed in
figure 13.3(c). Clearly, a dominant maximum at δ̂_0 = (1159, 237) is present. The more precise distance measure from (13.2) is then applied, using an exhaustive search within the square δ̂_0 + (±5, ±5), which yields the final localization δ_0 = (1159, 238). Obviously, the FFT had already yielded a nice match, though in some situations (e.g., given complicated masking borders) fine-tuning by equation (13.2) is more important.
In our MATLAB implementation, the exhaustive search within the small square took about twice as long as the FFT-based calculation of the full-size autocorrelation (though the latter consumed considerably more memory), so the overall speed-up factor was around 38,000.
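As an illustration of these two steps (our own sketch, not the book's code, with A and B denoting the two images), the coarse maximum of the correlation surface R and the subsequent ±5 refinement could be written as follows; fineScore is a hypothetical placeholder for the distance measure of equation (13.2), which is not reproduced here.

    % Sketch: coarse peak of the correlation surface R, then exhaustive
    % refinement in a +/-5 neighbourhood around it.
    [~, idx] = max(R(:));
    [dyHat, dxHat] = ind2sub(size(R), idx);   % coarse FFT-based estimate

    r = 5;                                    % search radius, as in the text
    best = Inf;
    dFinal = [dyHat, dxHat];
    for dy = dyHat-r : dyHat+r
        for dx = dxHat-r : dxHat+r
            s = fineScore(A, B, dy, dx);      % hypothetical evaluation of (13.2)
            if s < best
                best = s;                     % keep the best-scoring offset
                dFinal = [dy, dx];
            end
        end
    end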
13.4 Cell Classifier
In this section, we explain how to generate a cell classifier, that is, a function mapping image patches to cell confidence values. For this, a sample set of cells and non-cells is generated; then an artificial neural network is trained on this sample set.
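As a rough sketch of how such a classifier could be trained in MATLAB (assuming the Deep Learning Toolbox; the variables patches and labels and the hidden-layer size of 20 are placeholders, not values from the text):

    % Sketch: train a small feed-forward pattern-recognition network on
    % vectorized image patches. patches is n-by-d (one patch per row),
    % labels is n-by-1 with 1 = cell, 0 = non-cell; both are placeholders.
    X = double(patches)';        % d-by-n feature matrix, one column per sample
    T = [labels'; 1 - labels'];  % 2-by-n targets: [cell; non-cell]

    net = patternnet(20);        % one hidden layer with 20 units (arbitrary)
    net = train(net, X, T);      % supervised training on the sample set

    conf = net(X);               % conf(1,:) = confidence that a patch is a cell

The softmax output of such a network can serve directly as the confidence value for each patch.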