Digital Signal Processing Reference
is performed to reduce the dimension to 10, and a generalized regression
network is trained with five hidden neurons. The classifier is then applied
(see section 13.5 for algorithmic details) to identify the rotated characters
in figure 13.7(c). Five characters were correctly identified, one was not,
and the algorithm produced no misclassifications. This result is good
considering the small training set.
13.5 Confidence Map Generation
The cell classifier has to be trained only once. Given such a cell classifier,
section pictures can now be analyzed as follows.
A pixelwise scan of the image yields an image patch centered at each
scan point; the cell classifier is applied to this patch to give the
probability that a cell is located at that position. This yields a
probability distribution over the whole image, called a confidence map.
Each point of the confidence map is a value in [0, 1] stating how
probable it is that a cell is depicted at the specified location.
In practice, a pixelwise scan can be too expensive in terms of
computation time, so a grid spacing γ can be introduced and the picture
scanned only at every γ-th pixel. This yields a rasterization of the
original confidence map, which for small γ can still be fine enough to
detect cells. Figure 13.8 shows the rasterized confidence map of a
section part. The maxima of the confidence map correspond to the cell
locations; small but nonzero values in the confidence map typically
indicate misclassifications, which can be suppressed by thresholding.
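The scan described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names (`confidence_map`, `classify`, `make_linear_classifier`) are hypothetical, the image is assumed to be a grayscale 2-D array, and the sigmoid-of-weighted-sum classifier stands in for whichever cell classifier was trained.

```python
import numpy as np

def confidence_map(image, classify, n=16, gamma=4, threshold=0.0):
    """Rasterized confidence map: apply `classify` to the n-by-n patch
    centered at every gamma-th pixel. `classify` maps a patch to a
    probability in [0, 1]; values below `threshold` are zeroed to
    suppress small spurious responses. (Hypothetical sketch.)"""
    h, w = image.shape
    r = n // 2
    ys = list(range(r, h - r, gamma))  # grid of scan centers (rows)
    xs = list(range(r, w - r, gamma))  # grid of scan centers (cols)
    conf = np.zeros((len(ys), len(xs)))
    for i, cy in enumerate(ys):
        for j, cx in enumerate(xs):
            patch = image[cy - r:cy + r, cx - r:cx + r]
            p = classify(patch)
            conf[i, j] = p if p >= threshold else 0.0
    return conf

def make_linear_classifier(W):
    """One possible cell classifier: a linear separator with trained
    weight matrix W, passed through a sigmoid nonlinearity."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    return lambda patch: sigmoid(np.sum(patch * W))
```

With γ = 1 this reduces to the full pixelwise scan; larger γ trades spatial resolution of the map for speed, as noted in the text.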
Depending on the type of cell classifier, a method to increase
performance similar to that in section 13.3 can be applied. In the
simplest case, the classifier is a linear separator (for example,
learned by a perceptron
or one of the above unsupervised techniques). Then

\[
\zeta : \mathbb{R}^{n \times n} \to [0, 1], \qquad
I_1 \mapsto \sigma\Big( \sum_{x,y} I_1(x, y)\, W(x, y) \Big),
\tag{13.15}
\]

where $I_1$ is the $n \times n$ image patch to be tested, $W \in \mathbb{R}^{n \times n}$ is the trained weight matrix, and $\sigma : \mathbb{R} \to \mathbb{R}$ is an increasing nonlinearity (already