the segmentation. For simplicity of presentation, we assume classifiers that internally use NN interpolation for atlas lookup and therefore produce exactly one label as their output. If the (unknown) ground truth for voxel x is i, we say that x is in class i and write this as x ∈ C_i.
11.6.1 A Binary Classifier Performance Model
An EM algorithm described by Warfield et al. [79] estimates the classifier performance for each label separately. The method is based on the common performance parameters p (sensitivity) and q (specificity), i.e., the fractions of true positives and true negatives among the classified voxels. The parameters p and q are modeled independently for each classifier k and each class C_i (label in the segmentation) as the following conditional probabilities:
p_i^{(k)} = P(e_k(x) = i | x ∈ C_i)  and  q_i^{(k)} = P(e_k(x) ≠ i | x ∉ C_i).   (11.12)
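Given a known ground truth, these two parameters are simple fractions. As a minimal sketch (with hypothetical toy data, not from the chapter), the following estimates p and q for one classifier and one label over eight voxels:

```python
import numpy as np

# Hypothetical toy data: ground truth and one classifier's decisions for label i.
truth = np.array([1, 1, 1, 0, 0, 0, 1, 0])     # 1 where x is in C_i
decision = np.array([1, 1, 0, 0, 0, 1, 1, 0])  # 1 where e_k(x) = i

p = decision[truth == 1].mean()        # sensitivity: true-positive fraction among x in C_i
q = (1 - decision[truth == 0]).mean()  # specificity: true-negative fraction among x not in C_i
```

In the EM algorithm the ground truth is of course unknown; these fractions are instead computed against the current probabilistic estimate of the true segmentation.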
From these definitions, an EM algorithm that estimates p and q from the classifier decisions can be derived as described by Warfield et al. [79]. From the computed classifier performance parameters for each label, a contradiction-free final segmentation E at voxel x can be computed as
E(x) = argmax_i P(x ∈ C_i | e_1(x), ..., e_K(x)).   (11.13)
Here, the probability P(x ∈ C_i | e) follows from the classifiers' decisions and their performance parameters using Bayes' rule. For details on the application of this algorithm to classifier fusion, see [60].
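The per-voxel Bayes step for the binary model can be sketched as follows. Under an independence assumption across classifiers, the posterior for x ∈ C_i multiplies the prior by p_k or (1 − p_k) for each classifier, and the posterior for the complement by (1 − q_k) or q_k. The function name and the uniform default prior are illustrative assumptions, not part of the chapter:

```python
import numpy as np

def fuse_binary(decisions, p, q, prior=0.5):
    """Assign one voxel to class C_i (returns 1) or its complement (returns 0).

    decisions[k] is 1 if e_k(x) = i; p[k] and q[k] are classifier k's
    sensitivity and specificity for label i; prior is P(x in C_i).
    """
    d = np.asarray(decisions, dtype=bool)
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # Unnormalized posteriors for the hypotheses x in C_i and x not in C_i
    in_ci = prior * np.prod(np.where(d, p, 1.0 - p))
    not_in_ci = (1.0 - prior) * np.prod(np.where(d, 1.0 - q, q))
    return 1 if in_ci >= not_in_ci else 0
```

For example, with three classifiers of sensitivity and specificity 0.9 each, a 2-vs-1 vote for label i yields E(x) = i, since the two agreeing likelihood factors dominate.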
11.6.2 A Multilabel Classifier Performance Model
In a generalization of the Warfield algorithm to multilabel segmentations [60], the classifier parameters p and q are replaced by a matrix of label cross-segmentation coefficients λ_{i,j}^{(k)}. These describe the conditional probabilities that for a voxel x in class C_i the classifier k assigns label j = e_k(x), that is,

λ_{i,j}^{(k)} = P(e_k(x) = j | x ∈ C_i).   (11.14)

This formulation includes the case that i = j, i.e., the classifier decision for that voxel was correct. Consequently, λ_{i,i}^{(k)} is the usual sensitivity of classifier k for label i. We also note that for each classifier k the matrix (λ_{i,j}^{(k)})_{i,j} is a row-normalized version of the "confusion matrix" [83] in Bayesian multiclassifier
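Under the multilabel model, the analogue of Eq. (11.13) selects, per voxel, the class that maximizes the prior times the product of the coefficients λ_{i, e_k(x)}^{(k)} over all classifiers. A minimal per-voxel sketch (function name and prior handling are illustrative assumptions):

```python
import numpy as np

def fuse_multilabel(decisions, lam, prior):
    """Compute E(x) = argmax_i P(x in C_i | e_1(x), ..., e_K(x)).

    decisions: length-K sequence of labels e_k(x).
    lam: length-K sequence of row-normalized L x L matrices with
         lam[k][i, j] = P(e_k(x) = j | x in C_i).
    prior: length-L array of prior class probabilities P(x in C_i).
    """
    post = np.asarray(prior, dtype=float).copy()
    for k, j in enumerate(decisions):
        # Multiply by the likelihood of classifier k's label under each class i
        post *= lam[k][:, j]
    return int(np.argmax(post))
```

For diagonally dominant coefficient matrices (reliable classifiers) and a uniform prior, unanimous decisions simply pass through, while the off-diagonal entries weight disagreements by each classifier's known label-confusion tendencies.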