Fig. 3.13 The tree cube taxonomy of classifiers. The figure is taken from [66]
3.5.3 MRFs vs. CRFs
This section summarizes the main differences between MRFs and CRFs.
Formulation: In MRFs, the posterior is proportional to the joint probability via Bayes' rule, and the joint probability is modeled by defining a likelihood and a prior; CRFs model the posterior directly. In MRFs, the unary potentials are functions of the observed data at an individual site, and the pairwise potentials are functions of the labels only. In CRFs, both the unary and pairwise potentials are functions of the whole observed data as well as the labels.
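To make this contrast concrete, a common simplified way of writing the two posteriors is sketched below, with x denoting the labels, y the observed data, and E the set of neighboring site pairs; this notation is introduced here for illustration and is not taken from [66]:

\[
P_{\text{MRF}}(\mathbf{x}\mid \mathbf{y}) \;\propto\; P(\mathbf{y}\mid \mathbf{x})\,P(\mathbf{x}) \;\propto\; \prod_{i} p(y_i \mid x_i) \prod_{(i,j)\in \mathcal{E}} \psi(x_i, x_j),
\]
\[
P_{\text{CRF}}(\mathbf{x}\mid \mathbf{y}) \;\propto\; \prod_{i} \phi_i(x_i, \mathbf{y}) \prod_{(i,j)\in \mathcal{E}} \psi_{ij}(x_i, x_j, \mathbf{y}),
\]

where the MRF pairwise term depends only on the labels, while both CRF potentials may depend on the whole observation y.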
Feature space: In MRFs, since the distributions of the observed data must be modeled, low-dimensional features such as color and motion are commonly used. In CRFs, more complex discriminative features can be selected to improve predictive performance.
Performance: Compared with CRFs, MRFs can better handle missing data and the addition of new classes. CRFs, however, tend to have better predictive performance because they model the posterior directly. Moreover, since CRFs relax the assumption of conditional independence of the observed data, they can incorporate global information into the model.
Training data: MRFs can augment a small amount of expensive labeled data with a large amount of unlabeled data, whereas CRFs require a substantial amount of labeled data for training.
Data modeling: In MRFs, appropriate distributions must be selected to model the observed data. In CRFs, suitable classification algorithms must be designed to learn from the labeled data.
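As a minimal, self-contained sketch of this difference, the snippet below fits a per-class Gaussian (a generative model of low-dimensional features, as an MRF likelihood term might use) and a logistic-regression classifier (a discriminative unary term, as a CRF might use) on toy data. The feature dimensions, class layout, and library choices (NumPy, SciPy, scikit-learn) are illustrative assumptions, not details from [66].

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 3-D "color" features for two classes (purely illustrative).
X0 = rng.normal(loc=[0.2, 0.3, 0.4], scale=0.1, size=(200, 3))
X1 = rng.normal(loc=[0.6, 0.5, 0.2], scale=0.1, size=(200, 3))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# MRF-style modeling: choose a distribution (here a Gaussian) per class
# and score sites by the class-conditional likelihood p(feature | label).
gaussians = [
    multivariate_normal(mean=Xc.mean(axis=0), cov=np.cov(Xc.T))
    for Xc in (X0, X1)
]

def mrf_unary(feature):
    # Negative log-likelihood of the observed feature under each class model.
    return np.array([-g.logpdf(feature) for g in gaussians])

# CRF-style modeling: train a discriminative classifier on labeled data
# and use -log p(label | features) directly as the unary cost.
clf = LogisticRegression().fit(X, y)

def crf_unary(feature):
    probs = clf.predict_proba(feature.reshape(1, -1))[0]
    return -np.log(probs)

test = rng.normal(loc=[0.55, 0.5, 0.25], scale=0.05, size=3)
print("MRF unary costs:", mrf_unary(test))
print("CRF unary costs:", crf_unary(test))
```

In practice the MRF route requires picking a distribution family that fits the observations, while the CRF route shifts the effort to designing features and a classifier that predict the labels well from labeled examples.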