This chapter reports on our development of an object-oriented software system for automatic retinal image registration by mutual information maximization. For maximum portability the software is written in Java, with its "write once, run anywhere" philosophy, using the model-view-controller (MVC) framework. As the optimization algorithm we use the downhill simplex method (see [31]), which is easy to implement and fast in practice. We demonstrate that this algorithm registers temporal and stereo retinal image pairs of four patients with a high success rate (86%), a registration accuracy comparable to point-matching results, and a clinically acceptable running time (12 ± 3 sec).
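As an aside on the optimizer, the following is a minimal Java sketch of the downhill simplex (Nelder-Mead) method; it is illustrative only, not our system's implementation, and uses the textbook coefficients (reflection 1, expansion 2, contraction and shrink 0.5).

import java.util.Arrays;
import java.util.Comparator;
import java.util.function.ToDoubleFunction;

/** Minimal downhill simplex (Nelder-Mead) sketch; illustrative only. */
public final class DownhillSimplex {

    /** Minimizes f over R^n, starting from an initial simplex of n+1 vertices. */
    static double[] minimize(ToDoubleFunction<double[]> f, double[][] simplex, int maxIter) {
        int n = simplex[0].length;
        for (int iter = 0; iter < maxIter; iter++) {
            // Order vertices by objective value: best first, worst last.
            Arrays.sort(simplex, Comparator.comparingDouble(f::applyAsDouble));
            double fBest = f.applyAsDouble(simplex[0]);
            double fWorst = f.applyAsDouble(simplex[n]);

            // Centroid of the n best vertices (all but the worst).
            double[] c = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    c[j] += simplex[i][j] / n;

            // Reflect the worst vertex through the centroid: c + (c - worst).
            double[] xr = combine(2.0, c, -1.0, simplex[n]);
            double fr = f.applyAsDouble(xr);

            if (fr < fBest) {
                // Reflection is the new best; try expanding: c + 2(c - worst).
                double[] xe = combine(3.0, c, -2.0, simplex[n]);
                simplex[n] = f.applyAsDouble(xe) < fr ? xe : xr;
            } else if (fr < f.applyAsDouble(simplex[n - 1])) {
                simplex[n] = xr;                      // accept plain reflection
            } else {
                // Contract halfway toward the centroid; if even that fails,
                // shrink every vertex halfway toward the best one.
                double[] xc = combine(0.5, c, 0.5, simplex[n]);
                if (f.applyAsDouble(xc) < fWorst) {
                    simplex[n] = xc;
                } else {
                    for (int i = 1; i <= n; i++)
                        simplex[i] = combine(0.5, simplex[0], 0.5, simplex[i]);
                }
            }
        }
        Arrays.sort(simplex, Comparator.comparingDouble(f::applyAsDouble));
        return simplex[0];
    }

    /** Returns a*x + b*y componentwise. */
    private static double[] combine(double a, double[] x, double b, double[] y) {
        double[] z = new double[x.length];
        for (int i = 0; i < x.length; i++) z[i] = a * x[i] + b * y[i];
        return z;
    }

    public static void main(String[] args) {
        // Toy objective with minimum at (1, -2).
        ToDoubleFunction<double[]> f =
                p -> (p[0] - 1) * (p[0] - 1) + (p[1] + 2) * (p[1] + 2);
        double[][] start = { { 0, 0 }, { 1, 0 }, { 0, 1 } };
        System.out.println(Arrays.toString(minimize(f, start, 200)));
    }
}

In the registration setting the objective would be the negated mutual information of the image pair as a function of the transformation parameters; the toy quadratic above merely exercises the optimizer.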
4.2 Registration by Mutual Information Maximization
4.2.1 Mutual Information
For two random variables A and B, the mutual information is

$$ I(A, B) = \sum_{a \in A} \sum_{b \in B} p_{AB}(a, b) \log \frac{p_{AB}(a, b)}{p_A(a)\, p_B(b)}, $$
where p_{AB}(a, b) is the joint probability density function (pdf), and p_A(a) and p_B(b) are the marginal pdfs (Gonzalez et al. [32]). I(A, B) is related to the entropies H(A) and H(B), the conditional entropies H(A|B) and H(B|A), and the joint entropy H(A, B) by
\begin{align*}
I(A, B) &= H(A) + H(B) - H(A, B) \\
        &= H(A) - H(A|B) \\
        &= H(B) - H(B|A).
\end{align*}
Mutual information measures the interdependence of two random variables. If the two variables are independent, their joint pdf is the product of their marginal pdfs, i.e., p_{AB}(a, b) = p_A(a) p_B(b). Substituting this into the definition of mutual information, every logarithm becomes log 1 = 0, so the mutual information is zero, its minimal value. On the other hand, if the two random variables are related by a one-to-one mapping, the mutual information is maximal; in fact, in that case the conditional entropies vanish, so I(A, B) = H(A) = H(B).
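To make the definition concrete, the following is a minimal Java sketch (our own illustration, not the chapter's registration code) that estimates I(A, B) for two equally sized 8-bit images from their joint histogram; the class and method names are hypothetical, and a production system might use smoothed density estimates rather than raw bin counts.

/** Minimal sketch of mutual information between two 8-bit images. */
public final class MutualInformationDemo {

    /** Estimates I(A, B) in bits from the joint histogram of two
     *  equally sized arrays of intensities in [0, 255]. */
    static double mutualInformation(int[] a, int[] b) {
        final int bins = 256;
        double[][] pAB = new double[bins][bins];
        double[] pA = new double[bins];
        double[] pB = new double[bins];

        // Joint pdf: normalized counts of corresponding pixel pairs.
        for (int i = 0; i < a.length; i++)
            pAB[a[i]][b[i]] += 1.0 / a.length;

        // Marginal pdfs: sum the joint pdf over the other variable.
        for (int x = 0; x < bins; x++)
            for (int y = 0; y < bins; y++) {
                pA[x] += pAB[x][y];
                pB[y] += pAB[x][y];
            }

        // I(A, B) = sum_{a,b} pAB log2(pAB / (pA pB)), skipping empty bins.
        double mi = 0.0;
        for (int x = 0; x < bins; x++)
            for (int y = 0; y < bins; y++)
                if (pAB[x][y] > 0.0)
                    mi += pAB[x][y]
                            * Math.log(pAB[x][y] / (pA[x] * pB[y])) / Math.log(2.0);
        return mi;
    }

    public static void main(String[] args) {
        int[] img = { 0, 0, 1, 2, 2, 3 };
        // One-to-one case (identical images): prints H(A), about 1.918 bits.
        System.out.println(mutualInformation(img, img));
    }
}

For identical images the one-to-one case applies and the program prints H(A); the entropy identities above can be checked numerically in the same way.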