When modeling context at pixel $\mathbf{u}$ via the local conditional probability $p^*[c_k(\mathbf{u})\,|\,\mathbf{c}_g]$, the $G(\mathbf{u})$ weights $\{w_g^k(\mathbf{u}),\ g = 1, \ldots, G(\mathbf{u})\}$ for the $k$-th category indicators are derived by solving the following (ordinary indicator kriging) system of equations:

$$\sum_{g'=1}^{G(\mathbf{u})} w_{g'}^{k}(\mathbf{u})\,\sigma_k(\mathbf{u}_g - \mathbf{u}_{g'}) + \psi_k(\mathbf{u}) = \sigma_k(\mathbf{u}_g - \mathbf{u}), \qquad g = 1, \ldots, G(\mathbf{u})$$

$$\sum_{g'=1}^{G(\mathbf{u})} w_{g'}^{k}(\mathbf{u}) = 1 \tag{11.6}$$
where $\psi_k(\mathbf{u})$ denotes the Lagrange multiplier that is linked to the constraint on the weights; see Goovaerts (1997) for details. The solution of the above system yields a set of weights that account for: (1) any spatial redundancy in the training samples by reducing the influence of clusters, and (2) the spatial correlation between each sample indicator $i_k(\mathbf{u}_g)$ of the $k$-th category and the unknown indicator $i_k(\mathbf{u})$ for the same category.
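The system above amounts to a small linear solve per pixel and class. The following minimal Python sketch (illustrative only; the isotropic exponential covariance model and all function names are assumptions, not from the source) assembles and solves Eq. (11.6) for one pixel and one class:

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_len=10.0):
    """Hypothetical isotropic exponential model for the indicator covariance sigma_k."""
    return sill * np.exp(-np.linalg.norm(h, axis=-1) / corr_len)

def oik_probability(u, u_g, i_k, cov=exp_cov):
    """Estimate p*[c_k(u) | c_g] by ordinary indicator kriging, Eq. (11.6).

    u   : (2,) coordinates of the target pixel
    u_g : (G, 2) coordinates of the G(u) training pixels
    i_k : (G,) class-k indicators of the training pixels (1 if class k, else 0)
    """
    G = len(u_g)
    A = np.ones((G + 1, G + 1))
    A[:G, :G] = cov(u_g[:, None, :] - u_g[None, :, :])  # sigma_k(u_g - u_g')
    A[G, G] = 0.0                                       # bordered row/column enforce sum(w) = 1
    b = np.ones(G + 1)
    b[:G] = cov(u_g - u)                                # sigma_k(u_g - u)
    sol = np.linalg.solve(A, b)
    w, psi_k = sol[:G], sol[G]                          # kriging weights, Lagrange multiplier
    return float(w @ i_k)                               # weighted combination of indicators
```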
A favorable property of OIK is its data exactitude: at any training pixel, the estimated probability identifies the corresponding observed indicator; for example, $p^*[c_k(\mathbf{u}_g)\,|\,\mathbf{c}_g] = i_k(\mathbf{u}_g)$. This feature is not shared by traditional spatial classifiers, such as the nearest neighbor classifier (Steele et al., 2001), which allow for misclassification at the training locations. On the other hand, at a pixel $\mathbf{u}$ that lies further away from the training locations than the correlation length of the indicator covariance model $\sigma_k$, the estimated OIK probability is very similar to the corresponding prior class proportion (i.e., $p^*[c_k(\mathbf{u})\,|\,\mathbf{c}_g] = p_k$). In short, the only information exploited by IK is the class labels at the training sample locations and their spatial correlation. Near training locations, IK is faithful to the observed class labels, whereas away from these locations IK has no other information apart from the prior (constant) class proportions $\{p_k,\ k = 1, \ldots, K\}$.
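Both limiting behaviors can be checked numerically with the sketch above (again illustrative, with hypothetical coordinates and three training pixels):

```python
u_g = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])  # training pixel locations
i_k = np.array([1.0, 0.0, 0.0])                       # only the first belongs to class k

# Data exactitude: estimating at a training pixel reproduces its indicator (~1.0 here)
print(oik_probability(np.array([0.0, 0.0]), u_g, i_k))

# Far beyond the correlation length, the estimate reverts toward the sample
# class proportion of class k (~1/3 here), the stand-in for the prior p_k
print(oik_probability(np.array([500.0, 500.0]), u_g, i_k))
```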
11.2.3 Combining Spectral and Contextual Information
Once the two conditional probabilities $p^*[c_k(\mathbf{u})\,|\,\mathbf{x}(\mathbf{u})]$ and $p^*[c_k(\mathbf{u})\,|\,\mathbf{c}_g]$ are derived from spectral and spatial information, respectively, the goal is to fuse these probabilities into an updated estimate of the conditional probability $p^*[c_k(\mathbf{u})\,|\,\mathbf{x}(\mathbf{u}), \mathbf{c}_g] = \mathrm{Prob}\{C(\mathbf{u}) = c_k \,|\, \mathbf{x}(\mathbf{u}), \mathbf{c}_g\}$, which accounts for both information sources. In what follows, we will drop the superscript $*$ from the notation for simplicity, but the reader should bear in mind that all quantities involved are estimated probabilities. In accordance with Bayesian terminology, we will refer to the individual source conditional probabilities, $p^*[c_k(\mathbf{u})\,|\,\mathbf{x}(\mathbf{u})]$ and $p^*[c_k(\mathbf{u})\,|\,\mathbf{c}_g]$, as preposterior probabilities and retain the qualifier posterior only for the final conditional probability $p^*[c_k(\mathbf{u})\,|\,\mathbf{x}(\mathbf{u}), \mathbf{c}_g]$ that accounts for both information sources.
Bayesian updating of the individual source preposterior probabilities for, say, the $k$-th class is accomplished by writing the posterior probability $p[c_k(\mathbf{u})\,|\,\mathbf{x}(\mathbf{u}), \mathbf{c}_g]$ in terms of the prior probability $p_k$ and the joint likelihood function $p[\mathbf{x}(\mathbf{u}), \mathbf{c}_g \,|\, c(\mathbf{u}) = c_k]$:

$$p[c_k(\mathbf{u})\,|\,\mathbf{x}(\mathbf{u}), \mathbf{c}_g] = \mathrm{Prob}\{C(\mathbf{u}) = c_k \,|\, \mathbf{x}(\mathbf{u}), \mathbf{c}_g\} = \frac{p[\mathbf{x}(\mathbf{u}), \mathbf{c}_g \,|\, c(\mathbf{u}) = c_k] \cdot p_k}{p[\mathbf{x}(\mathbf{u}), \mathbf{c}_g]} \tag{11.7}$$

where

$$p[\mathbf{x}(\mathbf{u}), \mathbf{c}_g \,|\, c(\mathbf{u}) = c_k] = \mathrm{Prob}\{X_1(\mathbf{u}) = x_1(\mathbf{u}), \ldots, X_B(\mathbf{u}) = x_B(\mathbf{u}),\ C(\mathbf{u}_1) = c_{k_1}, \ldots, C(\mathbf{u}_G) = c_{k_G} \,|\, c(\mathbf{u}) = c_k\}$$
denotes the probability that the particular combination of $B$ reflectance values and $G$ sample class labels occurs at pixel $\mathbf{u}$ and its neighborhood (for simplicity, $G$ and $G(\mathbf{u})$ are not differentiated notation-wise). In the denominator, $p[\mathbf{x}(\mathbf{u}), \mathbf{c}_g]$ denotes the marginal (unconditional) probability of that joint data event.
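The joint likelihood in Eq. (11.7) is rarely available in full. As a minimal illustration only: if one additionally assumes that the spectral data $\mathbf{x}(\mathbf{u})$ and the contextual data $\mathbf{c}_g$ are conditionally independent given the class (a simplification not stated in this excerpt), Eq. (11.7) reduces to a per-class product of the two preposterior probabilities divided by the prior, followed by normalization:

```python
import numpy as np

def fuse_preposteriors(p_spec, p_ctx, p_prior):
    """Bayesian fusion of spectral and contextual preposterior probabilities.

    Sketch assuming conditional independence of x(u) and c_g given the class,
    under which Eq. (11.7) becomes, for each class k,
        p[c_k | x, c_g]  proportional to  p[c_k | x] * p[c_k | c_g] / p_k.
    """
    post = p_spec * p_ctx / p_prior
    return post / post.sum()  # normalization plays the role of p[x(u), c_g]

# Hypothetical three-class example
p_spec  = np.array([0.6, 0.3, 0.1])   # p[c_k(u) | x(u)]  (spectral preposterior)
p_ctx   = np.array([0.7, 0.2, 0.1])   # p[c_k(u) | c_g]   (contextual preposterior, e.g., OIK)
p_prior = np.array([0.5, 0.3, 0.2])   # prior class proportions p_k
print(fuse_preposteriors(p_spec, p_ctx, p_prior))
```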