Table 8.2. Operators and global functions used in the algorithmic descriptions

Fn. / Op.      Description

A ⊗ B          given an a × b matrix or vector A, and a c × d matrix or
               vector B, with a = c, b = d, A ⊗ B returns an a × b matrix
               that is the result of an element-wise multiplication of A
               and B. If a = c, d = 1, that is, if B is a column vector
               with c elements, then every column of A is multiplied
               element-wise by B, and the result is returned. Analogously,
               if B is a row vector with b elements, then each row of A is
               multiplied element-wise by B, and the result is returned.

A ⊘ B          the same as A ⊗ B, only performing division rather than
               multiplication.

Sum(A)         returns the sum over all elements of matrix or vector A.

RowSum(A)      given an a × b matrix A, returns a column vector of size a,
               where its ith element is the sum of the b elements of the
               ith row of A.

FixNaN(A, b)   replaces all NaN elements in matrix or vector A by b.
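As a concrete illustration, the operators of Table 8.2 map closely onto NumPy semantics; in particular, NumPy's broadcasting already implements the column- and row-vector cases of ⊗ and ⊘. The sketch below is my own rendering under that assumption, and the helper names `row_sum` and `fix_nan` are mine, not the book's:

```python
import numpy as np

# Sketch of Table 8.2's operators in NumPy. Broadcasting covers the
# vector cases of ⊗/⊘: multiplying an (a, b) matrix by an (a, 1) column
# or a (1, b) row vector applies the vector element-wise to every
# column or row, respectively.

A = np.array([[1.0, 2.0], [3.0, 4.0]])
col = np.array([[10.0], [100.0]])    # column vector with a = 2 elements

elementwise = A * A                  # A ⊗ A: element-wise product
scaled_rows = A * col                # A ⊗ col: every column of A times col

def row_sum(A):
    """RowSum(A): column vector whose ith element sums row i of A."""
    return A.sum(axis=1, keepdims=True)

def fix_nan(A, b):
    """FixNaN(A, b): replace all NaN elements of A by b."""
    return np.where(np.isnan(A), b, A)
```

Using `/` in place of `*` gives ⊘ with the same broadcasting behaviour.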
\[
M = \begin{pmatrix}
m_1(x_1) & \cdots & m_K(x_1) \\
\vdots & \ddots & \vdots \\
m_1(x_N) & \cdots & m_K(x_N)
\end{pmatrix} \tag{8.1}
\]
Thus, column k of this matrix specifies the degree of matching of classifier k for all available observations. Note that the definition of M differs from the one in Chap. 5, where M was a diagonal matrix that specified the matching for a single classifier.
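The construction of M in Eq. (8.1) can be sketched as follows. The interval-based matching functions here are illustrative assumptions of mine, not definitions from the text; any function returning a matching degree per observation would do:

```python
import numpy as np

# Sketch of building the matching matrix M of Eq. (8.1): entry (n, k)
# is m_k(x_n), the degree to which classifier k matches observation x_n.
# The binary interval matching below is a hypothetical example.

def interval_match(lower, upper):
    """Return m(x): 1 if lower <= x <= upper, else 0 (binary matching)."""
    return lambda x: float(lower <= x <= upper)

X = np.array([0.1, 0.5, 0.9])                 # N = 3 observations
matching_fns = [interval_match(0.0, 0.6),     # classifier 1
                interval_match(0.4, 1.0)]     # classifier 2 (K = 2)

# M is N x K: column k holds classifier k's matching over all observations.
M = np.array([[m(x) for m in matching_fns] for x in X])
```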
In addition to the matching matrix, we also need to define the N × D_V mixing feature matrix Φ, that is given by

\[
\Phi = \begin{pmatrix}
-\;\phi(x_1)^T\;- \\
\vdots \\
-\;\phi(x_N)^T\;-
\end{pmatrix}, \tag{8.2}
\]

and thus specifies the feature vector φ(x) for each observation. In LCS, we usually have φ(x) = 1 for all x, and thus also Φ = (1, . . . , 1)^T, but the algorithm presented here also works for other definitions of φ.
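A minimal sketch of Eq. (8.2), assuming the usual LCS choice φ(x) = 1 mentioned above (so D_V = 1 and Φ is a column of ones); the general case would stack arbitrary D_V-dimensional feature vectors row by row:

```python
import numpy as np

# Sketch of the mixing feature matrix Phi of Eq. (8.2): row n is phi(x_n)^T.
# Here phi(x) = 1 for all x, as is usual in LCS, giving a column of ones.

def phi(x):
    """Default LCS mixing features: phi(x) = 1 for all x (D_V = 1)."""
    return np.array([1.0])

X = np.array([0.1, 0.5, 0.9])             # N = 3 observations
Phi = np.vstack([phi(x) for x in X])      # N x D_V matrix
```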
8.1.1 Model Probability and Evidence

The function ModelProbability takes the model structure and the data as arguments and returns L(q) + ln p(M) as an approximation to the unnormalised