$$
\mathbf{C} =
\begin{pmatrix}
p_{sr} & \frac{1-p_{sr}}{14} & \cdots & \frac{1-p_{sr}}{14} & 0 \\
\frac{1-p_{sr}}{14} & p_{sr} & \cdots & \frac{1-p_{sr}}{14} & 0 \\
\vdots & & \ddots & \vdots & \vdots \\
\frac{1-p_{sr}}{14} & \frac{1-p_{sr}}{14} & \cdots & p_{sr} & 0 \\
0 & 0 & \cdots & 0 & 1
\end{pmatrix}
\qquad (2)
$$
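Purely as an illustrative sketch (NumPy and the helper name `state_confusion_matrix` are our own choices, not part of the model description), the matrix in (2) could be assembled and sanity-checked as follows, assuming 16 states with the success state kept absorbing:

```python
import numpy as np

def state_confusion_matrix(p_sr: float, n_states: int = 16) -> np.ndarray:
    """Build C: diagonal p_sr, off-diagonal (1 - p_sr)/14 among the first
    15 states, and an absorbing last row/column for the success state."""
    n_conf = n_states - 1                      # 15 confusable states
    C = np.full((n_conf, n_conf), (1.0 - p_sr) / (n_conf - 1))
    np.fill_diagonal(C, p_sr)
    C = np.pad(C, ((0, 1), (0, 1)))            # zero-pad last row and column
    C[-1, -1] = 1.0                            # success state is never confused
    return C

C = state_confusion_matrix(p_sr=0.9)
assert np.allclose(C.sum(axis=1), 1.0)         # each row is a distribution
```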
As a subject may confuse the shown objects among each other, we introduce a third parameter. The object recall probability $p_{or}$ is the subject's chance to remember the current object. Conversely, $1 - p_{or}$ is the probability of confusing the current object with all other possible objects. A series containing eight different objects leads to $1/8 \le p_{or} \le 1$. Here $p_{or} = 1$ represents a perfect recall of objects and $p_{or} = 1/8$ describes a learner without any memory for objects. We define the object confusion matrix $\mathbf{O}$ with the $ij$-th element being the probability of assigning object $i$ to object $j$. Again, the attention is uniformly distributed over all objects and, as above, in the case of an ideal learner $\mathbf{O}$ becomes the identity matrix.
$$
\mathbf{O} =
\begin{pmatrix}
p_{or} & \frac{1-p_{or}}{7} & \cdots & \frac{1-p_{or}}{7} \\
\frac{1-p_{or}}{7} & p_{or} & \cdots & \frac{1-p_{or}}{7} \\
\vdots & & \ddots & \vdots \\
\frac{1-p_{or}}{7} & \frac{1-p_{or}}{7} & \cdots & p_{or}
\end{pmatrix}
\qquad (3)
$$
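A corresponding sketch for (3), again with an illustrative helper name and assuming the eight-object series described above:

```python
import numpy as np

def object_confusion_matrix(p_or: float, n_objects: int = 8) -> np.ndarray:
    """Build O: diagonal p_or, all off-diagonal entries (1 - p_or)/7."""
    O = np.full((n_objects, n_objects), (1.0 - p_or) / (n_objects - 1))
    np.fill_diagonal(O, p_or)
    return O

O = object_confusion_matrix(p_or=1.0)          # ideal learner
assert np.allclose(O, np.eye(8))               # identity matrix, as stated
```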
With the defined state space and transition probabilities the learning process for every single object can be described. For each object the model performs a trial by selecting a button, following the state transition matrix. As this is not a deterministic process, we calculate the probabilities of being in a certain state, $\pi_i = P(s_i)$ for all $s_i \in S$. Always starting in state $s_1$, the initial distribution $\boldsymbol{\pi}(t=0)$ is $\pi_1(t=0) = 1$ and $\pi_{2 \ldots 16}(t=0) = 0$. For every trial the new state probability vector $\boldsymbol{\pi}(t+1)$ results from the product of the transposed transition matrix $\mathbf{T} \in \{\mathbf{T}_{left}, \mathbf{T}_{right}, \mathbf{T}_{up}, \mathbf{T}_{down}\}$ and the previous state probability vector $\boldsymbol{\pi}(t)$. Since we allow state confusion, the state confusion matrix also becomes part of the product:

$$
\boldsymbol{\pi}(t+1) = \mathbf{T}^{\top} \cdot \mathbf{C} \cdot \boldsymbol{\pi}(t) . \qquad (4)
$$
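To make the update in (4) concrete, here is a hedged sketch of a single trial. The actual transition matrices $\mathbf{T}_{left}, \ldots, \mathbf{T}_{down}$ follow from Tab. 1 and (1), which are not reproduced here, so a generic row-stochastic placeholder stands in for them:

```python
import numpy as np

n_states = 16
rng = np.random.default_rng(0)

# Placeholder for one of T_left, T_right, T_up, T_down (defined by Tab. 1 and (1));
# here just some row-stochastic 16x16 matrix so the update can be executed.
T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)

C = np.eye(n_states)                     # no state confusion, for brevity (see sketch of (2))

pi = np.zeros(n_states)
pi[0] = 1.0                              # always start in state s_1

# Equation (4): pi(t+1) = T^T . C . pi(t)
pi = T.T @ C @ pi
assert np.isclose(pi.sum(), 1.0)         # pi remains a probability distribution
```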
After every trial we need to sum up the current probability of success and the probability of success of the previous trial. For instance, if we consider 'left' as the correct button, the state $s_{15}$ automatically leads to success ($s_{16}$) in the next trial (cf. Tab. 1 and (1)). Thus, this sum becomes $\pi_{15}(t+1) = \pi_{15}(t) + \pi_{16}(t)$ for that case.
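Continuing the sketch above, and purely as an illustration of this bookkeeping (NumPy indices are 0-based, so the paper's $s_{15}$ and $s_{16}$ map to indices 14 and 15):

```python
# Accumulate the success probability for the 'left'-correct case, as stated above:
# pi_15(t+1) = pi_15(t) + pi_16(t)   (1-based state indices from the paper)
pi[14] = pi[14] + pi[15]
```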
We are now able to calculate the state probabilities for each object separately. So far the learning process is represented by the state probability vectors $\boldsymbol{\pi}_l$ for each object $l$ of the series. As we consider the chance to confuse objects, we have to update the state probabilities for all objects, even if just one is actually shown to the subject. This implies that the probability of success on a presented object is the weighted sum of the success states of all objects with their probability of