We will propose that the agent says it believes all those hypotheses that have a confidence value greater than In(A), and disbelieves all those that have a lesser value.
Therefore, from our simple example with the coins we have:

Initially (before the coin toss):

Agent 1's belief in H1 = 0.4 + 0.4 = 0.8

and its belief in H2 = 0.2 + 0.0 = 0.2

So:

Entropy(Agent) = −(0.8 Log2(0.8) + 0.2 Log2(0.2)) = 0.26 + 0.46 = 0.72
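As a check on this arithmetic, here is a minimal Python sketch; the `entropy` helper is our own naming for the purpose of illustration, not part of the model described in the text:

```python
import math

def entropy(beliefs):
    """Shannon entropy (base 2) of a collection of belief values."""
    return -sum(b * math.log2(b) for b in beliefs if b > 0)

# Initial beliefs from the coin example: H1 = 0.4 + 0.4, H2 = 0.2 + 0.0
print(round(entropy([0.8, 0.2]), 2))  # 0.72
```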
Since we require our indifference threshold to be in terms of a probability, but the entropy measure is in terms of the log of a probability, this result needs to be converted. So the indifference threshold I will be the inverse log2 of 0.72:

I = 2^(−0.72) = 0.61 (approximately).

This is because −Log2(0.61) is approximately equal to 0.72.
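The conversion itself is just raising 2 to the negated entropy; a one-line sketch, assuming the entropy value computed above:

```python
entropy_value = 0.72                      # from the calculation above
indifference_threshold = 2 ** (-entropy_value)
print(round(indifference_threshold, 2))   # approximately 0.61
```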
If a tail occurs then Agent 1's beliefs in H1 and H2 become 0.85 and 0.15 respectively, so:
Entropy(Agent) = −(0.85 Log2(0.85) + 0.15 Log2(0.15)) = 0.2 + 0.41 = 0.61
The indifference threshold I will be 0.66, since −Log2(0.66) = 0.61 (approximately).
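Putting the update together, a short sketch (reusing the hypothetical `entropy` helper from above) that recomputes the threshold after a tail and applies the believe/disbelieve rule:

```python
import math

def entropy(beliefs):
    return -sum(b * math.log2(b) for b in beliefs if b > 0)

beliefs = {"H1": 0.85, "H2": 0.15}    # Agent 1's beliefs after a tail
h = entropy(beliefs.values())         # about 0.61
threshold = 2 ** (-h)                 # about 0.66
for name, b in beliefs.items():
    status = "believes" if b > threshold else "disbelieves"
    print(name, status)               # prints: H1 believes / H2 disbelieves
```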
We can say that when the result of the coin toss is tails, the general agent's confidence has gone up from 0.61 to 0.66. Given a string of tails this confidence will eventually reach 1.0, at which point the remaining hypothesis (H1 in this case) would be re-designated as a fact. Nevertheless, in our game theory model the agent never becomes so certain of the world that it stops acting to test alternative hypotheses. No matter how 'certain' they become of a hypothesis, the agents in our model remain open to conflicting (negative) evidence. (For examples, see Gooding and Addis 2004.)