profile by the matrix representing the result probabilities of all the possible experiments. This calculation generates, for each agent, a biased perception of each of the experiments. These results can now be turned into the agent's expected results for each experiment, as follows:
0.8 * Hypothesis: the coin is good—H1

                        Head    Tail    No response
Exp1: Ask for a Head    0.72    0.04    0.04
Exp2: Ask for a Tail    0.04    0.72    0.04

(the prior 0.8 is multiplied into each result probability)
0.2 * Hypothesis: the coin is not good—H2

                        Head    Tail    No response
Exp1: Ask for a Head    0.18    0.00    0.02
Exp2: Ask for a Tail    0.04    0.00    0.16

(again the prior, here 0.2, is multiplied into each result probability)
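The prior-weighting step above can be sketched in code. The likelihood matrices below are not stated explicitly in this passage; they are inferred by dividing each table entry by its prior (e.g. 0.72 / 0.8 = 0.9), so treat them as an assumption.

```python
# Priors over the two hypotheses, as given in the tables.
priors = {"H1": 0.8, "H2": 0.2}

# Result probabilities per experiment, columns (Head, Tail, No response).
# These likelihoods are inferred from the tables, not quoted from the text.
likelihood = {
    "H1": {"Exp1": (0.90, 0.05, 0.05), "Exp2": (0.05, 0.90, 0.05)},
    "H2": {"Exp1": (0.90, 0.00, 0.10), "Exp2": (0.20, 0.00, 0.80)},
}

# The "multiply" step: scale each likelihood row by the hypothesis prior.
weighted = {
    h: {exp: tuple(round(priors[h] * p, 2) for p in row)
        for exp, row in rows.items()}
    for h, rows in likelihood.items()
}

print(weighted["H1"]["Exp1"])  # (0.72, 0.04, 0.04)
print(weighted["H2"]["Exp2"])  # (0.04, 0.0, 0.16)
```

Each weighted row sums to its prior, so the six entries for any one experiment sum to one, as the note below observes.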
Note that the total confidence over both hypotheses for each experiment is equal to
one. So for the agent we have:
H(Exp1) = −{0.72 log2(0.72) + 0.04 log2(0.04) + 0.04 log2(0.04)
          + 0.18 log2(0.18) + 0.0 log2(0.0) + 0.02 log2(0.02)}
        = 1.27 (entropy for H1 and H2, taking 0 log2(0) = 0)
I(Exp1) = 0.41 (approximately)
H(Exp2) = −{0.04 log2(0.04) + 0.72 log2(0.72) + 0.04 log2(0.04)
          + 0.04 log2(0.04) + 0.0 log2(0.0) + 0.16 log2(0.16)}
        = 1.32 (entropy for H1 and H2)
I(Exp2) = 0.40 (approximately)
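The two entropy figures can be checked numerically. The indifference values 0.41 and 0.40 are reproduced here under the assumption that I(Exp) = 2^(−H(Exp)); that formula is inferred from the quoted numbers, not stated in this passage.

```python
import math

def entropy(ps):
    """Shannon entropy in bits, with 0 * log2(0) taken as 0."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

# Prior-weighted result probabilities from the two tables, over both
# hypotheses: (Head, Tail, No response) for H1, then for H2.
exp1 = [0.72, 0.04, 0.04, 0.18, 0.0, 0.02]
exp2 = [0.04, 0.72, 0.04, 0.04, 0.0, 0.16]

h1, h2 = entropy(exp1), entropy(exp2)
print(round(h1, 2), round(h2, 2))          # 1.27 1.32
# Assumed form of the indifference measure: I = 2**(-H).
print(round(2**-h1, 2), round(2**-h2, 2))  # 0.41 0.4
```

With this assumed form, lower entropy corresponds to higher indifference, matching the comparison drawn in the next paragraph.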
Taking this criterion alone, experiment 1 ("Ask for a Head") is a marginally better choice because it has the lower entropy. However, according to the logic of falsification that many have attributed to science (Popper 1959; Wason 1960; Lakatos 1970), this is not the best experiment to choose: the exposure of a Tail would eliminate H2, so we ought to "Ask for a Tail". There is a way of avoiding this apparent conflict between confirmatory and non-confirmatory strategies, so that our agents can employ both. The clue is to note that the indifference levels (0.40, 0.41) do not sum to unity, which suggests that these 'probabilities' are not telling the complete story. Something more needs to be done.
 