cooperate with each other so that they can cause as much harm to player 1 as possible. Of
course, this assumption is dramatic and most surely not correct, but since we seek to assure the
quality of decisions against all scenarios, it serves as a sharp worst-case sketch.
This is made explicit in the following result:
Proposition 4.1 (Rass & Schartner (2009)). Let $\Gamma = (N, PS, H)$ with $N = \{1, 2\}$, $PS = \{PS_1, PS_2\}$, and $H = (x^T A y,\, x^T B y)$ be a bi-matrix game with game-matrices $A \in \mathbb{R}^{|PS_1| \times |PS_2|}$, $B \in \mathbb{R}^{|PS_2| \times |PS_1|}$ for player 1 (honest) and player 2 (adversary), respectively. Let $\Gamma_0 = (N, PS, (x^T A y,\, -x^T A y))$ be the zero-sum game from player 1's perspective (i.e. player 2 receives the payoff $-x^T A y$), and let $v(\Gamma_0)$ denote its value (i.e. the average outcome under a Nash-equilibrium strategy in $\Gamma_0$). Then

$$(x^*)^T A y^* \;\ge\; v(\Gamma_0) \qquad (5)$$

for all Nash-equilibria $(x^*, y^*)$ of the game $\Gamma$.
The proof follows by simply observing that player 2 can either play the zero-sum strategy of $\Gamma_0$ (in which case the assumption is valid and we get equality in (5)) or act according to his own wishes. In the latter case, he necessarily deviates from the zero-sum strategy and thus increases the expected revenue for player 1.
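As a purely illustrative sketch (not taken from the chapter), the value $v(\Gamma_0)$ and the corresponding maximin strategy for player 1 can be computed with a standard linear program; the payoff matrix used below is an invented placeholder. Note that only $A$ matters for the guarantee in (5): player 2's actual payoff matrix $B$ never enters the computation.

```python
# Minimal sketch: value and maximin strategy of the zero-sum game Gamma_0.
# The matrix A is a made-up placeholder, not data from the chapter.
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(A):
    """Return (v, x): the value of the zero-sum game with payoff matrix A
    and a maximin mixed strategy x for the row player (player 1)."""
    n, m = A.shape
    # Decision variables: x_1, ..., x_n and v; minimizing -v maximizes v.
    c = np.append(np.zeros(n), -1.0)
    # Constraints v <= (A^T x)_j for every opponent column j:  -A^T x + v <= 0.
    A_ub = np.hstack([-A.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    # x must be a probability distribution: sum(x) = 1, x >= 0; v is free.
    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun, res.x[:n]

A = np.array([[0.2, 0.8, 0.4],    # hypothetical payoffs for player 1
              [0.6, 0.1, 0.9]])
v0, x0 = zero_sum_value(A)
print(f"v(Gamma_0) = {v0:.3f}, maximin strategy x = {np.round(x0, 3)}")
# Proposition 4.1: at every Nash equilibrium (x*, y*) of the bi-matrix game,
# player 1's payoff (x*)^T A y* is at least v0, whatever B looks like.
```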
4.2 Reasoning games
The observation that a zero-sum game soundly models a worst-case scenario from one
player's point of view (proposition 4.1) leads to a simple way of assuring the quality of a
decision: whenever we are facing random behavior, proposition 4.1 permits calculating the
worst-case distribution and provides us with a behavioral rule so that we get an assured
outcome under this worst imaginable scenario. This is what we call
Assurance: when facing an uncertain situation, our recommendation should be such
that it provides a guaranteed outcome, independently of how much the observed
behavior deviates from the assumptions under which a decision was made.
Proposition 4.1 is the key to doing this, and the process is made rigorous by the following
Example: let us return to the introductory example sketched in section 1.1. We now invoke
game theory and the (properly debugged) ontology to get the best answer from the three
candidates.
Recall that the recommendations were $PS_1 = \{(s, f), (l, m), (o, h)\}$, where
(s,f): drive straight (s) at high speed (f).
(l,m): turn left (l) at the next junction, speed can be moderate (m).
(o,h): turn over (o) with high speed (h).
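Before the opponent's behaviors are constructed in the next paragraph, the following purely hypothetical sketch shows how the assurance rule would be applied to these three candidates: the rows are the recommendations above, the columns are the oncoming traffic's six possible behaviors (slow/fast combined with left/right/straight, introduced below), and every accident likelihood is an invented placeholder rather than a value produced by the chapter's ontology.

```python
# Hypothetical illustration only: all accident likelihoods are invented.
import numpy as np

candidates = ["(s,f) straight, fast", "(l,m) left, moderate", "(o,h) turn over, fast"]
# Columns: oncoming traffic's behaviors (s,l), (f,l), (s,r), (f,r), (s,s), (f,s),
# i.e. {slow, fast} x {left, right, straight}, as constructed in the text below.
accident_likelihood = np.array([
    [0.7, 0.9, 0.3, 0.5, 0.2, 0.4],   # (s,f)
    [0.4, 0.6, 0.2, 0.3, 0.3, 0.5],   # (l,m)
    [0.5, 0.8, 0.4, 0.6, 0.4, 0.7],   # (o,h)
])

# Assurance in its simplest (pure-strategy) form: judge every candidate by its
# worst case over all opponent behaviors and recommend the least bad one.
worst_case = accident_likelihood.max(axis=1)
best = int(worst_case.argmin())
for name, w in zip(candidates, worst_case):
    print(f"{name}: worst-case accident likelihood {w:.1f}")
print("assured recommendation:", candidates[best])
# The chapter's full treatment admits mixed strategies: feeding
# -accident_likelihood into the LP of the previous sketch yields the
# maximin randomization over the three candidates.
```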
The oncoming traffic can either be slow or fast, and is free to turn left, turn right, or go straight at the next junction. Hence, the set $PS_2$ is composed of all possible combinations, i.e. $PS_2 = \{\text{slow}, \text{fast}\} \times \{\text{turn left}, \text{turn right}, \text{go straight}\}$, making up 6 combinations, which we abbreviate as pairs $PS_2 = \{(s, l), (f, l), (s, r), (f, r), (s, s), (f, s)\}$.
Assume that the ontology can decide upon the likelihood of an accident for each combination in $PS_1 \times PS_2$. For example, if the recommendation is to drive straight at high speed, and the oncoming traffic goes left, then the likelihood of an accident is higher than it would be if the oncoming traffic goes straight too (considering which driver has to give priority). If the