Given the evidences E = {(Director = Spielberg), (Type = S.F.)} and Example 3.6, where the experience I_3 of agent A_3 is represented by the rules R1 to R6, we see that rules R1 and R2 are fired, and the corresponding opinion of agent A_3 is therefore B_3 = {Good, Average}.
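To make the rule-firing step concrete, here is a minimal Python sketch of how an opinion could be assembled from an agent's experience rules. The rule antecedents and names below are hypothetical illustrations: rules R1 to R6 of Example 3.6 are not reproduced in this excerpt, so only their known effect (R1 and R2 firing on E and yielding 'Good' and 'Average') is modeled.

# A minimal sketch of opinion formation by rule firing. The concrete
# rules below are hypothetical stand-ins for R1-R6 of Example 3.6,
# which are not reproduced here; only the fact that R1 and R2 fire
# on E, yielding 'Good' and 'Average', is known from the text.

rules_A3 = [
    ({("Director", "Spielberg")}, "Good"),     # hypothetical R1
    ({("Type", "S.F.")},          "Average"),  # hypothetical R2
    ({("Director", "Kubrick")},   "Good"),     # hypothetical R3
    # ... rules R4 to R6 omitted
]

def opinion(rules, evidence):
    """Collect the consequents of all fired rules into a multiset.

    A rule fires when every (attribute, value) condition in its
    antecedent is present in the evidence.
    """
    return [value for antecedent, value in rules if antecedent <= evidence]

E = {("Director", "Spielberg"), ("Type", "S.F.")}
B3 = opinion(rules_A3, E)
print(B3)  # ['Good', 'Average'], i.e. B_3 = {Good, Average}

The opinion is kept as a multiset (a Python list) rather than a set, since repeated consequents record repeated past cases with the same outcome.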
Note that this is not intended to mean that A_3 believes the probability of any state value other than 'Average' or 'Good' must be zero. Nor does it imply that the probabilities that the quality of the current movie is average and that it is good are both 1/2. In most applications, including the current scenario, the obtained samples (i.e., the agents' experience rules) are far too sparse to provide any meaningful assessment of the probability distribution over the state values.
Rather, the opinion B_3 = {Good, Average} should be understood as the following statement from agent A_3: 'From my (i.e., agent A_3's) limited (self-conflicting) experience so far, I have reason to believe the quality of the current movie (i.e., the state) may be either average or good.'
Example 3.8. The situation for agent A_1 is simpler. As the evidences are E = {(Director = Spielberg), (Type = S.F.)} in Example 3.5, a total of 3 rules in agent A_1's experience are fired, corresponding to the previous cases when agent A_1 watched E.T. the Extra-Terrestrial, The Lost World, and Jurassic Park. Hence the corresponding opinion of agent A_1 is B_1 = {Good, Good, Good}, or simply {Good}.
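Under the same hypothetical sketch, each of agent A_1's three fired rules contributes the consequent 'Good', and deduplicating the multiset gives the simplified form:

# Each of the three fired rules (the E.T., The Lost World, and
# Jurassic Park cases) contributes 'Good' to the opinion.
B1 = ["Good", "Good", "Good"]
print(set(B1))  # {'Good'}, the simplified opinion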
It is important to note that, again, this is not intended to mean that agent A_1 believes the probability that the quality of