others, the following rule:

{(movie, movie, movie, movie), (tennis, tennis, tennis, tennis)} ⇒ Good
To estimate the value of the hidden state s ('Quality' in the example), each agent i is equipped with some experience, which is a collection I_i of experience rules. The left-hand side of a rule corresponds to previously observed evidence, and the right-hand side corresponds to the value of the state that occurred. Note that an agent's past evidence is not always coherent: in general, an agent's past experience contains conflicting rules.
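As a concrete sketch of this representation (the encoding below is an assumption for illustration, not notation from the text), an experience rule can be modeled in Python as a pair mapping a set of observed evidence to a state value, and an agent's experience I_i as a list of such pairs:

# Minimal sketch: an experience rule maps observed evidence to the
# state value that occurred. The encoding is illustrative only.

# One piece of evidence, e.g. ("Director", "Spielberg").
Evidence = tuple[str, str]

# An experience rule: left-hand side (evidence set) => state value.
Rule = tuple[frozenset[Evidence], str]

# A hypothetical experience I_i containing two conflicting rules:
# the same evidence maps to two different state values.
I_i: list[Rule] = [
    (frozenset({("Director", "Spielberg"), ("Type", "S.F.")}), "Average"),
    (frozenset({("Director", "Spielberg"), ("Type", "S.F.")}), "Good"),
]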
Example 3.6 Consider the experience I_3 of agent A_3 in Table 3.4, which can be represented by the following experience rules:

R1. (Director = Spielberg) (Type = S.F.) ⇒ Average
R2. (Director = Spielberg) (Type = S.F.) ⇒ Good
R3. (Director = Lucas) (Type = S.F.) ⇒ Good
R4. (Director = Lucas) (Type = S.F.) ⇒ Good
R5. (Director = King) (Type = Drama) ⇒ Bad
R6. (Director = King) (Type = Drama) ⇒ Bad
Note that rule R1 and rule R2 conflict with each other: rule R1 states that when Spielberg is the director and the movie type is S.F., the movie is average, while rule R2 states that the movie quality is good given the same evidence. Therefore, given the current evidence

E = { (Director = Spielberg) (Type = S.F.) },
by agent A_3's experience I_3 the supported states are the members of the set {Good, Average}. In other words, for agent A_3, both 'Good' and 'Average' are possible states given the current evidence E. This differs from the case of agent A_1, who is sure that the quality of the movie is good.
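The following sketch shows one way such supported states could be computed: collect the right-hand sides of all experience rules whose left-hand side matches the current evidence (matching is assumed here to mean equality with E). The rules in I_3 are transcribed from Example 3.6; the coherent experience for A_1 is a made-up assumption for contrast, since Table 3.4 is not reproduced here.

def supported_states(experience, E):
    """Return the states supported by rules whose evidence equals E."""
    evidence = frozenset(E)
    return {state for lhs, state in experience if lhs == evidence}

# Agent A_3's experience I_3, transcribed from rules R1-R6 above.
I_3 = [
    (frozenset({("Director", "Spielberg"), ("Type", "S.F.")}), "Average"),  # R1
    (frozenset({("Director", "Spielberg"), ("Type", "S.F.")}), "Good"),     # R2
    (frozenset({("Director", "Lucas"), ("Type", "S.F.")}), "Good"),         # R3
    (frozenset({("Director", "Lucas"), ("Type", "S.F.")}), "Good"),         # R4
    (frozenset({("Director", "King"), ("Type", "Drama")}), "Bad"),          # R5
    (frozenset({("Director", "King"), ("Type", "Drama")}), "Bad"),          # R6
]

E = {("Director", "Spielberg"), ("Type", "S.F.")}
print(supported_states(I_3, E))   # both 'Good' and 'Average': conflicting rules

# A hypothetical coherent experience for agent A_1 (assumed, for contrast):
# every rule matching E points to the same state, so A_1 is sure.
I_1 = [(frozenset({("Director", "Spielberg"), ("Type", "S.F.")}), "Good")]
print(supported_states(I_1, E))   # only 'Good'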