experimental results. Moreover, the results of the experiments are presented as an attempt to describe
the behavior of this kind of system; for example, its additive properties or the consequences
of the choice of the threshold function. Whether such behavior adequately describes cognitive
phenomena remains an open problem.
However, the experimental results show that it is possible to mimic many commonsense
assumptions about how trust varies as certain features are altered; our aim was in fact to
capture variations in trust rather than to assign absolute values to it. In our view, this experiment
confirms the importance of an analytic approach to trust and to its determinants, one not
simply reduced to a single, opaque probability measure or to some sort of reinforcement
learning.
In the next two sections we introduce:

- some learning mechanisms with respect to the reliability of the belief sources; and
- some comparative experiments among different strategies for trusting other agents using a Contract Net protocol, showing how our cognitive approach offers some advantages.
11.13 Learning Mechanisms
In the previous sections we have considered the belief sources as static knowledge of the
agents. In this part we briefly show how this approach could be extended by modeling some
of the dynamics generated by a learning process and by trust itself.
We give an agent in a MAS the capacity to evaluate its 'sources of opinions', i.e. the
other agents, according to a specific advising-attitude parameter: trust in Y as an information
source (which is different from trust in Y to perform a delegated task).
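As a minimal sketch of this distinction (with hypothetical names and values, not taken from the model itself), an agent might keep two separate trust parameters for each partner Y:

from dataclasses import dataclass

@dataclass
class TrustProfile:
    task_trust: float        # trust in Y to perform a delegated task
    informant_trust: float   # trust in Y as an information source

# The same partner can be rated very differently in the two roles:
john = TrustProfile(task_trust=0.8, informant_trust=0.3)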
In order to build a belief source, we considered a node representing a single belief and an
edge representing the impact of this single belief: the value of the edge (the impact factor)
represents the validity of the source with respect to this single communicative episode. Some
elements of this episode are unique (e.g. certainty about the source), but others are shared
with all the other communicative episodes involving the same source: the trustfulness of the
source applies to the whole class of possible beliefs about the opinions of that source on a
given topic. These values can be learned and applied to future cases; for example, if it emerges
from some interactions with John that he systematically lies, the impact of (my belief about)
his opinion (e.g. 'John says p') will drastically diminish or even become negative, and this
value can be used in further interactions.
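To make this concrete, here is a minimal sketch in Python (hypothetical names, illustrative values) of a single communicative episode, represented as a belief node whose edge carries the learned impact factor:

from dataclasses import dataclass

@dataclass
class CommunicativeEpisode:
    belief_value: float  # value of the single belief node, e.g. 'John says p'
    impact: float        # impact factor: validity of the source in this episode

# Repeated episodes revealing that John systematically lies lower the learned
# impact of his opinions, which can even become negative:
johns_opinion = CommunicativeEpisode(belief_value=0.9, impact=0.6)
johns_opinion.impact = -0.2  # carried over to future interactions with John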
As shown, the FCM computes its results until it stabilizes. This leads to a stable result for
each node involved in the FCM. Here we propose a second phase: the FCM 'evaluates its
sources', i.e. modifies the impact of each single belief source according to the final value of
its belief source node.
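The first phase can be sketched as a standard FCM iteration; we assume here synchronous updates and a tanh threshold function, which may differ from the exact settings used in the experiments:

import numpy as np

def run_fcm(weights, state, eps=1e-4, max_steps=1000):
    # weights[i, j]: impact of node j on node i; state: initial node values
    for _ in range(max_steps):
        new_state = np.tanh(weights @ state)  # assumed threshold function
        if np.max(np.abs(new_state - state)) < eps:
            break  # the FCM has stabilized
        state = new_state
    return state

The stabilized vector supplies the final value of every node, including the belief source nodes that the second phase evaluates.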
For example, many nodes $n_1, \ldots, n_n$ representing single beliefs (opinions given by the
source) can contribute to the value of a belief source node $N$. Each node $n_j$ (with
$1 \le j \le n$) has an impact $i_j$ (among $i_1, \ldots, i_n$) over $N$; the impact value is
calculated by the inner FCM previously described. After the FCM stabilization, the difference
between the (final) value of the belief source and the value of each single belief (of the
source) can be seen, in information terms, as an error.
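In symbols (our notation): writing $V(N)$ for the stabilized value of the belief source node and $v(n_j)$ for the value of the $j$-th single belief, the error attached to each contributing node is

$$ e_j = V(N) - v(n_j), \qquad 1 \le j \le n. $$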
The learning phase consists in trying to minimize these errors; in our terms, the impact of a bad
opinion (and the importance of the corresponding source) has to be lowered; the reverse holds
for a good opinion, whose impact has to be raised.
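A minimal sketch of this second phase, assuming node values in $[-1, 1]$ and a hypothetical learning rate (neither is specified by the model itself): beliefs that disagree with the stabilized source value lose impact, while beliefs that agree with it gain impact.

def update_impacts(impacts, belief_values, source_value, rate=0.1):
    # error of each single belief w.r.t. the stabilized source value, in [0, 2]
    updated = []
    for impact, value in zip(impacts, belief_values):
        error = abs(source_value - value)
        # agreement (error < 1) raises the impact; disagreement lowers it
        updated.append(impact + rate * (1.0 - error))
    return updated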