a cognitive trustor : this kind of trustor takes into account both the specific features of the
actual trustee and the impact of the environment on their performance. In this implementation
this kind of agent does not learn; instead, it has a priori knowledge of the specific properties
of the other agents and of the environment. Clearly, in a realistic model of this kind
of agent, the a priori knowledge about both the internal properties of the trustees and the
environmental impact on the global performance would not be perfect. We did not introduce
a learning mechanism for this kind of agent (although in Section 11.13 we discussed this
problem and showed potential solutions); instead, we introduced different degrees of error
into the trustor's knowledge, corrupting its otherwise perfect interpretation of the world. The
cognitive model is built using Fuzzy Cognitive Maps. In particular, two special kinds of
agent will be analyzed:
best ability trustor : who chooses the agent with the best ability score.
best willingness trustor : who chooses the agent with the best willingness score.
These two kinds of cognitive agents can be viewed as having different 'personalities'.
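The two 'personalities' above differ only in which score they maximize when choosing a trustee. A minimal sketch of the two selection rules follows; the `AgentProfile` container and its field names are illustrative assumptions, not part of the original implementation:

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Hypothetical container for the a priori knowledge a cognitive
    trustor holds about a potential trustee."""
    name: str
    ability: float      # assumed normalized in [0, 1]
    willingness: float  # assumed normalized in [0, 1]

def best_ability_choice(candidates):
    # The best ability trustor delegates to the agent with the
    # highest ability score.
    return max(candidates, key=lambda a: a.ability)

def best_willingness_choice(candidates):
    # The best willingness trustor delegates to the agent with the
    # highest willingness score.
    return max(candidates, key=lambda a: a.willingness)

offers = [AgentProfile("A", 0.9, 0.4),
          AgentProfile("B", 0.5, 0.8)]
print(best_ability_choice(offers).name)      # → A
print(best_willingness_choice(offers).name)  # → B
```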
11.14.3 The Contract Net Structure
We have performed some experiments in a turn world , others in a real time world . In the turn
world the sequence is always the same. The first agent (randomly chosen) posts their first task
( Who can perform the task τ ? ) and collects all the replies from the other agents ( I can
perform the task τ in the environment w ). All the data given by the offering agents are true
(there is no deception); in particular, the cognitive trustors know the values of ability and
willingness for each agent (as we will see later, with different approximations).
Depending on their delegation strategy, the trustor delegates the task to one of the offering
agents (in this case, even to themselves: self-delegation). The delegated agent tries to perform
the task; if it is successful, the delegating agent gains one Credit ; otherwise it gains none. The
initiative passes to the second agent and so on, repeating the same schema for all the tasks
for all the agents. At the end of each simulation, each agent has collected a number of Credits
that correspond to the number of tasks that the delegated agents have successfully performed.
We introduced no external costs or gains: we assumed that every delegation costs the
same and that the gain from each performed task is the same. Since the agents have the same
structure and the same tasks to perform, the credits they gain measure the success of their
delegation strategy.
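The turn-world protocol above can be sketched as a simple loop. This is only an illustration of the schema described in the text, not the authors' code: the `delegate` strategy and `success_prob` function are hypothetical parameters standing in for the trustor's delegation strategy and for the trustee's actual performance:

```python
import random

def run_turn_world(agents, tasks, delegate, success_prob, seed=0):
    """Minimal sketch of the turn-world contract net described above.

    agents       : list of agent identifiers
    tasks        : list of tasks each agent must have performed
    delegate     : strategy mapping (trustor, offers, task) -> chosen trustee
    success_prob : hypothetical map (trustee, task) -> probability of success
    """
    rng = random.Random(seed)
    credits = {a: 0 for a in agents}
    for task in tasks:                  # each task is posted in turn...
        for trustor in agents:          # ...with the initiative passing agent by agent
            offers = list(agents)       # all agents reply truthfully (self-delegation allowed)
            trustee = delegate(trustor, offers, task)
            if rng.random() < success_prob(trustee, task):
                credits[trustor] += 1   # one Credit per successfully performed task
    return credits
```

With a deterministic `success_prob` of 1.0, every agent ends the run with one credit per task, which matches the accounting described above.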
In the real time world we have disabled the turn structure; the delegation script is the same,
except that there is no explicit synchronization of operations. This means that another parameter
was implicitly introduced: the time needed to execute an operation. Collecting and analyzing
messages has a time cost; agents who receive more requests need more time to fulfill them. In
the same way, agents who make more attempts at performing a task, as well as agents who reason
more, spend more time. In the real time world, time optimization is another performance parameter
(alternative to, or combined with, credits ), and some alternative trust strategies become
interesting. In the real time experiments we introduced another strategy:
the first trustful trustor : a variant of the cognitive trustor with the same FCM
structure, but which delegates to the first agent whose trust exceeds a certain threshold: this is