∀A:AGENT ∀X1, ..., Xn:ACTION ∀I1, ..., In, J:REAL
belief(has_capability(A, X1, I1)) ∧ ... ∧ belief(has_capability(A, Xn, In)) ∧ trust(A, J)
→→ trust(A, β*J + (1-β)*(w1*I1 + w2*I2 + ... + wn*In))
Thus, the new trust state in agent A is calculated based on the level J of the old trust
state, combined with the weighted sum of the beliefs in capabilities Ik for the different
actions Xk in the domain of application. For example, a soccer player trusts his
teammate more if he believes that he is good at attacking as well as defending. Again,
the wk's are weight factors, and β is a persistence factor.
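As a minimal sketch (the function and variable names are illustrative, not part of the model's formal notation), the trust update rule above can be written as:

```python
def update_trust(old_trust, capability_beliefs, weights, beta):
    """Combine the old trust level with a weighted sum of capability beliefs.

    old_trust: previous trust level J in agent A (in [0, 1])
    capability_beliefs: belief levels I1..In in A's capabilities for actions X1..Xn
    weights: weight factors w1..wn (assumed here to sum to 1)
    beta: persistence factor in [0, 1]; higher beta means trust changes more slowly
    """
    weighted_capabilities = sum(w * i for w, i in zip(weights, capability_beliefs))
    return beta * old_trust + (1 - beta) * weighted_capabilities

# Soccer example: trust in a teammate based on attacking and defending,
# with equal weights and a fairly persistent trust state.
new_trust = update_trust(0.5, [0.8, 0.6], [0.5, 0.5], beta=0.9)
# 0.9*0.5 + 0.1*(0.5*0.8 + 0.5*0.6) ≈ 0.52
```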
A next step is to define how beliefs in capabilities are determined. For this, the
mechanism put forward in [8] is reused, which states that a new trust state in some
entity is partly based on the old trust state, and partly on an experience. This is
modeled via the following LEADSTO rule (where the experiences are observed
actions performed by teammates):
∀A:AGENT ∀X:ACTION ∀I:REAL
belief(has_capability(A, X, I)) ∧ observed(performed(A, X, succeeded))
→→ belief(has_capability(A, X, 1 - γ + γ*I))
belief(has_capability(A, X, I)) ∧ observed(performed(A, X, failed))
→→ belief(has_capability(A, X, γ*I))
For instance, if agent X believes that agent Y's capability with respect to
tackling is 0.6, and Y performs a successful tackle, then this belief is strengthened
(where γ is an update speed factor). Note that the mechanism to update trust is also
applied to the self, to model some kind of self-confidence.
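The experience-based belief update can be sketched as follows (again with illustrative names; after a success the belief moves toward 1, after a failure toward 0, with speed determined by γ):

```python
def update_capability_belief(belief, succeeded, gamma):
    """Update the belief in an agent's capability for an action after
    observing one performance of that action.

    belief: current belief level I in [0, 1]
    succeeded: True if the observed action succeeded, False if it failed
    gamma: update speed factor in [0, 1]; lower gamma gives faster updates
    """
    if succeeded:
        return 1 - gamma + gamma * belief   # interpolate toward 1
    return gamma * belief                   # interpolate toward 0

# Tackling example from the text: belief 0.6, successful tackle, gamma = 0.8
strengthened = update_capability_belief(0.6, True, 0.8)   # ≈ 0.68
weakened = update_capability_belief(0.6, False, 0.8)      # ≈ 0.48
```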
Finally, as with emotions, trust states also have an impact on the other states in the
BDI-model. For example, a high trust in a teammate increases the strength of the
intention to pass the ball to this player. Again, these mechanisms are represented
using (mostly domain-specific) rules, which can be found in [6].
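Purely as an illustration of the kind of influence meant here (the actual rules are domain-specific and given in [6]; this particular multiplicative form is an assumption, not the paper's rule):

```python
def pass_intention_strength(desire_to_pass, trust_in_teammate):
    """Hypothetical example of trust modulating a BDI state: the strength
    of the intention to pass to a teammate grows with trust in that
    teammate. Both inputs and the result lie in [0, 1]."""
    return desire_to_pass * trust_in_teammate

# A trusted teammate (0.9) attracts a stronger passing intention
# than a distrusted one (0.3), given the same underlying desire.
strong = pass_intention_strength(0.8, 0.9)
weak = pass_intention_strength(0.8, 0.3)
```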
3 Simulation Results
To test the basic mechanisms of the model, it has been used to generate a number of
simulation runs within the LEADSTO simulation environment. To this end, various
scenarios have been established in the context of a (simplified) soccer game. The
game has been simplified in the sense that we did not simulate a complete soccer
match (as is the case in the RoboCup environment), including computer-generated
teammates and opponents, and environmental processes (e.g., movements of the ball).
Instead, the tests focused on the behavior of one particular agent (player X). To test
this behavior, the scenarios consisted of a series of predefined events (e.g., 'teammate
Y passes the ball to teammate Z', 'teammate Z shoots on target'), which were
provided as input to player X. Based on these inputs, all actions and emotions derived
by the agent were observed and evaluated, and in case of inappropriate behavior, the
model was improved (manually). Since the main goal of the simulations was to test
the model for decision making in relation to emotions and trust, this simplified setup
was considered a necessary first step. As a next step (see Section 4), the model was
tested in a more complete setting (the RoboCup environment).
To represent the domain-specific aspects of virtual soccer, the different logical
sorts introduced in Section 2 were filled with a number of elements. For example,