(:action EAT
 :parameters (?food - food ?room - room)
 :precondition (and (in ?room) (taken ?food)
                    (> (quantity ?food) 0)
                    (not (time-goes-by)))
 :effect
 (and
   (time-goes-by)
   (assign (action-time) (* (conscientiousness) (go-duration)))
   (decrease (hunger) (conscientiousness))
   (decrease (quantity ?food) (conscientiousness))
   (when (< (hunger) 0)
     (assign (hunger) 0))
   (when (< (preference ?food) 0)
     (and
       (increase (valence)
         (* (/ (neuroticism) (max-neuroticism))
            (- (/ (+ (preference ?food) (eat-preference)) (max-preference)) 1)))
       (increase (v-valence)
         (* (/ (neuroticism) (max-neuroticism))
            (- (max-preference) (/ (+ (preference ?food) (eat-preference)) 2))))))
   (when (> (preference ?food) 0)
     (and
       (increase (valence)
         (* (/ (neuroticism) (max-neuroticism))
            (- (/ (+ (openness) (eat-preference)) (max-preference)) 1)))
       (increase (v-valence)
         (* (/ (neuroticism) (max-neuroticism))
            (- (max-preference) (/ (+ (openness) (eat-preference)) 2))))))
   (increase (arousal)
     (* (/ (neuroticism) (max-neuroticism))
        (- (/ (+ (activation ?food) (eat-activation)) (max-activation)) 1)))
   (increase (v-arousal)
     (* (/ (neuroticism) (max-neuroticism))
        (- (max-activation) (/ (+ (activation ?food) (eat-activation)) 2))))))
Fig. 2. Example of an action (EAT) to cater for a need (hunger)
hard constraints on our model. All agents can perform all actions, but they prefer (soft
constraints) the ones that better suit their preferences, personality and current emotional
state.
3.8 Goals
The agent's motivation is to satisfy its basic needs, so goals consist of a set of drive
values that the agent has to achieve. For example, a goal may require that the need values
(and emotional variables) fall below a given threshold. Such goals can easily be combined
with other kinds of standard planning goals, creating other kinds of domains. For
instance, we could define strategy games where agents should accomplish some tasks
while also taking their needs into account.
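A threshold goal of this kind can be sketched in the same numeric PDDL notation as Fig. 2. This is only an illustration: the (hunger) fluent follows Fig. 2, while the (thirst) fluent, the threshold values and the (at agent room2) literal are made-up examples rather than elements of the paper's actual domain:

(:goal (and (< (hunger) 20)        ; hypothetical hunger threshold
            (< (thirst) 20)        ; hypothetical second drive
            (at agent room2)))     ; an ordinary task goal mixed in
(:metric minimize (total-time))

Combining drive thresholds with ordinary task literals in this way would yield the mixed strategy-game domains mentioned above.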
4 Experiments
We report here the results obtained with the proposed model, comparing its performance
to that of a reactive model. For the deliberative model, we have used an A* search
technique with the well-known domain-independent heuristic of FF [22]. This heuristic
is not admissible, so it does not ensure optimality, but it is good enough for our
current experimentation. For the reactive model, we have used a function that chooses
the best action at each step, i.e. the one that covers the drive with the highest value
(the worst drive). These search techniques have been implemented in an FF-like planner,
SAYPHI [12].