case of accidents, like fire or explosions, for instance. Each virtual human in such simulations has to deal with high-intensity stimuli (light, heat, noise, etc.), and the main motivation has to be related to the danger they represent. Since these are extreme conditions, we can usually skip minor motivations like hunger or thirst: adrenaline allows a human being to ignore them in such cases. So, we focus here on safety as a simple example of a need. In more complex situations, other motivations, like love and sympathy for others in danger, can easily be included in the model by adding the corresponding Motivation and Cognition Agents.
A panic situation involves a variety of behaviors: reflexes (fright, shock, etc.); selection of a course of action (running away, hiding, confronting, etc.); path finding in rapidly changing environments; and so on. All these elements generate internal conflicts that must be solved very quickly using knowledge processing.
Modeling this kind of behavior using the proposed architecture involves, first of all, the description of a Motivation Agent we can call Safety (after its main goal). It can be specified as already described (see Figure 2).
The Motivational Drive m is the rate at which a motivation increases over time while it is not being satisfied. Since the need for safety does not change over time in normal persons (not paranoid, for instance) under normal conditions (peace, everyday life), m is set to zero. The intensity of the Safety motivation then depends on stimuli coming from the Alert Cognition Agent Danger, which, in the presence of high-intensity target stimuli at time t, activates an alarm by sending the corresponding intensity d_t, which is responsible for increasing the Safety intensity.
Tolerance thresholds T_min and T_max are specified according to the character's personality: calm people are harder to drive to panic, while an anxious person tends to become frightened easily.
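As a concrete reading of these dynamics, the Python sketch below models the Safety intensity update. The class name, the additive update rule, and the default threshold values are illustrative assumptions, not part of the original specification.

```python
# Hypothetical sketch of the Safety Motivation Agent's intensity update.
# SafetyMotivation, the additive rule, and the default thresholds are
# assumptions for illustration.

class SafetyMotivation:
    def __init__(self, m=0.0, t_min=0.3, t_max=1.0):
        self.m = m            # motivational drive: growth rate while unsatisfied
        self.t_min = t_min    # lower tolerance threshold (personality-dependent)
        self.t_max = t_max    # upper tolerance threshold (personality-dependent)
        self.intensity = 0.0

    def update(self, d_t=0.0, p_t=0.0):
        """One time step: the alarm input d_t raises intensity,
        a successful protective action p_t lowers it."""
        self.intensity += self.m + d_t - p_t
        # Intensity is kept within [0, t_max]; discharge behaviors
        # (e.g. Panic) absorb anything above the upper threshold.
        self.intensity = max(0.0, min(self.intensity, self.t_max))
        return self.intensity
```

With m = 0, as in the text, the intensity moves only in response to d_t and p_t; a nonzero m would make the motivation grow on its own while unsatisfied.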
While the Safety intensity is between the two thresholds, a Cognition Agent here named Protection is activated as the root of a decision hierarchy that selects the best action to execute. If the chosen action succeeds, a corresponding p_t component is sent back at time t to decrease the Safety intensity. Otherwise, from the moment the upper threshold is reached, behaviors expressing anxiety tend to emerge; these are executed by Discharge Execution Agents such as Panic (also specified according to the character's personality). In this case, the intensity is correspondingly decreased, keeping its value at the upper threshold until a successful action is finally selected by Protection (or some catastrophe happens and the character dies).
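The three behavioral regimes described above (normal behavior below T_min, the Protection decision hierarchy between the thresholds, and Panic discharge at the upper threshold) can be sketched as follows; the function name, numeric threshold values, and the short episode are illustrative assumptions.

```python
def select_behavior(intensity, t_min=0.3, t_max=1.0):
    """Map the Safety intensity to the active behavioral regime
    (illustrative thresholds, not the original specification)."""
    if intensity < t_min:
        return "idle"          # no threat perceived: normal behavior
    if intensity < t_max:
        return "protection"    # decision hierarchy selects an action
    return "panic"             # discharge behaviors express anxiety

# A short episode: alarm signals d_t push the intensity up until it is
# pinned at t_max (Panic discharge), then a successful action sends p_t.
intensity, t_max = 0.0, 1.0
trace = []
for d_t, p_t in [(0.5, 0.0), (0.6, 0.0), (0.0, 0.8)]:
    intensity = max(0.0, min(intensity + d_t - p_t, t_max))
    trace.append(select_behavior(intensity))
# trace == ["protection", "panic", "idle"]
```

Note that the clamp at t_max is what keeps the intensity at the upper threshold during Panic, as the text describes, until a large enough p_t arrives.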
Once the main Motivation Agent Safety is modeled, the Danger Alert Agent has to be specified, taking into account the sensitivity of the characters to the stimuli signaling the danger situation. Likewise, the Protection graph depends on the knowledge the characters are expected to have about the environment and the danger involved. Both Danger and Protection are Cognition Agents, which means that they can be embedded in a motivational graph (see above) that may include other factors (for instance, competition with or sympathy for others, as already mentioned).
CONCLUSION AND FUTURE TRENDS
Virtual Characters are frequently associated with computational games, but they have many other applications, in both technology and research systems. Behavior mechanisms, though, are usually very specific to each application. Since the commonly expected properties of character behavior are believability and plausibility, it is important that action selection be flexible, autonomous, and adaptable to the environment in order to satisfy this criterion. A generic structure for an artificial mind that provides these qualities is therefore interesting and useful. In complex environments such as those of Virtual Reality, this can only be attained through