Movable objects pose a problem for an accumulation-only occupancy map. If an object is only
ever inserted and never removed, it will pollute the map as it moves through
the scene and is observed in several places. Dynamic objects therefore demand that an update mechanism
be included with the memory module: when an object is seen at a certain location, any other references to its
location must be removed from the occupancy map.
This raises the issue of the validity of the recorded location of a moving object that has been out of
view for a while. Chances are that if an object is seen moving at one location and then is out of view for a
while, the object is no longer at that location. It is therefore reasonable to remove memories of locations
of moving objects that have not been seen recently. Time-stamping memories of moving objects
allows this to be done: as time progresses, memories older than some (possibly object-specific)
threshold are removed (e.g., [18][24]).
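The update and time-stamping mechanisms described above can be sketched as a small data structure. This is a minimal illustration, not an implementation from the text; the cell coordinates, object identifiers, and the single fixed expiry threshold are assumptions made for the example (the text notes the threshold could be object-specific).

```python
class OccupancyMap:
    """Occupancy map that tolerates moving objects.

    Each sighting of an object removes any stale reference to it,
    and sightings older than a threshold are forgotten entirely.
    """

    def __init__(self, max_age=30.0):
        # object id -> (cell, time of last sighting)
        self.sightings = {}
        # cell -> set of object ids believed to occupy it
        self.cells = {}
        # single global threshold here; could be per-object instead
        self.max_age = max_age

    def observe(self, obj_id, cell, now):
        """Record a sighting, deleting any previous reference to the object."""
        old = self.sightings.get(obj_id)
        if old is not None:
            old_cell, _ = old
            self.cells.get(old_cell, set()).discard(obj_id)
        self.sightings[obj_id] = (cell, now)
        self.cells.setdefault(cell, set()).add(obj_id)

    def expire(self, now):
        """Forget moving objects not seen for longer than the threshold."""
        for obj_id, (cell, t) in list(self.sightings.items()):
            if now - t > self.max_age:
                self.cells.get(cell, set()).discard(obj_id)
                del self.sightings[obj_id]
```

Observing the same object at a new cell removes the old entry, so the map never accumulates multiple recorded positions for one moving object; calling `expire` periodically implements the time-stamp-based forgetting.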
11.3 Modeling intelligent behavior
Modeling behavior, especially intelligent behavior, is an open-ended task. Even simple flocking and
predator-prey models can be difficult to control. Humans are even more complex, and modeling their
behavior is a task that AI has been addressing for decades. This type of AI falls under the rubric of
computer graphics because of the spatial (geometric) reasoning component. In computer animation,
the animator usually wants the visuals of character motion to mimic (or at least be a caricature of)
how humans are seen behaving in real life. While motion capture is a relatively quick and easy
way to model human behavior, algorithms that produce human behavior are, if done well,
more flexible and more general than motion capture (mocap)-based animation. In addition, such
models often contribute to a greater understanding of human behavior. Human and animal behavior
have been modeled since the early days of computer animation but only recently have more sophisti-
cated methods been employed for higher level, autonomous cognitive behavior.
For example, the Improv system of Ken Perlin and Athomas Goldberg uses layered scripts with
decision rules to create novel animation [29]. The AlphaWolf project at the Massachusetts Institute of
Technology has modeled not only expressiveness but also learning and development [40]. Attempts
have also been made to apply neural net technology to learning behavior (e.g., [7]). Recently, a more
formal approach has emerged that draws on the psychology literature (e.g., [36]); it has been applied
to expressions and gestures, personality, and crowd behavior.
11.3.1 Autonomous behavior
Autonomous behavior, as used here, refers to the motion of an object that results from modeling its cognitive
processes. Usually, such behavior is applied to relatively few objects in the environment. As a
result, emergent behavior is not typically associated with autonomous behavior, as it is with particle sys-
tems and flocking behavior, simply because the numbers are not large enough to support the sensation of
an aggregate body. To the extent that cognitive processes are modeled in flocking, autonomous behavior
shares many of the same issues, most of which result from the necessity to balance competing urges.
Autonomous behavior models an object that knows about the local environment in which it exists,
reasons about the state it is in, plans a reaction to its circumstances, and carries out actions that affect its
environment. The environment, as it is sensed and perceived by the object, constitutes the external