and more complex, with hundreds of different situations/states; it is therefore not easy to predict all the situations that could potentially arise, and even more difficult to decide which are the most appropriate actions to take in each of them. As a consequence, many RTS games contain 'holes', in the sense that the game stagnates or behaves incorrectly under very specific conditions (these problems fall into the category of 'artificial stupidity' [1]). Thus the realism of the simulation is drastically reduced, and so too is the interest of the player;
In addition, there are very interesting research problems in developing AI for
Real-Time Strategy (RTS) games including planning in an uncertain world with
incomplete information, learning, opponent modeling, and spatial and temporal
reasoning [3]. Designing this AI is usually very hard due to the complexity of the search space (states describe large playing scenarios with hundreds of units acting simultaneously). Particular problems are caused by the large search spaces (environments consisting of many thousands of possible positions for each of hundreds, possibly thousands, of units) and by the parallel nature of the problem: unlike traditional games, any number of moves may be made simultaneously [4].
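The parallel nature of the problem can be made concrete: if each of n units may choose among m actions and all units act at once, a single joint move has m^n variants rather than the m options of a classical one-move-per-turn game. A minimal sketch (the function name is ours, for illustration only):

```python
# Illustration of the combinatorial explosion caused by simultaneous moves:
# with n units and m actions per unit, one "joint move" has m**n variants.

def joint_branching_factor(num_units: int, actions_per_unit: int) -> int:
    """Number of distinct simultaneous joint moves."""
    return actions_per_unit ** num_units

# Even a modest scenario is already intractable for exhaustive search:
# 50 units with 8 possible actions each yields 8**50 joint moves.
print(joint_branching_factor(50, 8))
```

Exhaustively enumerating such a space is hopeless, which is why the abstraction and evolutionary techniques discussed next matter.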
Qualitative spatial reasoning (QSR) techniques can be used to reduce complex
spatial states (e.g., using abstract representations of the space [5]). Regarding
evolutionary techniques, a number of biologically-inspired algorithms and multi-
agent based methods have already been applied to handle many of the mentioned
problems in the implementation of RTS games [6,7,8,9,10,11,12,13].
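One way to picture such a qualitative spatial abstraction (a hypothetical sketch of the general idea, not the specific representation of [5]) is to collapse exact map coordinates into a few coarse sectors, so that states differing only in precise unit positions map to the same abstract state:

```python
# A toy qualitative spatial abstraction: exact (x, y) tile coordinates on a
# 64x64 map are collapsed into a 3x3 grid of coarse sectors, shrinking the
# per-unit state space from 64*64 positions to 9 qualitative regions.

def to_sector(x: int, y: int, map_size: int = 64, grid: int = 3) -> tuple:
    """Map an exact (x, y) position to a coarse (row, col) sector."""
    cell = map_size / grid
    return (min(int(y // cell), grid - 1), min(int(x // cell), grid - 1))

# Two nearby units on different exact tiles share the same sector, so the
# AI can reason about "a group in the top-left region" instead of tracking
# every coordinate.
print(to_sector(5, 7))
print(to_sector(10, 3))
```

The map size and grid resolution here are arbitrary; the point is only that reasoning over a handful of regions is far cheaper than reasoning over thousands of exact positions.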
Even in the case of designing a very good script for virtual players (VPs), the designer must confront another well-known problem: the VP's behavior is usually fixed and rarely adapts to the level (i.e., skills) of the player. In fact, the player can lose interest in the game because either she is able to beat all the opponents at every level, or the virtual player always beats her. In this sense, the design of interesting (e.g., non-predictable) NPCs is not the only challenge: the virtual player also has to adapt to the human player, since she might otherwise lose interest in the game (i.e., if the NPCs are too easy or too hard to beat).
There are many benefits in attempting to build adaptive learning AI systems, which may exist at multiple levels of the game hierarchy and evolve over
time. This paper precisely deals with the issue of generating behaviors for the
virtual player that evolve in accordance with the player's increasing abilities.
This behavior emerges according to the player skill, and this emergent feature
can make an RTS game more entertaining and less predictable in the sense that
emergent behavior is not explicitly programmed but simply happens [14]. The
attainment of adjustable and emergent virtual players consists here of a two-stage process that is iteratively executed in sequence: (1) a behavior model of the human player is created in real time during the execution of a game, and (2) the virtual player is evolved off-line (i.e., between two games) via evolutionary algorithms until it reaches a state in which it can compete with the player (not necessarily to beat her, but to keep her interest in the game). This approach has been applied to an RTS game constructed specifically for experimentation, and we report here our experience.
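The two-stage process can be sketched as follows. This is a toy illustration under our own assumptions (a scalar genome standing in for a VP's strength, and a fitness that rewards closeness to the modeled player skill so the evolved opponent is challenging but beatable), not the actual algorithm used in the experiments:

```python
import random

random.seed(42)

def fitness(vp_strength: float, player_skill: float) -> float:
    """Reward VPs whose strength is close to the modeled player skill."""
    return -abs(vp_strength - player_skill)

def evolve_vp(player_skill: float, pop_size: int = 20, gens: int = 50) -> float:
    """Off-line stage: a simple elitist evolutionary loop over scalar genomes."""
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(gens):
        # Keep the best half, then fill the population with mutated copies.
        pop.sort(key=lambda g: fitness(g, player_skill), reverse=True)
        parents = pop[: pop_size // 2]
        children = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                    for p in parents]
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, player_skill))

# On-line stage stand-in: suppose the behavior model estimated the player's
# skill at 0.7 during the last game; the evolved VP should end up near it.
best = evolve_vp(player_skill=0.7)
print(round(best, 2))
```

In the real setting the genome would encode a full VP behavior and the fitness would come from simulated games against the learned player model, but the alternation of on-line modeling and off-line evolution follows the same cycle.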
 