(i.e., 1 if the virtual player wins and 2 otherwise). The fitness function was then defined as fitness(x) = 10000 ∗ (A − B) / (C ∗ D); the higher the fitness value, the better the strategy. This fitness was coded to evolve towards aggressive solutions.
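As a minimal sketch, the fitness computation above can be expressed as a single function. Note that A, B, C, and D stand for game statistics defined earlier in the paper and not restated in this excerpt, so the parameter names and signature here are assumptions:

```python
def fitness(a: float, b: float, c: float, d: float) -> float:
    """Fitness of a candidate strategy: 10000 * (A - B) / (C * D).

    A, B, C, D are game statistics defined earlier in the paper
    (not restated in this excerpt). Higher values correspond to
    better, more aggressive strategies.
    """
    return 10000 * (a - b) / (c * d)
```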
4 Experimental Analysis
The experiments were performed using two algorithms: our initial expert system
(RBP), and the algorithm PMEA (i.e., player modeling + EA) presented in
Algorithm 1. As for PMEA, the EA uses popsize = 50, pX = 0.7, pM = 0.01, and MaxGenerations = 125; mutation is executed as usual at the gene level
by changing an action to any other action randomly chosen. Three different scenarios were created for experimentation: (1) a map with size 50 × 50 grids, 48 agents in the VP army, and 32 soldiers in the human player (HP) team; (2) a map 54 × 46, with 43 VP soldiers and 43 HP units; and (3) a map 50 × 28, with 48 VP soldiers and 53 HP units. Algorithm 1 was executed for a value of ℘ = 20 (i.e., 20 different games were sequentially played), and the RBP was also executed 20 times; Table 1 shows the results obtained.
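The gene-level mutation just described can be sketched as follows. The ACTIONS list is a hypothetical action alphabet, since the paper's actual action set is not shown in this excerpt; only the mutation scheme (each gene replaced, with probability pM, by a different randomly chosen action) follows the text:

```python
import random

# Hypothetical action alphabet; the paper's actual action set
# is not listed in this excerpt.
ACTIONS = ["attack", "defend", "move", "wait"]

def mutate(chromosome, p_m=0.01, rng=random):
    """Gene-level mutation: each gene is replaced, with probability
    p_m, by any *other* action chosen at random (pM = 0.01 in the
    paper's experiments)."""
    mutated = []
    for gene in chromosome:
        if rng.random() < p_m:
            gene = rng.choice([a for a in ACTIONS if a != gene])
        mutated.append(gene)
    return mutated
```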
Table 1. Results: VPwin = number of virtual player's victories, HPwin = number of human player's victories, HPdeath = average number of deaths in the HP army, VPdeath = average number of deaths in the VP army, mov = average number of movements, and time = average time (minutes) dedicated per game

map        algorithm  VPwin  HPwin  HPdeath  VPdeath   mov   time
50 × 50    RBP          4     16       6        7      5345  3.56
           PMEA         6     14       7        7      4866  3.20
54 × 46    RBP          9     11       4        3      7185  4.79
           PMEA         7     13       6        7      5685  3.80
50 × 28    RBP          3     17       3        2      6946  4.63
           PMEA         6     14       7        6      6056  3.78
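The win counts in Table 1 can be tallied with a small helper (not part of the original paper) to confirm that PMEA obtains more victories than RBP in two of the three maps:

```python
# Table 1 win counts: (map, algorithm) -> (VP wins, HP wins), 20 games each.
results = {
    ("50x50", "RBP"): (4, 16),
    ("50x50", "PMEA"): (6, 14),
    ("54x46", "RBP"): (9, 11),
    ("54x46", "PMEA"): (7, 13),
    ("50x28", "RBP"): (3, 17),
    ("50x28", "PMEA"): (6, 14),
}

def vp_win_rate(map_name: str, algorithm: str) -> float:
    """Fraction of the 20 games won by the virtual player."""
    wins, losses = results[(map_name, algorithm)]
    return wins / (wins + losses)
```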
Even though PMEA performs better than RBP in two of the three scenarios, note that no significant differences are shown; this is, however, an expected result, as we have considered just one player, which means that the player models obtained between two consecutive games are likely similar, and thus so are their corresponding virtual players. In any case, this demonstrates that our approach is feasible, as it produces virtual players comparable to - and sometimes better than - specific, specialized pre-programmed scripts.
5 Conclusions
We have described an algorithm to automatically design strategies exhibiting emergent behaviors that adapt to the user's skills in a one-player war real-time