Organizer to replace GainGold. The lesson learned from this experiment was that for
a simple numeric change, an invalid TMKL2 Organizer can be located and adapted
using an external planning tool.
17.4.5.4 Experiment #4
The first two experiments described above were off-line in the sense that the adapta-
tions were made after a game was completed. The third experiment did not involve
running the game at all before adapting Alice. Experiment #4 is an on-line adaptation
in that Alice is changed while she is running. Moreover, her opponent, Barbra, is also
adapted during the game.
In this experiment, both Alice and Barbra were reconfigured into two parts, one
allopoietic and the other autopoietic. These terms are borrowed from the literature on
self-organizing systems and denote, respectively, the part of a system that is changed
and the part that does the changing [48].
Alice's allopoietic part used a parameter, alpha, to determine how Alice should
divide her resources between obtaining gold and producing warriors. The autopoietic
part of Alice adapted the allopoietic part by adjusting alpha so that Alice produced gold
only when she had sufficient defensive capability to fend off Barbra's visible attackers.
Similarly, Barbra's allopoietic part used a parameter, beta, to determine the number of
warriors with which to attack Alice's city. Initially beta was set to 0. The autopoietic
part of Barbra adapted the allopoietic part by increasing the number of warriors with
which Barbra attacked Alice after every failed attack.
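The two adaptation rules described above can be sketched as follows. This is a minimal illustration, not the TMKL2 representation; the class and parameter names (AliceAutopoietic, defenders, visible_attackers, last_attack_failed) are assumptions introduced here for clarity.

```python
# Illustrative sketch of the two parameter-adaptation rules.
# Class and argument names are assumptions, not part of TMKL2.

class AliceAutopoietic:
    """Adjusts alpha: produce gold only when defense is adequate."""

    def adjust(self, alpha, defenders, visible_attackers):
        # Shift resources fully to gold (alpha = 1.0) only if Alice's
        # defenders can fend off Barbra's visible attackers; otherwise
        # shift resources to producing warriors (alpha = 0.0).
        return 1.0 if defenders >= visible_attackers else 0.0


class BarbraAutopoietic:
    """Adjusts beta: escalate attack size after each failed attack."""

    def adjust(self, beta, last_attack_failed):
        # beta starts at 0; add a warrior after every failed attack.
        return beta + 1 if last_attack_failed else beta
```

Run against each other, rules of this shape produce exactly the escalation dynamic described in the experiment: each failed attack by Barbra raises beta, which in turn forces Alice to divert resources from gold to warriors.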
For both agents, the autopoietic part was itself a (meta-) agent. In particular, the
meta-agent's Environment consisted of a description of the allopoietic part, including
Goals, Mechanisms and (allopoietic) Environment. By monitoring game status, the
meta-agent could make appropriate adjustments to the base agent's parameter by
executing (meta) Operations.
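The arrangement of meta-agent over base agent can be pictured as follows; this is a hedged sketch assuming a simple polling loop, and the MetaAgent class, the dictionary-based agent description, and the game-status keys are illustrative assumptions rather than the actual TMKL2 machinery.

```python
# Sketch of a meta-agent whose Environment is a description of the
# base (allopoietic) agent. All names here are illustrative assumptions.

class MetaAgent:
    def __init__(self, base_agent_description):
        # The meta-agent's Environment: a description of the base
        # agent, including its Goals, Mechanisms, and (allopoietic)
        # Environment. Here it is modeled as a plain dictionary.
        self.environment = base_agent_description

    def step(self, game_status):
        # A (meta) Operation: monitor game status and write an
        # adjusted parameter back into the base agent's description.
        if game_status.get("last_attack_failed"):
            self.environment["beta"] += 1
        return self.environment["beta"]
```

The design point this sketch captures is that the meta-agent never acts in the game directly; its only effect is to rewrite the parameter inside the base agent's description, which the base agent then uses on its next turn.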
Running Alice against Barbra resulted in the agents engaging in an arms race.
Eventually Alice was able to defeat Barbra. In winning, Alice collected 186 gold
units and finished with 3 live warriors without losing a battle, while Barbra lost
6 warriors. Barbra adapted herself 4 times and Alice adapted herself 6 times. The
lesson learned was that TMKL2 models allow simple real-time adaptations by using
(meta) Operations to control an agent's strategy.
17.5 Related Work
In this section we briefly relate this work to similar research on agent modeling,
meta-reasoning, game playing, design thinking, and computational creativity.