years and killed one of Frank's powerful attacking units. Because some of her resources had been allocated to defense, she fared worse in gold acquisition, acquiring only 147 units. The lesson learned was that compensating for a well-understood limitation can be accomplished with a simple heuristic alteration of a TMKL2 Model, a small library of patterns, and knowledge of the Environment at the end of the failing game.
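As a rough, hypothetical illustration of that style of repair (not REM's actual mechanism), the sketch below keys a small pattern library on a diagnosed failure and uses end-of-game Environment facts to parameterize a heuristic alteration of the agent's model; the pattern, the model representation, and every name in it are invented for illustration.

```python
# Hypothetical sketch of pattern-driven model repair; the model,
# pattern library, and Environment representations are stand-ins for
# REM's actual TMKL2 data structures.

def repair_model(model, failure, environment, patterns):
    """Apply the first pattern whose trigger matches the diagnosed
    failure, using end-of-game Environment facts as parameters."""
    for pattern in patterns:
        if pattern["trigger"](failure):
            return pattern["alter"](model, environment)
    return model  # no matching pattern: leave the model unchanged

# One illustrative pattern: if the agent's city fell to attackers,
# divert a share of production from gold to defense, scaled by how
# many enemy units were still alive at the end of the failing game.
PATTERNS = [
    {
        "trigger": lambda failure: failure == "city-captured",
        "alter": lambda model, env: {
            **model,
            "defense_share": min(50, 10 * env["enemy_units_alive"]),
        },
    },
]

alice = {"gold_share": 100, "defense_share": 0}
alice_prime = repair_model(alice, "city-captured",
                           {"enemy_units_alive": 3}, PATTERNS)
print(alice_prime["defense_share"])  # -> 30 (percent of production)
```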
17.4.5.2 Experiment #2
Because of Frank's superior firepower, Experiment #1 was an unfair contest for Alice. To explore how Alice would fare versus a similarly equipped opponent, a second experiment was conducted. This experiment involved two naïve agents named Alice and Barbra. Both play the simplified version of Freeciv described in Experiment #1. Barbra's strategy was to focus on producing warriors to attack Alice's city. By so doing, Barbra wins by overwhelming Alice's defenses. Before succumbing, Alice is able to acquire 93 units of gold while living through 1450 years.
The same adaptation process used in Experiment #1 was applied to Alice and resulted in the same Alice' being produced as in Experiment #1. Running Alice' versus Barbra resulted in Alice' winning: Alice' was able to collect 185 units of gold while living through 4700 years. The experiment increased our confidence in the approach used in Experiment #1.
17.4.5.3 Experiment #3
The previous two experiments were examples of retroactive adaptation in which a failure was mitigated. In Experiment #3, proactive adaptation was attempted to take advantage of a slightly altered game rule. In particular, it now takes 189 gold units for Alice to win. Tests were run to see whether Alice's model was still valid after the rule change. REM tested whether each Mechanism's Provides condition satisfies its parent Goal's Makes condition; that is, whether the Mechanism was capable of accomplishing the new Goal. When this test failed, REM located the responsible Mechanism.
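The following sketch illustrates that validity check in simplified form; the flat fact-set representation and set-inclusion test are stand-ins for the logical conditions and entailment test REM actually uses, and the data structures and numbers are illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Heavily simplified rendering of the check described above; in REM,
# Provides/Makes are logical conditions over the model, not the flat
# fact sets used here.

@dataclass
class Mechanism:
    name: str
    provides: frozenset   # facts the Mechanism guarantees on success

@dataclass
class Goal:
    name: str
    makes: frozenset      # facts the Goal requires to hold
    mechanism: Mechanism
    subgoals: List["Goal"] = field(default_factory=list)

def localize(goal: Goal) -> Optional[Mechanism]:
    """Return the first Mechanism whose Provides condition fails to
    satisfy its parent Goal's Makes condition (set inclusion stands
    in for the entailment test REM performs)."""
    if not goal.makes <= goal.mechanism.provides:
        return goal.mechanism
    for sub in goal.subgoals:
        culprit = localize(sub)
        if culprit is not None:
            return culprit
    return None

# After the rule change, winning requires 189 gold, but the old
# GainGold Organizer only guarantees what it used to deliver.
gain_gold = Mechanism("GainGold", frozenset({"gold>=147"}))
win = Goal("WinGame", frozenset({"gold>=189"}), gain_gold)
print(localize(win).name)  # -> GainGold
```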
In this experiment, REM localized Alice's GainGold Organizer. Next, a replacement Organizer was created to achieve the new win condition. To do this, REM used an external planning tool called Graphplan [7]. Graphplan is a mature, publicly available planner.⁴
REM translated the initial game Environment into a Graphplan facts file, amounting to over 10,400 facts. Then all Organizers, Operations, and game rules were translated into a Graphplan operators file. After pruning out operators with no effects, the resulting operators file contained 10 operators. Next, REM ran Graphplan on the facts and operators files. Graphplan was able to generate a three-stage plan capable of accomplishing Alice's top-level Goal. This plan was then translated back into an Organizer.
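The sketch below gives a rough picture of this translate-plan-translate pipeline. The file grammar only approximates Graphplan's STRIPS-like input format (which varies across distributions), and all predicates, operator definitions, and command-line flags shown are assumptions rather than REM's actual translation code.

```python
# Illustrative sketch of emitting Graphplan input files; predicate and
# operator names are hypothetical stand-ins for REM's translation.

def write_facts(initial_facts, goal_facts, path="alice_facts"):
    """Facts file: the initial Environment plus the Goal to achieve."""
    with open(path, "w") as f:
        f.write("(preconds\n")
        for fact in initial_facts:        # e.g. "(owns alice city1)"
            f.write("  %s\n" % fact)
        f.write(")\n(effects\n")
        for fact in goal_facts:           # e.g. "(have-gold g189)"
            f.write("  %s\n" % fact)
        f.write(")\n")

def write_operators(operators, path="alice_ops"):
    """Operators file: one STRIPS-style operator per Organizer,
    Operation, or game rule, pruning those with no effects."""
    with open(path, "w") as f:
        for op in operators:
            if not op["effects"]:         # pruning step from the text
                continue
            f.write("(operator %s\n" % op["name"])
            f.write("  (params %s)\n" % op["params"])
            f.write("  (preconds %s)\n" % op["preconds"])
            f.write("  (effects %s))\n" % op["effects"])

write_facts(["(owns alice city1)"], ["(have-gold g189)"])
write_operators([{"name": "COLLECT-TAX",
                  "params": "(<c> CITY)",
                  "preconds": "(owns alice <c>)",
                  "effects": "(have-gold g189)"}])

# The planner would then be run on the two files; the -o/-f flags
# follow the classic C distribution's usage and are an assumption:
#   graphplan -o alice_ops -f alice_facts
```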
⁴ http://www.cs.cmu.edu/~avrim/graphplan.html