17.4.4 REM
REM [53, 54] is an extensible meta-reasoner that reasons over TMK models of software
agents. REM supports agent self-adaptation because it is capable of monitoring
not only what is happening in the game world, but also the agent's internal state with
respect to the accomplishment of its goals and the methods it used to accomplish
them. Further, it can redesign the agent to adapt it to better accomplish those goals.
When given an agent model and a situation, such as a failed Goal or an altered
Environment, REM produces an updated agent model engineered either to successfully
accomplish the Goal or to take advantage of the new knowledge in the Environment.
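As a rough sketch of this interface (the types and names below are hypothetical and are not REM's actual API), the adaptation step can be pictured as a function from a model and a situation to a revised model:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Goal:
    name: str


@dataclass
class Situation:
    """What triggers adaptation: a failed Goal and/or new Environment knowledge."""
    failed_goal: Optional[Goal] = None
    environment_changes: List[str] = field(default_factory=list)


@dataclass
class AgentModel:
    """Stand-in for a TMK model: the agent's Goals and the Mechanisms used for them."""
    goals: List[Goal]
    mechanisms: Dict[str, object]  # Goal name -> Mechanism description


def adapt(model: AgentModel, situation: Situation) -> AgentModel:
    """Return a revised model engineered to accomplish the failed Goal
    or to exploit the changed Environment (body omitted in this sketch)."""
    raise NotImplementedError
```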
To achieve retrospective adaptation after a failed Goal, REM performs three steps:
localization (determining which of an agent's subGoals and associated Mechanisms
were inadequate to accomplish the agent's overall Goal), transformation (devising an
alternative Goal), and realization (providing/altering a Mechanism to accomplish this
Goal). Localization is accomplished in REM using a heuristic to find a low-level State
in an Organizer such that the State's Provides condition suffices to accomplish the
failing Goal. Further, the detected State must have a failing precondition (Requires
condition). The presumption is that the State had not been reached and that, had it
been reached, the agent would have succeeded. Realization and transformation are
accomplished by matching the failing situation against a library of adaptation plans,
choosing a candidate transformation from the library and applying the result to the
agent's Model to produce a revised Model.
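The localization heuristic and the plan-library lookup can be illustrated with a small, hedged sketch; the data structures and keying scheme below are invented for illustration, whereas REM's actual implementation operates over TMKL2 models in Powerloom:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple


@dataclass
class Goal:
    name: str


@dataclass
class State:
    """A low-level State in an Organizer, with its Requires (precondition)
    and Provides conditions modeled as predicates."""
    name: str
    requires_holds: Callable[[], bool]
    provides_suffices_for: Callable[[Goal], bool]


def localize(states: List[State], failing_goal: Goal) -> Optional[State]:
    """Find a State whose Provides condition would suffice to accomplish the
    failing Goal but whose Requires condition does not hold -- i.e. a State
    the agent presumably never reached, and would have succeeded had it
    been reached."""
    for state in states:
        if state.provides_suffices_for(failing_goal) and not state.requires_holds():
            return state
    return None


def transform_and_realize(state: State,
                          failing_goal: Goal,
                          plan_library: Dict[Tuple[str, str], object]):
    """Match the failing situation against a library of adaptation plans and
    return a candidate transformation to apply to the agent's Model."""
    return plan_library.get((state.name, failing_goal.name))
```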
REM sits atop the Powerloom knowledge representation and reasoning system
[47] that is available publicly.3 Powerloom supports classification, deduction and
truth maintenance. TMKL2 logical expressions are easily mapped to/from Powerloom,
and REM algorithms are easily expressed in Powerloom's variant of first-order
logic.
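To picture that mapping, a TMKL2 condition can be rendered as the kind of KIF-style s-expression Powerloom consumes. The relation and variable names below are invented for illustration and are not taken from TMKL2 or REM:

```python
from typing import Tuple, Union

# A logical expression as nested tuples of operator/relation names and arguments.
Expr = Union[str, Tuple["Expr", ...]]


def to_sexpr(expr: Expr) -> str:
    """Render a nested-tuple expression as an s-expression string."""
    if isinstance(expr, str):
        return expr
    return "(" + " ".join(to_sexpr(e) for e in expr) + ")"


# E.g. a Requires condition: the city bound to ?c has at least 10
# production points (hypothetical relation names).
requires = ("and",
            ("city", "?c"),
            (">=", ("production-points", "?c"), "10"))

print(to_sexpr(requires))
# -> (and (city ?c) (>= (production-points ?c) 10))
```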
17.4.5 Meta-Reasoning for Agent Self-adaptation
To validate our approach to meta-reasoning for self-adaptation in Freeciv-playing
software agents, we have conducted several experiments, each involving variants of
the Alice agent depicted in Fig. 17.2. In the experiments, Alice plays a simplified
variant of Freeciv against other agents. In particular, the simplified game consists of
two agents. Each agent controls a civilization and is responsible for its government,
economy, citizen morale, and military forces. Each civilization has one city, citizens
in that city, and a number of warriors. All cities, civilians, and warriors are located
on one large continent. Each game tile yields a quantity of food, production, and
trade points each turn of the game. Food points feed a city's civilians; production
points are used to support existing warriors or produce new warriors. Trade points are
distributed among luxury, tax, and science resources. Initially both players start out
3 http://www.isi.edu/isd/LOOM/PowerLoom/.
 