Goal-Oriented Action Planning
Utility theory is a great technique for deciding what an agent wants to do, but it's not as good for deciding how an agent should perform this action. Goal-Oriented Action Planning, or GOAP, is a popular methodology that helps solve this particular problem. It centers on the idea of goals, which are desirable world states that the agent wants to achieve. It achieves these states through actions, much like you saw previously. An example of a goal might be to kill the player. An action that satisfies this goal could be attacking the player. An agent often has multiple goals, although only one is typically active at any given time. The AI update is then split into two stages: The first selects the most relevant goal, and the second attempts to solve that goal by choosing an action or sequence of actions.
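The two-stage update described here can be sketched in a few lines. Everything in this example is an illustrative assumption, not part of any particular engine: the Goal class, the relevance functions, and the dictionary-based world state.

```python
# Sketch of a two-stage GOAP-style update: pick the most relevant goal
# first; a planner (not shown) would then solve the chosen goal.

class Goal:
    def __init__(self, name, relevance_fn):
        self.name = name
        self.relevance = relevance_fn  # scores this goal for a world state

def select_goal(goals, world_state):
    """Stage one: pick the goal with the highest relevance score."""
    return max(goals, key=lambda g: g.relevance(world_state))

# Hypothetical goals; relevance here is a simple utility-style score.
goals = [
    Goal("KillPlayer", lambda ws: 0.9 if ws["player_visible"] else 0.1),
    Goal("Patrol",     lambda ws: 0.5),
]

active = select_goal(goals, {"player_visible": True})
print(active.name)  # KillPlayer
```

Stage one could just as easily use a decision tree; the planner in stage two only cares about which goal comes out.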
This first step of choosing a goal can be elegantly solved by applying utility theory, decision trees, or any other method you've seen thus far in this chapter. The second part is often a bit trickier. For example, let's say you've decided that the goal you want to solve is eating a meal. Unfortunately, you don't have any food, so you need to formulate a plan, or a series of actions, that will get you to the goal state of eating food. This could involve finding your car keys, driving to the store, purchasing food, and then returning to cook said food.
The idea behind GOAP is that each action has a set of conditions it can satisfy, as well as a set of preconditions that must be true before the action can be performed. For example, eating food will satisfy my goal of eating, but it has the precondition of requiring cooked food. The action of cooking food satisfies this precondition, but it has its own precondition of having a raw food object. When a final action is chosen, the algorithm walks backward from the goal action through the preconditions, searching for actions that will satisfy each one. Finally, at the end of the search, you're left with an action sequence that can be executed to achieve the original goal. GOAP is extremely flexible: as long as a sequence of actions exists to solve a goal, the agent will find a way.
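The backward walk can be sketched over the meal example. This is a deliberately minimal sketch, not a production planner: the dictionary-based world state, the action names, and the assumption that every action succeeds are all simplifications introduced for illustration.

```python
# Hedged sketch: actions as (preconditions, effects) over a symbolic
# world state, with a simple backward walk from the goal.

ACTIONS = {
    "EatFood":      ({"has_cooked_food": True}, {"hungry": False}),
    "CookFood":     ({"has_raw_food": True},    {"has_cooked_food": True}),
    "BuyFood":      ({"at_store": True},        {"has_raw_food": True}),
    "DriveToStore": ({"has_keys": True},        {"at_store": True}),
    "FindKeys":     ({},                        {"has_keys": True}),
}

def plan(goal, state):
    """Walk backward from the goal, satisfying preconditions with actions."""
    steps = []

    def satisfy(condition):
        key, value = condition
        if state.get(key) == value:
            return  # already true in the world
        for name, (pre, eff) in ACTIONS.items():
            if eff.get(key) == value:
                for c in pre.items():
                    satisfy(c)        # recurse on this action's preconditions
                steps.append(name)
                state.update(eff)     # assume the action succeeds
                return
        raise RuntimeError(f"no action satisfies {condition}")

    for c in goal.items():
        satisfy(c)
    return steps

print(plan({"hungry": False}, {"hungry": True}))
# ['FindKeys', 'DriveToStore', 'BuyFood', 'CookFood', 'EatFood']
```

The recursion bottoms out at actions with no preconditions (or preconditions already true in the world), which is exactly where an executable plan must begin.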
One problem with GOAP (and most forms of advanced AI) is world representation.
This is very much the same problem we had when talking about utility theory. How
can you represent the world in a compact manner? Goals are often expressed as
desirable world states. I desire a world state in which my hunger level is fully satis-
fied. The teapot agent desires a world state in which the player is dead. This world
state then needs to be generated, complete with preconditions and effects.
The other problem is how to search through the action space to find the desirable
world state. Fortunately, there are a number of search algorithms that can help you.
The best one I've heard was Jeff Orkin's talk at the Game Developer's Conference in 2006, where he proposed using the A* algorithm, a common search algorithm used
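In the spirit of that search-based approach, planning can be treated as a graph search over world states. The sketch below simplifies to breadth-first search with unit action costs rather than full A*, and the facts and actions are invented for illustration; a real planner would add a heuristic and per-action costs.

```python
# Hedged sketch of planning as graph search over world states,
# simplified to BFS with unit action costs. States are frozensets of
# true facts; the actions here are illustrative assumptions.

from collections import deque

ACTIONS = [
    # (name, preconditions, effects) over sets of facts
    ("DrawWeapon", {"has_weapon"},               {"weapon_drawn"}),
    ("Approach",   set(),                        {"in_range"}),
    ("Attack",     {"weapon_drawn", "in_range"}, {"player_dead"}),
]

def search(start, goal):
    """BFS from the start state to any state containing the goal facts."""
    start = frozenset(start)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path  # all goal facts are true in this state
        for name, pre, eff in ACTIONS:
            if pre <= state:
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None  # no action sequence reaches the goal

print(search({"has_weapon"}, {"player_dead"}))
# ['DrawWeapon', 'Approach', 'Attack']
```

Because BFS explores by plan length, the first plan found is also a shortest one; swapping in A* with real costs and a heuristic changes the frontier ordering, not the overall shape of the search.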
 
 