Dilemma, where thinking only about our own choice rather than taking into con-
sideration our partner's mindset led us to an acceptable yet not optimal solution.
Only when we considered both of the inputs and results did we arrive at the best
possible outcome.
Once we decided what it was we were going to decide, we identified the individ-
ual components of the whole decision and dealt with each portion individually.
Establishing compartmentalized confidence in each of those steps as we went along
freed us up to concentrate only on the next step. For example, we were confident
that the formulas for weapon accuracy were correct. We were also confident that
the formulas for weapon damage were correct. Feeling good about both of those as
individual functions, we felt comfortable combining the two into damage per second.
Trusting that damage per second was an accurate measure of strength, we felt quite
secure in comparing our damage-dealing power with that of the enemy. We continued
the process of adding more layers, with each layer concerned only with the one
immediately before it. In the end, we arrived at our final decision of who to attack
and with what.
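To make the layering concrete, here is a minimal sketch of how those compartmentalized steps might stack in C++. The weapon and Dude data, the accuracy falloff, and every function name here are illustrative assumptions rather than the book's actual implementation; the point is only that each layer consumes the layer immediately before it.

// Layered target-selection sketch (hypothetical names and formulas).
#include <cmath>
#include <cstdio>
#include <vector>

struct Weapon { const char* name; double baseDamage, baseAccuracy, shotsPerSecond; };
struct Dude   { const char* name; double distance; };

// Layer 1: accuracy of a weapon against a Dude at his current distance.
double HitChance(const Weapon& w, const Dude& d) {
    return w.baseAccuracy * std::exp(-d.distance / 50.0);  // assumed distance falloff
}

// Layer 2: expected damage of a single shot, built on layer 1.
double ExpectedDamage(const Weapon& w, const Dude& d) {
    return w.baseDamage * HitChance(w, d);
}

// Layer 3: combine expected damage and rate of fire into damage per second.
double DamagePerSecond(const Weapon& w, const Dude& d) {
    return ExpectedDamage(w, d) * w.shotsPerSecond;
}

// Layer 4: compare our damage-dealing power across all weapon/Dude pairs
// and arrive at the final decision of who to attack and with what.
int main() {
    std::vector<Weapon> weapons = { {"pistol", 10, 0.9, 3.0}, {"rocket", 80, 0.6, 0.5} };
    std::vector<Dude>   dudes   = { {"Dude 1", 20.0}, {"Dude 2", 55.0} };

    double bestScore = -1.0;
    const Weapon* bestW = nullptr;
    const Dude*   bestD = nullptr;
    for (const Weapon& w : weapons)
        for (const Dude& d : dudes) {
            double s = DamagePerSecond(w, d);
            if (s > bestScore) { bestScore = s; bestW = &w; bestD = &d; }
        }

    std::printf("Attack %s with the %s (%.1f DPS)\n", bestD->name, bestW->name, bestScore);
    return 0;
}

Because each layer trusts only the layer directly beneath it, changing the accuracy formula later would not require revisiting the comparison code at the top.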
One of the payoffs of the time we spent in developing this model is that our
agent is now highly dynamic. It responds well to changes in its environment. As
Dudes move, it adapts. Adaptation and change in AI agents are among the major steps
in making AI seem more “alive” than mechanical and scripted.
There's Always Something Bigger
That doesn't have to be the end, however. We could have continued to combine
this result with something else. For example, we could introduce the idea of other
actions that are not related to attacking: fleeing, hiding, surrendering, grabbing a
health pack or a new weapon, running to a detonator of our own, or even pausing
to take a photo to memorialize the occasion.
To incorporate these other possibilities, we would build a process similar to the
one for attacking and, as we have done a few times already, define a connection
algorithm between them. Our process above then becomes part of a bigger picture.
Rather than simply asking, “Who do we kill and with what?” a higher-level component
would be asking, “If we decide to attack, who would we kill and with what?”
There is a subtle difference between those two statements. The former is a final
decision; the latter is a suggestion.
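One way to picture that difference, using the same kind of hypothetical names as the earlier sketch, is to have each candidate behavior return a scored suggestion describing what it would do, while only the higher-level component actually commits to an action:

// Suggestion-versus-decision sketch (illustrative names, not the book's code).
#include <cstdio>
#include <string>
#include <vector>

struct Suggestion { std::string description; double score; };

// The attack component runs the whole layered model above and reports who it
// *would* kill and with what, but it does not act on that result itself.
Suggestion SuggestAttack()    { return { "attack Dude 2 with the rocket", 0.72 }; }
Suggestion SuggestFlee()      { return { "flee toward the exit",          0.15 }; }
Suggestion SuggestGetHealth() { return { "grab the nearby health pack",   0.40 }; }

int main() {
    // Gather the suggestions; each one is a proposal, not a commitment.
    std::vector<Suggestion> options = { SuggestAttack(), SuggestFlee(), SuggestGetHealth() };

    // Only this higher-level component makes the final decision.
    const Suggestion* best = &options.front();
    for (const Suggestion& s : options)
        if (s.score > best->score) best = &s;

    std::printf("Decision: %s (score %.2f)\n", best->description.c_str(), best->score);
    return 0;
}

How those scores should be produced is exactly the question the next paragraph turns to.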
The difference becomes clearer if we imagine processing the decision from the
top down, instead. Imagine that our first decision was between the nebulous con-
cepts of attack, flee, hide, get health, get weapon, and take memorial photo. How can
we decide between them without knowing more about their relative merits? Sure,
we can decide that get health is a high priority if we are low on health, but if there is