list to provide either the single best option or the top n candidates can be a major
performance hit. Now, by considering all options, we only need to pass through the
vector once to weight them and build the edges, and then select.
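As a rough illustration of that single pass, here is a minimal sketch of accumulating each option's weight into a running total of "edges." The names (ScoredOption, BuildEdges) are illustrative, not from the text, and the real structures would hold whatever behavior data the agent needs.

```cpp
#include <vector>

// Hypothetical per-behavior record: a weight (score) and an identifier.
struct ScoredOption {
    float weight;    // score assigned to this behavior
    int   actionId;  // which behavior this entry represents
};

// One pass over the scored options: edges[i] holds the running total of
// weights up to and including option i, so the last entry is the total weight.
std::vector<float> BuildEdges(const std::vector<ScoredOption>& options) {
    std::vector<float> edges;
    edges.reserve(options.size());
    float runningTotal = 0.0f;
    for (const ScoredOption& opt : options) {
        runningTotal += opt.weight;
        edges.push_back(runningTotal);
    }
    return edges;
}
```

Selecting an option is then just a matter of rolling a random number against those edges, which is picked up again below.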
Remind Me Why We Did This?
Spreading the behavior possibilities around over more options may not seem im-
portant when considering one agent acting one time. When we think back to our
bank example, however, we remember that when we use one algorithm to simulta-
neously drive many agents, we run the risk of having those agents exhibit identical
behaviors. If the Dudes faced 100 agents whose weaponry and distances from the
Dudes were equal, they would see a large variety of reactions from those agents.
Most of them (approximately 82) would select from those top 12 "reasonable"
behaviors—between 5 and 10 agents selecting each of the 12. About 17 of them
would select from the 8 "not quite as good" behaviors—about 2 agents per
behavior. And one of our 100 agents is going to look… ahem… like he's only
burning about 15 watts, if you know what I mean.
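A quick simulation makes the split concrete. This is not code from the text: std::discrete_distribution stands in for the edge-building-and-search mechanism for brevity, and the weights are invented to roughly reproduce the 82/17/1 split described above.

```cpp
#include <cstdio>
#include <random>
#include <vector>

int main() {
    // Hypothetical weights for 32 options: 12 "reasonable" behaviors,
    // 8 "not quite as good," and 12 long shots.
    std::vector<double> weights;
    for (int i = 0; i < 12; ++i) weights.push_back(7.0);
    for (int i = 0; i < 8;  ++i) weights.push_back(2.0);
    for (int i = 0; i < 12; ++i) weights.push_back(0.1);

    std::mt19937 rng(1234);  // fixed seed so the run is repeatable
    std::discrete_distribution<int> pick(weights.begin(), weights.end());

    // 100 agents each make one weighted random selection.
    std::vector<int> tally(weights.size(), 0);
    for (int agent = 0; agent < 100; ++agent)
        ++tally[pick(rng)];

    for (std::size_t i = 0; i < tally.size(); ++i)
        std::printf("option %2zu chosen by %d agents\n", i, tally[i]);
}
```

Most runs land close to the proportions above: the bulk of the agents spread over the high-weight options, a handful take the middling ones, and every so often a single agent picks a long shot.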
This may be alarming to some people. "Why would we want any of our agents
to do something that looks dumb?!" The simple answer is… because real people
sometimes do things that look dumb. The more involved answer is based more on the
idea that we are facing many, many agents. In the previous example, having 1 of
the 100 agents doing something odd provides a function of variety rather than one
of accuracy. He is the person who guessed the impossible solution of greater than
66 in the Guess Two-Thirds the Average Game. Our focus is on the people who
guessed reasonably despite the fact that there are unreasonable guessers in the mix.
Making some of the 100 agents more dangerous than others also provides us with
an "interesting choice." As a player, we would have to determine which of the enemies
is more of a threat to us rather than simply take the “kill 'em all!� approach. This
aspect can make the encounter more engaging, interesting, and fun.
Similarly, one agent running the algorithm with the same inputs many times in
succession can produce repetitive behaviors. By stepping away from the narrow view
that we should only consider the best-scoring action, we open up far more varied
behaviors. More importantly, because we've already done all the work to score each
of the potential actions anyway, the extra few steps above are only a small addition.
Expanding Our Horizons
One last aspect of this approach is worth mentioning again. In the previous example,
we had 32 possible choices. By approaching this problem in such an open-ended
fashion and by utilizing the very scalable binary search method, we do not have to
be daunted by including scores—or even hundreds—of possible choices.
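For completeness, here is a sketch of that selection step, assuming the cumulative edges built by the hypothetical BuildEdges() helper from the earlier sketch. std::upper_bound performs the O(log n) binary search, which is why growing from 32 options to hundreds barely changes the cost.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Roll once in [0, totalWeight) and binary-search the cumulative edges.
// std::upper_bound returns the first edge strictly greater than the roll,
// whose index identifies the chosen option. Assumes edges is non-empty and
// its last entry (the total weight) is greater than zero.
int SelectIndex(const std::vector<float>& edges, std::mt19937& rng) {
    std::uniform_real_distribution<float> roll(0.0f, edges.back());
    auto it = std::upper_bound(edges.begin(), edges.end(), roll(rng));
    if (it == edges.end()) --it;  // guard against floating-point edge cases
    return static_cast<int>(it - edges.begin());
}
```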