Game Development Reference
Tester K ran 100 tests and found 3 defects. These represent 17.5% of the test total
and about 9% of the defect total. K has a 26.5 rating.
Tester Z ran 169 tests, which is about 29.5% of the 570 total. Z found 9 defects,
which is 26.5% of that total. Z's total rating is 56.
Tester Z has earned the title of “Best Tester.”
When you have someone on your team who keeps winning these awards, take her to lunch and
find out what she is doing so you can win some too!
Be careful to use this system for good and not for evil. Running more tests or claiming credit for new defects should not come at the expense of other people or the good of the overall project. You could add factors to give more weight to higher-severity defects, discouraging testers from spending all their time chasing and reporting low-severity defects that won't contribute as much to the game as a few very important ones.
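A severity-weighted variant of the defect credit could be sketched like this; the weight values are illustrative assumptions, not figures from the text:

```python
# Hypothetical weights: severity 1 is the most severe. Tune these to taste.
SEVERITY_WEIGHTS = {1: 4.0, 2: 2.0, 3: 1.0, 4: 0.5}

def weighted_defect_credit(severity_codes):
    # severity_codes: one severity code per defect a tester reported.
    return sum(SEVERITY_WEIGHTS[s] for s in severity_codes)

# Two severity-1 defects outweigh five severity-4 defects,
# so chasing only trivial bugs no longer pays off.
high_value = weighted_defect_credit([1, 1])        # 8.0
low_value = weighted_defect_credit([4, 4, 4, 4, 4])  # 2.5
```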
Use this system to encourage and exhibit positive test behaviors. Remind your team
(and yourself!) that some time spent automating tests could have a lot of payback in
terms of test execution. Likewise, spending a little time up front to design your tests
before you run off and start banging on the game controller will probably lead you to
more defects. You will learn more about these strategies and techniques as you proceed to Parts IV and V of this book.
This chapter introduced you to a number of metrics you can collect to track and
improve testing results. Each metric from this chapter is listed here, along with the raw
data you need to collect for each in parentheses:
Test Progress Chart (# of tests completed by team each day, # of tests required)
Tests Completed/Days of Effort (# of tests completed, # of days of test effort for each tester)
Test Participation (# of days of effort for each tester, # of days each tester
assigned to test)
Test Effectiveness (# of defects, # of tests: for each release and/or tester)
Defect Severity Profile (# of defects of each severity for each release)