11.2 BRIEF HISTORY OF AUTOMATED TESTING TOOLS FOR SOFTWARE
Three hundred test tools sound like a lot of tools. Where did they come from? Is the
market expanding? Will there be 500 tools next year? A brief history of automated
testing tools may ease your anxiety about choosing among so many alternatives.
Robust, industrial strength testing tools began to appear in the software market-
place around 1980. The tools were simple, single-purpose products focused primar-
ily on functional testing activities. Many of these activities were a useful extension
of program code debugging. As these functional testing tools gained popularity,
management tools to organize and maintain tool data, command sets, execution
schedule, results, and reports began to appear in the market.
The combined use of a test management tool and a functional test tool proved to
be much more effective than either tool used separately. The main drawback to such
combined tool use was the awkwardness of the interfaces between the tools. Around
1990, test tool vendors addressed this combined tool awkwardness by marketing
tool suites that offered tightly interfaced test management tools with functional test
tools. Half of these tool suite offerings were written from scratch, leveraging prior
user experience with the individual tools. The other half came from company
acquisitions, with focused integration of tools that already worked well together.
Around 1995, a new type of automated test tool began to emerge in the market-
place. This new tool was designed to measure software performance as we defined
it in Chapter 9: response time and throughput. As with the functional test tools, the
first performance test tools provided only basic performance measurements on a few
computing platforms for simple workloads. As the tools matured, their capabilities
expanded to provide a variety of performance measurements on a variety of com-
puting platforms for complex workloads. Unlike the functional test tool evolution,
performance test tools appeared with management and reporting capabilities from
the beginning. At a cursory level, one would expect the new performance tools to
leverage the management and reporting tool designs already field proven for the
functional test tools; however, performance test tool execution and results are not
based on pass/fail like functional test tools. With functional test tools, there is an
automated comparison of expected versus actual data or screen contents or report
contents. Either the results compare exactly (pass) or they do not (fail). Performance
test tools simply report the speed with which the instrumented code responds. The
goal of performance testing is usually expressed in a slowest response time or a re-
sponse time range that must be manually compared with the tool-reported response
times. The results can either be clear-cut such as “the software function responds
within the response time limit by 2 seconds or not so clear-cut as “the software func-
tion responds just 0.2 seconds slower than the limit that is still acceptable.”
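To make this contrast concrete, here is a minimal sketch in Python. It is hypothetical and not drawn from any particular tool; the function names, the 4-second limit, and the 0.5-second tolerance are illustrative assumptions. It shows a functional check resolving to a strict pass/fail, while a performance check only reports measured time against a stated limit and leaves borderline cases to judgment.

# Hypothetical illustration of the contrast described above: functional
# pass/fail comparison versus performance measurement against a limit.

def functional_check(expected, actual):
    """Functional test tools compare expected versus actual content exactly:
    the result is either pass or fail."""
    return "pass" if expected == actual else "fail"

def performance_check(measured_seconds, limit_seconds, tolerance_seconds=0.0):
    """Performance test tools report the measured response time; whether it
    is acceptable is a judgment against a stated limit, possibly within
    some tolerance (an assumed parameter here, not a tool feature)."""
    if measured_seconds <= limit_seconds:
        return ("clear-cut: responds "
                f"{limit_seconds - measured_seconds:.1f} s under the limit")
    if measured_seconds <= limit_seconds + tolerance_seconds:
        return ("not so clear-cut: "
                f"{measured_seconds - limit_seconds:.1f} s over the limit, "
                "may still be acceptable")
    return "over the limit"

# Usage mirroring the two cases in the text (4-second limit assumed):
print(functional_check("ORDER CONFIRMED", "ORDER CONFIRMED"))      # pass
print(performance_check(measured_seconds=2.0, limit_seconds=4.0))  # clear-cut
print(performance_check(measured_seconds=4.2, limit_seconds=4.0,
                        tolerance_seconds=0.5))                    # not so clear-cut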
Around 2000, tool vendors began aggregating their tools into more compre-
hensive tool suites, providing test management, functional testing, and performance
testing in the same tool suite. The indisputable advantages of one-stop shopping for
an integrated tool suite caused the automated test tool market to become dominated