10.1.1 Objective of Testing
Testing has a distinct and definitive role in the software development process.
It closes the gap between inception and delivery by verifying, systematically
and reproducibly, that the design and implementation are not flawed. Only in the
most trivial systems can the chance of failure be eliminated entirely; instead,
testing aims to minimize the amount of downtime a client experiences.
Testing should occur at all levels internally. As systems are designed and
implemented, they should be checked and validated. Likewise, as subsystems are
assembled, they should be tested for compatibility. The testing phase itself,
however, revolves around the diagnosis and isolation of bugs.
The testing process can be divided into three phases (Hetzel and Hetzel 1991):
planning, acquisition, and execution and evaluation. The planning phase produces
a description of what the tester is to test and how to test it. During the
acquisition phase, the required testing software is built, data sets are
defined and collected, and detailed test scripts are written. During the execution
and evaluation phase, the test scripts are run and their results
are evaluated to determine whether the product passed the test.
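The three phases can be illustrated with a minimal sketch: the test scripts pair inputs with expected results (the product of the acquisition phase), and execution and evaluation reduce to running each script and comparing its outcome. All names here are illustrative, not taken from the text.

```python
# Minimal sketch of the execution-and-evaluation phase. Each "test
# script" is a (name, arguments, expected result) triple; evaluation
# is a simple pass/fail comparison against the expected result.

def run_test_scripts(function_under_test, scripts):
    """Execute each test script and evaluate whether it passed."""
    results = []
    for name, args, expected in scripts:
        actual = function_under_test(*args)
        results.append((name, actual == expected))
    return results

# Hypothetical product function and its test scripts (the "data sets"
# defined during acquisition).
def add(a, b):
    return a + b

scripts = [
    ("adds positives", (2, 3), 5),
    ("adds negatives", (-1, -1), -2),
]

for name, passed in run_test_scripts(add, scripts):
    print(name, "PASS" if passed else "FAIL")
```

In practice a test framework plays the role of `run_test_scripts`, but the planning/acquisition/execution division is the same.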
10.1.2 Testing Concepts and Theory
The major output of the planning phase is a set of detailed test plans. In a project
whose functional requirements are specified by use cases, a test plan should be
written for each use case. This approach has two advantages. First, since many
managers schedule development activity in terms of use cases, the functionality
that becomes available for testing will also be expressed in terms of use cases,
which makes it easy to determine which test plans should be utilized for a specific
build of the system. Second, the approach improves the traceability from the test
cases back into the requirements model (McGregor 1994).
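The first advantage can be sketched concretely: when each test plan is keyed by the use case it covers, selecting the plans for a given build is a simple lookup over the use cases that build implements. The use-case names and plan contents below are hypothetical.

```python
# Illustrative mapping from use cases to their test plans. Keying the
# plans by use case mirrors how development (and thus each build) is
# scheduled, so plan selection follows directly from the build contents.

test_plans = {
    "Withdraw Cash": ["valid PIN", "insufficient funds", "daily limit"],
    "Deposit Check": ["valid check", "unreadable amount"],
    "Transfer Funds": ["same-bank transfer", "external transfer"],
}

def plans_for_build(implemented_use_cases):
    """Return the test plans applicable to a build, keyed by use case."""
    return {uc: test_plans[uc]
            for uc in implemented_use_cases if uc in test_plans}

# A build that implements only two of the three use cases needs only
# those two test plans.
build_1 = ["Withdraw Cash", "Deposit Check"]
print(plans_for_build(build_1))
```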
10.1.2.1 Testing the Requirements Model
Writing the detailed test plans forces a thorough investigation of the requirements
model. A test plan for a use case requires the identification of the underlying
domain objects for that use case. Since an object will typically apply to more than
one use case, inconsistencies in the requirements model can be located. Typical
errors identified in this way include conflicting defaults, inconsistent naming,
incomplete domain definitions, and unanticipated interactions.
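One of these checks, locating conflicting defaults, can be sketched as a comparison of each domain object's attribute defaults as specified in different use cases. The data and function names below are hypothetical, meant only to show the shape of the check.

```python
# Sketch: flag attributes whose default value differs between use cases.
# Each entry is (use case, domain object, attribute, default), as might
# be extracted while writing the per-use-case test plans.
from collections import defaultdict

specs = [
    ("Withdraw Cash", "Account", "overdraft_limit", 0),
    ("Transfer Funds", "Account", "overdraft_limit", 100),  # conflict
    ("Deposit Check", "Account", "currency", "USD"),
]

def conflicting_defaults(specs):
    """Return (object, attribute) pairs given more than one default."""
    seen = defaultdict(set)
    for use_case, obj, attr, default in specs:
        seen[(obj, attr)].add(default)
    return {key: vals for key, vals in seen.items() if len(vals) > 1}

print(conflicting_defaults(specs))
# Flags Account.overdraft_limit, which defaults to 0 in one use case
# and 100 in another.
```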
The individual test cases for a use case are constructed by identifying the
domain objects that cooperate to provide the use case and by identifying the
equivalence classes for each object. The equivalence classes for a domain object can be