a repeatable level of software reliability and stability. Unfortunately for customers, the reliability and stability delivered by these captured corporate processes fell far below that of the earlier systems. It is informed conjecture that the missing ingredient was a comparable software testing process. For unexplained reasons, this new, lower-quality software became acceptable as the industry norm for a large number of computer users. [11]
Testing did not become a recognized formal software process until the 1990s, when the Y2K Sword of Damocles threatened all industries that relied on computer power for their livelihood. Testing was thrust to the forefront of frantic software activities as the savior of the 21st century. Billions of dollars were spent mitigating the possible business disasters caused by the shortcuts programmers had taken for years when coding dates. These shortcuts prevented programs from correctly processing dates back and forth across the January 1, 2000 century mark, the year 2000 or "Y2K" in the vernacular. The authors think it is to the credit of the professional testing community that January 1, 2000 came and went with a collective computer whimper of problems compared with what could have happened without intervention. Thousands of businesses remained whole as the calendar century changed. Although some executives mumbled about the cost of all the Y2K testing, wiser executives recognized how close to disaster they had really come, and how much of their ability to do business in the 21st century they owed to testers and testing processes.
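To make the nature of those date shortcuts concrete, here is a minimal Python sketch (our illustration, not code from any actual Y2K system) of how a two-digit year field breaks simple date arithmetic across the century boundary, and how a common "windowing" remediation restores it. The function names and the pivot value are assumptions chosen for illustration only.

    # Hypothetical sketch of the classic two-digit-year shortcut: storing years
    # as two digits makes "00" (2000) compare as earlier than "99" (1999),
    # so interval calculations across the century boundary go negative.

    def years_between(start_yy: int, end_yy: int) -> int:
        # Pre-Y2K style: silently assumes both years fall in the same century.
        return end_yy - start_yy

    print(years_between(99, 0))   # -99 instead of 1: the kind of defect remediation had to find

    def years_between_windowed(start_yy: int, end_yy: int, pivot: int = 50) -> int:
        # One common remediation: interpret two-digit years through a century window.
        def expand(yy: int) -> int:
            return 1900 + yy if yy >= pivot else 2000 + yy
        return expand(end_yy) - expand(start_yy)

    print(years_between_windowed(99, 0))  # 1, as intended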
1.4.2 The Ten Principles of Good Software Testing
Y2K testing did not start in a vacuum. Several groups of computer professionals
realized the need to develop a full repertoire of software testing techniques by the
mid-1980s. By the 1990s, software testing whitepapers, seminars, and journal articles began to appear. This implies that the groups of the 1980s were able to gain
practical experience with their testing techniques.
Although Y2K testing did represent a very specific kind of defect detection and correction, a surprising number of more general testing techniques were appropriate for retesting the remediated (Y2K-corrected) programs. Thus, the Y2K testing frenzy directed a spotlight on the larger issues, processes, and strategies for full development life cycle software testing. The underlying software testing principles that follow are an amalgam of professional testing experience from the 1980s and 1990s and the Y2K experience.
Principles of good testing
Testing principle 1: Business risk can be reduced by finding defects.
If a good business case has been built for a new software application or product, the majority of the uncontrolled risks can be limited. Indeed, a large part of a good business case is the willingness to chance the risk of failure in a certain market space based on the perceived demand, the competition for the same market, and the timing of the market relative to current financial indicators. With those limits well established, the focus is on the best and most timely way to capture the target