A more recent study conducted in Alberta, Canada [3] identified large variance among projects regarding testing resources in terms of the ratio of developers to testers: about 50% of the studied projects allocated roughly one tester for every two developers (a tester-to-developer ratio of ~50%), whereas 35% invested considerably less personnel in testing (one tester for every five developers, ~20%). Other studies generally support these findings, substantiating the positive correlation between software development process maturity and the degree of investment in software testing - around 35% of the overall investment [4, 5].
Testing tasks have traditionally been classified into three phases [6]: 1) Preparation: plan, design, and construct the tests; 2) Execution; and 3) Verification: verify results against expected outcomes and report. These three stages were often performed sequentially, as in structured software development process models, each demanding a roughly equal resource investment. Recently, however, there has been a tendency to move away from this structured model for several reasons [7]. One reason is the growing popularity of new software development models and techniques, such as agile methods, service-oriented architecture (SOA), and test-driven development (TDD), all of which imply testing processes that deviate from the structured process models. Alongside these changes in development models, test automation has matured and is now more prevalent, potentially easing the execution phase. Finally, verification and validation processes have become more complex due to the growing complexity of the developed applications and the data units involved. For example, growing complexity can be attributed to data being represented simultaneously using various techniques such as databases, XML files, encryption, compression, encoding, and dynamic data location. Consequently, a deeper understanding of the data structure and characteristics is required during testing, as well as more sophisticated tools and processes.
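To make the classical three-phase structure concrete, the following minimal sketch (our own illustration, not drawn from the cited studies) shows how preparation, execution, and verification map onto a single automated test case written with Python's standard unittest module. The DiscountCalculator class and its apply method are hypothetical stand-ins for a system under test.

```python
import unittest


class DiscountCalculator:
    """Toy system under test (hypothetical): applies a percentage discount to a price."""

    def apply(self, price: float, percent: float) -> float:
        return round(price * (1 - percent / 100), 2)


class TestDiscountCalculator(unittest.TestCase):
    def setUp(self):
        # 1) Preparation: plan and construct the test fixture and test data.
        self.calculator = DiscountCalculator()
        self.price = 200.00

    def test_ten_percent_discount(self):
        # 2) Execution: run the system under test with the prepared inputs.
        actual = self.calculator.apply(self.price, 10)
        # 3) Verification: compare the actual result against the expected outcome
        #    and report (the test runner records the pass/fail verdict).
        self.assertAlmostEqual(actual, 180.00)


if __name__ == "__main__":
    unittest.main()
```

In a sequential, structured process these three phases would be performed by distinct activities; in an automated test case they collapse into a single artifact, which is one way automation eases the execution phase noted above.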
In light of the growing complexity of the testing process, Bach [8] advocated exploratory testing, defined as "any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests" (p. 2). This methodology addresses the assertion that complete test preparation is unlikely at the initial phase of the testing process. Accordingly, Kaner [9] explained that exploratory testing allows testers to learn while they test, to become more sophisticated as they learn, and to interpret and design their tests differently as they learn more about the product, the market, the variety of uses of the product, the risks, and the mistakes that are likely to be made by the humans who wrote the code. Under exploratory testing the test plan evolves during test development and execution, rather than being fully planned before the actual complexity of the product is realized. This approach, however, can be problematic in practice when testing effort must be estimated in advance as part of the overall project estimation. Evidently, there is broad agreement that testing is a complex task and is therefore difficult to estimate and quantify. An in-depth examination of the various testing processes and techniques is beyond the scope of this work; instead, we focus on the common building block of all software testing techniques - the test case (TC). Thus, in order to better understand the problem at hand, we next review the literature on this single concept common to all testing processes and techniques - the TC.