came from various web browsers. No matter how a user interacts with software, there is a testing tool that can automate the tests. This includes PC-based GUIs, APIs, consoles/keyboards, mobile phones, and, as documented in Gruver, Young & Fulghum (2012), even the front panels of laser printers.
Performance Testing: These tests determine the speed of the service under various conditions. This testing is performed on the service in the testing environment or in a specially built performance testing environment. It should determine whether the performance meets written specifications or requirements. All too often, however, such specifications are nonexistent or vague, so the results of this testing are simply compared to previous results. If the entire system, or a specific feature, works significantly more slowly than the previous release (a performance regression), the test fails.
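To make the comparison concrete, a regression check of this kind can be as simple as timing the service and comparing the result against the previous release's recorded baseline. The following Python sketch assumes a hypothetical endpoint, baseline file, and 10 percent slowdown threshold; none of these come from the text above.

    # Minimal sketch: flag a performance regression by comparing the median
    # response time of the new release against the previous release's baseline.
    # The URL, baseline file name, and 10% threshold are illustrative assumptions.
    import json
    import statistics
    import time
    import urllib.request

    SERVICE_URL = "http://service.test.example/api/search?q=ping"  # hypothetical
    BASELINE_FILE = "perf_baseline.json"                           # hypothetical
    REGRESSION_THRESHOLD = 1.10  # fail if more than 10% slower than the baseline

    def measure_median_latency(url: str, samples: int = 50) -> float:
        """Time a number of requests and return the median latency in seconds."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            urllib.request.urlopen(url).read()
            timings.append(time.perf_counter() - start)
        return statistics.median(timings)

    def main() -> None:
        current = measure_median_latency(SERVICE_URL)
        with open(BASELINE_FILE) as f:
            previous = json.load(f)["median_latency_seconds"]
        if current > previous * REGRESSION_THRESHOLD:
            raise SystemExit(
                f"Performance regression: {current:.3f}s vs baseline {previous:.3f}s")
        print(f"OK: {current:.3f}s (baseline {previous:.3f}s)")

    if __name__ == "__main__":
        main()

In practice the new measurement would also be recorded as the baseline for the next release, so each run is judged against its immediate predecessor.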
Load Testing: This special kind of performance testing determines how much load the system can sustain. It is usually done in the testing environment or in a special performance testing environment. Such testing involves subjecting the service to increasingly larger amounts of traffic, or load, to determine the maximum the system is able to process. As an example, Google does not adopt a new Linux kernel without first doing load testing to verify that the kernel changes have not negatively affected how much load a search cluster can sustain. An entire cluster is built with machines running the new kernel release. Search queries are artificially generated at larger and larger QPS (queries per second). Eventually the cluster maxes out, unable to handle more QPS, or the system becomes so slow that it cannot answer queries within the required number of milliseconds. If this maximum QPS is significantly lower than that of the previous release, Google does not upgrade to that kernel.
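The kernel-qualification example can be sketched as a simple load ramp: generate artificial queries at increasing QPS and record the highest step at which the service still answers within its latency budget. The endpoint, QPS steps, and 200 ms budget in this Python sketch are illustrative assumptions, not a description of Google's actual tooling.

    # Minimal sketch: ramp up artificial query load step by step and report the
    # highest QPS the service sustains while still answering within the latency
    # budget. The URL, QPS steps, and 200 ms budget are illustrative assumptions.
    import concurrent.futures
    import time
    import urllib.request

    SERVICE_URL = "http://search.test.example/?q=load"  # hypothetical test cluster
    LATENCY_BUDGET_S = 0.200                             # required answer time
    QPS_STEPS = [100, 200, 400, 800, 1600]               # increasingly larger load

    def timed_query(url: str) -> float:
        """Issue one query and return its latency in seconds."""
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        return time.perf_counter() - start

    def run_step(qps: int, duration_s: int = 10) -> bool:
        """Send roughly qps queries per second for duration_s seconds and return
        True if the 99th-percentile latency stays within the budget."""
        with concurrent.futures.ThreadPoolExecutor(max_workers=qps) as pool:
            futures = []
            for _ in range(qps * duration_s):
                futures.append(pool.submit(timed_query, SERVICE_URL))
                time.sleep(1.0 / qps)  # pace submissions to approximate the target QPS
            latencies = sorted(f.result() for f in futures)
        p99 = latencies[int(len(latencies) * 0.99) - 1]
        return p99 <= LATENCY_BUDGET_S

    def main() -> None:
        max_sustained = 0
        for qps in QPS_STEPS:
            if not run_step(qps):
                break
            max_sustained = qps
        print(f"Maximum sustained load: {max_sustained} QPS")

    if __name__ == "__main__":
        main()

Comparing the reported maximum against the figure recorded for the previous release is what turns this ramp into a pass/fail load test.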
User Acceptance Testing (UAT): This testing is done by customers to verify that the system meets their needs and to verify claims made by the producer. Customers run their own tests to verify that the new release meets their requirements. For example, they might run through each business process that involves the service. System testing involves developers making sure they don't ship products with defects; UAT involves customers making sure they don't receive products with defects. Ideally, any test developed for UAT will be made known to the developers so that it can be added to their own battery of tests, which would catch such problems earlier in the process. Sadly, this is not always possible. UAT may include tests that use live data that cannot be shared, such as personally identifiable information (PII). UAT also may be used to determine whether an internal process needs to be revised.
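A customer-run UAT check for a single business process might look like the following Python sketch. The service URL, endpoints, and order-to-invoice workflow are hypothetical placeholders; a real customer would substitute the business processes and data that matter to them.

    # Minimal sketch of a customer-run acceptance test that walks through one
    # business process end to end. The base URL, endpoints, and order workflow
    # are hypothetical placeholders for whatever the service actually exposes.
    import json
    import urllib.request

    BASE_URL = "http://service.uat.example"  # hypothetical UAT environment

    def post_json(path: str, payload: dict) -> dict:
        """POST a JSON payload and return the decoded JSON response."""
        req = urllib.request.Request(
            BASE_URL + path,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def test_order_to_invoice() -> None:
        """Business process: place an order, then confirm an invoice is issued."""
        order = post_json("/orders", {"sku": "TEST-001", "quantity": 1})
        invoice = post_json("/invoices", {"order_id": order["id"]})
        assert invoice["status"] == "issued", f"unexpected status: {invoice['status']}"

    if __name__ == "__main__":
        test_order_to_invoice()
        print("UAT business-process check passed")

Sharing a script like this with the developers lets them fold the same check into their own test battery, which is exactly the hand-off described above.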