Don't Forget the Human Touch
You could say that even the most robust automated testing is still happening “from the inside.” To be sure your product behaves as it should for the user, you must emulate user activities—look at it from the “outside”—before releasing the software.
I once had the opportunity to speak with a project manager at a large financial institution whose development team had fashioned a fairly rigorous automated development testing regimen. They had a high degree of testing at all levels, and they had built a fairly robust auto-deployment process via their build. However, they noticed that sometimes when the team deployed their application into company-wide production, glaring user interface-specific issues, such as pages with broken tables and missing images, would surface. It was particularly painful for this manager, as he would inevitably find out about the issues from other groups in the company who depended on this application. It turned out that this development team was focused on automation, but they got a little carried away: No one had ever actually sat down and worked through the behavior and appearance of their product. The team responded as they should, not by abandoning the automated testing, but by recognizing that a manual review could reveal things a “robot” would not know. Once they added the manual checks, issues with the UI largely disappeared.
Whether human or automated, testing requires 100% test success. If a test fails, there could be subtle issues in the environment or the code base that could spell disaster later, once the application is deployed.
98% Is Still an A, Right?
I once consulted for an organization that had a test-pass threshold of 98%. This strategy was put into place because of a belief that the organization could never attain a 100% pass rate, given various complexities in the code base and environment at any given point in time. Unfortunately, permitting a pass rate of less than 100% created uncertainty between builds: the team had no way of ascertaining which tests were failing between releases, or whether they were the same failures or new ones. The CI approach requires that you make 100% test success an automated, enforced requirement; that way, you receive data on exactly which tests failed and why.
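One simple way to enforce this is to have the build gate on the test runner's exit status, so that any failure stops the pipeline. Here is a minimal sketch in Python, assuming a pytest-based suite (the tests/ directory is a stand-in for your own layout):

import subprocess
import sys

# Run the full test suite; pytest exits with 0 only when every
# collected test passes.
result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])

# Propagate any nonzero code (failures, errors, collection problems)
# so the build, and anything downstream such as deployment, fails.
sys.exit(result.returncode)

Because the gate is part of the build itself, a 98% run can never slip through unnoticed; the failing tests are named in the output every time.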