on the design between rounds of testing, adding functionality as more require-
ments were incorporated. If a particular area of the product performed below
our expectations, we often retested it in subsequent rounds. Because our focus
was on improving the design, whenever data integrity conflicted with doing the
right thing for the design, we always chose the design. Since we work in an Agile
environment, we had to keep the usability testing cycles quite short, usually less
than 2 weeks per round, often aiming for 1 week. Generally, we tried to keep one
or two iterations ahead of the actual coding work that was going on.
For the purpose of this case study, we focused on four rounds of testing
because they repeated the same workflow task with four different designs. We
will refer to these as Rounds 1-4.
Employing a resource we often use to expand the coverage of our team, we had
Round 1 of testing performed in person by a graduate student from the
School of Information at the University of Texas at Austin, under the mentorship
of Associate Professor Randolph Bias. We conducted Rounds 2, 3, and 4 of test-
ing remotely via a conference call and WebEx. We shared our desktop via WebEx
and gave the participant control of the mouse and keyboard.
There were a total of 25 participants across the four rounds of testing. We had
3 participants in Round 1, 9 participants in Round 2, 4 participants in Round
3, and 9 participants in Round 4. Participant groups varied across the rounds of
testing, partly because of budget constraints but also because users
of the Web Experience Management product can vary significantly. Users can
range from being long-time, full-time users of the system to brand new, occa-
sional users of the system. Rounds 2 and 3 involved current users of the sys-
tem, as well as users of competitors' systems recruited by a market research firm.
Round 1 involved representative users from the University of Texas, and Round
4 involved current customers exclusively.
10.3.2 Data Collection
Even though all the usability tests were “formative” in nature, we collected
usability metrics for each of the rounds of testing similar to the methodology
reported by Bergstrom, Olmsted-Hawala, Chen, and Murphy (2011). From our
perspective, usability metrics are just another way of communicating what hap-
pened during a usability test. Of course, we also collect qualitative data, and those
data still represent the bulk of our formative usability test results and recom-
mendations. However, we have found that metrics, being numbers, are concise and
easy for management and developers to digest. Also, we find them quick and
easy to collect and analyze.
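As an illustration of how lightweight this analysis can be, the sketch below shows
one way the metrics named in the next paragraph (task completion rate, time on
task, and SEQ ratings) might be tabulated for a round of testing. It is a minimal
sketch, not our actual analysis scripts; the session records and field names here
are hypothetical.

    # Minimal sketch, assuming hypothetical session records; not our actual
    # analysis scripts. It tabulates task completion rate, time on task, and
    # Single Ease Question (SEQ) ratings for one task in one round of testing.
    from statistics import mean

    # One record per participant for a single task.
    sessions = [
        {"participant": "P1", "completed": True,  "time_s": 142, "seq": 6},
        {"participant": "P2", "completed": False, "time_s": 305, "seq": 3},
        {"participant": "P3", "completed": True,  "time_s": 188, "seq": 5},
    ]

    completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
    mean_time = mean(s["time_s"] for s in sessions)
    mean_seq = mean(s["seq"] for s in sessions)  # SEQ: 1 (very difficult) to 7 (very easy)

    print(f"Task completion rate: {completion_rate:.0%}")
    print(f"Mean time on task:    {mean_time:.0f} s")
    print(f"Mean SEQ rating:      {mean_seq:.1f} / 7")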
We have standardized a set of metrics at OpenText and these were reported
and tracked across the rounds of tests to communicate improvements in the
design to product owners. Following the ISO definition of usability in terms of
“effectiveness,” “efficiency,” and “satisfaction,” we collected task completion rate, time
on task, the Single Ease Question (SEQ; Sauro & Dumas, 2009; Sauro & Lewis,