interface. Each user was instructed to "think out loud" to help us understand what they were expecting
from the product. Although users often gave us puzzled looks to start with, they quickly got involved
and had fun with the game-like aspects of working with the prototype. More important, most users had
no problem making the connection between the paper version and the end goal of a more usable
product designed for them.
Members of the development team sat in the room and observed each usability test. Sometimes only
one team member observed; other times there were half a dozen. After—or sometimes even
during—each usability test, we made changes to the prototype. We did as many rounds as we needed
to feel confident that we were on the right track (or in some less ideal cases, as many as we had time
for), and the developers went on to code the functionality that had been hammered out on paper.
Usability Testing of Working Software
It was important for us to retest designs after they were implemented in code. In some cases, features simply didn't test as well on paper as they did online. Also, once implementation started, there
were inevitable changes and compromises necessary to express the design in code. So we needed to
validate the more finished designs.
Much of this testing was done in our usability lab: a two-room lab, with rooms separated by one-way
glass. We generally had one developer and a usability specialist sit with the user in the testing room,
and additional team members observed from the other side of the one-way glass. As with the paper
prototypes, we gave users realistic tasks to complete, asked them to "think out loud" while they worked
and to let us know if they experienced frustration or satisfaction with the feature being tested. The
usability specialist or developer sometimes prompted the user for more information along the way.
All the team members were involved in observing the test sessions, although not every person
observed every test. Team members took notes during test sessions on sticky pads, writing down one
issue per sticky. At the end of all the test sessions, we held a debriefing meeting involving the whole
team where we conducted an affinity diagramming exercise, grouping issues from these sticky notes
into like categories. Once the categories were established, it was easier for the team to review the
issues and decide what actions to take. Because the entire team was involved in learning what was
working (or not) in the interface, everyone could contribute to the solution. Thus, the process not only
improved the interface but also served as a useful team-building exercise.
Usability Nights
We've also had a lot of success with what we call "Usability Nights," which are a fast way to get
informal usability feedback from a group of users. Basically, we take over a training room for an evening
and bring in a teamful of developers and a bunch of local customers. Instead of having everyone work
on a problem at the same time, we set up stations for each tool or interface, each staffed by the
development team member(s) working on it.
We usually ask the users to bring in examples from their jobs to work on. Typically we get feedback
on only one or two interfaces, but on one occasion we tested several development efforts at once.
Users rotated through each station, spending about 30 minutes at each. Depending on the stage of the
project and what the developer wanted to learn, a station might use a paper prototype, working
software, or even a hybrid approach combining a paper prototype with the real software.
Usability Nights are a fast way to get a lot of information, and our customers enjoy them as well. They
appreciate the opportunity to be involved in the development of the products they'll eventually use in
their own work.