6.5 USING SUS TO COMPARE DESIGNS
A number of usability studies comparing different designs for similar tasks have used the SUS questionnaire as one of the techniques for making the comparison, typically in addition to performance data.
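For context, a SUS score is computed from ten items rated on a 1-to-5 scale: odd-numbered (positively worded) items contribute (response − 1) points, even-numbered (negatively worded) items contribute (5 − response) points, and the sum is multiplied by 2.5 to yield a score from 0 to 100. Here is a minimal sketch of that scoring in Python; the responses shown are hypothetical, for illustration only:

```python
# Standard SUS scoring: odd items contribute (response - 1), even items
# contribute (5 - response); the sum is scaled by 2.5 to give 0-100.
def sus_score(responses):
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical responses from one participant (items 1-10)
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```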
Traci Hart (2004) of the Software Usability Research Laboratory at Wichita
State University conducted a usability study comparing three different websites
designed for older adults: SeniorNet, SeniorResource, and Seniors-Place. After
attempting tasks on each website, participants rated each of them using the
SUS questionnaire. The average SUS score for the SeniorResource site was 80%,
which was significantly better than the average scores for SeniorNet and Seniors-
Place, both of which averaged 63%.
The American Institutes for Research (2001) conducted a usability study comparing Microsoft's Windows ME and Windows XP. They recruited 36 participants whose expertise with Windows ranged from novice to intermediate. The participants attempted tasks using both versions of Windows and then completed the SUS questionnaire for each. The average SUS score for Windows XP (74%) was significantly higher than the average for Windows ME (56%) (p < 0.0001).
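Because each participant rated both versions, a comparison like this is typically tested with a paired (within-subjects) t-test. A minimal sketch of such a comparison in Python, using hypothetical SUS scores rather than data from any of the studies cited here:

```python
from scipy import stats

# One SUS score (0-100) per participant for each design. Each
# participant rated both designs, so the samples are paired.
# These values are hypothetical, for illustration only.
sus_design_a = [72.5, 80.0, 65.0, 77.5, 70.0, 82.5, 75.0, 68.0]
sus_design_b = [55.0, 62.5, 50.0, 60.0, 57.5, 65.0, 52.5, 58.0]

t, p = stats.ttest_rel(sus_design_a, sus_design_b)
print(f"mean A = {sum(sus_design_a) / len(sus_design_a):.1f}")
print(f"mean B = {sum(sus_design_b) / len(sus_design_b):.1f}")
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")
```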
Sarah Everett, Michael Byrne, and Kristen Greene (2006), from Rice
University, conducted a usability study comparing three different types of paper
ballots: bubble, arrow, and open response. These ballots were based on actual
ballots used in the 2004 U.S. elections. After using each of the ballots in a simu-
lated election, the 42 participants used the SUS questionnaire to rate each one.
They found that the bubble ballot received significantly higher SUS ratings than either of the other two (p < 0.001).
There's also some evidence that participants who have more experience with a product tend to give it higher SUS ratings than those with less experience. In testing two different applications (one web based and one desktop based), McLellan, Muddimer, and Peres (2012) found that SUS scores from users with more extensive product experience tended to be about 15% higher than scores from users with no or limited experience with the product.
6.6 ONLINE SERVICES
More and more companies are learning the value of getting feedback from the users of their websites. The currently in-vogue term for this process is listening to the “Voice of the Customer,” or VoC studies. This is essentially the same process as collecting postsession self-reported metrics. The main difference is that VoC studies are typically done on live websites. The common approach is that a randomly selected percentage of live-site users are offered a pop-up survey asking for their feedback at a specific point in their interaction with the site, usually on logout, exiting the site, or completing a transaction. Another approach is