two highest ratings. Perhaps you want to aggregate all success data into one overall success average representing all tasks. Or you might want to combine several metrics using a z-score transformation (described in Section 8.1.3) to create an overall usability score; a small example of this kind of combination appears after this list.
Verifying responses. In some situations, particularly for online studies, participant responses may need to be verified. For example, if you notice that a large percentage of participants are all giving the same wrong answer, this should be investigated (a simple check along these lines is sketched after this list).
Checking consistency. It's important to make sure that data are captured properly. A consistency check might include comparing task completion times and successes to self-reported metrics. If many participants completed a task in a relatively short period of time and were successful but gave the task a very low rating, there may be a problem with either how the data were captured or participants confusing the scales of the question. This is quite common with scales involving self-reported ease of use. One way to flag such cases is shown in the consistency-check sketch following this list.
Transferring data. It's common to capture and clean up data using Excel, then use another program such as SPSS to run some statistics (although all the basic statistics can be done with Excel), and then move back to Excel to create the charts and graphs (a short scripted version of this round trip follows the list).
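
To make the aggregation step above more concrete, here is a minimal sketch of the z-score combination described in Section 8.1.3. The column names, the sample values, and the decision to flip the sign of task time (because lower times are better) are all assumptions for illustration, not a prescribed scoring method.

```python
# Minimal sketch: combine several metrics into one overall score via z-scores.
import pandas as pd

data = pd.DataFrame({
    "task_success": [1.0, 0.8, 0.6, 1.0, 0.4],   # proportion of tasks completed
    "task_time":    [45, 60, 120, 50, 150],      # seconds (lower is better)
    "satisfaction": [4.5, 4.0, 2.5, 5.0, 2.0],   # 1-5 rating
})

# Standardize each metric: subtract the mean, divide by the standard deviation.
z = (data - data.mean()) / data.std()

# Task time is "lower is better," so invert its z-score before averaging.
z["task_time"] = -z["task_time"]

# Overall usability score per participant = mean of the z-scores.
data["overall_z"] = z.mean(axis=1)
print(data)
```

Averaging the z-scores weights every metric equally; any other weighting would need to be justified by the goals of the study.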
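For the response-verification item, the sketch below flags any question where a large share of participants converge on the same wrong answer. The answer key, the responses, and the 30% threshold are illustrative assumptions.

```python
# Minimal sketch: flag questions where many participants share one wrong answer.
from collections import Counter

correct = {"q1": "B", "q2": "A"}
responses = {
    "q1": ["B", "B", "C", "C", "C", "C", "B", "C"],
    "q2": ["A", "A", "A", "D", "A", "A", "A", "A"],
}

THRESHOLD = 0.30  # flag if more than 30% of participants share one wrong answer

for question, answers in responses.items():
    wrong = [a for a in answers if a != correct[question]]
    if not wrong:
        continue
    answer, count = Counter(wrong).most_common(1)[0]
    share = count / len(answers)
    if share > THRESHOLD:
        print(f"{question}: {share:.0%} gave the same wrong answer ({answer!r}) -- investigate")
```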
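For the consistency check, the following sketch flags participants who completed a task quickly and successfully yet rated it as very difficult, which may point to a capture error or a reversed rating scale. The column names, the fastest-quartile cutoff, and the rating threshold are assumptions for the example.

```python
# Minimal sketch: flag fast, successful completions paired with very low ease ratings.
import pandas as pd

df = pd.DataFrame({
    "participant":  [1, 2, 3, 4],
    "task_time":    [32, 30, 180, 35],   # seconds
    "success":      [1, 1, 0, 1],        # 1 = completed the task
    "ease_rating":  [5, 1, 2, 4],        # 1 = very difficult, 5 = very easy
})

# Treat the fastest quartile as "a relatively short period of time."
fast_cutoff = df["task_time"].quantile(0.25)

suspicious = df[(df["success"] == 1) &
                (df["task_time"] <= fast_cutoff) &
                (df["ease_rating"] <= 2)]

# These rows deserve a closer look before any analysis is run.
print(suspicious)
```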
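If you prefer to keep the hand-off between tools in one place, the basic descriptive statistics can also be run in a short script rather than in SPSS. This is just an alternative sketch, not the workflow described above; the file and sheet names are assumptions, and reading .xlsx files with pandas requires the openpyxl package.

```python
# Minimal sketch: read the cleaned-up workbook, compute descriptive statistics,
# and write the results back out for charting in Excel.
import pandas as pd

df = pd.read_excel("usability_study.xlsx", sheet_name="cleaned_data")

# Mean, standard deviation, and quartiles for every numeric column.
summary = df.describe()

summary.to_excel("usability_summary.xlsx")
```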
Data cleanup can take anywhere from an hour to a couple of weeks. For simple usability studies, with just a couple of metrics, cleanup should be very quick. Obviously, the more metrics you are dealing with, the more time it will take. Also, online studies can take longer because more checks are being done. You want to make sure that the technology is coding all the data correctly.
3.6 SUMMARY
Running a usability study including metrics requires some planning. The following are some key points to remember.
The first decision you must make is whether you are going to take a formative or summative approach. A formative approach involves collecting data to help improve the design before it is launched or released. It is most appropriate when you have an opportunity to impact the design of the product positively. A summative approach is taken when you want to measure the extent to which certain target goals were achieved. Summative testing is also sometimes used in competitive usability studies.
When deciding on the most appropriate metrics, two main aspects
of the user experience to consider are performance and satisfaction.
Performance metrics characterize what the user does and include measures such as task success, task time, and the amount of effort required to
achieve a desired outcome. Satisfaction metrics relate to what users think
or feel about their experience.
Budgets and timelines need to be planned out well in advance when running any usability studies involving metrics. If you are running a formative