outcomes. When conducting usability testing, it is important to remember that it is the ICT and
its application that is being tested, not the users.
Data is collected through observations and user
ratings. Five-point scales can be used to assess levels of user satisfaction. These tests are generally conducted over a one-hour period, with the size of a usability testing group usually between eight and 16 users. This number may vary according to the main characteristics of the user group, with four to six people appropriate for user groups that are fairly homogeneous. Smaller groups with
varied membership may be used to test particular
aspects of a design at different stages of the design
process. The staff who have developed the ICT and its activities observe and listen carefully, taking notes on user experiences as users work through the scenarios for implementing the ICT that the project team has given them. This information is used to reflect upon how to improve the design.
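To make the rating step concrete, the short Python sketch below shows one way five-point satisfaction ratings from a small testing group might be summarised; the ratings, variable names and threshold are illustrative assumptions, not part of the method described here.

    from statistics import mean

    # Hypothetical five-point satisfaction ratings (1 = very dissatisfied,
    # 5 = very satisfied) from a testing group of eight users.
    ratings = [4, 3, 5, 2, 4, 4, 3, 5]

    average = mean(ratings)               # overall satisfaction level
    low = [r for r in ratings if r <= 2]  # responses suggesting problems

    print(f"Mean satisfaction: {average:.1f} / 5")
    print(f"Low ratings: {len(low)} of {len(ratings)} users")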
It is preferred that usability testing be conducted
throughout the design period on a series of proofs
and prototypes rather than on a final product only.
This requires adequate planning for the alloca-
tion of sufficient time and resources. For each
round of testing it is important to identify the specific goals and to focus on them. For
instance, the focus in an early round might be on
testing for levels of user comfort, and in a later
round on levels of satisfaction. The aim of the test
is to determine how well each goal is being met.
Typically a usability test assesses both user performance and preference by collecting data on usability problems, user performance, task completion, speed, and levels of satisfaction.
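One possible way to record these measures during a session is a simple per-task data structure, sketched below in Python; the field names and example values are assumptions for illustration only.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TaskResult:
        """Data collected for one user attempting one scenario task."""
        task: str                    # the scenario task attempted
        completed: bool              # accurate completion of the task
        seconds_taken: float         # speed of completion
        problems: List[str] = field(default_factory=list)  # observed usability problems
        satisfaction: int = 3        # five-point user rating (1 to 5)

    # An illustrative observation from a single session.
    result = TaskResult(
        task="Search for suitable positions",
        completed=True,
        seconds_taken=142.0,
        problems=["Hesitated over the search filter labels"],
        satisfaction=4,
    )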
Successful, accurate completion of a task in
a timely manner is generally considered a more
important measurable usability goal than user
satisfaction. Results on performance are more
reliable than user preference as the latter may
be based on levels of comfort, keeping in mind
that learning new tasks will generate stress. Low
user ratings indicate that the ICT applications
need to be improved. However, high ratings do
not necessarily mean that problems do not exist.
High ratings despite problems can be due to the influence of extraneous factors such as users blaming themselves for difficulties encountered, unwanted personal attention, and the particular tendency amongst human services workers to be kind to members of the project team.
Ultimately the measure of usability is that
the application of the ICT allows users to do
their tasks in the same amount of time or less,
with similar or improved levels of success and
satisfaction. No doubt there will be varied needs and experiences amongst members of the user group, so the design should be tailored to meet as many user needs as possible while still enabling successful completion of the required task. Ideally the ICT will improve upon the other ways users have achieved their goals; otherwise, users are unlikely to embrace it unless they are compelled to do so. If this is the case, ongoing
conflict management strategies will need to be
employed while further user and task analysis is
conducted (Martin & McKay 2007).
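A minimal sketch of this criterion, comparing a baseline against the new ICT, is shown below; all names, figures and tolerances are illustrative assumptions rather than values from the source.

    def meets_usability_goal(baseline, new, tolerance=0.05):
        """Return True if the new design is at least as good as the baseline
        on time, success rate and satisfaction (within a small tolerance)."""
        return (new["mean_seconds"] <= baseline["mean_seconds"]  # same time or less
                and new["success_rate"] >= baseline["success_rate"] - tolerance
                and new["mean_satisfaction"] >= baseline["mean_satisfaction"] - 0.2)

    baseline = {"mean_seconds": 180.0, "success_rate": 0.75, "mean_satisfaction": 3.4}
    new_ict  = {"mean_seconds": 150.0, "success_rate": 0.80, "mean_satisfaction": 3.9}

    print(meets_usability_goal(baseline, new_ict))  # True for these figures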
The following case study demonstrates the ap-
plication of the project design stages of user and
task analysis, persona and scenario development,
and usability testing for the development of a web-based resource to assist people recovering from mental illness to gain employment.
Case Study: Design of Electronic Work Requirement Awareness Program (e-WRAP)

Background
Web-based resources available for finding work
are primarily listings of positions that can be
searched according to type of position, location
and pay rate. Templates and examples of resumes
are available, as well as tips for interviews. These
sites are targeted at the general population and
do not address issues of self-esteem, motivation,
concentration, discrimination, stigma and per-