3. Controlled, taking place in a test environment that controls external factors so that effectiveness, efficiency, and satisfaction can be assessed with defined qualitative and quantitative metrics
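Quantitative metrics of this kind are commonly summarized per task. The following is a minimal sketch, not a method prescribed by the text; the helper names, the five-participant data set, and the 1-7 satisfaction scale are all hypothetical illustrations:

```python
# Sketch: summarizing effectiveness, efficiency, and satisfaction
# from a controlled usability test. All data below are hypothetical.

def effectiveness(successes, attempts):
    """Task completion rate (effectiveness), as a fraction."""
    return successes / attempts

def efficiency(times_sec):
    """Mean time on task in seconds (efficiency)."""
    return sum(times_sec) / len(times_sec)

def satisfaction(ratings, scale_max=7):
    """Mean post-task rating normalized to 0-1 (satisfaction)."""
    return sum(ratings) / (len(ratings) * scale_max)

# Five participants attempting one task (hypothetical data)
completed = 4                 # participants who finished the task
times = [62, 75, 58, 91, 80]  # seconds per participant
ratings = [6, 5, 7, 4, 6]     # 1-7 post-task satisfaction ratings

print(f"effectiveness: {effectiveness(completed, 5):.2f}")  # 0.80
print(f"efficiency:    {efficiency(times):.1f} s")          # 73.2 s
print(f"satisfaction:  {satisfaction(ratings):.2f}")        # 0.80
```

Reporting all three measures together, rather than any one alone, is what distinguishes a controlled evaluation of this type from informal observation.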
7.3.3.2 Summary
Evaluations are typically user-based or expert-based, depending on who conducts the assessment of the design(s). Again, the generalizability trade-offs of either class should be considered. User-based methods have been found to uncover more practically realistic and legitimate problems. Expert-driven evaluations, however, can extract valid, unbiased issues more accurately than a user sample that is too small (Maguire, 2001), and a shortage of actual users to work with is not uncommon in evaluations.
Recently, Bainbridge (2004) characterized user-based evaluations as consisting of the identification of representative users and tasks, followed by observation of the problems that arise as those users employ the design to complete the tasks. The evaluations can be formative, guiding design selection and prototype iteration, or summative, documenting effectiveness, efficiency, and satisfaction at the completion of the design iterations. Applicable methods, prototype robustness, measures, and purpose differ markedly between summative and formative methods (Bainbridge, 2004). Because formative methods are intimately linked to the iterative portion of UCD, such evaluations must be timely so as not to delay the overall design approach.
7.3.3.3 Some Examples of Methods and Tools
The testing used in UCD typically falls under the umbrella of methods classified as usability evaluation methods. Usability testing, however, is only one of several categories of methods that inform the evaluation of the design process. Rubin (1994) identifies ten basic techniques relevant to the UCD process, specifically concerning the evaluation of potential design choices. Table 7.7 summarizes several of the methods mentioned by Rubin (1994), including focus group research, surveys, design walk-throughs, paper-and-pencil evaluations, expert evaluations, usability audits, usability testing, field studies, and follow-up studies. Again, note that usability testing is just one of several classes of evaluation methodology.
Even so, usability testing is probably the most common approach to implementing UCD and has been practiced since the early 1980s (Dumas, 2003). It has received significant attention in the last 15 years as a valid, reliable, and efficient means of assessment. Several texts describe usability testing methods in great detail, providing extensive instructions for the planning, design, and management of these evaluations (e.g., Dix et al., 1998; Nielsen and Mack, 1994; Rubin, 1994). These and other texts provide step-by-step instructions for specific types of evaluations. Like the entire UCD process, the use of evaluation techniques is affected by several contextual factors to which each technique must be adapted. In this section, three approaches to testing and evaluation are discussed: two fall under the category of expert-based inspection methods, and the third is user-based observation.
Heuristic evaluation and cognitive walkthroughs are two usability evaluation methods classified as expert-based inspection methods. These techniques represent two of the original usability inspection methods introduced to the HCI community, and several of the subsequently emerging methods were grounded in heuristic evaluation and the cognitive walkthrough (Nielsen and Mack, 1994). Inspection methods are often lauded for their quick turnaround, limited training requirements, and ability to uncover genuine usability problems without a great deal of involvement from actual users (Sears and Hess, 1999). In contrast, user-based observation techniques, which are also introduced here, require significant participation from actual users. User-based observations may be anchored in usability evaluations but can range from basic observation to highly controlled empirical methods using special equipment to measure interactions. Because they involve actual users (or at least representative users), they typically yield a higher degree of validity in assessing how well a design matches the identified requirements, and they can reveal surprising design flaws and strengths not observable through expert-based methods.
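The trade-off between the number of evaluators (or test users) and the proportion of usability problems uncovered is often modeled with the problem-discovery curve associated with Nielsen and Landauer. The sketch below is illustrative only; the per-evaluator detection probability of 0.31 is an often-cited average, assumed here rather than taken from this text:

```python
# Problem-discovery curve: expected proportion of usability problems
# found by n independent evaluators, where each evaluator detects a
# given problem with probability lam (Nielsen and Landauer's model).

def proportion_found(n, lam=0.31):
    # lam = 0.31 is a commonly cited average detection rate across
    # studies; treat it as an assumption, not a fixed constant.
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} evaluators -> {proportion_found(n):.0%} of problems")
```

Under this assumption the curve flattens quickly, which is one reason inspection methods with a handful of evaluators are considered cost-effective even though no small team finds every problem.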