evaluation techniques appropriate at different levels of design fidelity from concept through implemen-
tation. From this study, it is presumed that the ease with which an evaluation method is applied is inter-
related with the level of abstraction required of those conducting the assessment to conceptualize the
design and task interactions. Those prototypes that have a physical presence and are highly tangible
maintain more flexibility in applicable evaluation techniques than prototypes of a more conceptual
nature (e.g., Wizard-of-Oz methods or paper-based prototypes).
7.3.3.1.2 Testing and Evaluation Considerations
In their review of human factors methods, Leonard et al. (2005) identified both resource- and
method-specific criteria for the selection of evaluation techniques, presented as questions
in Table 7.6. The authors defined these criteria in the context of all human factors methods and not just
in Table 7.6. The authors defined these criteria in the context of all human factors methods and not just
UCD. In the context of UCD, the purpose of the evaluation is the assessment of the design in terms of the
requirements and specification that emerged in the first step of UCD. The intended outcome of the evaluation in terms of UCD is to iteratively improve upon the design until it is appropriate for implementation.
Evaluation methods can serve three roles (Stanton and Young, 1999):
1. Functional analyses, to understand the scope of functions that a given design supports
2. Scenario analyses, to evaluate the scope and sequence of activities users must step through with the
design to achieve the desired outcome(s)
3. Structural analyses, to evaluate the efficacy of a design and users' opinions of it from their perspectives
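As a purely illustrative aside, not drawn from the original text, the following sketch shows one way these three analysis roles might be encoded when cataloguing evaluation findings; the Python names EvaluationRole and Finding are hypothetical.

```python
# Purely illustrative sketch: encoding the three analysis roles
# (Stanton and Young, 1999) as tags for recorded evaluation findings.
# The names EvaluationRole and Finding are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto


class EvaluationRole(Enum):
    FUNCTIONAL = auto()   # scope of functions the design supports
    SCENARIO = auto()     # scope and sequence of user activities to reach an outcome
    STRUCTURAL = auto()   # design efficacy and user opinions, from the users' perspectives


@dataclass
class Finding:
    role: EvaluationRole
    description: str
    actionable_change: str  # the design change the finding translates into


# Example: a scenario-analysis finding recorded during one design iteration.
finding = Finding(
    role=EvaluationRole.SCENARIO,
    description="Users needed seven steps to confirm a report submission.",
    actionable_change="Collapse confirmation into a single review-and-submit screen.",
)
print(finding.role.name, "->", finding.actionable_change)
```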
The generalizability of evaluation outcomes is of key concern in the selection of evaluation studies.
Namely, the information gathered in this testing process must be easily translated into actionable
design changes relative to the users, tasks, and environments of the actual proposed context of use.
That said, those directing the testing process must be sensitive to the trade-offs assumed in the selection
of an evaluation method, setting (field or lab), functionality and tasks assessed, evaluation participants,
and sample size. For example, in the decision to collect evaluation information in the field through
observations in lieu of a more formal usability testing method, designers should question whether more is
gained from watching the interactions in context than is lost from the lack of structure and
control. A lack of generalizable results from an evaluation can translate into longer lead-time on final
development or the recommendation of erroneous design changes that are not truly advantageous for
actual design use in the target work system.
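One possible way to make such trade-offs explicit is a simple weighted comparison, sketched below under stated assumptions; the criteria, weights, and 1-5 scores are hypothetical placeholders rather than values taken from the text.

```python
# Illustrative sketch only: weighing the trade-offs named above (setting,
# structure/control, cost) when choosing between field observation and a
# formal lab usability test. All criteria, weights, and 1-5 scores are
# hypothetical placeholders, not values from the text.

CRITERIA_WEIGHTS = {
    "contextual_fidelity": 0.40,    # how well the setting reflects the actual context of use
    "structure_and_control": 0.35,  # repeatability and comparability of results
    "cost_and_lead_time": 0.25,     # resources consumed per design iteration
}

CANDIDATE_SCORES = {
    "field_observation": {"contextual_fidelity": 5, "structure_and_control": 2, "cost_and_lead_time": 3},
    "lab_usability_test": {"contextual_fidelity": 2, "structure_and_control": 5, "cost_and_lead_time": 4},
}


def weighted_score(scores):
    """Combine per-criterion scores (1-5) into a single weighted value."""
    return sum(CRITERIA_WEIGHTS[criterion] * value for criterion, value in scores.items())


for method, scores in CANDIDATE_SCORES.items():
    print(f"{method}: {weighted_score(scores):.2f}")
```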
Three levels of inclusion of users in the evaluation process have been identified (Maguire, 2001):
1. Participative, to directly educe user opinions and internal thoughts upon interaction with the
design without evaluator/designer prompting
2. Assistive, which employs a think-aloud protocol methodology in which the user talks out loud
while working with a design to explain their motivation for actions and train of thought. The
designer only intervenes when the user hits an obstacle
TABLE 7.6 Questions for Consideration in the Selection of UCD Testing and Evaluation Techniques

Resource-Specific Criteria
What is the cost-benefit ratio of using this method?
How much time is available for the study?
How much money is available for the study?
How many staff are available for the implementation and analysis of the study?

Method-Specific Criteria
What is the purpose of the evaluation?
What is the state of the product/system?
What is the intended outcome of the evaluation?
How can the users be involved in the evaluation?
How can the designers be involved in the evaluation?
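To suggest how the Table 7.6 questions might be put to work, the sketch below records them as a structured checklist whose unanswered items flag gaps in the study plan; the field names are assumptions rather than terminology from Leonard et al. (2005).

```python
# Hypothetical sketch: capturing the Table 7.6 questions as a structured
# checklist so the rationale behind a chosen evaluation technique is recorded
# with the study plan. Field names are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class MethodSelectionChecklist:
    # Resource-specific criteria
    cost_benefit_ratio: str = ""
    time_available: str = ""
    budget_available: str = ""
    staff_available: str = ""
    # Method-specific criteria
    evaluation_purpose: str = ""
    product_system_state: str = ""
    intended_outcome: str = ""
    user_involvement: str = ""
    designer_involvement: str = ""

    def unanswered(self):
        """Return the criteria that still lack an answer before a method is chosen."""
        return [name for name, value in vars(self).items() if not value]


checklist = MethodSelectionChecklist(
    evaluation_purpose="Assess the design against the requirements from the first UCD step",
    product_system_state="Mid-fidelity interactive prototype",
)
print("Criteria still to answer:", checklist.unanswered())
```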