abstraction levels and from different MDWD processes. Appendix A includes two
examples of metrics from the Web Usability Model with their generic definition.
2. Operationalize the Metrics. The calculation formulas of the selected metrics
should be operationalized by identifying variables from the generic definition of the
metric in the modeling primitives of the selected artifacts, in other words, by estab-
lishing a mapping between the generic description of the metric and the concepts that
are represented in the artifacts. In the evaluation of models (PIM, PSM, and CM), the
calculation of the operationalized formulas may require an evaluator to determine
the values of the variables involved, or a verification tool if these formulas are
expressed in terms of variables that can be computed automatically from the input
models using query languages such as the Object Constraint Language (OCL).
3. Establish Rating Levels for Metrics. Rating levels are established for ranges of
values obtained for each metric by considering their scale type and the guidelines
related to each metric whenever possible. These rating levels allow us to discover
whether the associated attribute improves the Web application's level of usability, and
are also relevant in detecting usability problems that can be classified by their level of
severity.
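A rating scheme of this kind can be sketched as a simple mapping from metric values to levels. The threshold values below are purely illustrative, not taken from the Web Usability Model; in practice they would come from the scale type and guidelines associated with each metric.

```python
# Sketch with assumed thresholds: mapping a metric value on a [0, 1] scale
# (higher is better) to a rating level usable for severity classification.
def rate(value, thresholds=((0.8, "no problem"),
                            (0.5, "low"),
                            (0.2, "medium"))):
    """Return the rating level for the first threshold the value meets."""
    for lower_bound, level in thresholds:
        if value >= lower_bound:
            return level
    return "critical"
```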
The outcomes of the above activities represent the Evaluation specification that
will be used as input by the next stage.
3.3 Design of the Evaluation
The aim of this stage is to design how the evaluation will be performed and what
information will be collected during the evaluation.
1. Define the Template for Usability Reports. This template is defined in order to
present all the data related to the usability problems detected. A usability report
is commonly a list of usability problems (UPs). Each UP can be described by the
following fields: ID, which identifies a single UP; description of the UP; affected
attribute from the Web Usability Model; severity level, which can be low, medium, or
critical; artifact evaluated, i.e., the artifact to which the metrics were applied;
source of the problem, which refers to the artifact that originated the usability
problem (e.g., PIMs, PSMs, CMs, or transformation rules); occurrences, the number of
appearances of the same UP; and recommendations to correct the detected UP (some
recommendations might also be provided automatically by interpreting the range values).
Other fields that are useful for post-analyzing the detected UPs can also be added,
such as the priority of the UP, the effort needed to correct the UP, and the changes
that must be performed, taking the aforementioned fields into consideration.
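The report template above can be sketched as a record type. The field names follow the text; the example values are invented for illustration only.

```python
# Sketch of the usability report template as a record type; field names
# follow the text, example values are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UsabilityProblem:
    id: str
    description: str
    affected_attribute: str     # attribute from the Web Usability Model
    severity_level: str         # "low" | "medium" | "critical"
    artifact_evaluated: str     # artifact to which the metrics were applied
    source_of_problem: str      # e.g. "PIM", "PSM", "CM", "transformation rule"
    occurrences: int = 1        # appearances of the same UP
    recommendations: list = field(default_factory=list)
    # optional post-analysis fields
    priority: Optional[int] = None
    effort: Optional[str] = None
    changes: Optional[str] = None

# A usability report is then simply a list of such records:
report = [UsabilityProblem(id="UP-01",
                           description="Ambiguous link label",
                           affected_attribute="Navigability",
                           severity_level="low",
                           artifact_evaluated="Navigation PIM",
                           source_of_problem="PIM")]
```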
2. Elaborate an Evaluation Plan. Designing the evaluation plan involves: establishing
an evaluation order for the artifacts; determining the number of evaluators; assigning
tasks to these evaluators; and considering any restrictions that might condition the
evaluation. The recommended order is to first evaluate the artifacts that belong to a
higher abstraction level (PIMs), since these artifacts drive the development of the
final Web application. This allows us to detect usability problems during the early
stages of the Web development process. The artifacts that belong to a lower level of
abstraction (PSMs and CMs) are then evaluated.
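This ordering rule can be sketched as a sort over artifacts keyed by abstraction level. The level ranking below is an assumption that simply encodes the recommendation in the text (PIMs first, then PSMs, then CMs).

```python
# Sketch with a hypothetical level ranking: ordering artifacts so that
# higher-abstraction models are evaluated before lower-abstraction ones.
ABSTRACTION_RANK = {"PIM": 0, "PSM": 1, "CM": 2}

def evaluation_order(artifacts):
    """artifacts: list of (name, level) pairs; returns the artifact names
    sorted from highest to lowest abstraction level."""
    return [name for name, level in
            sorted(artifacts, key=lambda a: ABSTRACTION_RANK[a[1]])]
```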