Applicability. Algorithms can be compared not only with regard to their resource
requirements, but also with regard to their functionality. The basis of such comparisons
will be quite different from those based on, say, asymptotic analysis.
A common error is to compare the resource requirements of two algorithms that
perform subtly different tasks. For example, the various approximate string match-
ing algorithms do not yield the same results—strings that are alike according to
one algorithm can be dissimilar according to another. Comparing the costs of these
algorithms may not be particularly informative.
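To see why such comparisons can mislead, consider a minimal sketch (in Python, with invented strings) that scores the same pairs under two common similarity measures: normalised Levenshtein similarity and Jaccard overlap of character bigrams. The pairs are contrived so that the two measures disagree about which strings are alike.

def levenshtein(s, t):
    # Classic dynamic-programming edit distance, computed row by row.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            cost = 0 if cs == ct else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def edit_similarity(s, t):
    # Edit distance rescaled to [0, 1]; 1.0 means identical.
    longest = max(len(s), len(t)) or 1
    return 1.0 - levenshtein(s, t) / longest

def bigram_jaccard(s, t):
    # Jaccard overlap of the sets of character bigrams.
    a = {s[i:i + 2] for i in range(len(s) - 1)}
    b = {t[i:i + 2] for i in range(len(t) - 1)}
    return len(a & b) / len(a | b) if a | b else 1.0

pairs = [("abcdefgh", "efghabcd"),   # rotation by half
         ("abcdefgh", "axcxexgx")]   # scattered substitutions
for s, t in pairs:
    print(s, t, round(edit_similarity(s, t), 2), round(bigram_jaccard(s, t), 2))

The rotated pair scores 0.0 by edit similarity but 0.75 by bigram overlap; the substituted pair scores 0.5 and 0.0 respectively. Two "approximate matching" methods thus rank the same pairs in opposite orders, so comparing only their costs says nothing about which answers are wanted.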
Human Studies
A variable in many studies is the user. Humans need to be involved to resolve many
kinds of research question: whether the compressed image is of satisfactory quality,
whether the list of responses from the search engine is useful, whether a programming
language feature is of value. Humans can be used to assess outputs—did the robot do
a good job of the housework? Is this Web page suitable for children? And humans can
be the subject of experiments—what are the main shortcomings of this user interface?
Which of these technologies is most helpful for navigating an unfamiliar city?
Appropriate use of humans in experiments allows many forms of rich measurement
that provide insights into the value of computational methods. These can be
qualitative, such as assessments of ease of use, or subjective, such as self-reported
feedback on new technologies. They can also be quantitative, through independent
observation of behaviours and responses.
Design of human studies is treated in detail in research methods texts written
for information systems, psychology, or business, and is beyond the scope
of this book. However, researchers should consider whether humans are needed for
their work, because of the depth and value that a well-designed human study
can add to the measurement of an experiment. Some of the questions that should be
considered include:
• Are human assessors or human subjects needed? To what extent will the results
be persuasive if humans are not used?
• How many humans will be needed, and who will they be? How will their eligibility,
typicality, and relevant competence be determined?
• What instructions will they be given, and how will the experimenter avoid
communicating to the subjects what the desired outcome is? That is, how will subject
bias be avoided?
• Across large, repetitive tasks, such as annotation of a collection of items, how will
consistency be ensured? (One quantitative check, inter-annotator agreement, is
sketched after this list.)
• How carefully will the experiment have to be planned and prepared for? Should it
be run iteratively, with early trials used to identify problems that are addressed in
later versions?
• Does the experiment need to be blind, or double-blind?
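On the consistency question, a standard quantitative check is inter-annotator agreement. The sketch below (in Python; the two annotators' labels are invented for illustration) computes Cohen's kappa, which discounts the raw agreement rate by the agreement expected by chance.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    # Cohen's kappa for two annotators labelling the same items.
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: probability that two independent annotators
    # with these marginal label frequencies would happen to agree.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_e == 1.0:  # degenerate case: both annotators used a single label
        return 1.0
    return (p_o - p_e) / (1.0 - p_e)

# Invented judgements of ten items by two assessors.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))   # 0.583: moderate agreement

A kappa well below 1.0 on trial annotations is a sign that the instructions or the task definition need revision before the full study is run.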