As another example, we might say that the evidence for the claim that a network
is qualitatively improved is that average times to transmit a packet are reduced—a
quantity that can be measured. But if the aim of network improvement is simplified to
the goal of reducing wait times, then other aspects of the qualitative aim (smoothness
of transmission of video, say, or effectiveness of service for remote locations) may
be neglected.
In other words, once a qualitative aim is replaced by a single quantitative measure,
the goal of research in the field can shift away from achievement of a practical
outcome, and instead consist entirely of optimization to the measure, regardless
of how representative the measure is of the broader problem. A strong research
program will rest, in part, on recognition of the distinction between qualitative goals
and the different quantitative approximations to those goals.
The problem of optimization-to-a-measure is particularly acute for fields that
make use of shared reference data sets for the evaluation of new
methods. It is all too easy for researchers to begin to regard the standard data as being
representative of the problem as a whole, and to tune their methods to perform well
on just these data sets. Any field in which the measures and the data are static is at
risk of becoming stagnant.
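The risk described above can be made concrete with a small simulation. The following is a minimal sketch, not drawn from any real benchmark: it uses synthetic data and a hypothetical one-parameter "method" (a simple threshold classifier) to show how tuning to a single fixed reference data set tends to inflate the measured score relative to performance on fresh data.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def make_dataset(n=200):
    """Synthetic task: the true rule is x > 0.5, with 20% label noise."""
    data = []
    for _ in range(n):
        x = random.random()
        label = (x > 0.5) != (random.random() < 0.2)  # flip label w.p. 0.2
        data.append((x, label))
    return data

def accuracy(threshold, data):
    """Score of the one-parameter method: classify as positive if x > threshold."""
    return sum((x > threshold) == y for x, y in data) / len(data)

# The fixed, shared reference data set that everyone evaluates against.
benchmark = make_dataset()

# "Tune the method to perform well on just this data set": exhaustively
# pick the threshold that maximizes accuracy on the benchmark.
best = max((t / 1000 for t in range(1000)),
           key=lambda t: accuracy(t, benchmark))

score_on_benchmark = accuracy(best, benchmark)

# Average performance of the tuned method on 50 fresh samples of the
# same underlying task -- a proxy for the broader problem.
score_on_fresh = sum(accuracy(best, make_dataset()) for _ in range(50)) / 50

print(f"on the benchmark: {score_on_benchmark:.3f}, "
      f"on fresh data: {score_on_fresh:.3f}")
```

Because the threshold is chosen to maximize the benchmark score, that score absorbs the benchmark's sampling noise and sits above the method's performance on fresh data, even though nothing about the method itself has improved.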
Good and Bad Science
Questions about the quality of evidence can be used to evaluate other people's
research, and provide an opportunity to reflect on whether the outcomes of your
work are worthwhile. There isn't a simple division of research into “good” and
“bad”, but it is not difficult to distinguish valuable research from work that is weak
or pointless.
The merits of formal studies are easy to appreciate. They provide the kind of math-
ematical link between the possible and the practical that physics provides between
the universe and engineering.
The merits of well-designed experimental work are also clear. Work that exper-
imentally confirms or contradicts the correctness of formal studies has historically
been undervalued in computer science: perhaps because standards for experimenta-
tion have not been high; perhaps because the great diversity of computer systems,
languages, and data has made truly general experiments difficult to devise; or perhaps
because theoretical work with advanced mathematics is more intellectually imposing
than work that some people regard as mere code-cutting. However, many questions
cannot be readily answered through analysis, and a theory without practical confir-
mation is of no more interest in computing than in the rest of science.
Research that consists of proposals and speculation, entirely without a serious
attempt at evaluation, can be more difficult to respect. Why should a reader regard
such work as valid? If the author cannot offer anything to measure, arguably it
isn't science. And research isn't “theoretical” just because it isn't experimental.
Theoretical work describes testable theories.