A related problem is the re-naming fallacy, often observed in the work of scientists
who are attempting to reposition their research within a fashionable area. Calling a
network cache a “local storage agent” doesn't change its behaviour, and if the term
“agent” can legitimately be applied to any executable process then the term's explanatory
power is slim—a particular piece of research is not made innovative merely by
changing the terminology. Likewise, a paper on natural language processing for “Web
documents” should presumably concern some issues specific to the Web, not just any
text; a debatable applicability to the Web does not add to the contribution. And it
seems unlikely that a text indexing algorithm is made “intelligent” by improvements
to the parsing. Renaming existing research to place it in another field is bad science.
It may be necessary to refine a hypothesis after initial testing; indeed, much of
scientific progress can be viewed as refinement and development of hypotheses to fit
new observations. Occasionally there is no room for refinement, a classic example
being Einstein's prediction of the deflection of light by massive bodies—a hypothesis
much exposed to disproof, since it was believed that significant deviation from the
predicted value would invalidate the theory of general relativity. But more typically
a hypothesis evolves in tandem with refinements in the experiments.
However, the hypothesis should not follow the experiments. A hypothesis will
often be based on observations, but can only be regarded as confirmed if it is able
to make successful predictions. There is a vast difference between an observation
such as “the algorithm worked on our data” and a tested hypothesis such as “the
algorithm was predicted to work on any data of this class, and this prediction has
been confirmed on our data”. Another perspective on this issue is that, as far as
possible, tests should be blind. If an experiment and hypothesis have been fine-tuned
on the data, it cannot be said that the experiment provides confirmation. At best the
experiment has provided observations on which the hypothesis is based. In other
words: first hypothesize, then test.
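In experimental terms, one way to keep a test blind is to set the evaluation data aside before any tuning takes place. The sketch below is a hypothetical illustration (the data, the placeholder algorithm, and the 45 % threshold are all invented for the example): the prediction is fixed using only the tuning portion, and is then checked against data that played no part in forming it.

```python
import random

def evaluate(algorithm, cases):
    """Fraction of cases the algorithm handles correctly (hypothetical success metric)."""
    return sum(1 for case in cases if algorithm(case)) / len(cases)

def algorithm(case):
    # Placeholder for the method under study; succeeds on even-numbered cases.
    return case % 2 == 0

# Hypothetical data; split once, before any tuning, so the test portion stays "blind".
random.seed(0)
data = list(range(1000))
random.shuffle(data)
tuning_data, test_data = data[:700], data[700:]

# 1. Fine-tune and state the prediction using only the tuning data.
tuning_score = evaluate(algorithm, tuning_data)
prediction = 0.45  # "the algorithm will succeed on at least 45% of unseen data of this class"

# 2. Only afterwards check that prediction against the held-out data.
test_score = evaluate(algorithm, test_data)
print(f"tuning score {tuning_score:.2f}, blind test score {test_score:.2f}")
print("prediction confirmed" if test_score >= prediction else "prediction falsified")
```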
Where two hypotheses fit the observations equally well and one is clearly simpler
than the other, the simpler should be chosen. This principle, known as Occam's razor,
is purely a convenience, but it is well established and there is no reason to choose a
complex explanation when a simpler one is available.
Defending Hypotheses
One component of a strong paper is a precise, interesting hypothesis. Another component
is the testing of the hypothesis and the presentation of the supporting evidence.
As part of the research process you need to test your hypothesis and if it is correct—or,
at least, not falsified—assemble supporting evidence. In presenting the hypothesis,
you need to construct an argument relating your hypothesis to the evidence.
For example, the hypothesis “the new range searching method is faster than previous
methods” might be supported by the evidence “range search amongst n elements
requires 2 log₂ log₂ n + c comparisons”. This may or may not be good evidence,
but it is not convincing because there is no argument connecting the evidence to the
hypothesis.
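One way to supply such an argument is to work the claimed cost through and set it against the cost of an established method. The sketch below is illustrative only; the constant c, the choice of a plain binary search as the baseline, and the sample sizes are assumptions, not part of the original claim.

```python
import math

def claimed_comparisons(n, c=3):
    """The claimed range-search cost: 2 * log2(log2(n)) + c comparisons (c is illustrative)."""
    return 2 * math.log2(math.log2(n)) + c

def binary_search_comparisons(n):
    """Cost of a plain binary search over n elements, used here as a baseline."""
    return math.log2(n)

for n in (10**3, 10**6, 10**9):
    print(f"n = {n:>13,}: claimed {claimed_comparisons(n):5.1f} "
          f"vs. binary search {binary_search_comparisons(n):5.1f} comparisons")
```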
 