surprising are papers where the authors have utterly misunderstood the norms of
research or presentation for the field, such as papers where the authors have made
no use of standard resources such as data sets, or, for example, a paper on search
technology written as a narrative from the imagined perspective of a document.
Most curiously of all, in some papers there is no obvious research question, no
statement of aims or goals, and no claimed contribution. A more subtle problem
of this kind is when a paper appears to tell a coherent story, but on inspection it
becomes clear that, say, the experimental results are unrelated to the conclusions. In
some cases they seem to be on a different topic altogether. An example was a paper
that gave results for the efficiency of a string search method but drew the conclusion
that the method enhanced data privacy. Stated so concisely, the paper sounds absurd!
And yet such problems are not rare.
Inconsistency, Inadequacy, and Incompleteness
Some papers seem reasonable in parts, but the parts don't belong in the same document. A sensible, well-organised paper may be framed in terms of grandiose, ambitious claims that can only be described as ridiculous.7 Or there may be a detailed, insightful literature review, but it is either disconnected from the contribution, or, bizarrely, the contribution is less interesting than the previous work that was described so well.
For papers that are overall at a high standard, perhaps the single commonest
problem that leads to rejection is that the experiments are inadequate. There may be
an interesting method, but the experiments are trivial or uninformative, and fall far
short of supporting the claims; often, in these cases, the problem is that the data set
used is too artificial to allow any interesting conclusion to be drawn. Or a small data
set may be used to support claims for applications at an entirely different scale, such
as a set of a few thousand documents being used to make claims about Web search.
Or the data set may not be relevant to the problem at all. It is as if the researchers
(Footnote 6 continued)
dictionary for medical practitioners, among others. And many that were not so respectable; topics
included image enhancement for ancient rock carvings (evaluated on a single image), use of XML
for storing machine maintenance logs (utterly trivial), automated translation of eighteenth-century
English text into modern usage (only arguably modern, but unarguably garbled and harder to read),
and a tool for distinguishing between kinds of spider (use of a computer for a task does not mean
the task is computer science).
7 An example was a Ph.D. thesis that concerned how to develop software specifications in terms
of a particular way of describing assertions and tests. The work was ambitious, but did appear to
achieve reasonable initial outcomes. However, the motivation was that the work would ultimately
make it unnecessary to write programs, and that the specifications could be automatically inferred
from transcripts of human conversation. (This condensation of several pages of rambling text into
a single sentence doesn't convey the full eccentricity of these claims.) No connection was made
between the claims and the actual contribution.