8.1.1 Early Signs
Detecting early signs of potentially valuable ideas has both theoretical and practical
implications. For instance, peer reviews of new manuscripts and new grant proposals
are under growing pressure of accountability for safeguarding the integrity of
scientific knowledge and optimizing the allocation of limited resources (Chubin
1994; Chubin and Hackett 1990; Hayrynen 2007; Hettich and Pazzani 2006).
Long-term strategic science and technology policies require visionary thinking and
evidence-based foresight into the future (Cuhls 2001; Martin 2010; Miles 2010). In
foresight exercises aimed at identifying future technologies, experts' opinions have been
found, in hindsight, to be overly optimistic (Tichy 2004). The increasing specialization
in today's scientific community makes it unrealistic to expect a single expert to have a
comprehensive body of knowledge covering the multiple key aspects of a subject
matter, especially in interdisciplinary research areas.
The value, or perceived value, of an idea can be quantified in many ways. For
example, the value of a good idea can be measured by the number of lives it
has saved, the number of jobs it has created, or the amount of revenue it
has generated. In the intellectual world, the value of a good idea can be measured
by the number of other ideas it has inspired or the amount of attention it has
drawn. In this chapter, we are concerned with identifying patterns and properties of
information that can tell us something about the potential value of ideas expressed
and embodied in scientific publications. The citation count of a scientific publication is
the number of times other scientific publications have referenced it.
Using citations to guide the search for relevant scientific ideas by way of association,
known as citation indexing, was pioneered by Eugene Garfield in the 1950s
(Garfield 1955). There is a general consensus that citation behavior can be motivated
by both scientific and non-scientific reasons (Bornmann and Daniel 2006). Citation
counts have been used as an indicator of intellectual impact on subsequent research.
There have been debates over the nature of citations and whether positive, negative,
and self-citations should all be treated equally. Nevertheless, even a negative citation
makes it clear that the referenced work cannot simply be ignored.
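To make the notion of a citation count concrete, the following is a minimal sketch of how it can be computed from a set of publication records; the record structure and paper identifiers are illustrative assumptions, not taken from any particular bibliographic database.

```python
# Minimal sketch: computing citation counts from hypothetical records.
# Each record maps a paper identifier to the identifiers of the works it cites.
from collections import Counter

records = {
    "paper_A": ["paper_C"],
    "paper_B": ["paper_A", "paper_C"],
    "paper_D": ["paper_A"],
}

# A paper's citation count is the number of other papers that reference it.
citation_counts = Counter(
    cited for refs in records.values() for cited in refs
)

print(citation_counts["paper_A"])  # 2: cited by paper_B and paper_D
print(citation_counts["paper_C"])  # 2: cited by paper_A and paper_B
```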
Researchers have searched for other clues that may inform us about the potential
impact of a newly published scientific paper, especially clues that can be readily
extracted from routinely available information at the time of publication instead
of waiting for download and citation patterns to build up over time. Factors such
as the track record of the authors, the prestige of the authors' institutions, and the prestige of
the journal in which an article is published are among the most promising ones
that can provide some assurance of the quality of the article (Boyack
et al. 2005; Hirsch 2007; Kostoff 2007; van Dalen and Kenkens 2005; Walters
2006). The common assumption central to approaches in this category is that great
researchers tend to consistently deliver great work and, in a similar vein, that an
article published in a high-impact journal is also likely to be of high quality itself.
On the one hand, these approaches avoid reliance on data that may not be readily
available upon the publication of an article and thus free analysts from constraints
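As an illustration of combining such publication-time signals, the sketch below scores a new article from the kinds of factors mentioned above. The feature names, weights, and values are hypothetical placeholders; studies in this line of work estimate such weights against observed citation data rather than fixing them by hand.

```python
# Hedged sketch: scoring a new article from signals available at publication
# time (author track record, institutional prestige, journal prestige).
# All features and weights here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ArticleSignals:
    author_h_index: float     # track record of the authors
    institution_rank: float   # prestige of the authors' institutions (0-1)
    journal_impact: float     # prestige of the publishing journal

def predicted_impact(a: ArticleSignals) -> float:
    # Illustrative linear combination of publication-time signals.
    return (0.5 * a.author_h_index
            + 2.0 * a.institution_rank
            + 1.5 * a.journal_impact)

example = ArticleSignals(author_h_index=20, institution_rank=0.8, journal_impact=5.2)
print(predicted_impact(example))
```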