major revisions, reject”), numerical ratings, and written recommendations with
justifications. Most editors now omit the name of the author before sending manuscripts
for review, presumably to reduce the chances of personal bias. Yet it is frequently
possible to guess the author from self-citations (for example, “This study extends my
previous work on punctuation (Smith, 2010) …”) and use the guess to assess track
record. Even when reviewers cannot identify the author, the editor can, and may use the
knowledge to resolve disagreements among reviewers' assessments.
Reviewers frequently disagree in their assessments of manuscripts, sometimes a
little, sometimes a lot. The correlation between reviewers' assessments of manuscripts
is likely no greater than that achieved by reviewers of grant proposals: a modest but
stubbornly consistent r = +0.50 (Thorngate, Dawes & Foddy, 2009; see also Petty,
Fleming & Fabrigar, 1999; Whitehurst, 1984).
Editors have a wide variety of decision rules for resolving these disagreements,
ranging from taking the advice of the most credible reviewer to flipping a
coin. Many of these options can be represented as a weighted average of the
reviewers' assessments and the track record of the manuscript author. Suppose, for
example, that Reviewer X gave Manuscript A her 2nd-highest rating, and Reviewer Y
gave it his 7th-highest rating, out of 100 manuscripts. Suppose also that the editor
counted the number of publications of A's author, compared the count to those of the
other 99 manuscript authors, and calculated that A's author had the 5th-best track
record. The editor might then apply a simple formula such as the one below to resolve
X's and Y's disagreement about the rank of Manuscript A.
Let:
wmq = the weight given to the rank of manuscript quality;
wtr = the weight given to the rank of track record; and
wmq + wtr = 1.00.
Weighted rank of Ms A = wmq*(Xrank + Yrank)/2 + wtr*track-rank.    (1)
So if, for example, wmq = 0.7 and wtr = 0.3, then
Weighted rank of manuscript A = 0.7*(2+7)/2 + 0.3*5 = 3.15 + 1.50 = 4.65.
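Equation 1 and this arithmetic can be checked in a few lines. The study's own simulation is written in R; the Python sketch below is only an illustration, and the function name `weighted_rank` is ours:

```python
def weighted_rank(reviewer_ranks, track_rank, w_mq=0.7, w_tr=0.3):
    """Equation 1: weighted rank of a manuscript (lower is better)."""
    assert abs(w_mq + w_tr - 1.0) < 1e-9, "weights must sum to 1.00"
    mean_reviewer_rank = sum(reviewer_ranks) / len(reviewer_ranks)
    return w_mq * mean_reviewer_rank + w_tr * track_rank

# Manuscript A: ranked 2nd by Reviewer X, 7th by Reviewer Y,
# author's track record ranked 5th among the 100 authors.
print(weighted_rank([2, 7], 5))  # 4.65, up to floating-point rounding
```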
The editor could then compare this weighted rank of 4.65 with the weighted ranks
of all other 99 manuscripts similarly calculated, and publish manuscripts from the
highest-ranked downward until the journal ran out of space. If the editor set wtr = 0.0,
then the resulting weighted rank would simply be the average of the two reviewers'
assessments of manuscript quality. If the editor set wtr = 1.0, then the reviewers'
assessments would be given no weight and only track record would matter. Our
simulation addressed what happens at and between these extremes.
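The effect of moving wtr between these extremes can be seen with a small hypothetical pool; the manuscripts and ranks below are invented for illustration, not taken from the study:

```python
# Invented pool: manuscript -> (Reviewer X rank, Reviewer Y rank, track rank)
pool = {
    "A": (2, 7, 5),
    "B": (1, 3, 40),
    "C": (10, 8, 1),
    "D": (6, 5, 20),
}

def weighted(x_rank, y_rank, track_rank, w_tr):
    """Equation 1 with w_mq = 1 - w_tr; lower scores win."""
    return (1 - w_tr) * (x_rank + y_rank) / 2 + w_tr * track_rank

journal_space = 2  # slots available in this toy journal
for w_tr in (0.0, 0.3, 1.0):
    ranked = sorted(pool, key=lambda m: weighted(*pool[m], w_tr))
    print(f"wtr={w_tr}: accept {ranked[:journal_space]}")
```

At wtr = 0.0 only the reviewers' average matters, so B and A are accepted; at wtr = 1.0 only track record matters, and C displaces B.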
2 The Simulation
The simulation, written in the programming language R and available from the
authors, examined how the above decision rule (Equation 1), which weighs two
reviewers' judgments of manuscript quality against the author's track record,
influenced three outcomes of competitions for journal space. These outcomes were:
1. the average percent of the most-talented authors having the greatest number of
publications;
2. the average percent of the highest quality manuscripts that were published; and
3. the percent of winners of a first competition for journal space who accrued the
highest track records in subsequent competitions.
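The R program itself is available from the authors. For readers who want the gist, here is a minimal Python sketch of a single competition under our own assumed parameters (Gaussian latent quality, independent reviewer noise, an arbitrary track-record ordering); it estimates only the second outcome above and omits the repeated competitions of the full simulation:

```python
import random

def simulate(n=100, space=20, w_tr=0.3, noise=1.0, seed=1):
    """One round of the competition: n manuscripts, `space` journal slots.

    Each reviewer observes true quality plus independent Gaussian noise
    (larger `noise` lowers the inter-reviewer correlation). Track-record
    ranks are assigned at random here, purely as a placeholder.
    Returns the percent of the `space` highest-quality manuscripts that
    the weighted rule (Equation 1) actually accepts.
    """
    rng = random.Random(seed)
    quality = [rng.gauss(0, 1) for _ in range(n)]
    x = [q + rng.gauss(0, noise) for q in quality]  # Reviewer X's view
    y = [q + rng.gauss(0, noise) for q in quality]  # Reviewer Y's view
    track = list(range(n))
    rng.shuffle(track)  # arbitrary track-record ordering, 0 = best

    def rank(values):  # rank 1 = highest value
        order = sorted(range(n), key=lambda i: -values[i])
        r = [0] * n
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return r

    xr, yr = rank(x), rank(y)
    weighted = [(1 - w_tr) * (xr[i] + yr[i]) / 2 + w_tr * (track[i] + 1)
                for i in range(n)]
    accepted = sorted(range(n), key=lambda i: weighted[i])[:space]
    best = set(sorted(range(n), key=lambda i: -quality[i])[:space])
    return 100.0 * len(best.intersection(accepted)) / space

print(simulate())  # percent of the highest-quality manuscripts published
```

With `noise=0.0` and `w_tr=0.0` the reviewers rank manuscripts perfectly and all of the best ones are published (100%); raising `noise` or shifting weight onto the random track ranks degrades that figure.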