We constructed our examination by creating 100 hypothetical journals, all of which
we assumed were published four times a year for 25 years (100 issues). We also
assumed that the editors of each journal received 100 new manuscripts per issue and
sent each manuscript to two reviewers. To simplify programming, we additionally
assumed that each manuscript was written by only one author, and that each author
submitted one new manuscript per issue. Thus, the 100 authors submitting
manuscripts to the 1st issue of Journal J were the same as the 100 authors submitting
different manuscripts to the 2nd issue of Journal J, the 3rd issue, etc. One hundred
different authors submitted to Journal K, another 100 to Journal L, etc. Our selection
of 100 for journals, issues, and manuscripts per issue was arbitrary. Preliminary runs
of the simulation with variations of these numbers gave equivalent results.
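A minimal sketch of this setup, written in Python with hypothetical constant names (none of these identifiers come from the original simulation code), might begin as follows:

```python
import numpy as np

# Hypothetical constants mirroring the design described above.
N_JOURNALS = 100    # independent journals, each with its own loyal author pool
N_ISSUES = 100      # 4 issues per year for 25 years
N_AUTHORS = 100     # one new manuscript per author per issue
N_REVIEWERS = 2     # independent reviews per manuscript
```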
In order to determine the effects of resource scarcity on the three simulation
outcomes, we varied npub, the number of the 100 submitted manuscripts that could be
published in each issue. Half the simulation runs allowed 40 out of the 100
manuscripts to be published; the other half allowed only 20 of the 100 manuscripts to
be published.
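Continuing the sketch, the scarcity manipulation amounts to crossing the rest of the design with two acceptance quotas:

```python
# Resource-scarcity factor: how many of the 100 submissions per issue are accepted.
NPUB_CONDITIONS = (40, 20)   # half the runs used 40, the other half 20
```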
For each journal, the simulation began by assigning a unique “talent” score to each
of its 100 loyal authors. The talent scores were generated by random sampling from a
normal distribution; in keeping with standard psychometric assumptions (e.g.,
Kaufman, 2009), each author's talent score, like an IQ score, remained constant
throughout her/his entire “career” of 100 manuscript submissions. All authors began their
careers with no publications (so the ranks of all track records were tied), submitting
their first manuscript for issue 1 of their assigned journal.
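As a sketch of this step (assuming a standard-normal scale for talent, since the text specifies only that the scores are normally distributed and fixed over a career):

```python
rng = np.random.default_rng(seed=1)   # fixed seed only so the sketch is reproducible

# One fixed "talent" score per author of a given journal; it never changes
# across that author's 100 manuscript submissions.
talent = rng.normal(loc=0.0, scale=1.0, size=N_AUTHORS)
```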
After their talent scores were assigned, the 100 authors each submitted a
manuscript to the first issue of their assigned journal. The manuscripts varied in their
true quality, which we defined as the average of what thousands of reviewers would
judge the quality to be. A number representing the true quality of an author's current
manuscript was calculated by adding or subtracting some normally-distributed,
random error to each author's talent score. This error defined the first of two sources
of chance that could influence the three simulation outcomes. We varied the amount
of this random error by changing the correlation coefficient between talent and true
quality. In half the simulation runs we set the talent/true-quality correlation, rttq =
+0.75; it was rttq = +0.50 for the other half of the runs. The lower the correlation, of
course, the greater the amount of random error that was added.
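One standard way to produce a variable that correlates with talent at a chosen level r is to standardize talent and mix it with independent normal error, weighting the two parts by r and sqrt(1 - r^2). The sketch below assumes that construction; the published simulation may have calibrated the error variance differently.

```python
def noisy_copy(x, r, rng):
    """Return a variable whose expected correlation with x is r.

    Standardizes x, then mixes it with independent standard-normal error
    using weights r and sqrt(1 - r**2).
    """
    z = (x - x.mean()) / x.std()
    error = rng.standard_normal(x.shape)
    return r * z + np.sqrt(1.0 - r ** 2) * error

# First source of chance: talent -> true quality of the current manuscript.
r_ttq = 0.75                 # the text's rttq; +0.50 in the other half of the runs
true_quality = noisy_copy(talent, r_ttq, rng)
```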
Once the true qualities of the 100 manuscripts were assigned, each manuscript was
assessed by two reviewers. The reviewers independently assigned a new number
representing the judged quality of each manuscript they reviewed. The judged quality
of a manuscript was based on the true quality plus or minus another dollop of
normally-distributed random error. This was the second of the two sources of chance
that could influence the three simulation outcomes. As above, the dollop of error was
varied by changing the correlation coefficient between the true and judged quality
of manuscripts. In half the simulation runs, rtjq = +0.75; for the other half it was
rtjq = +0.50.
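The same construction can be reused for this second source of chance, drawing an independent error term for each of the two reviewers:

```python
# Second source of chance: true quality -> each reviewer's judged quality.
r_tjq = 0.75                 # the text's rtjq; +0.50 in the other half of the runs
judged_quality = np.stack(
    [noisy_copy(true_quality, r_tjq, rng) for _ in range(N_REVIEWERS)]
)                            # shape: (N_REVIEWERS, N_AUTHORS)
```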
At this point in the simulation, each manuscript was given two judgments of
quality, one from each of two reviewers. The judgments of the first reviewer were
now ranked across all 100 manuscripts submitted for the upcoming journal issue. The
judgments of the second reviewer were likewise ranked, then the two ranks of each
manuscript were averaged. Finally, this average rank was combined with the rank of