
by Stefano Mizzaro (University of Udine)

In academic publishing, peer review is the practice used to evaluate the quality of papers before publication. Each submission undergoes a thorough review by a few peers, who decide whether the paper is worthy of publication. This pre-publication filter is important to guarantee that the scientific literature is of high quality. However, although peer review is a well-known, widely adopted, and accepted practice, it is not immune to criticism, and its appropriateness has been challenged by many; one recent example is the paper by Tomkins et al. [1].

Some of these critics go beyond discussion and propose concrete alternatives. Such proposals essentially aim to exploit data and information that are available but are not currently used for evaluation purposes. Indeed, after publication, papers are (hopefully) read by fellow researchers, who usually form an opinion about their quality. However, these opinions remain private, are communicated informally among a few collaborators, or at best manifest themselves (in a rather implicit way) in citations. In extreme terms, we might say they are lost, at least for the purposes of research evaluation. At the very least, the research community fails to leverage precious data and information that could help research evaluation.

Some alternative proposals, including Readersourcing [3] and TrueReview [2], make use of these data. Both rely on the opinions of readers, who are asked directly to judge the papers they have read. Reader assessments are then stored openly as numerical scores. This allows numerical quality scores to be computed for papers, authors, and readers alike, by means of an algorithm capable of estimating the quality of the assessments. In other words (see the original publications for the details): readers are invited to submit a score on the quality of the papers they have read; all the scores concerning a specific paper can then be aggregated, weighted according to the quality of the readers, to compute an overall quality score for the paper; the scores of all papers by a certain author can in turn be aggregated to compute a quality index for researchers as authors; and the quality of the assessments they express can be used to compute a quality index for researchers as readers.
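To make the idea concrete, the sketch below shows one simple way such a scheme could work; it is an illustrative toy example, not the actual algorithms proposed in Readersourcing [3] or TrueReview [2]. All names (aggregate, reader_quality, the sample ratings) are hypothetical. Paper scores are computed as a weighted mean of reader scores, and reader quality is in turn estimated from how closely each reader's scores track the emerging consensus, iterating a few times so the two estimates reinforce each other.

from collections import defaultdict

def aggregate(ratings, iterations=10):
    """ratings: list of (reader_id, paper_id, score) tuples, with score in [0, 1]."""
    reader_quality = defaultdict(lambda: 1.0)   # start with equal trust in all readers
    paper_score = {}

    for _ in range(iterations):
        # 1. Paper score: weighted mean of its readers' scores.
        totals, weights = defaultdict(float), defaultdict(float)
        for reader, paper, score in ratings:
            w = max(reader_quality[reader], 0.05)  # small floor to avoid zero weights
            totals[paper] += w * score
            weights[paper] += w
        paper_score = {p: totals[p] / weights[p] for p in totals}

        # 2. Reader quality: how closely the reader's scores track the consensus.
        errors, counts = defaultdict(float), defaultdict(int)
        for reader, paper, score in ratings:
            errors[reader] += abs(score - paper_score[paper])
            counts[reader] += 1
        for reader in errors:
            reader_quality[reader] = 1.0 - errors[reader] / counts[reader]

    return paper_score, dict(reader_quality)

if __name__ == "__main__":
    ratings = [
        ("alice", "paper1", 0.9), ("bob", "paper1", 0.8), ("carol", "paper1", 0.2),
        ("alice", "paper2", 0.4), ("bob", "paper2", 0.5),
    ]
    papers, readers = aggregate(ratings)
    print(papers)   # per-paper quality estimates
    print(readers)  # per-reader quality estimates

An author index could then be obtained by aggregating the scores of all papers by that author, and the reader_quality values themselves serve as the quality index for researchers in their role as readers.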

These proposals, and similar ones, rely on crowdsourcing [3]: the practice of outsourcing, to large crowds of unknown people, tasks that are usually performed by a few experts. Although this approach might seem unfeasible, and it has had its share of criticism, there are many successful examples of crowdsourcing. Wikipedia, for instance, is a free, good-quality crowdsourced encyclopaedia; marketplaces for crowdsourcing exist and are flourishing [L1], [L2]; and crowdsourcing platforms specialising in research and development activities are available [L3].

Indeed, crowdsourcing peer review would also address the scarcity of reviewers, which seems to be a growing problem, as discussed in [3]. Although it is not yet clear whether crowdsourcing peer review would allow the quality of the scientific literature to remain high, this is an interesting research question that, in my opinion, is worth addressing.

Links:
[L1] www.mturk.com
[L2] www.crowdflower.com
[L3] www.innocentive.com

References:
[1] A. Tomkins, M. Zhang, W. D. Heavlin: "Reviewer bias in single- versus double-blind peer review", PNAS, 114(48):12708-12713, 2017, http://www.pnas.org/content/114/48/12708.
[2] L. de Alfaro, M. Faella: "TrueReview: A Proposal for Post-Publication Peer Review" (white paper), Technical Report UCSC-SOE-16-13, 2016.
[3] S. Mizzaro: "Readersourcing - A manifesto", Journal of the American Society for Information Science and Technology, 63(8):1666-1672, 2012, Wiley.

Please contact:
Stefano Mizzaro, University of Udine
