
by Dino Mandrioli (Politecnico di Milano)

Conferences are harmful to research evaluation.

The document [1] addresses the critical issue of informatics research evaluation in a complete and thorough way, touching on all the main facets of the problem. I substantially agree with most of the arguments and proposals presented therein. In this note, however, I discuss my only point of serious disagreement, i.e., the role assigned to conferences in the evaluation of research in computer science. It is a fact that in our community – probably the only scientific community that exhibits this peculiarity – conferences often outweigh journals in research assessment procedures, but I believe this anomaly is seriously flawed and dangerous, and should be countered – not accepted or, even worse, encouraged.

There are two main arguments for preferring conferences over journals as the main publication medium:
1. Journal publishing takes too long.
2. Some conferences are even more selective than many journals.

My main counterpoints to the above arguments are:

1.a) It is true that reviewing journal papers (nowadays it is the reviewing, not the publishing, that is slow!) often takes a long time, but for highly technical papers this is necessary. Research advances quickly, but its evaluation needs its own time!

1.b) Some highly prestigious journals (e.g., JACM, TOPLAS) have adopted effective policies to dramatically reduce the average response time: typically, a preliminary quick review states whether the paper is readable and interesting enough to deserve a thorough further reviewing effort; if a paper passes this phase, the author knows that acceptance will depend mostly on technical soundness and can be more patient. See also the symmetric point 2.b) about conference reviewing.

1.c) From a more general, “social”, point of view, this claim is a symptom of the typical “time to market” syndrome which often causes serious damage even in the industrial world: research needs serious evaluation before “going to the market” (remember the “cold fusion” phenomenon).

2.a) Top journals are highly selective too, sometimes even more so than top conferences. Furthermore, people often try to “submit anyway” to conferences even if a paper is not yet mature (e.g., to meet the deadline), or they submit several papers hoping for some “good luck” with a few of them. Many authors are more conservative and careful when submitting to high-level journals.

2.b) Being highly selective does not guarantee high quality. The conference and journal reviewing processes are necessarily different: having many papers to review under hard deadlines leads even serious committee members to superficial reading, in which generic interest, “sympathy for the topic” (or even for the author), readability, and, in some cases, the distribution of the load across several sub-reviewers overwhelm thorough and comparative evaluation. The typical conference bidding process tends to produce clusters of strongly related topics, thus creating the risk of self-referencing environments – just the opposite of the real goal of cross-circulating new ideas. Furthermore, in program committee (PC) meetings, whether in person or electronic, the prevailing opinion is often that of the member who “speaks the loudest” (“over my dead body”).

The 2014 “NIPS experiment” has been mentioned often lately (see, e.g., [2]): the PC was split into two independent subcommittees, which agreed only on the top and bottom evaluations and whose opinions differed substantially on about 60% of the papers in between. This fact has been interpreted in various ways; in some cases it was even used as an argument in favour of keeping top conferences highly selective: “If we pick just the top 10%, we are sure that we reward only the best papers.” I have little doubt that the bottom 10% warrant rejection, but this approach risks rejecting many potentially important papers in the grey area – some of which are probably even more interesting than those unanimously accepted. The literature offers plenty of examples of pioneering papers that suffered several rejections before the community appreciated their value. Important, novel contributions are often controversial and misunderstood; it is easier to gain general appreciation with results in fields that are already familiar to a wide audience. Of course, the same problem may occur with journal publishing, but there it can be mitigated by a careful selection of reviewers, by the opportunity for rebuttal, and by more time and thought for the final decision.

2.c) Tight page limits compel authors to submit “ε-papers”: small, perhaps relevant, advances can be published more easily than well-developed, complete, long-term research. Technical details are omitted or confined to appendices, which are rarely checked.

In conclusion, I would like conferences to return to their original, authentic role, i.e., the circulation and discussion of ideas. Journals remain the best and most natural – though certainly not the only – medium for research evaluation.

References:
[1] F. Esposito et al.: “Informatics Research Evaluation”, An Informatics Europe Report, draft version, 20-10-2017.
[2] M. Vardi: “Divination by Program Committee”, Communications of the ACM, Vol. 60, No. 9, p. 7, 2017.

Please contact:
Dino Mandrioli, Politecnico di Milano, Italy
+39-02-2399-3522
http://home.deib.polimi.it/mandriol/SitoInglese/perswebsiteengl.html
