This section was edited in cooperation with Informatics Europe. Guest editors: Hélène Kirchner (Inria) and Fabrizio Sebastiani (ISTI-CNR)

Research evaluation, as applied to individual researchers, university departments, or research centres, plays a crucial role in recognising and supporting research that can lead to advances in knowledge and benefits to society.

In March 2018, Informatics Europe published a report [1] focusing on the main principles and criteria that should be followed when individual researchers in informatics (computer science) are evaluated for their research activity, addressing the specificities of this area. This subsumes the evaluation of a specific piece of research and can often be generalised to university departments or research centres, since their research performance is largely determined by that of their individual researchers.

by Dino Mandrioli (Politecnico di Milano)

Conferences are harmful for research evaluation.

The document [1] addresses the critical issue of informatics research evaluation in a complete and thorough way, touching on all the main facets of the problem. I substantially agree with most of the arguments and proposals presented therein. In this note, however, I discuss my only point of serious disagreement, i.e., the role assigned to conferences in the evaluation of research in computer science. I acknowledge that in our community – probably the only scientific community that exhibits this peculiarity – conferences often outweigh journals in research assessment procedures, but I believe that this anomaly is seriously flawed and dangerous, and should be countered, not accepted or, even worse, encouraged.

by Stefano Mizzaro (University of Udine)

In academic publishing, peer review is the practice adopted to evaluate the quality of papers before publication. Each submission undergoes a thorough review by peers, who decide whether the paper is worthy of publication. This pre-publication filter is important to guarantee that the scientific literature is of high quality. However, although peer review is a well-known, widely adopted, and accepted practice, it is not immune to criticism, and its appropriateness has been challenged by many. One recent example is the paper by Tomkins et al. [1].

by Alain Girault and Laura Grigori (Inria)

Software is becoming increasingly important in academic research and, consequently, software development activities should be taken into account when evaluating researchers, whether individually or as teams. In 2011, a dedicated working group of the Inria Evaluation Committee addressed the issue of software evaluation and proposed criteria that allow researchers to characterise their own software. In the context of recruitment or team evaluation, this self-assessment exercise allows evaluators to estimate the importance of software development in the authors' activities, to measure the impact of the software within its target community, and to understand its current state and future evolution.

by Giovanni Abramo (IASI-CNR and The University of Waikato)

“Impact”, jointly with “citation”, is probably the most frequent term in the scientometric literature. It recurs so often that its definition is increasingly taken for granted and hardly ever spelled out. Impact is often defined by means of the indicators employed to measure it, rather than as a concept in itself. The obsessive pursuit of measurement resembles a case of gold fever, one symptom of which is the tendency to “first measure, and then ask whether it counts”. The aim of this short note is to revisit the conceptualisation of impact and the indicators used to measure it, which could serve to open a discussion.
