
by Giovanni Abramo (IASI-CNR and The University of Waikato)

“Impact”, together with “citation”, is probably the most frequent term in the scientometric literature. It recurs so often that its definition is increasingly taken for granted and hardly ever spelled out. Impact is often defined by means of the indicators employed to measure it, rather than as a concept in itself. The obsessive pursuit of measurement resembles a case of gold fever, one symptom of which is to “first measure, and then ask whether it counts”. The aim of this short note is to revisit the conceptualization of impact and the relevant measurement indicators, which could serve to open a discussion.

To better appreciate the role of scientometrics in assessing impact, we need to observe its position within the overall research value chain. Governments and private organizations invest in research to achieve scientific and technical advancement, which has the potential to contribute to socio-economic progress. Taking for granted that the measurement of “social” impact is beyond the scope of scientometrics, what kind of impact does scientometrics measure? To answer this question we have to consider scientific activity as “information processing”: the science system consumes, transforms, produces, and exchanges “information”. Scientometrics studies and analyzes, through reference lists, the exchanges of information encoded in published papers. Scientists collect and analyze prior knowledge encoded in verbal forms and add value to it to produce new knowledge, nearly always encoded in papers made accessible to other scientists. For the production of new knowledge “to have an impact”, it has to be used by other scientists. We can therefore state that evaluative scientometrics aims at measuring or predicting the impact of research on future scientific advancement, and conclude that scientometrics concerns “scientific impact” or “scholarly impact”.

But how can it be measured? We frequently hear that the proper basis of research evaluation would be for experts to review the work of their colleagues. In fact, reviewers cannot assess impact; at most they can predict it by examining the characteristics of the publication under evaluation.

In scientometrics, the citation is the natural indicator of impact, as it certifies the use of the cited publication towards the scientific advancement encoded in the citing publication. This position derives from the Mertonian, or normative, theory [1], according to which scientists cite papers to recognize their influence, while being aware that exceptions (uncitedness, undercitation, and overcitation) occur. Although a reviewer might judge certain characteristics of a publication, he or she cannot certify its “use”; nor does any other indicator certify it. Journal impact indicators reflect the distribution of citations of all hosted publications, not of the individual ones. Altmetrics (e.g. Mendeley reader counts) do not certify the use of a publication either. But this does not mean that journal impact indicators and/or altmetrics cannot be useful in assessing the scholarly impact of a publication.

Ideally, when the life of a publication is over, i.e. when it is no longer cited, the accrued citations reflect the publication’s impact. Unfortunately, policy-makers and managers, hoping to make informed decisions, cannot wait the many years (or decades) for citation life cycles to end before conducting research assessments.

The real challenge is: how can future impact be predicted? In the literature the answer is not univocal, but what should not be in question is that late citation counts (as a proxy of long-term impact) serve as the benchmark for determining the best indicator (and its predictive power) as a function of the citation time window. Initially, scholars faced the problem of how long the citation time window should be for early citations to be considered an accurate proxy of later citations [2]. They have also investigated the possibility of increasing the accuracy of impact prediction by combining early citation counts with journal impact indicators [3].
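To make the idea of such a combination concrete, the following is a minimal sketch of predicting long-term citations from early citations and a journal impact indicator via ordinary least squares. The data, variable names, and model form are illustrative assumptions, not the specification actually used in [3].

```python
# Minimal sketch: predict long-term citations from early citations and a
# journal impact indicator. Data and model form are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical publication-level data
early_citations = np.array([2, 0, 5, 1, 8, 3, 0, 12])               # citations in the first 2 years
journal_impact  = np.array([1.4, 0.9, 3.2, 1.1, 4.0, 2.5, 0.7, 5.1])  # journal impact indicator
late_citations  = np.array([10, 1, 30, 4, 55, 18, 2, 80])           # citations after 10+ years (benchmark)

X = np.column_stack([early_citations, journal_impact])
model = LinearRegression().fit(X, late_citations)

# Predicted long-term impact for a new publication (hypothetical values)
predicted = model.predict([[4, 2.0]])
print(f"Predicted long-term citations: {predicted[0]:.1f}")
```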

The identification of alternative indicators or combinations, which could vary in type and weight across disciplines as a function of the citation time window and of the early citations accrued, should be conducted by comparing their predictions of impact against the benchmark of late citation counts, as in the sketch below. This is the real challenge for our community.
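In practice, such a comparison might look like the following sketch, which ranks candidate early indicators by their rank correlation with late citation counts (the benchmark). The indicator names and data are hypothetical, chosen only to illustrate the benchmarking procedure.

```python
# Sketch: compare candidate early indicators against the benchmark of late
# citation counts using Spearman rank correlation. All data are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 200
late_citations = rng.negative_binomial(2, 0.05, n)   # benchmark: long-term citation counts

# Hypothetical candidate indicators observed at the end of an early window
candidates = {
    "early_citations_1y": late_citations * 0.1 + rng.poisson(2, n),
    "early_citations_3y": late_citations * 0.4 + rng.poisson(2, n),
    "journal_indicator":  late_citations * 0.05 + rng.normal(2, 1, n),
}

for name, values in candidates.items():
    rho, _ = spearmanr(values, late_citations)
    print(f"{name:20s} Spearman rho vs late citations: {rho:.2f}")
```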

The multitude of assumptions, conventions, limitations, and caveats of evaluative citation analysis still applies; we simply hope to have stimulated a proper discussion leading to a definitive convergence on the meaning and measurement of a fundamental concept of scientometrics. Otherwise, every assessment exercise becomes metaphysical.

References:
[1] R. K. Merton: “Priorities in scientific discovery”, in R. K. Merton (Ed.), The Sociology of Science: Theoretical and Empirical Investigations, pp. 286–324, Chicago: University of Chicago Press, 1973.
[2] G. Abramo, T. Cicero, C. A. D’Angelo: “Assessing the varying level of impact measurement accuracy as a function of the citation window length”, Journal of Informetrics, 5(4), 659–667, 2011.
[3] G. Abramo, C. A. D’Angelo, G. Felici: “Predicting long-term publication impact through a combination of early citations and journal impact factor”, working paper, 2017.

Please contact:
Giovanni Abramo, IASI-CNR, Italy, and The University of Waikato, New Zealand
+39 06 72597362
