
by Judith ter Schure (CWI)

An estimated 85% of global health research investment is wasted [1]; around one hundred billion US dollars in 2009, the year the estimate was made. The movement to reduce this waste recommends that previous studies be taken into account when prioritising, designing and interpreting new research. Yet current practice in summarising previous studies ignores two crucial aspects: promising initial results are more likely to develop into (large) series of studies than their disappointing counterparts, and conclusive studies are more likely to trigger meta-analyses than less noteworthy findings. Failing to account for these aspects introduces ‘accumulation bias’, a term coined by our Machine Learning research group to cover all possible dependencies potentially involved in meta-analysis. Accumulation bias calls for new statistical methods that limit incorrect decisions from health research while avoiding research waste.

The CWI Machine Learning group in Amsterdam, The Netherlands, develops methods that allow for optional continuation in statistical testing. Thus, in contrast to standard statistical tests, the safe tests we develop retain their statistical validity if one decides, on the spot, to continue data collection and obtain a larger sample than initially planned for ‒ for example because results so far look hopeful but not yet conclusive. Additional research into the application of these safe methods to meta-analysis was inspired by the replicability crisis in science and the movement to reduce research waste.
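
To give a flavour of how a test can tolerate optional continuation, the sketch below is a minimal illustration (not the group's actual method): it monitors a running likelihood ratio, an 'e-value', for Bernoulli data and rejects as soon as the ratio exceeds 1/alpha. Because this ratio is a nonnegative martingale with expectation one under the null hypothesis, Ville's inequality bounds the probability of ever crossing that threshold by alpha, however often one peeks; the parameter values and simulation sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def running_e_value(xs, p0=0.5, p1=0.75):
    """Running likelihood ratio of a simple alternative (p1) against the
    null (p0) for a stream of Bernoulli observations: an e-value process."""
    xs = np.asarray(xs)
    lik1 = np.where(xs == 1, p1, 1 - p1)
    lik0 = np.where(xs == 1, p0, 1 - p0)
    return np.cumprod(lik1 / lik0)

alpha, n_sims, n_max = 0.05, 10_000, 200
false_positives = 0
for _ in range(n_sims):
    xs = rng.integers(0, 2, size=n_max)   # data generated under the null (p = 0.5)
    e = running_e_value(xs)
    # "optional continuation": look at the evidence after every observation
    # and stop (reject) the moment the e-value reaches 1/alpha
    if (e >= 1 / alpha).any():
        false_positives += 1

print(f"false positive rate despite constant peeking: {false_positives / n_sims:.3f}")
# Ville's inequality keeps this below alpha = 0.05, no matter when we stop.
```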

The 85% research waste estimate is calculated by cumulatively considering waste in four successive stages of health research: (1) the choice of research questions, (2) the quality of research design and methods, (3) the adequacy of publication practices and (4) the quality of research reporting. In two of these stages, design and reporting, research waste is caused by a failure to systematically examine existing research. In terms of research design, the paper that produced the estimate [1] stresses that a new study “should not be done unless, at the time it is initiated, the questions it proposes to address cannot be answered satisfactorily with existing evidence”. Its recommendations about reporting state that new studies should “be set in the context of systematic assessments of related studies” [1].

In 2014, a series of follow-up papers put forward by the REWARD Alliance showed that the 2009 recommendations remained just as pressing in 2014. The recommendation to always relate new studies to available research in design and reporting acquired the name evidence-based research in 2016 and has since then been promoted by the Evidence-Based Research Network.

Deciding whether existing evidence answers a research question is a difficult task that is further complicated when the accumulation of scientific studies is continuously monitored. This continuous monitoring is key to ‘living systematic reviews’, which are meta-analyses that incorporate new studies as they become available. Restricting further research when a certain boundary of evidence is crossed introduces bias, while continuous monitoring also creates multiple testing problems. As a result, reducing research waste is only feasible with statistical methods that allow for optional continuation or optional stopping.
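
To make the multiple testing problem concrete, the toy simulation below (illustrative numbers only, not taken from the article) recomputes an ordinary two-sided z-test each time a new batch of observations arrives and rejects at the first look with p < 0.05. Even though the null hypothesis is true, the long-run false positive rate ends up several times the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_looks, n_per_look, n_sims = 0.05, 10, 20, 5_000

rejections = 0
for _ in range(n_sims):
    data = rng.normal(0.0, 1.0, size=n_looks * n_per_look)    # the null is true
    for look in range(1, n_looks + 1):
        x = data[: look * n_per_look]
        z = x.mean() / (x.std(ddof=1) / np.sqrt(x.size))       # ordinary z-test
        if 2 * stats.norm.sf(abs(z)) < alpha:                  # naive p < 0.05 rule
            rejections += 1
            break

print(f"type-I error under naive continuous monitoring: {rejections / n_sims:.3f}")
# Well above the nominal 5% (roughly 0.2 in this configuration).
```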

Optional stopping is a well-studied phenomenon in statistics and machine learning, with a variety of approaches in the frequentist, Bayesian and online learning realms. These approaches are neatly combined, and much generalised, in the safe testing paradigm developed in our group. What is new to the meta-analysis setting is that dependencies arise even without continuously testing a series of studies. The very fact that a series of studies exists already introduces dependencies: the results within it were at least not so disappointing that they prevented the series from growing to its current size.

Meta-analysis is currently mainly considered when a study series of considerable size is available, with a median of around 15 studies in a typical meta-analysis [2]. Large study series are more likely when they include initial promising results within the series than when they include very disappointing ones, just as the availability of the big fish in the cartoon depends on specific smaller fish being available. These dependencies introduce accumulation bias, which in turn inflates false positive error rates when it is ignored in statistical testing.
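
A hypothetical toy simulation illustrates the inflation. In the sketch below, a second study is only performed when the first looked promising (a made-up accumulation rule: z > 1), and a fixed-effect meta-analysis of the resulting two-study series ignores why the series exists; the rejection rate among analysed series then far exceeds the nominal 5% even though the true effect is zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n_per_study, n_sims = 0.05, 30, 50_000

n_series, false_positives = 0, 0
for _ in range(n_sims):
    study1 = rng.normal(0.0, 1.0, size=n_per_study)     # true effect is zero
    z1 = study1.mean() * np.sqrt(n_per_study)
    if z1 <= 1.0:          # made-up accumulation rule: only promising first
        continue           # studies are followed up and meta-analysed
    n_series += 1
    study2 = rng.normal(0.0, 1.0, size=n_per_study)
    pooled = np.concatenate([study1, study2])
    z_meta = pooled.mean() * np.sqrt(pooled.size)        # fixed-effect pooling
    if 2 * stats.norm.sf(abs(z_meta)) < alpha:
        false_positives += 1

print(f"false positive rate among meta-analysed series: {false_positives / n_series:.3f}")
# Well above 0.05: the series only exists because the first study looked good.
```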

Large study series are more likely when they include initial promising results within the series than when they include very disappointing ones, just like the availability of the big fish in the cartoon depends on specific smaller fish available.
Image: Bobb Klissourski (Shutterstock)

Our research tries to determine how to deal with small-fish-dependent large fish, for various fish sizes. We intend to develop these methods for meta-analysis within the period of my PhD research (2017-2022) and to involve other researchers from evidence-based medicine, the movement to reduce research waste, psychology’s reproducibility projects, and software projects such as JASP in implementing and communicating the results.

The recommendations of the 2009 research waste paper are increasingly being heard by chief scientific advisors, funders (such as the Dutch ZonMW) and centres for systematic reviews [3]. Now we need to implement them efficiently with the right statistics.

Links:
[L1] http://rewardalliance.net/
[L2] https://en.wikipedia.org/wiki/Evidence-based_research
[L3] http://ebrnetwork.org/

References:
[1]  I. Chalmers, P. Glasziou: “Avoidable waste in the production and reporting of research evidence”, The Lancet, 374(9683), 86-89, 2009.
[2]  D. Moher, et al.: “Epidemiology and reporting characteristics of systematic reviews”, PLoS Medicine, 4(3), e78, 2007.
[3]  P. Glasziou, I. Chalmers: “Research waste is still a scandal—an essay by Paul Glasziou and Iain Chalmers”, BMJ, 363, k4645, 2018.

Please contact:
Judith ter Schure, CWI, The Netherlands
+31 20 592 4086
