
In a pilot study, automated real-time systematic review updates were feasible, accurate, and work-saving

Open Access. Published: September 19, 2022. DOI: https://doi.org/10.1016/j.jclinepi.2022.08.013

      Highlights

      • We developed a hybrid human expert/artificial intelligence system to keep systematic reviews up to date.
      • The system continuously surveils PubMed/MEDLINE for new relevant articles, and notifies review authors.
      • A living abstract is made available, which shows the status of the review in real-time.
      • In a pilot, the system was effective and reduced workload in a systematic review of COVID-19 vaccination studies.

      Abstract

      Objectives

      The aim of this study is to describe and pilot a novel method for continuously identifying newly published trials relevant to a systematic review, enabled by combining artificial intelligence (AI) with human expertise.

      Study Design and Setting

      We used RobotReviewer LIVE to keep a review of COVID-19 vaccination trials updated from February to August 2021. We compared the papers identified by the system with those found by the conventional manual process by the review team.

      Results

      The manual update searches (last search date July 2021) retrieved 135 abstracts, of which 31 were included after screening (23% precision, 100% recall). By the same date, the automated system retrieved 56 abstracts, of which 31 were included after manual screening (55% precision, 100% recall). Key limitations are that the system searches only PubMed/MEDLINE and considers only reports of randomized controlled trials; we aim to address these limitations in future work. The system is available as open-source software for further piloting and evaluation.

      Conclusion

      Our system identified all relevant studies, reduced manual screening work, and enabled rolling updates on publication of new primary research.

      What is new?

        Key findings

      • We developed a system which continuously identifies newly published trial evidence relevant to a systematic review, enabled by combining artificial intelligence (AI) with human expertise.
      • In a pilot updating a systematic review of COVID-19 vaccination trials, the semi-automated system found 100% of the relevant studies identified by a conventional manual update.

        What this adds to what was known?

      • Living systematic reviews have been proposed as a new model for keeping evidence syntheses updated. Most current living reviews rely on repeated manual update searches, which are time consuming and laborious.
      • We show that a hybrid AI/expert model could enable lower-latency updates, potentially reducing workload and improving the currency of systematic reviews.

        What is the implication and what should change now?

      • Systems which use AI to automatically notify systematic review authors of new evidence (“push” updates) are feasible, and should be piloted on a wider range of systematic reviews.
      • Future research should examine how best to adapt these technologies for use in more complex reviews (particularly reviews of nontrial evidence, and those with complex inclusion criteria).
      • Journal publishers should investigate models for rapid updating, to enable automated live updates of review status to be published.

      1. Introduction

      For many health conditions and treatments, evidence accumulates rapidly [1,2]. Systematic reviews identify, appraise, and synthesize all empirical evidence on healthcare topics, and are therefore invaluable for making clinical decisions and informing policy. However, most reviews are static publications, which can become quickly out of date as new primary research is published [3]. For the reader, it is currently impossible to determine whether any particular systematic review is up to date, or whether new important research was published after the searches were conducted. For authors, it is unclear whether it is worth the effort of updating their review, given uncertainty about whether new evidence exists which might change their conclusions [4]. For commissioners and policy makers, it is unclear when and whether to fund updates of systematic reviews.
      As an example, consider the topic of COVID-19 treatments or vaccines. New studies are being rapidly conducted and published on these topics. A “static” systematic review on either topic, with a search date of 6 months ago (from the time of this writing), is likely to have missed critical new findings, and failed to provide an account of the current science. Given the pace of new published trial evidence in COVID-19, a conventional systematic review would likely become outdated before it was ever published.
      Living systematic reviews have been proposed as one model for keeping rigorous syntheses current with evolving evidence [5,6]. The idea is to update syntheses as new evidence emerges, ideally with low latency. For COVID-19 specifically, a number of living reviews are currently being maintained on both treatments and vaccines [7,8]. To date, living systematic reviews have been achieved by repeating a conventional systematic review update on a frequent basis (updating searches, say, monthly or weekly), screening the results, and extracting data [9]. This process still depends on review teams having to actively run searches and find new studies (a “pull” model), and will result in some lag between manual search and identification of relevant studies. In addition, conventional database searching can yield large numbers of abstracts which require screening.
      The process of conducting the search, and screening the results to identify potentially relevant abstracts is a large proportion of the work to conduct a systematic review. The findings of this work (whether there are new studies identified or not) are important to readers and policy makers. The main mechanism to provide this information to users is to publish a “full” update. This process, particularly for “empty updates,” is time consuming. There is a need to identify new evidence relevant to existing systematic reviews in a more efficient and less manual way. In addition, there is a need to have a formal way to represent the currency of existing systematic reviews, based on whether all relevant evidence has been incorporated.
      There has been much recent research attention on how to use artificial intelligence (AI) systems to automate (or semi-automate: where AI systems are combined with human experts) living updates [10,11]. The most advanced technology in this respect is the use of machine learning (ML) to prioritize studies for screening, which has been found to be accurate and efficient in a number of methodological studies [12–14], and is available at the time of writing in several systematic review authoring tools [15,16].
      Here, we describe a hybrid system that integrates ML and natural language processing (NLP) methods with human expertise to translate static systematic reviews into living reviews. The system automatically monitors research databases for new, relevant research to a systematic review, and notifies the review authors. This “push” model differs fundamentally from the standard approach to updating reviews, which depends on review authors taking the initiative to periodically search for newly published evidence. We present a formative evaluation of the system, comparing the reliability of (semi-)automatic systematic review updates prospectively with traditional manual update searches for an ongoing systematic review of COVID-19 vaccine efficacy. The system—a collection of trained models and a prototype web interface with which to interact with them—is free and open-source. This work constitutes a step toward translating the idea of living reviews into practice.

      2. Methods

      Before using RobotReviewer LIVE, users develop a review question and identify the baseline included studies manually (via a full systematic baseline search and manual screening of abstracts). RobotReviewer LIVE can be used prospectively with new reviews (to replace the need for update searches before publication) or to bring existing reviews up to date; the only requirement is the availability of the baseline search results and the baseline included abstracts.
      To start the process, the user registers their review on RobotReviewer LIVE using the user interface shown in Fig. 1. The system then surveils the medical literature for newly published randomized controlled trials (RCTs) likely to meet the inclusion criteria of the systematic review. Topic experts screen the matching abstracts as and when they are identified, and their inclusion decisions produce live status updates of the systematic review. We illustrate the steps of the updating process in Fig. 2, and describe each in detail below. These steps are run continuously after publication of the initial review.
      Fig. 2 Model for real-time updates of a systematic review in response to new evidence. Abbreviations: RCT, randomized controlled trial.

      2.1 Finding new clinical trial reports

      As a first step, we monitor PubMed daily for new clinical trial reports via the Trialstreamer database [17]. Trialstreamer identifies all reports of RCTs via a validated ML model (recall 0.97, precision 0.52). Abstracts describing RCTs then go on to detailed automatic extraction of trial characteristics (descriptors of participants, interventions, and outcomes), sample sizes, author conclusions, and indicators of methodological bias. We have described the NLP data extraction methods used in Trialstreamer in detail previously [17].
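      Trialstreamer’s ingestion pipeline is described in full elsewhere [17]; purely as an illustration of the daily surveillance step, the following is a minimal sketch of polling PubMed for newly added records via the public NCBI E-utilities API. The query string and date window are assumptions for the example, not the production configuration, and the real system classifies RCTs with an ML model rather than relying on a publication-type filter.

```python
import datetime
import requests

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def fetch_new_pubmed_ids(query: str, days_back: int = 1) -> list[str]:
    """Return PMIDs of records added to PubMed within the last `days_back` days."""
    today = datetime.date.today()
    start = today - datetime.timedelta(days=days_back)
    params = {
        "db": "pubmed",
        "term": query,
        "datetype": "edat",  # filter on the date the record was added to Entrez
        "mindate": start.strftime("%Y/%m/%d"),
        "maxdate": today.strftime("%Y/%m/%d"),
        "retmax": 10000,
        "retmode": "json",
    }
    resp = requests.get(EUTILS_ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Assumed example query; a broad term keeps recall high at the cost of precision.
    pmids = fetch_new_pubmed_ids("randomized controlled trial[pt]")
    print(f"{len(pmids)} candidate records added in the last day")
```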

      2.2 Broad topic filter

      In this step, we move from the set of all RCTs (>750,000 at the time of writing) to a topic-focused (but highly sensitive) set for subsequent machine classification. This topic-focused set of RCTs is created by selecting broad topic terms relevant to the question of the systematic review. The available terms are derived from the MeSH vocabulary alongside an indicator of whether the term describes the Population, Interventions, or Outcomes of the trial. The RobotReviewer LIVE interface allows reviewers to select relevant terms using an autocompleting text box.
      The topic-focused set might only include articles that mention a particular condition of interest. The topic-focused set is assumed to be much broader than the set of studies that would be retrieved with a conventional search for a specific systematic review question. In the case of the COVID-19 vaccines review, we included articles in the topic-focused set only if the abstracts contained a mention of COVID-19 or a synonym from a synonym list we generated automatically by minimally processing terms from the Unified Medical Language System Metathesaurus. We have described the method for generating these terms previously [17].
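      As an illustration of this broad filtering step, the following is a minimal sketch of a synonym-based topic filter. The synonym list shown is a short, assumed example; the production list is generated automatically from UMLS Metathesaurus terms as described above.

```python
import re

# Assumed, truncated synonym list for illustration only.
COVID_SYNONYMS = ["covid-19", "covid19", "sars-cov-2", "2019-ncov",
                  "coronavirus disease 2019"]

# One case-insensitive pattern matching any synonym as a whole word or phrase.
_PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(s) for s in COVID_SYNONYMS) + r")\b",
    flags=re.IGNORECASE,
)

def in_topic_focused_set(title: str, abstract: str) -> bool:
    """Keep an RCT record only if its title or abstract mentions the condition."""
    return bool(_PATTERN.search(title) or _PATTERN.search(abstract))

# Example: this record passes the broad filter and proceeds to the ML classifier.
print(in_topic_focused_set(
    "Safety and efficacy of a SARS-CoV-2 vaccine",
    "A randomized trial of an mRNA vaccine against COVID-19.",
))
```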

      2.3 Machine learning inclusion decisions

      Our goal in this step is to automatically filter out the vast majority of irrelevant articles. We have found previously that ML models with sufficient recall for systematic reviews (which aim to retrieve all research fulfilling their inclusion criteria) will, even in the best case, retrieve a high fraction of false positives. We therefore aim to develop a model with near 100% recall, but add a later screening step by a human expert to remove false positives. A lower precision is therefore acceptable so long as the volume of articles for manual screening is manageable. To achieve this, we train a classification layer on top of “BERT”-based [18] representations of input articles. BERT (Bidirectional Encoder Representations from Transformers) is a multilayer neural network language model, which is “pretrained” using a large volume of unlabeled plain text documents (e.g., the full contents of Wikipedia, and large collections of books freely available on the internet). Here specifically, we use the BERT variant BioMed-RoBERTa, which is optimized for scientific research articles by conducting the pretraining on a large collection of scientific articles obtained from Semantic Scholar [19,20]. We make use of the human inclusion and exclusion decisions from the original systematic review to train this model.
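      The following is a minimal sketch of this model configuration using the Hugging Face transformers library. The checkpoint name (allenai/biomed_roberta_base) and the low decision threshold are assumptions about how the setup might be reproduced, not an extract of the production code.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Pretrained BioMed-RoBERTa encoder with a freshly initialised two-class
# classification head (include vs. exclude); assumed checkpoint name.
CHECKPOINT = "allenai/biomed_roberta_base"
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

def include_probability(title_and_abstract: str) -> float:
    """Probability that an abstract meets the review's inclusion criteria."""
    inputs = tokenizer(title_and_abstract, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# A low threshold (assumed value) favours recall over precision; false positives
# are removed later by the human screening step.
INCLUDE_THRESHOLD = 0.1
```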
      In the task of abstract screening for systematic reviews, there are typically far fewer relevant than irrelevant citations (i.e., most candidate articles retrieved via search will not meet review eligibility criteria). This creates class imbalance [21] in the training set, which can in turn result in poor model sensitivity, because overall predictive loss can be largely minimized by predicting that all instances belong to the majority class (i.e., all abstracts are irrelevant). Following prior work [22] on methods for achieving a better balance between sensitivity and specificity in imbalanced scenarios, we resample the data to induce a balanced distribution during model training. We construct balanced batches during Stochastic Gradient Descent (SGD) using weighted sampling, such that minority examples (relevant citations) are assigned weights inversely proportional to the prevalence of the minority class.
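      A minimal sketch of this balanced-batch construction using PyTorch’s WeightedRandomSampler follows; the tokenized abstracts, the 0/1 include labels, and the batch size are assumed inputs for illustration.

```python
from collections import Counter

import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def balanced_loader(input_ids: torch.Tensor, attention_mask: torch.Tensor,
                    labels: torch.Tensor, batch_size: int = 16) -> DataLoader:
    """DataLoader whose batches are approximately class-balanced via weighted sampling."""
    counts = Counter(labels.tolist())  # e.g. many "exclude" (0), few "include" (1)
    # Each example is weighted inversely to the prevalence of its class, so rare
    # "include" examples are drawn about as often as the common "exclude" ones.
    weights = torch.tensor([1.0 / counts[int(y)] for y in labels], dtype=torch.double)
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    dataset = TensorDataset(input_ids, attention_mask, labels)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```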
      We trained our BioMed-RoBERTa model for five epochs using SGD with a learning rate of 10⁻³ and a momentum of 0.9, yielding a final model that recalled 100% of relevant articles with 40% precision (Area Under the Receiver Operating Characteristic curve 0.97) when evaluated on 10% of the dataset which was held out from training. Our model code is available on our project GitHub page (https://github.com/bwallace/RobotScreen/).
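      A corresponding fine-tuning loop, sketched with the hyperparameters reported above, might look as follows; the model from the earlier sketch and a balanced DataLoader yielding (input_ids, attention_mask, labels) batches are assumed, and tokenisation of the training abstracts is elided.

```python
import torch

def fine_tune(model, train_loader, epochs: int = 5, device: str = "cpu") -> None:
    """Fine-tune the screening classifier with SGD (lr 1e-3, momentum 0.9)."""
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for input_ids, attention_mask, labels in train_loader:
            optimizer.zero_grad()
            outputs = model(input_ids=input_ids.to(device),
                            attention_mask=attention_mask.to(device))
            loss = loss_fn(outputs.logits, labels.to(device))
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch + 1}: last batch loss {loss.item():.3f}")
```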

      2.4 Validation of results by systematic review authors

      If the steps above retrieve new potentially relevant articles, systematic review authors are notified by email and invited to screen new abstracts for relevance. This step aims to remove any false positives (i.e., ineligible articles deemed relevant by the model). Although conventional systematic review searches might include hundreds or thousands of articles for manual review, the automatic system (in Step 3) aims to remove the vast majority of these articles. In the case of our example topic (which was subject to particularly high rates of research and publication during the study period), the system identified on average three potentially eligible abstracts per week which were then pushed to the review’s lead author. Review authors can screen the new studies by signing on to the website (Fig. 3, Fig. 4).
      Fig. 3 The RobotReviewer LIVE “Dashboard” to monitor progress of all of a user’s reviews.
      Fig. 4 Interface for review authors to validate new studies; their include/exclude decisions are automatically and instantly incorporated into the published live status update.

      2.5 Publication of live status update

      We automatically publish a live update, which makes use of the latest information from both the automated and manual evidence screening (see example in Fig. 2). This text is designed to be displayed as an additional section in the structured abstract, with the header “Automatic updates.” We display the full abstract including the live update section on our website, and also make this available via a REST API so that external journal publishers could opt to display a live, updated version of the abstract as part of the primary research article in future.
      We provide meta-data about new studies (including numbers of studies screened, how many were deemed relevant by the topic expert, and numbers of trial participants). This numerical meta-data is collected from our screening records, and from the structured data in the Trialstreamer database (which has been automatically extracted using NLP models), and displayed following a template.
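      As an illustration of this templating step, the following is a minimal sketch of how a live-update paragraph could be rendered from the screening meta-data; the field names and wording are assumptions, not the exact production template.

```python
from datetime import date

def render_live_update(n_screened: int, n_included: int, n_participants: int,
                       last_updated: date) -> str:
    """Fill a fixed 'Automatic updates' template with screening meta-data."""
    # Assumed wording for illustration; the production template differs in detail.
    return (
        f"Automatic updates (last updated {last_updated:%d %B %Y}): since the last "
        f"full update, {n_screened} potentially relevant abstracts were identified, "
        f"of which {n_included} were judged eligible by the review team "
        f"({n_participants} additional randomized participants)."
    )

# Example call with placeholder values only.
print(render_live_update(3, 2, 58000, date(2021, 8, 1)))
```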
      As part of this step, we have also explored the use of automatic narrative summaries of newly included studies, aiming to produce a brief summary of the new studies’ findings to be presented alongside the templated meta-data described above. We provide further details about this method and its results in the Appendix.

      2.6 Evaluation: prospective case study with a COVID-19 vaccination review

      We evaluated the system prospectively in comparison to a conventional manually updated living systematic review on COVID-19 vaccination evidence. The baseline full systematic manual searches for this review were completed and screened on February 9, 2021.
      We ran our comparative evaluation from February 9, 2021 to August 1, 2021. During this period, the review authors performed conventional manual update searches, and we ran the semi-automated system in parallel. We calculated recall with respect to the combined set of included articles from the manual and automatic update systems. Screening of the abstracts found by RobotReviewer LIVE was done by an independent member of the review team, who was not involved in the screening of the manual update searches.
      Due to the time taken to screen abstracts in the manual update, the last manual update search conducted during the evaluation period was on July 1, 2021. The “push” model used by our automated system, in which small numbers of abstracts were sent to be screened on the day of publication, meant that there was close to no lag between abstract publication and screening, and the live status updates included abstracts published up to and including August 1, 2021. To allow a fair comparison, we present results separately up to July 1, 2021 (a direct comparison of automated update searching vs. conventional manual update searches at intervals) and from July 1, 2021 to August 1, 2021 (an evaluation of any advantage in screening efficiency with automation).
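      For clarity, precision and recall are computed here as follows, treating the combined set of included articles from both arms as the reference standard; the identifier sets below are illustrative stand-ins that reproduce the headline numbers.

```python
def precision_recall(retrieved: set[str], reference_included: set[str]) -> tuple[float, float]:
    """Precision and recall of a retrieval strategy against the reference includes."""
    true_positives = retrieved & reference_included
    precision = len(true_positives) / len(retrieved) if retrieved else 0.0
    recall = len(true_positives) / len(reference_included) if reference_included else 0.0
    return precision, recall

# Illustrative identifiers only: 31 included articles, with the manual search
# retrieving 135 abstracts and the automated system 56, both containing all 31.
included = {f"pmid{i}" for i in range(31)}
manual = included | {f"manual_only{i}" for i in range(104)}    # 135 retrieved
automated = included | {f"auto_only{i}" for i in range(25)}    # 56 retrieved
print(precision_recall(manual, included))     # ~23% precision, 100% recall
print(precision_recall(automated, included))  # ~55% precision, 100% recall
```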

      3. Results

      We present a Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram comparing the screening approaches in Fig. 5. The baseline (manual) version of the review search was conducted in February 2021. This yielded 4,493 abstracts, of which 38 both met eligibility criteria and were reports of RCTs.
      Fig. 5 Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram of the live review.
      Manual update searches retrieved 135 abstracts; in contrast, the automated system retrieved 56. Both strategies resulted in the same 31 included abstracts after screening.

      4. Discussion

      We have presented a system for identifying new evidence to include in systematic reviews, and for producing live abstract updates on the currency of systematic reviews. RobotReviewer LIVE combines AI (ML/NLP) with human expertise, and allows new studies to be incorporated in published review reports quickly after publication. We have made the software, ML models, and data needed to implement the system freely available as open-source software. We also provide a prototype of RobotReviewer LIVE that features a simple user interface, which should allow systematic review authors to produce live updates for their existing “static” systematic reviews. This prototype is also available as open-source code.
      We provide an easy-to-use interface to allow experts to validate the automatic search results—potentially providing substantial efficiencies in the updating process, while still providing the assurances afforded by expert verification. In practice, converting a new conventional systematic review to a “living” equivalent using the system could be done in a matter of minutes. We make the technology available as open source, together with a REST API to enable live updates to be used inline in published journal articles, embedded in the websites of third party publishers. Even where a review is not actively kept up to date, this may allow interested individuals to see estimates of the amount of relevant evidence published since the time said review was completed. In the future, this platform may also permit “crowdsourced” maintenance of systematic reviews.
      Related systems have been developed and evaluated, notably the Cochrane “Evidence Pipeline” and Centralised Search Service [23–25]. These projects also monitor research databases (using a combination of ML identification of RCTs and crowdsourcing) and notify Cochrane review groups (which each typically manage tens of systematic reviews on a common clinical theme) when new research is published relevant to their theme. In contrast, our system is designed to manage updates for individual systematic reviews.
      In our prospective case study, the automated method identified all of the includable abstracts found manually. Because the review team conducted their last manual update search earlier than we had expected, we continued to run the automated system for an additional month beyond that search (until August 1, 2021). In this month, the automated system found 12 additional abstracts which were deemed includable. This illustrates the advantage of the low-latency “push” screening model, especially for topics such as COVID-19 vaccination, where publication rates are rapid.
      A longstanding criticism of systematic review automation tools is that they are often available only as discrete, scattered pieces of academic code which require substantial technical expertise to use in practice [11,26]. To overcome this problem, we have produced an easy-to-use web interface which should allow users to create a “living” version of a systematic review with minimal effort (Fig. 3, Fig. 4).
      This technology is still emerging, and users should be aware of important limitations. Although performance in this case study is strong, the review evaluated here is an ideal case for such technology. The review question is precise, and concerns a well-defined intervention and health condition, both of which are easy to capture in the structured vocabularies used in the Trialstreamer database. In the midst of a pandemic, there are also large numbers of eligible studies being published (whereas precision is likely to fall in any search, whether manual or automated, as the prevalence of eligible studies decreases). We have presented a single case study, and it is likely that performance will vary, particularly for more complex reviews.
      Currently, we make use of the Trialstreamer database, which is limited to articles describing RCTs; the system is therefore restricted at present to systematic reviews of intervention trials, although we intend to make additional article types available in future. We also draw articles from PubMed only, and are unable to access additional proprietary databases such as EMBASE, which might (modestly) harm the recall of the system [27]. Overall, although the individual components of the system have been extensively validated, this report describes the only validation using a conventional systematic review as a comparator. The reliability of the system in general (particularly for reviews that deviate substantially from the format of the current evaluation) requires further study.

      5. Conclusion

      Manually updating systematic reviews is time consuming and laborious, meaning many conventionally produced reviews quickly become out of date. We hope that further evaluation and development of the ideas and methods presented here will bring the goal of dynamic publication of live evidence synthesis updates a step closer to practice.

      CRediT authorship contribution statement

      Iain J Marshall: Conceptualization, Methodology, Software, Validation, Formal analysis, Writing – original draft, Writing – review and editing, Supervision, Project administration, and Funding acquisition. Thomas A. Trikalinos: Conceptualization, Methodology, Validation, Writing – review and editing, Supervision, and Funding acquisition. Frank Soboczenski: Conceptualization, Methodology, Software, Validation, and Writing – review and editing. Hye Sun Yun: Methodology, Software, and Writing – review and editing. Gregory Kell: Methodology, Software, and Writing – review and editing. Rachel Marshall: Conceptualization, Methodology, and Writing – review and editing. Byron C. Wallace: Conceptualization, Methodology, Software, Validation, Formal analysis, Writing – original draft, Writing – review and editing, Supervision, Project administration, and Funding acquisition.

      References

        1. Marshall IJ, L’Esperance V, Marshall R, Thomas J, Noel-Storr A, Soboczenski F, et al. State of the evidence: a survey of global disparities in clinical trials. BMJ Glob Health 2021;6:e004145.
        2. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med 2010;7:e1000326.
        3. Shojania KG, Sampson M, Ansari MT, Ji J, Doucette S, Moher D. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med 2007;147:224–233.
        4. Garner P, Hopewell S, Chandler J, MacLehose H, Schünemann HJ, Akl EA, et al. When and how to update systematic reviews: consensus and checklist. BMJ 2016;354:i3507.
        5. Elliott JH, Turner T, Clavisi O, Thomas J, Higgins JPT, Mavergames C, et al. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med 2014;11:e1001603.
        6. Elliott JH, Synnot A, Turner T, Simmonds M, Akl EA, McDonald S, et al. Living systematic review: 1. Introduction—the why, what, when, and how. J Clin Epidemiol 2017;91:23–30.
        7. Siemieniuk RA, Bartoszko JJ, Ge L, Zeraatkar D, Izcovich A, Kum E, et al. Drug treatments for covid-19: living systematic review and network meta-analysis. BMJ 2020;370:m2980.
        8. Boutron I, Chaimani A, Meerpohl JJ, Hróbjartsson A, Devane D, Rada G, et al. The COVID-NMA project: building an evidence ecosystem for the COVID-19 pandemic. Ann Intern Med 2020;173:1015–1017.
        9. Living Evidence Network. Guidance for the production and publication of Cochrane living systematic reviews: Cochrane Reviews in living mode. Cochrane.
        10. Thomas J, Noel-Storr A, Marshall I, Wallace B, McDonald S, Mavergames C, et al. Living systematic reviews: 2. Combining human and machine effort. J Clin Epidemiol 2017;91:31–37.
        11. Marshall IJ, Wallace BC. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Syst Rev 2019;8:163.
        12. O’Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S. Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev 2015;4:5.
        13. Shemilt I, Simon A, Hollands GJ, Marteau TM, Ogilvie D, O’Mara-Eves A, et al. Pinpointing needles in giant haystacks: use of text mining to reduce impractical screening workload in extremely large scoping reviews. Res Synth Methods 2014;5:31–49. https://doi.org/10.1002/jrsm.1093.
        14. Wallace BC, Small K, Brodley CE, Lau J, Trikalinos TA. Deploying an interactive machine learning system in an evidence-based practice center. IHI ’12. New York, NY: ACM; 2012. p. 819–824.
        15. Hamel C, Kelly SE, Thavorn K, Rice DB, Wells GA, Hutton B. An evaluation of DistillerSR’s machine learning-based prioritization tool for title/abstract screening – impact on reviewer-relevant outcomes. BMC Med Res Methodol 2020;20:256.
        16. Tsou AY, Treadwell JR, Erinoff E, Schoelles K. Machine learning for screening prioritization in systematic reviews: comparative performance of Abstrackr and EPPI-Reviewer. Syst Rev 2020;9:73.
        17. Marshall IJ, Nye B, Kuiper J, Noel-Storr A, Marshall R, Maclean R, et al. Trialstreamer: a living, automatically updated database of clinical trial reports. J Am Med Inform Assoc 2020;27:1903–1912.
        18. Devlin J, Chang M-W, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 [cs]; 2018. Available at: http://arxiv.org/abs/1810.04805. Accessed November 8, 2018.
        19. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: a robustly optimized BERT pretraining approach. arXiv:1907.11692 [cs]; 2019. Available at: http://arxiv.org/abs/1907.11692. Accessed October 15, 2021.
        20. Gururangan S, Marasović A, Swayamdipta S, Lo K, Beltagy I, Downey D, et al. Don’t stop pretraining: adapt language models to domains and tasks. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: Association for Computational Linguistics; 2020. p. 8342–8360.
        21. Japkowicz N, Shaju S. The class imbalance problem: a systematic study. Intelligent Data Analysis 2002;6:429–449. Available at: https://dl.acm.org/doi/10.5555/1293951.1293954. Accessed October 15, 2021.
        22. Wallace BC, Small K, Brodley CE, Trikalinos TA. Class imbalance, redux. In: 2011 IEEE 11th International Conference on Data Mining; 2011. p. 754–763.
        23. Thomas J, McDonald S, Noel-Storr A, Shemilt I, Elliott J, Mavergames C, et al. Machine learning reduced workload with minimal risk of missing studies: development and evaluation of a randomized controlled trial classifier for Cochrane Reviews. J Clin Epidemiol 2021;133:140–151.
        24. Noel-Storr AH, Dooley G, Wisniewski S, Glanville J, Thomas J, Cox S, et al. Cochrane Centralised Search Service showed high sensitivity identifying randomized controlled trials: a retrospective analysis. J Clin Epidemiol 2020;127:142–150.
        25. Noel-Storr A, Dooley G, Elliott J, Steele E, Shemilt I, Mavergames C, et al. An evaluation of Cochrane Crowd found that crowdsourcing produced accurate results in identifying randomized trials. J Clin Epidemiol 2021;133:130–139.
        26. O’Connor AM, Tsafnat G, Gilbert SB, Thayer KA, Wolfe MS. Moving toward the automation of the systematic review process: a summary of discussions at the second meeting of International Collaboration for the Automation of Systematic Reviews (ICASR). Syst Rev 2018;7:3.
        27. Marshall I, Marshall R, Wallace B, Brassey J, Thomas J. Rapid reviews may produce different results to systematic reviews: a meta-epidemiological study. J Clin Epidemiol 2018;109:30–41.