Abstract
- Maintaining an up-to-date, dynamic system for evidence synthesis can be facilitated by new technologies that combine human and machine effort.
- As well as standard review teams, systematic review activities can be broken down into "microtasks" and distributed across a wider group of people, including citizen scientists engaged through crowdsourcing.
- Machine automation can assist with some systematic review tasks, including routine searching, eligibility assessment, identification and retrieval of full-text reports, extraction of data, and risk of bias assessment.
- While the context is living systematic reviews, many of these enabling technologies apply equally to standard approaches to systematic reviewing.
1. Introduction
- Living systematic reviews: 1. Introduction—the why, what, when, and how
- Living systematic reviews: 2. Combining human and machine effort
- Living systematic reviews: 3. Statistical methods for updating meta-analyses
- Living systematic reviews: 4. Living guideline recommendations

A living systematic review is:

- A systematic review that is continually updated, incorporating relevant new evidence as it becomes available
- An approach to review updating, not a formal review methodology
- Applicable to any type of review
- Conducted using standard systematic review methods
- Defined by an explicit, a priori commitment to a predetermined frequency of search and review updating
2. Opportunities for a different workflow
| Review task | Method/microtask | Potential methods for efficiency gain |
| --- | --- | --- |
| Team formation | Use a wider range of personnel than may traditionally be the case | Crowdsourcing (e.g., Cochrane Crowd); task-sharing platforms (e.g., TaskExchange) |
| Search | Running searches on bibliographic databases | Automatic, continuous database searches with push notifications; database aggregators (e.g., HDAS, Epistemonikos); notifications from clinical trial registries; automatic retrieval of full-text papers (e.g., CrossRef) |
| Eligibility assessment | Selecting studies for inclusion | Machine-learning classifiers; crowdsourced inclusion decisions |
| Data extraction or collection | Extracting information on characteristics of the participants, interventions, and outcomes | Machine-learning information-extraction systems (e.g., RobotReviewer, ExaCT); linkage of existing structured data sources (e.g., clinical trial registries); automated structured data extraction tools for PDFs (e.g., ContentMine, Graph2Data) |
| | Assessing risks of bias | Machine learning–assisted risk of bias tools (e.g., RobotReviewer) |
| Synthesis | Entering data into meta-analysis software | Structured data extraction tools that automatically provide data in a suitable format for statistical analysis |
| | Conducting meta-analyses | Continuous analysis updating based on availability of structured extracted data |
| Report writing and updating conclusions | | Templated reporting of some report items; statistical surveillance of key analysis results, with a threshold set for potential conclusion change |
| Supportive systems that reduce duplication of effort | Data sharing and reuse | Using standard descriptors for studies across all systematic reviews (e.g., linkeddata.cochrane.org) |
2.1 Database searching and eligibility assessment
2.1.1 Database searching
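The table above lists "automatic, continuous database searches with push notifications" as an efficiency gain for this step. A minimal sketch of the deduplication side of such a pipeline, so that only genuinely new records reach screening on each cycle (pure Python; `fetch_new_records` and the simulated feed are hypothetical stand-ins for a real bibliographic database or trial registry API):

```python
import hashlib

def record_key(record):
    """Stable deduplication key: normalized title plus year."""
    basis = record["title"].lower().strip() + "|" + str(record.get("year", ""))
    return hashlib.sha1(basis.encode("utf-8")).hexdigest()

def poll_once(fetch_new_records, seen_keys):
    """One polling cycle: fetch candidate records from a search feed
    and return only those not already seen in a previous cycle."""
    fresh = []
    for record in fetch_new_records():
        key = record_key(record)
        if key not in seen_keys:
            seen_keys.add(key)
            fresh.append(record)
    return fresh

# Simulated feed standing in for a real database/registry API.
feed = [
    {"title": "Trial of drug A vs placebo", "year": 2017},
    {"title": "Trial of drug A vs placebo", "year": 2017},  # duplicate
    {"title": "Cohort study of exposure B", "year": 2016},
]
seen = set()
new_items = poll_once(lambda: feed, seen)
print(len(new_items))  # 2 unique records pass through to screening
```

A second poll over the same feed would return nothing, which is what makes frequent, automated re-running cheap in a living review.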

2.1.2 Eligibility assessment
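The machine-learning classifiers mentioned in the table above are typically trained on past include/exclude screening decisions and then used to rank or triage new records. A toy sketch of the underlying idea, using a from-scratch naive Bayes text classifier (the labeled examples are invented for illustration; production systems such as Cochrane's RCT classifier use far larger training sets and richer features):

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) with label in {"include", "exclude"}."""
    counts = {"include": Counter(), "exclude": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def predict(counts, totals, text):
    """Naive Bayes with add-one smoothing; returns the more probable label."""
    vocab = set(counts["include"]) | set(counts["exclude"])
    scores = {}
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

labeled = [
    ("randomized controlled trial of statins", "include"),
    ("randomized trial of exercise therapy", "include"),
    ("narrative review of statin history", "exclude"),
    ("editorial comment on exercise", "exclude"),
]
model = train(labeled)
print(predict(*model, "randomized trial of diet"))  # include
```

In practice such a classifier is tuned for very high recall, with humans (or the crowd) resolving the uncertain middle band.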


2.2 Data extraction/collection and risk of bias assessment
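Tools such as ExaCT and RobotReviewer use trained models to pull structured data out of trial reports. As a deliberately simplified illustration of the extraction task itself, a rule-based sketch for one data item (the regular expressions are illustrative assumptions, not how those tools actually work):

```python
import re

# Illustrative patterns only; real systems use trained statistical models.
SAMPLE_SIZE = re.compile(r"\b[Nn]\s*=\s*(\d+)")
RANDOMISED = re.compile(
    r"\b(\d+)\s+(?:patients|participants)\s+were\s+randomi[sz]ed",
    re.IGNORECASE,
)

def extract_sample_size(abstract):
    """Return the first reported sample size found, or None."""
    for pattern in (RANDOMISED, SAMPLE_SIZE):
        match = pattern.search(abstract)
        if match:
            return int(match.group(1))
    return None

print(extract_sample_size(
    "In this trial, 240 patients were randomised to drug A or placebo."
))  # 240
```

The appeal for living reviews is that, once an item like this is extracted into a structured form, it can flow into the analysis without being re-keyed at every update.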
2.3 Synthesis and reporting
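The "continuous analysis updating" and "statistical surveillance" entries in the table above can be illustrated with a minimal fixed-effect inverse-variance meta-analysis that is re-run as each new study arrives, flagging when a preset threshold is crossed (the numbers are invented; real living reviews would use the updating methods discussed in paper 3 of this series):

```python
import math

def pooled_estimate(studies):
    """Fixed-effect inverse-variance pooling.
    Each study is (effect_estimate, standard_error), e.g. on the log scale."""
    weights = [1.0 / se ** 2 for _, se in studies]
    est = sum(w * e for w, (e, _) in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

def conclusion_changed(studies, threshold=1.96):
    """Statistical surveillance: flag when the pooled z-statistic
    crosses a conventional significance threshold."""
    est, se = pooled_estimate(studies)
    return abs(est / se) > threshold

studies = [(-0.10, 0.20), (-0.15, 0.25)]
print(conclusion_changed(studies))  # False: evidence still inconclusive
studies.append((-0.40, 0.10))       # a new, precise study arrives
print(conclusion_changed(studies))  # True: threshold crossed, flag for update
```

The point of the threshold is workflow, not inference: most updates change nothing and can be absorbed silently, while a crossed threshold triggers human review of the conclusions.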
2.4 Looking ahead to new research evidence surveillance systems
3. Conclusion
References
- Higgins J.P.T., Green S., editors. Cochrane handbook for systematic reviews of interventions, version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available at www.cochrane-handbook.org
- Living systematic review: 1. Introduction—the why, what, when and how. J Clin Epidemiol. 2017; 91: 23-30
- Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014; 11: e1001603
- Human and machine effort in Project Transform: how intersecting technologies will help us to identify studies reliably, efficiently and at scale. Cochrane Methods. 2016; 1: 37-41
- Identifying reports of randomized controlled trials (RCTs) via a hybrid machine learning and crowdsourcing approach. J Am Med Inform Assoc. 2017; 24: 1165-1168
- RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials. J Am Med Inform Assoc. 2016; 23: 193-201
- The automation of systematic reviews. BMJ. 2013; 346: f139
- Systematic review automation technologies. Syst Rev. 2014; 3: 74
- Using text mining for study identification in systematic reviews: a systematic review of current approaches. Syst Rev. 2015; 4: 5
- Applications of text mining within systematic reviews. Res Synth Methods. 2011; 2: 1-14
- Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010; 7: e1000326
- Use of cost-effectiveness analysis to compare the efficiency of study identification methods in systematic reviews. Syst Rev. 2016; 5: 140
- Higgins J.P.T., Lasserson T., Chandler J., Tovey D., Churchill R. Methodological Expectations of Cochrane Intervention Reviews (MECIR): standards for the conduct and reporting of new Cochrane Intervention Reviews, reporting of protocols and the planning, conduct and reporting of updates. Cochrane, London; 2016
- Semi-automated screening of biomedical citations for systematic reviews. BMC Bioinformatics. 2010; 11: 55
- Focus on sharing individual patient data distracts from other ways of improving trial transparency. BMJ. 2017; 357: j2782
Footnotes
Conflicts of interest: none.
Funding: The Living Systematic Review Network is supported by funding from Cochrane and the Australian National Health and Medical Research Council (Partnership Project grant APP1114605). J.T., A.N.-S., I.S., T.T., and J.E. receive funding from Cochrane (“Transform Project”) and Australian NHMRC (“Evidence Innovation Transforming the efficiency of systematic review”). J.T. is supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) North Thames at Bart's Health NHS Trust. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, or the Department of Health.
Copyright
User license: Creative Commons Attribution – NonCommercial – NoDerivs (CC BY-NC-ND 4.0)
Linked Article
- Citation analysis is also useful to assess the eligibility of biomedical research works for inclusion in living systematic reviews. Journal of Clinical Epidemiology, Vol. 97.
- Preview: I have read with much interest the JCE series advocating the use of human efforts and machine automation to create and update living systematic reviews (LSRs) [1]. I recognize that the series provides important information on how biomedical research works are verified as eligible for inclusion in LSRs using semantic classification and crowdsourcing techniques [1]. However, this paper has not dealt with another technique that has recently been shown to be useful (when jointly used with semantic classification and crowdsourcing techniques) in assessing the eligibility of papers for inclusion in LSRs: this important technique is citation analysis [2–6].