Characteristics
A systematic review can be designed to provide a thorough summary of current literature relevant to a research question. A systematic review uses a rigorous and transparent approach to research synthesis, with the aim of assessing and, where possible, minimizing bias in the findings. While many systematic reviews are based on an explicit quantitative meta-analysis of available data, there are also qualitative reviews and other mixed-methods reviews that adhere to standards for gathering, analyzing and reporting evidence. Systematic reviews of quantitative data or mixed-methods reviews sometimes use statistical techniques (meta-analysis) to combine the results of eligible studies. Scoring levels are sometimes used to rate the quality of the evidence depending on the methodology used, although this is discouraged by the Cochrane Library. As evidence rating can be subjective, multiple reviewers may be consulted to resolve differences in how the evidence is rated. The EPPI-Centre, Cochrane and the Joanna Briggs Institute have been influential in developing methods for combining both qualitative and quantitative research in systematic reviews.

Types
There are over 30 types of systematic review, and Table 1 below summarises some of these, but the list is not exhaustive. It is important to note that there is not always consensus on the boundaries and distinctions between the approaches described below.

Scoping reviews
Scoping reviews are distinct from systematic reviews in several important ways. A scoping review is an attempt to search for concepts by mapping the language and data which surround those concepts and adjusting the search method iteratively to synthesize evidence and assess the scope of an area of inquiry. This can mean that the concept search and method (including data extraction, organisation and analysis) are refined throughout the process, sometimes requiring deviations from any protocol or original research plan. A scoping review is often a preliminary stage before a systematic review, which 'scopes' out an area of inquiry and maps the language and key concepts to determine whether a systematic review is possible or appropriate, or to lay the groundwork for a full systematic review. The goal can be to assess how much data or evidence is available regarding a certain area of interest. This process is further complicated if the review maps concepts across multiple languages or cultures. As a scoping review should be systematically conducted and reported (with a transparent and repeatable method), some academic publishers categorize them as a kind of 'systematic review', which may cause confusion. Scoping reviews are helpful when it is not possible to carry out a systematic synthesis of research findings, for example when there are no published clinical trials in the area of inquiry. They are also helpful when determining whether it is possible or appropriate to carry out a systematic review, and are a useful method when an area of inquiry is very broad, for example exploring how the public are involved in all stages of systematic reviews. There is still a lack of clarity in defining the exact method of a scoping review, as it is both an iterative process and still relatively new. There have been several attempts to improve standardisation of the method, for example via a PRISMA guideline extension for scoping reviews (PRISMA-ScR). PROSPERO (the International Prospective Register of Systematic Reviews) does not permit the submission of protocols for scoping reviews, although some journals will publish them.

Stages
While there are multiple kinds of systematic review, the main stages of a review can be summarised as follows:

Defining the research question
Defining an answerable question and agreeing an objective method is required to design a useful systematic review. Best practice recommends publishing the protocol of the review before initiating it, to reduce the risk of unplanned research duplication and to enable transparency and consistency between method and protocol. Clinical reviews of quantitative data are often structured using the acronym PICO, which stands for 'Population or Problem', 'Intervention or Exposure', 'Comparison' and 'Outcome', with other variations existing for other kinds of research. For qualitative reviews, PICo stands for 'Population or Problem', 'Interest' and 'Context'.
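As an illustration of the PICO structure (not part of any formal standard or toolset), the sketch below represents a review question as structured data in Python; the class name and the example field values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """A review question framed with the PICO acronym."""
    population: str    # 'Population or Problem'
    intervention: str  # 'Intervention or Exposure'
    comparison: str    # 'Comparison'
    outcome: str       # 'Outcome'

# Hypothetical example of an answerable clinical question
question = PICOQuestion(
    population="adults with type 2 diabetes",
    intervention="structured exercise programme",
    comparison="usual care",
    outcome="change in HbA1c at 12 months",
)
print(question)
```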
Searching for relevant data sources

Planning how the review will search for relevant data from research that matches certain criteria is a decisive stage in developing a rigorous systematic review. Relevant criteria can include only selecting research that is good quality and answers the defined question. The search strategy should be designed to retrieve literature that matches the protocol's specified inclusion and exclusion criteria. The methodology section of a systematic review should list all of the databases and citation indices that were searched. The titles and abstracts of identified articles can be checked against pre-determined criteria for eligibility and relevance. Each included study may be assigned an objective assessment of methodological quality, preferably by using methods conforming to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, or the high-quality standards of Cochrane. Common information sources used in searches include scholarly databases of peer-reviewed articles such as MEDLINE, Web of Science, Embase, and PubMed, as well as sources of unpublished literature such as clinical trial registries and grey literature collections. Key references can also be yielded through additional methods such as citation searching, reference list checking (related to a search method called 'pearl growing'), manually searching information sources not indexed in the major electronic databases (sometimes called 'hand-searching'), and directly contacting experts in the field. To be systematic, searchers must use a combination of search skills and tools such as database subject headings, keyword searching, Boolean operators and proximity searching, while attempting to balance sensitivity with precision.
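As a minimal sketch of a systematic search, the snippet below sends a hypothetical Boolean query (combining MeSH subject headings with free-text keywords) to PubMed's public E-utilities endpoint; the search terms are invented for illustration, and a real strategy would be designed and documented according to the protocol:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical Boolean search string mixing subject headings (MeSH)
# with keyword searching; the terms are illustrative only.
query = (
    '("Diabetes Mellitus, Type 2"[MeSH] OR "type 2 diabetes"[Title/Abstract]) '
    'AND ("Exercise"[MeSH] OR "physical activity"[Title/Abstract]) '
    'AND randomized controlled trial[Publication Type]'
)

# Query PubMed via the NCBI E-utilities esearch endpoint.
params = urllib.parse.urlencode(
    {"db": "pubmed", "term": query, "retmax": 20, "retmode": "json"})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as response:
    result = json.load(response)

# PubMed IDs of matching records, to be screened for eligibility.
print(result["esearchresult"]["idlist"])
```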
'Extraction' of relevant data

Assess the eligibility of the data
This stage involves assessing the eligibility of the data for inclusion in the review by judging it against the criteria identified at the first stage. This can include assessing whether a data source meets the eligibility criteria and recording why decisions about inclusion or exclusion in the review were made. Software can be used to support the selection process, including text mining and machine learning tools, which can automate aspects of it. The 'Systematic Review Toolbox' is a community-driven, web-based catalogue of tools that helps reviewers choose appropriate tools for their reviews.
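As a minimal sketch of how text mining and machine learning can support screening (assuming a set of title/abstract records already labelled by human reviewers; the data, model and ranking approach are illustrative, not a validated screening workflow):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Records already screened by hand (1 = include, 0 = exclude).
labelled_texts = [
    "Randomised trial of exercise in adults with type 2 diabetes",
    "Case report of a rare dermatological condition",
    "RCT comparing physical activity with usual care for glycaemic control",
    "Narrative essay on the history of hospital architecture",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus a simple classifier; real screening tools use
# richer models and keep human reviewers in the loop.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(labelled_texts, labels)

# Rank unscreened records by predicted probability of inclusion so
# reviewers can prioritise the most likely eligible studies.
unscreened = ["Trial of an exercise programme and HbA1c in diabetes"]
scores = model.predict_proba(unscreened)[:, 1]
for score, text in sorted(zip(scores, unscreened), reverse=True):
    print(f"{score:.2f}  {text}")
```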
Analyse and combine the data

Analysing and combining data can provide an overall result from all the data. Because this combined result may draw on qualitative or quantitative data from all eligible sources, it is considered more reliable: the more data a review includes, the more confident we can be of its conclusions. When appropriate, some systematic reviews include a meta-analysis, which uses statistical methods to combine data from multiple sources. A review might use quantitative data, or might employ a qualitative meta-synthesis, which synthesises data from qualitative studies. A review may also bring together the findings from quantitative and qualitative studies in a mixed-methods or overarching synthesis. The combined data from a meta-analysis can sometimes be visualised, for example with a forest plot (also called a blobbogram). In an intervention effect review, the diamond in the forest plot represents the combined results of all the data included. An example of a forest plot is the Cochrane Collaboration logo, which shows a forest plot from one of the first reviews to demonstrate that corticosteroids given to women about to give birth prematurely can save the life of the newborn child. Recent visualisation innovations include the albatross plot, which plots p-values against sample sizes with approximate effect-size contours superimposed to facilitate analysis. The contours can be used to infer effect sizes from studies that have been analysed and reported in diverse ways. Such visualisations may have advantages over other types when reviewing complex interventions. Assessing the quality (or certainty) of evidence is an important part of some reviews. GRADE (Grading of Recommendations, Assessment, Development and Evaluations) is a transparent framework for developing and presenting summaries of evidence and is used to grade the quality of evidence. GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) provides a transparent method for assessing the confidence of evidence from reviews of qualitative research.
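As a minimal sketch of the statistical combination step, the following computes a fixed-effect, inverse-variance pooled estimate from invented study results (real meta-analyses typically use dedicated packages and also consider random-effects models, heterogeneity and bias):

```python
import math

# Hypothetical effect estimates (e.g. log odds ratios) and their
# standard errors from three eligible studies.
effects = [0.30, 0.10, 0.25]
std_errs = [0.15, 0.20, 0.10]

# Fixed-effect inverse-variance weighting: each study is weighted by
# 1 / SE^2, and the pooled effect is the weighted mean.
weights = [1 / se ** 2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval (normal approximation); this interval is what
# the diamond at the bottom of a forest plot represents.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```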
Communication and dissemination

Once these stages are complete, the review may be published, disseminated and translated into practice after being adopted as evidence. The UK National Institute for Health Research (NIHR) defines dissemination as 'getting the findings of research to the people who can make use of them to maximise the benefit of the research without delay'. However, many evidence users do not have time to read large and complex documents, may lack awareness of newly published research, or may be unable to access it. Researchers are therefore developing skills in creative communication methods such as illustrations, blogs, infographics and board games to share the findings of systematic reviews.

Automation of systematic reviews
Living systematic reviews are a relatively new kind of high-quality, semi-automated, up-to-date online summary of research, updated as new research becomes available. The essential difference between a living systematic review and a conventional systematic review is the publication format: living systematic reviews are 'dynamic, persistent, online-only evidence summaries, which are updated rapidly and frequently'. While living systematic reviews seek to keep evidence current, the automation or semi-automation of the review process itself is also increasingly being explored. Although little evidence exists to demonstrate that automation is as accurate as manual methods or involves less effort, efforts to promote training in, and use of, artificial intelligence for the process are increasing.

Research fields
Medicine and human health
History of systematic reviews in medicine
A 1904 ''British Medical Journal'' paper by Karl Pearson collated data from several studies of typhoid inoculation in the UK, India and South Africa. He used a meta-analytic approach to aggregate the outcomes of multiple clinical studies. In 1972 Archie Cochrane wrote: 'It is surely a great criticism of our profession that we have not organised a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomised controlled trials'. Critical appraisal and synthesis of research findings in a systematic way emerged in 1975 under the term 'meta-analysis'. Early syntheses were conducted in broad areas of public policy and social interventions before systematic research synthesis was applied to medicine and health. Inspired by his own personal experiences as a senior medical officer in prisoner of war camps, Archie Cochrane worked to improve how the scientific method was used in medical evidence, writing in 1971: 'the general scientific problem with which we are primarily concerned is that of testing a hypothesis that a certain treatment alters the natural history of a disease for the better'. His call for the increased use of randomised controlled trials and systematic reviews led to the creation of The Cochrane Collaboration, founded in 1993 and named after him, which built on the work of Iain Chalmers and colleagues in the area of pregnancy and childbirth.

Current use of systematic reviews in medicine
Many organisations around the world use systematic reviews, with the methodology depending on the guidelines being followed. Organisations which use systematic reviews in medicine and human health include the National Institute for Health and Care Excellence (NICE, UK), the Agency for Healthcare Research and Quality (AHRQ, USA) and the World Health Organization.

Public involvement and citizen science in systematic reviews
Cochrane has several tasks that the public or other 'stakeholders' can be involved in, associated with producing systematic reviews and other outputs. Tasks can be organised as 'entry level' or higher. Tasks include:
* Joining a collaborative volunteer effort to help categorise and summarise healthcare evidence
* Data extraction and risk of bias assessment
* Translation of reviews into other languages
A recent systematic review aimed to document the evidence base relating to stakeholder involvement in systematic reviews and to use this evidence to describe how stakeholders have been involved; thirty percent of the included reviews involved patients and/or carers. The ACTIVE framework provides a way to consistently describe how people are involved in systematic reviews, and may be used to support the decision-making of systematic review authors in planning how to involve people in future reviews. Standardised Data on Initiatives (STARDIT) is another proposed way of reporting who has been involved in which tasks during research, including systematic reviews. While there has been some criticism of how Cochrane prioritises systematic reviews, a recent project involved people in helping identify research priorities to inform future Cochrane Reviews. In 2014, the Cochrane-Wikipedia partnership was formalised. This supports the inclusion of relevant evidence within all Wikipedia medical articles, as well as other processes to help ensure that medical information included in Wikipedia is of the highest quality and accuracy.

Learning resources
Cochrane has produced many learning resources to help people understand what systematic reviews are and how to do them. Most of the learning resources can be found at the 'Cochrane Training' webpage, which also includes a link to the book ''Testing Treatments'', which has been translated into many languages. In addition, Cochrane has created a short video, ''What are Systematic Reviews'', which explains in plain English how they work and what they are used for. The video has been translated into multiple languages and viewed over 192,282 times (as of August 2020). An animated storyboard version was also produced, and all the video resources were released in multiple versions under Creative Commons licences for others to use and adapt. The Critical Appraisal Skills Programme (CASP) provides free learning resources to support people in appraising research critically, including a checklist of 10 questions to 'help you make sense of a systematic review'.

Social, behavioural and educational
In 1959, social scientist and social work educator Barbara Wootton published one of the first contemporary systematic reviews of literature on anti-social behavior as part of her work ''Social Science and Social Pathology''. Several organisations use systematic reviews in social, behavioural and educational areas of evidence-based policy, including the National Institute for Health and Care Excellence (NICE, UK), the Social Care Institute for Excellence (SCIE, UK), the Agency for Healthcare Research and Quality (AHRQ, USA), the World Health Organization, the International Initiative for Impact Evaluation (3ie), the Joanna Briggs Institute and the Campbell Collaboration. The quasi-standard for systematic review in the social sciences is based on the procedures proposed by the Campbell Collaboration, which is one of several groups promoting evidence-based policy in the social sciences.

Business and economics
Due to the different nature of research fields outside the natural sciences, the methodological steps outlined above cannot easily be applied in all areas of business research. Some attempts to transfer the procedures from medicine to business research have been made, including a step-by-step approach and the development of a standard procedure for conducting systematic literature reviews in business and economics. The Campbell & Cochrane Economics Methods Group (C-CEMG) works to improve the inclusion of economic evidence in Cochrane and Campbell systematic reviews of interventions, to enhance the usefulness of review findings as a component of decision-making. Such economic evidence is crucial for health technology assessment processes.

International development research
Systematic reviews are increasingly prevalent in other fields, such as international development research. Consequently, several donors (including the UK Department for International Development (DFID) and AusAid) are focusing more attention and resources on testing the appropriateness of systematic reviews in assessing the impacts of development and humanitarian interventions.

Environment
The Collaboration for Environmental Evidence (CEE) works to achieve a sustainable global environment and the conservation of biodiversity. The CEE has a journal titled ''Environmental Evidence'' which publishes systematic reviews, review protocols and systematic maps on the impacts of human activity and the effectiveness of management interventions.

Environmental health and toxicology
Systematic reviews are a relatively recent innovation in the field of environmental health and toxicology. Although mooted in the mid-2000s, the first full frameworks for the conduct of systematic reviews of environmental health evidence were only published in 2014, by the US National Toxicology Program's Office of Health Assessment and Translation and by the Navigation Guide at the University of California San Francisco's Program on Reproductive Health and the Environment. Uptake has since been rapid, with the estimated number of systematic reviews in the field doubling since 2016 and the first consensus recommendations on best practice, as a precursor to a more general standard, published in 2020.

Review tools
A 2019 publication identified 15 systematic review tools and ranked them according to the number of 'critical features' required to perform a systematic review, including:
* DistillerSR: a proprietary, paid web application
* Swift Active Screener: a proprietary, paid web application
* Covidence: a proprietary, paid web application and Cochrane technology platform
* Rayyan: a proprietary, free-of-charge web application
* Sysrev: a proprietary, freemium web application

Limitations
While systematic reviews involve a highly rigorous approach to synthesizing the evidence, they still have several limitations.

Out-dated or risk of bias
While systematic reviews are regarded as the strongest form of evidence, a 2003 review of 300 studies found that not all systematic reviews were equally reliable, and that their reporting could be improved by a universally agreed-upon set of standards and guidelines. A further study by the same group found that, of 100 systematic reviews monitored, 7% needed updating at the time of publication, another 4% within a year, and another 11% within two years; this figure was higher in rapidly changing fields of medicine, especially cardiovascular medicine. A 2003 study suggested that extending searches beyond major databases, perhaps into grey literature, would increase the effectiveness of reviews. Some authors have highlighted problems with systematic reviews, particularly those conducted by Cochrane, noting that published reviews are often biased, out of date and excessively long.

Limited reporting of clinical trials and data from human studies
The 'AllTrials' campaign highlights that around half of clinical trials have never reported results, and works to improve reporting. This lack of reporting has extremely serious implications for research, including systematic reviews, as it is only possible to synthesize data from published studies. In addition, 'positive' trials were twice as likely to be published as those with 'negative' results. At present, it is legal for for-profit companies to conduct clinical trials and not publish the results. For example, in the past 10 years 8.7 million patients have taken part in trials that have not published results. These factors mean that there is likely a significant publication bias, with only 'positive' or perceived favourable results being published. A recent systematic review of industry sponsorship and research outcomes concluded that 'sponsorship of drug and device studies by the manufacturing company leads to more favorable efficacy results and conclusions than sponsorship by other sources' and that there is an industry bias that cannot be explained by standard 'risk of bias' assessments. Systematic reviews that include such studies may amplify this bias, although it is important to note that the flaw lies in the reporting of research generally, not in the systematic review method.
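A minimal simulation can illustrate why unreported trials matter for synthesis: if 'positive' results are more likely to be published, pooling only the published studies overstates the true effect. All numbers below are invented for illustration:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # assume the intervention actually has no effect
N_STUDIES = 1000

# Simulate study effect estimates scattered around the true effect.
estimates = [random.gauss(TRUE_EFFECT, 0.5) for _ in range(N_STUDIES)]

# Suppose 'positive' results are twice as likely to be published,
# echoing the ratio reported above.
published = [e for e in estimates
             if random.random() < (0.8 if e > 0 else 0.4)]

print(f"mean effect, all studies:       {statistics.mean(estimates):+.3f}")
print(f"mean effect, published subset:  {statistics.mean(published):+.3f}")
# A review that can only see published studies inherits the upward bias.
```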
Poor compliance with review reporting guidelines

The rapid growth of systematic reviews in recent years has been accompanied by poor compliance with reporting guidelines, particularly in areas such as declaration of registered study protocols, declaration of funding sources, risk of bias data, issues resulting from data abstraction, and description of clear study objectives. A host of studies have identified weaknesses in the rigour and reproducibility of search strategies in systematic reviews. To remedy this issue, a new PRISMA guideline extension called PRISMA-S is being developed to improve the quality, reporting, and reproducibility of systematic review search strategies. Tools and checklists for peer-reviewing search strategies have also been created, such as the Peer Review of Electronic Search Strategies (PRESS) guidelines. A key challenge for using systematic reviews in clinical practice and healthcare policy is assessing the quality of a given review. Consequently, a range of appraisal tools to evaluate systematic reviews has been designed. The two most popular measurement instruments and scoring tools for systematic review quality assessment are AMSTAR 2 (a measurement tool to assess the methodological quality of systematic reviews) and ROBIS (Risk Of Bias In Systematic reviews); however, these are not appropriate for all systematic review types.

See also
* Critical appraisal
* Further research is needed
* Horizon scanning
* Literature review
* Living review
* Metascience

References
STARDIT report Q101116128.

External links