By Janice Tripeny
Ms Janice Tripeny presenting at Evidence 2016
Across many sectors of international development, the number of systematic reviews being commissioned by national and international policy-makers has increased dramatically over the last decade. On 21 September 2016 I had the pleasure of delivering a training workshop at Evidence 2016 to a diverse set of stakeholders interested in gaining practical knowledge about the process of commissioning, conducting or using systematic reviews. It was a great opportunity to share expertise and experiences, and I came away knowing so much more about the demand for systematic reviews in this region. In this blog, I focus on briefly answering some of the key questions about systematic reviews that were covered in the workshop.
What is a systematic review of the literature?
Systematic reviews are overviews of the best available evidence on a particular topic, undertaken according to an explicit, methodological approach. By systematically identifying, mapping, critically appraising and synthesising the results of research studies, they enable us to establish what is known from existing research. They also focus our minds on what we do not know, and can inform decisions about what further research might best be undertaken.
A systematic review is a piece of research in its own right, requiring the same degree of rigour that is expected of primary research. In the words of Ben Goldacre (2012: 2)…
…instead of just mooching through the research literature, consciously or unconsciously picking out papers here and there that support your pre-existing beliefs, [systematic reviews] take a scientific, systematic approach to the very process of looking for scientific evidence, ensuring that [the] evidence is as complete and representative as possible of all the research that has ever been done.
Why is there a need for systematic reviews?
Systematic reviews are an attractive method for decision-makers for a number of reasons. First, finding relevant, reliable research can be a challenge. Millions of articles are published yearly, making it difficult for decision-makers, who often do not have the time or training, to keep abreast of all the relevant research in an area. Second, policy-makers and other non-academic users of research are less likely to have the experience or background knowledge to appraise and contextualise research evidence. Third, some primary studies are poorly conducted, while others never make it to publication (perhaps because of negative findings). Finally, the findings of studies on the same issue, even if well designed and executed, can vary considerably. Systematic reviews offer a solution to each of these problems by accumulating evidence across many studies using an explicit, methodological approach.
How do you design and conduct one?
It is a myth that systematic reviews only look at randomised controlled trials and only answer ‘what works?’ questions. Not only can they address many types of research question on different topics, they can also draw on many different research paradigms and methods. Some basic stages of the review process, however, are common to many reviews. These are briefly discussed in turn.
- Developing the initial question and conceptual framework: The review question drives key decisions about the types of studies to include, where to look for them, how to assess their quality, and how to integrate their findings. Each concept has to be carefully defined, otherwise decisions about study eligibility will not be consistent. Any particular theory or ideology underpinning the review question must also be explicitly stated.
- Clarifying which studies are relevant: The review question and conceptual framework help determine what types of studies to include in the review.
- Identifying studies: The selection criteria for studies determine the strategy used to search for potentially relevant studies for a review. Bibliographic databases of published sources are a common place to search for relevant studies, but additional sources (e.g. websites and specialist library catalogues) will often also be searched.
- Checking that studies found meet the selection criteria: Studies identified by the search are ‘screened’ to ensure they meet the inclusion criteria.
- Mapping: Descriptive information is collected from each study and then ‘mapped’ to describe the field of interest.
- Further coding of studies for synthesis: Additional relevant information is extracted from each study: details of how the research was undertaken, to allow assessment of each study’s quality and relevance to the review question, and the study findings, so that these can be synthesised.
- Quality and relevance appraisal: Assessment of the quality and relevance of each study determines how much weight is placed on the evidence of each study included in the final synthesis.
- Synthesis: The process of integrating the findings from included studies to answer the review question involves examining the available data, looking for patterns and interpreting them.
- Communication, interpretation and application: The findings of the review are reported and interpreted so that potential users can apply them.
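To make the flow of these stages concrete, they can be sketched as a toy screening-and-synthesis pipeline. This is purely illustrative: the studies, inclusion criteria and quality weights below are invented for the example, and real reviews use dedicated tools and far richer appraisal schemes.

```python
# A toy sketch of the review stages above; all data here are hypothetical.

# Identifying studies: records returned by a (pretend) database search
studies = [
    {"id": "A", "design": "RCT",    "quality": "high",   "effect": 0.40},
    {"id": "B", "design": "RCT",    "quality": "medium", "effect": 0.10},
    {"id": "C", "design": "survey", "quality": "low",    "effect": 0.90},
]

# Screening: keep only studies meeting the (hypothetical) inclusion criterion
included = [s for s in studies if s["design"] == "RCT"]

# Quality and relevance appraisal: weight each included study's evidence
weights = {"high": 1.0, "medium": 0.5, "low": 0.0}

# Synthesis: a weight-based average of the reported effect estimates
total_weight = sum(weights[s["quality"]] for s in included)
pooled = sum(weights[s["quality"]] * s["effect"] for s in included) / total_weight

print(f"{len(included)} of {len(studies)} studies included; pooled effect ~ {pooled:.2f}")
```

The point of the sketch is that every step is explicit and repeatable: another reviewer running the same criteria over the same studies would reach the same synthesis, which is exactly what distinguishes a systematic review from "mooching through the literature".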
How do reviews make their way into policy?
Research findings are just one form of information to inform decision making, and other factors (such as politics, our values and resources) of course need to be considered. However, if research evidence is available we should aim to make good use of it. By synthesising research findings in a form which is easily accessible, systematic reviews are a key component in evidence-informed decision making. Although having an impact on policy and practice will rarely be easy, there are several international repositories of published systematic reviews to help potential users of reviews (see below).
Where can existing systematic reviews be accessed?
Published systematic reviews and protocols for ongoing reviews can be accessed via specialist databases, protocol registries and review collaboration websites, including the EPPI-Centre, Cochrane Collaboration, Campbell Collaboration, Joanna Briggs Institute, the International Initiative for Impact Evaluation (3ie), and PROSPERO.
Where can additional resources be found?
Guidelines and training on how to conduct systematic reviews in the social sciences are outlined by the Campbell Collaboration, in health by the Cochrane Collaboration, and in environmental sciences and conservation by the Collaboration for Environmental Evidence.
Other useful resources include: Waddington et al. (2012). How to do a good systematic review of effects in international development: A tool kit. Journal of Development Effectiveness, 4(3), 359-387.