Research article

Rapid reviews as an emerging approach to evidence synthesis in education

Authors
  • Sabine Wollscheid (Nordic Institute for Studies in Innovation, Research and Education (NIFU), Oslo, Norway)
  • Janice Tripney (UCL Institute of Education, London, UK)

Abstract

Rapid reviews using abbreviated systematic review methods are of increasing importance for evidence-informed decision-making in education, although there is little guidance about the most suitable approach. Three recently completed rapid review reports are compared to inform discussions on the utility of this type of review in education and to highlight appropriate methods for producing evidence syntheses in a limited time frame. Rapid review methods need to be chosen to fit the needs of the review, which involves: thinking broadly about different kinds of team experience and expertise; estimating the size and nature of the literature to be reviewed; considering the review purpose and nature of the topic; choosing an appropriate synthesis method for the review purpose, evidence base and reviewers’ expertise; fully describing the review approach, and discussing the potential limitations of chosen methods; and understanding the anticipated audiences and tailoring outputs accordingly. Rapid reviews to address urgent and high-priority questions provide the benefits of timeliness and reduced resource requirements. However, it is crucial to understand caveats and limitations to the rapid conduct of evidence syntheses for decision-making purposes. This article offers guidance to support researchers, postgraduate students and commissioners who wish to conduct rapid reviews in a transparent and systematic way, addressing complex questions of relevance to evidence-informed decision-making in education.

Keywords: rapid review, acceleration methods, education, methodology, systematic review

How to Cite: Wollscheid, S., & Tripney, J. (2021). Rapid reviews as an emerging approach to evidence synthesis in education. London Review of Education, 19(1). https://doi.org/10.14324/lre.19.1.32

Rights: Copyright 2021 Sabine Wollscheid and Janice Tripney

Published on: 1 December 2021 (peer reviewed)

Introduction

Systematic reviews are today conducted across diverse fields of inquiry, and with the emergence of organisations such as the Campbell Collaboration (Littell and White, 2018) they are increasingly used to inform decision-making in education. When done well, they provide a vital tool to find what is known (and not known) from previous research (Gough et al., 2017). Their aim is to produce clear messages about the reliable evidence available on a given topic by identifying and appraising individual studies and pooling their results. A fundamental principle is to avoid misrepresentation of the knowledge base by using a methodical and explicit approach. However, the focus in systematic reviews on transparency, rigour and coherence in the approach used means that they have one main disadvantage: they can be extraordinarily resource and time intensive, depending on the methods chosen (Bullers et al., 2018).

Rapid reviews have emerged as an efficient alternative, and they are now widely used to inform decision-making in healthcare. Their conduct implies a shorter time frame, with implications for quality and trade-offs during the review process. Tricco et al. (2015) found significant variation in completion time and methodological and reporting quality, ambiguous definitions and highly heterogeneous approaches. Alongside other authors (for example, Haby et al., 2016) they highlighted the need for better understanding of methodologies for conducting rapid reviews of research for evidence-informed decision-making.

Educational research differs in its assumptions and philosophies from research in the natural sciences, with important implications for the sorts of questions that can be asked and the methods that can be used. Research questions are often more complex, exploring plausible mechanisms between different concepts, rather than testing for linear causalities (Gough et al., 2017). Thus, we assume that epistemological differences and the scope of the review question have important consequences for the design of a systematic review. There are now methodological recommendations for the development of rapid reviews for health policy and practice (for example, Plüddemann et al., 2018; Garritty et al., 2021), but similar guidance to support the conduct of rapid reviews for decision-making purposes in education is lacking.

To inform the methodological discussion of rapid reviews in education, this article had three objectives: to perform a scoping search of the rapid review literature in education; to compare a purposive sample of rapid reviews in terms of context, methodological approach, and strengths and limitations; and to propose issues to consider when selecting a rapid review approach for use in education.

First, we briefly describe approaches for reviewing, followed by an outline of our methods. Next, findings from a comparison of three rapid reviews are presented to highlight different ways of undertaking a rapid review for future studies. Several key issues when selecting a rapid review approach are outlined. Finally, we offer some concluding remarks in the context of the COVID-19 pandemic.

Approaches to reviewing

Since evidence reviews can have various purposes and forms, we present a brief methodological exploration of systematic reviews compared to rapid reviews.

Systematic reviews

Systematic reviews use rigorous and explicit methods to identify, appraise and synthesise studies addressing a structured question. Each stage requires specific decisions, dependent on the scope and epistemology underpinning the question, as well as the time and resources available (Gough et al., 2017). Although often presented as a linear journey, in practice these stages are iterative.

The first stage comprises review team formation, stakeholder engagement activities and defining the scope of the review (including specifying the questions it will address and selection criteria). Initial conceptual and definitional work forms the basis of the review’s conceptual framework.

The second stage involves searching for and selecting studies. The search strategy includes a description of relevant sources, core keywords and an outline of benefits and costs for different combinations of sources. Selection of studies is made against prespecified eligibility criteria.

The third stage involves describing the studies that have met the review’s inclusion criteria to gain a high-level overview of the research literature relevant to the review. Codes relate to variables such as country, types of outcomes and study design. The emphasis is on summarising what evidence there is, rather than what the evidence says (Saran and White, 2018). The mapping stage forms a key step in analysing a group of studies, irrespective of whether this is a precursor to a synthesis. Given the large number of studies that may be identified, the map can be used to help reviewers and other stakeholders decide on which areas of the evidence to focus in greater depth at a later stage (James et al., 2016).

In the fourth stage, reviewers undertake more detailed coding, going beyond simply describing a research field. Data extraction involves gathering individual study findings for synthesis suited to answering the review question. Quality assessment also prepares studies for analysis by ensuring that ‘only the most appropriate, trustworthy and relevant studies are used to develop the conclusions’ (Gough et al., 2012a: 154). Dependent on their question, systematic reviews vary in their approach to quality assessment, but it is widely accepted that all should ‘aim to avoid drawing misleading conclusions because of problems in the studies they contain’ (Gough et al., 2012b: 4). Using standardised coding frameworks helps ensure consistency, while duplicating the data extraction and quality assessment processes (where at least two people work independently) reduces both the risk of making mistakes and the possibility that coding is influenced by a single reviewer’s biases (Li et al., 2021).
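
Where duplicate screening or coding is used, inter-reviewer agreement can also be quantified. As a minimal, hypothetical sketch (none of the works cited here mandates a particular statistic), Cohen's kappa could be computed over the two reviewers' inclusion decisions:

```python
# Hypothetical illustration: quantifying agreement between two independent
# coders with Cohen's kappa (one common choice, not prescribed by this article).
from sklearn.metrics import cohen_kappa_score

# Inclusion decisions (1 = include, 0 = exclude) made independently by two
# reviewers on the same ten studies (invented data).
reviewer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
reviewer_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
# Values above roughly 0.6 are often read as substantial agreement.
print(f"Cohen's kappa: {kappa:.2f}")
```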

As the final stage, the synthesis of findings can lead to new knowledge based on information collected from individual studies. Given the diversity in synthesis methods (Barnett-Page and Thomas, 2009), it is useful to distinguish between aggregative and configurative approaches. Aggregative reviews are closely related to theory testing, and studies included are often relatively homogeneous. Configurative reviews are likely to include more heterogeneous studies, and they are closely related to hypothesis generating and theory exploration. Most reviews, however, include some elements of configuration and aggregation, reflecting study design and methods of analyses (Gough et al., 2017).

Rapid reviews

The term ‘rapid review’ denotes an acceleration in the process of reviewing a body of literature. A wide range of terminology is used to describe this approach to evidence synthesis, such as ‘brief review’, ‘scoping review’, ‘rapid evidence synthesis’ (Tricco et al., 2015) and ‘restricted systematic reviews’ (Plüddemann et al., 2018). These terms indicate a hastening of the synthesising process, denoted by the notion ‘rapid’, and they suggest a restriction in the scope, and potentially the quality, of the work, through use of adjectives such as ‘restricted’.

Rapid reviews should be understood as a spectrum of different products. For health, Featherstone et al. (2015) found differences in their purpose and aims, as well as in their methods and definition. While they are often regarded as a variation of a systematic review, balancing time and resource constraints against considerations of bias, some do not follow a specific methodology. Those rapid reviews that closely resemble systematic reviews adapt established methodological approaches by accelerating one of the common stages of a systematic review in some way (for example, the database search) or by leaving out a stage altogether (for example, quality assessment). Before review initiation, researchers should consider the steps that might be reduced compared to a standard systematic review, and the potential consequences of taking a different, more pragmatic, approach. Final decisions about synthesis methods take place following assessment of the evidence base.

By rapid reviews here, we mean those systematic reviews that apply fewer resources and are less ambitious than a full systematic review, while still aiming to limit bias and ensure transparent reporting (Schünemann and Moja, 2015). Given that rapid reviews have no established definition, terminology should be chosen mindfully to reflect the nature of the review undertaken. Most important, however, is to concentrate on a full description of the review stages (Gough et al., 2012b).

Finally, one should keep in mind that there is always a trade-off between time and resources to conduct a review, and the comprehensiveness of the final product.

Table 1 presents the common stages in a systematic review, and provides examples of acceleration strategies that might be used in rapid reviews. As noted by Thomas et al. (2013), their applicability will depend both on the individual context and on the expertise of reviewers.

Table 1:

Stages in (systematic) rapid reviews and examples of acceleration strategies (Source: Adapted from Thomas et al., 2013)

Review initiation, review questions
Review team and other stakeholders:
  • use an experienced team already in place

  • use a large research team

  • limit stakeholder involvement in the review process.

Scope of the review and the questions it will address:
  • do not include more than one question for a given topic

  • focus the review on narrowly defined questions

  • conduct a review of existing systematic reviews on the topic if they exist

  • review within disciplinary boundaries

  • use commonly accepted definitions for core concepts (for example, for interventions and outcomes), if they exist

  • restrict the types of evidence which will be covered (for example, include only quantitative evidence)

  • specify publication date, publication format and/or language restrictions in the eligibility criteria for studies

  • exclude studies where only a subset of participants or interventions covered by the study are eligible for inclusion in the review.

Identification of relevant studies
  • Discuss draft search strategy with a research retrieval specialist.
Targeted searching – selected databases and specific searches:
  • use a single search for overlapping review questions

  • use primarily electronic bibliographic databases

  • use primarily well-indexed bibliographic databases

  • use publication date, publication format and/or language restrictions in the search (where these are included in the eligibility criteria for studies)

  • apply Google translation to titles and abstracts for initial screen

  • utilise specialist registries for impact evaluations and systematic reviews.

Screening shortcuts:
  • use text-mining technologies to semi-automate the screening process

  • use text mining to prioritise which items are screened

  • screen for inclusion using only one reviewer

  • have one reviewer undertake the screening, but a second reviewer double-check a random sample of citations and/or those flagged up for additional scrutiny

  • prioritise screening: simple concepts first, with large team; screen remaining citations against complex concepts with small team.

  • Prioritise reports that are easy to access and reports available electronically.
Description of study characteristics (mapping the research field)
  • Omit this stage in the review.

  • Restrict the number of study characteristics to be coded at this stage.

  • Use (semi)automated methods to apply keywords to individual studies.

  • Map included studies based on information in titles and abstracts only: this saves time retrieving documents that will not be used in the later synthesis.

  • Map only a representative sample of included studies: this can be enough to facilitate a discussion with decision makers about priority areas of research.
Detailed coding (data extraction and quality assessment)
  • Use a standardised data extraction form which has been piloted elsewhere.

  • Limit data extraction to a minimal set of required data items.

  • Use (semi)automated data extraction methods.

  • Data extraction done by a single reviewer.

  • Omit quality appraisal of included studies.

  • Quality appraisal done by a single reviewer.

Use a simple tool for assessment of risk of bias in included studies – for example:
  • use a ranking or hierarchy of study designs to compare across individual studies

  • adapt an existing quality assessment tool, using only key elements.

Synthesis
  • Reduce number of studies for synthesis.

  • Synthesise evidence narratively.

  • Choose methods of evidence synthesis involving little conceptual innovation (for example, thematic synthesis where concepts are largely predetermined).

  • Choose a pragmatic method (for example, framework synthesis) when synthesising large numbers of individual studies.

  • If meta-analysis is appropriate, limit to the most important outcomes.

Methods

We conducted a scoping literature search of rapid reviews in education to inform our selection of three case studies. Selection criteria were developed to include studies published between January 2000 and December 2020 on educational problems or theories in the context of preschool education, compulsory education or higher education. Studies written in English, Norwegian, Swedish or German were eligible; these languages reflect the authors’ nationalities and the languages they speak.

We searched six electronic bibliographic databases: ERIC, Australian Education Index, ASSIA, British Education Index, Education Abstracts and Web of Science (selected collections). Multiple alternative search terms for ‘rapid review’ were combined with terms related to ‘education’. The search was performed in English, with wild cards and truncation used to capture British and American English spellings. We executed the following search: (‘rapid review*’ OR ‘scoping review*’ OR ‘rapid evidence assessment*’ OR ‘restricted review*’ OR ‘preliminary review*’ OR ‘mapping review*’ OR ‘rapid synthesis’ OR ‘rapid evidence synthesis’) AND (‘education*’ OR ‘school*’). A publication date filter was applied. To identify grey literature and other relevant studies that might have been missed by the database search, we manually examined the reference lists of included studies and used the Google search engine (first 100 hits). Study records were managed in EPPI-Reviewer 4, a web-based application for research synthesis (Thomas et al., 2020).
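
For illustration, the Boolean string reported above can be assembled programmatically, which makes it easier to adapt for each database interface. The sketch below simply reconstructs the reported query; how the date filter is applied varies by database, so that part is schematic:

```python
# Illustrative sketch: assembling the Boolean search string reported in the
# text so it can be reused across database interfaces.
rapid_terms = [
    '"rapid review*"', '"scoping review*"', '"rapid evidence assessment*"',
    '"restricted review*"', '"preliminary review*"', '"mapping review*"',
    '"rapid synthesis"', '"rapid evidence synthesis"',
]
education_terms = ['"education*"', '"school*"']

query = f"({' OR '.join(rapid_terms)}) AND ({' OR '.join(education_terms)})"
print(query)

# The publication date filter matches the selection criteria (2000-2020);
# field names differ by database, so this dictionary is schematic only.
date_filter = {"from_year": 2000, "to_year": 2020}
```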

In total, 1,398 items were returned by the database search, and a further 15 items through the manual search. After duplicates were removed, 1,282 items were screened using the selection criteria. The title and abstract of each citation were read to determine relevance. If no abstract was available, the executive summary or introduction was used. The initial screening process was conducted by one researcher. Full-text versions of seemingly eligible studies were obtained, with each being reviewed and confirmed as relevant by two researchers independently.
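
Deduplication here was handled within EPPI-Reviewer; as a rough sketch of the underlying idea (with invented records), duplicate citations can be collapsed on a normalised title-plus-year key:

```python
# Hypothetical sketch: removing duplicate records by normalised title and year
# before screening. (The actual deduplication was done in EPPI-Reviewer 4.)
import re

records = [
    {"title": "Rapid reviews in education: a scoping study", "year": 2015},
    {"title": "Rapid Reviews in Education: A Scoping Study.", "year": 2015},
    {"title": "Employer involvement in schools", "year": 2012},
]

def normalise(title: str) -> str:
    # Lower-case and strip punctuation/whitespace so trivial variants match.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

seen, unique = set(), []
for rec in records:
    key = (normalise(rec["title"]), rec["year"])
    if key not in seen:
        seen.add(key)
        unique.append(rec)

print(len(unique), "unique records")  # -> 2
```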

The preliminary scoping of the literature identified 81 relevant reviews published in the years 2002 to 2020. All studies were published in English (although some were written by non-native English speakers).

A brief examination found that they were heterogeneous in terms of research questions and methodological approach. They appear increasingly to be commissioned by governmental agencies and ministries in the education sector.

Informed by a snowballing approach, a purposive sample of three rapid reviews was selected. As case studies for in-depth review, these reviews were chosen to reflect diversity in methodological approach, with each study employing different methods to suit the review question and the evidence base.

Data were extracted on the following variables: research question and aims; the main terminology; rationale for choice of method; number of included studies; the literature search, including keywords, data sources and search strategy; synthesis methods applied; and main results, strengths and limitations of the rapid review methods. The quality of the three reviews was assessed using the AMSTAR checklist (http://amstar.ca/Amstar_Checklist.php).
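
For illustration only, the extraction variables listed above could be held in a simple record type so that coding stays consistent across reviewers; the field names below are our own shorthand, not a published instrument:

```python
# Illustrative sketch: the extraction variables listed above expressed as a
# simple record type (hypothetical field names, not a published instrument).
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    review_id: str
    research_question_and_aims: str
    main_terminology: str
    rationale_for_method: str
    n_included_studies: int
    search_keywords: list = field(default_factory=list)
    data_sources: list = field(default_factory=list)
    search_strategy: str = ""
    synthesis_methods: str = ""
    main_results: str = ""
    strengths_and_limitations: str = ""
    amstar_score: str = ""  # result of applying the AMSTAR checklist
```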

Three examples of rapid reviews in education

The three examples used differing methods to suit the evidence base for each review, taking into consideration time constraints and other factors. We present some core information about their rationale and context, scope and target audience.

Review 1: employment (Burge et al., 2012)

This policy-oriented review was funded and carried out by the National Foundation for Education Research (NFER), a leading independent provider of education research that exists to improve outcomes for young people. It is one of four reviews related to the theme ‘From Education to Employment’ that collectively identify strategies to support young people at risk of becoming NEET (not in education, employment or training) to make effective transitions into learning or employment. The study scope included five research questions: (1) In what ways do employers engage with primary and secondary schools in the UK and abroad?; (2) What are the key features and principles of successful employer involvement?; (3) What is the impact of employers’ involvement with schools on young people progressing to education, employment or training after compulsory education?; (4) What gaps are there in the evidence base?; and (5) What are the implications of the findings of this review for policy and practice?

Review 2: school closure (EEF, 2020)

The intended audiences for this review are teachers and key decision makers in schools, as well as policymakers working with innovations in education. It was funded and carried out by the Education Endowment Foundation (EEF), an independent charity with the vision to break the link between family income and educational achievement. The review is authored by a large, experienced team from the EEF and other institutions. Drawing on guidelines from the Cochrane Rapid Reviews Methods Group (Garritty et al., 2021), the Civil Service Rapid Evidence Assessment toolkit (Government Social Research Service, 2009) and Cochrane guidance on overviews of reviews (Pollock et al., 2020), this rapid evidence assessment was motivated by the COVID-19 pandemic and the resulting school closures in many countries. Given the large heterogeneity of the evidence, it aimed to include all relevant evidence, focusing specifically on the differential impact of school closure on disadvantaged students. There are three closely related research questions: (1) What evidence currently exists about the impact of different kinds of school closure (for example, due to summer holidays) on differential academic attainment for disadvantaged/others, and on other outcomes related to education (for example, impact on IQ)?; (2) What factors moderate the impact?; and (3) What evidence and theory helps us to understand the mechanisms by which school closure leads to learning loss and widening of attainment gaps?

Review 3: digital technologies (Major et al., 2018)

This review claims to be the first to link school-based classroom dialogue and digital technology, an emerging and relatively new area of research. Supported by the Research Council of Norway, and published in the international journal Education and Information Technologies, the review is aimed at researchers, policymakers and practitioners. The guiding questions were formulated broadly to incorporate an extensive base of existing literature: (1) In what ways does research suggest that use of digital technologies enhances productive classroom dialogue?; and (2) What challenges are reported that may impact on the successful use of digital technology to support dialogic teaching and learning?

Comparison across the three review examples

In the following, we move through each stage of the review process and compare the three examples according to the various acceleration strategies adopted by the different review teams. Table 2 provides an overview of the key details.

Table 2:

Summary characteristics of the three reviews: overview of acceleration strategies (Source: Authors, 2021)

Reviews compared: Review 1 (Burge et al., 2012); Review 2 (EEF, 2020); Review 3 (Major et al., 2018).

Time frame
  • Review 1: Approximately eight months
  • Review 2: Approximately six weeks
  • Review 3: Not reported, but the search was completed June 2016–March 2017

Review initiation, review questions

Use an experienced team already in place / use a large research team
  • Review 1: Yes – core team of three people who acknowledge input of eight NFER colleagues (for example, for conceptual guidance and research retrieval)
  • Review 2: Yes – core team of four people who acknowledge support of ten others drawn from EEF and beyond; each has clearly defined methodological and/or topic expertise
  • Review 3: Unclear – team of five education researchers located at two universities

Limit stakeholder involvement
  • Review 1: Yes
  • Review 2: Yes
  • Review 3: Yes

Focus review on narrowly defined questions
  • Review 1: No
  • Review 2: Yes
  • Review 3: No

Use established definitions for core concepts
  • Review 1: Yes
  • Review 2: Yes
  • Review 3: Yes

Limit to English-language studies
  • Review 1: Unclear, but highly likely
  • Review 2: Yes
  • Review 3: Yes

Limit to peer-reviewed studies
  • Review 1: No
  • Review 2: No
  • Review 3: Yes

Specify publication date limits
  • Review 1: Yes
  • Review 2: Yes
  • Review 3: Yes

Proceed without publishing a peer-reviewed protocol
  • Review 1: Yes – implicit, as no reference to protocol
  • Review 2: Yes – protocol published but not formally peer reviewed
  • Review 3: Yes – protocol developed but publication details unknown

Identification of relevant studies

Targeted searching: selected databases/most relevant sources, and ‘specific’ (rather than ‘sensitive’) searches
  • Review 1: Systematic search of key databases (2007–11) and of websites (2007–11)
  • Review 2: Authors report that the search process was systematic but ‘far from comprehensive, given constraints of time’ (EEF, 2020: 9). Search was conducted for literature published 1994–2020 in these databases: Web of Science; ERIC; Google Scholar. Searches were refined using filters corresponding to inclusion criteria (for example, publication date)
  • Review 3: Systematic search for studies published 2000–16: three databases (BEI, ERIC, Scopus); backward snowballing; contacting network members

Use one reviewer to select studies
  • Review 1: Not reported
  • Review 2: Yes – reviewer had option of marking an item as unclear for review by second reviewer
  • Review 3: Yes – another team member checked a sample of included studies

Use text-mining technologies
  • Review 1: No
  • Review 2: Yes – to prioritise which items are screened (title/abstract screening only)
  • Review 3: No

Description of study characteristics (mapping the research field)

Omit this stage in the review
  • Review 1: Yes
  • Review 2: Yes
  • Review 3: No – this article contains a high-level overview of existing research

Map only a representative sample of included studies
  • Review 1: N/A
  • Review 2: N/A
  • Review 3: No – a range of data were extracted from each included study

Detailed coding (data extraction and quality assessment)

Use a standardised data extraction form
  • Review 1: Not reported
  • Review 2: Not reported
  • Review 3: Not reported

Limit data extraction to a minimal set of required data items
  • Review 1: Not reported
  • Review 2: Yes
  • Review 3: Not reported

Use one reviewer to extract data for synthesis
  • Review 1: Not reported
  • Review 2: Partially – two researchers (double data extraction on 10 per cent of studies, randomly selected)
  • Review 3: No – data extraction was conducted by two reviewers independently

Use a simple quality assessment tool
  • Review 1: Unclear – tool used not reported
  • Review 2: Unclear – tool used not reported
  • Review 3: N/A – no quality assessment of included studies was undertaken

Use one reviewer to appraise study quality
  • Review 1: Not reported
  • Review 2: Partially – completed by one reviewer and checked by another
  • Review 3: N/A

Synthesis

Synthesise evidence narratively
  • Review 1: Yes
  • Review 2: No
  • Review 3: Yes

Choose synthesis method involving little conceptual innovation
  • Review 1: Yes
  • Review 2: N/A
  • Review 3: Partially – an adapted version of an existing synthesis method was applied

Limit to the most important outcomes for meta-analysis
  • Review 1: N/A
  • Review 2: Yes – scale of the review was reduced considerably after decision to focus on the effect of school closure on the gaps between disadvantaged students and others, rather than on the overall effect on learning loss
  • Review 3: N/A

AMSTAR score
  • Review 1: 1.0–2.0
  • Review 2: 7.0
  • Review 3: 6.0

Review initiation and review questions

Since rapid reviews are designed to deliver information in a relatively short period of time, an experienced team ideally needs to be in place at the outset. Restricted timescales can make recruitment and appointment of a new researcher very difficult. Reporting a time frame of eight months, Review 1 (employment) was carried out by researchers with expertise in all the areas expected of a review team. This study was part of an NFER programme of work to inform policy and practice. Review 2 (school closure) aimed to publish its findings within six weeks of starting the review. It used a relatively large review team, consisting of four experienced senior researchers and an additional ten people for screening and data extraction. No information is reported on the time frame for conducting Review 3 (classroom dialogue and digital technologies), or on team experience. None of the three reports describe involvement of external stakeholders (a decision which can speed the early development stages).

All three examples review relatively broad research areas, but they are scoped differently in terms of overall review objectives and specific questions. Review 1 had an intervention focus and addressed diverse types of question relating to strategies supporting young people at risk of becoming NEET. While a key aim is to investigate the impact of employer involvement on young people’s achievement and progression, additional questions go beyond this to capture the ways in which employers engage with schools, and the features and principles of successful school–business partnerships. Review 2 was also conducted to inform policy decisions. Initiated in the context of the COVID-19 pandemic, it answers a series of deliberately narrow questions relating to the impact of different kinds of school closure on educational outcomes, the effects of moderating factors (for example, age), and the underlying mechanisms of change. By contrast, the main purpose of Review 3 was to examine a body of research to provide an overall picture of the available evidence and to highlight knowledge gaps in the field. Such results generally allow for the identification of future research initiatives, including whether a full systematic review is needed, rather than directly informing policy or practice. Review 3 applied the scoping review approach of Arksey and O’Malley (2005), while extending the conventional parameters of this type of review through a thematic synthesis of the study findings.

It is recommended that authors develop the scope of their review with care, even when selecting concepts that have been well defined elsewhere (Thomas et al., 2020). This helps prevent conceptual incoherence and irrelevance, and it will save time in later review stages. All three examples discussed here used existing definitions for core concepts, but to different degrees. Review 1 was to some extent able to proceed along predefined boundaries in terms of the concept of NEET, although this phenomenon (disengagement among youth) is understood and named differently in countries other than the UK (Holte et al., 2019). Review 2 examined school closure and a wide range of academic and non-academic outcomes. Studies in the meta-analysis defined and operationalised concepts in different ways. Review 3 focused on a more complex field of inquiry, combining the concepts of dialogue, schooling and digital technology. The review team reported spending time thinking through concepts in order to operationalise the search strategy effectively.

One of the main similarities between the reviews was the decision to limit inclusion to certain types of study. All three specified language, publication date and/or document format restrictions in the eligibility criteria for studies. For instance, Review 3 included only peer-reviewed studies, whereas the other reviews also considered grey literature. While at least two of the reviews developed a protocol as the basis for the ongoing research, none appears to have published or registered a formally peer-reviewed protocol before proceeding further with the review.

Identification of relevant studies

All three reviews addressed multiple questions and incorporated different types of evidence, so searching for and selecting studies was more complex than for an effectiveness review answering a single question (Cooper et al., 2018). Although the level of detail reported about the search strategy varied across the reviews, they used broadly similar strategies, with completeness of searching determined by both time and scope constraints. None of the reviews aimed for exhaustive searching; instead, each used a targeted and specific search approach limited to key resources (for example, academic databases). Reviews 2 and 3 both reported extensive information on the search strategy. Review 1 provided few details on this or on the study selection process. There are indications that, to save time, both Review 2 and Review 3 chose not to duplicate the study selection process and used only one reviewer to screen abstracts. However, because of the substantial potential for errors, both reviews included quality assurance and control measures to help alleviate concerns relating to the misapplication of the eligibility criteria. The researcher in Review 2 had the option of marking items as ‘unclear’ for review by a second person; in Review 3, another team member double-checked a sample of included studies. Review 2 also used text-mining technologies to speed up the selection of studies. A ‘priority screening’ function was used to order results by probability of inclusion (based on a training set), and screening stopped once relevant studies were no longer being identified (O’Mara-Eves et al., 2015).
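
To make the priority screening idea concrete, the hypothetical sketch below ranks unscreened records by a model’s predicted probability of inclusion, learned from an initial manually screened sample. EPPI-Reviewer implements this internally; scikit-learn and the example records are used here purely for illustration:

```python
# Hypothetical sketch of 'priority screening': rank unscreened records by
# predicted probability of inclusion, trained on already-screened items.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

screened_texts = [
    "impact of school closure on attainment gap",      # included
    "summer holiday learning loss in primary pupils",  # included
    "hospital staffing during influenza outbreaks",    # excluded
    "agricultural subsidies and crop rotation",        # excluded
]
screened_labels = [1, 1, 0, 0]

unscreened_texts = [
    "remote schooling and disadvantaged students",
    "fertiliser prices in the 1990s",
]

vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(
    vectoriser.fit_transform(screened_texts), screened_labels
)

# Screen the highest-probability items first; in practice, screening stops
# once relevant studies are no longer being identified.
scores = model.predict_proba(vectoriser.transform(unscreened_texts))[:, 1]
for text, score in sorted(zip(unscreened_texts, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```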

Description of study characteristics (mapping the research field)

In response to time and resource constraints, Reviews 1 and 2 both opted to omit the mapping stage. Neither review describes the extent and nature of the included literature before examining it in depth. Review 3 chose not to employ either of the two main ways of accelerating this stage: collecting information based on titles and abstracts only, and mapping only a representative sample of included studies. Instead, it mapped all relevant evidence to highlight areas of saturation and research gaps in an emerging field, with the suggestion that full-text reports were used to collect information.

Detailed coding (data extraction and quality assessment)

None of the three examples reported use of a standardised data extraction form, which, as well as providing consistency in a review, has the added advantage of saving time. Review 3 refers to two reviewers independently coding each study, suggesting no acceleration strategy was used. In contrast, Review 2 reduced time spent on data extraction by using a single reviewer, with a sample (10 per cent) of studies double data extracted so that coding reliability could be assessed. This review also saved resources by limiting data extraction to a minimal set of required coding items. Information on the data extraction process is missing for Review 1.
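
As a small illustration of Review 2’s sampling step (our sketch; the exact procedure is not reported in detail), a random 10 per cent of included studies can be drawn for double data extraction:

```python
# Hypothetical sketch: randomly selecting 10 per cent of included studies for
# double data extraction, as Review 2 reports doing.
import random

study_ids = [f"study_{i:03d}" for i in range(1, 59)]  # e.g. 58 included studies
random.seed(42)  # fixed seed so the selection is reproducible
double_extract = random.sample(study_ids, k=max(1, round(len(study_ids) * 0.10)))
print(sorted(double_extract))
```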

For all three reviews, the level of detail reported about the quality assessment of included studies was poor. The protocol for Review 2 indicates that individual studies were judged as ‘low’, ‘medium’ or ‘high’ quality (that is, ratings that reflect the extent of confidence in the estimate of effect). Review 1 reports the use of three categories for quality assessment: ‘strong’, ‘moderate’ and ‘impressionistic’ evidence. Neither of these reviews provides details of the tool used or offers a statement of its validity (Majid and Vanstone, 2018). Quality assessment is generally not a priority in scoping reviews, and Review 3 did not assess each of the included studies. Although not explicitly stated, the authors may have used peer-reviewed publication as a proxy for good quality.

Synthesis

None of the three examples reports using abbreviated synthesis methods. Yet they all accelerated this stage of the review process to some extent. Review 2 applied an aggregative approach using meta-analysis, which was limited to the most important outcomes to ensure that the review could be completed within the timeline for the project. The authors also modified the original research questions to reduce the scale of the review from 58 to 11 included studies. Often, synthesising evidence narratively can save time in a rapid review context, and Reviews 1 and 3 both took this approach. To cope with the large number of eligible studies, Review 3 also chose to adapt an existing thematic synthesis method (Thomas and Harden, 2008) involving inductive line-by-line coding of text to generate descriptive themes. This process was undertaken by two reviewers working independently, then collaborating with other team members to revise, refine and finalise themes. No attempt was made to further theorise the themes presented.
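
For readers unfamiliar with the arithmetic behind an aggregative synthesis, the sketch below computes a fixed-effect, inverse-variance pooled estimate from invented effect sizes; Review 2’s actual model and software are not reported here:

```python
# Illustrative sketch: fixed-effect, inverse-variance meta-analysis, the basic
# arithmetic behind an aggregative synthesis (hypothetical data).
import math

# Hypothetical study effect sizes (e.g., standardised mean differences) with
# their standard errors.
effects = [0.20, 0.35, 0.10]
std_errors = [0.08, 0.12, 0.05]

weights = [1 / se**2 for se in std_errors]  # w_i = 1 / SE_i^2
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.3f} to {pooled + 1.96 * pooled_se:.3f})")
```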

Table 3 compares the challenges, strengths and limitations of the chosen rapid review approaches across the three review examples.

Table 3:

Challenges, strengths and limitations of chosen rapid review approaches (Source: Authors, 2021)

Reviews compared: Review 1 (Burge et al., 2012); Review 2 (EEF, 2020); Review 3 (Major et al., 2018).

Main challenges faced by reviewers
  • Review 1: Challenges due to review objectives – to address five research questions, among them one research question on effects of interventions. Challenges due to evidence base – to assess and synthesise information from diverse documents (54 items), comprising research reports, literature reviews, case studies and guides.
  • Review 2: Challenges due to review objectives – to produce publication within six weeks of starting the review; to conduct an effectiveness review with sub-questions on moderating variables and causal mechanisms. Challenges due to evidence base – unmanageably large number of hits (around 7,000) identified after initial search; using statistical meta-analysis (an invariably demanding and labour-intensive process).
  • Review 3: Challenges due to review objectives – to review studies in an emergent field, with research questions as wide in scope as possible. Challenges due to evidence base – large number of eligible studies (72 items) identified through the literature search; diversity of included studies and themes that might be examined.

Strengths of chosen rapid review approach
  • Review 1: Consultation with experts (methodological and topic) for additional guidance and support. Quality of evidence considered when formulating the conclusions of the review.
  • Review 2: Transparent description of search process. Transparent decisions during the review process, for example, in terms of main outcome (gap between disadvantaged students and others). Refined scope of review after initial search was used to facilitate a discussion with decision makers about priority areas of research; some research questions dropped at an early stage because of constraints of time. Application of meta-analysis. Limitations of the review methods used are outlined and considered when formulating the review’s conclusions.
  • Review 3: Use of personal contacts and snowballing to mitigate impact of searching for an emerging and fast-growing topic of research. Duplication of data extraction. Adapted an existing thematic synthesis method to fit timescale/resources of the review. Limitations of the methods used are outlined.

Limitations of chosen rapid review approach
  • Review 1: Methods used are not clearly described: selection criteria not clearly stated; no databases mentioned for the search; no combination of keywords given for the search strategy. Collection of information on study characteristics was incomplete. Limitations of the review methods used not considered when formulating the conclusions of the review.
  • Review 2: Systematic, but not comprehensive, search process. Collection of information on study characteristics was incomplete.
  • Review 3: Only considered peer-reviewed, English-language studies.

Issues to consider when selecting a rapid review approach

There is no set approach for conducting rapid reviews in education. Existing methods generally need to be adapted and refined for the specific features of individual review projects. A range of factors influenced the ways in which the authors of the three examples presented in this article approached the review process, suggesting seven broad issues to consider when planning a rapid review and deciding which approaches to adopt.

Consider temporal, financial and personal resources

Timescale and other conditions set by the commissioning body can affect decisions regarding the breadth and depth of the review, shaping processes and outputs accordingly:

  • As a rule, it is difficult to review broad areas of research quickly. Rapid reviews tend to address highly refined research questions, as these result in a smaller set of studies being required in the synthesis. However, reviewers should be wary of defining the scope and focus of a rapid review so narrowly that it fails to meet user needs.

  • For rapid reviews in complex fields of policy inquiry or less-established research areas that lack accepted or common operational definitions of concepts, time and resources may need to be weighted towards the earlier stages of the process.

  • Reviews of controlled trials may take less time to complete than other types of review because there are more supporting resources (for example, critical appraisal tools) available to review this type of research.

  • If the review team lacks previous experience of carrying out systematic reviews, it might be best to undertake a systematic map of the research field that is limited to describing a set of studies. However, a rapid review lacking appraisal of study quality and synthesis of findings may not align with policy needs.

Think broadly about different kinds of team experience and expertise

In selecting a review team, it is important to consider variety and breadth of expertise among its membership:

  • It is good to have a mix of methodological (knowing how) and substantive (knowing what) experience and expertise within review teams.

  • Stakeholder involvement is not always a cause of delay, especially if the reviewers draw on existing networks and have experience of collaborative work. Support from advisory groups may help focus the resources and time available.

  • Having review team members with prior knowledge of the field, especially conceptual issues relating to the topic areas under review, can be important in making progress under tight timescales.

  • Take the expertise of team members into account when allocating roles and responsibilities. For example, use experienced team members to check the accuracy of what novice reviewers have done and to resolve coding disagreements.

Estimate the size and nature of the literature to be reviewed at the outset

It is important to have some understanding of the amount, type and variation of evidence available before deciding which of the reviewing options are most appropriate. Quick scoping searches based on a simple search strategy can be undertaken while developing the protocol to provide an overview of the studies likely to be retrieved. A common challenge is balancing the amount of time spent on scoping the review and identifying studies, with the amount of time spent appraising and synthesising:

  • Lack of an established, recognised field might mean deciding on a deliberately inclusive approach to study selection while streamlining later processes (for example, data extraction and synthesis).

  • Identification of a large volume of literature in an initial scoping search may present a challenge, with too many hits for screening impacting on the time available to undertake detailed appraisal and synthesis of included studies. Solutions include narrowing the scope, so that the total number of records is manageable in the time frame (for example, by abandoning some research questions or restricting eligible study designs to randomised controlled trial evidence).

  • The need for flexibility is also crucial when the final search identifies far more research than originally expected. One response is to reduce the level and detail of study appraisal and/or synthesis.

  • Expectations of perceived weaknesses in the evidence base, such as lack of randomised controlled trials for reviews that have an intervention focus, can shape the review team’s view of what kind of review is needed, for example, one on the effect of learning motivation interventions.

  • Conducting a review of existing systematic reviews is one solution to broad topics, as well as to the challenge of completing a review in a faster time frame. Systematic reviews (or overviews) of reviews can save time and resources, because reviews are often easier and quicker to locate than primary research (for example, using registers of published systematic reviews) and their findings may be summarised narratively, with little real synthesis being required. It is important to appreciate, however, that there are also inherent limitations to using this approach, partly because available systematic reviews are likely to be of diverse quality and scope.

Consider the purpose of the review and nature of the topic

To ensure that reviews are ‘fit for purpose’, identify reviewing approaches that are relevant and feasible for the review purpose and topic:

  • The more likely it is that a rapid review will be used to inform policy or practice decisions, the more careful reviewers need to be about the shortcuts taken. Undertaking a thorough quality appraisal of the evidence is seen as critical for policy-oriented reviews to increase confidence in the validity of the findings. The costs and benefits of different ways of accelerating (or even abandoning) this stage of the review process require careful consideration.

  • Alternatively, if the emphasis is on mapping research activity and highlighting knowledge gaps, reviewers generally have more time to spend on definitions and study identification, since evidence maps generally do not judge research quality or combine the findings of studies.

  • Typically, the main purpose of the review and the topic will influence whether it is more appropriate to present an in-depth analysis of a small number of studies or less information from a wider range of studies.

Choose a synthesis method appropriate for the review purpose and question, the evidence base and the reviewers’ expertise

It is important to clarify priorities and strategies for analysis at an early stage, so that the other review stages can be shaped accordingly:

  • Thematic summaries of the results of the included studies, or meta-analysis based on primary outcome(s) (if data support the use of this statistical procedure), are often the most suitable modes of synthesis in rapid reviews.

  • The type and depth of synthesis that it is possible to conduct will, to some degree, depend on the reviewers’ familiarity with different approaches to research synthesis.

Provide a full description of the rapid review approach and discuss the potential limitations and biases of the chosen methods

Although generally less ambitious than a full systematic review, rapid reviews are expected to conform to many of the same standards. To ensure that users have confidence in the final product, they need to be made aware of the limits placed on the review, and the impact of decisions taken:

  • Review methods should be described in full to ensure that the review process is transparent, reproducible and accountable.

  • Since shortcuts are likely to introduce bias, and can affect confidence in the conclusions of the review, transparency will be increased by highlighting where standard systematic review methods were omitted or modified.

  • Limitations and biases associated with the chosen methods (for example, risks of introducing publication bias due to reduced searching) should be reported, and the implications for the review findings discussed.

Be mindful of the anticipated audiences and tailor outputs accordingly

To increase uptake and use, it is important to consider how best to present the evidence required by users of the review (for example, its commissioners) and wider audiences:

  • The complexity of the evidence base, and the type of audience anticipated, will influence the likely structure, format and length of the final report, including how much data should be presented, in what ways and in what level of detail.

  • There is a range of user-friendly communication tools (for example, ‘summary of findings’ tables) that can be used (Petkovic et al., 2018).

  • When producing evidence summaries (Khangura et al., 2012), it may be less important to adhere to PRISMA reporting standards (Moher et al., 2015). However, detailed information on review methods should be made available elsewhere (for example, as an online supplement).

Discussion and concluding remarks

Rapid reviews respond to the needs of decision makers and other knowledge users who need timely evidence to inform their decisions. Comparing three different approaches to rapid reviewing, we have suggested several key factors for consideration when selecting appropriate rapid review methods for use in education. Some of the methodological shortcuts that could be applied to accelerate the review process, such as omitting quality appraisal, may not be acceptable to review commissioners. The review team should therefore attend carefully to their methodological choices, and they should seek to confirm a common understanding as to the purpose and expectations of the review. Where reviews have heavy time constraints, interacting with a wider consultation team may not be possible. However, close engagement with the end user helps ensure that user needs are addressed, and that the review is suitable for policy decision-making. Close dialogue is especially important in situations where review commissioners have little experience in evidence syntheses. Finally, since rapid reviews may fail to provide the expected rigour of a full systematic review, putting them at higher risk of error and bias (Moons et al., 2021), reviewers should aim for compliance with PRISMA-RR (Stevens et al., 2018) and AMSTAR guidelines (Mattivi and Buchberger, 2016). Accordingly, authors of rapid reviews should report full details of the overall approach and decisions taken, and should explore the potential implications of streamlined or omitted systematic review methods.

The number of published rapid reviews has been shown to have steadily increased since the mid-2010s (Garritty et al., 2021). Lately, the emergence of COVID-19 has exposed the urgent need to pull together the best available evidence about what works, when and how, and to develop the best methodologies to enable rapid reviews of relevant evidence (Tricco et al., 2020). Countries around the world have responded with guidance and resources to reliably speed up evidence synthesis activities. Most are designed to strengthen health policy and systems, but some take a broader view. For instance, the EPPI-Centre at UCL, UK, is curating a living map of current COVID-19 evidence across several topics, some of it already providing answers for those making decisions in education (http://eppi.ioe.ac.uk/COVID19_MAP/covid_map_v59.html). There are also recently published rapid reviews bridging the fields of education and health, including those that have addressed emergency remote education (Bond, 2020) and technology use during the pandemic (Vargo et al., 2021).

Given the increased appeal of rapid reviews in fields other than medicine and healthcare, the information provided in this article comes at an opportune time for those working in education and related social sciences investigating matters of educational concern. It offers guidance to support the design and conduct of rapid evidence syntheses that seek to be transparent and replicable, while also ensuring that minimum methodological standards for systematic reviews are applied.

Acknowledgements

The authors would like to thank their colleagues at the Nordic Institute for Studies in Innovation, Research and Education (NIFU), in particular Bjørn Stensaker, Hanne Næss Hjetland and Kristin Rogde for their useful comments on an earlier draft of the article.

Conflicts of interest statement

The authors declare no conflicts of interest with this work.

References

Arksey, H; O’Malley, L. (2005).  Scoping studies: Towards a methodological framework.  International Journal of Social Research Methodology 8 (1) : 19–32, DOI: http://dx.doi.org/10.1080/1364557032000119616

Barnett-Page, E; Thomas, J. (2009).  Methods for the synthesis of qualitative research: A critical review.  BMC Medical Research Methodology 9 (1) : 59. DOI: http://dx.doi.org/10.1186/1471-2288-9-59

Bond, M. (2020).  Schools and emergency remote education during the COVID-19 pandemic: A living rapid systematic review.  Asian Journal of Distance Education 15 (2) : 191–247, DOI: http://dx.doi.org/10.5281/zenodo.4425683

Bullers, K; Howard, AM; Hanson, A; Kearns, WD; Orriola, JJ; Polo, RL; Sakmar, KA. (2018).  It takes longer than you think: Librarian time spent on systematic review tasks.  Journal of the Medical Library Association 106 (2) : 198–207, DOI: http://dx.doi.org/10.5195/jmla.2018.323

Burge, B; Wilson, R; Smith-Crallan, K. (2012).  Employer Involvement in Schools: A rapid review of UK and international evidence. (NFER Research Programme: From Education to Employment). Slough: NFER. Accessed 6 May 2021 https://www.nfer.ac.uk/publications/REIS01/REIS01.pdf .

Cooper, C; Booth, A; Varley-Campbell, J; Britten, N; Garside, R. (2018).  Defining the process to literature searching in systematic reviews: A literature review of guidance and supporting studies.  BMC Medical Research Methodology 18 : 85. DOI: http://dx.doi.org/10.1186/s12874-018-0545-3

EEF (Education Endowment Foundation). (2020).  Impact of School Closures on the Attainment Gap: Rapid evidence assessment. London: Education Endowment Foundation. Accessed 6 May 2021 https://dera.ioe.ac.uk/35707/1/EEF_%282020%29_-_Impact_of_School_Closures_on_the_Attainment_Gap.pdf .

Featherstone, RM; Dryden, DM; Foisy, M; Guise, J-M; Mitchell, MD; Paynter, RA; Robinson, KA; Umscheid, CA; Hartling, L. (2015).  Advancing knowledge of rapid reviews: An analysis of results, conclusions and recommendations from published review articles examining rapid reviews.  Systematic Reviews 4 (1) : 50. DOI: http://dx.doi.org/10.1186/s13643-015-0040-4

Garritty, C; Gartlehner, G; Nussbaumer-Streit, B; King, VJ; Hamel, C; Kamel, C; Affengruber, L; Stevens, A. (2021).  Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews.  Journal of Clinical Epidemiology 130 : 13–22, DOI: http://dx.doi.org/10.1016/j.jclinepi.2020.10.007

Gough, D; Oliver, S; Thomas, J. (2012a).  An Introduction to Systematic Reviews. London: SAGE Publications.

Gough, D; Thomas, J; Oliver, S. (2012b).  Clarifying differences between review designs and methods.  Systematic Reviews 1 (1) : 28. DOI: http://dx.doi.org/10.1186/2046-4053-1-28

Gough, D; Oliver, S; Thomas, J. (2017).  An Introduction to Systematic Reviews. 2nd ed London: SAGE Publications.

Government Social Research Service. (2009).  GSR Rapid Evidence Assessment Toolkit index. Accessed 6 May 2021 https://webarchive.nationalarchives.gov.uk/20140402164155/http://www.civilservice.gov.uk/networks/gsr/resources-and-guidance/rapid-evidence-assessment .

Haby, MM; Chapman, E; Clark, R; Barreto, J; Reveiz, L; Lavis, JN. (2016).  What are the best methodologies for rapid reviews of the research evidence for evidence-informed decision-making in health policy and practice: A rapid review.  Health Research Policy and Systems 14 : 83. DOI: http://dx.doi.org/10.1186/s12961-016-0155-7

Holte, BH; Swart, I; Hiilamo, H. (2019).  The NEET concept in comparative youth research: The Nordic countries and South Africa.  Journal of Youth Studies 22 (2) : 256–72, DOI: http://dx.doi.org/10.1080/13676261.2018.1496406

James, KL; Randall, NP; Haddaway, NR. (2016).  A methodology for systematic mapping in environmental sciences.  Environmental Evidence 5 : 7. DOI: http://dx.doi.org/10.1186/s13750-016-0059-6

Khangura, S; Konnyu, K; Cushman, R; Grimshaw, J; Moher, D. (2012).  Evidence summaries: The evolution of a rapid review approach.  Systematic Reviews 1 : 10. DOI: http://dx.doi.org/10.1186/2046-4053-1-10

Li, T; Higgins, JPT; Deeks, JJ. (2021). Collecting data In:  Higgins, JPT; Thomas, J; Chandler, J; Cumpston, M; Li, T; Page, MJ; Welch, VA (eds.),  Cochrane Handbook for Systematic Reviews of Interventions, version 6.2. Accessed 6 May 2021 https://training.cochrane.org/handbook/current/chapter-05 .

Littell, JH; White, H. (2018).  The Campbell Collaboration: Providing better evidence for a better world.  Research on Social Work Practice 28 (1) : 6–12, DOI: http://dx.doi.org/10.1177/1049731517703748

Majid, U; Vanstone, M. (2018).  Appraising qualitative research for evidence syntheses: A compendium of quality appraisal tools.  Qualitative Health Research 28 (13) : 2115–31, DOI: http://dx.doi.org/10.1177/1049732318785358

Major, L; Warwick, P; Rasmussen, I; Ludvigsen, S; Cook, V. (2018).  Classroom dialogue and digital technologies: A scoping review.  Education and Information Technologies 23 (5) : 1995–2028, DOI: http://dx.doi.org/10.1007/s10639-018-9701-y

Mattivi, JT; Buchberger, B. (2016).  Using the AMSTAR checklist for rapid reviews: Is it feasible?.  International Journal of Technology Assessment in Health Care 32 (4) : 276–83, DOI: http://dx.doi.org/10.1017/S0266462316000465

Moher, D; Shamseer, L; Clarke, M; Ghersi, D; Liberati, A; Petticrew, M; Shekelle, P; Stewart, LA; PRISMA-P Group. (2015).  Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement.  Systematic Reviews 4 (1) : 1. DOI: http://dx.doi.org/10.1186/2046-4053-4-1

Moons, P; Goossens, E; Thompson, DR. (2021).  Rapid reviews: The pros and cons of an accelerated review process.  European Journal of Cardiovascular Nursing 20 (5) : 515–19, DOI: http://dx.doi.org/10.1093/eurjcn/zvab041

O’Mara-Eves, A; Thomas, J; McNaught, J; Miwa, M; Ananiadou, S. (2015).  Using text mining for study identification in systematic reviews: A systematic review of current approaches.  Systematic Reviews 4 : 5. DOI: http://dx.doi.org/10.1186/2046-4053-4-5

Petkovic, J; Welch, V; Jacob, MH; Yoganathan, M; Ayala, AP; Cunningham, H; Tugwell, P. (2018).  Do evidence summaries increase health policy-makers’ use of evidence from systematic reviews?: A systematic review.  Campbell Systematic Reviews 14 : 1–52, DOI: http://dx.doi.org/10.4073/csr.2018.8

Plüddemann, A; Aronson, JK; Onakpoya, I; Heneghan, C; Mahtani, KR. (2018).  Redefining rapid reviews: A flexible framework for restricted systematic reviews.  BMJ Evidence-Based Medicine 23 (6) : 201–3, DOI: http://dx.doi.org/10.1136/bmjebm-2018-110990

Pollock, M; Fernandes, RM; Becker, LA; Pieper, D; Hartling, L. (2020). Overviews of reviews In:  Higgins, JPT; Thomas, J; Chandler, J; Cumpston, M; Li, T; Page, MJ; Welch, VA (eds.),  Cochrane Handbook for Systematic Reviews of Interventions, version 6.2. Accessed 6 May 2021 https://training.cochrane.org/handbook/current/chapter-v .

Saran, A; White, H. (2018).  Evidence and gap maps: A comparison of different approaches.  Campbell Systematic Reviews 14 (1) : 1–38, DOI: http://dx.doi.org/10.4073/cmdp.2018.2

Schünemann, HJ; Moja, L. (2015).  Reviews: Rapid! rapid! rapid! … and systematic.  Systematic Reviews 4 (1) : 4. DOI: http://dx.doi.org/10.1186/2046-4053-4-4

Stevens, A; Garritty, C; Hersi, M; Moher, D. (2018).  Developing PRISMA-RR, a Reporting Guideline for Rapid Reviews of Primary Studies (Protocol), Accessed 6 May 2021 https://www.equator-network.org/wp-content/uploads/2018/02/PRISMA-RR-protocol.pdf .

Thomas, J; Harden, A. (2008).  Methods for the thematic synthesis of qualitative research in systematic reviews.  BMC Medical Research Methodology 8 (45) : 1–10, DOI: http://dx.doi.org/10.1186/1471-2288-8-45

Thomas, J; Newman, M; Oliver, S. (2013).  Rapid evidence assessments of research to inform social policy: Taking stock and moving forward.  Evidence and Policy 9 (1) : 5–27, DOI: http://dx.doi.org/10.1332/174426413X662572

Thomas, J; Graziosi, S; Brunton, J; Ghouze, Z; O’Driscoll, P; Bond, M. (2020).  EPPI-Reviewer: Advanced software for systematic reviews, maps and evidence synthesis. EPPI-Centre Software. London: UCL Social Research Institute.

Thomas, J; Kneale, D; McKenzie, JE; Brennan, SE; Bhaumik, S. (2020). Determining the scope of the review and the questions it will address In:  Higgins, JPT; Thomas, J; Chandler, J; Cumpston, M; Li, T; Page, MJ; Welch, VA (eds.),  Cochrane Handbook for Systematic Reviews of Interventions, version 6.2. Accessed 6 May 2021 https://training.cochrane.org/handbook/current/chapter-02 .

Tricco, AC; Antony, J; Zarin, W; Strifler, L; Ghassemi, M; Ivory, J; Perrier, L; Hutton, B; Moher, D; Straus, SE. (2015).  A scoping review of rapid review methods.  BMC Medicine 13 : 224. DOI: http://dx.doi.org/10.1186/s12916-015-0465-6

Tricco, AC; Garritty, CM; Boulos, L; Lockwood, C; Wilson, M; McGowan, J; McCaul, M; Hutton, B; Clement, F; Mittmann, N; Devane, D; Langlois, EV; Abou-Setta, AM; Houghton, C; Glenton, C; Kelly, SE; Welch, VA; LeBlanc, A; Wells, GA; Pham, B; Lewin, S; Straus, SE. (2020).  Rapid review methods more challenging during COVID-19: Commentary with a focus on 8 knowledge synthesis steps.  Journal of Clinical Epidemiology 126 : 177–83, DOI: http://dx.doi.org/10.1016/j.jclinepi.2020.06.029

Vargo, D; Zhu, L; Benwell, B; Yan, Z. (2021).  Digital technology use during COVID-19 pandemic: A rapid review.  Human Behaviour and Emerging Technologies 3 (1) : 13–24, DOI: http://dx.doi.org/10.1002/hbe2.242