
Introduction

The coronavirus disease (COVID-19) pandemic represents a major challenge for healthcare organizations around the world. New clinical processes, guidelines, and protocols had to be developed promptly to ensure that healthcare professionals were prepared to deliver care effectively while ensuring the safety of both patients and healthcare providers. Indeed, evidence suggests that healthcare professionals are 3.4 times more likely to be diagnosed with COVID-19 than the general population, even after controlling for the difference in testing rates between these groups (Nguyen et al., 2020). Areas of care associated with greater risk include endotracheal intubation, cardiopulmonary resuscitation, patient flow, and isolation procedures (Bhimraj et al., 2020; Chaplin et al., 2020; Edelson et al., 2020; Nolan et al., 2020).

An increasing number of educators and decision makers rely on simulation to educate healthcare professionals about new processes and protocols (Dube et al., 2020). Simulation is used to reproduce real clinical situations that healthcare professionals can experience and interact with, without compromising patient safety (Gaba, 2004). Simulation activities are generally composed of three components: 1) a briefing to introduce learners to the simulation environment, the learning objectives, and the scenario they are about to experience, 2) a clinical scenario, which refers to the simulated clinical situation, and 3) a debriefing to reflect on the simulation experience and receive feedback (Lopreiato, 2016). In recent decades, simulation-based education has been embraced by healthcare organizations to prepare healthcare professionals to deliver care safely, and its effectiveness has been highlighted in several systematic reviews (Alanazi et al., 2017; Beal et al., 2017; Bracq et al., 2019; Hippe et al., 2020; Marion-Martins & Pinho, 2020).

Yet, the COVID-19 pandemic forced educators to quickly redesign and deliver simulation activities that comply with health and safety measures while meeting the needs of healthcare organizations. For example, authors report that using personal protective equipment (PPE) during simulation activities proved difficult due to shortages in many healthcare organizations, which had to prioritize its use for clinical practice (Chaplin et al., 2020). Concerns regarding the risk of contamination between healthcare professionals during simulation activities have also been raised (Chiu et al., 2020). To mitigate this risk, strategies such as reducing group sizes and enforcing physical distancing during simulation activities have been implemented (Chaplin et al.). Additional concerns included the increased stress of healthcare professionals during the pandemic, which could decrease their receptivity to learning, and the presentation of unrealistic simulation scenarios due to the limited time educators had to design them, which could affect the suspension of participants’ disbelief, i.e., their ability to accept the simulation scenario as genuine (Chiu et al.). As such, it is unclear whether simulation activities have achieved their purpose of preparing healthcare professionals to deliver care during the COVID-19 pandemic. To our knowledge, and based on a search in the International Prospective Register of Systematic Reviews (PROSPERO), no systematic review has focused on the effect of simulation activities on the preparedness of healthcare professionals to safely deliver care during the COVID-19 pandemic.

Objective

Considering that simulation activities are a first-choice educational intervention for many organizations, it is essential to evaluate whether the simulation activities designed and delivered during the COVID-19 pandemic achieved their purpose. As such, the objective of this systematic review was to describe the features and evaluate the effect of simulation activities on the preparedness of healthcare professionals for the COVID-19 pandemic.

Methods

This systematic review was conducted according to the Joanna Briggs Institute Reviewer’s Manual: Systematic Reviews of Effectiveness (Tufanaru, 2017). Reporting is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Moher et al., 2009). The review protocol was prospectively registered in the PROSPERO database [CRD42020210741].

Eligibility Criteria

We considered all experimental (i.e., randomized controlled trial), quasi-experimental (i.e., non-randomized controlled trial, pretest/posttest, and interrupted time-series design), and observational studies (e.g., cross-sectional, case-control, cohort study) in which the effect of simulation activities on the preparedness of healthcare professionals and students to deliver care during the COVID-19 pandemic was assessed. Preparedness was defined as the achievement of learning outcomes related to the safe and effective delivery of care during the COVID-19 pandemic.

Population

We considered studies with healthcare professionals and students at any level of practice (pre- and post-licensure, undergraduate, and postgraduate) and in any clinical context. Throughout this article, healthcare professionals and students are therefore referred to as “participants,” whereas patients are referred to as such.

Interventions

We considered studies assessing the effect of simulation activities. A simulation activity was defined as the entire set of actions and events from the beginning to the end of a simulated event for educational purposes (e.g., briefing, scenario, debriefing). All simulation modalities were considered, including part-task trainers, simulated patients (i.e., standardized patients), manikin-based (low- to high-fidelity), computer-based (i.e., screen-based simulation), and hybrid simulations (i.e., combining two or more simulation modalities; Chiniara et al., 2013). To be included in this review, simulation activities had to involve learning objectives related to the delivery of care to a patient with a confirmed or suspected diagnosis of COVID-19, or to a change in clinical practice directly related to the COVID-19 pandemic. Studies using simulation solely to identify latent safety threats (e.g., whether a gurney can be easily transported to a resuscitation room) were excluded because their primary objectives were not educational but rather aimed at identifying and addressing issues in the healthcare environment (Jee et al., 2020).

Comparators

When available, comparators included any other educational intervention.

Outcomes

The modified version of Kirkpatrick’s Levels of Evaluation model (Barr et al., 2000), a model frequently used in simulation-based education (Blue et al., 2015; Reeves et al., 2015), was chosen as the framework to categorize outcomes related to the preparedness of healthcare professionals to deliver care during the COVID-19 pandemic. This model includes the following levels of educational outcomes: 1) learners’ views and reactions to simulation-based education, 2a) modification of attitudes/perceptions, 2b) acquisition of knowledge/skills, 3) behavioral change, 4a) change in organizational practice, and 4b) benefits to patients. Both immediate acquisition (i.e., right after the simulation) and retention (i.e., after a period without simulation) of these outcomes were of interest.
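To make this categorization concrete, the minimal sketch below (in Python, with hypothetical outcome labels; only the level definitions follow Barr et al., 2000) illustrates how extracted outcomes could be tagged with their corresponding level during data extraction:

```python
# Modified Kirkpatrick levels (Barr et al., 2000) used to categorize outcomes
KIRKPATRICK_LEVELS = {
    "1": "Learners' views and reactions to simulation-based education",
    "2a": "Modification of attitudes/perceptions",
    "2b": "Acquisition of knowledge/skills",
    "3": "Behavioral change",
    "4a": "Change in organizational practice",
    "4b": "Benefits to patients",
}

# Hypothetical outcomes tagged by level, as could be done during extraction
outcomes = [
    ("satisfaction with the simulation activity", "1"),
    ("confidence in donning/doffing PPE", "2a"),
    ("knowledge of triage of COVID-19 patients", "2b"),
]
for name, level in outcomes:
    print(f"Level {level} ({KIRKPATRICK_LEVELS[level]}): {name}")
```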

Search Strategy

On November 18, 2020, we searched four databases—Cumulative Index to Nursing and Allied Health Literature (EBSCO), Excerpta Medica dataBASE (Ovid), Science Citation Index Expanded (Web of Science), and MEDLINE (Ovid)—using a combination of controlled descriptors and keywords related to the following concepts: healthcare professionals and students, simulation, and COVID-19. A sample of the MEDLINE search strategy is available in Appendix 1. The search was restricted to peer-reviewed papers published in English or French since 2019, the year the first COVID-19 cases were documented (Shereen et al., 2020). We also searched the Cochrane Central Register of Controlled Trials (CENTRAL) for additional records.

Study Selection

Titles and abstracts of citations retrieved from the initial search were screened independently by two of the authors (MAMC, AL, TM, or PL) using the Covidence platform (Veritas Health Innovation Ltd, 2021). Full texts of eligible citations were retrieved and assessed independently by two of the authors (MAMC, AL, TM, or PL) based on the inclusion/exclusion criteria mentioned above. Disagreements at any stage of the selection process were resolved with a third author.

Data Extraction and Synthesis

Data were extracted independently by two of the authors (MAMC, AL, or GF) using a form adapted from a previous systematic review (Lapierre et al., 2021). Data items included general study information (e.g., country in which the study was conducted), methods (e.g., study aim and design), simulation features (e.g., briefing, scenario, debriefing), and outcomes (e.g., name and definition, time points measured, descriptive and inferential statistics). For pretest/posttest studies, we used meta-analytical methods to evaluate intra-group changes (i.e., changes in outcomes before and after participating in the simulation activity) using a generic inverse variance approach and random-effects models in RevMan 5.4.1 (The Cochrane Collaboration, 2020). To account for correlations between timepoint measures, and considering that we did not have access to individual participant data, we corrected all effect sizes (Cohen’s D) by assuming a correlation of 0.6 for all pretest/posttest outcome measures. Results are reported with 95% confidence intervals (CI). The statistical significance level was set at 0.05.
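As an illustration of these calculations, the sketch below (in Python; the review itself used RevMan, and the per-study summary data here are hypothetical) computes a correlation-corrected Cohen’s D and its variance for a single-group pretest/posttest study using one standard formulation for paired designs, then pools several studies via generic inverse variance with a DerSimonian–Laird random-effects model:

```python
import math

def paired_cohens_d(m_pre, m_post, sd_pre, sd_post, n, r=0.6):
    """Cohen's D and its variance for a single-group pretest/posttest
    design when only summary statistics are available. The pre/post
    correlation r is assumed (0.6 in this review); the variance factor
    2 * (1 - r) reflects the paired nature of the measurements."""
    sd_pooled = math.sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
    d = (m_post - m_pre) / sd_pooled
    var_d = (1 / n + d ** 2 / (2 * n)) * 2 * (1 - r)
    return d, var_d

def pool_random_effects(effects, variances):
    """Generic inverse variance pooling with a DerSimonian-Laird
    random-effects model; returns the pooled estimate and its 95% CI."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study summaries: (mean_pre, mean_post, sd_pre, sd_post, n)
studies = [(3.1, 4.2, 0.8, 0.7, 30), (2.8, 4.0, 0.9, 0.8, 45), (3.4, 4.1, 0.6, 0.7, 25)]
d_and_v = [paired_cohens_d(*s) for s in studies]
pooled, ci = pool_random_effects([d for d, _ in d_and_v], [v for _, v in d_and_v])
print(f"Pooled Cohen's D = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```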

For all studies, we synthesized posttest scores using descriptive meta-analytical methods. Although less common than meta-analyses of efficacy and effectiveness, descriptive meta-analyses are used to pool cross-sectional data from similar studies and provide an overview of the distribution of results (Bohannon, 2007; Vakili et al., 2020). All scores were standardized to fit a scale between zero (0) and one hundred (100), with higher scores indicating positive educational outcomes such as favorable reactions to the simulation activity, better attitudes/perceptions, and higher levels of knowledge or skills. A narrative description is also provided where quantitative synthesis was not possible due to missing data or unclear reporting.
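A minimal sketch of this standardization (min-max normalization, assuming each instrument’s bounds are known; the values below are hypothetical):

```python
def rescale_to_100(score, scale_min, scale_max):
    """Linearly rescale a raw score to a 0-100 scale, where higher
    values indicate more favorable educational outcomes."""
    return 100 * (score - scale_min) / (scale_max - scale_min)

# Hypothetical example: a mean satisfaction of 4.5 on a 1-5 Likert scale
print(rescale_to_100(4.5, 1, 5))  # 87.5
```

Instruments in which lower raw scores were more favorable would additionally need to be reverse-coded before rescaling.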

Methodological Quality Assessment

The methodological quality of included studies was assessed independently by two of the authors (MAMC, AL, or GF) using the Methodological Index for Non-Randomized Studies (MINORS) tool (Slim et al., 2003). This tool consists of 12 items to assess factors that may affect the methodological quality of two-group non-randomized studies; according to the authors of this tool, eight of these items also apply to single-group studies. Although the evaluation of the adequacy of statistical analyses is only suggested for two-group studies, we also included this evaluation for single-group studies. Each item is scored on a three-point scale (0, information not reported; 1, reported but inadequate; 2, reported and adequate) for a maximum possible score of 18 points for single-group studies, or 24 points for two-group non-randomized studies. The content validity of the MINORS tool was determined by 10 clinical methodologists, and the tool showed a satisfactory level of internal consistency (Cronbach’s alpha = 0.73) after independent assessment, in pairs, of a sample of 80 non-randomized and single-group studies (Slim et al.). Considering that the authors of this tool report that two-group non-randomized studies of good methodological quality reached a mean score of 19.8/24.0 (82.5%), we used the following thresholds to dichotomize the methodological quality (poor or adequate) of included studies: 15/18 for single-group studies and 20/24 for two-group non-randomized studies.
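As a check on these cut-offs, applying the 82.5% benchmark reported by the tool’s authors to each maximum score reproduces the thresholds used in this review:

```python
import math

# 82.5% benchmark (19.8/24.0 from Slim et al.) applied to each maximum score
for max_score in (18, 24):  # single-group and two-group studies, respectively
    cutoff = 0.825 * max_score
    print(f"max {max_score}: 0.825 * {max_score} = {cutoff:.2f} "
          f"-> threshold {math.ceil(cutoff)}")
# max 18: 0.825 * 18 = 14.85 -> threshold 15
# max 24: 0.825 * 24 = 19.80 -> threshold 20
```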

Results

Search Results and Descriptive Results

The initial database search yielded a total of 1,271 unique citations, of which 22 studies met the inclusion criteria. Study characteristics are presented in Table 1, and the study flowchart is available in Appendix 2.

Table 1

Characteristics of the studies


Note. *Studies that used a pretest/posttest for some outcomes and only a posttest for others. [1] Shi et al., 2020, [2] Tan et al., 2020, [3] Cheung et al., 2020, [4] Mouli et al., 2020, [5] Khan & Kiani, 2020, [6] Loh et al., 2020, [7] Shrestha et al., 2020, [8] Mileder et al., 2020, [9] Jensen et al., 2020, [10] Wenlock et al., 2020, [11] Doussot et al., 2020, [12] Montauban et al., 2020, [13] Sharara-Chami et al., 2020, [14] Lakissian et al., 2020, [15] Aljahany et al., 2020, [16] Dharamsi et al., 2020, [17] Trembley et al., 2020, [18] Mark et al., 2020, [19] LoSavio et al., 2020, [20] Munzer et al., 2020, [21] Yuriditsky et al., 2020, [22] Diaz-Guio et al., 2020.


Methodological Quality

All studies were judged to be of poor methodological quality (see Table 2). The median MINORS score of single-group studies (n=21) was 11/18 (Q1=10, Q3=12). The sole two-group study scored 15/24. MINORS scores were mostly low because: 1) sample sizes were not prospectively calculated (n=22); 2) study protocols were not published or prospectively registered, which made it impossible to determine if data collection was prospectively planned and if all collected data were reported (n=22); 3) there were no long-term follow-ups (n=21); 4) study aims were not clearly stated (n=11); and 5) endpoints were inappropriate (i.e., no pretest assessments were included; n=8).

Features of Simulation Activities

Details regarding the features of simulation activities are presented in Table 3 (Appendix 3). All simulation activities included multiple learning objectives concerning COVID-19 preparedness. Learning objectives were related to: 1) infection prevention and control (use of PPE, n=19; hand hygiene, n=4), 2) identification and management of COVID-19 (ventilation and airway management, n=9; care of COVID-19 patients, n=7; nasopharyngeal swabbing and other diagnostic procedures, n=5; triage and early identification of COVID-19 patients, n=4; prone positioning, n=1), and 3) work processes and patient flow (transport of COVID-19 patients, n=4; contamination zones, n=3; biosafety and medical waste disposal, n=2). Additional learning objectives revolved around teamwork and communication (n=9), as well as the pathophysiology and epidemiology of COVID-19 (n=3).

Most simulation activities were deployed in clinical settings, either in situ (n=9) or on-site (n=3); seven studies reported using a simulation lab. One study combined in situ simulations with lab simulations. The most frequent simulation modality was manikin-based (n=9); two studies employed standardized patients, and another used a real patient who had tested negative for SARS-CoV-2 to simulate a COVID-19 case. Two other studies combined two simulation modalities (standardized patients and manikins). In most studies, simulation occurred in groups (n=18); two studies involved simulations with individual participants. Settings, simulators, and group or individual participation were not reported in two, eight, and two studies, respectively.

A 5- to 10-minute briefing, consisting of a presentation of the simulation activity and familiarization with the simulator and environment, was reported in seven studies. Simulation scenarios were diverse but generally involved caring for a patient with a suspected or confirmed COVID-19 diagnosis. A 15- to 40-minute debriefing or feedback session using various methods and approaches was reported in most studies (n=20). The length of simulation activities, from briefing to debriefing, ranged from 20 to 180 minutes; however, 12 studies did not report on this feature. In most studies (n=17), simulation activities were complemented with additional educational activities such as lectures, video demonstrations, skills stations, or written material.

Effect of Simulation Activities

Pretest/Posttest Differences. Ten out of the 14 pretest/posttest studies provided enough data to compute an effect size and combine their results regarding the improvement of participants’ attitudes/perceptions (level 2a; n=6), knowledge (level 2b; n=3), and skills (level 2b; n=2). Other pretest/posttest results are presented narratively.

In studies assessing improvement of attitudes/perceptions (n=6 studies; 305 participants), the pretest/posttest pooled effect size was 2.0 [95% CI: 1.0, 3.0]. Individual study results are shown in Table 4.

Table 2

Methodological quality assessment


Note. Each item was rated as either adequate (2), inadequate (1), or not reported (0). The maximum score for single-group studies was 18 and for the two-group study was 24. *Cheung 2020 was also rated adequate regarding having an adequate control group (2) that was contemporary (2). However, the baseline equivalence of both groups was rated as inadequate (1).


Table 4

Effect sizes on attitude and perception outcomes in pretest-posttest studies



Four other studies reported improvements in participants’ attitudes/perceptions of their preparedness for providing care. Montauban et al. (2020) reported significant improvements in participants’ (n=27) perception of preparedness for all aspects of care delivery to COVID-19 patients (e.g., donning and doffing of PPE, transfer to an intensive care unit, and screening high-risk patients). Trembley et al. (2020) reported improvements in participants’ (n=48) confidence in their role during intubation. Aljahany et al. (2020) reported that although participants (n=54) felt significantly more comfortable providing care to unstable COVID-19 patients, they did not feel significantly more comfortable performing airway procedures, nor did they feel more knowledgeable about the triage process after the simulation activity. Finally, Jensen et al. (2020) reported that participants (n=97) felt more comfortable using PPE and providing care to COVID-19 patients after participating in the simulation activity.

In studies assessing improvement in knowledge (n=3 studies; 61 participants), the pretest/posttest pooled effect size was 2.8 [95% CI 1.7, 3.8]. These studies showed significant improvements in participants’ knowledge regarding: 1) prevention, identification, and treatment of COVID-19, as well as referral of COVID-19 patients (Cohen’s D 2.0 [95% CI 1.3, 2.7]; Shi et al., 2020), 2) triage of patients exhibiting COVID-19 symptoms (Cohen’s D 2.4 [95% CI 1.7, 3.1]; Shrestha et al., 2020), and 3) ventilation of COVID-19 patients (Cohen’s D 4.0 [95% CI 3.0, 4.9]; Mouli et al., 2020). Furthermore, Mark et al. (2020) mentioned an improvement in participants’ (n=45) knowledge of the nasopharyngeal swab but did not report data supporting that claim.

In studies assessing improvement in skills (n=2 studies; 99 participants), the pretest/posttest pooled effect size was 4.2 [95% CI 0.3, 8.0]. These studies showed significant improvements in skills regarding: 1) application of universal precautions (Cohen’s D 2.2 [95% CI 1.6, 2.8]; Tan et al., 2020), and 2) donning and doffing of PPE (Cohen’s D 6.2 [95% CI 5.3, 7.0]; Diaz-Guio et al., 2020).

Posttest Scores. Seventeen studies provided enough data to combine their results regarding the posttest scores of participants’ reactions (level 1; n=4), attitudes/perceptions (level 2a; n=12), skills (level 2b; n=6), and knowledge (level 2b; n=4). In studies of participants’ reactions (level 1; n=4 studies, 245 participants) to the simulation activities (Jensen et al., 2020; Mouli et al., 2020; Sharara-Chami et al., 2020; Shi et al., 2020), satisfaction results normalized to a 0–100 scale ranged from 84.6 to 96.7 with a median of 90.1 (Q1=84.8, Q3=96.3). Two studies found that the vast majority of participants would recommend the simulation activity to their colleagues (Loh et al., 2021; Trembley et al., 2020). In two other studies, researchers reported that 72.6% to 94.0% of participants (n=131) found the simulation activities helpful in preparing them to deliver care (Dharamsi et al., 2020; Doussot et al., 2020). Score distributions in studies of participants’ attitudes/perceptions (level 2a; n=12 studies, 1,973 participants) and skills (level 2b; n=6 studies, 423 participants) are illustrated in Figure 1. Median scores were similar for both levels of outcomes; their distributions lie in the upper third of the range of possible scores.

In the sole two-group study (Cheung et al., 2020), no statistically significant differences were found between the effect of lab-based and in situ simulation activities to improve participants’ confidence, control, and motivation to deliver care during the COVID-19 pandemic.

Figure 1

Participants' attitudes/perceptions and skill assessment scores post-simulation activities



In studies of participants’ knowledge (level 2b; n=4 studies, 192 participants) after simulation activities (Lakissian et al., 2020; Mouli et al., 2020; Shi et al., 2020; Shrestha et al., 2020), results normalized to a 0–100 scale ranged from 74.6 to 96.6 with a median of 87.4 (Q1=77.2, Q3=94.9). Furthermore, Loh et al. (2021) reported that 98.0% of 42 participants obtained at least 16 out of 20 correct answers on a quiz on aerosol-generating procedures, PPE, and airway management. However, only 42.9% and 52.3% of participants remembered the steps for PPE donning and doffing, respectively.

Regarding behaviors and benefits to patients, Doussot et al. (2020) reported that more than half of participants (n=109/212; 51%) eventually performed prone positioning for ICU patients who had developed severe acute respiratory distress syndrome (it was not specified whether this was self-reported or observed). For four patients, prone positioning had to be stopped due to respiratory complications or pressure ulcers. Loh et al. (2021) also reported that participants (n=33) self-reported at least one change in their clinical practice following the simulation activity (e.g., hand hygiene, use of PPE, physical distancing).

Discussion

This systematic review described the features and evaluated the effect of simulation activities on the preparedness of healthcare professionals to deliver care during the COVID-19 pandemic. We found significant and large improvements in participants’ attitudes and perceptions, knowledge, and skills. We also found high posttest scores regarding participants’ reactions, attitudes and perceptions, knowledge, and skills following simulation activities. However, all of the studies under review were of poor methodological quality, with significant threats to their internal validity, mostly due to the absence of control groups. Furthermore, although healthcare students are increasingly being called upon to provide care in healthcare organizations (Bohsra, 2020; Goshua, 2020; Mensik, 2020), we could identify only a single study with this population.

The most frequent purposes for using simulation were to prepare healthcare professionals for infection prevention and control measures (e.g., PPE and hand hygiene), identification and management of COVID-19 patients, and work processes and patient flow. Overall, these efforts were driven by an imperative to protect professionals from contracting COVID-19 and a will to maintain or improve the efficiency of healthcare delivery under rapidly changing circumstances. Although results from this review must be considered with caution, the outcomes of these efforts appear to align with those of simulation-based education in other contexts. Specifically, it is widely acknowledged that healthcare professionals are highly satisfied with simulation and that their attitudes/perceptions, knowledge, and skills tend to increase following simulation-based education (Alanazi et al., 2017; Beal et al., 2017; Bracq et al., 2019; Hippe et al., 2020; Marion-Martins & Pinho, 2020). In this review, we found statistically significant improvements in all learning outcomes following simulation activities. However, the effect sizes were large and imprecise, both because of their intragroup (pretest/posttest) nature and because of the low number of included studies; pretest/posttest effect sizes are known to be inflated by natural changes in outcome variables after baseline measurement or by other uncontrolled variables (Cuijpers et al., 2017). These results should therefore be interpreted with caution. Furthermore, evidence about higher-level outcomes, such as changes in behaviors, organizational practice, or benefits to patients, was scarcer (only three studies identified), notably because of the methodological challenges involved in measuring such outcomes. Nevertheless, simulation-based education seems relevant to improving health professionals’ perception of their preparedness for the COVID-19 pandemic, a non-negligible outcome considering the severe impacts of the pandemic on their mental health (Chen et al., 2020; Civantos et al., 2020; Dal’Bosco et al., 2020; Lai et al., 2020; Luceno-Moreno et al., 2020; Wang et al., 2020; Xiao et al., 2020).

In terms of simulation features, wide variations were observed in the methods and approaches to simulation-based education. In addition, the reporting of important components of simulation activities was often incomplete. Nevertheless, most simulation activities were implemented in situ, an increasingly popular practice (Guise & Mladenovic, 2013; Patterson et al., 2013). It is also a cost-effective solution for clinical settings that do not have access to a simulation laboratory and other resources for simulation-based education (Villemure et al., 2016). Despite its advantages, in situ simulation comes with its own challenges, including high cancelation rates because patient care takes precedence over continuing education (Kurup et al., 2017). Moreover, this review did not reveal any trends in the methods used for briefings, scenarios, or debriefings. As such, standards of best practice for simulation-based education (Sittner et al., 2015) appear to be the most reliable source to guide practice in this area.

Most studies used single-group designs, and more than a third used a posttest-only design. These research designs are subject to significant internal validity threats. As such, they are indicated when researchers want to explore whether a phenomenon warrants further investigation before undertaking more costly experiments or, in the case of a posttest-only design, when it is irrelevant to assess the outcome before an intervention is implemented (Creswell, 2013). Considering the amount of evidence from randomized trials that already supports the efficacy of simulation for healthcare professional education (Alanazi et al., 2017; Beal et al., 2017; Bracq et al., 2019; Hippe et al., 2020; Marion-Martins & Pinho, 2020), the use of such research designs adds very little to our overall understanding of the effect of simulation activities.

However, due to the state of crisis caused by the COVID-19 pandemic in healthcare organizations, it seems hardly defensible from an ethical standpoint to assign participants to a control group. As such, if true experiments cannot be conducted and researchers must rely on a single-group design, they could enhance the internal validity of their results by adopting interrupted time-series or repeated-treatment research designs, or by adding nonequivalent outcome variables (i.e., outcome variables that are not expected to be affected by the intervention) to their methods. Such strategies may help address the maturation and historical threats that can affect the internal validity of single-group studies (Bell, 2010).

Strengths of this review include a literature search in multiple databases, complemented by a search in an online study registry, which increased the likelihood that all potentially relevant studies were identified. However, the search was limited to studies published in English or French and may be subject to a language bias, as we did not have the resources to consider other languages. Study selection, data extraction, and quality assessment were performed independently and in pairs, which helped ensure the correct application of eligibility criteria as well as data integrity. In addition, the current findings are limited by internal validity threats arising from the research designs of the studies under review. As such, it cannot be excluded that historical factors, the maturation of study participants, the testing procedure, or the Hawthorne effect (i.e., bias related to participants’ awareness that they are part of a study and are being observed) may have affected individual study findings. Finally, as we did not have access to individual participant data, we corrected all pretest/posttest effect sizes by assuming a correlation of 0.6 (Cuijpers et al., 2017); these effect size values may nevertheless be positively biased.

Conclusion

Based on single-group pretest/posttest studies, findings from this review suggest that simulation activities have a positive effect on the preparedness of healthcare professionals to deliver care during the COVID-19 pandemic. Importantly, this review contributes a systematic description of the features of simulation activities designed for this purpose. Such a description has the potential to inform the work of clinical educators who wish to use simulation as an educational tool for pandemic preparedness in various care settings. However, the state of the evidence prevents us from making recommendations as to which simulation modalities are more effective than others. In addition, as only a single study was conducted among healthcare students, the extent to which these results can be applied to this population is limited. Furthermore, the validity of these results is impeded by threats such as the absence of control groups and overall poor methodological quality.

Future studies should include a control group if practically and ethically feasible. Other strategies to improve the internal validity of single-group studies have been suggested.