
Major Incident Hospital Simulations in Hospital Based Health Care: A Scoping Review

Published online by Cambridge University Press:  01 September 2023

Sacha Wynter*
Affiliation:
Emergency Trauma Centre, Royal Brisbane and Women’s Hospital, Herston, Queensland, Australia; School of Medicine, College of Health and Medicine, University of Tasmania, Tasmania, Australia
Rosie Nash
Affiliation:
School of Medicine, College of Health and Medicine, University of Tasmania, Tasmania, Australia
Nicola Gadd
Affiliation:
School of Medicine, College of Health and Medicine, University of Tasmania, Tasmania, Australia
*Corresponding author: Sacha Wynter; Email: sacha.i.wynter@gmail.com.

Abstract

Major incidents are occurring with increasing frequency and place significant stress on existing health-care systems. Simulation is often used to evaluate and improve the capacity of health systems to respond to these incidents, although its effect is difficult to evaluate. A scoping review was performed, searching 2 databases (PubMed, CINAHL) following PRISMA guidelines. The eligibility criteria included studies addressing whole hospital simulation, published in English after 2000, and interventional or observational research. Exclusion criteria included studies limited to single departments or prehospital settings, pure computer modelling, and health systems dissimilar to Australia’s. After exclusions, 11 relevant studies were included. These studies assessed various types of simulation, from tabletop exercises to multihospital events, with various outcome measures. The studies were highly heterogeneous and assessed as representing variable levels of evidence. In general, all articles reached positive conclusions with respect to the use of major incident simulations. Several benefits were identified, and areas of improvement for the future were highlighted. Benefits included improved understanding of existing Major Incident Response Plans and familiarity with the necessary paradigm shifts of resource management in such events. However, overall this scoping review was unable to draw definitive conclusions due to the low level of evidence and lack of validated evaluation.

Type
Systematic Review
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Disaster Medicine and Public Health

Terrorism events, floods, bushfires, and even pandemics have occurred with increasing frequency over the past century. Reference Castoldi, Greco and Carlucci1,Reference Bird, Braunold and Dryburgh-Jones2 In health care, these events can be grouped under the term major incident (MI). An MI can be defined as “an incident or event where the location, number, severity or type of live casualties, requires extraordinary resources” beyond the normal “resources of the emergency and health care services’ ability to manage.” Reference Bird, Braunold and Dryburgh-Jones2,4

Over the past decades, preparation for MIs has become a focus of concern for health-care systems. During such events, hospitals must “adapt to exceptional situations, and all activities must be coordinated to cope with the unavoidable chaos… Everything is different from routine, and responders need to be coordinated by people accustomed to these dynamics.” Reference Castoldi, Greco and Carlucci1

These events must be analyzed from a systems perspective to appreciate the complexity involved. A system can be defined as “a group of interacting, interrelated and interdependent components that form a complex and unified whole.” 5 Systems thinking provides a set of tools to describe and analyze these networks. It is particularly useful in addressing complex problems that cannot be solved by any 1 stakeholder. It focuses on organizational learning and adaptive management, and is a vital tool in addressing complex public health issues, such as MIs. 5

To reduce the chaos of these complex events, most Western health-care systems have developed a Major Incident Response Plan (MIRP). 3,6 Generally, MIRPs are rarely “stress tested” and often not known by most staff. Reference Tallach, Schyma and Robinson7,Reference Mawhinney, Roscoe and Stannard8 Practically, and ethically, it is only possible to test MIRPs by means of simulation. Thus, the methods available to create high-level scientific evidence are very limited. Reference Nilsson, Vikström and Rüter9 In an MI simulation, the participating system “simulates the influx of a large number of patients” and the system responds to this stress. Reference Verheul, Dückers and Visser10 Simulations vary in fidelity and scale. Reference Klima, Seiler and Peterson11,Reference Tochkin, Tan and Nolan12 Ideally, simulations should be evaluated, and the learnings fed back into the involved system in a Plan-Do-Study-Act cycle. Reference Verheul, Dückers and Visser10,Reference Tochkin, Tan and Nolan12
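As an illustration of the Plan-Do-Study-Act feedback loop described above, the following minimal Python sketch models one iteration of feeding exercise findings back into the responding system. All class and field names are hypothetical and are not drawn from any of the included studies.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PDSACycle:
    """One Plan-Do-Study-Act iteration for a major incident (MI) exercise.

    Field names are illustrative only and do not correspond to any published
    evaluation instrument.
    """
    plan: str                                       # objective of the exercise
    do: str                                         # exercise actually run (tabletop, in situ, full-scale)
    study: List[str] = field(default_factory=list)  # evaluation findings
    act: List[str] = field(default_factory=list)    # changes fed back into the response plan

# One hypothetical iteration
cycle = PDSACycle(
    plan="Test whole-hospital activation of the Major Incident Response Plan",
    do="In situ simulation with a simulated casualty influx",
)
cycle.study.append("Switchboard cascade reached only part of the surgical roster")
cycle.act.append("Revise the staff call-in list before the next exercise")
print(cycle)
```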

Anecdotally, MI simulations are thought to help improve health-care system preparedness, Reference Tobert, von Keudell and Rodriguez13,Reference Albert and Training14 although this is difficult to objectively evaluate. Reference Legemaate, Burkle and Bierens15 The majority of MI simulation research focuses on Emergency Department (ED) triage, or prehospital care. Reference McGlynn, Claudius and Kaji16–Reference Imamedjian, Maghraby and Homier21 However, analyzing MI response from the perspective of a single department does not reflect the impact of these events on the hospital system as a whole. For example, after the 2005 London Bombings, the Royal London Hospital stood down from the formal declaration of an MI 5 h after the bombings started and reopened for normal services. However, at the time of reopening, “theatres were operating to full capacity and the intensive care unit had not received the patients it had already accepted from the MI.” Reference Johnson and Cosgrove22 Published expert opinion after this event identified that:

“Such actions have the potential to further overload pressured systems. Thus, the ongoing care of the patients admitted from the incident should form part of a major incident plan as the impact of their admission and treatment is beyond a period of a few hours.” Reference Johnson and Cosgrove22

Thus, to determine how an MI may impact the hospital health-care system, wider whole hospital simulations must be performed. Locally, there are few published Australian data on hospital disaster preparedness. Reference Corrigan and Samrasinghe23 Therefore, the aim of this scoping review of the international literature was to determine whether whole hospital-based simulation improves hospital response capability to prepare for and manage major incidents, from an Australian health-care system perspective. A systems perspective was used in the analysis.

Methods

Search Strategy

A systematic-style scoping review was undertaken in August 2022, according to the recommendations of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the Joanna Briggs Institute (JBI) methodology, with the aim of tabulating all relevant literature. 24,25 The initial research question was reviewed against the population/problem, concept, and context (PCC) and FINER frameworks. 25,Reference Fandino26 This research aimed to determine whether whole hospital-based simulation improves hospital response capability to prepare for and manage major incidents, from an Australian perspective. A systematic search was then undertaken, using 2 databases: PubMed and CINAHL. An attempt was made to include the ERIC database; however, no results were returned.

As per the JBI methodology, an initial limited search was undertaken in each database to identify appropriate key and index terms. A second formal search was then performed, and these results were included. Slightly different search terms were used between databases, due to different tools offered by each. The ERIC database was included in the initial limited search; however, no appropriate results were returned despite numerous searches.

The search terms are provided in Table 1, and the eligibility criteria can be observed in Table 2.

Table 1. Search terms

Note: Refer to Appendix 1 for full Boolean search string.

Table 2. Inclusion and exclusion criteria

The inclusion and exclusion criteria were predefined before beginning the scoping review (Table 2). Included articles must have evaluated the implications of the simulation on the health-care system (ie, not a pure feasibility study). Evaluation data must have been included. A broad scope of publication dates was included as these events are rare, and contemporary data were assumed to be minimal. Articles limited to single departments were excluded.

The initial PubMed search returned 171 articles, and CINAHL returned 122 articles. These articles were combined, and then titles and abstracts were screened against the eligibility criteria. Refer to the PRISMA flow diagram (Figure 1). After title and abstract screening, 54 relevant articles were identified for full text screening. Following this, 15 duplicates were identified and removed. Thus, 39 articles proceeded to full text screening.

Figure 1. PRISMA flow diagram for systematic reviews. Reference Page, McKenzie and Bossuyt27
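The screening flow reported above and in Figure 1 amounts to simple arithmetic; the short Python sketch below merely re-tallies the counts stated in the text (the variable names are ours, not PRISMA terminology).

```python
# Re-tally of the screening counts reported in the text and in Figure 1
pubmed_hits = 171
cinahl_hits = 122
records_screened = pubmed_hits + cinahl_hits                      # 293 combined records

after_title_abstract = 54                                         # retained after title/abstract screening
duplicates_removed = 15
full_text_screened = after_title_abstract - duplicates_removed    # 39

included = 11                                                     # retained after full text screening
full_text_exclusions = full_text_screened - included              # 28 excluded at full text

assert records_screened == 293 and full_text_screened == 39
print(f"{records_screened} screened -> {full_text_screened} full text -> {included} included")
```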

Prehospital or emergency department only simulations accounted for a significant proportion of articles returned in the search. However, these were not sufficient to answer the research question, and were excluded. Simulations based purely on mathematical and computational modeling were also excluded. This is justified by a 2008 study, which demonstrated that there were marked differences in patient benchmarks between computer simulation and live exercises. Reference Franc-Law, Bullard and Corte28 Three papers were excluded as they were set in Saudi Arabia, which was assessed as too dissimilar to the Australian population and health-care system. Reference Bajow, Alkhalil and Maghraby29–Reference Bin Shalhoub, Khan and Alaska31 A further 3 studies were excluded as English translations were not available. Reference Kippnich, Kippnich and Markus32–Reference Wolf, Partenheimer and Voigt34 Thus, after full text screening, 11 relevant articles were retained. Reference lists from the included articles were snowballed to identify relevant papers. However, no new articles were identified.

Quality Assessment

All included articles were assessed for quality against the appropriate CASP checklist. 35 Of note, most articles were found to be of low evidence strength, likely due to the ethical and procedural difficulties inherent in researching this topic.

However, 2 studies were excluded for further quality concerns. A 2014 United States article was excluded due to a very significant risk of selection bias and a low strength of evidence: self-reported perception of knowledge improvement was assessed by a postcourse questionnaire only, which only 20 participants completed, despite a whole hospital simulation being conducted at 3 Los Angeles hospitals with staff from all 3 hospitals participating. Reference Burke, Kim and Bachman36 A 2018 Dutch study was excluded as the primary outcome recorded was not considered valid by the authors of this review, and there were significant sources of bias. The original Dutch authors retrospectively evaluated 32 MI simulation reports from Dutch hospitals and compared the number of items of improvement identified across reports. Measuring the number of areas of improvement identified, with no evaluation of those areas, is not a valid outcome measure; the study was thus excluded as it lacks internal validity. Reference Verheul, Dückers and Visser10 Please refer to Appendix 2 for further details.

Data Extraction and Synthesis

Author 1 of this review independently reviewed the relevant articles identified by the search strategy described above. As per the JBI protocol for scoping reviews, 25 data were extracted from each article under key characteristics and main conceptual categories.
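For illustration only, a per-article extraction record of the kind described (key characteristics and main conceptual categories, as tabulated in Table 3) might be structured as in the Python sketch below; the field names are our own shorthand and are not taken from the JBI extraction template.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """Key characteristics extracted for one included article (illustrative fields only)."""
    first_author: str
    publication_year: int
    country: str
    study_design: str                  # e.g. "prospective observational"
    simulation_type: str               # e.g. "tabletop", "in situ", "full-scale regional"
    nhmrc_evidence_level: Optional[str] = None
    outcome_measures: List[str] = field(default_factory=list)
    key_findings: List[str] = field(default_factory=list)

# A hypothetical, placeholder entry (values are not taken from any included study)
example = ExtractionRecord(
    first_author="ExampleAuthor",
    publication_year=2020,
    country="Australia",
    study_design="prospective observational",
    simulation_type="whole hospital simulation",
)
example.outcome_measures.append("Participant-reported understanding of the MIRP")
```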

Results

Study Characteristics

After the scoping systematic literature search and the application of the inclusion and exclusion criteria listed in Table 2, a total of 11 relevant articles were identified, as can be seen in Table 3. Although the date range for inclusion was set as the past 20 y, the majority of articles (n = 10; 91%) were published in the past 12 y. Only 1 included article was based in Australia. Of the other articles, 4 were based in Sweden, 3 in the United States of America (USA), 2 in Italy, and 1 in England. The type and size of simulation used in the articles varied greatly, from tabletop exercises to multijurisdictional simulations. Where described, all simulations appeared to have involved a man-made MI.

Table 3. Characteristics of 11 included studies from international scoping review

Of the included articles, 8 used prospective observational study designs and 2 used quasi-experimental designs with pre- and postsimulation evaluation. Most articles examined mixed populations, including both adult and pediatric patients. Assessed against the National Health and Medical Research Council (NHMRC) Evidence Hierarchy, 10 articles were found to be level 4 evidence with a high chance of bias. Reference Paul, Shekelle and Maglione37 One study, a prospective cohort design that examined a purely pediatric cohort, was found to be level 3 evidence. Reference Bird, Braunold and Dryburgh-Jones2 Overall, there was a significant paucity of high-level data.

Across the 11 included articles, there was significant heterogeneity in study designs, outcome measures, and evaluation techniques. No 2 articles used the same evaluation technique or outcome measures, making direct comparison difficult. In addition, a mixture of qualitative and quantitative measures was used across articles.

Common Themes Identified

The aim of this scoping review was to determine whether whole hospital-based simulation improved hospital response capability to prepare for and manage MIs, from an Australian health-care system perspective. From a single-site outlook, the 2020 Italian article provides the best example. Reference Castoldi, Greco and Carlucci1 Over a 2-y period, 7 whole hospital simulations were held using a preestablished course to train staff on the implementation of the hospital’s MI plan. Overall, the authors found it to be an efficient way to train hospital staff in MI management, although the article was assessed as representing a low level of evidence.

This is supported by the other articles. In general, participants in the simulations self-reported improvement or increased understanding. Reference Castoldi, Greco and Carlucci1,Reference Tallach, Schyma and Robinson7,Reference Bartley, Stella and Walsh38 Of interest, in a 2022 English article involving a whole hospital simulation with more than 700 staff participants, further exercises were requested by the participants, who found that “the simulations mimicked real responses and that exercising as a whole system was beneficial.” Reference Tallach, Schyma and Robinson7 The only Australian study located in the literature that used a whole hospital simulation found that participation in MI simulations improved factual knowledge among participants. Reference Bartley, Stella and Walsh38 Benefits of MI simulation reported in the included articles have been summarized in Box 1.

Some articles evaluated an entire region’s response to an MI by means of simulation. For example, the 2012 USA prospective observational study completed a full-scale regional exercise, which included 17 participating hospitals. All 17 hospitals considered the simulation exercise outcomes across the whole hospital. This large-scale exercise was used to evaluate the region’s response and identified key areas that required improvement. Similar areas of improvement were identified in the other included articles; these have been summarized in Box 2.

Some articles identified unique points through more novel study designs. Refer to Appendix 3 for further information.

Discussion

Improving MI preparedness and management is a topic of significant public health concern. However, there are few published data evaluating management in real-world events. Some recommendations have been published after specific events, but these are examples of expert opinion only. Reference Tobert, von Keudell and Rodriguez13,Reference Albert and Training14,Reference Yanagawa, Ishikawa and Takeuchi45–47

Simulation has long been thought to be an effective tool to assist this preparation, although it is difficult to objectively evaluate. Reference Tobert, von Keudell and Rodriguez13–Reference Legemaate, Burkle and Bierens15 Unfortunately, similar to previous publications, Reference Hsu, Jenckes and Catlett48 this scoping review has also demonstrated a paucity of strong data. Studies were generally either quasi-experimental or prospective observational in design. Although they contribute preliminary insights, these designs lack randomization, offer limited control of confounding variables, and have no control group. This weakens the scientific strength of the evidence, and the findings must be interpreted with caution.

In general, retrospective self-evaluation demonstrated improvement in management during MI simulations, and increased understanding of the MIRP. Reference Castoldi, Greco and Carlucci1,Reference Tallach, Schyma and Robinson7,Reference Bartley, Stella and Walsh38 Participants in a 2022 study stated that “the simulations mimicked real responses and that exercising as a whole system was beneficial.” Reference Tallach, Schyma and Robinson7 Thus, simulations seem to improve staff confidence, which is important and beneficial. While confidence is not a substitute for capacity, “individual, leader, and team confidence play essential roles in achieving success and the absence of confidence has been connected with failure.” Reference Owens and Keller49 Simulations appeared to be useful tools for identifying areas of improvement, as can be seen in Box 2. While these studies were highly heterogeneous, similar themes of improvement were found, suggesting potential generalizability.

Simulations of varying fidelity were performed. Due to common deficiencies across the region, the 2012 USA study found that “tabletop exercises are inadequate to expose operational and logistic gaps in disaster response. Full scale regional exercises should routinely be performed to adequately prepare for catastrophic events.” Reference Klima, Seiler and Peterson11 From a systems perspective, it would be ideal to regularly run large-scale exercises to truly stress the networks involved. However, in practice, these exercises are expensive and consume significant time and resources. Reference Tochkin, Tan and Nolan12 Other studies used lower-fidelity techniques as they believed “the resource investment and expense of high-fidelity simulation was not justified.” Reference Tallach, Schyma and Robinson7 At this stage, there is not enough evidence to support 1 approach over the other. However, regardless of fidelity level, all included studies found some benefit or identified areas of improvement.

As identified in the 2010 Swedish study, “monitoring health-care quality may be difficult without the use of clinical indicators.” Reference Nilsson, Vikström and Rüter9 This is further emphasized by the existing literature on MIs and simulation, which has found demonstrating the effectiveness of such exercises difficult. Reference Verheul, Dückers and Visser10,Reference Tochkin, Tan and Nolan12 In this review, all studies evaluated their simulations differently. In the future, to accurately evaluate the effectiveness of these activities, clinical indicators must be developed. The indicators proposed in the 2010 Swedish study are 1 possibility, but they must be externally validated.

Review Strengths and Limitations

This is the first known scoping review on MI simulations in hospital-based health care that considers a whole hospital or regional response to MIs. It provides preliminary insights into the areas of benefit and possible improvements that could be made to MI simulation. To ensure rigor in our process, this scoping review followed the JBI manual, carried out pilot searches to refine search terms, and predefined inclusion and exclusion criteria before screening.

However, the generalizability of these scoping review findings to different international health-care systems is a limitation of concern. Only 1 study identified in this review was performed in Australia. Four studies were performed in Sweden, and 1 in the United Kingdom. Arguably, these countries have comparable health-care systems. 50 However, this review also included 3 American studies; the United States has a vastly different health-care system, which limits the generalizability of the American study findings. 50 Thus, conclusions from these articles must be interpreted with caution when considered within the context of different health-care systems. This concern is reinforced further by acknowledging the essential role and influence of the key elements of the systems thinking framework.

There were other limitations to this scoping review. The database search was performed by a single author, which may introduce a bias regarding the “relevant” articles included. Additionally, the author was unable to include or analyze 3 articles published in another language. Reference Kippnich, Kippnich and Markus32Reference Wolf, Partenheimer and Voigt34 Another limitation that should be acknowledged is the small number of included articles; however, this may be reflective of the current literature deficit in this field.

To support the value of simulation in MI preparation and management, further research must be performed. Specifically, clinical indicators of MI management should be validated, which would allow a more scientific and objective evaluation of MI simulation in the future.

Conclusions

This scoping review of the international literature aimed to determine whether whole hospital-based simulation improves hospital response capability around MIs. Definitive conclusions could not be made, due to the low number of relevant articles identified, the lack of data, and the general paucity of strong scientific evidence. In general, all articles reached positive conclusions with respect to the use of MI simulations. Several benefits were identified, and areas of improvement for the future were highlighted. However, overall, there was a lack of validated evaluation and little evidence to definitively conclude that simulations improved preparation for, or management of, real-world MIs. Further research is required to optimize future responses to MI events.

Data availability

Dr Sacha Wynter, Emergency Department Registrar, Australasian College for Emergency Medicine Trainee. Qualifications: Doctor of Medicine, Bachelor of Science

Dr Rosie Nash, Senior Lecturer, Public Health, Tasmanian School of Medicine, College of Health and Medicine, University of Tasmania. Qualifications: Bachelor of Pharmacology (Hons), Master of Professional Studies, PhD, Graduate Certificate in Research

ORCID: 0000-0003-3695-0887

Ms Nicola Gadd, Lecturer, Public Health, Tasmanian School of Medicine. Qualifications: Master of Nutrition and Dietetic Practice

ORCID: 0000-0002-3014-2929

Acknowledgments

None.

Competing interests

The authors declare there are no conflicts of interest.

Appendix 1 Search Strings

PubMed search:

((simulation training[MeSH Terms]) OR (simulation)) AND ((disaster planning[MeSH Terms]) OR (disaster medicine[MeSH Terms])) AND ((major incident) OR (mass casualty incident[MeSH Terms]))
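As a reproducibility aid only (this is not part of the review’s method), the PubMed string above could be assembled and submitted programmatically. The minimal Python sketch below uses Biopython’s Entrez utilities and assumes Biopython is installed and a contact email address is provided.

```python
# Minimal sketch (not the method used in this review) for re-running the
# PubMed search via Biopython's Entrez E-utilities wrapper.
from Bio import Entrez

Entrez.email = "your.email@example.org"  # NCBI requires a contact address

query = (
    "((simulation training[MeSH Terms]) OR (simulation)) AND "
    "((disaster planning[MeSH Terms]) OR (disaster medicine[MeSH Terms])) AND "
    "((major incident) OR (mass casualty incident[MeSH Terms]))"
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=300)
result = Entrez.read(handle)
handle.close()

print(f"{result['Count']} records returned; first IDs: {result['IdList'][:5]}")
```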

CINAHL search:

((major incident) or (mass casualty incident) or (mass casualty event) or (major critical incident) or (disaster)) AND ((disaster medicine) OR (disaster preparedness) OR (disaster planning)) AND ((simulation) OR (simulation learning))

ERIC search:

((major incident) or (mass casualty incident) or (mass casualty event) or (major critical incident) or (disaster)) AND ((disaster medicine) OR (disaster preparedness) OR (disaster planning)) AND ((simulation) OR (simulation learning))

Nil results

((major incident) or (mass casualty incident) or (mass casualty event)) AND ((simulation) OR (simulation learning))

Nil results

((disaster medicine) OR (disaster preparedness) OR (disaster planning)) AND ((simulation) OR (simulation learning))

Nil results

Appendix 2 Excluded Articles after Quality Assessment

Paper 1: Burke RV, Kim TY, Bachman SL, et al. Using mixed methods to assess pediatric disaster preparedness in the hospital setting. Prehosp Disaster Med. 2014;29(6):569-575. Reference Burke, Kim and Bachman36

This article was excluded due to a very significant risk of selection bias and a low strength of evidence. In this study, a whole hospital simulation was conducted in 3 Los Angeles hospitals, with staff from all 3 hospitals participating. Self-reported perception of knowledge improvement was assessed by a postcourse questionnaire only, which only 20 participants completed. The authors did not disclose how many individuals participated in the simulation. However, given that the simulation occurred across 3 hospitals and involved the whole site, it is likely to have been a significant number. Given the weak study design and undisclosed survey completion rates, the quality of this study was found to be very low, and it was thus excluded from this review.

Paper 2: Verheul ML, Dückers M, Visser BB, et al. Disaster exercises to prepare hospitals for mass-casualty incidents: does it contribute to preparedness or is it ritualism? Prehosp Disaster Med. 2018;33(4):387-393. Reference Verheul, Dückers and Visser10

This paper was excluded as the primary outcome recorded is not valid, and there were significant sources of bias. Reference Verheul, Dückers and Visser10 The authors retrospectively evaluated 32 MI simulation reports from Dutch hospitals, with each hospital supplying 2 reports (with a mean time of 26.1 mo between reports). The authors identified the number of items of improvement suggested in the initial report and compared this with the number of items of improvement suggested in the later report. The data had several limitations: they were collected retrospectively from heterogeneous evaluation formats, and they were limited by the initial evaluators; the original authors themselves identified no clear selection criteria for, or training of, the evaluators. Most significantly, however, it is doubtful that the primary outcome of interest, the number of areas of improvement identified, accurately reflects improvement in MI management. There was no evaluation of improvement in the areas identified, just the number identified. Given that the data were collected by evaluators with no standardization, there are numerous possible explanations for this difference, for example, improved engagement with the simulation, self-reflection from previous simulations, and differences between evaluators. Measuring the number of areas of improvement identified, with no evaluation of those areas, is not a valid outcome measure. The study was thus excluded as it lacks internal validity.

Appendix 3 Unique Points Identified

The 2020 English pediatric study focused on a unique aspect of MI preparation: improving pediatric discharges. The authors developed discharge criteria that could be applied to hospital inpatients at the start of an MI to identify appropriate early discharges, thus increasing the hospital’s surge capacity. Reference Bird, Braunold and Dryburgh-Jones2 This is a unique tool with clinical implications, which was appropriately evaluated by means of simulation in a Plan-Do-Study-Act evaluation model. Not only does this article provide evidence to support this technique being implemented at other sites, but it also provides an excellent example of how to implement and evaluate new clinical tools from a systems perspective in major incidents.

The 2020 Swedish study also had a unique perspective, demonstrating by means of tabletop simulations that there was a correlation between proactive decision-making skills and staff procedural skills. Reference Murphy, Kurland and Rådestad44 While this study had a narrow focus, it did provide a clinically relevant outcome. It provides evidence to support clinical, procedural staff being more closely involved in the command structure of MIs (where proactive decisions are required).

Footnotes

* Currently working at Redcliffe Emergency Department, Redcliffe Hospital, Queensland, Australia.

References

1. Castoldi L, Greco M, Carlucci M, et al. Mass Casualty Incident (MCI) training in a metropolitan university hospital: short-term experience with Mass Casualty Simulation system MACSIM(®). Eur J Trauma Emerg Surg. 2022;48(1):283-291.
2. Bird R, Braunold D, Dryburgh-Jones J, et al. Paediatric major incident simulation and the number of discharges achieved using a major incident rapid discharge protocol in a major trauma centre: a retrospective study. BMJ Open. 2020;10(12):e034861.
3. Queensland Health. Queensland Health Mass Casualty Incident Plan. Queensland: Queensland Health; 2016. Accessed August 15, 2022. https://www.health.qld.gov.au/__data/assets/pdf_file/0025/628270/mass-casualty-incident-plan.pdf
4. University Hospital Birmingham. Clinical guidelines for major incidents and mass casualty events. Birmingham, United Kingdom: NHS; 2020. Accessed August 15, 2022. https://www.england.nhs.uk/wp-content/uploads/2018/12/B0128-clinical-guidelines-for-use-in-a-major-incident-v2-2020.pdf
5. The Australian Prevention Partnership Centre. A systems thinking approach. Australia: The Sax Institute; 2022. Accessed August 17, 2022. https://preventioncentre.org.au/work/systems-thinking/
6. Australian Government Department of Health. Domestic Response Plan for Mass Casualty Incidents of National Significance. Canberra, ACT: Australian Government Department of Health; 2018. Accessed August 17, 2022. https://www.health.gov.au/sites/default/files/documents/2021/04/austraumaplan---domestic-response-plan-for-mass-casualty-incidents-of-national-significance.pdf
7. Tallach R, Schyma B, Robinson M, et al. Refining mass casualty plans with simulation-based iterative learning. Br J Anaesth. 2022;128(2):e180-e189.
8. Mawhinney JA, Roscoe HW, Stannard GAJ, et al. Preparation for the next major incident: are we ready? A 12 year update. Emerg Med J. 2019;36(12):762-764.
9. Nilsson H, Vikström T, Rüter A. Quality control in disaster medicine training--initial regional medical command and control as an example. Am J Disaster Med. 2010;5(1):35-40.
10. Verheul ML, Dückers M, Visser BB, et al. Disaster exercises to prepare hospitals for mass-casualty incidents: does it contribute to preparedness or is it ritualism? Prehosp Disaster Med. 2018;33(4):387-393.
11. Klima DA, Seiler SH, Peterson JB, et al. Full-scale regional exercises: closing the gaps in disaster preparedness. J Trauma Acute Care Surg. 2012;73(3):592-597.
12. Tochkin JT, Tan H, Nolan C, et al. Ten (+1) lessons from conducting a mass casualty in situ simulation exercise in a Canadian academic hospital setting. J Emerg Manag. 2021;19(3):253-265.
13. Tobert D, von Keudell A, Rodriguez EK. Lessons from the Boston Marathon Bombing: an orthopaedic perspective on preparing for high-volume trauma in an urban academic center. J Orthop Trauma. 2015;29(Suppl 10):S7-S10.
14. Albert E, Bullard T. Training, drills pivotal in mounting response to Orlando shooting. ED Manag. 2016;28(8):85-89.
15. Legemaate GA, Burkle FM Jr, Bierens JJ. The evaluation of research methods during disaster exercises: applicability for improving disaster health management. Prehosp Disaster Med. 2012;27(1):18-26.
16. McGlynn N, Claudius I, Kaji AH, et al. Tabletop application of SALT Triage to 10, 100, and 1000 pediatric victims. Prehosp Disaster Med. 2020;35(2):165-169.
17. Cicero MX, Brown L, Overly F, et al. Creation and Delphi-method refinement of pediatric disaster triage simulations. Prehosp Emerg Care. 2014;18(2):282-289.
18. Koziel JR, Meckler G, Brown L, et al. Barriers to pediatric disaster triage: a qualitative investigation. Prehosp Emerg Care. 2015;19(2):279-286.
19. Desai SP, Bell WC, Harris C, et al. Human consequences of multiple nuclear detonations in New Delhi (India): interdisciplinary requirements in triage management. Int J Environ Res Public Health. 2021;18(4):1740.
20. Cicero MX, Whitfill T, Munjal K, et al. 60 seconds to survival: a pilot study of a disaster triage video game for prehospital providers. Am J Disaster Med. 2017;12(2):75-83.
21. Imamedjian I, Maghraby NHM, Homier V. A hospital mass casualty exercise using city buses and a tent as a hybrid system for patient decontamination. Am J Disaster Med. 2017;12(3):189-196.
22. Johnson C, Cosgrove JF. Hospital response to a major incident: initial considerations and longer term effects. BJA Education. 2016;16(10):329-333.
23. Corrigan E, Samrasinghe I. Disaster preparedness in an Australian urban trauma center: staff knowledge and perceptions. Prehosp Disaster Med. 2012;27(5):432-438.
24. PRISMA. Transparent reporting of systematic reviews and meta-analysis. Canada: Preferred Reporting Items for Systematic Reviews and Meta-Analyses; 2021. Accessed August 15, 2022. https://prisma-statement.org//
25. The Joanna Briggs Institute. Joanna Briggs Institute Reviewers’ Manual. South Australia, Australia: The Joanna Briggs Institute; 2015. Accessed August 15, 2022. https://nursing.lsuhsc.edu/jbi/docs/reviewersmanuals/scoping-.pdf
26. Fandino W. Formulating a good research question: pearls and pitfalls. Indian J Anaesth. 2019;63(8):611-616.
27. Page M, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.
28. Franc-Law JM, Bullard MJ, Corte FD, et al. Accuracy of computer simulation to predict patient flow during mass-casualty incidents. Prehosp Disaster Med. 2008;23(4):354-360.
29. Bajow N, Alkhalil S, Maghraby N, et al. Assessment of the effectiveness of a course in major chemical incidents for front line health care providers: a pilot study from Saudi Arabia. BMC Med Educ. 2022;22(1):350.
30. Bajow NA, AlAssaf WI, Cluntun AA. Course in prehospital major incidents management for health care providers in Saudi Arabia. Prehosp Disaster Med. 2018;33(6):587-595.
31. Bin Shalhoub AA, Khan AA, Alaska YA. Evaluation of disaster preparedness for mass casualty incidents in private hospitals in Central Saudi Arabia. Saudi Med J. 2017;38(3):302-306.
32. Kippnich M, Kippnich U, Markus C, et al. Advanced medical post within hospitals as possible tactical instrument for handling mass casualty incidents. Anaesthesist. 2019;68(7):428-435.
33. Weiß J. Disaster medicine: how do doctors cope? Dtsch Med Wochenschr. 2013;138:1446-1447.
34. Wolf S, Partenheimer A, Voigt C, et al. Primary care hospital for a mass disaster MANV IV. Experience from a mock disaster exercise. Unfallchirurg. 2009;112(6):565-574.
35. CASP. Critical Appraisal Skills Programme. United Kingdom: CASP; 2022. Accessed August 16, 2022. https://casp-uk.net/casp-tools-checklists/
36. Burke RV, Kim TY, Bachman SL, et al. Using mixed methods to assess pediatric disaster preparedness in the hospital setting. Prehosp Disaster Med. 2014;29(6):569-575.
37. Shekelle PG, Maglione MA, et al. Global health evidence evaluation framework. Rockville, USA: Agency for Healthcare Research and Quality; 2013. Accessed August 24, 2022. https://www.ncbi.nlm.nih.gov/books/NBK121300/table/appb.t21/
38. Bartley BH, Stella JB, Walsh LD. What a disaster?! Assessing utility of simulated disaster exercise and educational process for improving hospital preparedness. Prehosp Disaster Med. 2006;21(4):249-255.
39. Harris C, Bell W, Rollor E, et al. Medical surge capacity in Atlanta-area hospitals in response to tanker truck chemical releases. Disaster Med Public Health Prep. 2015;9(6):681-689.
40. Davidson RK, Magalini S, Brattekås K, et al. Preparedness for chemical crisis situations: experiences from European medical response exercises. Eur Rev Med Pharmacol Sci. 2019;23(3):1239-1247.
41. Khorram-Manesh A, Lönroth H, Rotter P, et al. Non-medical aspects of civilian-military collaboration in management of major incidents. Eur J Trauma Emerg Surg. 2017;43(5):595-603.
42. Davids MS, Case C Jr, Hornung R III, et al. Assessing surge capacity for radiation victims with marrow toxicity. Biol Blood Marrow Transplant. 2010;16(10):1436-1441.
43. Grant WD, Secreti L. Joint civilian/national guard mass casualty exercise provides model for preparedness training. Mil Med. 2007;172(8):806-811.
44. Murphy JP, Kurland L, Rådestad M, et al. Hospital incident command groups’ performance during major incident simulations: a prospective observational study. Scand J Trauma Resusc Emerg Med. 2020;28(1):73.
45. Yanagawa Y, Ishikawa K, Takeuchi I, et al. Should helicopters transport patients who become sick after a chemical, biological, radiological, nuclear, and explosive attack? Air Med J. 2018;37(2):124-125.
46. Crews C, Heightman AJ. City on alert. Lessons learned from the San Bernardino terrorist attack. JEMS. 2016;41(8):26-31.
47. The aftermath of a shooting: what hospitals can learn from Colorado’s emergency response? Patient Safety Monitor J. 2012;13:1-4.
48. Hsu EB, Jenckes MW, Catlett CL. Effectiveness of hospital staff mass-casualty incident training methods: a systematic literature review. Prehosp Disaster Med. 2004;19(3):191-199.
49. Owens KM, Keller S. Exploring workforce confidence and patient experiences: a quantitative analysis. Patient Experience J. 2018;5(1):97-105.
50. The Commonwealth Fund. The Commonwealth Fund Health Scorecard. New York, USA; 2022. Accessed August 24, 2022. https://www.commonwealthfund.org/about-us