
Disaster Metrics: A Comprehensive Framework for Disaster Evaluation Typologies

Published online by Cambridge University Press:  08 May 2017

Diana F. Wong*
Affiliation:
Monash University Disaster Resilience Initiative (MUDRI), Monash University, Melbourne, Australia; New South Wales (NSW) Health, Sydney, New South Wales, Australia
Caroline Spencer
Affiliation:
Monash University Disaster Resilience Initiative (MUDRI), Monash University, Melbourne, Australia
Lee Boyd
Affiliation:
School of Nursing and Midwifery, Monash University, Melbourne, Australia; Cabrini Health, Malvern, Victoria, Australia
Frederick M. Burkle Jr.
Affiliation:
Monash University Disaster Resilience Initiative (MUDRI), Monash University, Melbourne, Australia; Harvard Humanitarian Initiative, Harvard University, Cambridge, Massachusetts USA; Woodrow Wilson International Center for Scholars, Washington, DC USA
Frank Archer
Affiliation:
Monash University Disaster Resilience Initiative (MUDRI), Monash University, Melbourne, Australia
*
Correspondence: Diana Wong, MCP Nsg, Monash University Disaster Resilience Initiative (MUDRI), Monash University Accident Research Centre (MUARC), Building 70, Clayton Campus, Monash University, Wellington Road, Clayton VIC 3800, Australia. E-mail: Diana.F.Wong@monash.edu

Abstract

Introduction

The frequency of disasters is increasing around the world, placing more people at risk. There is a moral imperative to improve the way in which disaster evaluations are undertaken and reported, with the aim of reducing preventable mortality and morbidity in future events. Disasters are complex events, and undertaking disaster evaluations is a specialized area of study at an international level.

Hypothesis/Problem

While some frameworks have been developed to support consistent disaster research and evaluation, they lack validation, consistent terminology, and standards for reporting across the different phases of a disaster. There is yet to be an agreed, comprehensive framework to structure disaster evaluation typologies.

The aim of this paper is to outline an evolving comprehensive framework for disaster evaluation typologies. It is anticipated that this new framework will facilitate agreement on identifying, structuring, and relating the various evaluations found in the disaster setting, with a view to better understanding the process, outcomes, and impacts of the effectiveness and efficiency of interventions.

Methods

Research was undertaken in two phases: (1) a scoping literature review (peer-reviewed and “grey literature”) was undertaken to identify current evaluation frameworks and typologies used in the disaster setting; and (2) a structure was developed that included the range of typologies identified in Phase One and suggests possible relationships in the disaster setting.

Results

No core, unifying framework to structure disaster evaluation and research was identified in the literature. The authors propose a “Comprehensive Framework for Disaster Evaluation Typologies” that identifies, structures, and suggests relationships for the various typologies detected.

Conclusion

The proposed Comprehensive Framework for Disaster Evaluation Typologies outlines the different typologies of disaster evaluations that were identified in this study and brings them together into a single framework. This unique, unifying framework has relevance at an international level and is expected to benefit the disaster, humanitarian, and development sectors. The next step is to undertake a validation process that will include international leaders with experience in evaluation, in general, and disasters specifically. This work promotes an environment for constructive dialogue on evaluations in the disaster setting to strengthen the evidence base for interventions across the disaster spectrum. It remains a work in progress.

Wong DF, Spencer C, Boyd L, Burkle FM Jr., Archer F. Disaster Metrics: A Comprehensive Framework for Disaster Evaluation Typologies. Prehosp Disaster Med. 2017;32(5):501–514.

Type
Original Research
Copyright
© World Association for Disaster and Emergency Medicine 2017 

Introduction

The frequency of disasters is increasing around the world with more people being at risk.Reference Downton and Pielke 1 - Reference Buttenheim 6 They can be a complex mix of natural hazards and human action.Reference Wisner, Blaikie, Vannon and Davis 7 There is a moral imperative to improve the approach to undertaking and reporting disaster evaluations, with the aim of reducing preventable mortality and morbidity in future events.Reference Veenema 8 , Reference Sundnes 9 Improving the quality of disaster evaluations and strengthening accountability is urgently required.Reference Clarke and Darcy 10 - Reference Clarke, Allen, Archer, Wong, Eriksson and Puri 12 While some frameworks have been developed to support consistent disaster research and evaluation, 13 , Reference Kulling, Birnbaum, Murray and Rockenschaub 14 they are fragmented and uni-focused. There is yet to be an agreed, comprehensive framework to structure disaster evaluation typologies. Such a framework could provide consistency in terminology and standards for reporting across the different phases of a disaster, with a view to providing comparability to better understand the process, outcomes, and impacts of the efficacy and efficiency of interventions. Sharing methodological experiences would contribute to the further development of these standards and guidelines to systematically build disaster science.

Undertaking disaster evaluations is a specialized area of study at an international level. Different approaches to evaluation over the years have led to a variety of definitions being offered when describing the term “evaluation.”Reference Stufflebeam and Coryn 15 An earlier definition of evaluation put forward by the Joint Committee on Standards for Educational Evaluation (JCSEE) in 1994 states that “evaluation is the systematic assessment of the worth or merit of an object.”Reference Sanders 16 The “object” in this case is the program, project, or intervention under review.Reference Sanders 16 Other recent definitions focus more on active purposes such as accountability assessment, decision making, program improvement, judgement, and organizational learning.Reference Yarbrough, Shulha, Hopson and Caruthers 17 Regardless of the definition used, evaluations are largely conducted to find areas for improvement and to generate an assessment of overall quality and value, usually for reporting or decision making purposes.Reference Davidson 18

The aim of this paper is to outline an evolving Comprehensive Framework for Disaster Evaluation Typologies. It is anticipated that this new framework will facilitate agreement on organizing and describing the various evaluations found in the disaster setting. While continuing to be a work in progress, it is intended that this work will add structure to the current understanding of the diverse disaster evaluation typologies that currently exist.

When considering the title of the framework and how best to describe this body of work, the authors considered two words: methodology and typology. The word “method” or “methodology” is defined as “a particular procedure for accomplishing or approaching something.” 19 The preferred term for the framework was “typology,” which refers to “a structure of different types,” 20 and is a closer match to describing the classification of the variety of disaster evaluation styles that are currently available.

Methodology

This research was undertaken in two phases. Phase One was designed to identify current evaluation frameworks and typologies used in the disaster setting. A scoping literature review 21 was undertaken in two parts. Firstly, the peer-reviewed literature was searched using major electronic databases, including: PubMed/Medline (US National Library of Medicine, National Institutes of Health; Bethesda, Maryland USA); CINAHL (EBSCO Information Services; Ipswich, Massachusetts USA); EMBASE (Elsevier; Amsterdam, Netherlands); ProQuest (Ann Arbor, Michigan USA); Web of Science (Thomson Reuters; New York, New York USA); Scopus (Elsevier; Amsterdam, Netherlands); and Web of Knowledge (Thomson Reuters; Philadelphia, Pennsylvania USA). These databases were searched to identify contributions to the history and development of disaster/disaster health evaluation frameworks/models/repositories. The key search words used included “disaster OR emergency,” AND “health,” AND “guidelines OR frameworks OR models OR repositories OR evaluation OR typology.” Inclusion criteria consisted of articles in English, published after 2003, that included frameworks, models, or methodologies rather than exemplars of specific evaluations. Additional references were identified through examination of bibliographies from the most recent publications (snowballing) and through scrutiny of the contents pages of highly relevant journals.Reference Smith, Wasiak, Sen, Archer and Burkle 22 This scoping review was supplemented by a convenience sample of international colleagues who commented on the evolving framework to identify additional relevant typologies.
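The screening logic described above (a boolean keyword search, followed by the stated inclusion criteria: English-language articles published after 2003 that describe frameworks, models, or methodologies rather than exemplars of specific evaluations) can be sketched as a small filter. This is an illustrative sketch only; the record fields and candidate entries are hypothetical and not drawn from the study's actual search results.

```python
# Illustrative sketch of the review's search-and-screen step.
# Record fields ("language", "year", "type") are hypothetical stand-ins
# for database export fields; they are not from the actual study data.

def build_query():
    """Compose the key search terms used across the databases."""
    return ('("disaster" OR "emergency") AND "health" AND '
            '("guidelines" OR "frameworks" OR "models" OR '
            '"repositories" OR "evaluation" OR "typology")')

def meets_inclusion_criteria(record):
    """Apply the stated inclusion criteria to one candidate record."""
    return (record.get("language") == "English"
            and record.get("year", 0) > 2003
            and record.get("type") in {"framework", "model", "methodology"})

candidates = [
    {"title": "A disaster evaluation framework", "language": "English",
     "year": 2010, "type": "framework"},
    {"title": "Case report of a flood response", "language": "English",
     "year": 2012, "type": "exemplar"},   # excluded: a specific evaluation, not a framework
    {"title": "Ein Evaluationsmodell", "language": "German",
     "year": 2008, "type": "model"},      # excluded: not in English
]

included = [r for r in candidates if meets_inclusion_criteria(r)]
print(len(included))  # 1
```

Encoding the criteria as an explicit predicate, as above, is one way such screening decisions can be made transparent and repeatable.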

Secondly, a review of the “grey literature” also was undertaken, including similar key words, using Google and Google Scholar (Google Inc.; Mountain View, California USA) and supplemented by “ReliefWeb,” a resource maintained by the United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA; New York, USA and Geneva, Switzerland) 23 and the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP; London, United Kingdom). The ALNAP is an organization that is dedicated to improving humanitarian performance through learning and accountability. 24

The scoping review provided examples of a wide range of typologies used under the general label of “evaluation.” The following Comprehensive Framework lists at least one example of each evaluation type identified as an exemplar; however, it does not intend to list all evaluations identified. One hundred twenty-two papers were used in developing this Comprehensive Framework.

In Phase Two, all co-authors contributed to developing a structure that included the range of typologies identified in Phase One, and suggested possible relationships in the disaster setting.

The resultant “Comprehensive Framework for Disaster Evaluation Typologies” not only identifies and structures different disaster evaluation typologies, but also suggests relationships between these typologies and all phases of the disaster cycle. Various disaster evaluation typologies are mapped across the disaster timeline, demonstrating their inter-relationships. It is not the intent of this paper to describe perceived strengths or weaknesses of any particular evaluation typology. It is important to note that the Baselines, Consequences, and Outcomes evaluation typologies are related but not hierarchical; that is, no one typology is more important than the others, and they are to be interpreted within the context of a specific disaster.

Results

The literature review revealed that more information can be found in the “grey literature” and humanitarian arena than in peer-reviewed literature. There were very few evaluations of health interventions during disasters reported in the literature. Most evaluation reports were descriptive, process-focused, and lacked a core conceptual framework.Reference Rådestad, Jirwe, Castrén, Svensson, Gryth and Rüter 25 Recent research undertaken by Stratton in 2014 identified that the majority of papers submitted to Prehospital and Disaster Medicine (PDM) were surveys or descriptive in nature.Reference Leiba, Schwartz and Eran 26 The published reports did not demonstrate a consistent and structured approach to evaluations of interventions, and the impact of interventions on the affected population was rarely mentioned.Reference Rüter 27 Many nongovernment organizations (NGOs), such as the International Federation of Red Cross and Red Crescent Societies (IFRC; Geneva, Switzerland) and various United Nations (UN) agencies, have their own internal standards for evaluations. Attempts are being made to consolidate standards and guidelines across the sector, as evidenced by the work of ALNAP 24 and the Inter-Agency Standing Committee (IASC).Reference von Schreeb 28 National government disaster organizations, however, were noticeably absent in this activity.

One influential guideline identified during the literature review was Health Disaster Management Guidelines for Evaluation and Research (hereafter referred to as “The Guidelines”). The Guidelines were co-authored by the Task Force on Quality Control and Disaster Management (TFQCDM), the World Association of Disaster and Emergency Medicine (WADEM; Madison, Wisconsin USA), and the Nordic Society for Disaster Medicine. 13 They provided a conceptual framework for undertaking research and evaluation in the disaster setting. The core of this conceptual framework was frequently referenced in peer-reviewed papers,Reference Wong, Spencer, Boyd, Burkle and Archer 29 , Reference Kohl, O’Rourke, Schmidman, Dopkin and Birnbaum 30 scholarly journals, and in higher degree research theses, 31 , 32 but it was rarely used as the methodological framework for undertaking disaster evaluations and research.Reference Wong, Spencer, Boyd, McArdle, Burkle and Archer 33 The literature review revealed one journal articleReference Adibhatia, Dudek, Ramsel-Miller and Birnbaum 34 and two booksReference Stratton 35 , Reference Birnbaum, Daily, O’Rourke and Loretti 36 that utilized the “Conceptual Framework” and terminology used in The Guidelines. The three articles/books were based on the Sumatra-Andaman Earthquake and subsequent Asian Tsunami that occurred in December 2004.

Consideration of The Guidelines to underpin the “Comprehensive Framework” included a validation step. In-depth interviews of 18 experts in the fields of disaster and emergency health and medicine undertaken by the lead author in 2014 and 2015 revealed that the core framework of The Guidelines was deemed to be valuableReference Wong, Spencer, Boyd, McArdle, Burkle and Archer 33 and was being referenced. It was not, however, being used to structure research and evaluation in the disaster setting. 32

In an attempt to test the validity of the core “Guidelines,” the authors undertook a thematic analysis of seven Australian disaster reports/inquiries dating from 2006-2014 to see if the core elements of the conceptual framework in The Guidelines were present in all reports/inquiries. The disasters occurred in four different Australian States, covered four different types of events, included four different types of reports, and were chaired by six different Chairpersons. Results from the thematic analysis were reviewed by two researchers and identified that all elements of the “Conceptual Framework” were present in each of the seven Australian disaster reports/inquiries. 37

Given this support for The Guidelines from both the international experts and the thematic review of Australian reports, it was decided to use its core structure, with some modifications, to underpin this Framework for Disaster Evaluation Typologies.

Other frameworks or guidelines that were identified included work by Stephenson;Reference Stephenson 38 Powers and Daily;Reference Powers and Daily 39 Kulling et al;Reference Kulling, Birnbaum, Murray and Rockenschaub 14 Debacker;Reference Debacker, Hubloue and Dhondt 40 Fattah;Reference Fattah, Rehn, Reierth and Wisborg 41 - Reference Fattah, Rehn, Lockey, Thompson, Lossius and Wisborg 43 Sundnes;Reference Sundnes 9 and Birnbaum et al.Reference Rüter 27 , Reference Birnbaum, Daily, O’Rourke and Loretti 44 - Reference Birnbaum, Daily, O’Rourke and Loretti 51 While Fattah identified more than 10 frameworks, she also discovered that none had been validated and they were not commonly used to structure evaluations and research in the disaster setting.Reference Fattah, Rehn, Reierth and Wisborg 42

It became evident that a core unifying framework did not exist to structure disaster evaluation and research. In an attempt to create a tool and consolidate the diverse, non-validated frameworks, the authors incorporated key components from The Guidelines 13 and “The Impacts Framework” (comprising event, event characteristics, object, harm, and impacts) from StephensonReference Stephenson 38 into their framework. Disaster Evaluation Typologies was created by linking and integrating the various typologies into a unifying core structure, which aims to inform and support a Comprehensive Framework for Disaster Evaluation Typologies.

The Comprehensive Framework for Disaster Evaluation Typologies is presented using the following headings:

  1. Figure One: Core Structure;

  2. Figure Two: Baselines;

  3. Figure Three: Consequences;

  4. Figure Four: Outcomes;

  5. Figure Five: Impact Evaluations;

  6. Figure Six: Accountability;

  7. Figure Seven: Evaluation Standards and Evidence; and

  8. Figure Eight: Disaster Evaluation Typologies: Comprehensive Framework.

Core Structure

The Core Structure outlines the fundamental framework of Disaster Evaluation Typologies to which all other entries will be related (Figure 1: Core Structure).

Figure 1 Core Structure.

The Core Structure consists of three important layers. The first layer is found at the bottom of the diagram and provides a preliminary and simplistic view of the disaster continuum or timeline. At its most basic level, this layer has three core elements or phases that are represented by a pre-event phase, an event phase, and the post-event phase of an emergency or disaster. While each phase can be identified individually, their timing is not necessarily sequential and the phases can overlap. Post-event phases will influence the pre-event phase of subsequent events.

The second layer is represented by an expansion of the earlier pre-event, event, and post-event phases and is based on a modified representation of The Guidelines, 13 as already noted. Additional detail is evident and the relationships of key disaster phases are demonstrated, including:

The third layer introduces the concept of “Strengthening Resilience” as an overarching theme. It enhances and enriches the Core Structure of Disaster Evaluation Typologies and is an emerging, international imperative embraced within the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR). 52 Key elements of “Strengthening Resilience” include:

  1. Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR); 52

  2. Sustainable Development Goals (SDGs); 59

  3. Climate Change Conference (COP21); 60

  4. Global Facility for Disaster Reduction and Recovery (GFDRR) Recovery Framework, also known as Post-Disaster Needs Assessment (PDNA) Recovery Framework; 61 and

  5. Rockefeller Foundation 100 Resilient Cities (RC) Framework.Reference Kete 62

The Core Structure for the Comprehensive Framework is illustrated in Figure 1.

Baselines

Baselines are a series of evaluations or assessments that occur during the pre-event phase of disasters and provide information about the current state of the community (Figure 2: Baselines).

Figure 2 Baselines.

Baseline evaluations include any information or data that have been collected prior to an event or disaster occurring. They cover both the pre-event status of a community and the actual hazard itself. Obtaining Baseline information in the pre-event phase is critical to understanding the state of the community, how it has been affected by a disaster,Reference Koenig and Schultz 2 , Reference Cutter, Burton and Emrich 63 and the subsequent damage that has occurred. This information assists in identifying community strengths, weaknesses, and vulnerabilities to disasters. Additionally, this information will assist in developing appropriate disaster management and disaster risk-reduction strategies.

Innovations in science and technology have made it easier in recent years to collect information that helps reduce disaster risk, and therefore, plan for the future.Reference Fung 64 The SFDRR, 52 ratified in Sendai, Japan in March 2015 by 187 UN Member States, acknowledges that there is a growing demand for science and technology to play a more prominent and effective role in providing evidence for policy and decision making. Knowledge is essential to the process. A strengthened evidence base to support the implementation of disaster risk-reduction strategies also is required.Reference Fowler 65 , Reference Carabine 66 Furthermore, Priority 4, paragraph 34(b) of the SFDRR supports the “further development and dissemination of instruments, such as standards, codes, operational guides, and other guidance instruments to support coordinated action in disaster preparedness and response to facilitate information sharing on lessons learned and best practices for policy practice and post-disaster reconstruction programmes.” 52

Examples of Baseline evaluations include, but are not limited to:

Baseline information and evaluations are illustrated in Figure 2.

Consequences

Consequences are a series of evaluations and/or assessments that occur after the event or disaster has occurred and include assessment of damage and changes in function (Figure 3: Consequences).

Figure 3 Consequences.

Consequence evaluations include any information or assessments that have been collected after an event or disaster has occurred. They cover both the event and post-event phases of the disaster timeline. Systematic data collection and assessment are required in order to inform disaster needs analysis after an event. These data are used in monitoring the effectiveness of response and recovery interventions and to aid decision making.

Examples of Consequence evaluations include, but are not limited to:

  • Rapid needs assessments (damage), which usually occur on Day 1 after the event; 78 , 79

  • Detailed needs assessments (functional), which usually occur on Days 2-3 and may include PDNA; 80

  • Continual assessments that include monitoring and surveillance, which usually occur on multiple occasions after the event; 78 and

  • Independent real-time evaluationsReference Cosgrave, Ramalingam and Beck 81 and collaborative joint evaluations, which are contemporary evaluation types.Reference Beck and Buchanan-Smith 82

The information received from these evaluations will ideally be compared with previous Baseline studies and incorporated to help plan response and recovery for the current event, to provide feedback into planning and preparing for subsequent events, and to assist in disaster risk reduction. 83 Currently, damage and loss trends are difficult to monitor over time, partly due to inconsistent methodologies and the fact that very few countries keep national disaster databases. Even then, only one in five countries has consistently recorded economic losses using validated tools and data collection methods. 84 The PDNAs aim to provide a common approach to post-crisis needs assessments and recovery planning. 85

The Centre for Research on the Epidemiology of Disasters (CRED; Brussels, Belgium) promotes research, training, and information dissemination on disasters. 86 In the Australian context, the Australian Business Roundtable for Disaster Resilience and Community Safety provides a first-time overview of disaster data with the aim of making Australian communities safer and more resilient to natural disasters. 87

In an attempt to reduce disaster risk and strengthen resilience, a feedback loop is present in Figure 8 from Consequences to Baselines. Consequence evaluations are illustrated in Figure 3.

Outcomes

Outcomes are a series of evaluations and/or assessments that occur towards the end of the post-event phase of a disaster (Figure 4: Outcomes).

Figure 4 Outcomes.

Outcome evaluations reflect information or data that have been collected after an event or disaster has occurred. These evaluations include summative reviews of processes used in managing the event and outcomes related to the post-event status of the community. This information will ideally be incorporated into planning and preparing for the next event or disaster.

Examples of Outcome evaluations include, but are not limited to:

In an attempt to reduce disaster risk and strengthen resilience, a feedback loop is present in Figure 8 from Outcomes to Baselines. Outcome evaluations are illustrated in Figure 4.

Impact Evaluations

Impact Evaluations of programs, projects, and interventions are evaluations that include a measure of causality or attribution 97 and can occur during any phase of the disaster timeline (Figure 5: Impact Evaluations).

Figure 5 Impact Evaluations.

In the disaster setting, Impact Evaluations have gained popularity for identifying causal links between specific interventions and outcomes. This is a result of the international community demanding accountability and improved evidence-based interventions.Reference Clarke and Darcy 10 , Reference Clarke, Allen, Archer, Wong, Eriksson and Puri 12 Although there remains ongoing debate about the exact definition of Impact Evaluations,Reference Buttenheim 6 they are particularly well-suited to answer important questions, such as: whether interventions do or do not work; whether interventions make a positive or negative impact; whether there are intended or unintended consequences; and how cost effective they are. 98 , Reference Puri, Aladysheva, Iversen, Ghorpade and Bruck 99 It is believed they will greatly improve the effectiveness of interventions delivered in the disaster setting by identifying what works for whom, and why.Reference White 100
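To make the notion of “a measure of causality or attribution” concrete, the sketch below computes a difference-in-differences estimate, one widely used impact-evaluation design, comparing communities that did and did not receive an intervention. The design choice and all numbers here are illustrative assumptions, not methods or data from this study.

```python
# Hypothetical difference-in-differences (DiD) sketch: attribute the change in
# an outcome (e.g., a health indicator) to an intervention by comparing the
# before/after change in treated communities against untreated ones.
# All outcome values below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

treated_before = [40, 42, 38, 41]   # treated communities, pre-intervention
treated_after  = [55, 58, 54, 57]   # treated communities, post-intervention
control_before = [39, 41, 40, 42]   # untreated communities, same periods
control_after  = [45, 47, 44, 46]

# Change observed in each group.
treated_change = mean(treated_after) - mean(treated_before)  # intervention + background trend
control_change = mean(control_after) - mean(control_before)  # background trend only

# DiD estimate: the portion of the change attributable to the intervention.
impact_estimate = treated_change - control_change
print(round(impact_estimate, 2))  # 10.75
```

Subtracting the control group's change strips out the background trend that would have occurred anyway, which is precisely the attribution question that distinguishes Impact Evaluations from purely descriptive reporting.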

Examples of Impact Evaluations include, but are not limited to:

Impact Evaluations are illustrated in Figure 5.

Accountability

Accountability to donors, stakeholders, and beneficiaries is a cross-cutting theme across all phases of the disaster timeline and is applicable to every evaluation undertaken in the disaster settingReference Brown and Moore 104 (Figure 6: Accountability).

Figure 6 Accountability.

Over the last 20 years, there has been a call for greater Accountability in disaster and humanitarian settings. More recently, at an international level, there has been mounting pressure to strengthen quality, accountability, and learning practices, while also ensuring transparency.Reference Tan and von Schreeb 105 - 108 The lack of an accepted definition for “accountability” in the humanitarian context remains a challenge. The term “accountability” seems to represent a whole range of concepts and principles.Reference Tan and von Schreeb 105 The ALNAP is an example of an international organization dedicated to improving humanitarian performance through accountability and increased learning.Reference Buchanan-Smith and Cosgrave 109

For the purpose of this Typology, the term “accountability” will be defined as the means by which power is used responsibly. This includes consideration of the views of all interested parties (including donors, stakeholders, and beneficiaries). 75

Examples of Accountability evaluations include, but are not limited to:

  • 2013 Humanitarian Accountability Report; 110 and

  • Catholic Relief Services Monitoring, Evaluability, Accountability, and Learning in Emergencies: A Resource Pack for Simple and Strong MEAL.Reference Morel and Hagens 111

Accountability is illustrated in Figure 6.

Evaluation Standards and Evidence

Evaluation Standards and Guidelines, Evidence-Based Reviews and Registries, and Knowledge Management are important cross-cutting themes that are relevant throughout the entire disaster timeline (Figure 7: Evaluation Standards and Evidence).

Figure 7 Evaluation Standards and Evidence.

Evaluation Standards and Guidelines include generic Evaluation Standards, such as:

Additionally, there are disaster-specific Evaluation Standards and Guidelines published by many NGOs that include, but are not limited to:

  • IFRC Project/Programme Monitoring and Evaluation (M & E) Guide; 119

  • Save the Children Evaluation Handbook;Reference O’Neill 120

  • ALNAP Evaluation of Humanitarian Action Guide;Reference Buchanan-Smith and Cosgrave 121

  • International Initiative for Impact Evaluation (3ie) Principles for Impact Evaluation; 122

  • World Bank Group Impact Evaluations: Relevance and Effectiveness; 123

  • Centers for Disease Control and Prevention (CDC) Program Performance and Evaluation Office – Program Evaluation; 124 and

  • Australian Agency for International Development (AusAID) Monitoring, Evaluation, and Learning Framework. 125

Evaluation Standards and Guidelines also include guidelines for responsible and ethical conduct in undertaking evaluations that include, but are not limited to:

  • United Nations Evaluation Group (UNEG) Ethical Guidelines for Evaluation; 126

  • Australian Council for International Development (ACFID) Guidelines for Ethical Research and Evaluation in Development; 127 and

  • Australasian Evaluation Society (AES) Guidelines for the Ethical Conduct of Evaluations. 128

Evidence-Based Reviews and Registries include meta-evaluations, systematic reviews, other types of literature review typologies, and registries of evaluation reports.

The level and quality of evidence in this setting has recently been reviewed by Clarke and Darcy in Insufficient Evidence? The Quality and Use of Evidence in Humanitarian Action – ALNAP Study.Reference Clarke and Darcy 10 Despite improvements over the last 20 years, they identified that there remains room for further development in the quality and use of evidence in the humanitarian setting. The authors also suggest that “evidence matters: the use of good quality evidence improves the effectiveness and accountability of humanitarian action, and is in accordance with humanitarian ethics and principles.”Reference Clarke and Darcy 10

Systematic Reviews

Systematic Reviews are structured, comprehensive literature reviews that utilize a rigorous and published search strategy, with the aim of minimizing selection bias. 129 - 131

Examples of Systematic Reviews in this discipline include, but are not limited to:

Other literature review typologies include: scoping reviews, 21 , Reference Al Thobaity, Williams and Plummer 135 gap analyses,Reference Clarke, Allen, Archer, Wong, Eriksson and Puri 12 and priority settings. 136

Meta-Evaluations

Meta-Evaluations are systematic and formal evaluations of evaluationsReference Olsen and O’Reilly 137 and represent a high level of evidence; however, they are uncommon in the disaster setting.

Examples of Meta-Evaluations include, but are not limited to:

  • ALNAP Review of Humanitarian Action in 2003: Improving Monitoring to Enhance Accountability and Learning, Chapter 4 Meta-Evaluation; 138 and

  • Groverman and Hartmans Meta-Evaluation and Synthesis of the 2010 Pakistan Floods Response by SHO Participants: A Synthesis of Conclusions, Report Phase 2.Reference Groverman and Hartmans 139

Registries

For the purpose of this paper, Registries (sometimes called repositories) are defined as publicly available, free-access collations of evaluation studies that have been undertaken in the disaster setting. Registries aim to help build capacity and strengthen disaster risk reduction and resilience. 140 A separate review of such Registries undertaken by the lead author suggests that these are not well-known in the disaster sector but contain a large number of evaluation reports that might be of use to aid decision making and improve practice.Reference Clarke, Allen, Archer, Wong, Eriksson and Puri 12

Examples of disaster evaluation Registries include, but are not limited to:

  • Independent Evaluation Group (IEG) hosted by the World Bank; 141

  • Humanitarian Evaluation and Learning Portal (HELP) hosted by ALNAP; 142

  • IFRC; 143

  • Evaluation and Research Database (ERD) hosted by United Nations Children’s Emergency Fund (UNICEF); 144 and

  • Impact Evaluations hosted by 3ie. 145

Knowledge Management

Knowledge management includes cross-sectoral research, collaboration, and dissemination of information to improve the evidence base of disaster science and to improve practice. Sharing knowledge enables informed decision making regarding disaster risk reduction and management. 146

Evaluation Standards and Guidelines, Evidence-Based Reviews and Registries, and Knowledge Management are relevant in advancing the science of disaster evaluations by providing scientific rigor, common terminology, and the ability to replicate various methodologies.

Evaluation Standards and Guidelines are illustrated in Figure 7.

Disaster Evaluation Typologies: Comprehensive Framework

Disaster Evaluation Typologies: Comprehensive Framework identifies the different typologies of disaster evaluations and demonstrates key relationships in a single diagram. It suggests the interdependencies and relationships between various evaluation typologies along the disaster timeline and within the disaster setting. This consolidates the previous Figures 1–7 (Figure 8: Disaster Evaluation Typologies: Comprehensive Framework).

Figure 8 Disaster Evaluation Typologies: Comprehensive Framework.

A strong evaluation framework for disaster settings is increasingly important given the growing frequency and scale of disasters. It would need to utilize agreed definitions and be able to measure the impact and effectiveness of interventions. It is anticipated that Disaster Evaluation Typologies: Comprehensive Framework will prove both useful and usable, and will promote an environment for constructive dialogue at an international level. 147

Figure 8 also includes feedback loops from Consequences and Outcomes to improve Baselines, reduce disaster risk, and strengthen resilience. The framework is not limited to any one phase of the disaster timeline and can be used for responding to disasters, humanitarian crises, or in the development sector.

Disaster Evaluation Typologies: Comprehensive Framework is illustrated in Figure 8.

Discussion

Natural disasters are, of themselves, complex events 148-150 and undertaking structured evaluations in this setting is also a complex activity. The authors intended to create a classification of disaster evaluation typologies that would provide structure, encourage common terminology, and advance the evidence base of disaster science. The role of the framework is to support the measurement and evaluation of the effectiveness of interventions provided in the disaster setting, and thereby to reduce the growing human and economic costs associated with disasters.

The Comprehensive Framework outlined in this paper is the first framework of this type and thus makes a unique contribution to current knowledge. No previous reference has been located that identifies such a wide range of evaluation typologies used in the disaster setting and further provides conceptual relationships in a single comprehensive framework. The aim of the authors is consistent with that of James J. James in his recent Editorial where he concludes “A common Disaster Medicine and Global Health taxonomy will form the foundation of a safer, more resilient world, through more effective preparedness and response; but we must first come together for the public good.” 151

The Comprehensive Framework will undergo further research to validate the typologies and their relationships through structured interviews with targeted international experts in both general evaluation and disaster evaluations. Subsequently, additional work is needed to identify and develop toolkits of standards and guidelines for each of the evaluation typologies identified, as well as for any methods that come to light as a result of the validation process. A recent example is the frameworks for Disaster Research and Evaluation published by Birnbaum, Daily, O’Rourke, Loretti, and Kushner. 27,44-51

Limitations

Limitations of the evolving Comprehensive Framework include, first, the difficulty of searching the “grey literature” and of identifying all evaluation typologies used in this sector; there may be evaluation typologies that have not been identified. Secondly, the suggested relationships are framed through the eyes of the authors, and there may be alternative perspectives from which to frame these relationships. Thirdly, there is a lack of a unifying theory for disaster evaluation. Finally, the authors have not considered specific research methods that might be utilized in the disaster setting; these can be found in any standard textbook on research methods in epidemiology, social sciences, or kindred disciplines. It is hoped that the validation process will address these limitations.

Conclusion

Disaster Evaluation Typologies: Comprehensive Framework identifies the different typologies of disaster evaluations that were identified in this study and brings them together in a single framework. It suggests interdependencies and relationships that exist between various evaluation typologies within the disaster setting. This unique unifying framework has relevance at an international level and is expected to benefit the disaster, humanitarian, and development sectors. This work promotes an environment for constructive dialogue on evaluation in the disaster setting and adds to the evidence base of disaster evaluation and research.

Acknowledgements

The authors would like to acknowledge and thank: Lauren Vassallo for her valuable graphic design work on the diagrams; and Jackie van Dam and Dr. Sarah Wong for their assistance with editing the manuscript.

Supplementary Material

To view supplementary material for this article, please visit https://doi.org/10.1017/S1049023X17006471

Footnotes

Conflicts of interest/previous presentations: This paper is based on a conference presentation delivered at the 19th World Conference on Disaster and Emergency Medicine (WCDEM), Cape Town, South Africa, April 2015 and a poster presentation delivered at the United Nations Office for Disaster Risk Reduction (UNISDR) Science and Technology Conference on the Implementation of the Sendai Framework for Disaster Risk Reduction 2015-2030, Geneva, Switzerland, January 2016. The authors declare no conflicts of interest.

*

Author names have been changed since original publication.

References

1. Downton, MA, Pielke, RA. How accurate are disaster loss data? The case of US flood damage. Nat Hazards. 2005;35:211-228.CrossRefGoogle Scholar
2. Koenig, KL, Schultz, CH. (eds). Koenig and Schultz’s Disaster Medicine: Comprehensive Principles and Practices. New York USA: Cambridge University Press; 2010.Google Scholar
3. Coppola, DP. Introduction to International Disaster Management. (2nd Ed.). Amsterdam, Netherlands: Elsevier; 2011.Google Scholar
4. De Smet, H, Schreurs, B, Leysen, J. The response phase of the disaster management life cycle revisited within the context of “disasters out of the box.” Homeland Security and Emergency Management. 2015;12(2):319-350.Google Scholar
5. World Health Organization (WHO), Health Protection Agency, United Nations International Strategy for Disaster Reduction (UNISDR). Disaster Risk Management for Health: Overview 2011. http://www.who.int/hac/events/drm_fact_sheet_overview.pdf. Accessed June 2016.Google Scholar
6. Buttenheim, A. Impact Evaluation in the Post-Disaster Setting: A Conceptual Discussion in the Context of the 2005 Pakistan Earthquake. International Initiative for Impact Evaluation (3ie), 2009. http://reliefweb.int/sites/reliefweb.int/files/resources/ E709B504EC3925DCC125768D002B5F30-3ie-working-paper-5.pdf. Accessed June 2016.Google Scholar
7. Wisner, B, Blaikie, P, Vannon, T, Davis, I. At Risk: Natural Hazards, People’s Vulnerability and Disasters. (2nd Ed.). Abingdon, United Kingdom: Routledge; 2003.Google Scholar
8. Veenema, TG (ed). Disaster Nursing and Emergency Preparedness for Chemical, Biological and Radiological Terrorism and Other Hazards. Second Edition. New York USA: Springer Publishing Company; 2007.Google Scholar
9. Sundnes, KO. Task Force on Quality Control of Disaster Management (TFQCDM). Health disaster management: guidelines for evaluation and research in the “Utstein Style.” Structural framework, operational framework, and preparedness. Scandinavian J Public Health. 2014;42(Supplement 14):1-195.Google Scholar
10. Clarke, PK, Darcy, J. Insufficient Evidence? The Quality and Use of Evidence in Humanitarian Action. ALNAP Study. London, United Kingdom: Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) Overseas Development Institute (ODI); 2014.Google Scholar
11. Puri, J, Aladysheva, A, Iversen, V, Ghorpade, Y, Bruck, T. What Methods May Be Used in Impact Evaluations of Humanitarian Assistance? 3ie Working Paper 22. New Delhi, India: International Initiative for Impact Evaluation (3ie); 2014.CrossRefGoogle Scholar
12. Clarke, M, Allen, C, Archer, F, Wong, D, Eriksson, A, Puri, J. What Evidence is Available and What is Required, in Humanitarian Assistance? 3ie Scoping Paper 1. New Delhi, India: International Initiative for Impact Evaluation (3ie); 2014.Google Scholar
13. Task Force on Quality Control of Disaster Management (TFQCDM). World Association for Disaster and Emergency Medicine (WADEM), Nordic Society for Disaster Medicine. Health disaster management guidelines for evaluation and research in the Utstein Style. Prehosp Disaster Med. 2003;17(Supplement 3):1-177.Google Scholar
14. Kulling, P, Birnbaum, M, Murray, V, Rockenschaub, G. Guidelines for reports on health crises and critical health events. Prehosp Disaster Med. 2010;25(4):377-382.CrossRefGoogle ScholarPubMed
15. Stufflebeam, DL, Coryn, CLS. Evaluation Theory, Models and Applications. (2nd Ed.). San Francisco, California USA: Jossey-Bass; 2007.Google Scholar
16. Sanders, RJ (ed). The Program Evaluation Standards: The Joint Committee on Standards for Educational Evaluation. Second Edition. Thousand Oaks, California USA: Sage Publications Inc; 1994.Google Scholar
17. Yarbrough, DB, Shulha, LM, Hopson, RK, Caruthers, FA. The Program Evaluation Standards, A Guide for Evaluators and Evaluation Users. 3rd Edition. Los Angeles, California USA: Sage Publications Inc; 2011.Google Scholar
18. Davidson, EJ. Evaluation Methodology Basics. Los Angeles, California USA: Sage Publications; 2005.Google Scholar
19. Oxford University Press. Oxford University Press; 2016. [Definition of method]. http://www.oxforddictionaries.com/definition/english/method. Accessed June 2016.Google Scholar
20. Oxford Dictionaries Oxford: Oxford University Press; 2016. [Definition of typology]. http://www.oxforddictionaries.com/definition/english/typology. Accessed June 2016.Google Scholar
21. The Joanna Briggs Institute Reviewer’s Manual 2015: Methodology for JBI Scoping Reviews. Adelaide, Australia: Joanna Briggs Institute; 2015.Google Scholar
22. Smith, E, Wasiak, J, Sen, A, Archer, F, Burkle, FM. Three decades of disasters: a review of disaster-specific literature from 1977-2009. Prehosp Disaster Med. 2009;24(4):306-311.CrossRefGoogle ScholarPubMed
23. United Nations Office for the Coordination of Humanitarian Affairs (OCHA). Relief Web. Geneva, Switzerland: United Nations Office for the Coordination of Humanitarian Affairs (OCHA); 2016. http://reliefweb.int/about. Accessed June 2016.Google Scholar
24. Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP). London, United Kingdom: ALNAP Overseas Development Institute (ODI); 2016. http://www.alnap.org/who-we-are/our-role. Accessed June 2016.Google Scholar
25. Rådestad, M, Jirwe, M, Castrén, M, Svensson, L, Gryth, D, Rüter, A. Essential key indicators for disaster medical response suggested to be included in a national uniform protocol for documentation of major incidents: a Delphi study. Scand J Trauma Resusc Emerg Med. 2013;21(1):1-11.CrossRefGoogle Scholar
26. Leiba, A, Schwartz, D, Eran, T, et al. DISAST-CIR: Disastrous Incidents Systematic Analysis Through Components, Interactions, and Results: application to a large-scale train accident. J Emerg Med. 2009;37(1):46-50.CrossRefGoogle ScholarPubMed
27. Rüter, A. Disaster Medicine - Performance Indicators, Information Support and Documentation. A Study of an Evaluation Tool. Linkoping: Linköping University; 2006.Google Scholar
28. von Schreeb, J. Needs Assessment for International Humanitarian Health Assistance in Disasters. Stockholm, Sweden: Karolinska Insitutet; 2007.Google Scholar
29. Wong, D, Spencer, C, Boyd, L, Burkle, FJ, Archer, F. A review of the history and use of ‘Health Disaster Management Guidelines for Evaluation and Research in the Utstein Style’ (Abstract). 19th World Conference on Disaster and Emergency Medicine; April 2015; Cape Town, South Africa: Prehospital and Disaster Medicine; 2015:170.Google Scholar
30. Kohl, PA, O’Rourke, AP, Schmidman, DL, Dopkin, WA, Birnbaum, ML. The Sumatra-Andaman Earthquake and Tsunami of 2004: the hazards, events, and damage. Prehosp Disaster Med. 2005;20(6):356-363.CrossRefGoogle ScholarPubMed
31. World Health Organization (WHO). Tsunami 2004: A Comprehensive Analysis, Volume I. New Delhi, India: World Health Organization (WHO), Regional Office for South-East Asia; 2013: 354.Google Scholar
32. World Health Organization (WHO). Tsunami 2004: A Comprehensive Analysis, Volume II. New Delhi, India: World Health Organization (WHO), Regional Office for South-East Asia; 2013: 321.Google Scholar
33. Wong, D, Spencer, C, Boyd, L, McArdle, D, Burkle, FJ, Archer, F. Thematic Analysis of Seven Australian Disaster Reports or Inquiries. 19th World Conference on Disaster and Emergency Medicine; April 2015. Cape Town, South Africa: Prehospital and Disaster Medicine; 2015: 170.Google Scholar
34. Adibhatia, S, Dudek, O, Ramsel-Miller, J, Birnbaum, M. Classification of disaster health publications. Presented at 19th World Congress on Disaster and Emergency Medicine. Cape Town, South Africa. April 2015. Prehosp Disaster Med. 2015;30(Suppl 1):s111.Google Scholar
35. Stratton, S. Is there a scientific basis for disaster health and medicine? Prehosp Disaster Med. 2014;29(3):221-222.CrossRefGoogle Scholar
36. Birnbaum, ML, Daily, EK, O’Rourke, AP, Loretti, A. Research and evaluations of the health aspects of disasters, Part I: an overview. Prehosp Disaster Med. 2015;30(5):512-522.CrossRefGoogle ScholarPubMed
37. Inter-Agency Standing Committee (IASC). Geneva, Switzerland; 2016. https://interagencystandingcommittee.org/. Accessed June 2016.Google Scholar
38. Stephenson, C. Impacts Framework for Natural Disasters and Fire Emergencies. Melbourne, Australia: RMIT University and Bushfire CRC; 2010.Google Scholar
39. Powers, R, Daily, E (eds). International Disaster Nursing. New York USA: Cambridge University Press; 2010.CrossRefGoogle Scholar
40. Debacker, M, Hubloue, I, Dhondt, E, et al. Utstein-style template for uniform data reporting of acute medical response in disasters. PLOS Curr. 2012;4:e4f6cf3e8df15a.Google ScholarPubMed
41 Fattah, S, Rehn, M, Reierth, E, Wisborg, T. Templates for reporting prehospital major incident medical management: systematic literature review. BMJ Open. 2012;2(e001082):5.CrossRefGoogle ScholarPubMed
42. Fattah, S, Rehn, M, Reierth, E, Wisborg, T. Systematic literature review of templates for reporting prehospital major incident medical management. BMJ Open. 2013;3(8).CrossRefGoogle ScholarPubMed
43. Fattah, S, Rehn, M, Lockey, D, Thompson, J, Lossius, HM, Wisborg, T. A consensus based template for reporting of prehospital major incident medical management. Scand J Trauma Resusc Emerg Med. 2014;22(5):6.Google ScholarPubMed
44. Birnbaum, ML, Daily, EK, O’Rourke, AP, Loretti, A. Research and evaluations of the health aspects of disasters, Part II: the disaster health conceptual framework revisited. Prehosp Disaster Med. 2015;30(5):523-538.CrossRefGoogle ScholarPubMed
45. Birnbaum, ML, Daily, EK, O’Rourke, AP. Research and evaluations of the health aspects of disasters, Part III: framework for the temporal phases of disasters. Prehosp Disaster Med. 2015;30(6):628-632.CrossRefGoogle ScholarPubMed
46. Birnbaum, ML, Daily, EK, O’Rourke, AP. Research and evaluations of the health aspects of disasters, Part IV: framework for societal structures: the social systems. Prehosp Disaster Med. 2015;30(6):633-647.CrossRefGoogle Scholar
47. Birnbaum, ML, Daily, EK, O’Rourke, AP. Research and evaluations of the health aspects of disasters, Part V: epidemiological disaster research. Prehosp Disaster Med. 2015;30(6):648-656.CrossRefGoogle ScholarPubMed
48. Birnbaum, ML, Daily, EK, O’Rourke, AP, Kushner, J. Research and evaluations of the health aspects of disasters, Part VI: interventional research and the disaster logic model. Prehosp Disaster Med. 2016;31(2):1-14.Google ScholarPubMed
49. Birnbaum, ML, Daily, EK, O’Rourke, AP. Research and evaluations of the health aspects of disasters, Part VII: the relief/recovery framework. Prehosp Disaster Med. 2016;31(2):1-16.Google ScholarPubMed
50. Birnbaum, ML, Loretti, A, Daily, EK, O’Rourke, AP. Research and evaluations of the health aspects of disasters, Part VIII: risk, risk reduction, risk management, and capacity building. Prehosp Disaster Med. 2016;31(3):1-9.Google ScholarPubMed
51. Birnbaum, ML, Daily, EK, O’Rourke, AP, Loretti, A. Research and evaluations of the health aspects of disasters, Part IX: risk-reduction framework. Prehosp Disaster Med. 2016;31(3):1-17.Google ScholarPubMed
52. United Nations General Assembly. Sendai Framework for Disaster Risk Reduction 2015-2030; 2015. http://www.wcdrr.org/uploads/Sendai_Framework_for_Disaster_Risk_Reduction_2015-2030.pdf. Accessed June 2016.Google Scholar
53. Council of Australian Governments (COAG). National Strategy on Disaster Resilience. Barton, Australia: Commonwealth of Australia; 2011: 29.Google Scholar
54. Australian Emergency Management Institute. Australian Emergency Management Arrangements. (2nd Ed.). Canberra, Australia: Australian Emergency Management Institute, Department AGA-Gs; 2014.Google Scholar
55. Victorian Emergency Management Strategic Action Plan 2015 - 2018. Melbourne, Australia: Emergency Management Victoria; 2015.Google Scholar
56. Twigg, J. Characteristics of a Disaster Resilient Community: A Guidance Note Version 2. London, United Kingdom: University College; 2009.Google Scholar
57. Clark, H. Building Resilience: The Importance of Prioritizing Disaster Risk Reduction – A United Nations Development Programme Perspective Hopkins Lecture, University of Canterbury; August 2012. Christchurch: United Nations Development Programme (UNDP); 2012.Google Scholar
58. Bahadur, A, Lovell, E, Wilkinson, E, Tanner, T. Resilience in the SDGs: Developing an Indicator for Target 1.5 that is Fit for Purpose. London, United Kingdom: Overseas Development Institute (ODI); 2015.Google Scholar
59. United Nations (UN). Sustainable Development Goals. 17 Goals to Transform Our World. Geneva, Switzerland: United Nations (UN); 2016. http://www.un.org/sustainabledevelopment/sustainable-development-goals/. Accessed June 2016.Google Scholar
60. United Nations (UN). United Nations Conference on Climate Change Paris. United Nations (UN); 2015. http://www.cop21.gouv.fr/en/. Accessed June 2016.Google Scholar
61. Global Facility for Disaster Risk Reduction and Recovery (GFDRR). 2016. https://www.gfdrr.org/who-we-are. Accessed June 2016.Google Scholar
62. Kete, N. How to Build a Resilient City: The City Resilience Framework. Women in Clean Energy Symposium; September 2014. The Rockefeller Foundation, ARUP; 2014.Google Scholar
63. Cutter, SL, Burton, CG, Emrich, CT. Resilience indicators for benchmarking baseline conditions. J Homeland Security Emergency Management. 2010;7(1, Article 51):24.CrossRefGoogle Scholar
64. Fung, V. Using GIS For Disaster Risk Reduction Geneva: United Nations Office for Disaster Risk Reduction (UNISDR); 2012. http://www.unisdr.org/archive/26424. Accessed June 2016.Google Scholar
65. Fowler, J. Appliance of Science Key to Disaster Risk Reduction. Geneva, Switzerland: United Nations Office for Disaster Risk Reduction (UNISDR); 2015. http://www.unisdr.org/archive/47180. Accessed June 2016.Google Scholar
66. Carabine, E. Revitalizing evidence-based policy for the Sendai Framework for Disaster Risk Reduction 2015-2030: lessons from existing international science partnerships. PLOS Curr. 2015;7.Google Scholar
67. Singh-Peterson, L, Salmon, P, Goode, N, Gallina, J. Translation and evaluation of the baseline resilience indicators for communities on the Sunshine Coast, Queensland Australia. International J Disaster Risk Reduction. 2014;10(Part A):116-126.CrossRefGoogle Scholar
68. Bamberger, M. Reconstructing baseline data for impact evaluation and results measurement. World Bank, Prem Notes. 2010;4:1-10.Google Scholar
69. Davies, R. Evaluability Assessment: Better Evaluation; 2015. http://betterevaluation.org/themes/evaluability_assessment. Accessed June 2016.Google Scholar
70. Abbas, SH, Srivastava, RK, Tiwari, RP, Ramudu, PB. GIS-based disaster management. Management of Quality: An International Journal. 2009;20(1):33-51.Google Scholar
71. Renger, R, Cimetta, A, Pettygrove, S, Rogan, S. Geographical Information Systems (GIS) as an evaluation tool. AJE. 2002;23(4):469-479.Google Scholar
72. Burkle, Jr. FM, Martone, G, Greenough, PG. The Changing face of humanitarian crises. Brown J World Affairs. 2014;XX(11):25-48.Google Scholar
73. Burkle, FJ, Greenough, PG. Impact of public health emergencies on modern taxonomy, planning, and response. Disaster Med Public Health Prep. 2008;2(3):192-199.CrossRefGoogle ScholarPubMed
74. The Sphere Project. Humanitarian Charter and Minimum Standards in Humanitarian Response. Rugby, United Kingdom: Practical Action Publishing; 2011.Google Scholar
75. Core Humanitarian Standard on Quality and Accountability. Groupe URD, HAP International, People in Aid, Sphere Project; 2014.Google Scholar
76. Developing Early Warning Systems: A Checklist. EWC III Third International Conference on Early Warning from Concept to Action. Bonn, Germany: International Strategy for Disaster Reduction (ISDR); 2006: 13.Google Scholar
77. Health Impact Assessment. Main Concepts and Suggested Approach, Gothenburg Consensus Paper. Brussels, Belgium: World Health Organization (WHO) Region Office for Europe European Centre for Health Policy ECHP; 1999.Google Scholar
78. Multi-Cluster/Sector Initial Rapid Assessment (MIRA). Inter-Agency Standing Committee (IASC), 2012. https://docs.unocha.org/sites/dms/documents/mira_final_version2012.pdf. Accessed June 2016.Google Scholar
79. Pan American Health Organization (PAHO), Americas WHO. Rapid Needs Assessment 2016. http://www.paho.org/disasters/index.php?option=com_content&view=article&id=744%3Arapid-needs-assessment&Itemid=800&lang=en. Accessed June 2016.Google Scholar
80. Post-Disaster Needs Assessment, Guidelines, Volume A. Global Facility for Disaster Reduction and Recovery (GFDRR), 2013. https://www.gfdrr.org/sites/gfdrr/files/PDNA-Volume-A.pdf. Accessed June 2016.Google Scholar
81. Cosgrave, J, Ramalingam, B, Beck, T. Real Time Evaluations of Humanitarian Action. An ALNAP Guide. London, United Kingdom: Overseas Development Institute (ODI); 2009.Google Scholar
82. Beck, T, Buchanan-Smith, M. Joint Evaluations Coming of Age? The Quality and Future Scope of Joint Evaluations. London, United Kingdom: Active Learning Network for Accountability and Performance (ALNAP); 2008.Google Scholar
83. United Nations Office for Disaster Risk Reduction (UNISDR). Disaster Statistics Geneva: United Nations Office for Disaster Risk Reduction (UNISDR); 2015. http://www.unisdr.org/we/inform/disaster-statistics. Accessed June 2016.Google Scholar
84. Bureau for Crisis Prevention and Recovery. A Comparative Review of Country-Level and Regional Disaster Loss and Damage Databases. Geneva, Switzerland: United Nations Development Programme (UNDP); 2013.Google Scholar
85. International Recovery Platform. Post Disaster Needs Assessments: International Recovery Platform; 2013. http://www.recoveryplatform.org/pdna/. Accessed June 2016.Google Scholar
86. Centre for Research on the Epidemiology of Disasters (CRED) Geneva: PreventionWeb; 2016. http://www.preventionweb.net/organizations/712/view. Accessed June 2016.Google Scholar
87. The Australian Business Roundtable, 2016. http://australianbusinessroundtable.com.au/. Accessed June 2016.Google Scholar
88. After Action Review. Melbourne, Australia: Better Evaluation; 2015. http://betterevaluation.org/evaluation-options/after_action_review. Accessed June 2016.Google Scholar
89. The Monitoring and Assurance Framework for Emergency Management. Melbourne, Australia: Inspector-General for Emergency Management; 2015. www.igem.vic.gov.au/documents/CD/15/255352. Accessed June 2016.Google Scholar
90. Lessons Management. Canberra, Australia: Australian Emergency Management Institute; 2013.Google Scholar
91. Emergency Management Assurance Framework. Brisbane, Australia: Inspector-General Emergency Management; 2014.Google Scholar
92. The Federal Response to Hurricane Katrina: Lessons Learned. Washington, DC USA: White House; 2006. https://georgewbush-whitehouse.archives.gov/reports/katrina-lessons-learned/. Accessed June 2016.Google Scholar
93. The 2009 Victorian Bushfires Royal Commission Final Report. Melbourne, Australia: Victorian Bushfires Royal Commission; 2009. http://www.royalcommission.vic.gov.au/ Commission-Reports/Final-Report.html. Accessed June 2016.Google Scholar
94. Process Evaluations. Geneva, Switzerland: World Health Organization (WHO), United Nations International Drug Control Programme (UNDCP), European Monitoring Center on Drugs and Drug Addiction (EMCDDA); 2000.Google Scholar
95. A Framework for Program Evaluation. Atlanta, Georgia USA: Centers for Disease Control and Prevention (CDC); 2016. http://www.cdc.gov/eval/framework/index.htm. Access June 2016.Google Scholar
96. Program Development and Evaluation: Logic Model. Madison, Wisconsin USA: University of Wisconsin-Extension; 2014. http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html. Accessed June 2016.Google Scholar
97. Impact Evaluation Glossary. International Initiative for Impact Evaluation (3ie), 2012. Contract No.: Version No. 7. http://www.3ieimpact.org/media/filer_public/2012/07/11/impact_evaluation_glossary_-_july_2012_3.pdf. Accessed June 2016.Google Scholar
98. Bamberger M, Independent Evaluation Group (IEG). Institutionalizing Impact Evaluation Within the Framework of a Monitoring and Evaluation System. Washington, DC USA: World Bank; 2009.Google Scholar
99. Puri, J, Aladysheva, A, Iversen, V, Ghorpade, Y, Bruck, T. What Methods May Be Used in Impact Evaluations of Humanitarian Assistance?2015. http://ftp.iza.org/dp8755.pdf. Accessed June 2016.CrossRefGoogle Scholar
100. White, H. Some Reflections on Current Debates in Impact Evaluation. New Delhi, India: International Initiative for Impact Evaluation (3ie); 2009.Google Scholar
101. Buttenheim, A. Impact evaluation in the post-disaster setting: a case study of the 2005 Pakistan Earthquake. J Development Effectiveness. 2010;2(2):197-227.CrossRefGoogle Scholar
102. Rogers, PJ. Introduction to Impact Evaluation. InterAction, Better Evaluation, The Rockefeller Foundation; 2012. https://www.interaction.org/sites/default/files/1%20-%20Introduction%20to%20Impact%20Evaluation.pdf. Accessed June 2016.Google Scholar
103. Chambers, R, Karlan, D, Ravallion, M, Rogers, PJ. Designing Impact Evaluations: Different Perspectives. New Delhi, India: International Initiative for Impact Evaluation (3ie); 2011.Google Scholar
104. Brown, LD, Moore, MH. Accountability, Strategy, and International Non-Governmental Organizations. The Hauser Center for Nonprofit Organizations, The Kennedy School of Government, Harvard University, 2001. Contract No.: Working Paper No. 7.CrossRefGoogle Scholar
105. Tan, YSA, von Schreeb, J. Humanitarian assistance and accountability: what are we really talking about? Prehosp Disaster Med. 2015;30(3):264-270.CrossRefGoogle ScholarPubMed
106. Griekspoor, A, Sondorp, E. Enhancing the quality of humanitarian assistance: taking stock and future initiatives. Prehosp Disaster Med. 2001;16(4):209-215.CrossRefGoogle ScholarPubMed
107. Foran, MP, Williams, AR. Global uptake of the humanitarian accountability partnership over its first ten years. Prehosp Disaster Med. 2014;29(4):413-416.CrossRefGoogle ScholarPubMed
108. Steering Committee for Humanitarian Response (SCHR). SCHR Peer Review on Accountability to Disaster Affected Populations. An Overview of Lessons Learned. Steering Committee for Humanitarian Response (SCHR); 2010. http://schr.info/assets/uploads/ docs/100212-SCHR-Peer-Review-lessons-paper-January-2010.pdf. Accessed June 2016.Google Scholar
109. Buchanan-Smith, M, Cosgrave, J. Evaluation of Humanitarian Action: Pilot Guide. London, United Kingdom: Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP); 2013.Google Scholar
110. 2013 Humanitarian Accountability Report. Humanitarian Accountability Partnership (HAP); 2013. http://www.chsalliance.org/files/files/2013-har.pdf. Accessed June 2016.Google Scholar
111. Morel, D, Hagens, C. Monitoring, Evaluation, Accountability and Learning in Emergencies: A Resource Pack for Simple and Strong MEAL. Catholic Relief Services (CRS); 2012. http://www.crs.org/sites/default/files/tools-research/monitoring-evaluation-accountability-and-learning-in-emergencies.pdf. Accessed June 2016.Google Scholar
112. Patton, MQ. Qualitative Research and Evaluation Methods. (4th Ed.). Los Angeles, California USA: Sage Publications; 2015.Google Scholar
113. Camfield, L, Duvendack, M. Impact evaluation - are we ‘off the gold standard’? European J Development Research. 2014;26(1):1-11.CrossRefGoogle Scholar
114. United Nations Evaluation Group (UNEG). Standards for Evaluation in the UN System. Geneva, Switzerland: United Nations Evaluation Group (UNEG); 2005.Google Scholar
115. American Evaluation Association (AEA). The Program Evaluation Standards, Summary Form. Washington, DC USA: American Evaluation Association (AEA); 2016. http://www.eval.org/p/cm/ld/fid=103. Accessed June 2016.Google Scholar
116. DAC Principles for Evaluation of Development Assistance. Paris, France: Organization for Economic Co-Operation and Development (OECD); 1991.Google Scholar
117. Organization for Economic Co-Operation and Development (OECD). Principles for Evaluation of Development Assistance: Development Assistance Committee. Paris, France: Organization for Economic Co-Operation and Development (OECD); 2008.Google Scholar
118. Hawe, P, Degeling, DE, Hall, J, Brierley, A. Evaluating Health Promotion: A Health Worker’s Guide. Sydney, Australia: MacLennan & Petty; 1990: 254.Google Scholar
119. International Federation of Red Cross and Red Crescent Societies (IFRC). Project/Programme Monitoring and Evaluation (M & E) Guide. Geneva, Switzerland: International Federation of Red Cross and Red Crescent Societies (IFRC); 2011; Contract No.: 1000400 E 3,000 08/2011.Google Scholar
120. O’Neill, K. Evaluation Handbook. London, United Kingdom: Save the Children; 2012.Google Scholar
121. Buchanan-Smith, M, Cosgrave, J. Evaluation of Humanitarian Action. Pilot Guide. London, United Kingdom: The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP); 2103.Google Scholar
122. International Initiative for Impact Evaluation (3ie). Principles for Impact Evaluation 2016. http://www.3ieimpact.org/media/filer_public/2014/01/15/principles_for_impact_evaluation.pdf. Accessed June 2016.
123. World Bank Group. World Bank Group Impact Evaluations: Relevance and Effectiveness. Washington, DC USA: The World Bank Group; 2012; License: Creative Commons Attribution CC BY 3.0.
124. Program Performance and Evaluation Office (PPEO) - Program Evaluation. Atlanta, Georgia USA: Centers for Disease Control and Prevention (CDC); 2016. http://www.cdc.gov/eval/. Accessed June 2016.
125. AusAID. NGO Cooperation Program: Monitoring, Evaluation, and Learning Framework. Canberra, Australia: AusAID; 2012.
126. United Nations Evaluation Group (UNEG). UNEG Ethical Guidelines for Evaluation. Geneva, Switzerland: United Nations Evaluation Group (UNEG); 2008.
127. Australian Council for International Development (ACFID). Guidelines for Ethical Research and Evaluation in Development. 2015:33. https://acfid.asn.au/sites/site.acfid/files/resource_document/ethics-guidelines.pdf. Accessed June 2016.
128. Australian Evaluation Society Inc. (AES). Guidelines for the Ethical Conduct of Evaluations 2010. http://www.aes.asn.au/images/stories/files/About/Documents - ongoing/AES Guidlines10.pdf. Accessed June 2016.
129. Cochrane Database of Systematic Reviews (CDSR). The Cochrane Collaboration; 2015. http://community-archive.cochrane.org/editorial-and-publishing-policy-resource/cochrane-database-systematic-reviews-cdsr. Accessed June 2016.
130. The JBI Database of Systematic Reviews and Implementation Reports. Adelaide, Australia: Joanna Briggs Institute, University of Adelaide; 2016. http://joannabriggslibrary.org/index.php/jbisrir. Accessed June 2016.
131. Systematic Reviews: International Initiative for Impact Evaluation (3ie); 2016. http://www.3ieimpact.org/evidence/systematic-reviews/. Accessed June 2016.
132. Blanchet, K, Sistenich, V, Ramesh, A, et al. An Evidence Review of Research on Health Interventions in Humanitarian Crises. London, United Kingdom: London School of Hygiene and Tropical Medicine, Harvard School of Public Health, Overseas Development Institute (ODI) Enhancing Learning and Research for Humanitarian Assistance (ELHRA); 2015.
133. Gallardo, AR, Djalali, A, Foletti, M, et al. Core competencies in disaster management and humanitarian assistance: a systematic review. Disaster Med Public Health Prep. 2015;9(4):430-439.
134. Moslehi, S, Ardalan, A, Waugh, W, Tirone, DC, Akbarisari, A. Characteristics of an effective international humanitarian assistance: a systematic review. PLoS Curr. 2016;(8).
135. Al Thobaity, A, Williams, B, Plummer, V. A new scale for disaster nursing core competencies: development and psychometric testing. Australasian Emerg Nurs J. 2016;19(1):11-19.
136. Evidence Aid Priority Setting Group (EAPSG). Prioritization of themes and research questions for health outcomes in natural disasters, humanitarian crises, or other major health care emergencies. PLoS Curr. 2013;(1).
137. Olsen, K, O’Reilly, S. Evaluation Methodologies. Sheffield, England: International Organization Development (IOD PARC); 2011.
138. ALNAP Review of Humanitarian Action in 2003: Improving Monitoring to Enhance Accountability and Learning, Chapter 4 Meta-Evaluation. London, United Kingdom: Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP); 2003.
139. Groverman, V, Hartmans, J. Meta-Evaluation and Synthesis of the 2010 Pakistan Floods Response by SHO Participants: A Synthesis of Conclusions, Report Phase 2. Hague, The Netherlands: Cordaid; 2012.
140. Understanding Risk: Review of Open Source and Open Access Software Packages Available to Quantify Risk from Natural Hazards. Washington, DC USA: Global Facility for Disaster Reduction and Recovery (GFDRR); 2014.
141. Independent Evaluation Group (IEG), The World Bank Group. Evaluations. Washington, DC USA; 2014. http://ieg.worldbankgroup.org/webpage/evaluations. Accessed June 2016.
142. Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP). Humanitarian Evaluation and Learning Portal (HELP). London, United Kingdom: Overseas Development Institute (ODI); 2016. http://www.alnap.org/resources/. Accessed June 2016.
143. International Federation of Red Cross and Red Crescent Societies (IFRC). Evaluations. Geneva, Switzerland: International Federation of Red Cross and Red Crescent Societies (IFRC); 2016. http://www.ifrc.org/en/publications-and-reports/evaluations/. Accessed June 2016.
144. United Nations Children’s Fund (UNICEF). Evaluation and Research Database (ERD). United Nations Children’s Fund (UNICEF); 2014. http://www.unicef.org/evaldatabase/. Accessed June 2016.
145. International Initiative for Impact Evaluation (3ie). Impact Evaluations. International Initiative for Impact Evaluation (3ie); 2016. http://www.3ieimpact.org/en/evidence/impact-evaluations/. Accessed June 2016.
146. United Nations Office for Disaster Risk Reduction (UNISDR). 1st Information and Knowledge Management for Disaster Risk Reduction (IKM4DRR) Workshop: Final Report. Geneva, Switzerland: United Nations Office for Disaster Risk Reduction (UNISDR); 2013.
147. Archer, F. WADEM Section: Disaster Metrics - Disaster Health Evaluation. Madison, Wisconsin USA: World Association for Disaster and Emergency Medicine (WADEM); 2015.
148. Oliver-Smith, A. “What is a Disaster?” Anthropological Perspectives on a Persistent Question. In: Oliver-Smith A, Hoffman SM, (eds). The Angry Earth: Disaster in Anthropological Perspective. New York, USA: Routledge; 1999: 334.
149. Peek, LA, Sutton, JN. An exploratory comparison of disasters, riots, and terrorist acts. Disasters. 2003;27(4):319-335.
150. Kelly, C. Simplifying disasters: developing a model for complex non-linear events. Austral J Emerg Manag. 1999;14(1):25-27.
151. James, JJ. A rose by any other name. Disaster Med Public Health Prep. 2016;10(2):183-184.
Figure 1. Core Structure.
Figure 2. Baselines.
Figure 3. Consequences.
Figure 4. Outcomes.
Figure 5. Impact Evaluations.
Figure 6. Accountability.
Figure 7. Evaluation Standards and Evidence.
Figure 8. Disaster Evaluation Typologies: Comprehensive Framework.

Supplementary material (PDF):
Wong supplementary material 1 (2.6 MB)
Wong supplementary material 2 (2.6 MB)
Wong supplementary material 3 (2.6 MB)
Wong supplementary material 4 (2.6 MB)
Wong supplementary material 5 (2.5 MB)
Wong supplementary material 6 (2.5 MB)
Wong supplementary material 7 (2.5 MB)
Wong supplementary material 8 (2.8 MB)