
Evidence-based conservation and evidence-informed policy: a response to Adams & Sandbrook

Published online by Cambridge University Press: 19 July 2013

Neal Haddaway
Affiliation:
Centre for Evidence-Based Conservation, School of Environment, Natural Resources and Geography, Bangor University, Bangor, Gwynedd, LL57 2UW, UK
Andrew S. Pullin*
Affiliation:
Centre for Evidence-Based Conservation, School of Environment, Natural Resources and Geography, Bangor University, Bangor, Gwynedd, LL57 2UW, UK
*(Corresponding author) E-mail a.s.pullin@bangor.ac.uk

Type: Forum
Copyright © Fauna & Flora International 2013

Whilst giving general support to evidence-based conservation (EBC), Adams & Sandbrook (2013) raise a number of concerns about its development. We would like to respond to these concerns from the position of contributors to the Collaboration for Environmental Evidence (CEE), and as authors of systematic reviews and of the CEE guidelines on the conduct of systematic reviews in environmental management (CEE, 2013a). We recognize that there are other views of what EBC involves, but our focus in this response is on the process and conduct of evidence synthesis.

Much of the recent development of methodology and approach in EBC has not been reported in the peer-reviewed literature, and in this sense the methodological ‘straw person’ that Adams & Sandbrook construct presents an opportunity to raise awareness of how EBC has developed since its inception over a decade ago. EBC, as we understand it, seeks to collect and synthesize appropriate evidence to inform decision-making in practice and policy. It seeks to make the best available evidence accessible to decision-makers rather than to form a process of decision-making in itself. When seen in this context, many of the concerns raised by Adams & Sandbrook are not concerns about EBC but about its role in evidence-based policy. We respond to their main points in turn.

What counts as evidence?

In undertaking the process of evidence synthesis, ‘what counts as evidence’ is an important issue for any individual question, but for EBC in general any evidence can count, qualitative or quantitative. In our view what matters is fitness for purpose: what form of evidence is relevant to the question being addressed? As Adams & Sandbrook recognize, many questions in conservation require quantitative evidence because they are questions of quantitative impact or of the relative effectiveness of interventions. However, the concern that qualitative evidence may be inappropriately rejected or downgraded is unfounded in our experience. As in primary research, the question being asked will indicate the type of data or evidence required.

The authors state that ‘it is important to note that the formal research literature is not always correct, even when ideas are widely shared’. We could not agree more. It is surely evidence-based medicine that has been one of the most influential drivers in recognizing that not all peer-reviewed literature is reliable or correct (e.g. Chalmers et al., 1981). EBC is certainly drawing attention to this same limitation in the conservation science literature (e.g. Pullin & Knight, 2012). Rigorous critical appraisal of the merits (reliability and relevance) of individual studies is a cornerstone of systematic review. This is surely a benefit of EBC.

Also in this section the authors state that ‘the decisions taken by scientists undertaking evidence-based reviews in judging “good knowledge” are careful but also socially constructed and vulnerable to false certainties’. We fully agree. This is why systematic review methodology is so valuable in demanding objectivity and transparency in the conduct of reviews. Evidence-based approaches recognize the very problems that the authors raise and for that reason have sought to develop appropriate methodologies to minimize them (e.g. peer review and registration of review protocols). These problems are not confined to conservation, and evidence-based practice recognized them in different sectors of policy and practice before the recent development of EBC (Chalmers, 2003).

Adams & Sandbrook contend that proponents of EBC view individual knowledge (e.g. indigenous knowledge, expert opinion) as no more than myth. This has never been claimed, to our knowledge. Pullin & Salafsky (2010) call for more information to be recorded by practitioners. Similarly, Pullin et al. (2004) do not state that expert knowledge should be replaced by EBC, but rather that some traditional knowledge and expert opinion may suffer the same biases and confounders as individual primary research. The case for including local and expert knowledge is well made and, when appropriate, this is what EBC seeks to do. Most obviously this is done at the consultation stage during question formulation, when the CEE Guidelines advocate that all relevant stakeholders should be consulted (CEE, 2013a). Although this may not be perfectly achieved, the intent is there.

How does evidence count?

In the section ‘How does evidence count?’ Adams & Sandbrook state that ‘the evidence-based approach implies that it is possible to determine which interventions cause particular outcomes, and thus tune policy to maximize effectiveness’. We agree that it is not always possible to attribute outcomes to conservation interventions, but does this suggest we should simply carry on putting resources behind untested interventions? Some aspects of conservation are more complex than others and there are certainly limitations to a reductionist approach (Stewart et al., 2005). Just as in international development, we believe we need to test evidence-based approaches in more diverse situations and better understand their limitations. Considering which questions are suitable for systematic review is a constant issue, but we have learnt from past mistakes.

Also in this section, Adams & Sandbrook state that EBC ‘attempts to extend the social authority of experimental or observational science to the process of reviewing existing knowledge’. We sense there is some confusion here between the scientific process of evidence synthesis and the arena in which it might be conducted. Evidence synthesis reviews ‘available evidence’ pertinent to a question, not ‘existing knowledge’. Who frames the question, and who uses (or misuses) the outcome, are concerns in any aspect of science.

In their final section, Adams & Sandbrook, whilst recognizing that some forms of evidence are more appropriate than others, state that they ‘reject the notion of an evidence hierarchy that places quantitative and experimental studies at the top’ and that they favour ‘the adoption of a matrix or typology approach’. A central pillar of evidence-based approaches is the critical appraisal of evidence in the context of its susceptibility to bias. How this is best done depends upon the question being addressed (CEE, 2013a). Many hierarchies of methodology have been produced in the health and other sectors (e.g. Katrak et al., 2004; Vlayen et al., 2005; Crowe & Sheppard, 2011). Theoretical hierarchies are based on fundamental scientific principles and are meant as a guide (e.g. Pullin & Knight, 2003, for practitioners), not a rule, when assessing the internal validity of study designs. It is the responsibility of the authors of a systematic review to develop their own hierarchy and to defend it with respect to the question they are addressing. If there is a rule, it is that not all evidence is equally reliable. As we have already noted, all evidence can count, but every item of evidence must be assessed and weighted according to its reliability. We agree that specifically designed typologies for critical appraisal are necessary, and many examples exist in the systematic reviews available in the Environmental Evidence Library (CEE, 2013b).

In our view Adams & Sandbrook take a rather narrow view of EBC by considering its application to complex issues in conservation policy. We agree with many of the points suggesting that EBC has limitations when used in the context of complex conservation programmes with socio-economic considerations. However, the authors misrepresent the scope of EBC, and most of their emphasis is on the challenges of evidence-based policy and the use of scientific evidence in the context of policy formation. We are not clear how this fits with the rest of the article. We would like to reassure the authors that EBC does not have a view on how policy works, nor does it favour one model over another: it objectively seeks to provide the best available evidence to whoever wishes to use it (in policy, practice or management). We find interesting the contention that ‘The legitimacy attributed to evidence derived from formal science is a powerful influence on policy that can artificially depoliticize questions that should rightfully be subject to public deliberation’, but we do not find any evidence for this in EBC in the two references cited, and similarly with ‘It can also override the knowledge of others, in the process rendering mute their ability to express their rights and wishes’. These are dire warnings against any attempt to synthesize the best available evidence. Are ignorance or bias better options? Evidence can be used in the ‘wrong’ way, but does this mean that we should not collect it and seek to better understand the consequences of what we do? These are warnings about policy making, not about EBC.

Evidence-based conservation and evidence-informed policy

In their final section the authors call for the use of the term ‘evidence-informed’ rather than ‘evidence-based’ conservation. We agree that ‘evidence-informed’ is a useful term within the context of policy making. Indeed, it is already in common use (although probably less so in the peer-reviewed literature) and we thank the authors for raising awareness of the term. However, in our view both terms have legitimacy. The distinction is useful: the scientific process of systematic review and synthesis is evidence-based, but we aim for the policy-making process to be evidence-informed (although decision making may still be argued to be evidence-based in some areas of practice). In our view Adams & Sandbrook often focus on the policy process, which EBC only informs, to the exclusion of many other forms of decision-making in conservation. The authors confuse a critique of the evidence-based process (the process of synthesizing evidence with respect to a specific question) with their concerns about how the products of this process might be used (or misused) in policy. These are both interesting issues but very different in nature. We recognize that there are different perspectives on the conduct and application of EBC, but we hope that greater engagement will raise awareness of its basic aims. Conservationists intervene with the best of intentions, but the reality is that we are often uncertain whether those interventions will do more good than harm. We need rigorous evaluation of appropriate evidence to reduce that uncertainty and ensure that future decision making is better informed.

References

Adams, W.M. & Sandbrook, C. (2013) Conservation, evidence and policy. Oryx, 47, 329–335.
Chalmers, I. (2003) Trying to do more good than harm in policy and practice: the role of rigorous, transparent, up-to-date evaluations. The Annals of the American Academy of Political and Social Science, 589, 22–39.
Chalmers, T.C., Smith, H. Jr, Blackburn, B., Silverman, B., Schroeder, B., Reitman, D. & Ambroz, A. (1981) A method for assessing the quality of a randomized control trial. Controlled Clinical Trials, 2, 31–49.
CEE (Collaboration for Environmental Evidence) (2013a) Guidelines for Systematic Review and Evidence Synthesis in Environmental Management. Version 4.2. http://www.environmentalevidence.org/Documents/Guidelines.pdf [accessed 29 May 2013].
CEE (Collaboration for Environmental Evidence) (2013b) The Environmental Evidence Library. http://www.environmentalevidence.org/Library.html [accessed 29 May 2013].
Crowe, M. & Sheppard, L. (2011) A review of critical appraisal tools show they lack rigor: alternative tool structure is proposed. Journal of Clinical Epidemiology, 64, 79–89.
Katrak, P., Bialocerkowski, A.E., Massy-Westropp, N., Kumar, V.S.S. & Grimmer, K.A. (2004) A systematic review of the content of critical appraisal tools. BMC Medical Research Methodology, 4, 22.
Pullin, A.S. & Knight, T.M. (2003) Support for decision making in conservation practice: an evidence-based approach. Journal for Nature Conservation, 11, 83–90.
Pullin, A.S. & Knight, T.M. (2012) Science informing policy—a health warning for the environment. Environmental Evidence, 1, 15.
Pullin, A.S., Knight, T.M., Stone, D.A. & Charman, K. (2004) Do conservation managers use scientific evidence to support their decision-making? Biological Conservation, 119, 245–252.
Pullin, A.S. & Salafsky, N. (2010) Save the whales? Save the rainforest? Save the data! Conservation Biology, 24, 915–917.
Stewart, G.B., Coles, C.F. & Pullin, A.S. (2005) Applying evidence-based practice in conservation management: lessons from the first systematic review and dissemination projects. Biological Conservation, 126, 270–278.
Vlayen, J., Aertgeerts, B., Hannes, K., Sermeus, W. & Ramaekers, D. (2005) A systematic review of appraisal tools for clinical practice guidelines: multiple similarities and one common deficit. International Journal for Quality in Health Care, 17, 235–242.