Introduction
A growing literature makes the case for evidence-based conservation (e.g. Pullin & Knight, 2001, 2003; Sutherland et al., 2004). The approach has been adopted by a number of research groups and is now supported by at least two dedicated journals (Conservation Evidence and Environmental Evidence). Evidence-based conservation is established as a new narrative in conservation: a set of ideas that frame a particular way of thinking as a self-evidently correct solution to a standardized set of problems (Roe, 1991; Leach & Mearns, 1996; Adams, 2010).
The idea of evidence-based conservation reflects a broader engagement in evidence-based management and evidence-based policy. This has emerged across multiple sectors in countries such as the UK (Cabinet Office, 1999; House of Commons Science and Technology Committee, 2006) as an attempt to modernize the procedures of government and make decisions impartially and objectively, without political or value-based choices (Sanderson, 2002). In medicine in particular, rigorous, objective analysis of evidence has contributed to widespread improvements in medical outcomes (Petticrew & Roberts, 2003; Fazey et al., 2004). Evidence-based conservation initially sought to apply the methods of evidence-based medicine to conservation (Fazey et al., 2004). However, conservation differs from medicine in several ways (Fazey et al., 2004, 2006a; Stewart et al., 2005). For example, the social–ecological systems involved in conservation are far more complex than the human bodies that are the target of most medical interventions. In response to this challenge, evidence-based conservation has developed and diversified, and standard methodologies have been defined (e.g. Pullin & Stewart, 2006). However, our reading of the literature, and our experience of how the case for evidence-based conservation is articulated by professional conservationists in conferences, seminars and numerous informal conversations, suggest that as thinking about evidence in conservation has spread it has become somewhat formulaic in the type and sources of evidence used, and in the way evidence-based conservation frames policy debate.
We find this problematic, and in this paper we discuss our concerns.
We need to clarify several points. Firstly, this is not a systematic review of evidence-based conservation, although we mention as wide a range of studies as space allows. We draw extensively on our own experience of the approach, and on listening and talking to others. Secondly, our purpose is not to attack the use of evidence in conservation: we recognize that conservation decisions need to take account of available knowledge about the processes affecting biodiversity and biodiversity loss. However, we want to promote a broader discussion of the implications of evidence-based approaches in conservation. Such discussions have taken place in other complex areas of policy such as international development, sustainability and health (Greenhalgh & Russell, 2009; Elgert, 2010; du Toit, 2012; Hagen-Zanker et al., 2012). We focus on two particular questions. The first is ‘what counts as evidence?’, in response to which we discuss what is meant by evidence, and what kind of evidence is given credibility. The second is ‘how does evidence count?’, in which we explore the way in which policy decisions are informed by conservation evidence. In the following sections we explore these challenges in more detail, before concluding with a call for a transition from evidence-based conservation to evidence-informed conservation.
What counts as evidence?
A crucial question for all evidence-based policy is how to define evidence. This can be broken down into two sub-questions: what kind of information is considered as evidence, and what sources of information can provide such evidence?
Evidence-based conservation reviews tend to be dominated by quantitative information. For example, the Conservation Evidence website explicitly states that for a study to be included ‘its effects must have been monitored quantitatively’ (ConservationEvidence.com, 2013). Some papers that explain an evidence-based approach to conservation do explicitly recognize that different kinds of information can provide useful evidence, including qualitative data (e.g. Sutherland et al., 2004; Pullin et al., 2009), which are included in a number of recent systematic reviews for evidence-based conservation (e.g. Brooks et al., 2006; Waylen et al., 2010). However, even where qualitative data are used, they are conventionally analysed quantitatively, using numerical scores. Thus in their study of the effect of local institutions on conservation project outcomes, Waylen et al. (2010) identified 15 explanatory variables associated with project context and design: the ‘supportiveness’ of local institutions was scored on a three-level ordinal variable. This is a careful and thoughtful study but it uses a highly reductionist approach to qualitative data.
The focus on quantitative data and analysis may in part reflect the analytical training of scientists and an ingrained bias towards quantitative data on the grounds that they are believed to be more rigorous, testable and hence reliable. Writing on evidence-based conservation persistently implies that qualitative data are inferior to quantitative data, and particularly to experimental studies with appropriate controls (Pullin & Knight, 2001; Sutherland et al., 2004). For example, Stewart et al. (2005) stated that ‘a major concern must be that there will rarely be sufficient good quality evidence to enable a systematic review to draw robust conclusions through quantitative and statistical analysis’ (p. 276). We agree that there are certain questions that are best answered with such data. But there are many others that are best approached with qualitative methods, and which cannot be satisfactorily allocated to ordinal categories. For example, West (2005) used ethnographic methods to explain the complex reasons for failure of an integrated conservation and development project in Papua New Guinea.
What are appropriate sources of evidence for conservation? Evidence-based conservation reviews are dominated by the peer-reviewed academic literature, which itself is dominated by studies conducted by professional researchers. Attempts to marshal and review evidence rarely draw on knowledge that is informal and not recorded in web-searchable literature (e.g. Pullin & Salafsky, 2010; Segan et al., 2010). It is reasonable to assume that much of the evidence provided by the academic literature, whether quantitative or otherwise, is of a good standard, and procedures for evidence-based systematic reviews propose careful sifting of studies to ensure only those carried out with sufficient rigour are reviewed. However, it is important to note that the formal research literature is not always correct, even when ideas are widely shared. Thomas Kuhn (1962) described how scientists lock into particular forms of explanation before scientific results (and often maverick scientists) eventually overturn the paradigm. It is in the nature of science that scientists disagree, including in their interpretation of the same body of evidence. It is for this reason that, in discussing the role of scientific experts in advising policy-makers on risk, Stirling (2010) called for ‘a measured array of contrasting specialist views’ (p. 1030). There is an entire academic discipline, Science and Technology Studies, devoted to understanding the scientific process (Jasanoff et al., 1995) and demonstrating that the process of undertaking science cannot be separated from social and political processes of thinking and decision-making. The decisions taken by scientists undertaking evidence-based reviews in judging ‘good knowledge’ are careful but also socially constructed and vulnerable to false certainties.
What sources of evidence exist beyond the formal research literature, whether published or web-searchable grey literature? Two important, and sometimes overlapping, examples offer insights for policy: indigenous knowledge, and traditional or local knowledge.
Indigenous knowledge is both a practical and epistemological challenge to western science and its claims of privileged expert judgement (Berkes, 1999). Anthropologists have criticized the conventional practice of using western scientific rationality as the benchmark against which other types of knowledge should be evaluated (e.g. Watson-Verran & Turnbull, 1997), yet even by this narrow measure indigenous knowledge is recognized to make a potentially important contribution to conservation decision-making (Pilgrim & Pretty, 2010). Indigenous knowledge is particularly challenging for western science because it is frequently based on world-views that do not map readily onto those held by most professional scientists. The potential importance of local knowledge and expertise is increasingly being recognized by conservation scientists (e.g. Sheil & Lawrence, 2004; Fraser et al., 2006).
Traditional knowledge is defined by the International Council for Science as ‘a cumulative body of knowledge, know-how, practices and representations maintained and developed by peoples with extended histories of interaction with the natural environment’ (ICSU, 2002, p. 3). This category would include indigenous knowledge, but also knowledge held by a formally trained expert such as a protected area manager, or knowledge held by people who live and work in a place over time. It is knowledge derived from such personal experience that Pullin et al. (2004) suggested evidence-based approaches should replace. Thus Pullin & Salafsky (2010) wrote about the need for science to replace ‘myth and anecdote’ (p. 916), citing Sutherland et al. (2004) who worried that ‘much of conservation is… based on myths’ (p. 306). Traditional knowledge can indeed turn out to be mistaken when issues are subjected to formal scientific analysis. But it can also give deep insights into issues affecting a problem that may be missed by scientists with a superficial contextual understanding or short-term data. There is a parallel in medicine: Krska & Avery (2011) noted the power of direct reporting by patients of issues such as adverse reactions to drugs, even though such information is routinely dismissed by conventional healthcare professionals.
The critical point here is the nature of expertise (I. Fazey, pers. comm.). Expertise does not reside solely with scientists or professionals, even if the expertise of less qualified people is not recognized by certification (Collins & Evans, 2002). Local people may have acquired profound ‘practice-based’ environmental knowledge through prolonged observation and exposure (Ingold, 2000). Such experience-based experts have much to contribute to conservation investigations and debates (Collins & Evans, 2002), which is why it is important to ‘let local voices speak’ (Peterson et al., 2010, p. 9). Krueger et al. (2012) distinguished between experts and non-experts by the relevance and depth of their experience of a particular problem. Experts can be scientists or professional managers but also experienced members of the public. Fazey et al. (2006b) explored the importance of implicit experiential knowledge for wetland management but noted that it can be impossible to elicit quantitatively as it needs to be understood in the context of connected underlying values and assumptions. Thus both scientists and local lay people may be experts or novices with respect to particular problems: speaking a language is an expertise, yet in its home country everybody possesses it (Collins & Evans, 2007). Deciding whether the source is credible requires the judgement of the listener. Choosing what counts as evidence is a task that needs great care and strong contextual understanding.
Our own experience of conducting systematic reviews of the conservation literature has shown us how difficult it can be to incorporate and synthesize qualitative data and grey literature sources, let alone traditional and indigenous knowledge. The time and resources available force the reviewer to take practical decisions to limit the task, and only by setting tight criteria of acceptability is it possible to reduce the world of evidence to a small number of papers that can be read and from which tables can be compiled. Our concern is that much potentially valuable information is lost in this process because it is not legible to the technology of systematic review. We are not suggesting the need for better reduction or translation techniques but encourage an evidence-based conservation that promotes a pluralistic view of evidence, in which the outcomes of systematic reviews of formal literature are set alongside other views of particular issues, to allow decision-makers to develop policy that relates to a broad range of insights and conclusions (cf. Stirling, 2010).
How does evidence count?
Evidence-based policy calls for decisions to be based on evidence. It is a ‘policy about policy’ (du Toit, 2012, p. 2). This raises the question of how evidence, however defined, counts in decision-making processes. Writing on evidence-based conservation tends to present poor decision-making as the consequence of a fairly straightforward information deficit problem, which the methods of evidence-based conservation can address by providing information to decision-makers ‘in a usable format’ or delivered in ‘an integrated and accessible way’ (Pullin & Knight, 2003, pp. 84, 89). Some writing on evidence-based conservation acknowledges that this can be challenging. For example, Pullin et al. (2009) noted the difference between ‘broad holistic questions typically posed in policy formation and narrow reductionist questions that are susceptible to scientific method’ (p. 970). The relationship between evidence and policy has received a great deal of attention in the academic literature, particularly in the field of international development. This field resembles conservation in its complexity, uncertainty about the impact of policy decisions on outcomes, urgency, and the mission-driven nature of its related academic field of development studies (e.g. Roe et al., 2012).
The development literature raises some important further issues that we believe are relevant to evidence-based conservation. Firstly, the evidence-based approach implies that it is possible to determine which interventions cause particular outcomes, and thus tune policy to maximize effectiveness (du Toit, 2012). Thus Pullin et al. (2004, p. 245) noted ‘ideally, decisions should be based on effectiveness of actions in achieving the objectives as demonstrated by scientific experiment’. In other words, policy should be based on ‘what works’. As du Toit (2012) observed, the evidence-based approach can work where change is well understood and the system is small and susceptible to input–output analysis. So, in conservation, the approach is appropriate to problems that can be tightly specified (e.g. pest control, fire management or the location of bird nest-boxes). It is less easy to apply satisfactorily to the multidimensional factors and context specificity that characterize other kinds of conservation project (e.g. attempts to reduce the poverty of people living adjacent to protected areas through alternative livelihood projects). Sanderson (2002) was sceptical about the feasibility of following policy interventions through to outcomes. In the introduction to a recent book on evidence-based conservation in the Lower Mekong, Sunderland et al. (2012, p. 4) stated that ‘we expected to be able to develop sets of simple metrics that would enable us to make statements about the conservation and development performance of projects. However, all of the projects that we describe… operate in the complex, messy, real world where even obtaining clarity on shared goals among such diverse stakeholders is difficult’. Pullin et al. (2009) proposed a framework for breaking up large problems into interventions that can be addressed piecemeal through evidence-based reviews.
This may be effective in some cases but many conservation problems are so complex and emergent that, as with development policy, this reductionist approach runs the risk of confusing policy-makers by disguising the politics of decisions in a fog of apparently technical issues (Ferguson, 1990; Büscher, 2010).
Secondly, evidence-based policy tends to support a linear model of policy-making, in which good information fed in at one end leads to good decisions at the other (Greenhalgh & Russell, 2009). In this view, policy-making comprises ‘a series of technical steps’: the selection, synthesis and critical evaluation of the best research evidence allows the best policy to be selected (Greenhalgh & Russell, 2009, p. 308). Du Toit (2012) argued that in this sense evidence-based policy reflects ‘a narrow and technocentrist understanding of what is involved in policy-making’ (p. 8). Reality is much more complex and messy (Keeley & Scoones, 2003). Most conservation decisions (like those in development) are not made through a process whose effectiveness is controlled by the supply of expert information but are highly political processes in which different actors struggle to influence outcomes (Sabatier & Jenkins-Smith, 1993; Hajer, 1995; Pretty, 2002; Keeley & Scoones, 2003). As Greenhalgh & Russell (2009) observed, policy-making is not a matter of applying objective evidence to problems that exist ‘out there’ in some predetermined form, it is ‘about constructing these problems through negotiation and deliberation’, making ‘context-sensitive choices in the face of persistent uncertainty and competing values’ (p. 315).
Thirdly, evidence-based policy presents itself as neutral, when in fact it is a political project that promotes particular forms of evidence and processes of policy-making. Evidence is never neutral (it never ‘speaks for itself’, du Toit, 2012, p. 4) because both science and policy-making are shaped by discursive practices that allow particular observations, findings or records to count as evidence. Policy debates do not happen in a political vacuum where scientific consensus can be teased out. Policy options are expressed through the construction of narratives that ‘frame’ how facts are understood (Roe, 1991; Sabatier & Jenkins-Smith, 1993; Hajer, 1995; du Toit, 2012). This framing determines what counts as evidence (see above), but also what this evidence means, who is involved in talking about the evidence, and how information about the evidence is communicated (du Toit, 2012).
Formal science and the knowledge it generates have great power as a legitimizing force, and formal science is often used as a voice of authority in environmental policy (Keeley & Scoones, 2003; Dickson & Adams, 2009). Evidence-based conservation attempts to extend the social authority of experimental or observational science to the process of reviewing existing knowledge. By its procedures, it privileges good scientific data, and the ideas and framing of those who create it. The legitimacy attributed to evidence derived from formal science is a powerful influence on policy that can artificially depoliticize questions that should rightfully be subject to public deliberation (Büscher, 2010; Elgert, 2010). It can also override the knowledge of others, in the process rendering mute their ability to express their rights and wishes. This power asymmetry can be particularly significant in developing countries where governance is weak and the opportunity for informed public debate is limited (Brosius, 1999; Bryant, 2002; Fairhead & Leach, 2003).
From evidence-based conservation to evidence-informed conservation?
We have identified two sets of challenges relating to evidence-based conservation: the type and sources of evidence used, and the way evidence-based conservation frames policy debate. These suggest to us the need to think further about the practice of evidence-based conservation and its relation to policy.
Firstly, we would like to see the view of what constitutes useful evidence for conservation broaden to give more space for local and indigenous knowledge and for qualitative data. The integrated experience of individuals, or what Sanderson (2002, p. 71) called ‘practical wisdom’, has under-recognized potential to contribute to understanding conservation problems. This is a pressing issue, particularly in the context of the newly established Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, which will need to establish procedures for dealing with information derived from a range of different sources (Tengö et al., 2011; Turnhout et al., 2012). This is not an impossible challenge: Raymond et al. (2010) suggested a framework for integrating local and scientific knowledge, and we encourage its adoption. We recognize that certain forms of evidence will be more appropriate than others to answer specific questions but we reject the notion of an evidence hierarchy that places quantitative and experimental studies at the top. We are concerned that in conservation the scientific training of most practitioners (and their lack of training in other forms of enquiry) leaves them vulnerable to a bias in precisely this direction. Rather, we see merits in the adoption of a matrix or typology approach that helps to identify the strengths and weaknesses of different evidence for particular problems (Petticrew & Roberts, 2003). This approach has been applied in medicine, where it has been found particularly useful in the case of social and public health actions that do not fit neatly into the category of treatment–response interventions that are amenable to experimental and quantitative study.
Many (arguably all) conservation problems are social in nature, and we find the case for this broader approach to evaluating conservation evidence convincing. There are clearly technical challenges to incorporating indigenous and local knowledge into systematic reviews and conservation practice (Raymond et al., 2010) but the importance for conservation of ‘letting the locals lead’ (Smith et al., 2009) and of ‘social learning’ (Gardner, 2012) is widely noted: this sensibility needs to extend to evidence-based conservation.
Secondly, we would like to see a more informed understanding of how policy-making works and of the proper place of formal science reviews within it. Scientific procedures do not offer a ‘get out of politics free’ card. This is clear from cases in which the scientific evidence is unequivocal yet politicians develop policies that go against it, as in the case of the UK badger cull (Observer, 2012). Decision-makers can also excuse delays by alleging that evidence is insufficient; for example, in the delayed designation of Marine Conservation Zones in UK waters (despite a lengthy stakeholder-led planning process that recommended 127 sites in 2011) because of a late government decision that there were still ‘gaps and limitations’ in scientific evidence (Guardian, 2011). As Collins & Evans (2007) noted, ‘science, if it can deliver truth, cannot deliver it at the speed of politics’ (p. 1).
In many cases, scientific evidence does not support clear-cut conclusions. As Stirling (2010) suggested, scientific uncertainties and differences of view need to be clearly set out for decision-makers so that they can make sophisticated judgements. Evidence should be seen as something that informs policy-makers about options and uncertainties. This is particularly pertinent to conservation problems that are messy and complex, and where there is often disagreement about what conservation is trying to achieve: the baseline against which questions about ‘what works’ can be asked. In medicine this is relatively simple: the goals of medicine are clearly defined and shared by medical professionals. Despite some calls for it (Child, 2009), there is no equivalent to the Hippocratic Oath for ecosystem managers or conservationists and there is no universally shared set of conservation values (Sandbrook et al., 2010).
Many decisions are, and should be, deliberative, and not based in any automatic way on scientific evidence. As Shaxson (2005, p. 102) noted in the context of UK government policy, ‘evidence is a necessary, but not a sufficient, condition for any decision-making process’: the evidence base is dynamic, and good policy results from the good use of evidence as well as from the quality of the evidence itself. Formal scientific data are often invaluable in reaching conservation decisions but equally often are not sufficient. We need to recognize that decision-makers will decide for themselves which forms and sources of evidence are appropriate. For this reason we are dubious about the idea proposed by Segan et al. (2011) that a dedicated organization should be created to evaluate conservation evidence and provide guidance to decision-makers, based on the model of the UK's National Institute for Health and Clinical Excellence.
The changes in evidence-based conservation that we propose build on developing practice. A shift of language to evidence-informed policy has been advocated in fields such as social policy, for some of the reasons we have enumerated here (e.g. Nevo & Slonim-Nevo, 2011). Conservation could usefully follow this lead. The change is subtle, and reflects the obvious importance of the idea that conservation decisions should always be informed by the best information available. At the same time, it has profound implications in that it calls for recognition that conservation science is one source of information among many for decision-makers.
Acknowledgements
We are grateful to many friends, colleagues and students with whom we have discussed evidence-based conservation, inside and outside seminar rooms, particularly Nigel Leader-Williams, Bhaskar Vira, Bill Sutherland and Ioan Fazey. We are grateful to Dilys Roe and two referees for comments on drafts.
Biographical sketches
Chris Sandbrook conducts research on trade-offs between ecosystem services at the landscape scale in developing countries, and on the role of evidence and personal values in shaping the actions of conservation organizations. He also helps to run the Masters in Conservation Leadership at the University of Cambridge, a professional development degree for conservationists with leadership potential from around the world. Bill Adams is interested in past, present and future changes in the history and development of conservation policy, and is currently studying the institutional politics of landscape scale conservation, and the interactions between synthetic biology and conservation.