Introduction
Case studies are a common method in social science, but the specific context of research on conflict zones makes them significantly more challenging than in many other sub-disciplines.Footnote 1 This applies to many different geographic regions, and it is also clearly evident in the post-Soviet space, where researchers grapple with various risks and uncertainties—from the protracted conflicts in the region’s de-facto states to the Central Asian borderlands with Afghanistan and China (Menon Reference Menon2003; Weitz Reference Weitz2004), and to the volatile republics of the Russian North Caucasus (Souleimanov Reference Souleimanov2015; Ware et al. Reference Ware, Kisriev, Patzelt and Roericht2002; Zhirukhina Reference Zhirukhina2018). Given the relevance of the post-Soviet space to some of the most pressing national and international security challenges today,Footnote 2 well-conducted case studies in this field can, and should, go beyond Theda Skocpol’s (Reference Skocpol, Rueschemeyer and Mahoney2003) “doubly engaged social science,” namely not only “to understand real-world transformations” and to contribute to “scholarly debates about causal hypotheses, theoretical frameworks, and optimal methods of empirical investigation,” but also to use the enhanced knowledge and understanding thus gained to offer insights into how these real-world transformations can be managed more effectively for better outcomes (409).
This very possibility of policy uptake and impact presents an obvious opportunity for social scientists, but it puts the challenges of research into even starker relief. How we know takes on a very different quality of both obligation and responsibility if what we know can shape the outcomes of peace negotiations, decisions to intervene militarily in foreign conflicts, or commitments of funds to humanitarian relief efforts. In other words, a key challenge for researchers is to assure their readers, and potential users, that the causal claims they make are robust.
How we know and what we know are as much theoretical and empirical issues as they are methodological ones. Case studies based on conflict zones often involve significant levels of fieldwork in a context that is not always conducive to this approach. Collecting data on conflict zones, in which a multitude of actors relate to each other in highly dynamic contexts, and on topics that are often politically sensitive and emotionally charged, poses significant challenges. These challenges are not unique to conflict settings, but they are more acute there.Footnote 3 In my experience, they include the fact that data is often relatively limited and its accuracy not always beyond doubt. Sources may be difficult to identify and to access, and their credibility is at times questionable. Moreover, even where interlocutors are willing to share information, they may be exposed to retribution, and researchers are also potentially at risk. Therefore, while there is hardly such a thing as perfect data, data on conflict zones is often even less perfect than usual. This has follow-on effects for both data analysis methods and the robustness and generalisability of any causal inferences drawn, and may limit the ability of researchers to offer credible policy recommendations.
The methodological implications and potential work-arounds of these issues are often neglected in standard methods texts.Footnote 4 Perhaps the best book-length treatment to date is an edited collection on Research Methods in Conflict Settings (Mazurana, Jacobsen, and Andrews Gale Reference Mazurana, Jacobsen and Gale2013) that offers some practical observations on a number of the issues noted above. Several articles address specific aspects of research on conflict zones: some offer practical advice on how to “survive” as a researcher (Kovats-Bernat Reference Kovats-Bernat2002; Wood Reference Wood2006; Greenwald Reference Greenwald2019; Knott Reference Knott2019), while others deal with specific methodological implications of such fieldwork (Fujii Reference Fujii2010; Malthaner Reference Malthaner and della Porta2014; Desrosiers and Vucetic Reference Desrosiers and Vucetic2018; Knott Reference Knott2015).
I seek to add to this body of scholarship by arguing that the fundamental task for researchers is to “align” their theories, methods, and empirics in a way that is logically sound and transparent and that increases the confidence of other scholars and policy makers in the robustness of their findings. Focusing on process tracing—“the central within-case method” (Bennett and Checkel Reference Checkel, Bennett, Bennett and Checkel2014, 4)—I consider which theoretically grounded and empirically detailed methodological solutions can mitigate the challenges that fieldwork-based case studies pose to the rigour and integrity of research on conflict zones, by placing methodological considerations into their relevant theoretical and empirical context.Footnote 5
There is no single logic per se that would apply to all conflict zones at all times. Rather, researchers are required to demonstrate the internal coherence of their argument with whichever logic they seek to establish in relation to their research question. As I show below in the two illustrations of my approach in relation to the conflict in Donbas, one can investigate and explain, for example, both a logic of escalation of conflict (and of settlement demands) and a logic of the emergence of new de facto states; these two logics are simultaneously at work and are neither mutually exclusive nor contradictory (which may, however, be the case for other logics and/or in other conflict contexts).
Given my own disciplinary background and research interests, the following discussion is informed by implicit and explicit disciplinary standards of qualitative political science and international relations. In my own work, and in the illustrative examples below, cases are usually violent conflicts with specific temporal and spatial dimensions and delimitations. I use cases to understand, for example, logics of escalation and de-escalation, of confidence building, or of conflict settlement or its absence. I use process tracing in line with Bennett and Checkel (Reference Bennett, Checkel, Bennett and Checkel2014a, 7) as a method to establish what “processes, sequences, and conjunctures of events within a case…might causally explain the case.” I seek such understanding of cases and the wider phenomena they represent through analysis of the actions and reactions of individuals and their consequences, while acknowledging that these individuals act within the constraints of their own social and material contexts. Consequently, these decision makers, at different levels of analysis from the local, to the state, to the regional, and the global, are an important source of data, albeit not the only one.Footnote 6 Rather, as also illustrated in the examples below, I make use of triangulation across and within multiple types of data to establish how different points in the evidence chain are connected and create a causal pathway to an observed outcome.Footnote 7
In order to accomplish this task, I first outline my assumptions about process tracing and its application to research on conflict zones. I then discuss, in more detail, data requirements, data collection, and data analysis, before illustrating these considerations with examples from a research project on the war in and over the Donbas region in eastern Ukraine. I conclude with a brief summary of the main argument and some general observations on likely trajectories of case study research that relies, in part, on fieldwork in conflict zones in the post-Soviet space and beyond.
Ontological and Epistemological Underpinnings of the Case Study Approach
As has been discussed at length in standard methodological texts (George and Bennett Reference George and Bennett2005; Gerring Reference Gerring2006; Levy Reference Levy2008; della Porta Reference della Porta, della Porta and Keating2008; Vennesson Reference Vennesson, della Porta and Keating2008; Yin Reference Yin2014), case studies have their distinct use in the repertoire of social scientists, contributing to both the (inductive) generation and (deductive) testing of theories and to the deeper understanding of particular instances of a given phenomenon. They are particularly useful for the identification and specification of causal mechanisms, and therefore they also have a critical role to play in multi-method research. These assumptions have specific ontological and epistemological underpinnings which relate to, and are reinforced by, the research context in which they are applied.
If ontology is about whether a social and political (as opposed to natural or physical) reality exists and can be discovered, we can think of a purely objectivist view (i.e., there is a social and political reality that can be discovered) and one that infuses such an objectivist perspective with a degree of subjectivism (i.e., social and political reality exists but cannot be discovered independently of human subjectivity). Thus, ontology implies particular assumptions about the kind of causal relationships an inquiry is to uncover, and as such, requires methodologies that are appropriate for that purpose (Hall Reference Hall, Rueschemeyer and Mahoney2003, 374). Epistemologically, the issue is about what forms of knowledge are possible about this social and political reality: natural-science like causal or covering laws (or in a softer version, probabilistic laws), or highly context-dependent understandings with limited generalisability.
Case study research on conflict zones tends to be on the interpretivist side of these ontological and epistemological divides. Ontologically, there is little point in either denying the objective existence of the social and political reality of conflict or in not accepting that, however objective it is, we discover it through anything else but our own subjectively informed perspectives (see, for example, Malejacq and Mukhopadhyay Reference Malejacq and Mukhopadhyay2016; J. H. Cohen Reference Cohen2000). Epistemologically, I cannot think of a more fitting conceptualisation of the knowledge that case studies produce than that by Charles Ragin (Reference Ragin1987, 27), namely that few social and political phenomena have a single cause; that causes do not operate in isolation but that it is their combined effect that matters; and that the impact of causes may differ according to the context in which they operate. The latter point is particularly important. Ragin specifically referred to the fact that a condition may be “an essential part of several causal combinations both in its presence and absence state” (Ragin Reference Ragin1987, 27), but one could equally add here considerations of different magnitudes and/or sequences in which conditions occur in different contexts (Hall Reference Hall, Rueschemeyer and Mahoney2003, 385).
Just consider in this context the complexity of many contemporary conflict situations. In so-called “blended conflicts,”Footnote 8 multiple actors and alliances of actors on the ground and beyond are in constant flux and contextually variable, not least because their agendas differ from local to global aspirations with punctual but unsustainable overlap. Geopolitical aspirations of regional and great powers interact with domestic elites, who are concerned about the sovereignty and territorial integrity of existing states that are challenged by other, at times transnational, actors expressing grievances couched in the language of human rights and self-determination. Often added into such local contexts, of fragile states with weak institutions that are unable to provide security and other basic public goods, are transnational organised criminal networks and ideologically or religiously motivated terrorist organisations. The actions and interactions of these actors are neither cost nor consequence-free to themselves or each other, and they are also conditioned by the social, political, cultural, and economic structures in which they operate—something that will become more obvious in the empirical illustrations that follow.
Ragin’s emphasis on “multiple and conjunctural causation” highlights one particular way in which we can think about causation beyond the Humean paradigm of relations of regularity between observable variables (Kurki Reference Kurki2008), which prizes a degree of parsimony ill-suited to the ontological and epistemological assumptions underpinning much of contemporary research on conflict zones. Theories positing multiple and conjunctural causation often require in-depth reconstruction of the process in which these multiple (combinations of) causes interact to produce an expected outcome, precisely because the theorised interaction effects tend to be too complex to be captured by statistical models. As Hall (Reference Hall, Rueschemeyer and Mahoney2003) put it, “observations bearing on a theory’s predictions about the process whereby an outcome is caused provide as relevant a test of that theory as predictions about the correspondence between a small number of causal variables and the outcomes they are said to produce” (393).
Such ontologically and epistemologically grounded thinking about causation not only provides a justification for the utility of case studies in social science theorising, but it also helps in identifying suitable methods of data collection and analysis because it requires data of a particular quantity and quality. Accordingly, in order to make valid claims about cause-and-effect relationships, we need to reconstruct the process of interactions between potentially multiple combinations of causes.
Process Tracing and Approaches to Data Collection and Data Analysis in Fieldwork-Based Case Studies on Conflict Zones
Process Tracing and the Debate over Robust Quality Standards for Causal Inference
Process tracing is widely considered to be the predominant method of within-case research. Bennett and Checkel define “process tracing as the analysis of evidence on processes, sequences, and conjunctures of events within a case for the purposes of either developing or testing hypotheses about causal mechanisms that might causally explain the case” (Reference Bennett, Checkel, Bennett and Checkel2014a, 7). This definition covers, albeit not always perfectly, a range of similar methods, including comparative historical analysis (Mahoney and Rueschemeyer Reference Mahoney and Rueschemeyer2003; Mahoney and Thelen Reference Mahoney and Thelen2015; and for applications to post-communism Chen and Sil Reference Chen and Sil2007; Shcherbak Reference Shcherbak2015; Tesser Reference Tesser2019), causal-process observations (Collier, Brady, and Seawright Reference Collier, Brady and Seawright2010), systematic process analysis (Hall Reference Hall, Rueschemeyer and Mahoney2003), pathway analysis (Weller and Barnes Reference Weller and Barnes2014), and the analytic narrative approach (Levi Reference Levi, Shapiro, Smith and Masoud2004). They are all concerned with uncovering causal mechanisms that link presumptive causes with hypothesised effects, thereby acknowledging that such “mechanisms are ultimately unobservable, but our hypotheses about them generate observable and testable implications” (Bennett and Checkel Reference Bennett, Checkel, Bennett and Checkel2014a, 12). This link between theory and methodology is a critical one, and I will return to it in more detail below.
Prior to that, it is worthwhile to engage with the debate on quality standards of process tracing. There are two dimensions to this. First, there is the debate on whether process tracing allows any kind of valid causal inference and against which standard such validity should be measured. This is an ongoing and as yet inconclusive debate between “quantitative” and “qualitative” social scientists.Footnote 9 It is beyond the scope and purpose of this discussion to revisit it here in any meaningful way.
The second debate is one among process-tracing scholars, perhaps best captured in an edited collection by Andrew Bennett and Jeffrey Checkel, entitled Process Tracing: From Metaphor to Analytical Tool (Reference Bennett and Checkel2014b). Based on a discussion of “ten best practices” for process tracing in general, suggested by Bennett and Checkel (Reference Bennett, Checkel, Bennett and Checkel2014a), contributors to the volume offer their own ideas on appropriate quality standards. Among them, Lyall (Reference Lyall, Bennett and Checkel2014) suggests “four additional process-tracing best practices that can help researchers avoid ‘just-so’ stories when exploring civil war dynamics” (191). These include “(1) identifying counterfactual (‘control’) observations to help isolate causal processes and effects; (2) creating ‘elaborate’ theories where congruence across multiple primary indicators and auxiliary measures (‘clues’) is used to assess the relative performance of competing explanations; (3) using process tracing to understand the nature of treatment assignment and possible threats to causal inference; and (4) ‘out-of-sample testing’” (191). Again, note the importance accorded to the role of theory and the ruling out of rival mechanisms.
Schimmelfennig introduces the notion of “efficient process tracing” which “starts from a causal relationship provisionally established through correlation, comparative, or congruence analysis and from a causal mechanism that is specified ex ante; it selects cases that promise external validity in addition to the internal validity established by process tracing; and it confines itself to analyzing those process links that are crucial for an explanation and for discriminating between alternative explanations” (Schimmelfennig Reference Schimmelfennig, Bennett and Checkel2014, 100-101). Of specific interest to my argument here, Schimmelfennig (Reference Schimmelfennig, Bennett and Checkel2014) emphasises that “process tracing should be based on causal mechanisms that are derived ex ante from theories and follow a basic analytical template […] Such causal mechanisms tell us what to look for in a causal process rather than inducing us to make up a ‘just so’ story of our own” (105). Equally important is an emphasis on using process tracing to eliminate competing theories and the mechanisms they propose.
Waldner, in turn, proposes a so-called completeness standard based on the assumption that
[p]rocess tracing yields causal and explanatory adequacy insofar as: (1) it is based on a causal graph whose individual nodes are connected in such a way that they are jointly sufficient for the outcome; (2) it is also based on an event-history map that establishes valid correspondence between the events in each particular case study and the nodes in the causal graph; (3) theoretical statements about causal mechanisms link the nodes in the causal graph to their descendants and the empirics of the case studies allow us to infer that the events were in actuality generated by the relevant mechanisms; and (4) rival explanations have been credibly eliminated, by direct hypothesis testing or by demonstrating that they cannot satisfy the first three criteria listed above.
(Waldner Reference Waldner, Bennett and Checkel2014, 128; see also Waldner Reference Waldner2015)

Again, there is a strong emphasis on the importance of theory and the need not simply to prove the presence and operation of one particular causal mechanism, but also to rule out alternative explanations.
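One way to picture the first two of these criteria is as simple bookkeeping between a causal graph and an event-history map. The sketch below (in Python, with entirely hypothetical node and event labels not drawn from any particular case) illustrates such a correspondence check; it is only a record-keeping aid, since the substantive work lies in the theory that specifies the mechanisms linking the nodes and in credibly eliminating rival explanations.

```python
# Minimal sketch (hypothetical node and event labels): Waldner's first two
# criteria read as a correspondence check between the nodes of a causal graph
# and the documented events of a case's event-history map.

causal_graph = ["political_exclusion", "mass_mobilisation", "violent_escalation"]

event_history = {
    "political_exclusion": ["language law repealed", "regional elites sidelined"],
    "mass_mobilisation": ["protest wave in regional capitals"],
    "violent_escalation": [],  # no documented events yet, so correspondence fails here
}

def correspondence_gaps(graph, history):
    """Return the nodes of the causal graph not (yet) backed by documented events."""
    return [node for node in graph if not history.get(node)]

gaps = correspondence_gaps(causal_graph, event_history)
if gaps:
    print("Event-history correspondence incomplete for:", gaps)
else:
    print("Every node in the causal graph is backed by documented case events.")
```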
This focus on the importance of theory and the need to engage with rival causal claims is one that resonates well with more generally recommended practices for case study research. George and Bennett in their classic text on Case Studies and Theory Development in the Social Sciences remind us that “standardized, general questions [asked] of each case, even in single case studies … must be carefully developed to reflect the research objective and theoretical focus of the inquiry” (Reference George and Bennett2005, 69) and that “[t]he plausibility of an explanation is enhanced to the extent that alternative explanations are considered and found to be less consistent with the data, or less supportable by available generalizations” (91).
A final aspect of process tracing standards relates to transparency. From the perspective of demonstrating the robustness of causal inferences drawn from process tracing, Checkel and Bennett (Reference Checkel, Bennett, Bennett and Checkel2014) assert that “[t]he central goal [of transparency] is to facilitate open scholarly contestation about the probative value of qualitative evidence” (264). This, in turn, reflects an earlier point made by King, Keohane and Verba in their influential Designing Social Inquiry, namely that “the most important rule for all data collection is to report how the data were created and how we came to possess them” (Reference King, Keohane and Verba1994, 27). In addition, it also demonstrates that qualitative researchers in general, and those practicing process tracing in particular, have embraced the three principles of data access and research transparency (DA-RT)—data access, production transparency, and analytic transparency—as elaborated in the Guide to Professional Ethics in Political Science (American Political Science Association 2012, 9–10) and further specified, among others, by Elman and Kapiszewski (Reference Elman and Kapiszewski2014) and Kapiszewski and Kirilova (Reference Kapiszewski and Kirilova2014). Many classical examples of transparent process tracing pre-date these debates (see for example discussions in Wood Reference Wood, Mazurana, Jacobsen and Gale2013; Fujii Reference Fujii2010; Barakat and Ellis Reference Barakat and Ellis1996).Footnote 10
Related to this issue of transparency is one of the researcher’s own positionality. While there is consensus in the literature that one’s own position towards one’s research needs to be acknowledged and reflected upon, research on conflict zones represents a specific set of circumstances with particular consequences.Footnote 11 While I explore these in more detail in relation to the examples below, some more general points are worth noting as well. The position that a researcher takes vis-à-vis a conflict and its multiple actors is often presumed to be known (e.g., from past writings and/or presentations) or assumed (e.g., by inferring it from the researcher’s background, where he or she was educated, is based, or receives funding from). Regardless of whether these perceptions are accurate, they can shape researchers’ ability to access sourcesFootnote 12 and they may determine, for example, what information interlocutors share with them and to what extent data can be trusted. With their consequent impact on data availability, these perceptions have a clear influence on researchers’ ability to establish what “processes, sequences, and conjunctures of events within a case (…) might causally explain the case” (Bennett and Checkel Reference Bennett, Checkel, Bennett and Checkel2014a, 7), and thus on the robustness of any causal claims. Such constraints, which need to be acknowledged, can occasionally be circumvented at the stage of case selection prior to commencing a project and/or by adding additional cases to a project. Where neither is possible, methods of triangulation can at least address issues of data credibility. Alternatively, the use and limitations of research brokers in conflict zones have been discussed extensively as a way of mediating and obtaining access to sources of varying kinds.Footnote 13 Where face-to-face contact is not feasible—for example because the association between a researcher and a source may have negative consequences for one or both of them or because access to the source’s physical location is not safely possible—interviews could be conducted online or by email or by using a locally-based interviewer instead.
Reflection on one’s own position as a researcher on conflict zones is thus essential to ensure that its consequences are properly mitigated. Being transparent about both positionality and how the constraints that it imposed were mitigated is critical in order to allow others, including potential research users, to draw their own independent conclusions on the robustness of any causal claims made on the basis of research conducted in such specific circumstances.
Adopting or Adapting? Quality Standards for Process Tracing in Research on Conflict Zones
From the discussion in the preceding section, three broad principles for quality standards in process tracing have been deduced: the need for a theory-guided inquiry, the necessity to enhance causal inference by paying attention to (and ruling out) rival explanations, and the importance of transparency in the design and execution of research. All three of these principles are closely linked to issues of data collection and data analysis; any discussion of these needs to be based on an appreciation of data requirements, which, in turn, depend on both the questions asked and/or any hypotheses to be tested, and an ontologically and epistemologically grounded choice of method.Footnote 14 In the context of process tracing, the kinds of data that provide the necessary richness of empirical detail are usually found in a variety of sources, including, amongst others, a mix of interviews and surveys (with policy practitioners, observers, journalists, analysts, and academic experts), policy documents, laws, archival materials, contemporaneous media accounts, grey literature, (academic) secondary literature, and (auto-) biographies and diaries of relevant actors.Footnote 15
Access to these sources on conflict zones is challenging, and thus poses quantity problems, while the data that can be obtained from such sources is not necessarily of appropriate quality. In turn, then, the ability of a researcher to reconstruct any causal process, let alone one comprised of interactions between multiple combinations of causes, is potentially constrained, and with it any broader theoretical insights and policy recommendations. It is nevertheless possible, through the careful and purposeful application of appropriate research methods within well-conceived research designs grounded in appropriate theory, to make robust inferences about causal processes on conflict zones and to derive appropriate policy recommendations.
Such purposeful application begins with a recognition that data collection and data analysis are at the heart of any research project, but also that they do not exist in a vacuum. The context in which they operate has both internal and external dimensions—internal as related to the overall research design, external as related to the environment in which the research is carried out. With reference to fieldwork-based case studies on conflict zones, the internal dimensions are somewhat more generic than the external ones, as they apply to most case study research. In line with the illustrations that follow, I am starting from the assumption that research is driven by interest in a specific case; that is, the aim is to understand the case or resolve a concrete (case) puzzle that is of significance from a case-specific and/or broader policy perspective.Footnote 16
To begin with data analysis, an effects-of-causes research design, for example, would allow for a more deductive approach, being based on a clearly formulated hypothesis that a cause X results in an effect Y, and then “inferring systematically how much a cause contributes on average to an outcome within a given population” (Bennett and Elman Reference Bennett and Elman2006, 262). Data analysis could then initially be based on co-variation (i.e., demonstrating that changes in X lead to [proportional] changes in Y [see Schimmelfennig Reference Schimmelfennig, Bennett and Checkel2014]). For example, the absence of X would need to “produce” the absence of Y, and a large change in X would lead to a similarly large change in Y. The direction of such change, however, need not necessarily be the same. For example, if the question was related to the causes of conflict and the hypothesis was that systematic and sustained political exclusion significantly increases the likelihood of conflict, we would expect X and Y to move in the same direction; in other words, high levels of political exclusion co-vary with high conflict likelihood. By contrast, if the question was one about conflict prevention and the hypothesis was that political inclusion had a conflict-preventing effect, we would expect that high levels of inclusion would result in low levels of conflict likelihood.
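To make this initial co-variation test concrete, the following sketch (in Python, using invented case labels and figures rather than real data) compares conflict rates under high and low levels of political exclusion across a handful of hypothetical observations; a pattern in the expected direction would be no more than a first plausibility check to be followed up by process tracing.

```python
# Minimal sketch of an initial co-variation check (all figures hypothetical).
# Each observation pairs a level of political exclusion with whether conflict
# onset was observed, across comparable cases or episodes within a case.

observations = [
    {"case": "A", "exclusion": "high", "conflict": True},
    {"case": "B", "exclusion": "high", "conflict": True},
    {"case": "C", "exclusion": "low",  "conflict": False},
    {"case": "D", "exclusion": "low",  "conflict": True},
    {"case": "E", "exclusion": "high", "conflict": False},
    {"case": "F", "exclusion": "low",  "conflict": False},
]

def conflict_rate(obs, exclusion_level):
    """Share of observations at a given exclusion level in which conflict occurred."""
    subset = [o for o in obs if o["exclusion"] == exclusion_level]
    return sum(o["conflict"] for o in subset) / len(subset)

high_rate = conflict_rate(observations, "high")
low_rate = conflict_rate(observations, "low")
print(f"Conflict rate under high exclusion: {high_rate:.2f}")
print(f"Conflict rate under low exclusion:  {low_rate:.2f}")

# Co-variation in the hypothesised direction is only an initial plausibility
# test; it says nothing yet about the mechanism connecting cause and outcome,
# which process tracing must supply.
if high_rate > low_rate:
    print("Pattern consistent with the hypothesis; worth process tracing.")
else:
    print("No supportive co-variation; the hypothesis loses plausibility.")
```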
Two further issues follow from this. The first is that for each such relationship, suitable indicators need to be identified that allow for accurate measurement and that represent a theoretically valid construct of the relationship hypothesised. The second is twofold: on the one hand, for co-variation to be meaningful, multiple observations are required (e.g., across multiple comparable cases or multiple instances within the same case), but on the other hand, co-variation in itself is not sufficient to make credible pronouncements about causality. It is, however, a useful initial plausibility test that can be further probed by process-tracing techniques as outlined above, in order to facilitate a degree of generalisation and typological theorising (see Coppedge Reference Coppedge2012). Hence, Thelen and Mahoney (Reference Thelen, Mahoney, Mahoney and Thelen2015) emphasise that “it is not sufficient to demonstrate that hypothesized causes co-vary with outcomes across cases. Rather, the researcher must provide the reasons why this is so by opening up the black box and identifying the steps that connect observed causes to observed outcomes” (15).
If the research design is of the causes-of-effects type, and thus, especially in the context of case studies, potentially more open-ended and inductive, data requirement challenges are of a different kind. Even in such cases some theoretically-informed “directedness” of research is quite useful, but it would normally encompass a wider range of theoretically possible and plausible explanations (i.e., hypothesise a range of causes and then probe their relevance in a particular case, while remaining open to the serendipitous discovery of additional causal factors in the course of case study research and thus generate new hypotheses and contribute to theory building). Such designs, too, lend themselves to both co-variation and process tracing along the lines of what I have outlined above. While often more inductive in their approach, causes-of-effects designs can also be applied in cases in which various rival explanations exist in order to test their validity in the context of a particular case, confirm or disconfirm them, and contribute to the development of new theory by generating new, case-based hypotheses that can subsequently be tested in other cases.
Thus, co-variation and process tracing in case study research can be considered a useful “package” of data analysis methods. Based on a set of initial propositions (either “hunches” based on expert knowledge of one or more cases or theory-derived hypotheses or a combination of the two), co-variation can serve as an initial test to rule out certain relationships. If there is no pattern of co-variation, process tracing will not establish any causal mechanisms either. On the other hand, if there is co-variation, process tracing can be used to confirm whether the relationship is indeed causal (Mahoney Reference Mahoney, Rueschemeyer and Mahoney2003, 363) and can identify through which mechanism causes and outcomes are related. In that sense, case study research is less concerned with “the net effect of a cause over a large number of cases but rather for how causes interact in the context of a particular case or a few cases to produce an outcome” (Bennett and Elman Reference Bennett and Elman2006, 262).
A case-study based approach affords an opportunity to think about co-variation in a more complex way. In-depth case study research may be able to identify patterns of co-variation in which a condition may be “an essential part of several causal combinations both in its presence and absence state” (Ragin Reference Ragin1987, 27). Moreover, it may be possible that different patterns of co-variation emerge if one considers the “magnitude” of a particular factor and/or the specific point in time at which it occurs in a sequence of events. For example, sudden refugee influxes from a neighbouring country experiencing conflict are often considered a potential cause of conflict diffusion (or spill-over), with likely mechanisms being intensified competition over scarce resources or changing demographic power balances. Thus, one might need to consider at which scale such mechanisms would be triggered in both absolute numbers of refugees and relative to the population in the receiving state. Similarly, the deployment of peacekeepers has been identified as a potential conflict-mitigating strategy of international intervention, yet its success depends on when the deployment takes place in the conflict cycle—before a major escalation of violence, as a measure to enforce an end to violence, or to guarantee an agreed ceasefire or conflict settlement.Footnote 17 In each of these scenarios, the size of the deployed contingent and the robustness of its mandate are also frequently cited factors shaping eventual outcomes. Put differently, a comprehensive understanding of the context of a case increases our ability to make the best possible use of methods like co-variation and process tracing.
This, in turn, enables researchers to better define the scope conditions within which particular claims can be tested to establish whether hypothesised causal relationships are likely to be true; in other words, it contributes to developing properly specified theoretical propositions as required by process-tracing standards. Consequently, it also has important implications for our ability to offer evidence-based policy recommendations. Knowing when and how refugee crises have destabilising regional effects can determine the timing and method of intervention (e.g., what is the window of opportunity to deploy which resources in order to alleviate resource scarcity). Likewise, understanding the effects of peacekeepers on conflict de-escalation and settlement can provide more effective crisis responses (e.g., pre-escalation deployments may be effective even with more limited numbers, whereas deployments to enforce or guarantee ceasefires and settlements may require larger, longer, and more robust missions).
The fundamental issue concerning data requirements is whether data is available that will allow the application of a data analysis method that can generate robust inferences on the basis of which a particular research question can be answered. For each potential relationship between causes and outcomes, suitable indicators need to be identified that allow for accurate measurement and that represent a theoretically valid construct of the relationship hypothesised. Put differently, we need data (observations) that allow us to determine co-variation (measuring patterns of changes in causes and outcomes) and data (observations) that allow us to trace the process that connects presumed causes with their effects.
In terms of the internal dimensions of the context in which data collection and analysis operate, this means identifying appropriate indicators and sources for their measurement. For example, if we sought to understand refugee movements, we could hypothesise that it is the intensity of violence in a given conflict that determines refugee numbers. We could consider combatant and civilian casualty figures as a direct indicator of the intensity of violence, as well as, for example, burnt-down settlements or destroyed crops and farm animals. While casualty figures are often highly contested and accurate numbers are hard to come by, it is usually possible to estimate ranges of casualties on the basis of several official and unofficial sources. Population displacement is not always easy to measure either, especially when it comes to internal displacement where access might be difficult even for international humanitarian relief organisations. Refugee numbers tend to be easier to obtain, especially if governments of receiving states grant such access. With some caveats, it might thus be possible to establish whether there is co-variation between the intensity of violence in a particular case and the number of refugees in a neighbouring state or states. Reasoning that people are unlikely to leave their homes without good cause, co-variation would establish one plausible such cause, but process tracing would be required to make the causal claim stick: how is violence connected to displacement (i.e., how does it shape people’s decisions to flee)? Interviews with refugees in refugee camps in neighbouring states, for example, would be a relatively safe way of collecting evidence of such a relationship (compared to in-country work), and are in fact often conducted by human rights NGOs, such as Amnesty International or Human Rights Watch, thus potentially also allowing for the use of such reports as a source of relevant data in their own right or to complement a researcher’s own fieldwork. Process tracing could further facilitate an understanding of the precise mechanisms of displacement: is fear induced by first- or second-hand accounts of actual violence, is the threat or (historically-grounded) expectation of violence enough to force people to flee, and/or is the availability/accessibility of sanctuaries in neighbouring countries a significant pull factor? Collecting individual narratives of displacement would, in this way, help us understand how and why individuals and communities make decisions to take flight under particular conditions.
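A minimal sketch of the co-variation step in this example might look as follows (in Python, with invented casualty estimates and refugee figures standing in for the contested real numbers): violence intensity per period is proxied by the midpoint of divergent casualty estimates, and the rank ordering of periods by violence is then compared with their rank ordering by refugee outflows.

```python
# Minimal sketch (hypothetical figures throughout): estimate violence intensity
# per period from several divergent casualty counts, then check whether periods
# rank-order the same way on violence and on refugee outflows.

periods = {
    "period_1": {"casualty_estimates": [300, 450, 520], "refugees": 40_000},
    "period_2": {"casualty_estimates": [900, 1200, 1500], "refugees": 210_000},
    "period_3": {"casualty_estimates": [400, 600, 700], "refugees": 90_000},
}

def midpoint(estimates):
    """Use the midpoint of the reported range as a rough intensity indicator."""
    return (min(estimates) + max(estimates)) / 2

violence_rank = sorted(periods, key=lambda p: midpoint(periods[p]["casualty_estimates"]))
refugee_rank = sorted(periods, key=lambda p: periods[p]["refugees"])

print("Periods ordered by violence intensity:", violence_rank)
print("Periods ordered by refugee outflows:  ", refugee_rank)

# Identical rank orderings are consistent with co-variation between violence
# and displacement; process tracing (e.g., interviews in refugee camps) would
# still be needed to establish the mechanism connecting the two.
print("Co-variation plausible:", violence_rank == refugee_rank)
```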
There is also a critical external context as far as data collection is concerned: what data are safely, legally, and ethically accessible for the researcher,Footnote 18 and how credible (truthful) and reliable (accurate) are such data as a basis for analysis and inference? Considerations of safe, legal, and ethical researcher conduct on conflict zones pose, at times, significant challenges to fieldwork-based methods of data collection, such as direct and participant observation, key-informant interviews, and document, policy, and discourse analysis. As a result, fewer data may be available to the researcher, and what is available may be of poorer quality and more contradictory, especially in cases where even basic facts are contested, where the information space is crowded, and where rival dis/information campaigns are common. Short of abandoning certain research questions, how can these issues be effectively addressed?
Case study research and its process tracing method, because of their requirement for an in-depth understanding of the full complexity of a given case, provide a first line of defence. Comprehensive case knowledge is a safeguard against unsafe and potentially illegal conduct while in the field and is thus critical in the preparatory stage of fieldwork. It also enables the researcher to identify relevant sources, assess their degree of accessibility, and judge the extent to which data gathered from them are credible and reliable. The latter, in turn, plays a crucial role later on in data analysis when it comes to weighing potentially conflicting evidence in drawing credible inferences from available data.
It is important not to underestimate the role of theory in case study research.Footnote 19 While case studies can be both theory-testing and theory-generating—more theory-grounded and deductive or more exploratory and inductive in their nature—the case-study research process is often iterative with a constant back-and-forth between theoretical and empirical considerations. This is relevant in several ways for the context of fieldwork-based case study research on conflict zones. First, theory (or in a broader sense the existing understanding of the relationships underpinning a particular research question) guides an inquiry both in a grand theoretical sense (a “structural realist” approaches a particular case differently from a “critical constructivist”) and in a mid-range theoretical sense (scope conditions determine the choice of a case or cases). Theory thus “helps us identify what to observe, defines the relevant and meaningful characteristics of actors and institutions, and fills in the connections between action and reaction so that we can plausibly reconstruct events and processes” (Coppedge Reference Coppedge2012, 62).
Second, the theoretical parameters within which a case study is bounded inform the threshold of evidence—the point at which we can consider a piece of data, such as a data-set observation or a causal-process observation, to amount to reasonable proof that a particular relationship or mechanism is in operation. Given the data challenges outlined above, theoretical plausibility is a critical test for the robustness of any inferences drawn. In an effects-of-causes theory-testing research design, this would be a more straightforward test based on existing theories. In a causes-of-effects theory-generating design, inferences drawn on the basis of theoretical plausibility would require sufficient specification of a new or refined theory and be contingent on further theory testing in other cases fitting the scope conditions established.
Thus, the standard of evidence in fieldwork-based case study research on conflict zones is one of plausibility in two ways: theoretical plausibility and empirical plausibility, the latter deriving both from a single case study and from the structured, focused comparison of multiple cases (including within-case comparisons) within specified scope conditions. The question of “how we know” that our observations can form the basis of credible inferences about certain causal mechanisms being at work must therefore be answered in two ways: by focusing on the “how”—demonstrating the appropriateness and rigour of the methods used—and by being specific about the evidentiary standard applied to what we claim to be the (new) knowledge generated within specified confidence boundaries, or in other words, being explicit about the degree of uncertainty that remains about the findings presented. By extension, the theoretical understanding that can be generated by case studies is “plausible in bounded times and places, but also provisional” (Coppedge Reference Coppedge2012, 66) until it has been tested across a wider range of cases that fit its specific assumptions.
Illustrations
What follows are two examples of how the general considerations above can be implemented in concrete instances of fieldwork-based case study research on conflict zones. The main point is that practical empirical challenges of fieldwork can be mitigated methodologically and theoretically to allow robust inferences and contingent generalisations to be drawn from single case studies that can help us to develop and test theories and also inform policy making. The publications examined are part of the same research project on the conflict in DonbasFootnote 20 conducted by Tatyana Malyarenko and myself. The two publications address separate questions. While they are mostly underpinned by the same fieldwork, they approach the issue of what we know and how we know in different ways that illustrate the range of options available to researchers grappling with similar fieldwork challenges.
We have worked on Ukraine, and the contested neighbourhood of the post-Soviet space more generally, for more than a decade, and have an established track record of relevant publications (Malyarenko Reference Malyarenko2015; Malyarenko and Galbreath Reference Malyarenko and Galbreath2016; Whitman and Wolff Reference Whitman and Wolff2010; Whitman and Wolff Reference Whitman and Wolff2012a; Beyer and Wolff Reference Beyer and Wolff2016; Kemoklidze and Wolff Reference Kemoklidze and Wolff2019). We have acquired deep knowledge and understanding of relevant actors and issues over time, language skills, and networks of academic and policy contacts across Ukraine, Russia, the EU, and the US. This created opportunities for a total of over 60 interviews to be conducted in the whole project and for research findings to be presented at different stages to a wide range of audiences on more than a dozen occasions.Footnote 21 At the same time, our previous joint and individual work has also been theoretically informed by a broadly neo-classical realist view of understanding states’ behaviour in the international arena and their approach to conflict management at local, regional, and global levels (Wolff and Dursun-Özkanca Reference Wolff and Dursun-Özkanca2012; Wolff and Yakinthou Reference Wolff and Yakinthou2013; Whitman and Wolff Reference Whitman, Wolff, Whitman and Wolff2012b). Our position towards the crisis is thus shaped by both personal background and experience in the country prior to, during, and after the intensely violent phase of the conflict in 2014-15 and by our broader comparative expertise of other conflict situations elsewhere.
When we began this project in February 2014, Ukraine was experiencing an acute crisis, which led to the ouster of its then president and the formation of a new government, but also triggered a number of turf wars among oligarchs. This was followed by two severe external challenges to the country’s sovereignty and territorial integrity—the Russian annexation of Crimea and a Russian-supported separatist insurgency in Donbas, the latter of which quickly evolved into a very violent conflict costing approximately 10,000 lives, displacing over two million people, and causing significant physical destruction and economic disruption. Despite several ceasefire agreements and ongoing talks between the conflict parties, the conflict continued to simmer and occasionally flared up at the current ceasefire line, which was established in February 2015 to separate government- and rebel-controlled territories.
The Logic of Competitive Influence-Seeking: Russia, Ukraine, and the Conflict in Donbas
This is an idiographic case study, but also resembles a plausibility probe, an instance in which “the analyst probes the details of a particular case in order to shed light on a broader theoretical argument” (Levy Reference Levy2008, 6; see also George and Bennett Reference George and Bennett2005, 111). It illustrates the use of process tracing to support a particular explanation or causal mechanism (Mahoney Reference Mahoney, Rueschemeyer and Mahoney2003, 365) “derived ex ante from theories” (Schimmelfennig Reference Schimmelfennig, Bennett and Checkel2014, 100f.) that appear relevant and plausible following a limited test that validates “an ‘initial suspicion’ that the [hypothesised] causal mechanism has actually been at work and effective” (Schimmelfennig Reference Schimmelfennig, Bennett and Checkel2014, 104).
We examine the crisis in Ukraine since late 2013 through the lens of four successive internationally mediated agreements and ask why these have been at best partially implemented. While primarily driven by empirical interest, this is nonetheless also an important question from a theoretical perspective for a number of subfields of International Relations, including inter- and intrastate conflict management, geopolitics, and especially the relations between great powers in the context of the politics of unrecognised states.
We explore how our empirically-driven research question can be connected to our presumptive theory of competitive influence-seeking in a methodologically sound way and ask what we would need to observe in the analysis of Russian strategy in Ukraine that would offer evidence of competitive influence-seeking. Crucially, we also specify likely observations if the hypothesised logic of competitive influence-seeking were not true, thus building in a safeguard against confirmation bias. This approach shapes the selection of our sources, including key informants, and the various questions explored in both field work and desk research. We conducted field work individually and jointly (including interviews and workshops), and while the initial development of the theoretical framework and methodology was led by me, the project as a whole was a joint effort in all its parts.
This conceptual and theoretical framing of the project, together with in-depth prior case knowledge, provided a sound basis for identifying and justifying the appropriate methods of data collection and analysis, leading us to rely on relevant documents and official statements (analysed textually), participant observation, and key informant interviews as primary sources for data collection. This allowed us to utilise co-variation and process tracing.
While co-variation enabled us to establish a prima-facie plausibility of our argument that a logic of competitive influence-seeking has driven Russian policy in the Ukraine crisis, the thick analytical narrative that emerges from process tracing maximises data reliability through triangulation. Using multiple sources allowed us to compensate for limited access to policy makers in Donbas and in Russia, as did the use of experts in universities and think tanks elsewhere in Russia and Ukraine who have a particular familiarity with Russian policy and the evolving situation in Donbas. Thanks to our long-standing (i.e., pre-conflict) networks and contacts across the political spectrum in and beyond Kyiv and Donetsk, we were also able to conduct a number of interviews with internally displaced persons from Donbas who were evacuated with local government institutions, universities, and other organisations formerly based in now rebel-controlled territories. This mix of accessible sources forms the basis of a well-substantiated argument in which every claim is corroborated by more than one source and by more than one type of source.
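As a purely illustrative aid (the claim labels and sources below are invented for the example rather than taken from our actual evidence base), such a corroboration requirement can be tracked with simple bookkeeping that flags any claim resting on fewer than two distinct types of source:

```python
# Minimal sketch (hypothetical claims and sources): flag any claim in the
# evidence base that is not corroborated by at least two different types of
# source, as a simple bookkeeping aid for triangulation.

evidence = {
    "claim_ceasefire_repeatedly_violated": [
        {"source": "monitoring mission daily report", "type": "official_document"},
        {"source": "interview with displaced local official", "type": "interview"},
    ],
    "claim_policy_shift_after_second_agreement": [
        {"source": "expert interview at Kyiv think tank", "type": "interview"},
    ],
}

def triangulated(sources, min_types=2):
    """A claim counts as triangulated if it rests on enough distinct source types."""
    return len({s["type"] for s in sources}) >= min_types

for claim, sources in evidence.items():
    status = "triangulated" if triangulated(sources) else "needs further corroboration"
    print(f"{claim}: {status}")
```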
This approach to data gathering and analysis allows us to contribute to typological theorising about Russian policy vis-à-vis the states of the former Soviet Union in the Western CIS and the South Caucasus. The explanation of the Ukrainian case in the context of a general theory of competitive influence-seeking thus also acts as a limited test of this evolving theory. Given the significance of the Ukrainian case and of Russia-West relations in the contested neighbourhood more generally, this is important for policy making in terms of developing scenarios for future developments and in terms of offering policy recommendations, which we provide at the end of this article with the usual sense of caution.
We found that, in line with the logic of competitive influence-seeking, Russia has sought to manage the level of instability in Ukraine in a way that does not preclude the emergence of an overall stable and friendly (that is, pro-Russian) regime in Kyiv, but that prevents, at all costs, the consolidation of an unfriendly (that is, pro-Western) regime, thus enabling Moscow to assert and sustain long-term influence over Ukraine’s domestic and foreign policy orientation. The analytical narrative we offer in support of this assertion rules out the main rival assumption, namely that Russian policy vis-à-vis Ukraine was mostly an improvised opportunistic exploitation of tactical openings at high costs (Freedman Reference Freedman2014). At the same time, we refine and integrate two other potential explanations. The first of these is the notion that a Russian grand strategy aimed at regaining superpower status underpins the Kremlin’s Ukraine policy (Allison Reference Allison2014; Tsygankov Reference Tsygankov2015; Yost Reference Yost2015), while the second is the idea that the Kremlin is mostly driven by the logic of political survival and retaining (rather than enhancing) global status (Bader, Grävingholt, and Kästner Reference Bader, Grävingholt and Kästner2010; Way Reference Way2016). The proposed theory of competitive influence-seeking stresses the importance of the longer-term view and shorter-term hedging in Russian strategy that allowed Russia to settle for establishing the two de-facto entities in eastern Ukraine in such a way that it retains significant future options for extending its influence on Ukrainian domestic politics and foreign policy. Without the breadth and depth of the fieldwork conducted, it is hard to imagine that these findings would have been generated and substantiated with credible evidence.
With these drivers of Russian policy in the contested neighbourhood in mind, we offer three more general conclusions about likely future developments: first, that confrontation between Russia and the West in and over this area is unavoidable; second, that, short of the “withdrawal” of one side or an agreed simultaneous withdrawal of both sides, there is little likelihood of restoring the full sovereignty and territorial integrity of countries like Ukraine, Moldova, and Georgia in the near future; and third, that in light of these difficult challenges locally, regionally, and globally, the management of stability and security in the contested neighbourhood should remain a priority for policy makers in Russia and the West.
The Dynamics of Emerging De-facto States: Eastern Ukraine in the Post-Soviet Space
The study described is an “intensive study of a single case” (Gerring Reference Gerring2006, Kindle Location 208). We were interested in understanding one particular outcome—the emergence of the de-facto entities of the so-called Donetsk and Luhansk People’s Republics in Donbas—in a specific instance: the crisis in Ukraine, specifically between late 2013 and mid-2015.
Developing “a more complete story with actors, motives, stages, and causal mechanisms that move the plot along” (Coppedge Reference Coppedge2012, loc. 3368-3370 of 10599), we focus on the pathway “to locate the intermediate factors lying between some structural cause and its purported effect” (Gerring Reference Gerring2006, loc. 521-522 of 10599). Note, however, that the approach taken in this study is more akin to a causes-of-effects approach (Gerring Reference Gerring2012, 332-335), trying to elucidate comprehensively what caused the emergence of the two de-facto entities in eastern Ukraine.
The analytical framework of a blended conflict that we employ captures the dynamic connectedness of actors, structures, and other factors at and across different levels of analysis and implies a significant role for actors that are external to the state and/or the region in which the conflict is situated or where it originated. This adds to conflict complexity, especially when the penetrating outsiders are, or grow to become, antagonists.
The framework of understanding the emergence of the de-facto entities in Donbas as underpinned by the dynamics of a blended conflict provides sufficient initial guidance on data requirements. While the generation of the core concept—blended conflict—was primarily based on inductive observation of a number of cases, including Ukraine, the more detailed examination of the conflict in Donbas was at least to some extent more deductive in that it used this concept for systematic and structured observation. Yet, as the concept was not yet integrated into a well-formed theory, we had no basis for deriving and testing hypotheses, and in this sense this case study of the conflict in Donbas is more of the hypothesis-generating kind.
The way we conceptualise blended conflicts is suggestive of the need to collect data at local, regional, and global levels of analysis that can help trace the process of the emergence of the two de-facto entities in Donbas. Sources of such data are, first of all, key decision-makers at each of these levels, including local and central government officials and key power brokers in Ukraine, as well as officials in relevant international organisations, both at their headquarters and in the country. These data can be obtained through interviews, focus groups, and participant observation, as well as through official statements and published third-party interviews. Additionally, academic experts and analysts who follow the same case can be useful sources of information (either through their published work or through interviews and focus groups), as well as a sounding board for ideas that develop in the course of field work and desk research. Official documents (such as the joint declarations and agreements concluded in the process of settlement negotiations) formed another source of data that we relied on, as did media coverage, primarily in Ukraine and Russia and originating from the two de-facto entities.
Over time, it became necessary to carefully reflect on, and adjust, our data collection strategy. The investigation of the dynamics underpinning the emergence of the two de-facto entities in Donbas started out as an empirical, curiosity-driven inquiry. As the trajectory of developments in eastern Ukraine pointed ever more clearly towards the establishment of new de-facto entities, we began to follow this process more systematically, initially with a focus on the successive rounds and formats of negotiations and the fate of the various agreements signed, as detailed in the previous example. At this stage, the empirical research consisted primarily of closely following events on the ground and in the media and of interviewing key informants in local and central government in Ukraine, in Ukraine-based missions of international organisations, and in their headquarters.
During the summer of 2014, conditions for fieldwork in eastern Ukraine, and particularly in Donbas, became more hazardous for researchers and interlocutors, all but ruling out the continuation of key informant interviews in or near the conflict zone. Therefore, we began to rely more on internet-based media sources that by then had started to carry statements from, and interviews with, leading officials in the rebel governments. While we were not able to ask our own questions, the issues addressed in these broadcasts covered many areas of interest. Moreover, when comparing such third-party interviews to those we had been able to carry out before the deterioration of the security situation in Donbas, we found that they were generally neither more nor less credible than the ones we had conducted ourselves; relying on them thus constituted a reasonable adjustment to our existing data collection strategy.
We faced a similar problem, albeit for different reasons, with interviews with key informants from Russia. As we were unable to gain direct access to senior government sources, we relied on published statements, transcripts of news conferences, and readouts from bilateral telephone conversations. We had better access to academic experts and analysts, partly through established networks that predated the crisis in Ukraine. While participation of our Russian contacts in workshops that we organised in Ukraine had become impossible from mid-2014 onwards, we were still able to conduct interviews via email and Skype or in third-country locations. Taken together, these sources allowed us to reconstruct in detail Russia’s perception of the conflict and to trace the evolution of its policies since late 2013.
There were no comparable problems concerning access to Ukrainian or international key informants (policy makers, analysts, academic experts), and we were able to conduct a significant number of interviews over the course of several years, including with contacts in Ukraine, the OSCE, the EU, UNDP, and the World Bank.
In total, we conducted 65 interviews between April 2014 and August 2018 and discussed our research in 13 workshops over the same period. These workshops, which involved a range of participants from junior academics to seasoned analysts and senior government officials, were one of our strategies for corroborating data obtained from other sources and for sense-checking our own analysis and interpretation of the wealth of information that we gradually built up. This use of key informant interviews and workshops again highlighted the highly iterative nature of the research process, moving between inductive and deductive modes and between empirically driven and theory-guided knowledge generation.
While we used these workshops as an integral part of our research strategy, they were not the only means of triangulation. We also ensured, as far as possible, that we did not rely on a single data source or type of source to support any particular claim, cross-checking information across interview transcripts, media coverage, and official documents. We generally used later interviews in the data collection process to discuss information obtained from a range of earlier sources. Where discrepancies became obvious, we used our best joint judgement to “adjudicate” between sources; where consensus could not be reached, we did not rely on the piece of information concerned in our argument.
It is also worth noting that we were able to access original data in Ukrainian, Russian, English, and German and, in all but a small handful of cases, to conduct interviews in the interlocutor’s mother tongue.Footnote 22 This enabled us to pick up nuances and to establish good personal rapport with our informants.
This data collection strategy thus produced a rich set of observations. At the same time, our pre-existing knowledge of Ukraine and the post-Soviet space more generally, our prior development of competitive influence-seeking as a plausible theory for understanding Russia’s Ukraine policy, and our assumptions about blended conflicts formed a “comprehensible universe of causal relations” (Gerring Reference Gerring2012, 331), in which we could make a number of general assumptions about how the world “works” in such a situation. Taken together, the nature of the data and our ability to interpret them in a structured way required, and enabled, us to use causal-process observations to develop a thick analytical narrative of the developments leading to the emergence of the two de-facto entities in Donbas. In doing so, we specify a particular pathway through which this outcome developed.

Partly because of the inductive nature of this approach, under which we did not have preconceived notions of how the outcome would emerge, partly because there is very little, if any, existing research on this particular aspect either in the Ukrainian context or across the post-Soviet space,Footnote 23 and partly because of the real-time research that we conducted as events unfolded on the ground, a systematic consideration of alternative explanations was less feasible. However, the particular combination of data collection methods, especially workshops and interviews, also served as a means to debate and test such rival explanations. Initially, this was an almost accidental by-product of academic and policy workshops in which different participants presented their own papers and discussed those of others. As our own work developed and a clearer argument began to emerge, built first on the theory of competitive influence-seeking and second on the concept of blended conflict, debate on the suitability of both became more robust and forced us to defend our argument on its own merits and by demonstrating that other approaches were less plausible on the basis of the available data.
The analysis of the events that unfolded from late 2013 in and around Ukraine thus establishes a credible and well-substantiated causal pathway “that [is] consistent with the outcome and the process-tracing evidence” (George and Bennett Reference George and Bennett2005, 207) in this particular case. Additional gains emerge from the validation of the analytical utility of the concept of blended conflict and the revalidation of the theory of competitive influence-seeking, both of which lend themselves to broader comparative application.
Conclusion
My starting point, in line with much of contemporary qualitative methodology, is the need for a balanced and refined relationship between the conceptual and theoretical foundations of a particular research project, its empirical basis (i.e., the case or cases), and the methods of data collection and analysis employed. Such a relationship is best conceived as a dynamic process involving “the generation, testing, revising, and retesting of explanatory propositions within the same complex material” (Rueschemeyer Reference Rueschemeyer, Rueschemeyer and Mahoney2003, 315), that is, a constant back-and-forth between ideas and evidence that is mediated by the rigorous application of best-practice standards of qualitative social science research.
In the context of case studies, process tracing appears the most promising avenue for establishing confidence in how and what we know. Using process tracing in case studies rests on the fundamental (Weberian) premise that verstehen (understanding) must precede erklären (explaining), yet, at the same time, “any explanation requires theoretical premises” (Rueschemeyer Reference Rueschemeyer, Rueschemeyer and Mahoney2003, 307) and a methodological apparatus capable of uncovering the causal mechanisms that are presumed by its ontological foundations (Hall Reference Hall, Rueschemeyer and Mahoney2003, loc. 9294-9372 of 11860). Process tracing, if well done, thus facilitates the dynamic relationship between concepts and theories, methods, and empirical evidence and enables the researcher to evidence the causal mechanisms at work in a specific case (or set of cases) that connect causes (and/or conditions) with outcomes.
Using two examples of case studies that rely on process tracing, I have illustrated how three standards of best practice—the need for a theory-guided inquiry, the necessity to enhance causal inference by paying attention to (and ruling out) rival explanations, and the importance of transparency in the design and execution of research—can be applied in the challenging circumstances of fieldwork-based case studies on conflict zones.
A case-informed and case-informing conceptual and theoretical framework, a profound understanding of the case, including macro- and micro-level dynamics, and a contextually sensitive and ethical approach to data collection, including reflection on appropriate levels of transparency, are preconditions for successful process tracing. While the two examples are somewhat different in their objective and execution, these fundamental aspects of research design are, nonetheless, observed in both. Their presence allows researchers to have confidence in their findings, even though any conclusions drawn retain a certain degree of contingency, perhaps less so in the specific case or cases studied, but more so when it comes to theoretical, empirical, and methodological generalisations beyond them. Such limitations are partly inherent in the general nature of social-scientific inquiry, but they also require researchers working on conflict zones to be reflective and open about the limits of the knowledge and understanding they generate, particularly because of the policy applications that their work may have.
The cautious approach to policy recommendations evident in the two examples considered here is common to much of the research on conflict zones. It also highlights that the translation of fieldwork-based case study research into policy recommendations has implications beyond the academy. Transparency about data collection and analysis is a critical contribution that scholars can make to evidence-based policy making, but it does not absolve policy makers from reflecting on their own judgements of the credibility of policy recommendations and taking responsibility for them. In turn, the different skills that scholars and policy makers bring to the table when it comes to evaluating the evidence generated from fieldwork could usefully be combined in efforts to establish community standards for this very purpose, standards that would improve the robustness of both the evidence and the policy implications that might follow from it.
My focus on just three specific quality standards for process tracing is not meant to negate the relevance of the much broader requirements found in the literature. These continue to apply, but we must be aware of the limitations imposed upon the process tracing method when it is applied to fieldwork-based case study work on conflict zones. Focusing on the necessity of theory-guided inquiry, of ruling out rival explanations, and of being as transparent as possible about how data were collected and analysed should be seen as a minimum threshold below which causal inferences cannot be relied upon, which is particularly important given the policy implications of much case study research in this field. The focus on just these three standards also aligns with the standard of evidence in fieldwork-based case study research on conflict zones that I have proposed, namely that of theoretical and empirical plausibility.
This is not to argue that, given the challenges of fieldwork-based case studies on conflict zones, we should not still aspire to, for example, Waldner’s (Reference Waldner, Bennett and Checkel2014) “completeness standard,” but we must be realistic about the extent to which achieving it is possible. An evidentiary standard that prizes both ex-ante derived theoretical plausibility and process-tracing-based empirical plausibility may not be the gold standard in the field of process tracing, but it may still be preferable to “just-so” stories or to no application of the process tracing method in this field. Process tracing that meets a quality threshold of theoretical and empirical plausibility is a viable method for arriving at contingent causal claims that enhance our knowledge and understanding of real-world cases of conflict and can lend themselves to equally contingent and cautious policy recommendations. To paraphrase Cohen and Arieli (Reference Cohen and Arieli2011), while accepting its inherent limitations, such an approach “may make the difference between research conducted under constrained circumstances and research not conducted at all” (433). This applies to conflict zones in the post-Soviet space, as demonstrated here, and well beyond.
Acknowledgments
I am grateful for the constructive comments I received from Tatyana Malyarenko, Claudius Wagemann, Markus Siewert, George Kyris, Natascha Neudorfer, Giuditta Fontana, Argyro Kartsonaki, and Christalla Yakinthou, from participants at the Workshop on “Methodological Advances in the Study of Civil War and Political Violence” at the University of Birmingham’s Institute for Advanced Studies in November 2018, and the EWIS Workshop on “International Dimensions of Unilateral Secession” in Krakow in June 2019, and from two anonymous reviewers and the editor of Nationalities Papers. Special thanks are also due to Richard Snyder for his very thoughtful copy-editing. The usual disclaimer applies.
Financial Support
This work was supported by the UK’s Economic and Social Research Council under Grant ES/M009211/1 (“Understanding and Managing Intra-State Territorial Contestation”).
Disclosure
The author has nothing to disclose.