
Models, Conceptual and Predictive: A Response to Johnson’s Models-as-Fables

Published online by Cambridge University Press:  04 October 2021


Abstract

James Johnson argues that formal models are best conceived as fables which provide lessons about empirical phenomena, and that the "standard rationale" of testing model predictions fails. Without justifying the "standard rationale" as such, we argue that models do produce scientific predictions. These predictions come at different levels, or granularities, of description and in different forms, each bearing some degree of uncertainty, but they still give conditions for the existence of political phenomena. Models and their predictions require projection onto the world, and that projection involves interpretation. Tests utilize inference to the best explanation, and it is the conceptual or theoretical aspect of models that makes them explanatory. We discuss the extent to which our characterization of models and their explanatory form, versus that of Johnson, constitutes a verbal or substantive dispute.

Type
Reflection
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of the American Political Science Association

Recently in this journal Jim Johnson has produced a powerful argument about the role of formal models in political science. He defends his view of scientific models as fables, similar to Aesop’s, where animals imbued with well-defined human characteristics interact and teach us a moral lesson. In this view models are distinct from theories, produce no predictions, cannot be tested, and do not bear a truth value. His argument is directed at what he calls the “standard rationale” in political science—and specifically at “Positive Political Theory” (PPT).

Johnson illustrates his case using an extended case-study of a sustained debate in PPT, tracing it from McKelvey–Schofield to rational choice institutionalism. However, his argument extends to all mathematical models in the social sciences, following Daniel Hausman in taking mathematical models to be conceptual. For Johnson, such conceptual exercises are part methodological, part explanatory. We do not dispute the conceptual role of formal models. We do dispute that such conceptual exercises entail that models are not predictive. Indeed, we argue that, to be explanatory, they must provide predictions. Our account of predictions, however, is not Johnson's.

To that extent, our disagreement might be merely verbal (Chalmers 2011). A verbal disagreement can take (at least) two forms. The same term or claim can be used with two different extensions—in other words, the dispute concerns ambiguity of the claim itself. Or different words can be used to describe the same extension, but the descriptive differences add little of substance to our understanding of the extension itself. But verbal disputes of this second variety—which is how we see at least part of the difference between Johnson and ourselves—can be the product of a substantive dispute elsewhere. However, Johnson also claims that formal models have no substantive empirical content, do not bear truth values, and cannot be empirically tested. Our substantive disagreement is that they do and can.Footnote 1

Our account of models is broader than Johnson's. We do not see a large discrepancy between mathematical models and less formal ones. We do not defend the "standard rationale" as such, since our account does not fully conform to what we take that rationale to be. So, despite appearances—we think models produce predictions, Johnson does not; we think formal models are open to empirical testing, Johnson does not; we think models bear truth values, Johnson does not—our account of what we do when we use models is not, in some regards, so different from his.

We begin by briefly laying out Johnson’s position and the “standard rationale.” We then discuss various ways of thinking about models and how Johnson’s and ours fit into those ways. We explain our position, pointing out the dangers of mere verbal dispute, and finally compare again with Johnson, arguing that our position is both substantively and rhetorically superior.

Models-as-Fables

The standard rationale is:

(1) we rely on formal models to generate predictions, (2) we treat these predictions as empirical hypotheses, and (3) we seek to test these hypotheses against evidence derived from the "real world." Models, according to the standard rationale, are valuable for directly empirical purposes. (Johnson 2020, 1–2)

We can illustrate how the standard rationale may misinterpret the point of formal models with a simple example. Think of the toy game Prisoners' Dilemma (PD). This model shows how the agents' preferences and the game form dictate a suboptimal solution. Now, for many years social psychologists and behavioral economists tested the prediction in laboratory experiments, finding higher levels of cooperation among subjects than in the toy PD. Game theorists responded that these experiments were not tests of the one-shot PD. One line was that the experiments did not represent a single-play PD, and when the PD is iterated we expect higher levels of cooperation. Even if the experimental set-up is fixed, contrary results would just show that either the subjects' preferences were not those of the game-agents or the subjects misunderstood the game form. One way of interpreting these claims is that since game theory is a branch of mathematics, it cannot be empirically tested. Ken Binmore (1992, 314), for example, says that "there is nothing a game theorist would like better than for his propositions to be entitled to the status of tautologies just like proper mathematical theorems."Footnote 2

What use, then, is the simple PD as a model of human behavior? It is a fable that provides a moral for understanding human cooperation. In some situations, we cannot simply expect people to work together in their common interest. The moral suggests we need to carefully study social relationships to understand how cooperation might break down—Johnson (2020, 8–9) mentions Ostrom's empirical work as an example.Footnote 3 The conceptual lesson is that mutual cooperation is not assured.

In the section "Retelling Rochester," Johnson narrates the moves over several decades in the formal analysis of disequilibrium, from the generalized instability of majority rule, through institutional equilibrium solutions, to institutions as equilibria. He argues that this is a conceptual exercise that ends with understanding institutionalism as methodology. The moral is that how institutions affect human behavior is the subject of politics. And not just that, for that is an ancient trope; the further moral is that small changes in institutional forms can have vast effects on the types of outcomes that emerge. Rule changes can have large and unintended consequences.

There is a further important aspect of Johnson’s account. He distinguishes three accounts of models—the syntactic, the semantic, and the predicate—his own preferred view being the predicate.Footnote 4 He will, we think, place our account here into the syntactic conception (though we are somewhat chary of these neat divisions).Footnote 5 However, Johnson’s commitment to the predicate account is important for the danger of our dispute being merely verbal. Johnson, following Hausman, distinguishes models from theories. He says,

Among philosophers of science of various stripes something of a consensus exists that whatever else we might want to capture with the idea, a theory consists minimally in a set of claims about the world, meaning a set of claims with substantive content. So understood, a theory can be assessed empirically and found to be supported or not by appropriate evidence. (Johnson 2020, 6)

Johnson, like Hausman, needs to distinguish models from theories, since their claim is that models do not have substantive content—they are conceptual exercises. However, suggesting that a theory is minimally a set of claims "with substantive content" about the world does not capture what is minimally meant by theory. A theory is not simply any old set of claims—"Grass tends to be green, northern hemisphere swans tend to be white, the moon is made of cheese, and Trump was once president of the United States" is a set of claims with substantive empirical content, but this set does not constitute a theory. Minimally, a theory has to provide an explanation.Footnote 6 Minimally, it has to be some complex set of propositions which together lead us to believe we understand how part of the world works. And that is the role of predictions. Scientific predictions are logical implications drawn from some theory. They are existential since they are conditional: "if condition X holds, then we expect Y"; or "if condition X holds, we expect with some probability that Y holds." They are explanatory, at least in part, because if X is supposed to explain Y, then it should do so in all analogous circumstances (Dowding 2016, ch. 3).

Scientific theories also utilize concepts. For nominalists these concepts do not refer to anything in the world. For realists they do. Theories refer to types of things; nominalists do not think types or universals exist, only tokens or particulars. Theories and models do not refer to tokens or particulars. For nominalists they only take on empirical content when applied to token examples. For realists they directly have content since types and universals are assumed to exist. Either way round, theories are about types of things, and this is important to the nature of their substantive content. The substantive empirical content (to a realist) is the type to which they refer. To be sure, even to the realist, that empirical content is instantiated in the token examples which compose the type. But the type also ranges over non-actual tokens. Popper (1972, 119–30) argues that the empirical content of a theory is composed of what it excludes. And that means what it excludes counterfactually. It should tell us what would happen in a non-actual token event. Science is about necessity and not contingency.Footnote 7

Theories, then, are minimally explanatory, contain theoretical concepts, but are supposed to have substantive implications about the world. These implications are type phenomena (Y) predicted under conditions X. Generally speaking, such conditions are the structures that constrain the possible form of Y. Sometimes, but not always, those constraints specify a specific form of Y. We apply the theory to situations where that structure is present. We can also reverse engineer and predict that, given some set of outcomes, the structure must have some given form. Theories are composed of theoretical terms (concepts) designed to help us understand how the world works. We apply them to specific or token cases that require interpretation. Following Nelson Goodman (1958) we call this "projection." And Goodman importantly shows (via his "grue" example) that even induction requires theory in order to determine which, from the possible set of projections, is the one we are interested in (see Dowding 2016, 107–11). It is theory that makes such induction explanatory.

Models as Mechanisms

We understand models as one type of explanatory theory. They can still be mediators between aspects of the world and higher-level theories. For example, different agency models, looking at different structural features between principals and agents, can all be categorized within a broader agency theory, or a broader theory about human responses to incentives. But each model can be considered a theory of a particular type of relationship. We see “theory” as a generic term for any generalized explanatory account of some aspect of the universe.

Specifically, we see models as modeling mechanisms, or parts of mechanisms, depending upon how one specifies the latter. There are many competing definitions of mechanism (see Hedstrom and Ylikoski 2010; Beach and Pedersen 2016). We see mechanisms as Woodward (2003) does—summarized here by Hedstrom and Ylikoski (2010, 51):

A model of a mechanism (a) describes an organized or structured set of parts or components, where (b) the behavior of each component is described by a generalization that is invariant under interventions, and where (c) the generalizations governing each component are also independently changeable, and where (d) the representation allows us to see how, by virtue of (a), (b), and (c), the overall output of the mechanism will vary under manipulation of the input to each component and changes in the components themselves.

Games have (a) a game form (a set of decision nodes) populated by agents conforming to the axioms of rational choice; (b) each decision node is treated in exactly the same way, with agents maximizing their utility; (c) the game form can be changed via the number of decision nodes, and the agents' utility functions can be changed independently of it; and (d) such changes in decision nodes and/or preference orderings will vary the output.

The one-shot PD is very simple, and usually referred to as a "toy game" in comparison to more complex dynamic games where the real work is done (see Ross 2019). Nevertheless, it produces a prediction. The game structure (agent preferences plus game form) predicts agents will reach a suboptimal collective solution. The prediction is a type of event. Under this game form with these preferences, the outcome is suboptimal. Formal models in these terms must be predictive since they are deductive. Are these empirical predictions? If the conditions in the world conform to the model, they will occur. That is an empirical prediction. But does it project onto any actual situation in society?
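
To make the deductive character of that prediction concrete, here is a minimal sketch of our own (not Johnson's, and with illustrative payoff numbers) of the one-shot PD treated as a Woodward-style mechanism: the components are the two players and the payoff structure, the invariant generalization is best-response behavior, and the predicted output is whatever strategy profile survives it.

```python
# A minimal sketch (illustrative payoffs, not from the article) of the one-shot
# Prisoners' Dilemma as a mechanism: (a) two players and a payoff matrix,
# (b) each player best-responds to the other's strategy, (c) payoffs can be
# changed independently, and (d) changing them changes the predicted outcome.

from itertools import product

ACTIONS = ("cooperate", "defect")

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
PD_PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def pure_nash_equilibria(payoffs):
    """Return all pure-strategy profiles from which neither player gains by deviating."""
    equilibria = []
    for row, col in product(ACTIONS, repeat=2):
        row_pay, col_pay = payoffs[(row, col)]
        row_ok = all(payoffs[(r, col)][0] <= row_pay for r in ACTIONS)
        col_ok = all(payoffs[(row, c)][1] <= col_pay for c in ACTIONS)
        if row_ok and col_ok:
            equilibria.append((row, col))
    return equilibria

print(pure_nash_equilibria(PD_PAYOFFS))   # [('defect', 'defect')]
# The predicted profile yields (1, 1), Pareto-dominated by (3, 3): the model
# deduces a suboptimal collective outcome whenever this structure holds.
```

Change the payoffs in the manner of component (c), say by making mutual cooperation individually best, and the deduced output changes accordingly: the prediction is conditional on the structure.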

We can think of this question as analogous to laboratory experiments (Guala 2005; Mäki 2005). Does the PD toy game have any external validity? Even if you think its external validity is zero, it still produces scientific predictions. Scientific predictions should be distinguished from pragmatic ones, which attempt to forecast; but forecasts need not be explanatory at all (Dowding and Miller 2019). Scientific predictions are necessary components of scientific explanation and they concern not only outcomes but, as they model mechanisms, the structural features of types. We can note that if one only tests a model in terms of projecting the outcomes, one can only find evidence consistent with the model (Dowding 2016, ch. 5) and not a full demonstration that the mechanism works as described by the model. This is one defence of qualitative process-tracing methods (e.g., Beach and Pedersen 2016).

If the PD only has low external validity, what use is it? We can understand through it the type of mechanisms that exist in situations that are similar at some level of granularity of description. Any description of the world can be given at different levels—or granularities—of detail. For "toy games" that level of granularity is low. Nevertheless, at that level, predictions exist. To the extent that situations resemble the PD, we can expect suboptimal solutions. For Johnson, that claim is a moral. For us it is a prediction. The dispute seems verbal.

Predictions

The term prediction is used ambiguously. Scott Page (2018, 77) says,

Plate tectonics models explain how earthquakes arise but do not predict when they occur. Dynamical systems models can explain hurricanes, but they cannot predict with much success when hurricanes will form or what paths they will take. And while ecology models can explain patterns of speciation, they cannot predict new types of species.

What Page means here by prediction is forecast. If that is what Johnson means, our dispute is merely verbal. Traditionally, however, scientific predictions are intimately connected to explanation and theory. Dowding and Miller (2019) distinguish "scientific" from "pragmatic" predictions. The former are the logical implications of theoretical models, ideally formal but also informal ones, and of inductive inferences from past data. Scientific predictions are conditional in nature, and in that sense formal models produce predictions. They are about types of events. Pragmatic predictions are specific predictions, often about future token events, and are commonly called "forecasts." Dowding and Miller argue that the criteria for judging good scientific predictions do not coincide with those for forecasts. Good forecasts need not be scientific at all—they may have no explanatory value. Good scientific predictions might not be amenable to forecasting, or to empirical testing when the data are not available or when we are not yet technically equipped to test them. That does not mean they have no empirical content; nor necessarily that they are not testable. There are many scientific predictions that took decades to test. Many scientific tests are indirect, notably in history and archaeology (Currie 2018).

Later in his book, Page writes about four ways in which we can think about the predictions of models: equilibrium, cyclic, random, and complex (Page 2018, 147). Each type bears a certain degree of uncertainty—a fundamental element of reality to incorporate into models. Equilibria give no uncertainty; they provide constraint-based explanations. An equilibrium is sustained even though there might be multiple token causal paths to it. Page's other classes bear different levels of uncertainty, allowing for the emergence of cycles, random walks, and complex systems. Indeed, by assuming that uncertainty plays a role in the operation of a model's mechanism, modelers can evaluate outcomes that do not necessarily fall into well-behaved equilibria. Recent literature on complexity in political science calls for incorporating uncertainty into formal and statistical models, often combining the two into a single entity (Signorino 2003; Minhas, Hoff, and Ward 2016; Warren 2016). Our account acknowledges these distinct types of model predictions. Some of these models are not the target of Johnson's specific argument. However, the same logic underlies all these types of model prediction; only the empirical consequences (content exclusion) of the predictions vary.
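
Page's four classes can be illustrated with toy dynamics. The sketch below is ours, not Page's: the logistic map and the parameter values are standard textbook illustrations, chosen only to show how the same kind of simple model can yield a point prediction (equilibrium), a set of recurring points (cycle), a distribution over paths (random walk), or a bounded but aperiodic trajectory (complex).

```python
# A minimal sketch, not from the article, of the four prediction classes via
# toy dynamics: fixed point, cycle, random walk, and chaotic trajectory.

import random

def logistic_trajectory(r, x0=0.2, steps=60):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def random_walk(steps=60, seed=0):
    """Symmetric random walk: the prediction is a distribution over paths, not a point."""
    rng = random.Random(seed)
    xs = [0.0]
    for _ in range(steps):
        xs.append(xs[-1] + rng.choice((-1.0, 1.0)))
    return xs

equilibrium = logistic_trajectory(r=2.8)   # settles near the fixed point 1 - 1/r (about 0.643)
cycle       = logistic_trajectory(r=3.2)   # settles into a period-2 oscillation
chaotic     = logistic_trajectory(r=3.9)   # bounded but aperiodic: a "complex" prediction
walk        = random_walk()

print("equilibrium tail:", [round(x, 3) for x in equilibrium[-3:]])
print("cycle tail:      ", [round(x, 3) for x in cycle[-3:]])
print("chaotic tail:    ", [round(x, 3) for x in chaotic[-3:]])
```

In each case the model still excludes content: the equilibrium case excludes everything but one point, the cyclic and random cases exclude everything outside a set or a distribution, and the chaotic case excludes everything off the bounded trajectory.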

Johnson's case is that the PPT models in his narrative offer no "prediction at all." His examples of the McKelvey–Schofield chaos theorems, Calvert's equilibrium institutions, and Shepsle's structure-induced equilibria show that these models only offer conceptual tools to grasp an intuition about real-world phenomena, rather than a pragmatic prediction of any specific outcome. Johnson claims that the allegedly vague predictions of "political chaos," of institutional rules that constrain instability, and of the persistence of institutions do not constitute predictions. However, they constitute precisely the kind of scientific predictions that specify necessary constraints on sets of outcomes, and they are tested indirectly through the methodological process Johnson describes. Is this just verbal dispute?

Structural Realism

Johnson's view implies a stark distinction between concepts and the world, or between the abstract and the concrete. "Abstract" means existing in thought or as an idea; "concrete" means existing in material or physical form. Even for a model to work as a fable, there must be some similarity relationship between the narrative and the world, as Johnson (2020, 10–11) recognizes. However, how can a thought be similar to a physical form? Arnon Levy (2015, 785) notes that "abstracta and concreta cannot share properties"; a property understood as something in thought only and a property existing in material form are different sorts of things. Those forms cannot share properties. Models can be projected onto the world because the world is already conceptualized. We see the world in terms of trees or water, of political parties or human beings. We see them as token examples, but also, and obviously even more conceptually, as types.

Models are stripped-down versions of what they represent. Everyone agrees on that. But what is stripped out? What do they represent? Experimental tests of the simple PD assume model agents represent biological human beings. And they test to see if biological human beings, when placed in the game form, act as predicted. Game theorists, as we saw, suggest biological human beings do not necessarily have those preferences, or are not playing the correct game form in the laboratory. What is represented is the structure of relationships between preferences given the game form, such that type-agents will not act in their mutual interest. The similarity relation between the model and types of situations in the world is structural. While it is a conceptual exercise, it is not about non-existent entities but about real-world structures described at low granularity.

In accounts of ontic structural realists (OSR), these structures turn out to be more real than everyday objects, even biological humans, whose actions result from the interaction of genetic-evolutionary, cultural-evolutionary, and information-processing dynamics in the complex human brain (Ross 2008; 2014).Footnote 8 In other words, the causal forces represented by such game forms operate through the minds of biological humans, and it is these forces that are modelled. However, one does not have to buy that far into OSR to see structures as real entities that constrain possibilities, where model agents represent roles that biological agents take on in certain circumstances (Kincaid 2008). People are constrained by the roles they play and, as we argued earlier, scientific explanation is largely about necessary constraints upon contingent affairs. In our account, formal models are about types of social and political forces. Tests of such models concern evidence about these types, generally estimated from the multiple token examples.

Now, it is true that nominalists believe that only tokens exist, not types. Each token has a causal capacity and these capacities can be modelled as mechanisms. The move seems to be: since types do not exist, we cannot have predictions about them, only morals drawn from a token model that can be applied to other token examples (Cartwright 2010). For the realist, however, the underlying patterns we find across token examples are what allow us to form expectations about similar tokens. Those expectations, because they are predictive of the patterns by which we conceptualize the world, are more real than their instantiation in tokens.

Formal Models and Predictions

Johnson's account is directed at PPT, the case-study he specifically describes, where even some of the authors themselves suggest their models are not open to empirical test. We have suggested that the substantive content of these models lies in the constraints they place on outcomes, and that it is examined empirically, though indirectly. We shall now briefly give some examples of formal models with tested predictions; that is, models with explicit predictions, tested via suitable statistical models, whether those predictions take the form of discrete equilibria or of complexity-oriented formats. Scholars have developed diverse strategies for relating models' outcomes to empirical data in ways that fit a statistical test.

Most political scientists are familiar with the equilibrium predictions of game-theoretical models. These are discrete points: well-specified outcomes or scenarios emerging from the model's structure. Nash equilibrium is the most conspicuous equilibrium mechanism. Because so many models utilize it, it can foster the misleading idea that games have a single equilibrium: once you find that solution, the game is solved. If an empirical outcome conforms to such a prediction, the prediction is consistent with the outcome no matter what the causal path. Elliott Sober (1983), for example, claims that, for such reasons, equilibrium explanations are not causal explanations.
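
In fact, even very simple games can have several equilibria, in which case the model's prediction is a set of outcomes rather than a single point. Here is a minimal, self-contained sketch of a generic coordination game; the action labels and payoff numbers are illustrative assumptions of ours, not drawn from any of the models discussed above.

```python
# A minimal sketch, with illustrative payoffs, of a coordination game that has
# two pure-strategy Nash equilibria: the model predicts a set, not a point.

from itertools import product

ACTIONS = ("left", "right")
COORD = {  # (row_payoff, col_payoff)
    ("left",  "left"):  (2, 2),
    ("left",  "right"): (0, 0),
    ("right", "left"):  (0, 0),
    ("right", "right"): (1, 1),
}

equilibria = [
    (r, c) for r, c in product(ACTIONS, repeat=2)
    if COORD[(r, c)][0] == max(COORD[(a, c)][0] for a in ACTIONS)
    and COORD[(r, c)][1] == max(COORD[(r, a)][1] for a in ACTIONS)
]
print(equilibria)   # [('left', 'left'), ('right', 'right')]
```

Which of the equilibria is realized then becomes an empirical question about the token case; the model's scientific prediction is the constraint that the outcome lies within that set.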

That said, there is more to be said about equilibria. Take, for example, the game-tree depicted in figure 1, representing Bueno de Mesquita and Lalman's (1992) strategic game of warfare. The game starts with State 1 making a decision that leads to a node where State 2 has to decide. The actions are denoted by demands D (for State 1)/d (for State 2), and use of force F (for State 1)/f (for State 2). Probabilities p are assigned to each branch of the tree. At least eight different outcomes are possible: (A) status quo; (B) State 1 acquiesces; (C) State 2 acquiesces; (D) both states negotiate; (E) State 2 capitulates; (F) State 1 initiates a war; (G) State 1 capitulates; (H) State 2 initiates a war. Without submerging ourselves in the model's formalities here, we can see that it makes a set of predictions about possible scenarios of an international crisis that may lead to states waging war. Furthermore, by specifying how states react at each node of the game-tree, the model also describes how the crisis unfolds. It specifies potential directions within the mechanism. In Bueno de Mesquita and Lalman's (1992) original tests and Signorino's (1999) subsequent strategic international game, the predictions are tested. These authors treat these outcomes as predictions: each possible outcome is a prediction in its own right. One cannot argue that the model of figure 1 presents a single prediction, unless the model is reduced to predicting war (which would render the model less interesting and, ultimately, less explanatory).

Figure 1 International interaction game, Bueno de Mesquita and Lalman (1992)

Source: Signorino 1999. Signorino reproduces Bueno de Mesquita and Lalman's model in order to develop his own approach to this strategic international game.
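
The logic by which such a tree delivers conditional predictions can be sketched in a few lines. The toy tree below is emphatically not Bueno de Mesquita and Lalman's model; its shape and payoff numbers are illustrative assumptions. The point is only that backward induction over a sequential crisis game picks out which terminal outcome (status quo, acquiescence, war, and so on) is predicted, conditional on the payoff structure, so that changing the payoffs changes the prediction.

```python
# A minimal, heavily simplified sketch of a sequential crisis game; it is NOT
# Bueno de Mesquita and Lalman's actual model. Tree shape and payoffs are
# illustrative assumptions used only to show the conditional-prediction logic.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str
    player: Optional[int] = None          # 0 or 1; None at a terminal node
    children: Optional[dict] = None       # action -> Node
    payoffs: Optional[tuple] = None       # (payoff_state1, payoff_state2) at terminals

def backward_induction(node):
    """Return (predicted_terminal_label, payoffs), assuming both states maximize their payoff."""
    if node.payoffs is not None:
        return node.label, node.payoffs
    results = {a: backward_induction(child) for a, child in node.children.items()}
    best_action = max(results, key=lambda a: results[a][1][node.player])
    return results[best_action]

# Illustrative two-move crisis: State 1 chooses the status quo or a demand;
# if a demand is made, State 2 acquiesces or resists (war).
game = Node("root", player=0, children={
    "status quo": Node("status quo", payoffs=(2, 2)),
    "demand":     Node("State 2 moves", player=1, children={
        "acquiesce": Node("State 2 acquiesces", payoffs=(3, 1)),
        "resist":    Node("war", payoffs=(1, 0)),
    }),
})

print(backward_induction(game))   # ('State 2 acquiesces', (3, 1))
# Lower State 2's acquiescence payoff below its war payoff and the predicted
# terminal outcome changes: the prediction is conditional on the structure.
```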

In a similar vein, formal modelers and empiricists resort to the same lexicon when referring to testing models' outcomes. In table 1, we present a brief survey of formal models in specific areas of political science and international relations. Their authors clearly state from the outset that they aim to test the outcomes of formal models via some statistical method.Footnote 9 What matters here is how one derives hypotheses from the formal model (Diermeier and Stevenson 2000; Martin and Stevenson 2001; Becher and Christiansen 2015), how one measures specific concepts entailed in the model's structure (Eyerman and Hart 1996; Partell and Palmer 1999; Ansolabehere et al. 2005; Tomz 2007), or even how the results of the test are interpreted vis-à-vis the formal model (Laver and Shepsle 1996; Signorino 2003; 2007; Dewan and Spirling 2011).

Table 1 Papers testing formal models

* Articles that propose a model combining formal and statistical structures simultaneously

For some critics, such statistical models do not test formal models because they shift the formal model's parameters. Clarke and Primo (2012, 104) state that while empirical models are most often used to "test" theoretical models, the theoretical (formal) and empirical models are different because "an empirical model … should describe accurately the dependencies within a given data set," and as a consequence they "cannot attain the same level of generality that theoretical models do because where theory is general, data are specific, tied to particular places and times" (105). In other words, they are nominalists claiming token evidence is not evidence of types or universals. They further claim that formal and empirical models belong to different logical domains that lack a "deductive connection" (122); therefore, they conclude, "empirical models … are of little use in theory testing" (122).

First, let us note that no test of a model (mathematical or otherwise) involves deduction. Tests are abductions, or inferences to the best explanation. Indeed, that is the upshot of Johnson's conceptual claim for formal models. Clarke and Primo (2012) defend formal models on similar grounds. They suggest statistical models cannot provide explanations; that is the role of theoretical models.Footnote 10 Their book is not entirely clear about how theoretical models provide explanations for empirical models' findings, unless statistical models are tests of the predictions, in our sense, of the former. There must be some similarity relationship between the world and the formal model for it to constitute an explanation. Like Johnson, they claim models cannot be true or false; indeed, they claim explanations do not have truth values either.

Clarke and Primo (2012, 153) make this claim on the grounds that otherwise it would follow that "Newton's theory did not explain the tides because we now know that Newton's theory is not true." The judgment about whether something is an explanation must lie between the set of propositions constituting the explicans and that of the explicanda. Newton explained that the ocean tides result from the gravitational attraction of the sun and moon on the oceans of the Earth; the greater the mass and the closer the distance, the greater the gravitational attraction. That is true. The proposition relating the explicans and explicanda at that level of granularity is true, which is why we consider his theory an explanation. However, Newton's law states that gravitational attraction is directly proportional to the bodies' masses and inversely proportional to the square of the distance between them. Taken as a direct calculation of the tide-generating force, that gives the wrong answer: distance is more critical than mass, since tidal forces vary inversely with the cube of the distance. So the propositions of Newton's calculations, applied at that level of detail, are incorrect and for that reason cannot explain the precise tidal relationship between the moon, sun, and ocean (for discussion, see Thurman and Burton 2003). It is those propositions that cannot be the correct explanation. Explanation is always explanation at some level (or granularity) of analysis. At the level at which we normally suggest Newton explains the tides, his model is true.

Saying that theories do not have to be completely true in all their details in order to provide a good explanation of phenomena described at some level of description is not the same as claiming that truth has no role to play in explanation. Claiming that neither predictions nor truth are required for models to be explanatory does not seem to leave any way of telling the difference between, for example, purported explanations of storms in terms of patterns of condensing air and in terms of the activities of Norse gods. Models need to be true in those details that bear similarity relations to the world, tested by their predictions as projected onto the world at the appropriate level of granularity.Footnote 11 To be sure, the models are abstract, in the sense that their details are abstracted from the complexity of the world. And they are applied, in the sense (as Clarke and Primo argue) that they might apply to a phenomenon in one frame, whereas another (non-rival) model applies to that phenomenon in a different frame.

Some accounts combine both formal and statistical models into one single entity in order to correctly represent the uncertainties entailed in the former (Signorino 1999; 2003; Signorino and Yilmaz 2003; Signorino and Tarar 2006; see also Minhas, Hoff, and Ward 2016; Warren 2016). This entity cannot be subsumed under an archetypal statistical test: it constitutes a class of models in which both components—formal and statistical—work together to generate and test outcomes. To separate them and read the outcomes off the graphical image of the formal model alone is mathematically misleading, precisely because the formal and statistical components are combined. What really matters here is that the structure of the formal model and the statistical estimation talk to each other to provide a firm metric for empirical testing (Signorino and Yilmaz 2003; Signorino 2007; Warren 2016).

This strategy is particularly interesting where the level of uncertainty is high. The vast majority of political and social phenomena are pervaded by uncertainties and nonlinearities, which frequently make them highly sensitive to small changes in initial conditions. Indeed, this is the underlying premise of chaos theory, which thrived as a particular field in mathematics and physics as a result of Edward Lorenz's (1963) findings in meteorology. In his modelling of atmospheric phenomena via dynamic differential equations, he realized that, although the equations were highly nonlinear and did not converge to a single outcome, their solutions followed trajectories with a predictable structure, and it is these trajectories that constitute the (chaotic) solutions to the problem. As Page (2011, 27) notes, such complex outcomes "lie between simple structures and randomness"; behavior that appears random under highly granular description nevertheless displays recurrent patterns and structures at lower levels of granularity. This fundamental understanding allows models of complex phenomena to make predictions that capture the patterns generated by complexity. Even random walks, we might note, follow paths.Footnote 12
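
A minimal sketch of Lorenz's system makes the point; the crude Euler integration and the classical parameter values are our illustrative choices, not anything from Lorenz's paper or from the models discussed above. Two trajectories started from nearly identical initial conditions soon diverge, so point forecasts fail, yet the claim of chaos theory is that both remain on the same bounded attractor: it is that pattern which the model predicts.

```python
# A minimal sketch (not from the article) of the Lorenz equations, integrated
# with a crude forward-Euler step, showing sensitivity to initial conditions.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations (crude but adequate for a sketch)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def trajectory(state, steps=5000):
    path = [state]
    for _ in range(steps):
        path.append(lorenz_step(path[-1]))
    return path

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.000001, 1.0, 1.0))        # tiny perturbation of the initial condition

gap = [abs(p[0] - q[0]) for p, q in zip(a, b)]
print("initial gap:", gap[0], "gap after 5000 steps:", round(gap[-1], 3))
# The point-wise forecast degrades rapidly, but the trajectories stay within
# the same region of state space: the pattern, not the point, is predicted.
```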

Politics, with its intrinsic complexity, is a natural locus for outcomes that look chaotic when described in detail; nonetheless, this does not mean we cannot discern patterns at lower levels of detail. This is the explanatory and predictive role that even simple models can perform. They can constrain the set of outcomes and can describe the mechanism by which outcomes emerge. In contexts highly sensitive to uncertainty (the main nonlinear feature of politics), predicting equilibrium outcomes is not an easy task (Signorino 2003), nor is observing and testing them vis-à-vis real-world data. Instead, if we think that uncertainty creates complex solutions, depending on how agents interpret it and ultimately incorporate it into their decision-making processes, we can offer predictions that point to possible trajectories rather than to a definitive, unique solution.

Verbal and Substantive

Considering Johnson's extended case-study, our dispute with him might appear to be merely verbal. We agree that the models perform a conceptual exercise that can teach us something about the political world. For Johnson, this is a moral. The model is a token example—here an imaginary tale or fable—which looks a bit like other cases, so it can lead us to interrogate them. For us, it is a prediction at low granularity: one that describes patterns in types, which are then instantiated in token examples. If that is all there is to the dispute, then all we need is a simple translation manual from Johnson's language to ours. Underlying it, however, is whether we think universals and types exist. Such patterns include natural laws, but also general causal mechanisms that underlie empirical generalizations. We have argued that these general patterns are what constitute the explanatory link between models and the world, and they do so via similarities between the general concepts in the model and the instantiation of those concepts in the token empirical examples. We think our language helps us to understand how models are explanatory, whereas in Johnson's (and Clarke and Primo's) argument this is mysterious.

We think there are rhetorical advantages to our argument, too. Reducing political science to token description and claiming that its theoretical aspects simply provide morals akin to fables is ill-advised at a time when science is under attack and the funding of political science is open to ideological whim. To be sure, there are dangers in our science being misunderstood; however, clearly understanding the difference between scientific predictions and pragmatic predictions or forecasts will help in that regard (Dowding 2021). However, if any readers do not think a scientific prediction is a conditional statement of the sort "if X then Y," where Y can range over a point (y), a set of points (y′, y″, y‴), a range (y0.5–y1.5), or a path f(y), then they might not agree with us that (formal) models produce predictions.

One final note. While we argue that models produce predictions that can be tested, and have truth values, there are many poorly constructed models whose predictions are so vague it is not clear what evidence would confirm or disconfirm them. Formalization is a guard against that, but purely verbal models can be deductive and, even when not, still provide testable predictions. To be sure, actual scientific analysis lies in the constant modelling, testing, and remodelling, where theory and evidence move together to produce satisfactory explanations. All sciences are messy in that regard. Our argument is that underlying that messy process is a modelling, prediction, and testing logic.

Acknowledgement

The authors would like to thank the editor and especially one anonymous reviewer for their comments on an earlier version of the paper. They also thank William Bosworth, Anne Gelling, and Shawn Treier for comments, and Jim Johnson, who corrected some errors in interpretation of his work and encouraged them to strengthen their position in other regards.

Footnotes

1 It should be noted that our belief that models can bear truth values, do produce predictions, and can be tested does not entail that all models have those qualities. Some poorly specified models get published.

2 Binmore (2007, ch. 1) has an extended discussion of finding the correct game structure to apply to social situations.

3 And Ostrom also did laboratory work; see Ostrom, Gardner, and Walker 1994.

4 See table 1 in Johnson (2020, p. 9) for a summary of these three accounts.

5 He places a discussion of models found in Dowding 2016 in the semantic camp.

6 There are other types of theories which have a different role from explanation—normative theories, for example—but Johnson and we are talking about scientific theories.

7 Necessity here is not simply logical but also natural necessity—though how broadly that is understood is often disputed. Since Kripke 1972, most accept a posteriori necessity—necessary relations that have to be discovered empirically, and these are often constitutive or conceptual. In other words, conceptual analysis is not divorced from empirical analysis.

8 For more general discussions of OSR see Ainsworth 2010 and Ladyman 2020.

9 For a complete review of coalition models, see Lenine 2020. For a discussion of Fearon's model, see Lenine 2019.

10 Note that, like us, they see models as theories.

11 To be a bit more precise: truth is a predicate of propositions, and those model propositions that bear relevant similarity relations to the propositions about the world can be said to be true. If the explanatory application of the model is based on those propositions, we can claim the model is true.

12 A random walk is non-stationary: the predicted point is conditional on the previous point, and each realization traces a distinct, path-dependent graph. Chaotic systems can switch their qualities dramatically in ways not easy to forecast, but they are still predictable in that we can model the parameters under which such changes can be expected.

References

Ainsworth, Peter Mark. 2010. "What Is Ontic Structural Realism?" Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 41(1): 50–57. https://doi.org/10.1016/j.shpsb.2009.11.001
Ansolabehere, Stephen, Snyder, James M. Jr., Strauss, Aaron B., and Ting, Michael M. 2005. "Voting Weights and Formateur Advantages in the Formation of Coalition Governments." American Journal of Political Science 49(3): 550–63.
Beach, Derek, and Pedersen, Rasmus Brun. 2016. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing. Ann Arbor: University of Michigan Press.
Becher, Michael, and Christiansen, Flemming Juul. 2015. "Dissolution Threats and Legislative Bargaining." American Journal of Political Science 59(3): 641–55.
Binmore, Ken. 1992. Fun and Games: A Text on Game Theory. Lexington, MA: D.C. Heath.
Binmore, Ken. 2007. Playing for Real: A Text on Game Theory. New York: Oxford University Press.
Bueno de Mesquita, Bruce, and Lalman, David. 1992. War and Reason. New Haven, CT: Yale University Press.
Cartwright, Nancy. 2010. "Models: Parables v Fables." In Beyond Mimesis and Convention: Representation in Art and Science, ed. Frigg, Roman and Hunter, Matthew C., 19–31. Dordrecht: Springer.
Chalmers, David J. 2011. "Verbal Disputes." Philosophical Review 120(4): 515–66.
Clarke, Kevin A., and Primo, David M. 2012. A Model Discipline: Political Science and the Logic of Representations. Oxford: Oxford University Press.
Currie, Adrian. 2018. Rock, Bone and Ruin: An Optimist's Guide to the Historical Sciences. Cambridge, MA: MIT Press.
Dewan, Torun, and Spirling, Arthur. 2011. "Strategic Opposition and Government Cohesion in Westminster Democracies." American Political Science Review 105(2): 337–58.
Diermeier, Daniel, and Stevenson, Randy T. 2000. "Cabinet Terminations and Critical Events." American Political Science Review 94(3): 627–40.
Dowding, Keith. 2016. The Philosophy and Methods of Political Science. London: Palgrave.
Dowding, Keith. 2021. "Why Forecast? The Value of Correct and Incorrect Election Forecasts." PS: Political Science and Politics 56(1): 104–06.
Dowding, Keith, and Miller, Charles. 2019. "On Prediction in Political Science." European Journal of Political Research 58(3): 1003–21.
Eyerman, Joe, and Hart, Robert A. Jr. 1996. "An Empirical Test of the Audience Cost Proposition." Journal of Conflict Resolution 40(4): 597–616.
Goodman, Nelson. 1958. Fact, Fiction and Forecast. Cambridge, MA: Harvard University Press.
Guala, Francesco. 2005. The Methodology of Experimental Economics. Cambridge: Cambridge University Press.
Hedstrom, Peter, and Ylikoski, Petri. 2010. "Causal Mechanisms in the Social Sciences." Annual Review of Sociology 36: 49–67.
Johnson, James. 2020. "Models-As-Fables: An Alternative to the Standard Rationale for Using Formal Models in Political Science." Perspectives on Politics, FirstView. https://doi.org/10.1017/S1537592720003473
Kincaid, Harold. 2008. "Structural Realism and the Social Sciences." Proceedings of the 2006 Biennial Meeting of the Philosophy of Science Association Part II. Philosophy of Science 75(5): 720–31.
Kripke, Saul. 1972. "Naming and Necessity." In Semantics of Natural Language, ed. Davidson, Donald and Harman, Gilbert, 253–355. Dordrecht: Reidel.
Ladyman, James. 2020. "Structural Realism." In The Stanford Encyclopedia of Philosophy, ed. Zalta, Edward N. (https://plato.stanford.edu/archives/win2020/entries/structural-realism/).
Laver, Michael, and Shepsle, Kenneth A. 1996. Making and Breaking Governments: Cabinets and Legislatures in Parliamentary Democracies. Cambridge: Cambridge University Press.
Lenine, Enzo. 2019. "International Conflict and Strategic Games: Challenging Conventional Approaches to Modelling in International Relations." Carta Internacional 14(1): 80–102.
Lenine, Enzo. 2020. "Modelling Coalitions: From Concept Formation to Tailoring Empirical Explanations." Games 11(4): 1–12.
Levy, Arnon. 2015. "Modeling without Models." Philosophical Studies 172(6): 781–98.
Lorenz, Edward N. 1963. "Deterministic Nonperiodic Flow." Journal of the Atmospheric Sciences 20(2): 130–41.
Mäki, Uskali. 2005. "Models Are Experiments, Experiments Are Models." Journal of Economic Methodology 12(2): 303–15.
Martin, Lanny W., and Stevenson, Randolph T. 2001. "Government Formation in Parliamentary Democracies." American Journal of Political Science 45(1): 33–50.
Minhas, Shahryar, Hoff, Peter D., and Ward, Michael D. 2016. "A New Approach to Analyzing Coevolving Longitudinal Networks in International Relations." Journal of Peace Research 53(3): 491–505.
Ostrom, Elinor, Gardner, Roy, and Walker, James. 1994. Rules, Games, and Common-Pool Resources. Ann Arbor: University of Michigan Press.
Page, Scott E. 2011. Diversity and Complexity. Princeton, NJ: Princeton University Press.
Page, Scott E. 2018. The Model Thinker: What You Need to Know to Make Data Work for You. New York: Basic Books.
Partell, Peter J., and Palmer, Glenn. 1999. "Audience Costs and Interstate Crises: An Empirical Assessment of Fearon's Model of Dispute Outcomes." International Studies Quarterly 43(2): 389–405.
Popper, Karl R. 1972. The Logic of Scientific Discovery. 6th impression (revised). London: Hutchinson.
Ross, Don. 2008. "Ontic Structural Realism and Economics." Proceedings of the 2006 Biennial Meeting of the Philosophy of Science Association Part II. Philosophy of Science 75(5): 732–43.
Ross, Don. 2014. Philosophy of Economics. Palgrave Macmillan.
Ross, Don. 2019. "Game Theory." In The Stanford Encyclopedia of Philosophy, ed. Zalta, Edward N. (https://plato.stanford.edu/archives/win2019/entries/game-theory/).
Signorino, Curtis S. 1999. "Strategic Interaction and the Statistical Analysis of International Conflict." American Political Science Review 93(2): 279–97.
Signorino, Curtis S. 2003. "Structure and Uncertainty in Discrete Choice Models." Political Analysis 11(4): 316–44.
Signorino, Curtis S. 2007. "On Formal Theory and Statistical Methods: A Response to Carrubba, Yuen and Zorn." Political Analysis 15(4): 483–501.
Signorino, Curtis S., and Tarar, Ahmer. 2006. "A Unified Theory and Test of Extended Immediate Deterrence." American Journal of Political Science 50(3): 586–605.
Signorino, Curtis S., and Yilmaz, Kuzey. 2003. "Strategic Misspecification in Regression Models." American Journal of Political Science 47(3): 551–66.
Sober, Elliott. 1983. "Equilibrium Explanation." Philosophical Studies 43(2): 201–10.
Thurman, Harold V., and Burton, Elizabeth. 2003. Introductory Oceanography. 10th ed. Upper Saddle River, NJ: Prentice Hall.
Tomz, Michael. 2007. "Domestic Audience Costs in International Relations: An Experimental Approach." International Organization 61(4): 821–40.
Warren, T. Camber. 2010. "The Geometry of Security: Modeling Interstate Alliances as Evolving Networks." Journal of Peace Research 47(6): 697–709.
Warren, T. Camber. 2016. "Modeling the Coevolution of International and Domestic Institutions: Alliances, Democracy, and the Complex Path to Peace." Journal of Peace Research 53(3): 424–41.
Woodward, James. 2003. Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press.