Introduction
The social sciences have come a long way from the naive logical positivism of the 1930s and the positivistic social science of the early post-World War II period. The ideals of positivistic social science included formulating laws, making predictions, using quantitative methods, eschewing values, and favoring comparative analysis over case studies. These remain ideals for some social scientists—but by no means all.
For a sense of how far many social scientists depart from positivist ideals, consider a recent issue of the American Political Science Review, the “flagship” journal for political science and often thought to be a haven of neopositivism. Of the fourteen full-length empirical or methodological articles, none seeks or offers laws. None makes predictions. Three articles are qualitative (including one which also uses quantitative analysis), seven explicitly ask normative questions, and five others address values more loosely. Only one article conducts large-N comparative analysis; two articles analyze two to three countries; the other eleven articles analyze just one country or context.Footnote 1
Another example of the decay of positivism is Gary King, Robert Keohane, and Sidney Verba’s Designing Social Inquiry. This is often seen as highly positivistic,Footnote 2 yet it differs from many positivist ideals. Scientific laws are mentioned only twice and each time their necessity to science is denied.Footnote 3 Prediction is not listed as a core scientific featureFootnote 4; it is depicted several times as an important or legitimate scientific goal, but it is primarily seen as a tool for testing theories via their implications in a much broader sense than merely forecasting the future, as discussed below.Footnote 5 The authors advocate value-relevant research.Footnote 6 Rigor is about research design, not objective observation, as claimed, for example, by Hans-Georg Gadamer.Footnote 7 Uncertainty is central to science (not certainty, as claimed again by Gadamer).Footnote 8 Qualitative social science is a key focus.Footnote 9 Case studies are regularly discussed.Footnote 10
Yet many postpositivists (for example, interpretivists, constructivists, hermeneuticists, critical theorists—there are of course huge differences here) imply that positivism still dominates social science. Many postpositivist criticisms are powerful and important, as are some postpositivist recommendations about doing empirical research differently, especially those of Mark Bevir and Jason Blakely.Footnote 11
Nonetheless, many postpositivist criticisms of positivistic social science are partial and inaccurate. The criticisms are partial when they say little or nothing about scientific methodology, defined here as the logic of inference. This is a great strength of contemporary social science; I address that below. But first, I address inaccuracies. I briefly cover three issues: (1) facts and values, (2) laws, and (3) predictions.
(1) Postpositivists often claim that social science involves facts, not values.Footnote 12 That is misleading. Political scientists regularly ask normative questions, as shown above.Footnote 13 In organization studies, too, values have been relevant for decades.Footnote 14 Even when social scientists talk as if social science is only about facts, not values, what they recommend or do in practice is sometimes more subtle. For example, Rein Taagepera, trained as a physicist, sounds exactly like an extreme exponent of the fact-value divide,Footnote 15 but his substantive research often tackles normative issues.Footnote 16 Stephen Frantzich and Howard Ernst’s account of science does not mention values and seems to find normative hypotheses distasteful.Footnote 17 Yet their examples of social-scientific research sometimes involve norms.Footnote 18 Paul Kellstedt and Guy Whitten advise political scientists to “avoid normative statements … at all costs,” but they do not say to avoid norms. After all, “good political science” can “change the world.” Their examples include stopping wars and homelessness, and they discuss causal inferences about diet and health, educational effectiveness, democratic stability and life satisfaction, and race and political participation.Footnote 19
In short, social science, following Max Weber, need not be value-free; it can be value-relevant when picking topics and evaluating institutions and practices, but should seek value-neutrality in empirical analysis, trying not to let values infect empirical inferences.Footnote 20 Social scientists need not avoid values.
(2) Postpositivist critics often claim that social science is about making laws.Footnote 21 This is incorrect. While it is true that some scholars, such as Alfred Cuzán and Taagepera, do aim at laws,Footnote 22 this is now a minority position, as both authors note.Footnote 23 “In modern political science, and the social sciences more generally, scholars rarely, if ever, speak of laws,” writes Dimiter Toshkov.Footnote 24
(3) Postpositivists sometimes claim that social science is about making predictions.Footnote 25 Again, some social scientists agree,Footnote 26 but even they typically admit that prediction is now uncommon.Footnote 27 “While description and prediction remain worthy scientific goals,” writes Toshkov, “explanation gets the bulk of attention in political science.”Footnote 28 Alexander Wendt agrees: “[F]ew positivists make forecasting an important part of their work.”Footnote 29 Far from prediction being “the driving force” of science, as Bevir and Blakely claim,Footnote 30 it is not even mentioned as a core feature of science in some prominent social-science textbooks.Footnote 31 Keith Dowding and Charles Miller, who see future-oriented forecasting as “important and useful,” still regard “the principal desideratum” to be the kind of testable-implications prediction discussed below. It is this that they call “scientific prediction.”Footnote 32 Postpositivists often overlook this, as we will see.
Am I being unfair? Are postpositivist critics really discussing current social science or are they simply discussing positivist science as it used to be? Critics take different approaches here, and some take more than one in different places. Some definitely discuss current social science, as with Bent Flyvbjerg’s assertion that mainstream social scientists claim certainty.Footnote 33 Some focus primarily on older positivism, argue that it has significantly influenced modern social science, but sometimes admit—devastatingly so, I will argue—that there are profound differences between older positivism and some modern social science.Footnote 34 Some postpositivists essentially dichotomize, discussing or defending postpositivism after only attacking historical forms of positivism, overlooking more sensible social scientists.Footnote 35 This inaccurately implies that contemporary social science is more positivistic than it is. Unwary readers may assume that all social science is positivistic, reject social science, and accept the postpositivist alternative by default.
Caricaturing of social science is thus rife. Some criticisms are outdated, addressing features like law-making or prediction that are not core features of contemporary social science. By “core,” I mean something that is both important and universal (or almost universal).Footnote 36 Other postpositivist criticisms are legitimate but unrepresentative, for instance attacking positivism and then moving directly to defending postpositivism, as if all modern social science is positivistic. This, too, amounts to a caricature.
What explains such caricatures? Why do so many clever and knowledgeable people say so many partial, misleading, or incorrect things about social science? Why do so many people believe these claims? Why, in the words of Raymond Boudon, “do people adhere so readily to false or dubious ideas?”Footnote 37
I answer these questions using ideological analysis. Ideological analysis offers more insights here than does merely philosophical analysis. What we need to explain are not occasional, isolated, disparate errors—a misreading here, an overlooked citation there. These are systematic mistakes, repeated by thousands of scholars worldwide, often learned as students, and facilitated by social and institutional conventions such as poor citation practices.
In short, these are institutionalized systems of belief, in which ideological analysis specializes. These are not just intellectual errors, to be explained philosophically; empirical explanation is also needed. Simplifying considerably, purely philosophical analysis could ask what the mistakes are, but ideological analysis can also ask why mistakes arise and why so many people believe them, invoking psychological and social-political factors. Perhaps an inaccuracy is eagerly accepted by people who want it to be true, and it is then not spotted due to inadequate review procedures.
The second half of this essay thus offers speculative explanations about why such caricatures flourish. Testing these speculations, which requires tools such as ethnography, is well beyond the scope of this essay. But political theorists and philosophers have long provided empiricists with hypotheses to test.
Such tests should help to fill an important gap. There are large literatures conducting ideological analysis of anti-scientific views among citizens and of pro-scientific views among academics and more broadly. But I know of little ideological analysis of academic criticisms of modern social science.Footnote 38 My essay addresses that gap.
Before continuing, I should stress four points. First, and most important, my position does not ultimately depend on how widespread nonpositivistic social science now is. Even if it is only a tiny minority (and the above snapshot of the American Political Science Review suggests otherwise), many postpositivist claims are seriously weakened by the mere existence of more sensible forms of social science than the positivist versions they target.
Second, I have much sympathy for many criticisms of science. Much social science is conceptually simplistic, substantively banal, methodologically flawed, empirically inaccurate, normatively bland, or sleep-inducingly dull. I used to accept many postpositivist criticisms of science. Clive Payne, a statistician who helped me lose some of my anti-scientific views, once said that the normal distribution was perhaps the closest thing we have to a natural law. If the “quality” of science, however defined, is anything like a normal distribution, then unsurprisingly much science is poor. Perhaps most is poor, even in quantitative political science, which often trumpets its scientific credentials but regularly makes untenable statistical assumptions.Footnote 39 Much quantitative analysis of ideology, using the questionable liberal-conservative scale, may also be dubious.Footnote 40
Third, terminology is a tricky issue with no ideal solutions. For ease of argument, I mostly equate positivism, logical positivism, and logical empiricism. I call the critics discussed in this essay “postpositivists” and I defend “mainstream” or “current” social science (or suchlike). These terms are imperfect and risk implying two monolithic approaches, even though there is immense diversity both across the social sciences and their critics. Worse, my language might be insulting, if it implies that the postpositivists I criticize are not social scientists. In fact, postpositivists differ on this point. Gadamer is happy to criticize “science” and “modern science.” Flyvbjerg and Frank Fischer both state that “social science” or “the social sciences” have “failed,” but then present their preferred alternative as forms of social science: “phronetic social science” and “postpositivist social science,” respectively.Footnote 41 Bevir and Blakely are more nuanced. They criticize “naturalist” social science (closely linked to positivismFootnote 42), defend “interpretive social science,” and never imply that “social science” belongs to the former and not the latter.Footnote 43
Fourth, I do not depict ideology in pejorative terms as a distortion of reality. This is actually now a minority position in ideological analysis.Footnote 44 Rather, I follow most ideology theorists in seeing ideologies nonpejoratively as “clusters of beliefs in our minds.”Footnote 45 Many or most of us have ideologies and we probably all think ideologically to some degree. Science, too, can be seen ideologically, and science and technology studies (STS) examines science as an institutionalized system of beliefs.
This essay proceeds as follows. In the next section, I outline two core principles of (social) science: epistemic and methodological. Scientific methodology, a great scientific strength, is central to this essay. Yet many postpositivists largely or entirely ignore it, as I explain in the section on “The poverty of many criticisms of current social science.” Many postpositivists see science primarily in terms of its ends or its supposed assumptions rather than in terms of its means—that is, how scientists test and justify their claims. The two penultimate sections offer speculative explanations of such oversights and caricatures. The concluding section discusses how this damages people who need scientific methodology, including some postpositivists discussed here.
Two core principles of (social) science
There are many ways of characterizing science, but here I treat the sciences, including the social sciences, as having two core components: epistemic and methodological. The epistemic claim is that correct inferences can potentially be made about empirical reality. The methodological claim is that the epistemic claim requires more than observation alone. I concentrate on the methodological claim, which is central to this essay.
I define methodology as the logic of inference; that is, how we reach, test, and justify our conclusions. This meaning is not uncommon.Footnote 46 (I discuss other meanings below.) For scientists, the core methodological claim, that drawing scientific inferences requires more than observation alone, takes different forms depending on the extent to which analysis is deductive and/or inductive. In primarily deductive sciences, such as theoretical physics and mathematics, the methodological tools are those of logic, mathematics, and so on. However, here I concentrate on induction, because the social sciences are mostly and primarily inductive. Unlike deduction, which can produce certain conclusions, induction (for example, extrapolation) is always uncertain.
Testing is thus vital in the social sciences. We should question our data rather than taking data as givens. We should think against ourselves, probing the strengths and weaknesses of our ideas, rather than just looking for evidence that fits them. Far from trying to “prove” a descriptive or explanatory inference, we consider both what fits it and what does not. Testing is often relative; other descriptive or explanatory inferences are always possible—underdetermination is fundamental to science!—so we should also consider what fits and does not fit plausible alternative descriptions and explanations, not just our preferred ones. Ideally, we should remember that “fit” is theory-laden; people may disagree about what fits a hypothesis. But in many cases, we can plausibly accept, amend, or reject our hypotheses based on our analyses. This is the core of inductive scientific methodology: in essence, thinking against ourselves to see how well our ideas stand up.Footnote 47
There is much more to methodology, with great variations across and within the natural and social sciences, and vigorous debates about how best to implement the above ideas. Methodology goes hand-in-hand with mentality; you will do better research if you worry that your data or arguments might be flawed.Footnote 48
Lawyers and philosophers grasped the essence of this inductive methodology long before scientists: hearing both sides, looking at strengths and weaknesses of one’s account, modifying one’s account accordingly, and so on. However, natural and social scientists have developed the idea significantly, offering valuable tools and conceptual distinctions, for instance different kinds of validity, techniques of controlled comparison, lists of threats to internal and external validity, and so on.
Further changes await—this core scientific methodology is not “right”—but overall, inductive scientific methodology offers the best set of tools that humans have yet developed for answering empirical questions inductively. Saying this is consistent with admitting that much or most social science is poor, as noted above. But if you disagree with this claim, how would your inductive methodology differ? Is it safe to seek only evidence that fits a claim, say? Meanwhile, if your response is, instead, that what I describe as scientific methodology is “just good sense,” I would reply that scientists have significantly extended this “good sense” with tools and conceptual distinctions such as those mentioned above. If you disagree with that, please publish! We need more focused, detailed methodological debate: less discussion of alleged and often irrelevant goals, and more discussion of actual means. However, postpositivists too often sidestep this issue, as I now discuss.
The poverty of many criticisms of current social science
Many postpositivists do not adequately address scientific methodology, even though it is a core scientific feature. These oversights and misrepresentations lead to troubling caricatures.
Consider Bevir and Blakely’s account of prediction. They see prediction as a core feature of naturalist social science, but do not cite a single modern social scientist who is guilty of what they allege.Footnote 49 The closest they get is an essay by Milton Friedman that was already sixty-five years old when Bevir and Blakely published their book. Yet crucially, Bevir and Blakely focus only on one of three components of Friedman’s account of prediction—and it is the component that is the least important today.
Bevir and Blakely describe prediction as a “goal: the discovery of a science that enables predictive power and thereby the ability to control (or at least forecast) social and political outcomes.”Footnote 50 Prediction is thus either an end in itself or a means for control. Control is linked to “technocracy” and potentially “anti-democratic and anti-humanistic politics.”Footnote 51
Friedman does initially sound something like this, minus the anti-democracy and anti-humanism.Footnote 52 However, he adds two further dimensions that Bevir and Blakely overlook. First, prediction helps us test theories: “[T]he only relevant test of the validity of a hypothesis is comparison of its predictions with experience.”Footnote 53 Second, “prediction” can be about the past and present, not just the future. For example, “a hypothesis may imply that such and such must have happened in 1906 …. If a search of the records reveals that such and such did happen, the prediction is confirmed.”Footnote 54
Friedman’s term “prediction” is thus broadly equivalent to “implication.” This idea is now often called “observable implications” or “testable implications.” (I now prefer the latter phrase, because testing implications is a basic and powerful way of testing philosophical ideas, too.)
Testable implications are extremely important in scientific methodology.Footnote 55 They are vital for rigorous testing of ideas. Consider Jon Elster’s example: How might we explain the observation that standing ovations have become more common at Broadway shows? The first thing we should do, actually, is check the data.Footnote 56 If the data are reliable, we could hypothesize about causes. Might rising ticket prices be relevant, such that many audience members subconsciously want to feel like the expense was worth it? If so, we can test whether there are fewer standing ovations at cheaper shows or where businesses give employees free tickets. Since hypothesis-testing is relative, not just absolute, we should also test plausible alternative explanations: for instance, Broadway shows might have become better over time, a possibility which would also need to be investigated via testable implications.Footnote 57 Interpretive analysis of the kind Bevir and Blakely recommend may also help. However, interpretive analysis, too, will need predictions and testable implications of the kind that social scientists recommend and about which postpositivists are often largely silent (perhaps because this would be to admit that social science is about much more than just laws, values, and so on).
Elster summarizes this methodology and mindset:
[T]he advocate for the original hypotheses also has to be the devil’s advocate. One has consistently to think against oneself—to make matters as difficult for oneself as one can. We should select the strongest and most plausible alternative rival explanations, rather than accounts that can easily be refuted.Footnote 58
I have already shown in the introduction that prediction as future-oriented forecasting seems to be far less important for many contemporary social scientists than prediction via testable implications. Unfortunately, Bevir and Blakely do not discuss prediction via testable implications. They treat Friedman as discussing only forecasting rather than implications more generally (including about the past and present) and as having naive ends, when he also offers powerful means for “testing hypotheses by the success of their predictions.”Footnote 59
This misleading focus on ends over means is widespread. Martin Hollis also sidesteps Friedman’s methodology of testable implications and reads him as merely addressing prediction as an end.Footnote 60 Conflation of means and ends is even more striking in Sanford Schram’s discussion of how the natural and social sciences are “entirely different”: “The natural sciences are focused on prediction and control of the natural world, making them the wrong place to look for a model about how to produce scientific knowledge that can inform social relations.”Footnote 61 However, the second part of the sentence does not follow from the first; flawed ends need not imply flawed means. Prediction via testable implications is also a powerful tool for “how to produce scientific knowledge that can inform social relations.”
Many postpositivists thus attack social science for ends that are now far from universal, and overlook scientific methodology, a great scientific strength, which is universal or near universal. Too often, postpositivists also say little about other aspects of scientific methodology. It is not clear whether Gadamer’s Truth and Method grasps scientific methodology at all. He thinks that the essence of science is accurate observation and does not discuss inference by controlled comparison.Footnote 62 This is hardly a strong basis for challenging the idea of a social science.
Similarly, Milja Kurki and Colin Wight imply that science is just about careful, systematic observation revealing regularities.Footnote 63 Their later discussion of the logic of inference is very brief and very general.Footnote 64 Readers are unlikely to understand science from this. True, Kurki and Wight accept that “a rejection of the positivist model of science need not lead to the rejection of science,”Footnote 65 mention early advocates of political science who reject laws and recognize that “facts” may mislead,Footnote 66 and add that positivist philosophy of science has been much modified since the 1960s.Footnote 67 But if so, why discuss positivist science at such length? Why not just address current social science? Kurki and Wight do not give any examples of current social science aside from a few sentences on King, Keohane, and Verba and a brief discussion of scientific realism.Footnote 68 Neither discussion addresses methodology. This account of science could be broader, especially in a chapter in a student textbook, for this is how students often learn about science.
Bevir and Blakely, too, admit that their account of naturalistic social science perhaps includes few contemporary political scientists.Footnote 69 Why, then, spend so long criticizing something that (a) is not what many current social scientists do and (b) is far easier to criticize than what many current social scientists do? Also, (c) why sidestep the methodology of testing, a great strength of modern social science? After all, the methodology of testing derives more directly from core scientific ideas (for example, uncertainty and underdetermination) than many incidental features that Bevir and Blakely claim are central to science. (I revisit this point below.) Finally, (d) why defend interpretivism by rejecting extreme forms of social science rather than comparing interpretivism to more moderate and sensible versions? This raises unnecessary doubts about interpretivism’s defensibility. We need to know whether interpretivism does things that even sensible social scientists do not or cannot, rather than contrasting it with outdated forms of social science that Bevir and Blakely admit are now minority practices.
In short, the methodology of testing is a central and powerful feature of inductive social science, yet many postpositivists say little or nothing about it. The resulting accounts of social science are so limited as to be misleading. It is not that what is said is necessarily wrong, although it often is, but that its incompleteness amounts to a caricature. It is a bit like evaluating a political institution by covering only its benefits or only its weaknesses—and only from many years ago. If one largely overlooks scientific methodology, one cannot fairly claim that “social science has failed as science.”Footnote 70
Speculative explanations of caricatures about social science
Why are so many clever people so misleading about positivism and, explicitly or implicitly, about much current social science? Why do so many insightful commentators overlook or misconstrue scientific methodology? Ideological analysts offer useful tools for explaining such institutionalized systems of belief, as I discussed in the introductory section.
I thus combine and modify two typologies from ideological analysis. First, Jonathan Leader Maynard covers many psychological and social influences on ideological thinking in general.Footnote 71 Second, Aviva Philipp-Muller, Spike Lee, and Richard Petty explain popular anti-science responses to scientific messages by distinguishing, roughly, between the content of what is said, who says it, and how; the psychology of listeners; and group identities.Footnote 72 I adapt and slightly expand both frameworks.
This section considers ten factors—either psychological or closely linked to psychology—that help explain caricatures of social science. The next section puts more weight on institutional and structural factors, and discusses not only caricatures of social science in general, but also oversights about scientific methodology in particular.
As the introduction explained, the following ideological analysis looks beyond philosophical errors. The conventions to be discussed toward the end of this essay are far more than just philosophical errors; they are facilitated by other factors, including the institutionalization of poor citation practices. Overall, this web of interconnecting factors—psychological, social, institutional, structural—may help to explain caricatures of social science as well as oversights concerning methodology.
Maintaining consistency between beliefs
Cognitive dissonance occurs when someone receives “information that conflicts with their existing beliefs, attitudes, or behaviors. Dissonance elicits discomfort.” Resolving the discomfort can take many forms, including rejecting the new information.Footnote 73
Cognitive dissonance theory suggests that if postpositivists spot that their criticisms do not fit many practicing social scientists, they might instinctively gloss over such concerns. If you sincerely reject positivist aims and can see plausible alternatives, it may clash too much with your system of ideas to accept that many current social scientists are not as guilty as you think.
Closely linked to cognitive dissonance is confirmation bias: “[I]nformation is searched for, interpreted, and remembered in such a way that it systematically impedes the possibility that the hypothesis could be rejected.”Footnote 74 If you already “know” that scientists accept a principle, you may not look for counterexamples, let alone examples. After all, academics do not reference claims that are general knowledge, such as the number of states in the United States. (Such cognitive factors affect us all. Previous drafts of this essay featured caricatures or misreadings of those I was criticizing. Doubtless, some remain.)
Cognitive dissonance and confirmation bias are not fundamental influences; they already depend on scholars being initially convinced by caricatures of science. Why does this happen in the first place?
Cognitive efficiency: Saving time and mental energy
Many postpositivists take dangerous shortcuts, for instance citing philosophers of science more than practitioners or assuming that science’s historical foundations still apply (see the next section). Again, these will not seem like shortcuts if one “knows” these things are true; but shortcuts they are. They spare critics from engaging with actual social-science research and from confronting cognitive dissonance.
Self-esteem: The need for self-worth, self-confidence, and so on
Self-esteem is a powerful motivator. Based on my own anti-science phase, I would highlight two mechanisms here. First, countering one’s own dislikes and intellectual weaknesses. I used to criticize quantitative analysis partly for sincere reasons (such as information loss in quantitative indices), but partly because quantitative analysis made me uncomfortable. It took me years to grasp how unlikely it was that the best ways of studying the world happened to fit my personal preferences and intellectual strengths. But perhaps other people are not so shallow.
Second, if people pompously tell you to study things scientifically, you might feel better about yourself by criticizing science and looking elsewhere. This is a completely understandable reaction to external arrogance.
External arrogance, presumptuousness, or narrowness
Many advocates of science are annoyingly dismissive about alternatives. Natural scientists are sometimes breathtakingly arrogant and ignorant about social science. Economists and quantitative political scientists are often awfully smug about their own approach and crassly disparaging about other approaches. Many social scientists presumptuously talk as if theirs is the best or only approach; they sometimes present extremely narrow perspectives of what counts as social science or good research. (I am guilty of this.) Unsurprisingly, many critics overreact.
External practices
Some social scientists do hold views that postpositivists can legitimately attack, as discussed above. My point remains: not all social scientists hold such views. Postpositivists should only attack some social scientists—not social science in general.
External resource unevenness
Publishing in the most “prestigious” journals, winning grants, and securing jobs are often easier for some kinds of social-science research than for others. Humanities departments regularly lose out, too. These external factors go hand-in-hand with self-esteem. You might feel better about a bad situation by attacking the intellectual inadequacies of the people soaking up so many prestigious publications, grants, and jobs.
External resource unevenness arguably reflects a deeper structural factor: the commodification of universities. This includes linking education to job markets. Many graduates can get higher starting salaries with statistical training, encouraging many universities to beef up their quantitative training, sometimes at the expense of posts in other areas. (Misperceived) employment prospects are one reason why many universities’ humanities programs are declining.
Reactionary motivations
Ideological analysis suggests that people often evaluate claims not (only) on their own terms, but (also) on what those claims are linked to, for instance rejecting vaccines because they seem unnatural.Footnote 75 Postpositivist reactionaryism takes different forms, for instance attacking positivism or mainstream social science because it can support elitism and technocracy, or because elitism and technocracy use positivism or mainstream social science.Footnote 76 Unfortunately for postpositivists, the link is not a necessary one. We may even need social science and scientific methodology to investigate and challenge elitism and technocracy. Indeed, social scientists often analyze deliberative democracy, a (potentially) anti-elitist form of democracy that Bevir and Blakely rightly praise but whose connection with postpositivism they exaggerate.Footnote 77 Nonetheless, since science is regularly tarred with an anti-democratic brush, people are encouraged to react against it.
Many people seem to be happy with scientific ideas if they do not realize they are scientific. Years ago, I drafted a paper called “History of Political Thought as a Social Science.” Key questions in history of political thought are essentially empirical: What did Niccolo Machiavelli mean by virtú and why did he write The Prince, for example? Scientific tools are excellent for answering empirical questions, I argued, and textual interpreters can use many aspects of scientific methodology.
This was not well received, so I changed the title to “History of Political Thought as Detective-Work.” I removed the explicit discussion of science, but I kept the footnotes to social scientists and philosophers of science, and accidentally forgot to mention that textbooks on forensics and crime scene investigation explicitly treat detective work as scientific. Many people like the published paper, which is quite widely used in teaching, even though the arguments are the same as those in the science paper, which many people hated. This seems reactionary; some people accept scientific methodology until they think that they are doing something scientific.
Adopting the beliefs of the local majority
If many people around you oppose science, you may do the same. However, this factor already assumes that there is a local majority, a point to which I now turn.
Critical mass
Gadamer was a philosopher, writing before the internet, in a university department rather than a multidisciplinary Oxbridge college. He might simply not have mixed with social scientists or natural scientists who could have corrected his caricatures.Footnote 78 Gadamer might never have heard social-science papers in his department or at conferences. Also, the social science of his day was not usually very good. Such factors might have facilitated his caricatures.
Conferences are worth highlighting. Once subdisciplines and approaches get large enough, they can hold their own conferences or run multiple panels at crossdisciplinary conferences, making it even easier to avoid views that would challenge one’s preconceptions.
The critical-mass explanation can no longer be very powerful, as there are now so many opportunities to encounter different viewpoints. The pressure to adopt local majorities’ beliefs is probably not much stronger than the tendency to rebel against local majorities, but it may still have some influence, especially where education is involved.
Ideational resources: The availability of ideas or ways of reasoning
Many of the above psychological explanations already assume that someone holds anti-scientific views. But how did they get such views in the first place?
Postpositivist caricatures probably often take root in undergraduate or graduate study. They are so widespread—in print and in people’s heads—that many scholars will find my essay literally unbelievable. To return briefly to my broader argument, I must thus reiterate that it does not matter whether many modern social scientists really theorize about and practice science as naively as postpositivists say they do. What matters is that many do not. Postpositivists can and should attack positivist science, but it is highly problematic when they overlook far more sensible forms of social science.
Speculative explanations: Institutional and structural
Having primarily discussed factors that are psychological or closely linked to psychology, I now address more institutional and structural factors. Here, I address not only caricatures of science in general, but also oversights about scientific methodology in particular.
Limitations of language
Ideational resources should not just be conceived positively, as resources that people have, but also negatively, as resources that people do not have. Here, it is interesting that “methodology” has no single meaning. “Methodology” literally means “the study of methods” and some scholars use it this way.Footnote 79 Bevir and Blakely treat “methodological” as the adjective of methods.Footnote 80 (“Methodical” means something else; perhaps we should use “methodic,” instead.Footnote 81) McGrath and Johnson equate methodologies with approaches or paradigms, while Dowding initially treats a methodology as an “organizing perspective,” such as rational choice or postmodernism.Footnote 82 Flyvbjerg seems to treat “methodology” as “practical advice for doing research.”Footnote 83 This includes the logic of inference and more.
These understandings are all legitimate, as there is no “true” meaning of the term. What matters is not our language but our ideas: that we somehow address the logic of inference. Other terms cover similar ground, for example, rigor, reliability, robustness, validity, or testing,Footnote 84 so while linguistic limitations do not help, they cannot be that important.
Underemphasis on actual methodology
Much more important than these linguistic limitations is surely the widespread convention, in many fields, of ignoring the logic of inference. For example, “methodological” discussions in the history of political thought almost entirely avoid the logic of inference. Students are taught to think about different approaches in terms of “schools of thought,” such as Marxist, contextualist, or Straussian, with little practical advice on reaching and testing one’s conclusions.Footnote 85
Postpositivists often overlook methodology, too, as discussed above with reference to Gadamer, Kurki and Wight, and others. Bevir and Blakely likewise write that for naturalists, good research involves rigorous application of methods.Footnote 86 However, rigorous application of methods requires methodology, not just methods, and Bevir and Blakely say little about this. Yet the postpositivist strategy of criticizing scientific ends while sidestepping methodological means is presumably so common as to seem unobjectionable.
The poverty of positivist methodology
One reason why methodology gets less attention in this particular area reflects the poverty of methodology among Vienna Circle positivists. In their manifesto statement, the methodology of science is deductive: logical or conceptual analysis.Footnote 87 Most social science simply does not make inferences this way. Thomas Hobbes’s science of politics tried it, but failed; it works poorly for inductive questions, as it is not about testing ideas by thinking against oneself.Footnote 88
These Viennese positivists were to a considerable extent philosophers, not practicing social scientists. Otto Neurath was also a sociologist, though apparently not a very good one.Footnote 89 In fairness, most social science at the time was not great; the methodology of controlling variables to make robust inferences was not well developed. However, this highlights the deficiency of genealogical strategies, to which I now turn.
The convention of arguing genealogically
The remaining subsections discuss specific conventions and argumentative strategies that facilitate caricatures of much current social science. One common rhetorical move is genealogical, that is, discussing a phenomenon by approaching it historically. This is common and usually perfectly legitimate, for instance discussing ideology by starting with Karl Marx or Karl Mannheim. However, in discussions of science, genealogical discussions sometimes blend into what I call a “post hoc est hoc” fallacy: scientists once believed X, and so current scientists still do. This is like calling all current political scientists racist because political science was founded on racist doctrines.
The most common and seductive genealogical focus for modern science is positivism. But we have seen immense differences between positivism and contemporary social science. Bevir and Rhodes dispute this: “[E]ven if political scientists repudiate positivism, they often continue to study politics in ways that make sense only if they make positivist assumptions.”Footnote 90 The “often” is not good enough for Bevir and Rhodes’s purposes, though: as shown above, political scientists often do not make positivist assumptions. Bevir and Rhodes’s implicit admission invites the question: Is nonpositivist or noninterpretivist political science as problematic as the positivist or naturalist form they criticize?
Bevir and Blakely admit that “[p]erhaps few political scientists today would wish to identify themselves as naturalists.”Footnote 91 Why, then, immediately go on to discuss how “naturalists assume that explanations in the social sciences should be formal, ahistorical, and invariant”?Footnote 92 This certainly does not fit most case-study research, for example, including work by some political scientists whom Bevir and Blakely had just cited. Even King, Keohane, and Verba accept that theories are contingent rather than universal, typically applying in limited situations.Footnote 93
What is particularly striking about post hoc est hoc genealogies is that at precisely the same time as Vienna positivists were offering philosophical speculations about how social science could be studied, actual scientific methodology was being revolutionized by practicing scientists, with ideas such as randomization and null hypotheses.Footnote 94 These ideas are far more central to modern social science than positivism, I would suggest. Revealingly, many postpositivists focus on philosophers in Viennese armchairs in the 1930s, not actual scientists in a Rothamsted field in the same decade.
The convention of arguing via philosophical presuppositions
Underpinning the genealogical fallacy is the assumption that we can understand social science via its philosophical presuppositions. This reduces the need to cite contemporary social scientists. But presupposition is not practice; even great philosophers sometimes diverge from their explicit presuppositions, and many contemporary social scientists do not share most positivist presuppositions. If they did, citations would show this. Such citations are rarely provided. Even that would only show that some social scientists are positivistic, whereas postpositivists need to attack good social science, not flawed and outdated positivist versions.
The convention of arguing by false dichotomy
A common and seductive academic trope is to present a false dichotomy, criticize the first category, and impel readers toward the second almost by default. This is fallacious if more sensible categories are overlooked.
Examples include Bevir and Blakely’s defense of interpretive social science, Flyvbjerg’s defense of phronetic social science, and Fischer’s defense of postpositivist policy inquiry, discussed above. All reject positivistic science without adequately addressing more sensible social scientists. They simply show that their approaches are superior to bad social science.
Another aspect of dichotomizing was discussed above: linking positivism to elitism and postpositivism to democracy. This, too, is a common and fallacious argumentative strategy, as the above pairs are not necessarily linked (see the subsection on “Reactionary motivations”). The social world is not a determinate world, a world of straight lines, but a world of tendencies and possibilities.
The convention of seeing science and other options as alternatives
Perhaps the most insidious dichotomizing tendency is to present these approaches as exclusive: you can either have scientific aims and means or nonscientific ones. On this view, you cannot use scientific means for nonscientific ends. This was exemplified by Schram (see above) and seems to underlie many postpositivist criticisms. In fairness, most social scientists would make similar arguments. I am unusual in applying scientific methodology in the humanities, as the concluding section below discusses.Footnote 95
Citation conventions and (lack of) constraints
Citation practices are surprisingly important. Without adequate citations, we can easily make sweeping, misleading generalizations or even false claims about what someone writes. “Whereof one cannot give page numbers, one should stay silent.”Footnote 96 We all fall short in our citations sometimes. It is legitimate to criticize us when we do.
Inadequate citation partly reflects psychology: detailed referencing is tiresome for authors and readers. Besides, if you “know” claims are true, detailed referencing seems unnecessary. But three institutional features of publishing are also worth noting. First, word limits mean we usually cannot cite everything we want, so we often take shortcuts, especially in journal articles. Second, we often avoid page numbers when teaching, whether because PowerPoint slides do not have much room or simply because the convention is not to give precise citations when speaking. (How tiresome that would be!) Third, and most important, is the bad convention whereby page numbers in publications are not required except for direct quotations.
A lack of institutional constraints also matters. Consider how U.S. Supreme Court justices’ ideology affects their decisions. Analysts often discuss the lack of institutional constraints as an important institutional “influence”: justices can act sincerely because there is little institutional check on them.Footnote 97 The same applies here. Our citations would be more accurate if they were regularly challenged, for instance by journal referees, or if publishers required authors to support broad claims with evidence.
Such institutional explanations cannot excuse many postpositivists’ citation practices. An egregious example is Gadamer’s Truth and Method, which mentions “modern science” almost sixty times without citing even one modern natural or social scientist.Footnote 98 Perhaps Gadamer was worried that his book was too long. The most recent scientist Gadamer cites is the physicist Hermann von Helmholtz, who died sixty-five years before Gadamer’s book was written; and Gadamer’s account of Helmholtz is flawed and misleading.Footnote 99
Many later commentators do not mention the inadequate account of science in Gadamer’s book.Footnote 100 No individual is necessarily culpable here; one would not expect everyone to mention Gadamer’s caricatures. However, when so few people do—and when the same applies to many other caricatures of science—the collective result is the systemic misrepresentation of scientific ideas through inadequate citation practices.
Conclusion
Widespread and untenable caricatures of science reflect many psychological, social, institutional, and structural factors, including linguistic limitations, conventions, and argumentative strategies. Scientists and early positivists are partly at fault—for example, the arrogance and presumptuousness of many natural and social scientists, the poverty of 1930s positivist methodology, and a widespread tendency (not just among scientists) to see science and the humanities as essentially different.
But ultimately, it is postpositivists who primarily drive these caricatures. This reflects many completely understandable psychological factors and their interaction with other factors listed above, including conventions, but it also reflects factual errors and fallacious argumentative strategies.
So what? Why does this argument matter? Because overlooking scientific methodology robs many people of something important. Many humanities scholars ask empirical or quasi-empirical questions; scientific methodology can help them test their ideas—seeing what fits and does not fit, for one’s own and for competing interpretations, using different data or methods, reporting uncertainty, and so on.
For example, many scholars ask empirical questions in the history of political thought and philosophy, such as what Hobbes means by “representation,” and why he wrote what he wrote. The same applies in literature. Who wrote William Shakespeare’s plays? Who influenced Johann Goethe? Did William Blake’s views change significantly after the French Revolution? If so, how much can we use his pre-Revolutionary writings to inform his post-Revolutionary ones? I know of no better logic of inference for answering such questions than a scientific one—questioning the evidence, testing different interpretations open-mindedly by asking what we might see if they were true or untrue, and so on.Footnote 101 Scientific methodology can even apply in philosophical thought experiments, where we need to manipulate variables in our scenarios to assess their relative effect, their interactions, the impact of missing variables, the generalizability of the tests, and so on.Footnote 102
Crucially, some aspects of scientific methodology will be vital even in some postpositivists’ claimed alternatives to social science, including Bevir and Blakely’s interpretive social science and Flyvbjerg’s phronetic social science (as David Laitin notes).Footnote 103 Flyvbjerg seems to admit this: “Validity, for the phronetic researcher, is defined in the conventional manner as well-grounded evidence and arguments, and the procedures for ensuring validity are no different in phronetic planning research than in other parts of the social sciences.”Footnote 104 This is an important admission. Alas, Flyvbjerg’s caricatures reemerge almost immediately. Phronetic planning researchers—unlike, he implies, other social scientists—“do not claim final, indisputable objectivity for their validity claims.”Footnote 105 Yet leading political-science textbooks flatly disavow such claims to objectivity.Footnote 106
Indeed, interpretive social scientists sometimes ask empirical questions to which answers can potentially be right or wrong, for instance whether a particular policy caused a drought.Footnote 107 Such questions would benefit from a scientific methodology. This even holds for normatively oriented questions. For example, Bevir and Blakely praise the interpretive social science of Judy Innes and her collaborators, investigating how deliberative planning—incorporating citizens and stakeholders into policymaking—helped previously intransigent policy discussions finally move forward.Footnote 108 Yet even this normative question—that is, how well deliberative planning works—benefits from the even-handedness essential to scientific methodology. In one study, Innes reports nothing but success for eight deliberative planning initiatives.Footnote 109 Sarah Connick and Innes’s discussion of three deliberative initiatives is mainly positive, only briefly mentioning problems.Footnote 110 Yet one of these three projects had serious issues, as the authors note elsewhere.Footnote 111 Fischer focuses on successful experiments in deliberative democracy, leaving just a few lines for problemsFootnote 112 within pages of positive reporting.Footnote 113 Here, too, postpositivists could learn from social-science advice about careful combination of facts and values.
Yet Bevir and Blakely assert—without references—that naturalism is about “eliminating values and political engagement from the study of human behavior” and making social research “more or less independent of the project of political and normative critique.”Footnote 114 Neither claim is tenable, as discussed above. I myself started my career doing this kind of normatively oriented empirical research, although only after my Ph.D. supervisor John Curtice showed me the value of couching my empirical questions normatively.Footnote 115
Bevir and Blakely are absolutely right that social analysis can and often should be normatively engaged. And interpretive social scientists are probably more open to normative questions than are many current social scientists. But it is wrong to claim that mainstream social scientists cannot and do not address norms. And postpositivists asking such questions need aspects of scientific methodology to answer them. Defenses of postpositivism by scholars such as Bevir and Blakely would be stronger if they were not built on caricatures of social science. Defenses of postpositivism would also be stronger if they did not leave discussions of methodology to social scientists.
Some readers may notice that I have already used some ideas from ideological analysis to encourage a more constructive way forward. After all, ideological analysis suggests that simply pointing out errors may have little effect. Let me thus end on a more optimistic note. Many important intellectual developments come less from senior figures changing their minds than from junior people doing new things. We need a new generation of scholars to show how scientific and postpositivist approaches can learn from and build on each other. There are many opportunities for such publications, including both theorizing and substantive research. But such analyses will be better if they take scientific methodology seriously and avoid caricatures.
Acknowledgments
This essay was written while I was a Core Fellow at the Helsinki Collegium for Advanced Studies, University of Helsinki. For comments and criticisms on earlier drafts, I thank my readers and reviewers, anonymous and nonymous: Keith Dowding, Michael Frazer, Tomáš Halamka, Tereza Křepelová, Jonathan Leader Maynard, Heikki Patomäki, Nahshon Perez, and Liz Ralph-Morrow. I also thank Dave Schmidtz and the other contributors to this volume. Errors and caricatures that remain are my own.
Competing interests
The author declares none.