
Social Science and Its Critics: An Ideological Analysis

Published online by Cambridge University Press:  13 January 2025

Adrian Blau*
Affiliation:
Department of Political Economy, King’s College London

Abstract

Why do many postpositivists caricature contemporary social science? Why make incorrect claims, for instance about social scientists avoiding values? Why discuss features that often no longer matter, such as seeking laws or predictions? Why reject extreme forms of social science without discussing more sensible forms? Why say little or nothing about scientific methodology, which is a great strength of recent social science? To explain such oversights and caricatures, philosophical analysis will not suffice. These are not isolated intellectual errors, but systematic ones, made by numerous scholars and fostered by social practices and institutional conventions. We thus need ideological analysis, which specializes in explaining institutionalized systems of belief. Speculative explanations are offered for postpositivist caricatures, including not only psychological factors, but also external ones (for example, the arrogance of many social scientists), limitations of language (for example, the ambiguity of the term ‘methodology’), rhetorical strategies (for example, genealogical approaches), and conventions (for example, bad citation practices).

Research Article

© 2025 Social Philosophy & Policy Foundation. Printed in the USA

Introduction

The social sciences have come a long way from the naive logical positivism of the 1930s and the positivistic social science of the early post-World War II period. Positivistic social science had ideals including making laws and predictions, using quantitative methods, eschewing values, and conducting comparative analysis rather than case studies. These remain ideals for some social scientists—but by no means all.

For a sense of how far many social scientists depart from positivist ideals, consider a recent issue of the American Political Science Review, the “flagship” journal for political science and often thought to be a haven of neopositivism. Of the fourteen full-length empirical or methodological articles, none seeks or offers laws. None makes predictions. Three articles are qualitative (including one which also uses quantitative analysis), seven explicitly ask normative questions, and five others address values more loosely. Only one article conducts large-N comparative analysis; two articles analyze two to three countries; the other eleven articles analyze just one country or context.Footnote 1

Another example of the decay of positivism is Gary King, Robert Keohane, and Sidney Verba’s Designing Social Inquiry. This is often seen as highly positivistic,Footnote 2 yet it departs from many positivist ideals. Scientific laws are mentioned only twice, and each time their necessity to science is denied.Footnote 3 Prediction is not listed as a core scientific featureFootnote 4; it is depicted several times as an important or legitimate scientific goal, but primarily as a tool for testing theories via their implications, in a much broader sense than merely forecasting the future, as discussed below.Footnote 5 The authors advocate value-relevant research.Footnote 6 Rigor is about research design, not objective observation (which Hans-Georg Gadamer, for example, claims is central to science).Footnote 7 Uncertainty, not certainty, is central to science (again contra Gadamer).Footnote 8 Qualitative social science is a key focus.Footnote 9 Case studies are regularly discussed.Footnote 10

Yet many postpositivists (for example, interpretivists, constructivists, hermeneuticists, critical theorists—there are of course huge differences here) imply that positivism still dominates social science. Many postpositivist criticisms are powerful and important, as are some postpositivist recommendations about doing empirical research differently, especially those of Mark Bevir and Jason Blakely.Footnote 11

Nonetheless, many postpositivist criticisms of positivistic social science are partial and inaccurate. The criticisms are partial when they say little or nothing about scientific methodology, defined here as the logic of inference. This is a great strength of contemporary social science; I return to it below. But first, I address inaccuracies. I briefly cover three issues: (1) facts and values, (2) laws, and (3) predictions.

(1) Postpositivists often claim that social science involves facts, not values.Footnote 12 That is misleading. Political scientists regularly ask normative questions, as shown above.Footnote 13 In organization studies, too, values have been relevant for decades.Footnote 14 Even when social scientists talk as if social science is only about facts, not values, what they recommend or do in practice is sometimes more subtle. For example, Rein Taagepera, trained as a physicist, sounds exactly like an extreme exponent of the fact-value divide,Footnote 15 but his substantive research often tackles normative issues.Footnote 16 Stephen Frantzich and Howard Ernst’s account of science does not mention values and seems to find normative hypotheses distasteful.Footnote 17 Yet their examples of social-scientific research sometimes involve norms.Footnote 18 Paul Kellstedt and Guy Whitten advise political scientists to “avoid normative statements … at all costs,” but they do not say to avoid norms. After all, “good political science” can “change the world.” Their examples include stopping wars and homelessness, and they discuss causal inferences about diet and health, educational effectiveness, democratic stability and life satisfaction, and race and political participation.Footnote 19

In short, social science, following Max Weber, need not be value-free; it can be value-relevant when picking topics and evaluating institutions and practices, but should seek value-neutrality in empirical analysis, trying not to let values infect empirical inferences.Footnote 20 Social scientists need not avoid values.

(2) Postpositivist critics often claim that social science is about making laws.Footnote 21 This is incorrect. While it is true that some scholars, such as Alfred Cuzán and Taagepera, do aim at laws,Footnote 22 this is now a minority position, as both authors note.Footnote 23 “In modern political science, and the social sciences more generally, scholars rarely, if ever, speak of laws,” writes Dimiter Toshkov.Footnote 24

(3) Postpositivists sometimes claim that social science is about making predictions.Footnote 25 Again, some social scientists agree,Footnote 26 but even they typically admit that prediction is now uncommon.Footnote 27 “While description and prediction remain worthy scientific goals,” writes Toshkov, “explanation gets the bulk of attention in political science.”Footnote 28 Alexander Wendt agrees: “[F]ew positivists make forecasting an important part of their work.”Footnote 29 Far from prediction being “the driving force” of science, as Bevir and Blakely claim,Footnote 30 it is not even mentioned as a core feature of science in some prominent social-science textbooks.Footnote 31 Keith Dowding and Charles Miller, who see future-oriented forecasting as “important and useful,” still regard “the principal desideratum” to be the kind of testable-implications prediction discussed below. It is this that they call “scientific prediction.”Footnote 32 Postpositivists often overlook this, as we will see.

Am I being unfair? Are postpositivist critics really discussing current social science or are they simply discussing positivist science as it used to be? They take different approaches here and some take more than one approach in different places. Some definitely discuss current social science, as with Bent Flyvbjerg’s assertion that mainstream social scientists claim certainty.Footnote 33 Some focus primarily on older positivism, argue that it has significantly influenced modern social science, but sometimes admit—devastatingly so, I will argue—that there are profound differences between older positivism and some modern social science.Footnote 34 Some postpositivists essentially dichotomize, discussing or defending postpositivism after only attacking historical forms of positivism, overlooking more sensible social scientists.Footnote 35 This inaccurately implies that contemporary social science is more positivistic than it is. Unwary readers may assume that all social science is positivistic, reject social science, and accept the postpositivist alternative by default.

Caricaturing of social science is thus rife. Some criticisms are outdated, addressing features like law-making or prediction that are not core features of contemporary social science. By “core,” I mean something that is both important and universal (or almost universal).Footnote 36 Other postpositivist criticisms are legitimate but unrepresentative, for instance attacking positivism and then moving directly to defending postpositivism, as if all modern social science is positivistic. This, too, amounts to a caricature.

What explains such caricatures? Why do so many clever and knowledgeable people say so many partial, misleading, or incorrect things about social science? Why do so many people believe these claims? Why, in the words of Raymond Boudon, “do people adhere so readily to false or dubious ideas?”Footnote 37

I answer these questions using ideological analysis. Ideological analysis offers more insights here than does merely philosophical analysis. What we need to explain are not occasional, isolated, disparate errors—a misreading here, an overlooked citation there. These are systematic mistakes, repeated by thousands of scholars worldwide, often learned as students, and facilitated by social practices and institutional conventions such as poor citation practices.

In short, these are institutionalized systems of belief, in which ideological analysis specializes. These are not just intellectual errors, to be explained philosophically; empirical explanation is also needed. Simplifying considerably, purely philosophical analysis could ask what the mistakes are, but ideological analysis can also ask why mistakes arise and why so many people believe them, invoking psychological and social-political factors. Perhaps an inaccuracy is eagerly accepted by people who want it to be right, and it is then not spotted due to inadequate review procedures.

The second half of this essay thus offers speculative explanations about why such caricatures flourish. Testing these speculations, which requires tools such as ethnography, is well beyond the scope of this essay. But political theorists and philosophers have long provided empiricists with hypotheses to test.

Such tests should help to fill an important gap. There are large literatures conducting ideological analysis of anti-scientific views among citizens and of pro-scientific views among academics and more broadly. But I know of little ideological analysis of academic criticisms of modern social science.Footnote 38 My essay addresses that gap.

Before continuing, I should stress four points. First, and most important, my position does not ultimately depend on how widespread nonpositivistic social science now is. Even if it is only a tiny minority (and the above snapshot of the American Political Science Review suggests otherwise), many postpositivist claims are seriously weakened by the mere existence of more sensible forms of social science than the positivist versions they target.

Second, I have much sympathy for many criticisms of science. Much social science is conceptually simplistic, substantively banal, methodologically flawed, empirically inaccurate, normatively bland, or sleep-inducingly dull. I used to accept many postpositivist criticisms of science. Clive Payne, a statistician who helped me lose some of my anti-scientific views, once said that the normal distribution was perhaps the closest thing we have to a natural law. If the “quality” of science, however defined, is anything like a normal distribution, then unsurprisingly much science is poor. Perhaps most is poor, even in quantitative political science, which often trumpets its scientific credentials but regularly makes untenable statistical assumptions.Footnote 39 Much quantitative analysis of ideology, using the questionable liberal-conservative scale, may also be dubious.Footnote 40

Third, terminology is a tricky issue with no ideal solutions. For ease of argument, I mostly equate positivism, logical positivism, and logical empiricism. I call the critics discussed in this essay “postpositivists” and I defend “mainstream” or “current” social science (or suchlike). These terms are imperfect and risk implying two monolithic approaches, even though there is immense diversity both across the social sciences and their critics. Worse, my language might be insulting, if it implies that the postpositivists I criticize are not social scientists. In fact, postpositivists differ on this point. Gadamer is happy to criticize “science” and “modern science.” Flyvbjerg and Frank Fischer both state that “social science” or “the social sciences” have “failed,” but then present their preferred alternative as forms of social science: “phronetic social science” and “postpositivist social science,” respectively.Footnote 41 Bevir and Blakely are more nuanced. They criticize “naturalist” social science (closely linked to positivismFootnote 42), defend “interpretive social science,” and never imply that “social science” belongs to the former and not the latter.Footnote 43

Fourth, I do not depict ideology in pejorative terms as a distortion of reality. This is actually now a minority position in ideological analysis.Footnote 44 Rather, I follow most ideology theorists in seeing ideologies nonpejoratively as “clusters of beliefs in our minds.”Footnote 45 Many or most of us have ideologies and we probably all think ideologically to some degree. Science, too, can be seen ideologically, and science and technology studies (STS) examines science as an institutionalized system of beliefs.

This essay proceeds as follows. In the next section, I outline two core principles of (social) science: epistemic and methodological. Scientific methodology, a great scientific strength, is central to this essay. Yet many postpositivists largely or entirely ignore it, as I explain in the section on “The poverty of many criticisms of current social science.” Many postpositivists see science primarily in terms of its ends or its supposed assumptions rather than in terms of its means—that is, how scientists test and justify their claims. The two penultimate sections offer speculative explanations of such oversights and caricatures. The concluding section discusses how this damages people who need scientific methodology, including some postpositivists discussed here.

Two core principles of (social) science

There are many ways of characterizing science, but here I treat the sciences, including the social sciences, as having two core components: epistemic and methodological. The epistemic claim is that correct inferences can potentially be made about empirical reality. The methodological claim is that the epistemic claim requires more than observation alone. I concentrate on the methodological claim, which is central to this essay.

I define methodology as the logic of inference; that is, how we reach, test, and justify our conclusions. This meaning is not uncommon.Footnote 46 (I discuss other meanings below.) For scientists, the core methodological claim, that drawing scientific inferences requires more than observation alone, takes different forms depending on the extent to which analysis is deductive and/or inductive. In primarily deductive sciences, such as theoretical physics and mathematics, the methodological tools are those of logic, mathematics, and so on. However, here I concentrate on induction, because the social sciences are primarily inductive. Unlike deduction, which can produce certain conclusions, induction (for example, extrapolation) is always uncertain.

Testing is thus vital in the social sciences. We should question our data rather than taking data as givens. We should think against ourselves, probing the strengths and weaknesses of our ideas, rather than just looking for evidence that fits them. Far from trying to “prove” a descriptive or explanatory inference, we consider both what does and does not fit it. Testing is often relative; other descriptive or explanatory inferences are always possible—underdetermination is fundamental to science!—so we should also consider what does and does not fit plausible alternative descriptions and explanations, not just our preferred ones. Ideally, we should remember that “fit” is theory-laden; people may disagree about what fits a hypothesis. But in many cases, we can plausibly accept, amend, or reject our hypotheses based on our analyses. This is the core of inductive scientific methodology: in essence, thinking against ourselves to see how well our ideas stand up.Footnote 47

There is much more to methodology, with great variations across and within the natural and social sciences, and vigorous debates about how best to implement the above ideas. Methodology goes hand-in-hand with mentality; you will do better research if you worry that your data or arguments might be flawed.Footnote 48

Lawyers and philosophers grasped the essence of this inductive methodology long before scientists: hearing both sides, looking at strengths and weaknesses of one’s account, modifying one’s account accordingly, and so on. However, natural and social scientists have developed the idea significantly, offering valuable tools and conceptual distinctions, for instance different kinds of validity, techniques of controlled comparison, lists of threats to internal and external validity, and so on.

Further changes await—this core scientific methodology is not “right”—but overall, inductive scientific methodology offers the best set of tools that humans have yet developed for answering empirical questions inductively. Saying this is consistent with admitting that much or most social science is poor, as noted above. But if you disagree with this claim, how would your inductive methodology differ? Is it safe to seek only evidence that fits a claim, say? Meanwhile, if your response is, instead, that what I describe as scientific methodology is “just good sense,” I would reply that scientists have significantly extended this “good sense” with tools and conceptual distinctions such as those mentioned above. If you disagree with that, please publish! We need more focused, detailed methodological debate: less discussion of alleged and often irrelevant goals, and more discussion of actual means. However, postpositivists too often sidestep this issue, as I now discuss.

The poverty of many criticisms of current social science

Many postpositivists do not adequately address scientific methodology, even though it is a core scientific feature. These oversights and misrepresentations lead to troubling caricatures.

Consider Bevir and Blakely’s account of prediction. They see prediction as a core feature of naturalist social science, but do not cite a single modern social scientist who is guilty of what they allege.Footnote 49 The closest they get is an essay by Milton Friedman that was already sixty-five years old when Bevir and Blakely published their book. Yet crucially, Bevir and Blakely focus only on one of three components of Friedman’s account of prediction—and it is the component that is the least important today.

Bevir and Blakely describe prediction as a “goal: the discovery of a science that enables predictive power and thereby the ability to control (or at least forecast) social and political outcomes.”Footnote 50 Prediction is thus either an end in itself or a means for control. Control is linked to “technocracy” and potentially “anti-democratic and anti-humanistic politics.”Footnote 51

Friedman does initially talk in something like these terms, minus the anti-democracy and anti-humanism.Footnote 52 However, he adds two further dimensions that Bevir and Blakely overlook. First, prediction helps us test theories: “[T]he only relevant test of the validity of a hypothesis is comparison of its predictions with experience.”Footnote 53 Second, “prediction” can be about the past and present, not just the future. For example, “a hypothesis may imply that such and such must have happened in 1906 …. If a search of the records reveals that such and such did happen, the prediction is confirmed.”Footnote 54

Friedman’s term “prediction” is thus broadly equivalent to “implication.” This idea is now often called “observable implications” or “testable implications.” (I now prefer the latter phrase, because it is a basic and powerful way of testing philosophical ideas, too.)

Testable implications are extremely important in scientific methodology.Footnote 55 They are vital for rigorous testing of ideas. Consider Jon Elster’s example: How might we explain the observation that standing ovations have become more common at Broadway shows? The first thing we should do, actually, is check the data.Footnote 56 If the data are reliable, we could hypothesize about causes. Might rising ticket prices be relevant, such that many audience members subconsciously want to feel like the expense was worth it? If so, we can test whether there are fewer standing ovations at cheaper shows or where businesses give employees free tickets. Since hypothesis-testing is relative, not just absolute, we should also test plausible alternative explanations: for instance, Broadway shows might have become better over time, a possibility which would also need to be investigated via testable implications.Footnote 57 Interpretive analysis of the kind Bevir and Blakely recommend may also help. However, interpretive analysis, too, will need predictions and testable implications of the kind that social scientists recommend and about which postpositivists are often largely silent (perhaps because this would be to admit that social science is about much more than just laws, values, and so on).

Elster summarizes this methodology and mindset:

[T]he advocate for the original hypotheses also has to be the devil’s advocate. One has consistently to think against oneself—to make matters as difficult for oneself as one can. We should select the strongest and most plausible alternative rival explanations, rather than accounts that can easily be refuted.Footnote 58

I have already shown in the introduction that prediction as future-oriented forecasting seems to be far less important for many contemporary social scientists than prediction via testable implications. Unfortunately, Bevir and Blakely do not discuss prediction via testable implications. They treat Friedman as discussing only forecasting rather than implications more generally (including about the past and present) and as having naive ends, when he also offers powerful means for “testing hypotheses by the success of their predictions.”Footnote 59

This misleading focus on ends over means is widespread. Martin Hollis also sidesteps Friedman’s methodology of testable implications and reads him as merely addressing prediction as an end.Footnote 60 Conflation of means and ends is even more striking in Sanford Schram’s discussion of how the natural and social sciences are “entirely different”: “The natural sciences are focused on prediction and control of the natural world, making them the wrong place to look for a model about how to produce scientific knowledge that can inform social relations.”Footnote 61 However, the second part of the sentence does not follow from the first; flawed ends need not imply flawed means. Prediction via testable implications is also a powerful tool for “how to produce scientific knowledge that can inform social relations.”

Many postpositivists thus attack social science for ends that are now far from universal, and overlook scientific methodology, a great scientific strength, which is universal or near universal. Too often, postpositivists also say little about other aspects of scientific methodology. It is not clear whether Gadamer’s Truth and Method grasps scientific methodology at all. He thinks that the essence of science is accurate observation and does not discuss inference by controlled comparison.Footnote 62 This is hardly a strong basis for challenging the idea of a social science.

Similarly, Milja Kurki and Colin Wight imply that science is just about careful, systematic observation revealing regularities.Footnote 63 Their later discussion of the logic of inference is very brief and very general.Footnote 64 Readers are unlikely to understand science from this. True, Kurki and Wight accept that “a rejection of the positivist model of science need not lead to the rejection of science,”Footnote 65 mention early advocates of political science who reject laws and recognize that “facts” may mislead,Footnote 66 and add that positivist philosophy of science has been much modified since the 1960s.Footnote 67 But if so, why discuss positivist science at such length? Why not just address current social science? Kurki and Wight do not give any examples of current social science aside from a few sentences on King, Keohane, and Verba and a brief discussion of scientific realism.Footnote 68 Neither discussion addresses methodology. This account of science could be broader, especially in a chapter in a student textbook, for this is how students often learn about science.

Bevir and Blakely, too, admit that their account of naturalistic social science perhaps includes few contemporary political scientists.Footnote 69 Why, then, spend so long criticizing something that (a) is not what many current social scientists do and (b) is far easier to criticize than what many current social scientists do? Also, (c) why sidestep the methodology of testing, a great strength of modern social science? After all, the methodology of testing derives more directly from core scientific ideas (for example, uncertainty and underdetermination) than many incidental features that Bevir and Blakely claim are central to science. (I revisit this point below.) Finally, (d) why defend interpretivism by rejecting extreme forms of social science rather than comparing interpretivism to more moderate and sensible versions? This raises unnecessary doubts about interpretivism’s defensibility. We need to know whether interpretivism does things that even sensible social scientists do not or cannot, rather than contrasting it with outdated forms of social science that Bevir and Blakely admit are now minority practices.

In short, the methodology of testing is a central and powerful feature of inductive social science, yet many postpositivists say little or nothing about it. The resulting accounts of social science are so limited as to be misleading. It is not that what is said is necessarily wrong, although it often is, but that its incompleteness amounts to a caricature. It is a bit like evaluating a political institution by covering only its benefits or only its weaknesses—and only from many years ago. If one largely overlooks scientific methodology, one cannot fairly claim that “social science has failed as science.”Footnote 70

Speculative explanations of caricatures about social science

Why are so many clever people so misleading about positivism and, explicitly or implicitly, about much current social science? Why do so many insightful commentators overlook or misconstrue scientific methodology? Ideological analysts offer useful tools for explaining such institutionalized systems of belief, as I discussed in the introductory section.

I thus combine and modify two typologies from ideological analysis. First, Jonathan Leader Maynard covers many psychological and social influences on ideological thinking in general.Footnote 71 Second, Aviva Philipp-Muller, Spike Lee, and Richard Petty explain popular anti-science responses to scientific messages by distinguishing, roughly, between the content of what is said, who says it, and how; the psychology of listeners; and group identities.Footnote 72 I adapt and slightly expand both frameworks.

This section considers ten factors—either psychological or closely linked to psychology—that help explain caricatures of social science. The next section puts more weight on institutional and structural factors, and discusses not only caricatures of social science in general, but also oversights about scientific methodology in particular.

As the introduction explained, the following ideological analysis looks beyond philosophical errors. The conventions to be discussed toward the end of this essay are far more than just philosophical errors; they are facilitated by other factors, including the institutionalization of poor citation practices. Overall, this web of interconnecting factors—psychological, social, institutional, structural—may help to explain caricatures of social science as well as oversights concerning methodology.

Maintaining consistency between beliefs

Cognitive dissonance occurs when someone receives “information that conflicts with their existing beliefs, attitudes, or behaviors. Dissonance elicits discomfort.” Resolving the discomfort can take many forms, including rejecting the new information.Footnote 73

Cognitive dissonance theory suggests that if postpositivists spot that their criticisms do not fit many practicing social scientists, they might instinctively gloss over such concerns. If you sincerely reject positivist aims and can see plausible alternatives, it may clash too much with your system of ideas to accept that many current social scientists are not as guilty as you think.

Closely linked to cognitive dissonance is confirmation bias: “[I]nformation is searched for, interpreted, and remembered in such a way that it systematically impedes the possibility that the hypothesis could be rejected.”Footnote 74 If you already “know” that scientists accept a principle, you may not look for counterexamples, let alone examples. After all, academics do not reference claims that are general knowledge, such as the number of states in the United States. (Such cognitive factors affect us all. Previous drafts of this essay featured caricatures or misreadings of those I was criticizing. Doubtless, some remain.)

Cognitive dissonance and confirmation bias are not fundamental influences; they already depend on scholars being initially convinced by caricatures of science. Why does this happen in the first place?

Cognitive efficiency: Saving time and mental energy

Many postpositivists take dangerous shortcuts, for instance citing philosophers of science more than practitioners or assuming that science’s historical foundations still apply (see the next section). Again, these will not seem like shortcuts if one “knows” these things are true; but shortcuts they are. They prevent critics from engaging with more actual social-science research and from facing cognitive dissonance.

Self-esteem: The need for self-worth, self-confidence, and so on

Self-esteem is a powerful motivator. Based on my own anti-science phase, I would discuss two mechanisms here. First, countering one’s own dislikes and intellectual weaknesses. I used to criticize quantitative analysis partly for sincere reasons (such as information loss in quantitative indices), but partly because quantitative analysis made me uncomfortable. It took me years to grasp how unlikely it was that the best ways of studying the world happened to fit my personal likes and intellectual strengths. But perhaps other people are not so shallow.

Second, if people pompously tell you to study things scientifically, you might feel better about yourself by criticizing science and looking elsewhere. This is a completely understandable reaction to external arrogance.

External arrogance, presumptuousness, or narrowness

Many advocates of science are annoyingly dismissive about alternatives. Natural scientists are sometimes breathtakingly arrogant and ignorant about social science. Economists and quantitative political scientists are often awfully smug about their own approach and crassly disparaging about other approaches. Many social scientists presumptuously talk as if theirs is the best or only approach; they sometimes present extremely narrow perspectives on what counts as social science or good research. (I am guilty of this.) Unsurprisingly, many critics overreact.

External practices

Some social scientists do hold views that postpositivists can legitimately attack, as discussed above. My point remains: not all social scientists hold such views. Postpositivists should only attack some social scientists—not social science in general.

External resource unevenness

Publication in the most “prestigious” journals, grant funding, and jobs often come more easily to some kinds of social-science research than to others. Humanities departments regularly lose out, too. These external factors go hand-in-hand with self-esteem. You might feel better about a bad situation by attacking the intellectual inadequacies of the people soaking up so many prestigious publications, grants, and jobs.

External resource unevenness arguably reflects a deeper structural factor: the commodification of universities. This includes linking education to job markets. Many graduates can get higher starting salaries with statistical training, encouraging many universities to beef up their quantitative training, sometimes at the expense of jobs in other areas. (Misperceived) employment prospects are one reason why many universities’ humanities programs are declining.

Reactionary motivations

Ideological analysis suggests that people often evaluate claims not (only) on their own terms, but (also) on what those claims are linked to, for instance rejecting vaccines because they seem unnatural.Footnote 75 Postpositivist reactionaryism takes different forms, for instance attacking positivism or mainstream social science because it can support elitism and technocracy or because elitism and technocracy use positivism or mainstream social science.Footnote 76 Unfortunately for postpositivists, the link is not a necessary one. We may even need social science and scientific methodology to investigate and challenge elitism and technocracy. Indeed, social scientists often analyze deliberative democracy, a (potentially) anti-elitist form of democracy that Bevir and Blakely rightly praise but whose connection with postpositivism they exaggerate.Footnote 77 Nonetheless, because science is regularly tarred with an anti-democratic brush, people are encouraged to react against it.

Many people seem to be happy with scientific ideas if they do not realize they are scientific. Years ago, I drafted a paper called “History of Political Thought as a Social Science.” Key questions in history of political thought are essentially empirical: What did Niccolò Machiavelli mean by virtù and why did he write The Prince, for example? Scientific tools are excellent for answering empirical questions, I argued, and textual interpreters can use many aspects of scientific methodology.

This was not well received, so I changed the title to “History of Political Thought as Detective-Work.” I removed the explicit discussion of science, but I kept the footnotes to social scientists and philosophers of science, and accidentally forgot to mention that textbooks on forensics and crime scene investigation explicitly treat detective work as scientific. Many people like the published paper, which is quite widely used in teaching, even though the arguments are the same as those in the science paper, which many people hated. This seems reactionary; some people accept scientific methodology until they think that they are doing something scientific.

Adopting the beliefs of the local majority

If many people around you oppose science, you may do the same. However, this factor already assumes that there is a local majority, a point to which I now turn.

Critical mass

Gadamer was a philosopher, writing before the internet, in a university department rather than a multidisciplinary Oxbridge college. He might simply not have mixed with social scientists or natural scientists who could have corrected his caricatures.Footnote 78 Gadamer might never have heard social-science papers in his department or at conferences. Also, the social science of his day was not usually very good. Such factors might have facilitated his caricatures.

Conferences are worth highlighting. Once subdisciplines and approaches get large enough, they can hold their own conferences or run multiple panels at crossdisciplinary conferences, making it even easier to avoid views that would challenge one’s preconceptions.

The critical-mass explanation can no longer be so powerful, as there are now so many opportunities to encounter different viewpoints. The pressure to adopt a local majority’s beliefs is probably not much stronger than the tendency to rebel against local majorities, but it may still have some influence, especially where education is involved.

Ideational resources: The availability of ideas or ways of reasoning

Many of the above psychological explanations already assume that someone holds anti-scientific views. But how did they get such views in the first place?

Postpositivist caricatures probably often take root in undergraduate or graduate study. They are so widespread—in print and in people’s heads—that many scholars will find my essay literally unbelievable. To return briefly to my broader argument, I must thus reiterate that it does not matter whether some modern social scientists really do theorize and practice science as naively as postpositivists say. What matters is that many do not. Postpositivists can and should attack positivist science, but it is highly problematic when they overlook far more sensible forms of social science.

Speculative explanations: Institutional and structural

Having primarily discussed factors that are psychological or closely linked to psychology, I now address more institutional and structural factors. Here, I address not only caricatures of science in general, but also oversights about scientific methodology in particular.

Limitations of language

Ideational resources should not just be conceived positively, as resources that people have, but also negatively, as resources that people do not have. Here, it is interesting that “methodology” has no single meaning. “Methodology” literally means “the study of methods” and some scholars use it this way.Footnote 79 Bevir and Blakely treat “methodological” as the adjective of methods.Footnote 80 (“Methodical” means something else; perhaps we should use “methodic,” instead.Footnote 81) McGrath and Johnson equate methodologies with approaches or paradigms, while Dowding initially treats a methodology as an “organizing perspective,” such as rational choice or postmodernism.Footnote 82 Flyvbjerg seems to treat “methodology” as “practical advice for doing research.”Footnote 83 This includes the logic of inference and more.

These understandings are all legitimate, as there is no “true” meaning of the term. What matters is not our language but our ideas: that we somehow address the logic of inference. Other terms cover similar ground, for example, rigor, reliability, robustness, validity, or testing,Footnote 84 so while linguistic limitations do not help, they cannot be that important.

Underemphasis on actual methodology

Much more important than these linguistic limitations is surely the widespread convention, in many fields, of ignoring the logic of inference. For example, “methodological” discussions in the history of political thought almost entirely avoid the logic of inference. Students are taught to think about different approaches in terms of “schools of thought,” such as Marxist, contextualist, or Straussian, with little practical advice on reaching and testing one’s conclusions.Footnote 85

Postpositivists often overlook methodology, too, as discussed above with reference to Gadamer, Kurki and Wight, and others. Bevir and Blakely likewise write that for naturalists, good research involves rigorous application of methods.Footnote 86 However, rigorous application of methods requires methodology, not just methods, and Bevir and Blakely say little about this. Yet the postpositivist strategy of criticizing scientific ends while sidestepping methodological means is presumably so common as to seem unobjectionable.

The poverty of positivist methodology

One reason why methodology gets less attention in this particular area is the poverty of methodology among Vienna Circle positivists. In their manifesto statement, the methodology of science is deductive: logical or conceptual analysis.Footnote 87 Most social science simply does not make inferences this way. Thomas Hobbes’s science of politics tried it, but failed; it works poorly for inductive questions, as it is not about testing ideas by thinking against oneself.Footnote 88

These Viennese positivists were to a considerable extent philosophers, not practicing social scientists. Otto Neurath was also a sociologist, though not apparently a very good one.Footnote 89 In fairness, most social science at the time was not great; the methodology of controlling variables to make robust inferences was not well developed. However, this highlights the deficiency of genealogical strategies, to which I now turn.

The convention of arguing genealogically

The remaining subsections discuss specific conventions and argumentative strategies that facilitate caricatures of much current social science. One common rhetorical move is genealogical, that is, discussing a phenomenon by approaching it historically. This is common and usually perfectly legitimate, for instance discussing ideology by starting with Karl Marx or Karl Mannheim. However, in discussions of science, genealogical discussions sometimes blend into what I call a “post hoc est hoc” fallacy: scientists once believed X, and so current scientists still do. This is like calling all current political scientists racist because political science was founded on racist doctrines.

The most common and seductive genealogical focus for modern science is positivism. But we have seen immense differences between positivism and contemporary social science. Bevir and Rhodes dispute this: “[E]ven if political scientists repudiate positivism, they often continue to study politics in ways that make sense only if they make positivist assumptions.”Footnote 90 The “often” is not good enough for Bevir and Rhodes’s purposes, though, as political scientists often do not use positivist assumptions, as shown above. Bevir and Rhodes’s admission invites the question: Is nonpositivist or noninterpretivist political science as problematic as the positivist or naturalist form they criticize?

Bevir and Blakely admit that “[p]erhaps few political scientists today would wish to identify themselves as naturalists.”Footnote 91 Why, then, immediately go on to discuss how “naturalists assume that explanations in the social sciences should be formal, ahistorical, and invariant”?Footnote 92 This certainly does not fit most case-study research, for example, including some political scientists whom Bevir and Blakely had just cited. Even King, Keohane, and Verba accept that theories are contingent rather than universal, typically applying in limited situations.Footnote 93

What is particularly striking about post hoc est hoc genealogies is that at precisely the same time as Vienna positivists were offering philosophical speculations about how social science could be studied, actual scientific methodology was being revolutionized by practicing scientists, with ideas such as randomization and null hypotheses.Footnote 94 These ideas are far more central to modern social science than positivism, I would suggest. Revealingly, many postpositivists focus on philosophers in Viennese armchairs in the 1930s, not actual scientists in a Rothamsted field in the same decade.

The convention of arguing via philosophical presuppositions

Underpinning the genealogical fallacy is the assumption that we can understand social science via its philosophical presuppositions. This reduces the need to cite contemporary social scientists. But presupposition is not practice; even great philosophers sometimes diverge from their explicit presuppositions, and many contemporary social scientists do not share most positivist presuppositions. If they did, citations would show this. Such citations are rarely provided. Even then, citations would only show that some social scientists are positivistic, whereas postpositivists need to attack good social science, not flawed and outdated positivist versions.

The convention of arguing by false dichotomy

A common and seductive academic trope is to present a false dichotomy, criticize the first category, and impel readers toward the second almost by default. This is fallacious if more sensible categories are overlooked.

Examples include Bevir and Blakely’s defense of interpretive social science, Flyvbjerg’s defense of phronetic social science, and Fischer’s defense of postpositivist policy inquiry, discussed above. All reject positivistic science without adequately addressing more sensible social scientists. They simply show that their approaches are superior to bad social science.

Another aspect of dichotomizing was discussed above: linking positivism to elitism and postpositivism to democracy. This, too, is a common and fallacious argumentative strategy, as the above pairs are not necessarily linked (see the subsection on “Reactionary motivations”). The social world is not a determinate world, a world of straight lines, but a world of tendencies and possibilities.

The convention of seeing science and other options as alternatives

Perhaps the most insidious dichotomizing tendency is to present these approaches as exclusive: you can either have scientific aims and means or nonscientific ones, and on this view you cannot use scientific means for nonscientific ends. This was exemplified by Schram (see above) and seems to underlie many postpositivist criticisms. In fairness, most social scientists would make similar arguments. I am unusual in applying scientific methodology in the humanities, as the concluding section below discusses.Footnote 95

Citation conventions and (lack of) constraints

Citation practices are surprisingly important. Without adequate citations, we can easily make sweeping, misleading generalizations or even false claims about what someone writes. “Whereof one cannot give page numbers, one should stay silent.”Footnote 96 We all fall short in our citations sometimes, and it is legitimate to criticize us when we do.

Inadequate citation partly reflects psychology: detailed referencing is tiresome for authors and readers. Besides, if you “know” claims are true, detailed referencing seems unnecessary. But three institutional features of publishing are also worth noting. First, word limits mean we usually cannot cite everything we want, so we often take shortcuts, especially in journal articles. Second, we often avoid page numbers when teaching, whether because PowerPoint slides do not have much room or simply because the convention is not to give precise citations when speaking. (How tiresome that would be!) Third, and most important, is the bad convention whereby page numbers in publications are not required except for direct quotations.

A lack of institutional constraints also matters. Consider how U.S. Supreme Court justices’ ideology affects their decisions. Analysts often discuss the lack of institutional constraints as an important institutional “influence”: justices can act sincerely because there is little institutional check on them.Footnote 97 The same applies here. Our citations would be more accurate if they were regularly challenged, for instance by journal referees, or if publishers required authors to support broad claims with evidence.

Such institutional explanations cannot excuse many postpositivists’ citation practices. An egregious example is Gadamer’s Truth and Method, which mentions “modern science” almost sixty times without citing even one modern natural or social scientist.Footnote 98 Perhaps Gadamer was worried that his book was too long. The most recent scientist Gadamer cites is the physicist Hermann von Helmholtz, who died sixty-five years before Gadamer’s book was written; and Gadamer’s account of Helmholtz is flawed and misleading.Footnote 99

Many later commentators do not mention the inadequate account of science in Gadamer’s book.Footnote 100 No individual is necessarily culpable here; one would not expect everyone to mention Gadamer’s caricatures. However, when so few people do—and when the same applies to many other caricatures of science—the collective result is the systemic misrepresentation of scientific ideas through inadequate citation practices.

Conclusion

Widespread and untenable caricatures of science reflect many psychological, social, institutional, and structural factors, including linguistic limitations, conventions, and argumentative strategies. Scientists and early positivists are partly at fault—for example, the arrogance and presumptuousness of many natural and social scientists, the poverty of 1930s positivist methodology, and a widespread tendency (not just among scientists) to see science and the humanities as essentially different.

But ultimately, it is postpositivists who primarily drive these caricatures. This reflects many completely understandable psychological factors and their interaction with other factors listed above, including conventions, but it also reflects factual errors and fallacious argumentative strategies.

So what? Why does this argument matter? Because overlooking scientific methodology robs many people of something important. Many humanities scholars ask empirical or quasi-empirical questions; scientific methodology can help them test their ideas—seeing what fits and does not fit, for one’s own and for competing interpretations, using different data or methods, reporting uncertainty, and so on.

For example, many scholars ask empirical questions in the history of political thought and philosophy, such as what Hobbes means by “representation,” and why he wrote what he wrote. The same applies in literature. Who wrote William Shakespeare’s plays? Who influenced Johann Goethe? Did William Blake’s views change significantly after the French Revolution? If so, how much can we use his pre-Revolutionary writings to inform his post-Revolutionary ones? I know of no better logic of inference for answering such questions than a scientific one—questioning the evidence, testing different interpretations open-mindedly by asking what we might see if they were true or untrue, and so on.Footnote 101 Scientific methodology can even apply in philosophical thought experiments, where we need to manipulate variables in our scenarios to assess their relative effect, their interactions, the impact of missing variables, the generalizability of the tests, and so on.Footnote 102

Crucially, some aspects of scientific methodology will be vital even in some postpositivists’ claimed alternatives to social science, including Bevir and Blakely’s interpretive social science and Flyvbjerg’s phronetic social science (as David Laitin notes).Footnote 103 Flyvbjerg seems to admit this: “Validity, for the phronetic researcher, is defined in the conventional manner as well-grounded evidence and arguments, and the procedures for ensuring validity are no different in phronetic planning research than in other parts of the social sciences.”Footnote 104 This is an important admission. Alas, Flyvbjerg’s caricatures reemerge almost immediately. Phronetic planning researchers—unlike, he implies, other social scientists—“do not claim final, indisputable objectivity for their validity claims.”Footnote 105 Yet leading political-science textbooks flatly deny this.Footnote 106

Indeed, interpretive social scientists sometimes ask empirical questions to which answers can potentially be right or wrong, for instance whether a particular policy caused a drought.Footnote 107 Such questions would benefit from a scientific methodology. This even holds for normatively oriented questions. For example, Bevir and Blakely praise the interpretive social science of Judy Innes and her collaborators, investigating how deliberative planning—incorporating citizens and stakeholders into policymaking—helped previously intransigent policy discussions finally move forward.Footnote 108 Yet even this normative question—that is, how well deliberative planning works—benefits from the even-handedness essential to scientific methodology. In one study, Innes reports nothing but success for eight deliberative planning initiatives.Footnote 109 Sarah Connick and Innes’s discussion of three deliberative initiatives is mainly positive, only briefly mentioning problems.Footnote 110 Yet one of these three projects had serious issues, as the authors note elsewhere.Footnote 111 Fischer cites only the successful experiments in deliberative democracy, leaving just a few lines for problemsFootnote 112 within pages of positive reporting.Footnote 113 Here, too, postpositivists could learn from social-science advice about careful combination of facts and values.

Yet Bevir and Blakely assert—without references—that naturalism is about “eliminating values and political engagement from the study of human behavior” and making social research “more or less independent of the project of political and normative critique.”Footnote 114 Neither claim is tenable, as discussed above. I myself started my career doing this kind of normatively oriented empirical research, although only after my Ph.D. supervisor John Curtice showed me the value of couching my empirical questions normatively.Footnote 115

Bevir and Blakely are absolutely right that social analysis can and often should be normatively engaged. And interpretive social scientists are probably more open to normative questions than are many current social scientists. But it is wrong to claim that mainstream social scientists cannot and do not address norms. And postpositivists asking such questions need aspects of scientific methodology to answer them. Defenses of postpositivism by scholars such as Bevir and Blakely would be stronger if they were not built on caricatures of social science. Defenses of postpositivism would also be stronger if they did not leave discussions of methodology to social scientists.

Some readers may notice that I have already used some ideas from ideological analysis to encourage a more constructive way forward. After all, ideological analysis suggests that simply pointing out errors may have little effect. Let me thus end on a more optimistic note. Many important intellectual developments come less from senior figures changing their minds than from junior people doing new things. We need a new generation of scholars to show how scientific and postpositivist approaches can learn from and build on each other. There are many opportunities for such publications, including both theorizing and substantive research. But such analyses will be better if they take scientific methodology seriously and avoid caricatures.

Acknowledgments

This essay was written while I was a Core Fellow at the Helsinki Collegium for Advanced Studies, University of Helsinki. For comments and criticisms on earlier drafts, I thank my readers and reviewers, anonymous and nonymous: Keith Dowding, Michael Frazer, Tomáš Halamka, Tereza Křepelová, Jonathan Leader Maynard, Heikki Patomäki, Nahshon Perez, and Liz Ralph-Morrow. I also thank Dave Schmidtz and the other contributors to this volume. Errors and caricatures that remain are my own.

Competing interests

The author declares none.

References

1 I examined American Political Science Review 117, no. 1 (2023), excluding research notes and political theory articles. I thank Anna Kananen for research assistance on this matter.

2 E.g., Hay, Colin, Political Analysis: A Critical Introduction (Basingstoke: Palgrave Macmillan, 2002), 1, 12–13, 37, 80CrossRefGoogle Scholar; Johnson, James, “Consequences of Positivism: A Pragmatist Assessment,” Comparative Political Studies 39, no. 2 (2006): 224–52.CrossRefGoogle Scholar

3 King, Gary, Keohane, Robert, and Verba, Sidney, Designing Social Inquiry: Scientific Inference in Qualitative Research (Princeton, NJ: Princeton University Press, 1994)CrossRefGoogle Scholar, 11, 37; see also 8.

4 King et al., Designing Social Inquiry, 7–9.

5 King et al., Designing Social Inquiry, 12, 15, 20, 102–3, 169–70, 209.

6 King et al., Designing Social Inquiry, e.g., 15–16.

7 E.g., King et al., Designing Social Inquiry, 3, 6–7, 9; cf. Hans-Georg Gadamer, Truth and Method, 2nd ed., trans. Joel Weinsheimer and Donald Marshall (London: Continuum, 2004), 114, 212.

8 E.g., King et al., Designing Social Inquiry, 6–7, esp. 8–9; cf. Gadamer, Truth and Method, 484.

9 E.g., King et al., Designing Social Inquiry, 3–4.

10 E.g., King et al., Designing Social Inquiry, 3–4, 11–12; cf. Schram, Sanford, “Phronetic Social Science: An Idea Whose Time Has Come,” in Real Social Science: Applied Phronesis, ed. Flyvbjerg, Bent, Landman, Todd, and Schram, Sanford (Cambridge: Cambridge University Press, 2012), 23 Google Scholar.

11 Bevir, Mark and Blakely, Jason, Interpretive Social Science: An Anti-Naturalist Approach (Oxford: Oxford University Press, 2018), 88202 CrossRefGoogle Scholar; Flyvbjerg, Bent, Making Social Science Matter: Why Social Inquiry Fails and How It Can Succeed Again, trans. Sampson, Steven (Cambridge: Cambridge University Press, 2001), 53168.CrossRefGoogle Scholar

12 Frank Fischer, “Beyond Empiricism: Policy Inquiry in Postpositivist Perspective,” Policy Studies Journal 26, no. 1 (1998): 130–31; Glynos, Jason and Howarth, David, Logics of Critical Explanation in Social and Political Theory (London: Routledge, 2007), 2CrossRefGoogle Scholar; Blakely, Jason, Alasdair MacIntyre, Charles Taylor, and the Demise of Naturalism: Reunifying Political Theory and Social Science (Notre Dame, IN: University of Notre Dame Press, 2016), 75 CrossRefGoogle Scholar; Bevir and Blakely, Interpretive Social Science, 3.

13 See also King et al., Designing Social Inquiry, 15–16; Keith Dowding, The Philosophy and Methods of Political Science (Basingstoke: Palgrave Macmillan, 2016), 16, 172; Shively, W. Phillips, The Craft of Political Research, 10th ed. (London: Routledge, 2017), 11 CrossRefGoogle Scholar, although this may be the only such comment here—the rest fit postpositivist descriptions.

14 Connell, Ann and Nord, Walter, “The Bloodless Coup: The Infiltration of Organization Science by Uncertainty and Values,” Journal of Applied Behavioral Science 32, no. 4 (1996), 419–25.CrossRefGoogle Scholar

15 See likewise Taagepera, Rein, Making Social Sciences More Scientific: The Need for Predictive Models (Oxford: Oxford University Press, 2008), 5–13, esp. 12.CrossRefGoogle Scholar

16 E.g., Taagepera, Rein, “Nationwide Threshold of Representation,” Electoral Studies 21, no. 3 (2002): 384.CrossRefGoogle Scholar

17 Frantzich, Stephen and Ernst, Howard, The Political Science Toolbox: A Research Companion to American Government (Lanham, MD: Rowman & Littlefield, 2009), 1–7, 51.Google Scholar

18 E.g., Frantzich and Ernst, Toolbox, 50–54.

19 Kellstedt, Paul and Whitten, Guy, The Fundamentals of Political Science Research, 3rd ed. (Cambridge: Cambridge University Press, 2018), 19, 6573 CrossRefGoogle Scholar.

20 Hollis, Martin, The Philosophy of Social Science: An Introduction, rev. ed. (Oxford: Oxford University Press, 2002), 208–9.Google Scholar

21 Steve Smith, “Positivism and Beyond,” in International Theory: Positivism and Beyond, ed. Steve Smith, Ken Booth, and Marysia Zalewski (Cambridge: Cambridge University Press, 1996), 15–16; Martin Griffiths, “Worldviews and IR Theory: Conquest or Coexistence?” in International Relations Theory for the Twenty-First Century, ed. Martin Griffiths (London: Routledge, 2007), 6; Glynos and Howarth, Logics, 2–3, 19; Kurki, Milja and Wight, Colin, “International Relations and Social Science,” in International Relations Theories: Discipline and Diversity, 3rd ed., ed. Dunne, Tim, Kurki, Milja, and Smith, Steve (Oxford: Oxford University Press, 2013), 22 Google Scholar; Bevir and Blakely, Interpretive Social Science, 35–36.

22 E.g., Cuzán, Alfred, “Five Laws of Politics,” PS: Political Science and Politics 48, no. 3 (2015): 415–19Google Scholar; Taagepera, Making Social Sciences, esp. 13, 194–98.

23 Cuzán, “Five Laws,” 415; Taagepera, Making Social Sciences, 2–5.

24 Toshkov, Dimiter, Research Design in Political Science (Basingstoke: Palgrave, 2016), 146–47.CrossRefGoogle Scholar

25 Griffiths, “Worldviews,” 6; Bevir and Blakely, Interpretive Social Science, 36–40; Hay, Political Analysis, 55.

26 Taagepera, Making Social Sciences, 3–11, 187–98, 236–40; Wayman, Frank, “Scientific Revolutions and the Advancement of Explanation and Prediction,” in Predicting the Future in Science, Economics, and Politics, ed. Wayman, Frank et al. (Cheltenham: Edward Elgar, 2014), 427–58CrossRefGoogle Scholar.

27 E.g., Taagepera, Making Social Sciences, vii–x, 3–13, 236–40.

28 Toshkov, Research Design, 35.

29 Wendt, Alexander, “What Is International Relations For? Notes Toward a Postcritical View,” in Critical Theory and World Politics, ed. Jones, Richard Wyn (Boulder, CO: Lynne Rienner, 2000), 207.Google Scholar

30 Bevir and Blakely, Interpretive Social Science, 36.

31 E.g., King et al., Designing Social Inquiry, 7–9; Elster, Jon, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences (Cambridge: Cambridge University Press, 2007), 8, 2930 CrossRefGoogle Scholar, 32–50; Frantzich and Ernst, Toolbox, 2–3.

32 Dowding, Keith and Miller, Charles, “On Prediction in Political Science,” European Journal of Political Research 58, no. 3 (2019): 1002.CrossRefGoogle Scholar

33 Flyvbjerg, Bent, “Phronetic Planning Research: Theoretical and Methodological Reflections,” Planning Theory & Practice 5, no. 3 (2004): 292 CrossRefGoogle Scholar, but cf. King et al., Designing Social Inquiry, 6–9; Kellstedt and Whitten, Fundamentals, 1, 4–5.

34 See esp. Bevir, Mark and Blakely, Jason, “Naturalism and Anti-Naturalism,” in Routledge Handbook of Interpretive Political Science, ed. Bevir, Mark and Rhodes, R. A. W. (Abingdon: Routledge, 2016), 3132 Google Scholar; Kurki and Wight, “International Relations,” 22.

35 A particularly clear example is Griffiths, “Worldviews,” 5–7. See also, to some extent, Fischer, “Beyond Empiricism,” 129–30, 142–43; Flyvbjerg, Making Social Science Matter, 1–4, 166–68; and Bevir and Blakely, Interpretive Social Science, 1–14, 201–2.

36 Freeden, Michael, “The Morphological Analysis of Ideology,” in The Oxford Handbook of Political Ideologies, ed. Freeden, Michael, Sargent, Lyman Tower, and Stears, Marc (Oxford: Oxford University Press, 2013), 124–25.CrossRefGoogle Scholar

37 Boudon, Raymond, The Analysis of Ideology, trans. Slater, Malcolm (Cambridge: Polity, 1989), 11.Google Scholar

38 A partial exception is Elster, Jon, “Hard and Soft Obscurantism in the Humanities and Social Sciences,” Diogenes 58, nos. 1–2 (2012): 160–63Google Scholar. I do not endorse Elster’s main thesis.

39 See Christopher Achen, “Toward a New Political Methodology: Microfoundations and ART,” Annual Review of Political Science 5 (2002): 423–50; Schrodt, Philip, “Seven Deadly Sins of Contemporary Quantitative Political Analysis,” Journal of Peace Research 51, no. 2 (2014): 287–300; Elster, , “Hard and Soft Obscurantism,” 166–68.CrossRefGoogle Scholar

40 Jonathan Leader Maynard, “A Map of the Field of Ideological Analysis,” Journal of Political Ideologies 18, no. 3 (2013): 314–15.

41 Flyvbjerg, Making Social Science Matter, 7, 129; Fischer, “Beyond Empiricism,” 129, 136.

42 Bevir and Blakely, Interpretive Social Science, 6–8.

43 Bevir and Blakely, Interpretive Social Science.

44 Leader Maynard, “A Map,” 302, 309–10, but cf. 305.

45 Teun van Dijk, Ideology: A Multidisciplinary Approach (London: SAGE Publications, 1998), 26; see also Jonathan Leader Maynard, “Ideological Analysis,” in Methods in Analytical Political Theory, ed. Adrian Blau (Cambridge: Cambridge University Press, 2017), 300–302.

46 E.g., Detlef Sprinz and Yael Wolinsky-Nahmias, “Introduction: Methodology in International Relations Research,” in Models, Numbers, and Cases: Methods for Studying International Relations, ed. Detlef Sprinz and Yael Wolinsky-Nahmias (Ann Arbor, MI: University of Michigan Press, 2004), 4.

47 For more on my view of inductive science, see Adrian Blau, “History of Political Thought as Detective-Work,” History of European Ideas 41, no. 8 (2015): 1178–94. The scientific methodology of that article is made explicit in Adrian Blau, “The Irrelevance of (Straussian) Hermeneutics,” in Reading Between the Lines: Leo Strauss and the History of Early Modern Philosophy, ed. Winfried Schröder (Berlin: De Gruyter, 2015), 42–51. For similar accounts of inductive science methodology, see, e.g., Stephen Van Evera, Guide to Methods for Students of Political Science (Ithaca, NY: Cornell University Press, 1997), 27–40; Ivan Valiela, Doing Science: Design, Analysis, and Communication of Scientific Research (Oxford: Oxford University Press, 2001), 5–18; Willie van Peer, Frank Hakemulder, and Sonia Zyngier, Scientific Methods for the Humanities (Amsterdam: John Benjamins, 2012), 7; Dowding, Philosophy and Methods, 102–32.

48 Blau, “Detective-Work,” 1180–81, 1183, 1185–86, 1188.

49 Bevir and Blakely, Interpretive Social Science, 33–40. A much better-referenced account of naturalism, which alas does not discuss prediction, is Bevir and Blakely, “Naturalism,” 31–39.

50 Bevir and Blakely, Interpretive Social Science, 36.

51 Bevir and Blakely, Interpretive Social Science, 36, 39.

52 Milton Friedman, Essays in Positive Economics (Chicago, IL: University of Chicago Press, 1953), 4–5, 7.

53 Friedman, Essays, 8–9; emphasis removed.

54 Friedman, Essays, 9. On prediction being past-oriented, not just future-oriented, see also Dowding, Philosophy and Methods, 44, and esp. Dowding and Miller, “Prediction.”

55 E.g., Carl Hempel, Philosophy of Natural Science (Upper Saddle River, NJ: Prentice Hall, 1966), 69; Van Evera, Guide to Methods, 35–36; Elster, Explaining Social Behavior, 16–20.

56 Elster, Explaining Social Behavior, 17. King et al., Designing Social Inquiry, 32–33, make the same point about questioning data, in a similar example about thinking against oneself.

57 Elster, Explaining Social Behavior, 16–20. However, Elster’s “hypothetico-deductive” approach is surely “hypothetico-inductive”; the implications are plausible inferences, not deductively necessary ones. See Blau, “Detective-Work,” 1187–88. Hypothetico-deductive models may be the textbook ideal, but the real world is messier. Van Evera implies something similar (Van Evera, Guide to Methods, 36), as did the great physicist Hermann von Helmholtz, Science and Culture: Popular and Philosophical Essays, ed. David Cahan (1862; repr., Chicago, IL: University of Chicago Press, 1995), 84.

58 Elster, Explaining Social Behavior, 20.

59 Friedman, Essays, 10.

60 Hollis, Philosophy of Social Science, 64.

61 Schram, “Phronetic Social Science,” 15; emphasis added.

62 Blau, “Irrelevance,” 44–47.

63 Kurki and Wight, “International Relations,” 22.

64 Kurki and Wight, “International Relations,” 30–31.

65 Kurki and Wight, “International Relations,” 15; see also 22.

66 Kurki and Wight, “International Relations,” 17–18.

67 Kurki and Wight, “International Relations,” 22.

68 Kurki and Wight, “International Relations,” 24–26, but see 30–31, although this too hardly scratches the surface of scientific methodology.

69 Bevir and Blakely, “Naturalism,” 31–32.

70 Flyvbjerg, Making Social Science Matter, 7.

71 Leader Maynard, “Ideological Analysis,” 315–16.

72 Aviva Philipp-Muller, Spike Lee, and Richard Petty, “Why Are People Antiscience, and What Can We Do About It?” Proceedings of the National Academy of Sciences 119, no. 30 (2022), https://www.pnas.org/doi/epdf/10.1073/pnas.2120755119.

73 Philipp-Muller, Lee, and Petty, “Why Are People Antiscience?” 4.

74 Margit Oswald and Stefan Grosjean, “Confirmation Bias,” in Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement, and Memory, ed. Rüdiger Pohl (Hove: Psychology Press, 2004), 79.

75 Philipp-Muller, Lee, and Petty, “Why Are People Antiscience?” 4.

76 E.g., Alvin Gouldner, The Coming Crisis of Western Sociology (London: Heinemann, 1971), 50–51; Bevir and Blakely, Interpretive Social Science, 39, 189.

77 Bevir and Blakely, Interpretive Social Science, 188–92. For social-science analysis of deliberation, see esp. André Bächtiger’s work, some of which is with postpositivist John Dryzek, e.g., André Bächtiger et al., “How Deliberation Happens: Enabling Deliberative Reason,” American Political Science Review 118, no. 1 (2023): 345–62.

78 Elsewhere, I discuss the possible influence of his biochemist father, who might have had simplistic, observation-based views of science. Blau, “Irrelevance,” 46.

79 E.g., van Peer et al., Scientific Methods, 53–56; Kurki and Wight, “International Relations,” 15, 25.

80 Bevir and Blakely, Interpretive Social Science, 18, chap. 5.

81 Jonathan Leader Maynard and Matthew Longo, “Method and Methodology in Political Theory” (presentation, European Consortium of Political Research Conference, University of Innsbruck, August 23, 2022), 6.

82 Joseph McGrath and Bettina Johnson, “Methodology Makes Meaning: How Both Qualitative and Quantitative Paradigms Shape Evidence and Interpretation,” in Qualitative Research in Psychology: Expanding Perspectives in Methodology and Design, ed. Paul Camic, Jean Rhodes, and Lucy Yardley (Washington, DC: American Psychological Association, 2003); Dowding, Philosophy and Methods, 70, 72.

83 Flyvbjerg, “Phronetic,” 290–302.

84 E.g., “validity” covers aspects of methodology in Robert Yin, Qualitative Research from Start to Finish (New York: The Guilford Press, 2011), 78–82.

85 I reject the dominant approach; see Adrian Blau, “Interpreting Texts,” in Blau, Methods in Analytical Political Theory, 243–44; Adrian Blau, “How Should We Categorize Approaches to the History of Political Thought?” The Review of Politics 83, no. 1 (2021): 91–101. For practical advice, see Blau, “Interpreting Texts,” 245–65; Blau, “Detective-Work.”

86 Bevir and Blakely, “Naturalism,” 32.

87 Hans Hahn, Otto Neurath, and Rudolf Carnap, “Wissenschaftliche Weltauffassung: Der Wiener Kreis (The Scientific Conception of the World: The Vienna Circle),” in Otto Neurath, Empiricism and Sociology, ed. Marie Neurath and Robert Cohen (Dordrecht: D. Reidel Publishing Company, 1973), 306, 309.

88 See my ongoing book, Adrian Blau, Hobbes’s Failed Science of Politics and Ethics (unpublished manuscript).

89 See Scott Gordon, The History and Philosophy of Social Science (London: Routledge, 1991), 596.

90 Mark Bevir and R. A. W. Rhodes, “Interpretive Political Science: Mapping the Field,” in Bevir and Rhodes, Routledge Handbook of Interpretive Political Science, 4.

91 Bevir and Blakely, “Naturalism,” 31.

92 Bevir and Blakely, “Naturalism,” 32.

93 King et al., Designing Social Inquiry, 103–4.

94 Stephen Fienberg and Judith Tanur, “Reconsidering the Fundamental Contributions of Fisher and Neyman on Experimentation and Sampling,” International Statistical Review 64, no. 3 (1996): 237–53.

95 Blau, “Detective-Work”; Blau, “Logic of Inference.”

96 Adrian Blau, “How (Not) to Use the History of Political Thought for Contemporary Purposes,” American Journal of Political Science 65, no. 2 (2021): 366–69; quotation at 366.

97 Jeffrey Segal, “Supreme Court Deference to Congress,” in Supreme Court Decision-Making: New Institutionalist Approaches, ed. Cornell Clayton and Howard Gillman (Chicago, IL: University of Chicago Press, 1999), 238–39.

98 Robert D’Amico, Contemporary Continental Philosophy (Boulder, CO: Westview Press, 1999), 158.

99 Blau, “Irrelevance,” 45, 47.

100 Blau, “Irrelevance,” 45–46.

101 Blau, “Detective-Work.”

102 Adrian Blau, “The Logic of Inference of Thought Experiments in Political Philosophy” (presentation, American Political Science Association annual meeting, Philadelphia, PA, September 4, 2016). For problems with interactions in thought experiments about democracy, see Adrian Blau, “Political Equality and Political Sufficiency,” Moral Philosophy and Politics 10, no. 1 (2023): 23–46.

103 David Laitin, “The Perestroikan Challenge to Social Science,” in Making Political Science Matter: Debating Knowledge, Research, and Method, ed. Sanford Schram and Brian Caterino (New York: New York University Press, 2006), 54.

104 Flyvbjerg, “Phronetic,” 292.

105 Flyvbjerg, “Phronetic,” 292.

106 King et al., Designing Social Inquiry, 6–9; Kellstedt and Whitten, Fundamentals, 1, 4–5.

107 E.g., Flyvbjerg, Making Social Science Matter, 148–54; Bevir and Blakely, Interpretive Social Science, 180–92. The drought example is from Bevir and Blakely, Interpretive Social Science, 187–88.

108 Bevir and Blakely, Interpretive Social Science, 189–92. I have praised the work of Judith Innes and collaborators before; see Adrian Blau, “Rationality and Deliberative Democracy: A Constructive Critique of John Dryzek’s Democratic Theory,” Contemporary Political Theory 10, no. 1 (2011): 51.

109 Judith Innes, “Planning Through Consensus Building: A New View of the Comprehensive Planning Ideal,” Journal of the American Planning Association 62, no. 4 (1996): 465–69.

110 Sarah Connick and Judith Innes, “Outcomes of Collaborative Water Policy Making: Applying Complexity Thinking to Evaluation,” Journal of Environmental Planning and Management 46, no. 2 (2003): 177–97.

111 Judith Innes et al., “Collaborative Governance in the CALFED Program: Adaptive Policy Making for California Water” (Working Paper 2006–01, Institute of Urban and Regional Development, University of California, Berkeley, 2006), 48–51.

112 Frank Fischer, Reframing Public Policy: Discursive Politics and Deliberative Practices (Oxford: Oxford University Press, 2003), 218.

113 Fischer, Reframing Public Policy, 209–20.

114 Bevir and Blakely, Interpretive Social Science, 3, 60.

115 Adrian Blau, “A Quadruple Whammy for First-Past-the-Post,” Electoral Studies 23, no. 3 (2004): 431–53; Adrian Blau, “The Effective Number of Parties at Four Scales: Votes, Seats, Legislative Power, and Cabinet Power,” Party Politics 14, no. 2 (2008): 167–87.