1. Introduction
As we shall understand the locution, the ‘problem of philosophical disagreement’ is that philosophers have conflicting beliefs despite enormous epistemic effort — a collective effort that in some cases spans millennia. Unsurprisingly, there are conflicting philosophical views about the rationality of conflicting views. The epistemic uniqueness thesis (hereafter, ‘uniqueness’) is (to a first approximation) that “there is a unique rational response to a given body of evidence,”Footnote 1 while epistemic permissiveness is the denial of epistemic uniqueness. For those who hope to explain the problem of philosophical disagreement in a manner that preserves the attribution of rationality to each of its participants, if uniqueness is true, then a very depressing conclusion seems to follow: for every philosophical dispute, at least one party in the dispute fails to hold a rational response to the shared evidence. Since philosophy is rife with disagreement, it seems that philosophy is rife with less than rational doxastic responses to shared evidence. Naturally, this invites the question: who has formed a rationally mistaken belief in response to the shared evidence? Is it you, your philosophical opponents, or both? None of these three possibilities seem particularly inviting. The first-person case has a neo-Moorean ring to it: I believe that P, and the belief that P is a rationally mistaken doxastic response to a given batch of evidence. Attributing a rationally mistaken doxastic response to one's philosophical opponents is also not without difficulty: our philosophical opponents often seem (at least) as smart and epistemically industrious as we are. The idea that both parties are rationally mistaken simply combines these problems.
Epistemic permissiveness, in contrast, seems tailor-made to avoid such problems. As Sophie Horowitz notes, “One of the main benefits of permissivism is its purported ability to explain situations in which people can ‘agree to disagree’ — about politics, religion, jury verdicts, and so forth — while still respecting one another's epistemic credential.”Footnote 2 For example, in a much-quoted passage, Gideon Rosen expresses what has become known as the “intuitive argument” (Ballantyne, 2018, passim) for permissiveness:
It should be obvious that reasonable people can disagree, even when confronted with a single body of evidence. When a jury or a court is divided in a difficult case, the mere fact of disagreement does not mean that someone is being unreasonable. Paleontologists disagree about what killed the dinosaurs. And while it is possible that most of the parties to this dispute are irrational, this need not be the case. To the contrary, it would appear to be a fact of epistemic life that a careful review of the evidence does not guarantee consensus, even among thoughtful and otherwise rational investigators. (Rosen, 2001, pp. 71–72)
Applied to the problem of philosophical disagreement, the intuitive argument says that it should be obvious that philosophers can reasonably disagree after carefully reviewing a single body of evidence — this is simply a “fact of epistemic life.”
In the first part of this article (Sections 2 to 5), I look to strengthen the appeal of permissiveness in connection with the problem of philosophical disagreement by examining in more detail the costs associated with adopting uniqueness. In the second part of the article (Sections 6 to 9), I hope to show that, for many philosophical disputes, permissiveness does not fully deliver on the advertised benefit of respecting the epistemic credentials of disputants. In particular, I shall argue that permissiveness struggles to plausibly account for the scale and scope of philosophical disagreement. ‘Scale’ refers to the fact that philosophical disputes often involve three or more competing views about the disputed subject matter, and permissiveness appears much less plausible in such instances. ‘Scope’ refers to the fact that philosophical disagreements often concern not merely whether our philosophical interlocutors have reasoned correctly, but also which view is true. Hence, permissiveness falls short in answering the problem of philosophical disagreement.
2. Epistemic Permissiveness
The discussion will be framed in terms of ‘doxastic attitudes’ where this term is meant to include both credences and beliefs. In other words, the argument does not require taking a position on the credence versus belief issue — with one possible exception. Sometimes the ‘full belief’ model is thought to include just the following attitudes: belief that P, belief that not-P, or suspension of belief for or against P, where P does not include any modal modifiers as part of the content of belief. Such a view would (implausibly) allow only believing that ‘it will rain,’ believing that ‘it will not rain,’ or suspending belief about rain. This is at odds with our everyday practice, which allows more nuanced options like the belief that ‘it is more likely than not to rain,’ or ‘it probably won't rain,’ etc. Here I will assume (without argument) that it is acceptable to build into the content of one's belief such modal modifiers.Footnote 3 The thought is that this will provide a rough equivalence for translation between belief and credence talk, e.g., a 0.7 credence in the proposition that it will rain tomorrow based on a weather report can be translated into ‘belief talk,’ the belief that there is a 70% chance of rain tomorrow.
Several versions of the permissiveness/uniqueness contrast have been developed. The idea that there is at most one rational doxastic attitude an agent can take to a given body of evidence (E) is sometimes referred to as ‘intrapersonal uniqueness.’ Intrapersonal uniqueness, however, is consistent with different agents holding different rational doxastic attitudes in response to some shared E. A stronger position, ‘interpersonal uniqueness,’ says there is at most one rational doxastic attitude to some E, so that all agents, to the extent that they are rational, will hold the same doxastic attitude, given E. Permissiveness may be further distinguished between acknowledged (or ‘revealed’) and unacknowledged cases. There is also the question of how permissive permissiveness is. Roger White defines ‘extreme permissiveness’ as instances where it is fully rational for different agents to believe either P or not-P given E, whereas ‘moderate permissiveness’ allows less slack in rational doxastic responses: believing P or suspending judgement about P (White, 2005).
In thinking about permissiveness in application to philosophical disagreement, our initial interest is in interpersonal, revealed, extreme permissive cases. The reason for interpersonal permissiveness is that it is philosophical disagreement with others that is at issue. The relevance of revealed permissiveness is that we are all too aware that we disagree with many of our philosophical colleagues.Footnote 4 The reason for the focus on extreme permissive cases is that much philosophical disagreement involves disagreement between ‘Dogmatic philosophers.’ Modifying the term ‘Dogmatist’ from Sextus Empiricus, we shall understand Dogmatists about P to refer to those who believe P is true, or believe P is at least more likely true than not. In credence terms, this may be cashed out as a credence of greater than 0.5 that P. In terms of belief talk, the belief that ‘it will rain tomorrow’ counts as Dogmatism, as does belief in modally modified propositions, e.g., ‘it will probably rain tomorrow.’ Even the belief in something as weak as ‘it is slightly more likely than not that P’ or ‘leaning towards P’ counts as Dogmatism. We will understand ‘Scepticism’ as the view that S has a credence of 0.5 that some proposition P is true. In belief talk, this amounts to the claim that the Sceptic does not believe that P is more likely than not-P, nor does the Sceptic believe that not-P is more likely than P.Footnote 5 Understanding these terms in this way is helpful, since it permits us to acknowledge that philosophers have a range of positive doxastic attitudes to their preferred philosophical views. A survey of philosophical beliefs indicates that many philosophers ‘Accept’ or ‘Lean towards’ a wide variety of positions within disputed areas of philosophy (Bourget and Chalmers, 2014).Footnote 6 Data from the same survey indicates that only a tiny fraction of philosophers hold Sceptical views. Thus, most extant philosophical disagreement is between Dogmatists, hence the interest in extreme permissiveness. In what follows, I shall, for the most part, drop the ‘revealed interpersonal extreme’ qualifications, taking these as understood.
More could be said about the permissiveness/uniqueness contrast: Matthew Kopec and Michael G. Titelbaum (2016) identify at least 16 different versions of the uniqueness thesis. For present purposes, we may ignore the subtleties of many of these different versions and work with a somewhat generic notion: in cases of philosophical disagreement where two or more parties acknowledge holding contrary or contradictory positions, we will understand that proponents of uniqueness hold there is at most one rational response to a given body of evidence, while proponents of permissiveness deny this.Footnote 7 One reason that we may ignore some of these subtleties is that this is a work in applied epistemology: the question is to what extent permissiveness helps with the problem of philosophical disagreement. There is no attempt here to adjudicate the more general (and theoretical) disagreement between uniqueness and permissiveness.
It will help our understanding of permissiveness to quickly review a second line of argument (in addition to the aforementioned intuitive argument). It starts with the claim that there is a mediated relationship between evidence and doxastic attitude (Kelly, 2013). This mediation is done by the reasoner herself. Reasoners have what is sometimes referred to as ‘epistemic standards.’ As Miriam Schoenfield writes:
What are an agent's epistemic standards? There are different ways of thinking of epistemic standards. Some people think of them as rules of the form “Given E, believe p!” Others think of them as beliefs about the correct way to form other beliefs. If you are a Bayesian, you can think of an agent's standards as her prior and conditional probability functions. (Schoenfield, 2014, p. 199)
An argument for permissiveness asks us to imagine that two agents, S1 and S2, have different but highly reliable (yet fallible) epistemic standards ES1 and ES2. We may suppose that ES1 and ES2 yield the same doxastic attitude 98% of the time. In the 2% of the time that they yield conflicting results, ES1 gets it right half the time and ES2 gets it right half the time. The thought then is that S1 and S2 are rational in forming doxastic attitudes based on ES1 and ES2, even while knowing that their standards are fallible and not extensionally equivalent in terms of their output.Footnote 8
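A quick calculation makes the set-up concrete. On the additional assumption (not stated in the example) that both standards deliver the correct attitude whenever they agree, each standard is nonetheless highly reliable overall:

\[
\Pr(\mathrm{ES}_1 \text{ correct}) = 0.98 \times 1 + 0.02 \times 0.5 = 0.99,
\]

and likewise for ES2. The two standards can thus be equally and highly reliable while still issuing conflicting verdicts in 2% of cases — precisely the cases at issue for permissiveness.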
3. The Problem of Philosophical Disagreement and the Promise of Permissiveness
Much of the literature on disagreement focuses on models using two individuals who hold contradictory views, P versus not-P, about some disputed matter.Footnote 9 While such models have their uses, any comprehensive understanding of the problem of philosophical disagreement must account for philosophy's historical and social dimensions: philosophers disagree about philosophical issues despite massive individual effort — often spanning decades — and collective effort that sometimes spans centuries, if not millennia. Philosophical disagreement seems structural in that it appears to be woven into the very discipline of philosophy. As intimated above, it also appears that no one faction in a dispute has a lock on the ability to reason or to ascertain the truth: philosophical disagreements invariably feature very smart and epistemically industrious philosophers on opposing sides of a dispute. Permissiveness seems tailor-made to offer an account of the historical and social dimensions of philosophical disagreement that ascribes rationality to all parties: if the advertised benefit of epistemic permissiveness noted above is true, then there is no need to attribute mistakes in reasoning to the opposing factions of epistemically diligent Dogmatists. Philosophical disagreements may persist for centuries or millennia and be rationally faultless.
One way to reinforce the appeal of the permissivist response to the problem of philosophical disagreement is to contrast it with what must be said if uniqueness is true. Thus, suppose:
1. Uniqueness: There is at most one rational response to a given body of evidence.
If uniqueness is true, as intimated above, when factions of philosophers reach different conclusions based on some shared evidence E, then at least one faction has made a mistake in reasoning. And such mistakes in reasoning attach to the wrong-reasoning factions.Footnote 10 So, if 1 is true, then we must reject at least one of the following two theses.Footnote 11
2. Epistemic Equality: There is an (approximate) epistemic equality among factions of philosophers supporting competing philosophical views.
Epistemic equality is to be understood as the conjunction of three theses: equality of evidence, equality of reasoning, and alethic equality. Taking the question of evidential equality first: this is a shared assumption between proponents of uniqueness and permissiveness, that is, as we have seen, the dispute between uniqueness and permissiveness is formulated in terms of whether there may be more than one fully rational doxastic response given some shared evidence. To deny the shared assumption about equality of evidence is to call into question the relevance of permissiveness as a solution to the problem of philosophical disagreement. After all, proponents of uniqueness need not deny that there is more than one rational response when epistemic agents have different sets of evidence. Since permissiveness requires that there is equality of evidence, there is nothing dialectically untoward in granting the assumption of equality of evidence between competing philosophical factions. Conversely, if proponents of uniqueness hope to show the reasonableness of philosophical disagreement based on the idea of evidential inequality, then they will have to substantiate the idea that there are in fact evidential inequalities between competing philosophical factions — despite the (apparent) shared nature of so much philosophical discourse.Footnote 12
In terms of the second component of epistemic equality, examples of rational inequalities are easy to generate: there is a better-than-even chance that you have superior reasoning abilities when it comes to the dispute with a three-year-old about the nutritional value of a McDonald's Happy Meal. Such obvious differences in reasoning ability are unlikely to characterize any inequalities between philosophers. It may well be that there are individual differences between the reasoning abilities of philosophers, but, as far as we can tell, competing factions are comprised of roughly equally good reasoners.Footnote 13
The third component, alethic equality, refers to the idea that no one faction is more likely to have arrived at the truth. Initially, we will focus on the reasoning part of the epistemic equality thesis, leaving the alethic component until Section 7.
3. Anti-Scepticism: At least some Dogmatists in philosophical disputes reason for their preferred views (by and large) correctly.
The qualification ‘by and large’ is to allow for the fact that very few will be so epistemically hubristic as to say that they have never made a mistake in reasoning for their preferred view. The relevance of anti-scepticism is that uniqueness says that at most one faction can reason correctly in a philosophical dispute. This can be made consistent with the epistemic equality thesis by assuming that no party to the dispute reasons correctly.Footnote 14 Arguably, something like this line of thought is behind the thinking of some of the ancient Sceptics. One reading of Sextus Empiricus says that Sceptics reject the “precipitancy” (Sextus Empiricus, 1996, I. 11) of Dogmatists because Dogmatists ignore the fact that there are equally compelling reasons for some contrary view to their own. According to the Pyrrhonian line of reasoning, since we have no reason to favour P, and no reason to favour not-P, we should suspend judgement about P.Footnote 15 Some contemporary philosophers appeal to something like this line of argument to come to sceptical views about philosophical disagreement, but as noted, it is still very much a minority view (Frances, 2018; Fumerton, 2010; Goldberg, 2013; Lammenranta, 2012; Ribeiro, 2011).
A seemingly compelling argument for permissiveness, then, is this: claims 1, 2, and 3 form an inconsistent triad. If 1 is true, then at most one faction of philosophers in any philosophical dispute has reasoned correctly, so either we attribute a rational response to the evidence to just one Dogmatic faction (namely, our own faction), or none. The former requires that we reject the epistemic equality thesis; the latter requires that we reject the anti-scepticism thesis. I suspect that many philosophers are far more committed to 2 and 3 than they are to 1, and so many will be inclined to say that 1 ought to be rejected. This is not intended as an argument ad populum, but rather a speculation about how many philosophers are likely to react to this inconsistency.
4. Multi-Proposition Disputes
The appeal of permissiveness is only amplified when thinking about the best way to model philosophical disputes. Let us think of ‘multi-proposition disputes’ as disagreements where there are three or more contrary views about some disputed subject matter.Footnote 16 The contrast is with what we shall refer to as ‘binary disagreements,’ which take the aforementioned P versus not-P form. As suggested above, much of the disagreement literature takes canonical cases of disagreement to be binary disagreements.Footnote 17 There is reason to suppose that many important philosophical disagreements are in fact more perspicuously modelled as multi-proposition philosophical disputes. Consider this (by no means exhaustive) list of such disputes:
Religion: atheism vs. monotheism vs. agnosticism vs. polytheism
Ontology: materialism vs. immaterialism vs. dualism
Metaphysics: compatibilism vs. hard determinism vs. libertarianism
Philosophy of Science: realism vs. empirical realism vs. constructivism
Logic: classical vs. verificationist vs. dialetheic
Perceptual Experience: disjunctivism vs. qualia theory vs. representationalism vs. sense-datum theory
Personal Identity: biological view vs. psychological view vs. further-fact view
Normative Ethics: virtue ethics vs. consequentialism vs. deontology
Knowledge Claims: contextualism vs. relativism vs. invariantism
Here are some (putative) examples of binary disputes:
Abortion: permissible/impermissible
Capital Punishment: permissible/impermissible
Free Will: compatibilism vs. incompatibilismFootnote 18
These lists are somewhat misleading in that they underemphasize the degree of disagreement amongst philosophers, for often agreement at one level of abstraction will disappear at another, as the following example illustrates:
Drinking by One's Lonesome
At the Canadian Philosophical Association conference, you find two philosophical colleagues who agree with you that consequentialism is the correct view in normative ethics. You have a good time shit-talking about your epistemically benighted colleagues who believe in virtue ethics or deontology. Soon, however, it turns out that your agreement about consequentialism masks another multi-proposition dispute: you are a hedonist about the good, while one of your fellow consequentialists is a perfectionist about the good, and the third is a pluralist about the good, combining both hedonistic and perfectionist elements. Fortunately, later at the bar that evening, you discover two colleagues who are both consequentialists and hedonists about the good, so you enjoy shit-talking about the benighted consequentialists who are perfectionists and pluralists about the good. Soon, however, it turns out that your agreement about hedonism masks another multi-proposition dispute: you believe hedonic value should be analyzed in terms of attitudinal pleasure, while one of your fellow hedonists analyzes it in terms of sensory pleasure, and the third, in terms of positive moods and emotions. The logical endpoint of this is you drinking vodka by your lonesome in your hotel room, thinking shit about everyone else's epistemically benighted views.Footnote 19
This example shows that agreement at the ‘family’ level may disappear at the ‘genus’ or ‘species’ level. In the aforementioned survey, David Bourget and David J. Chalmers found that 25.9% of philosophers surveyed either accept or lean towards deontology, 23.6% accept or lean towards consequentialism, and 18.2% either accept or lean towards virtue ethics.Footnote 20 We might think that agreement about consequentialism is at the family level, agreement about the good as either hedonistic, perfectionist, or pluralist is at the genus level, and agreement about how hedonic value should be analyzed is at the species level. This is not to say that all philosophical disputes can be characterized in terms of this biological schema, nor that there is agreement about any particular hierarchy.Footnote 21 Still, to the extent that such a hierarchy approximates at least some characterization of philosophical disputes, it shows that we should expect that there is often more agreement at the higher levels as compared with lower levels. If you want to hang out with only those philosophers who agree with you on all species-level questions, then you are likely to be very philosophically lonely.Footnote 22
In what follows, I make two claims about multi-proposition disputes: (I) many important philosophical disputes are most helpfully modelled as multi-proposition disputes, and (II) the fact that many philosophical disputes are multi-proposition disputes makes for important differences (as compared with the binary model) for how to understand the problem of philosophical disagreement. I will defend (I) in Section 8, and so ask the reader to grant (I) provisionally. In the meantime, I hope to show the plausibility of (II).
5. The Rational Big Bet
The problem of scale arises when thinking about the multi-proposition nature of many philosophical disputes. In keeping with the historical and social aspect of philosophical disagreement, we may think of the four individuals in the following example as spokespersons for their factions.
The Rational Big Bet
The Cruel God of epistemology overhears John Rawls yet again arguing for justice as fairness (JF), Robert Nozick for libertarianism (LB), Gerald Cohen for socialism (SC), and Robert Goodin for utilitarianism (UT). Tired of listening to their incessant squabbling, the Cruel God brings the four political theorists before her divine throne and confronts them with the Rational Big Bet: each philosopher must bet the lives of a quarter of the world's population of eight billion either for or against his preferred theory as a rationally unmistaken response to shared E. Suspending judgement on the matter will result in a loss of half of each theorist's two billion stake, namely, one billion lives. What should they do? The Cruel God, not totally bereft of compassion, agrees to make things a little easier. She says that the four theories — (JF), (LB), (SC), and (UT) — are mutually exclusive and jointly exhaustive.Footnote 23 The Cruel God adds that the dispute is one where uniqueness applies: there is at most one fully rational doxastic response to E, and one of the four reasoned correctly. The Cruel God points out, with a mocking tone, that the way to save everyone on the planet is for the author of the correctly reasoned view to bet on his preferred view, and for those who made a rational mistake to bet against their own views. Of course, since all four believe that they have reasoned correctly, this doesn't help much. It is about as much help as the sage teacher's advice to students hoping to do well on the final exam: just answer every question correctly. If they all bet on their preferred theory, then a quarter of the world's population, two billion, will survive. If they all bet against their own theory, three quarters of the world's population, six billion, will survive. If all four suspend judgement, then four billion will die. The Cruel God whisks each away to his own isolated Aristophanean thinkery to ruminate on how to bet.Footnote 24
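For reference, the stipulated payoffs can be tallied as follows (each philosopher stakes two billion lives, and exactly one of the four views is the rationally unmistaken response):

\[
\begin{aligned}
\text{all bet for their own theory:} &\quad 1 \times 2\ \text{billion} = 2\ \text{billion survive};\\
\text{all bet against their own theory:} &\quad 3 \times 2\ \text{billion} = 6\ \text{billion survive};\\
\text{all suspend judgement:} &\quad 4 \times 1\ \text{billion} = 4\ \text{billion survive}.
\end{aligned}
\]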
Given that uniqueness is true, multi-proposition disputes challenge our understanding of the costs associated with renouncing either the anti-scepticism thesis or the epistemic equality thesis. To see why, we need to consider the same two cases above in relation to the Rational Big Bet, that is, where we assume (a) uniqueness and epistemic equality are true, and (b) uniqueness and anti-scepticism are true. Let us take these in turn.
If we assume (a), then it follows that all four should think that it is likely that they have reasoned incorrectly and bet against their preferred theories. The thinking here is that epistemic equality requires that they think all have reasoned correctly, or that it is likely that each has reasoned incorrectly.Footnote 25 Since uniqueness rules out the former, it must be the latter. And so each is in a position to reason that he should disbelieve his preferred view. That is, each is in a position to think that it is likely that his reasoning is not rationally faultless, and so there is no reason to suppose that his preferred view is as likely as, or more likely than, the combined probability of the set of competitor views. So, he has reason to suppose his view is rationally mistaken and probably false.Footnote 26
So, uniqueness and epistemic equality in this instance lead to a type of scepticism, which we will refer to as ‘Sceptical-Dogmatism.’ Sceptical-Dogmatism is importantly different from (and, at least for some, more depressing than) Scepticism: it doesn't merely recommend suspension of belief about one's preferred philosophical view; it recommends disbelieving one's preferred view. Sceptical-Dogmatism is consistent with believing that one's preferred philosophical view is the most probable, so long as one holds that the view is probably false. Accordingly, Sceptical-Dogmatism is an even more radical rejection of the anti-scepticism thesis noted above. In other words, there are two positions that are inconsistent with the anti-scepticism thesis: Scepticism and Sceptical-Dogmatism.Footnote 27 At least with multi-proposition disputes, the more radical rejection of the anti-scepticism thesis is required, given the assumptions of the case.Footnote 28
If we assume (b), then this leads to a far more radical rejection of the epistemic equality thesis than previously considered. To illustrate, suppose Rawls holds onto his view that JF is correct in the Rational Big Bet. In which case, Rawls must represent himself as a reasoning über epistemic superior (hereafter, RÜES): one who is more likely to have reasoned correctly about some matter in a multi-proposition dispute than the combined probability of the other views. This follows even if Rawls endorses a very modest form of Dogmatism, e.g., he holds with a mere 0.52 probability that the reasoning used in support of JF is the unique rational response given E. Assuming that Rawls distributes the remaining credence (0.48) equally that one of his three colleagues is the correct reasoner (0.16 for each colleague), it follows that he must represent himself as more than three times as likely to have reasoned correctly as each of his colleagues. I say RÜES is a more ‘radical’ rejection of epistemic equality because it goes beyond Rawls saying that he is a ‘reasoning epistemic superior’ where this is understood as the idea that his reasoning credentials are better than each of his peers. Assigning a credence of 0.4 to having reasoned correctly about JF, 0.2 to the claim that Nozick reasoned correctly, 0.2 to the claim that Cohen reasoned correctly, and 0.2 to the claim that Goodin reasoned correctly, is sufficient for attribution of reasoning epistemic superiority, but not enough to thwart Sceptical-Dogmatism. Naturally, if one is willing to represent oneself as a RÜES, then the radical rejection of the epistemic equality thesis is exactly how things should be.
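The arithmetic behind the RÜES attribution, using the figures just given:

\[
\frac{1 - 0.52}{3} = 0.16 \quad \text{(credence per colleague)}, \qquad \frac{0.52}{0.16} = 3.25,
\]

so even at a modest 0.52, Rawls must take himself to be more than three times as likely as any one colleague to have reasoned correctly, and — since 0.52 > 0.48 — more likely to have reasoned correctly than his three colleagues combined. By contrast, the 0.4/0.2/0.2/0.2 distribution makes him only twice as likely as each colleague, and less likely (0.4 < 0.6) to have reasoned correctly than the three combined, which is why it does not thwart Sceptical-Dogmatism.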
It might be objected that focusing on a single dispute ignores an important way to respect the epistemic credentials of our colleagues.Footnote 29 If we think of ‘equality’ in terms of a propensity to reason correctly, then this is consistent with inequalities of reasoning on specific questions or issues. However, the propensity understanding of ‘equality’ is of limited help in squaring the equality thesis with the anti-scepticism thesis. As intimated in the Drinking by One's Lonesome example, many philosophical disputes are such that there is no majority in terms of proponents of a single view. So, suppose there are three reasoning factions, R1, R2, R3, comprised of an equal number of reasoners. They each hold contrary positions in 18 multi-proposition disputes. If uniqueness is true, then at least two reasoning factions have made an error in reasoning, given their shared evidence for each dispute. If the three reasoning factions have a similar propensity to reason correctly, then at most each disputant reasons correctly 0.33 of the time.Footnote 30
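One way to see the 0.33 figure: if uniqueness holds, then in each of the 18 disputes at most one of the three factions has reasoned correctly, so across all disputes there are at most 18 correct faction-verdicts out of 3 × 18 = 54. With equal propensities, each faction's share is capped at

\[
\frac{18}{54} = \frac{1}{3} \approx 0.33.
\]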
Thus, the problem of scale — accounting for multi-proposition disputes — only improves the attractiveness of permissiveness, since if permissiveness can live up to its billing, then it offers us a way of saying that all parties to a multi-proposition dispute reason correctly and thus avoids the implausibility (and unpleasantness) of attributing RÜES status to oneself.
6. Multi-Proposition Permissiveness
To assess how plausible permissiveness is in multi-proposition disputes, it will help to step back and ask, ‘how permissive is permissiveness?’ As we shall see, the more permissive one takes evidence to be, the less plausible permissiveness becomes. For example, I take it that no permissivist wants to defend the ‘evidential anarchy’ version of permissiveness, i.e., the view that each batch of evidence offers full rational support for every possible doxastic attitude.
Let's start with a binary case: suppose you and I are out on a week-long hike.Footnote 31 We are out of communication with the rest of the world when the American presidential election takes place. Tomorrow we should be back in communication range to find out the results of the election but, in the meantime, we debate who won the election. Suppose my belief that the Democratic candidate won is based on evidence that she was leading in the polls going into the election. Your belief that the Republican candidate won is based on an upward trend in support for the Republican nominee before we left for our hike. You reason that this trend is likely to have continued right up to the election, which is sufficient to push the numbers in favour of the Republican candidate. I dismiss this since most trends regress to the mean quite quickly. You agree that there is often a quick regression to the mean, but you doubt that it will be quick enough in this case.
Before accepting the verdict that this is an example that is congenial to permissivists, we should consider the level of confidence all parties have in their positions. As described, the evidence indicates that the race is very close. Accordingly, let us suppose that we are very tentative in our judgement: we believe with modest confidence rather than high confidence. Table 1 illustrates this situation:
It seems plausible to assume that at least some permissivists will allow that the modest confidence version is an instance of permissiveness, but the high confidence case is not. To explain these different judgements, let us think of the extent to which the intersubjective probabilities violate probabilistic consistency as ‘rational wiggle room.’ The proposed explanation for the asymmetry then is that the rational wiggle room necessary to make each modestly confident belief rational given E is only 0.02, whereas in the high confidence variant the rational wiggle room is 0.7.
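The stated figures imply the following, reading ‘rational wiggle room’ as the amount by which the two parties' combined credences in their own candidates exceed 1, and taking those credences to be 0.51 each in the modest case and 0.85 each in the high confidence case (values inferred from the wiggle-room figures rather than quoted from Table 1):

\[
0.51 + 0.51 - 1 = 0.02, \qquad 0.85 + 0.85 - 1 = 0.70.
\]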
The observation — small violations of intersubjective probability are easier to defend, other things being equal, than large violations — fits well with the tendency of permissivists to emphasize examples with comparatively small amounts of rational wiggle room. For example, Thomas Kelly, writing in support of permissiveness, notes:
To my mind, uniqueness seems most plausible when we think about belief in a maximally coarse-grained way, so that there are only three options with respect to a given proposition that one has considered: belief, disbelief, or suspension of judgment. On the other hand, as we begin to think about belief in an increasingly fine-grained way, the more counterintuitive Uniqueness becomes. (Kelly, 2013, p. 300)
In the terms developed here, we can explain Kelly's observation as follows: the rational wiggle room necessary if we hold maximally coarse-grained beliefs about our candidates is greater than if we have certain more fine-grained attitudes. Thus, other things being equal, allowing fine-grained doxastic attitudes appears more favourable to permissiveness. Conversely, we can see how allowing only coarse-grained ascriptions would favour uniqueness. If we had to translate the results in Table 1 into the coarse-grained belief model — where our choices are limited to believe P, believe not-P, or suspend judgement about PFootnote 32 — then the closest analogue for the modest version would be that both parties suspend judgement about their candidates' chances, while in the high confidence case each believes that their candidate won. Permissivists will rightly find this dialectically disadvantageous if they want to claim that this is still a permissive example: it forces them to defend the high confidence variant that requires far more rational wiggle room, since the modest confidence case translates into one where there is agreement. Indeed, there is no inconsistency in opting for a position that says that uniqueness is true when applied to coarse-grained attitudes, and permissiveness is true when applied to fine-grained attitudes (Kopec and Titelbaum, 2016).
The hiking example, then, is constructed in a way that attempts to be as friendly as possible to permissiveness in that it uses modest fine-grained attitudes that require little rational wiggle room. It will help to see if it is possible to construct an analogue of the hiking case that is similarly friendly to permissiveness. Imagine this time four of us are hiking — you and I are out backpacking with our good Canadian friends Claudia and Jacques. We are out of communication with the rest of the world when the Canadian election takes place. Let us assume that all four parties have similar poll numbers. We each hold our view with 0.51 confidence. Table 2 illustrates this situation:
Notice that extreme permissiveness is not applicable here, since, recall, it is defined in terms of a binary dispute. Let us think of ‘multi-proposition permissiveness’ as instances where it is rational for each party in a multi-proposition dispute to hold contrary positions. We are interested in a proper subset of such disputes, namely, disputes where each disputant holds that her view is more probable than the combined probability of the competitors. Let us refer to these majority multi-proposition permissive cases as ‘majoritarianism.’ The reason, of course, is to draw the analogy with Dogmatism, where Dogmatists hold that their view is at least more likely than not. So, the sort of multi-proposition dispute cases that are not relevant in defending Dogmatism are ones where we believe that our view is most probable, but probably false. Let us refer to minority multi-proposition permissive cases as ‘minoritarianism.’ The analogue here is Sceptical-Dogmatism, which is consistent with believing that our preferred philosophical position is the most probable, but probably false.
In thinking about what the permissivist might say about this case, we should first note a couple of differences from the two-person example. First, in the modest confidence four-person case, we must acknowledge that the rational wiggle room (1.04) represents more than a fiftyfold increase (from 0.02) and amounts to the lion's share of the total (2.04). Thus, far more rational wiggle room is required in the modest confidence four-person case than even in the high confidence version of the two-person example (0.7). The difference in the high confidence versions of the two examples is even more pronounced: 0.7 vs. 2.40.
Second, although our confidence in our predictions is the same in the two examples, my confidence that your prediction is mistaken must be higher in the four-person case as compared with the two-person case. In the two-person case, my confidence that your prediction is wrong is 0.51 (since I allow that there is a 0.49 probability that you are right). In the four-person case, let us suppose I treat your prediction the same as Claudia's and Jacques’. That is, in the four-person case, my modest confidence that I am correct (0.51) leaves (0.49) for the collective probability that one of the other three predictions is correct. If I divide this evenly across the three of you, this translates into a 0.16 confidence that your prediction is correct, and a 0.84 probability that your prediction is wrong. I should also have a similarly high confidence that Claudia's prediction is wrong, and Jacques’ prediction is wrong. You are of course in a similar position to be highly confident (0.84) that each of us has made a wrong prediction.
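The corresponding figures for the four-person case, computed in the same way (with the 0.85 figure for the high confidence variant again inferred from the stated totals rather than quoted from Table 2):

\[
4(0.51) - 1 = 1.04, \qquad 4(0.85) - 1 = 2.40, \qquad \frac{1 - 0.51}{3} \approx 0.16, \qquad 1 - 0.16 = 0.84.
\]

The first two figures are the modest and high confidence wiggle room; the last two are my credence that any one of you is right and my confidence that any one of you is wrong.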
There are a couple of reasons to question the plausibility of majoritarianism. First, as intimated, intuitively, the amount of required rational wiggle room makes majoritarianism implausible: it takes us too far down the road to evidential anarchy. Suppose, for example, we each use a different algorithm to predict the winner of the election. While our shared evidence says that the parties are currently tied, our algorithms each predict a small spike just before the election that will favour the party we have predicted. If majoritarianism applies, then I may have high confidence, reasoning from our shared evidence, that your algorithm will fail to predict the election correctly. Of course, you may reason in the same way. This leads to a situation where we may both say, consistent with majoritarianism, ‘I have not made a rational mistake in my high confidence (0.84) that your algorithm will not predict the election correctly, and you have not made a rational mistake in your high confidence (0.84) that my algorithm is mistaken.’
It is worth emphasizing that the question of whether to accept majoritarianism is independent of the more general permissiveness/uniqueness dispute, at least to this extent: it is possible to accept some versions of permissiveness, e.g., extreme permissiveness and minoritarianism, while rejecting both uniqueness and majoritarianism. So, rejecting majoritarianism is consistent with saying that we may be rationally faultless in coming to different minoritarian predictions. For example, suppose that each of us thinks our preferred party has a 0.28 probability of winning, while each of the other three parties has a 0.24 probability of winning.
Drawing the analogy with philosophical disagreement, the analogue of the binary election example is a situation where two philosophical factions hold their respective positions with modest confidence. Minoritarianism applied to philosophical views says that each faction holds that its view in a multi-proposition dispute is probably wrong, though it may hold that view with more confidence than any competitor. Only majoritarianism is consistent with philosophical factions holding with at least modest positive confidence that their view in a multi-proposition dispute is correct. A corollary of this, as we have seen, is that this requires high confidence that each of the competitor views, although not rationally mistaken, is false. The intuitive point says that majoritarianism permits far too much confidence that one's peers, although reasoning in a mistake-free fashion, have reached the wrong conclusion.
I have focused on election examples as these seem as congenial as any to permissiveness. (Recall that the binary case is adapted from Kelly's defence of permissiveness.) One thing that makes the cases congenial is that reasoning about elections appears far less complex than philosophical disputes. It is a common observation, for example, that philosophical disputes in one area have implications for other areas of philosophy. This difference in complexity, however, shows that the analogy is dialectically generous, since, other things being equal, it is easier to imagine that in dealing with complex subject matter, some or all of the parties to a dispute have made rational mistakes.
Of course, to say that majoritarianism is intuitively implausible is hardly the last word on the matter. Going forward, those who hope to show the plausibility of explaining philosophical disagreements in permissivist terms have at least a couple of options. One is to show that majoritarianism is, appearances to the contrary notwithstanding, plausible. The other is to opt for some other means to parse the question, ‘How permissive is permissiveness?,’ than in terms of the violations of intersubjective probabilities. However, even parsing the issue in some other manner will still imply, given certain basic assumptions of probabilistic consistency, that modest positive confidence in one's preferred view in a multi-proposition dispute brings with it high confidence that competitor views have reached a false conclusion (albeit in a rationally unmistaken manner).
Note too that a permissivist explanation for the problem of philosophical disagreement will have to address the dialectically conservative assumption used in our discussion, namely, the very modest level of confidence attributed to Dogmatists. At least some, perhaps many, philosophers hold their preferred views with a greater than 0.51 confidence. Consistency demands that the higher one's confidence is in one's preferred view, the more confident one should be, given majoritarianism, that the other philosophical factions have reasoned in a rationally faultless manner to the wrong conclusion. If Rawls holds JF with 0.8 confidence, for example, this translates into roughly 0.93 confidence that each of his colleagues has faultlessly reasoned to the wrong conclusion. Permissivists will then need to weigh in on the question of whether such high confidence is plausible. If it is, then the violations of intersubjective probability must be much higher than discussed in the four-person hiking example; if it is not, then permissivism will apply to only some subset of philosophical disagreements, namely, where disputants hold their confidence below some threshold.
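Again assuming the residual credence is split evenly among the three rival factions:

\[
\frac{1 - 0.8}{3} \approx 0.07, \qquad 1 - 0.07 \approx 0.93,
\]

as against roughly 0.84 when one's initial confidence is only 0.51.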
The second reason to question the plausibility of permissiveness as a solution to philosophical disagreement stems from the fact that permissiveness — in all its forms — is a type of relativism: it says that the question of whether a doxastic attitude is a rational response to some batch of evidence is relative to some epistemic standard or framework. As such, it inherits the problems often attributed to forms of epistemic relativism. To show how this plays out for majoritarianism, we will first need to do a little spadework. Consider the multi-proposition dispute between (1) ‘minoritarians’ who believe minoritarianism (but deny majoritarianism), (2) ‘uniquers’ who believe uniqueness, and (3) ‘majoritarians’ who believe majoritarianism. Let us suppose that each of these three views has supporters who hold their views with some modest positive confidence, say, 0.51 credence, so it is a majority multi-proposition dispute. Uniquers and minoritarians are committed to the view that at least one party has made a rational mistake in this dispute. Since both positions reject majoritarianism, both uniquers and minoritarians will hold that majoritarians have made a rational mistake in reasoning to their positions.
What should majoritarians say about this dispute? There are two possibilities to explore. First, suppose that majoritarians treat this dispute in the same way as they propose to explain other philosophical disputes, namely, by holding that evidence is sufficiently permissive such that no party in a majority multi-proposition dispute has made a mistake in reasoning about their position. So, on this assumption, majoritarians hold that no party made a rational mistake in the dispute among (1) to (3), in which case, majoritarians must hold that uniquers and minoritarians have made no rational mistake in holding that majoritarians hold a view that is rationally mistaken. In other words, majoritarians might summarize this by saying to uniquers and minoritarians, ‘There is no rational mistake when you folks reason from our shared evidence to the conclusion that majoritarians are rationally mistaken.’ Of course, majoritarians do not need to hold that their view is rationally mistaken.
If majoritarians hold that there is no rational mistake in uniquers and minoritarians holding that the majoritarian view is rationally mistaken, then majoritarians must also allow that there is no rational mistake in rejecting the majoritarian account of philosophical disagreement, since it relies on a view that uniquers and minoritarians rationally reject as being rationally mistaken. Majoritarians might summarize this by saying to uniquers and minoritarians, ‘You have not made a rational mistake when you folks reason that our majoritarian account of philosophical disagreement is rationally mistaken.’
The second possibility here is for majoritarians to suggest that the dispute among (1) to (3) is not a majoritarian dispute. If it is not a majoritarian dispute, then it is open to majoritarians to claim that uniquers and minoritarians have made some rational mistake in arriving at their views. One upshot of this is that majoritarianism will not provide a universal explanation for philosophical disagreement, as at least one dispute (namely, the one involving majoritarianism) is exempt from being analyzed in terms of majoritarianism. As with any view that exempts itself, there are questions as to whether self-exemption is simply a case of special pleading. Exploring this further will take us too far afield. Going forward, majoritarians may want to address this issue.
I don't think that these two reasons, individually or collectively, provide a knockdown argument against majoritarianism. One reason is that any argument against majoritarianism has the implication that we would have to reject the epistemic equality or the anti-scepticism thesis. And since this is not the place to consider whether it is preferable to renounce one of these two theses rather than majoritarianism, we must be content with a more modest conclusion: an anti-sceptical permissivist solution to the problem of philosophical disagreement requires majoritarianism — a view that comes at very steep costs.
7. The Alethic Big Bet
The scope problem for permissiveness, mentioned in the introduction, may be illustrated by a twist on the Rational Big Bet example.
The Alethic Big Bet
The Cruel God of epistemology calls before her royal throne our four heroes for a second time and says this time she is interested not in which political philosophy is a rational response to the evidence, but which theory is true. As before, she announces that the four views are mutually exclusive and jointly exhaustive. In a fit of drunken kindness, the Cruel God offers a hint. She says that she was just toying with them before and in fact the dispute between the four is a legitimate case of majoritarianism: all four have rationally faultless doxastic attitudes given E. All four are visibly relieved. They think to themselves that this solves everything. Now that they know that each of their preferred views is a rational response to the evidence, each thinker reasons that he should bet on his preferred theory. As soon as they consider that their colleagues might be reasoning in a similar fashion, they realize that something has gone terribly wrong — although they might have surmised this from the fact that the Cruel God of epistemology was laughing hysterically the whole time. The problem of course is that truth is one, but, by the stipulation of the case, rational responses are many. The Cruel God whisks each away to his own isolated Aristophanean thinkery to ruminate on how to bet.
The example is meant to be an intellectual purgative to any personal or professional attachments we might have to our views to help us focus on the accuracy of our preferred views in multi-proposition disputes (Walker, 2017). This is important since permissiveness, as discussed so far, is about the rationality of our doxastic attitudes. However, many disagreements — both philosophical and non-philosophical — seem to be primarily about the truth of the disputed views, rather than primarily about the question of the rationality of the views of the participants. Whether truth is often the primary question need not detain us. It is sufficient to note that truth is at least an important consideration, even if not the primary consideration. The Alethic Big Bet invites us to consider our reasons for thinking that our preferred view is likely true, given majoritarianism.
Of course, one thing permissivists might say is that rationality is one thing and truth quite another. Permissiveness helps only with the former. This amounts to an admission that permissiveness is, at best, a partial answer to the problem of philosophical disagreement. So, it is worth exploring what permissivists might say about the Alethic Big Bet. An influential proposal by Schoenfield offers some guidance. Following the quote above (in Section 2) on epistemic standards, Schoenfield writes:
Since what I will be saying does not rely on a particular understanding of what a standard is, we can just think of a set of standards as a function from bodies of evidence to doxastic states which the agent takes to be truth conducive. Roughly, this means that the agent has high confidence that forming opinions using her standards will result in her having high confidence in truths and low confidence in falsehoods. On the version of permissivism that I will be defending, there are multiple permissible epistemic standards, and what makes it permissible for agents to have different doxastic attitudes is that different attitudes may be prescribed by their different standards. (Schoenfield, 2014, p. 199)
There is an obvious question here about the stability of this proposal. When applied to multi-proposition disputes like the Alethic Big Bet, each party appears to have available a rebutting defeater to the thought that her epistemic standards are truth conducive, namely, that the advice to take one's epistemic standards as truth conducive is unreliable when applied to all four: each will be in a position to conclude that the advice is wrong 75% of the time.
It may be remarked that if they take Schoenfield's advice to heart, they will be in a position to believe that they have truth conducive standards while the rest believe falsely that they have truth conducive standards. But this only pushes the problem one step back. The question is how one might reason to the idea that one's epistemic standards are especially advantaged when it comes to the truth, that is, how it is that the others believe falsely that their epistemic standards are truth conducive while one's own epistemic standards are special in that they are in fact truth conducive. If this specialness is part of the original advice to think that one's standards are truth conducive, then the advice to take one's epistemic standards as special with respect to the truth is generally unreliable. If this specialness is not part of the original advice, then the question of why one should take it as true that one's epistemic standards are truth conducive while others believe falsely that their epistemic standards are truth conducive remains unanswered.
Schoenfield has two different but related responses to this worry. The first is connected with the general threat of scepticism: “… a justification for our standards of reasoning is not something we can provide independent justification for and the demand for such justification would result in widespread skepticism” (Schoenfield, 2014, p. 202). If I understand Schoenfield's reasoning correctly, since it would be absurd to take such scepticism seriously, we ought not to require anything (like an independent justification for our epistemic standards) that would lead to such an absurd result. The thought, then, is that if we are entitled to reject any form of scepticism about our epistemic standards, including Sceptical-Dogmatism, then we have a defeater for the defeater that our epistemic standards are not truth conducive.
The second response has to do with the claim that permissiveness is incompatible with the following:
TRUTH INDEPENDENCE: Suppose that independently of your reasoning about p, you reasonably think the following: “were I to reason to the conclusion that p in my present circumstances, there is a significant chance my belief would not be true!” Then, if you find yourself believing p on the basis of your reasoning, you should significantly reduce confidence in that belief. (Schoenfield, 2014, p. 202)
Schoenfield's argument, adapted to apply to majoritarian disputes, goes as follows: suppose one knows that one has made a rational response to a ‘uniqueness case’ — a case where there is only one rational doxastic attitude given E. It follows that one is in a good position to reasonably believe P even if one were to put aside one's reasoning for P in accordance with TRUTH INDEPENDENCE. The reason, says Schoenfield, is that knowing that P is the unique rational response provides a reason to suppose that P is likely to be true. However, in a majoritarian case, if one knows that one has a rational doxastic attitude P, and others have fully rational doxastic attitudes about contraries Q and R, and one reasons in accordance with TRUTH INDEPENDENCE, then one has reason to suppose that P is probably false, since most rational responses given E are probably false. Indeed, given TRUTH INDEPENDENCE, each disputant should reason that P is probably false, Q is probably false, and R is probably false. But this means that each should reason to the same conclusion; that is, there is only one rational doxastic attitude in this instance, so this is not a permissive case. Hence, if there are majoritarian cases, then TRUTH INDEPENDENCE is false.
Imagine, as the entire world anxiously looks on, that Rawls, Nozick, Cohen, and Goodin announce that they accept Schoenfield's view and apply it to reasoning about the Alethic Big Bet. Thanks to the Cruel God of epistemology, they know that each has formed a rational doxastic attitude, given their shared evidence. So, they are in a position to reject TRUTH INDEPENDENCE. And since they accept Schoenfield's view, they are entitled to think that their epistemic standards are truth conducive in such a way that they may have high confidence that the opinions formed using those standards are true, and low confidence that the opinions are false. Since they have high confidence in the opinions formed on the basis of their epistemic standards, they bet accordingly.
Predictably, three quarters of the world's population dies.
The remaining quarter of the world's population thinks to themselves, ‘It really is a bit of a shame that the four were not Sceptical-Dogmatists. For although this would not have made their views more rational — since the Cruel God has provided assurance that they all reasoned to their conclusion in a rationally mistake-free fashion — nevertheless, Sceptical-Dogmatism offers an alethic advantage: three of the four would have true beliefs about the dispute, and far fewer would have perished.’
It is worth noting that the epistemic tragedy of so many false beliefs is not simply the result of the stipulation that one might have high confidence in opinions produced by one's epistemic standards, since any credence greater than 0.5 in one's own view leads to the same result. To adapt an earlier example, suppose Rawls takes a very modest Dogmatic position with respect to JF; for example, he barely leans toward the position with a mere 0.52 credence that it is true. If he divides his remaining credence equally among the competitor views, then he should attribute a 0.16 credence that Nozick holds the correct view, a 0.16 credence that Cohen holds the correct view, and a 0.16 credence that Goodin has arrived at the correct political theory. This means that, in thinking through this, Rawls must represent himself as an alethic über epistemic superior (hereafter, AÜES): more likely to have arrived at the truth than all of his disagreeing colleagues combined (0.16 × 3 = 0.48 < 0.52).
It is not enough, to avoid Sceptical-Dogmatism, that Rawls thinks he is an alethic epistemic superior: more likely to arrive at the truth than each of his colleagues taken individually. This would be merely a minoritarian case, which, for present purposes, we may allow. For example, suppose Rawls' credences are 0.4 for JF, 0.2 for the claim that Nozick is correct, 0.2 for the claim that Cohen is correct, and 0.2 for the claim that Goodin is correct. This sort of case is an instance of Sceptical-Dogmatism. Only attributing full AÜES status to himself will suffice to avoid Sceptical-Dogmatism.
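To make the contrast explicit, the figures above can be written as a simple pair of inequalities; the notation c(X), for one's credence that view X is correct, is mine and not part of the original example:

\[
\text{AÜES case: } c(\mathrm{JF}) = 0.52 > 0.16 + 0.16 + 0.16 = 0.48.
\]
\[
\text{Merely superior case: } c(\mathrm{JF}) = 0.4 > 0.2 \text{ for each rival view, yet } 0.4 < 0.2 + 0.2 + 0.2 = 0.6.
\]

In general, only a credence above 0.5 in one's own view guarantees that it exceeds the combined credence assigned to the rival views, which is why nothing short of full AÜES status avoids Sceptical-Dogmatism.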
So, even granting that majoritarianism applies to philosophical disputes, we have done nothing to address the problem that philosophical disputes seem to require a radical rejection of the alethic equality component of the epistemic equality thesis. I say ‘radical’ here because, as we have just seen, it is not enough for our four heroes to represent themselves as alethic epistemic superiors; they must represent themselves as AÜES. True, this position is consistent with saying that there is epistemic equality in terms of evidence and reasoning prowess. Still, on this view, each is entitled to use his epistemic standards to conclude that he is probably correct (more likely to have lighted upon the truth than all the others combined) and that all the others are very likely incorrect. That is, each is entitled to represent himself as having vaulting alethic powers: one set of epistemic standards to alethically (but not rationally) rule them all. Claiming AÜES status for oneself is consistent with acknowledging that others will be in a position to believe that they possess vaulting alethic powers. Consistency, however, will require acknowledging a major asymmetry here: one's belief that one possesses vaulting alethic powers is true, while others falsely believe that they possess such powers.
It is worth noting that the same argument does not work against the use of permissiveness in binary disputes. To see why, suppose that the Alethic Big Bet is changed so that the dispute is between Rawls defending JF and Goodin championing UT. Suppose too that the Cruel God reveals that one of the two views is true. Suppose as before that Rawls and Goodin accept Schoenfield's advice — they both accept that they should have high confidence that their views are true because their epistemic standards are truth conducive. In this case, both Rawls and Goodin have at best an undermining, not a rebutting, defeater to the claim that their preferred views are (probably) true. The difference in this case is that there is only one competitor view, and they have no reason to think that the competitor view is more likely to be true than their preferred view. At most, they may concede that the opposing view is equally likely. This is important because some are willing to entertain the idea that one is justified in believing P, even in the face of an undermining defeater,Footnote 33 but there is little or no support for the idea that one's belief that P might be epistemically justified in light of a rebutting defeater.
We are left with a dilemma. On the one hand, if you think that scepticism is absurd, or that it is intuitively obvious that there are majoritarian disputes (and so you reject TRUTH INDEPENDENCE), then you have a reason to reject Sceptical-Dogmatism. The cost, as noted, is rejecting the idea of alethic equality. To what extent this is a solution to the problem of philosophical disagreement can be judged by imagining hearing, sotto voce, the following from a colleague in a multi-proposition dispute:
Yours is a rational doxastic response, just as mine is. Oh, by the way, I am an AÜES. I am more than three times as likely as you or any of our other colleagues to have arrived at the truth in this dispute. My superior truth conduciveness is not due to an evidential or reasoning advantage; rather, it is because my epistemic standards are, well, mine, and so truth conducive in a way that yours are not. Of course, you are entitled to believe that your epistemic standards are truth conducive in a way mine are not, and so to represent yourself as an AÜES. But this is just to say that you are entitled to believe falsely that you are an AÜES.
On the other hand, if we do not represent ourselves as AÜES, then we must accept that our preferred view in a multi-proposition dispute is probably false.Footnote 34 This would result in a good deal more agreement, namely, agreement that each view in a multi-proposition dispute is probably false, although by no means would it require complete agreement. The cost of this horn is rejecting the anti-scepticism thesis, since it requires saying that Dogmatists reason incorrectly: they should accept Sceptical-Dogmatism. This sins against the intuitive argument that it is rational for Dogmatists to hold contrary positions. Admittedly, this cost may be very considerable, since it may well be that the default (or at least majority) assumption among philosophers at present is that philosophers can rationally hold contrary Dogmatic positions.
8. Reprise: Multi-Proposition Disputes
It is clear that the idea of multi-proposition disputes does a lot of work in the argument, so it will be helpful to address the following objection:
The distinction between multi-proposition disputes and binary disputes is of little importance since the former can easily be transposed into the latter. Assume P, Q, and R are contraries, in which case each of Q and R implies not-P. So, the multi-proposition dispute can be transposed into a binary disagreement. And the binary model helps in response to the argument above that the multi-proposition model requires us to choose between representing ourselves as AÜES and accepting Sceptical-Dogmatism. In other words, if the binary model is correct, then one might simply lean to one side of P versus not-P without having to represent oneself as an AÜES. To illustrate, suppose Rawls argues that the disagreement about political philosophy is best modelled as (J) versus (not-J). He reasons further, in response to the Alethic Big Bet, that he will lean ever so slightly towards (J), which makes him a Dogmatist about (J), but the epistemic edge he attributes to himself over proponents of (not-J) is ever so slight; for example, he attributes a probability of 0.52 to (J) and 0.48 to (not-J).
There are at least two obstacles to this proposal. The first obstacle involves deciding which binary model is correct, since the other three might reason in the same way as Rawls, as illustrated in Table 3.
The inconsistency is apparent: given that exactly one of the four views is true, the contradictory of each home-field view is simply the disjunction of the other three views. Since at most one theorist is correct, we would need some principled answer as to which, if any, of the four has the correct model. Not only is there no principled way to decide who is correct, but the dispute about which binary model is correct is itself a multi-proposition dispute: there are four different, mutually incompatible binary models from which we must choose.Footnote 35
The second obstacle, independent of the first, can be seen by imagining what Rawls might say in defence of (J) as the home-field view. He might claim, for example, that the fact that he has brought his considered judgements into reflective equilibrium justifies his assignment of 0.52 to (J). When asked, he agrees with the following conditionals:
If (L), then (not-J).
If (S), then (not-J).
If (U), then (not-J).
This means that consistency demands that Rawls attribute at most a combined probability of 0.48 to the claim that either (L), (S), or (U) is true. If Rawls distributes this credence equally across the three, then at most he can attribute a 0.16 probability that (L) is true, and the same for (S) and (U). But this is just to say that the appeal to a particular binary model does nothing to avoid the question of how likely one's position is to be true, in a multi-proposition dispute, as compared with each of the opposing factions. In our example, Rawls must attribute a reasonably high probability (0.48) to an implication shared by his colleagues' views, namely (not-J), while attributing only a very small probability (0.16) to each of their views considered individually. So, even granting Rawls' preferred binary model does nothing to avoid the implication that Rawls must represent himself as an AÜES relative to his colleagues who defend (L), (S), and (U). The binary model makes the problem a little more difficult to see; it does nothing to avoid it.
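The constraint can also be set out as a short calculation; the credence-function notation c(·) is mine, while the figures are Rawls' from the example above:

\[
c(J) = 0.52 \;\Rightarrow\; c(\text{not-}J) = 0.48.
\]

Since (L), (S), and (U) are contraries and each implies (not-J),

\[
c(L) + c(S) + c(U) \le c(\text{not-}J) = 0.48, \qquad \text{so an equal split of the remaining credence gives at most } c(L) = c(S) = c(U) = 0.16 < 0.52 = c(J).
\]

That is, the binary framing already commits Rawls to the very credence distribution that makes him an AÜES in the four-way dispute.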
9. Conclusion
As noted in the introduction, one of the advertised benefits of applying permissiveness to philosophical disagreements is that it explains how philosophers might agree to disagree while still respecting the epistemic credentials of their colleagues. I have not argued the strong thesis that permissiveness has no part to play in answering the problem of philosophical disagreement. Rather, I have advanced the weaker claim that any plausible form of permissiveness is not by itself a panacea.Footnote 36 The problem of scale suggests that permissiveness applied to many philosophical disagreements requires the truth of majoritarianism, but majoritarianism looks implausible. The problem of scope is that even if we can respect the rationality of our colleagues' views, we cannot accept that their views are true. We must view ourselves as believing (truly) that we are AÜES, which seems incompatible with the idea that we respect as equals the epistemic credentials of our colleagues, since at best we may view them as permissibly believing (falsely) that they are AÜES.
Acknowledgements
Thanks to several anonymous referees. I am especially grateful (and indebted) to two referees who put in a supererogatory effort on several rounds of revisions. If it weren't so burdensome to readers, almost every page would have several footnotes acknowledging their assistance. Thanks also to Jean-Paul Vessel for numerous helpful conversations about these issues while playing pool.