Evidentialism as an account of theoretical rationality is the position that,
(Evidentialism) a doxastic attitude, D, toward a proposition, p, is rational for an agent, S, at a time, t, iff having D(p) fits S's evidence at t
where the fittingness of D(p) on S's evidence is typically analyzed in terms of evidential support for the propositional contents of the attitude (i.e., p).Footnote 1 For instance, belief in a proposition best fits one's evidence, and is thus the rational attitude to take according to evidentialism, “[w]hen the evidence better supports [the] proposition than its negation” (Feldman and Conee 2005: 97).Footnote 2 Evidentialism is a popular and well-defended position; however, recently, it's been argued that misleading higher-order evidence (HOE) – roughly, evidence about one's evidence or about one's cognitive functioning – poses a problem for evidentialism.Footnote 3 Take the following case of misleading HOE, which I will call “Flight”:
Imagine you are flying a small, propeller-driven aircraft. Midway through your journey you calculate that you have enough fuel to make it to your destination on the basis of your true beliefs regarding the current fuel level of your aircraft, the distance to your destination, the miles per gallon your aircraft can travel given its current speed, etc. To make this case as strong as possible, let's stipulate that your evidence entails your conclusion. After performing the calculation, your co-pilot – who has the same evidence as you – (incorrectly) asserts that your evidence doesn't support your conclusion; you made a miscalculation, which caused you to adopt a belief whose propositional contents are unsupported by your evidence. From a long history of working with your co-pilot you know her to be (dispositionally) a significantly stronger reasoner than yourself. Whenever you've disagreed about what propositions are evidentially supported by a body of evidence, your co-pilot has been right, and you've been in error. However, unbeknownst to you, your co-pilot is sleep-deprived and isn't her regular, hyperrational self. This is the first time that a disagreement over evidential support is explained by a reasoning error on your co-pilot's part rather than on yours.
The misleading testimony from your co-pilot (the HOE) doesn't change the fact that your total evidence still entails – and, thereby, provides very strong evidential support for – the proposition that you have enough fuel to make it to your destination. Entailment is monotonic; the fact that your evidence entails a particular proposition cannot be altered by gaining further evidence. Therefore, according to evidentialism it is (propositionally) rational to believe that you have enough fuel to make it to your destination. However, your co-pilot's testimony that your evidence doesn't support the proposition that you have enough fuel to make it to your destination, along with your knowledge that your co-pilot is (dispositionally) a significantly stronger reasoner than yourself, appears to give you very strong evidential support for <it's not the case that the proposition that you have enough fuel to make it to your destination is supported by your evidence>. In Flight, your total evidence appears to support an akratic conjunction, that is, a proposition of the following form:
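The monotonicity point can be put schematically. The following is my illustrative rendering (with E for the first-order evidence, T for the co-pilot's testimony, and F for the fuel proposition), not notation from the original:

```latex
% Monotonicity of (classical) entailment: adding premises never undoes entailment.
\Gamma \models p \;\Longrightarrow\; \Gamma \cup \Delta \models p
% Applied to Flight: if E \models F, then E \cup \{T\} \models F.
% The co-pilot's testimony T cannot break the entailment from E to F;
% at most, T bears on the higher-order question of whether E supports F.
```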
(Akratic Conjunction) p, but my evidence doesn't support p.
Thus, according to evidentialism, it appears that it is (propositionally) rational to adopt an akratic belief (a belief in an akratic conjunction). However, akratic beliefs appear to be clearly irrational, despite the fact that their propositional contents can (seemingly) be strongly supported by one's evidence in cases of misleading HOE, like Flight.
Although I've framed the discussion thus far in terms of evidentialism, the issue is an instance of a more general problem, which I join Ru Ye (2014) in calling “Fumerton's Puzzle.” Fumerton's puzzle affects any theory of rationality that takes some condition(s), c, to be necessary and sufficient for its being the case that a proposition, p, is rational to believe such that the following are true:Footnote 4
(Rational Belief) Belief in p is rational iff p meets c. (Assuming evidentialism, Rational Belief amounts to the claim that believing p is rational iff p is adequately supported by one's evidence.)
(Licensed Failure) It is possible that p and the proposition that p doesn't meet c both meet c. (Assuming evidentialism, Licensed Failure amounts to the claim that cases, like Flight, are possible in which one's evidence supports both p and <p isn't supported by one's evidence>.)
(Anti-akrasia) It's not the case that belief in the proposition <p, yet p does not meet c> is ever rational. (Assuming evidentialism, Anti-akrasia amounts to the claim that akratic beliefs are never rational.)
Rational Belief and Licensed Failure entail that it's possible that it is rational to believe an akratic conjunction, that is, a proposition of the form “p, yet p does not meet c,” while Anti-akrasia appears to be the denial of this possibility. Given the structure of the problem, there are two straightforward ways to save our favored account of rationality, whatever that account may be: (i) we can reject Licensed Failure and argue that it's not possible that we occupy an epistemic circumstance in which a proposition, p, and <p doesn't meet c> both meet c. In the context of evidentialism, denying Licensed Failure amounts to arguing that, for example, we can never have sufficient misleading HOE so that our total evidence supports both p and <our evidence doesn't support p>.Footnote 5 Alternatively, (ii) we can deny Anti-akrasia, which, in the context of evidentialism, amounts to accepting that, in certain circumstances, akratic beliefs can be rational.Footnote 6, Footnote 7
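The inconsistency can be displayed schematically as follows. This is my rendering, not the author's formalism, with R(p) abbreviating “belief in p is rational” and C(p) abbreviating “p meets c”; the step from the rationality of each conjunct to the rationality of the conjunction assumes that meeting c agglomerates over conjunction:

```latex
\begin{align*}
\text{Rational Belief:} &\quad R(p) \leftrightarrow C(p)\\
\text{Licensed Failure:} &\quad \Diamond\big(C(p) \wedge C(\neg C(p))\big)\\
\text{Hence (given agglomeration):} &\quad \Diamond\, R\big(p \wedge \neg C(p)\big)\\
\text{Anti-akrasia:} &\quad \neg\Diamond\, R\big(p \wedge \neg C(p)\big)
\end{align*}
```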
In this paper, I argue for a third solution to Fumerton's puzzle. I defuse Fumerton's puzzle by suggesting that we read Rational Belief as a claim about propositional rationality and Anti-akrasia as a claim about doxastic rationality (I discuss the propositional/doxastic distinction in the following section). There is no conflict between Rational Belief, Licensed Failure, and Anti-akrasia, if Rational Belief and Anti-akrasia invoke two different senses of “rational.”Footnote 8 The solution is a general one. Insofar as your favored account of rationality allows you to draw a distinction between propositional and doxastic rationality, you will be able to use the solution. Of course, it's beyond the scope of this paper to detail how my solution functions for every plausible account of rationality. I offer a thorough discussion of my solution in the context of evidentialism, as evidentialism is assumed in much of the literature on Fumerton's puzzle. In addition, for ease of discussion, I assume evidence consists of propositions (Dougherty 2011). However, my general solution doesn't hinge on accepting evidentialism or propositionalism about evidence. If you aren't partial to evidentialism or propositionalism about evidence, my discussion may still provide a roadmap for defusing Fumerton's puzzle. The details of the lessons drawn for evidentialism are applicable, mutatis mutandis, to other accounts of rationality as well.
The paper is structured as follows. In Section 1, I discuss the distinction between propositional and doxastic rationality in terms of reasoning and epistemic basing. In addition, I discuss my choice to read Rational Belief as a claim about propositional rationality and Anti-akrasia as a claim about doxastic rationality. In Section 2, I argue that one cannot inferentially base an akratic belief in one's evidence, and, thus, one cannot (doxastically) rationally possess an akratic belief. In Section 3, I distinguish my view from other positions in the extant literature that invoke the propositional/doxastic distinction in the context of Fumerton's puzzle or in similar contexts involving misleading HOE. In Section 4, I address the worry that my solution to Fumerton's puzzle commits the evidentialist to the possibility of epistemic circumstances in which a proposition, p, is propositionally rational to believe (namely, an akratic conjunction), yet one cannot, in principle, (doxastically) rationally believe p.
1. Propositional and doxastic rationality
It's commonly accepted that evidentialist accounts of justification are accounts of propositional, as opposed to doxastic, justification. Similarly, we should accept that an evidentialist account of rationality is a theory of propositional rationality. Although many run talk of rationality and justification together as if the two were the same notion, I do not assume the two to be identical.Footnote 9 Nonetheless, I take it that the propositional/doxastic distinction can apply to rationality as well. In the remainder of the paper, I talk of justification and rationality interchangeably for ease of discussion. Treating the two as interchangeable is harmless in the context of my argument.
Roughly, on an evidentialist framework, a proposition is propositionally rational to believe when there is sufficient evidence to warrant believing the proposition, and a belief is doxastically rational when one holds the belief on the basis of that evidence.Footnote 10 Propositional rationality is a feature of propositions, whereas doxastic rationality is a feature of beliefs. Traditionally, propositional rationality is taken to be (conceptually/theoretically/metaphysically) primary – one's belief in a proposition, p, is doxastically rational only if (i) p is propositionally rational to believe, and (ii) one epistemically bases one's belief on adequate evidence (Korcz 1997, 2000).Footnote 11
It's also commonly accepted that there are, roughly, two cognitive means of basing a belief, B(p), in one's evidence, depending on the type of evidence one has for p: either (i) one can base B(p) inferentially by inferring B(p) from an antecedent set of attitudes, where the propositional contents of those attitudes constitute one's relevant evidence for p, or (ii) one can base B(p) non-inferentially as a direct response to an experience (or other relevant non-doxastic representational state) that p (Boghossian 2018; Moretti and Piazza 2019). I take it that if one is to properly base an akratic belief in one's evidence, one must do so inferentially. We don't have experiences with propositional contents of the conjunctive form “p, but my evidence doesn't support p” upon which we can directly base an akratic belief. Instead, akratic beliefs need to be inferred (e.g., from beliefs in the conjuncts). Thus, I assume that to properly base an akratic belief in one's evidence, one must do so inferentially.Footnote 12
In the following section I argue that one cannot rationally base an akratic belief in one's evidence in the following sense:
(Thesis) Basing an akratic belief in one's evidence necessitates committing oneself to a contradiction.
In order to establish Thesis, I draw extensively from recent philosophical work on inference and cognitive psychological work on the metacognitive monitoring and control procedures involved in inference. Metacognition is, roughly, “cognition about one's own cognition” (Dokic 2014), and metacognitive monitoring and control are important executive functions that afford us flexibility in regulating our thoughts. I argue that it is not possible to infer an akratic belief without committing oneself to a contradiction. Thus, insofar as akratic beliefs can only be inferentially based, Thesis follows.
As I demonstrate, the mere fact that an agent, S, possesses evidence that strongly supports a proposition, p – like an akratic conjunction – doesn't entail that S can, in principle, do what is constitutive of properly basing a belief in p in S's evidence. Cases of misleading HOE, like Flight, are such that (i) a certain proposition (an akratic conjunction) is propositionally rational to believe, yet (ii) one cannot adopt a (doxastically) rational belief in the proposition. In Section 4, I argue that there are cases outside of those involving misleading HOE where (i) and (ii) hold. There is no theoretical cost to the evidentialist in arguing that cases of misleading HOE are cases in which both (i) and (ii) hold, as the evidentialist is already committed to the joint possibility of (i) and (ii) by other types of cases.
So, given that evidentialism is a theory of propositional rationality, we can consistently accept Rational Belief, Licensed Failure, and Anti-akrasia, if we accept that Anti-akrasia is a claim about doxastic rationality. But why ought we be inclined to read Anti-akrasia as a claim about doxastic rationality? Although there is little extended discussion in the extant literature of why akratic beliefs are (or at least appear to be) irrational, the discussions that do occur often focus on what it would be like for an agent to possess an akratic belief.Footnote 13 Sophie Horowitz (2014) and Jessica Brown (2018: Ch. 6), for example, motivate the claim that akratic beliefs are irrational on the basis of the poor reasoning dispositions and irrational actions that possessing akratic beliefs would engender. In his (2015), Clayton Littlejohn asks the reader to imagine a conversation with our epistemic conscience regarding our possession of an akratic belief. As Littlejohn writes, the discovery that we possess an akratic belief, “should be the beginning of epistemic self-assessment and revision, not the conclusion of it…The mindset of [a person who knowingly possesses an akratic belief] is opaque” (ibid.: 265). Alexander Worsnip notes that possessing an akratic belief:
amounts to saying “I have nothing that gives any adequate indication to me that p is the case; nevertheless, p is the case”…First-personally, these states do not seem capable of withstanding serious reflection. And third-personally, while we can imagine such agents, in describing and explaining them we reach for some story involving self-deception or a failure to recognize their own mental states. (Worsnip 2018: 17)
Instead of focusing on the reasons one might possess that support an akratic conjunction (propositional rationality), Horowitz, Brown, Littlejohn, and Worsnip draw our attention to the utter peculiarity of a mind that possesses an akratic belief (doxastic rationality). The intuitive pull of anti-akrasia – the position that akratic beliefs are irrational – is grounded in the aberrant psychology of one who possesses an akratic belief, as opposed to the strength (or lack thereof) of the evidential support that one possesses for the propositional contents of the akratic belief.
In addition, the reason theorists use the term “akratic” to talk about akratic beliefs is because of the structural similarity between akratic belief and practical akrasia (Greco 2014). Practical akrasia (in one of its forms) is a matter of intending to perform an action (or in fact performing an action) that one believes one ought not perform (Wedgwood 2013). The irrationality of practical akrasia (insofar as we accept that practical akrasia is possible) is not a function of the epistemic and practical reasons one might possess for adopting both (i) a belief about what one ought to do and (ii) an intention to act in a contrary manner. The irrationality of practical akrasia is a function of the conflict between (i) and (ii) as possessed by an agent – that is, the conflict of intending to act in a way that one believes one ought not.
Interpreting anti-akrasia as a claim about doxastic rationality is not a mere ad hoc assumption used to get my solution to Fumerton's puzzle off the ground. In arguing that one can't properly base an akratic belief in one's evidence – and, thus, that akratic beliefs are doxastically irrational – I offer an account of the aberrant psychology of one who possesses an akratic belief that reflects why we intuitively find akratic beliefs to be irrational and that respects the structural similarity between akratic belief and practical akrasia.
That being said, Declan Smithies (2019: Ch. 9) offers a novel argument for why an akratic conjunction cannot be propositionally rational to believe. Roughly, Smithies argues that belief “aims at knowledge” in the following sense:
Necessarily, you have justification to believe that p only if you have justification to believe that you're in a position to know that p. (ibid.: 306)
Akratic conjunctions, however, are “knowably unknowable,” to use Smithies' turn of phrase. It's easily demonstrated that one cannot know a proposition of the form “p, but my evidence doesn't support p” – if one knows one of the conjuncts, one can't know the other. For instance, if one knows p, then it must be the case that one is justified in believing p (given knowledge requires justification). Assuming evidentialism, one's total evidence must, thereby, support p. So, it's not the case that one's evidence doesn't support p. Because one cannot know a false proposition, one doesn't know that one's evidence doesn't support p. Given that akratic conjunctions are knowably unknowable, and belief aims at knowledge in the above sense, we can't have justification to believe akratic conjunctions.
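The unknowability argument can be laid out as a short derivation. This is my reconstruction, with K for knowledge, J for justified belief, and S for “supported by one's total evidence”:

```latex
\begin{align*}
&1.\; K\big(p \wedge \neg S(p)\big) && \text{assumption, for reductio}\\
&2.\; Kp \wedge K\big(\neg S(p)\big) && \text{from 1, distributing } K \text{ over conjunction}\\
&3.\; Kp \rightarrow Jp && \text{knowledge requires justification}\\
&4.\; Jp \rightarrow S(p) && \text{evidentialism}\\
&5.\; K\big(\neg S(p)\big) \rightarrow \neg S(p) && \text{factivity of knowledge}\\
&6.\; S(p) \wedge \neg S(p) && \text{from 2--5: contradiction}
\end{align*}
```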
Although I disagree with Smithies' claim that belief aims at knowledge (at least in the sense that he explicates), engaging with Smithies' arguments would take us too far afield. Instead of further defending the claim that we ought to read Anti-akrasia as a claim about doxastic rationality, I position my argument as an exploration of a possible solution to Fumerton's puzzle – a solution that has the theoretical benefit of retaining many of our intuitions about cases of misleading HOE, like Flight. For the sake of this paper, I assume that akratic conjunctions can be propositionally rational to believe. In other words, I assume that Licensed Failure is true for evidentialism. In cases like Flight, intuitively, your evidence seems to (on balance) support an akratic conjunction. Thus, given evidentialism, an akratic conjunction is propositionally rational to believe. Of course, a defense of this position would require responding to Smithies and, more broadly, advocates of the fixed-point thesis who deny Licensed Failure and argue that one cannot be rationally mistaken about the demands of (propositional) rationality. However, seeing that others have already responded to the fixed-point thesis in the literature (e.g., Field 2019; Skipper 2019a) and many philosophers accept that akratic conjunctions can be propositionally rational to believe (e.g., Coates 2012; Lasonen-Aarnio 2014, 2020; Weatherson 2019), I will not devote space to responding to Smithies or the fixed-point thesis here.
Nonetheless, my view is also able to maintain that there is something clearly irrational about akratic beliefs. On my account, the irrationality of akratic beliefs has nothing to do with evidential support for akratic conjunctions; instead, as I argue, the irrationality has to do with attempts to base an akratic belief in one's evidence. Like Horowitz, Brown, Littlejohn, Worsnip, and others, I explain the irrationality of akratic beliefs in terms of the aberrant psychology of one who possesses an akratic belief. For the sake of space, the bulk of the paper will be devoted to providing a novel defense of the claim that akratic beliefs are doxastically irrational, despite it being possible that akratic conjunctions can be propositionally rational to believe. Thus, on my view:
(1) we don't have to accept the fixed-point thesis, a view that, as Claire Field (2019) notes, even some advocates admit is counterintuitive, yet
(2) we can also accept Anti-akrasia by reading it as a claim about doxastic rationality.
In addition, as I discuss in Section 4, my view comes at little theoretical cost to the evidentialist.
2. Reasoning and rationality
Recently, a cottage industry has formed with the goal of analyzing person-level reasoning and inference.Footnote 14 The dominant position in the literature is that reasoning consists of rule-governed operations defined over propositional attitudes (or their contents) (Boghossian 2014, 2019; Broome 2013). In reasoning, one transitions from propositional attitudes to propositional attitudes in virtue of following (as opposed to merely conforming to) a rule, where the rules one follows are (or at least can be modeled as) functions from sets of propositions to further propositions. The structure of the rules reflects the common evidentialist sentiment that rationality is a function of apportioning one's doxastic attitudes to one's evidence. For instance, as Anna-Sara Malmgren claims:
for a proposition, q, to be (good) reason or evidence to believe another proposition, p, q must stand in an appropriate logical – or, more broadly, implication or confirmation – relation to p…A “good (inference) rule,” in turn, is just a rule that encodes some such relation…. (2018: 224, emphasis mine)
What matters for our purposes are not the details of a developed account of reasoning but how philosophers have attempted to distinguish reasoning from other state transitions between propositional attitudes. For instance, a psychoanalyst may ask her patient to engage in free association, which can certainly involve a transition between propositional attitudes, and which may provide the grounds for some rather profound insights into the patient's psyche. However, associative transitions are not inferential.
It's my contention that what (at least in part) separates inference from associations and other non-inferential types of transitions between propositional attitudes is the following:
(Commitment) Inference is a commitment-constituting process. More specifically, what distinguishes inference from other state transitions between propositional attitudes is that inferring a belief, B(p), from a set of doxastic attitudes, Γ, constitutively involves the reasoner committing herself to the truth of the claim that the propositional contents of Γ support p.
In Section 2.1, I defend Commitment by arguing for a narrower claim, namely, Paul Boghossian's Taking Condition (which I define in Section 2.1), on which Commitment comes out true. I also argue that Commitment, along with our assumptions about the nature of misleading HOE, entails Thesis. Finally, I argue that Commitment allows propositional and doxastic rationality to come apart in cases of misleading HOE, such that an akratic conjunction can be propositionally rational to believe while it is not possible for one to properly base an akratic belief.
As I discuss in Section 2.2, there are several theorists who reject the Taking Condition but, nonetheless, accept Commitment. Ultimately, what matters for my solution to Fumerton's puzzle is that (i) Commitment is true, and (ii) Commitment, along with our assumptions about the nature of misleading HOE, entails Thesis. Although I have particular views about the nature of inference and what makes Commitment true – views that I defend in Section 2.1 – as long as one accepts Commitment, one can avail oneself of my solution to Fumerton's puzzle.
2.1. Commitment, the Taking Condition, and Thesis
Commitment is reminiscent of a popular, much discussed means of distinguishing inference from other types of attitudinal transitions, namely, Boghossian's Taking Condition:
(Taking Condition) Inferring necessarily involves (i) the thinker taking her premises to support her conclusion and (ii) drawing her conclusion because of (i). (Boghossian 2014)Footnote 15
As I've framed the Taking Condition, it is composed of two claims, namely, that inference necessarily involves a thinker
(1) taking her premises to support her conclusion, where this taking is typically assumed to be a representational state, more specifically, either a belief or an intuition, and
(2) drawing her conclusion because of this taking, such that the taking (in part) explains the fact that she draws it.
Although the Taking Condition is not ubiquitously accepted (e.g., McHugh and Way 2016; Wright 2014), it has ample intuitive appeal and successfully demarcates inference. For instance, the impetus for an associative transition in thought is not an agent's recognition of an epistemic support relation but the existence of some context-relevant commonality between the content of the agent's thoughts.
Theorists explicate the taking relation – that is, the relation one takes there to be between one's premises and conclusion – in different ways. As stated, on Boghossian's account the taking relation involves one's premises “supporting” one's conclusion.Footnote 16 According to Markos Valaris (2014, 2016), the taking relation holds when one's conclusion “follows” from one's premises. On Ram Neta's (2013) account, the taking relation holds when one's premises give one justification to believe one's conclusion. Finally, according to Anders Nes (2016), the taking relation requires that one's premises naturally mean one's conclusion in Grice's (1957) sense of “natural meaning.” Although I will use Boghossian's terminology of “support,” what is important for our purposes is that on all accounts of the taking relation, it can't be the case that the relation holds between one's premises and conclusion and yet one's premises do not evidentially support one's conclusion.
In the following, I defend an interpretation of the Taking Condition on which the taking state constitutes an intuition. Additionally, as I demonstrate, Commitment comes out true on my interpretation, and Commitment, along with our assumptions about the nature of misleading HOE, entails Thesis. However, it should be noted in passing that on a doxastic account of taking, Commitment also comes out true, and it is clearly impossible for one to infer the conjuncts of an akratic belief (or the akratic conjunction itself) without committing oneself to a contradiction. On a doxastic account (e.g., Deutscher 1969; Neta 2013; Valaris 2014, 2017, 2020), taking consists in believing that one's premises support one's conclusion and drawing an inference in virtue of this belief. Recall, in cases of misleading HOE, like Flight, an agent possesses a total body of evidence on which a first-order proposition, p, and a higher-order proposition, <p isn't supported by the agent's evidence>, are both evidentially supported such that both propositions (and, thus, the akratic conjunction of the two) are propositionally rational to believe. If the agent infers p from her evidence, then, according to the doxastic account of taking, the agent must believe that her evidence supports p (reasoning constitutively requires that one adopt this higher-order belief on the doxastic account). However, if the agent also believes the higher-order proposition that p isn't supported by her evidence, then the agent will believe both that p is supported by her evidence and that it's not the case that p is supported by her evidence. Thus, if an agent reasons to an akratic belief in a case of misleading HOE, like Flight, she will end up believing a contradiction. Insofar as beliefs clearly constitute commitments, Commitment is true on the doxastic account of taking, and Thesis straightforwardly follows.
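On the doxastic account, the argument of the preceding paragraph has the following shape. This is my schematic summary, with B for belief and S for “supported by the agent's evidence,” and it assumes the agent's beliefs agglomerate over conjunction:

```latex
\begin{align*}
&1.\; \text{The agent infers } p \text{ from her evidence.}\\
&2.\; \text{Doxastic taking: inferring } p \text{ requires } B\big(S(p)\big).\\
&3.\; \text{She also believes the higher-order conjunct: } B\big(\neg S(p)\big).\\
&4.\; \text{So she holds } B\big(S(p)\big) \text{ and } B\big(\neg S(p)\big)
      \text{; agglomerating, she believes the contradiction } S(p) \wedge \neg S(p).
\end{align*}
```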
Although many theorists find the doxastic account of taking compelling, there are good reasons to be dubious of the account. If taking is understood as full-fledged belief, it appears that the Taking Condition (i) engenders a familiar Carrollian (1895) regress (mustn't the taking belief be reasoned to and, therefore, itself require a meta-level taking belief?) and (ii) over-intellectualizes reasoning (children and at least some non-human animals can reason despite lacking the relevant conceptual competences to formulate beliefs regarding epistemic support). There are good responses to (i) and (ii) in the literature (e.g., Müller 2019; Valaris 2014), but it is not my intent to defend the doxastic account of taking. Instead, I proceed to defend my favored, intuitional account.
Other theorists argue that taking consists of an intuition or intellectual seeming that one's premise attitudes support one's conclusion (e.g., Broome 2013; Chudnoff forthcoming; Dogramaci 2013). Minimally, an intuitional account of taking avoids the Carrollian regress, as intuitions aren't the result of inference, and, therefore, wouldn't require a meta-level taking intuition. However, intuitions needn't constitute commitments to their representational contents. Thus, it's less clear whether, on an intuitional account, Commitment comes out true. For instance, it's not irrational for it intuitively to seem to one that p (e.g., that one's premises support a particular conclusion) while one adopts the belief that not-p, provided one has sufficient reason to reject the intuition.
As I argue, our intuitions regarding which propositions (or proposition types) support which guide the inferences we make. These intuitions constitute commitments to the proposition that one's premise attitudes support one's conclusion in virtue of the guiding role the intuitions play in inference. In order to unpack my claim that intuitions can constitute commitments in virtue of the guiding role they play in inference (and in other cognitive processes, more broadly), I draw from recent work in cognitive psychology on metacognitive monitoring and control, and meta-reasoning in particular (Ackerman and Thompson 2015, 2017a, 2017b). It's my contention that recent work on metacognition empirically vindicates an intuitional version of the Taking Condition. However, before I proceed to discuss meta-reasoning, I first discuss metacognition in the case of memory search in which an agent initiates and guides a search of long-term memory. Much of the recent literature on metacognition focuses on mnemonic processing. By first discussing metacognition in the context of memory search I am more easily able to introduce central concepts in the metacognition literature and explain how intuitions can constitute commitments.
In searching long-term memory for an episodic memory of an event or the semantic memory of a set of facts, a series of metacognitive representations allow us intelligently to guide the search process in terms of initiating, persisting in, and terminating the search in light of the likelihood of successfully retrieving relevant information. These metacognitive representations are instances of what cognitive psychologists call epistemic or noetic feelings (Arango-Muñoz 2014; de Sousa 2009; Dokic 2014). As Arango-Muñoz and Michaelian write, “[f]eelings, in general, are spontaneously-emerging occurrent phenomenal experiences” (Arango-Muñoz and Michaelian 2014). Epistemic feelings, in particular, are feelings with certain types of evaluative content directed at cognitive processes. Although the correct account of epistemic feelings is contentious, we can summarize the dominant account of epistemic feelings in the following four claims:
(1) Epistemic feelings are intentional states with representational content directed at cognitive processes that constitute evaluations of the processes. For instance, tip-of-the-tongue (TOT) states are commonly experienced epistemic feelings directed at a memory retrieval process (Brown Reference Brown1991). TOT states represent that (/constitute a seeming that) one knows something while not, presently, being able to access (fully) that knowledge such that further mnemonic search may likely succeed in recalling the information that remains unaccessed.
(2) Epistemic feelings play a crucial role in guiding intellectual activity and are closely linked to agency in thought (da Sousa Reference da Sousa2009). For example, TOT states assist in an agent's flexible decision regarding whether to continue to expend cognitive resources on a memory search.
(3) Epistemic feelings are the result of type-1 processes.Footnote 17 In other words, epistemic feelings are not the product of controlled deliberation but are generated by automatic processes operating non-consciously. For example, TOT states aren't generated by a conscious, deliberative estimation of the chance of successful recall on the basis of available evidence. Instead, they are generated by automatic processes operating outside of consciousness.
(4) Finally, epistemic feelings have a phenomenology. For instance, there is something it is like to be in a TOT state – for it to seem as if one knows something while not, presently, being able to access (fully) that knowledge.
Although I will talk of epistemic feelings – thus, using the terminology of cognitive psychology – theorists who accept an intuitional account of taking, like Sinan Dogramaci and Elijah Chudnoff, would categorize epistemic feelings as intuitions. Chudnoff (Reference Chudnoff2020) even mentions a particular epistemic feeling, the feeling of rightness, by name in a recent discussion of intuition.Footnote 18 Epistemic feelings are a particular subtype of intuition, where intuitions are, roughly, sui generis seemings, distinct from perception and occurrent belief (Chudnoff Reference Chudnoff2013).Footnote 19 It should also be noted that epistemic feelings are not some recherché theoretical posit exclusively discussed in cognitive psychology. In fact, several philosophers have recently employed epistemic feelings for a litany of theoretical ends. For instance, Matthew Frise (Reference Frise and McCain2018) uses epistemic feelings in a defense of evidentialism. Anna Drożdżowicz (Reference Drożdżowicz2023) appeals to epistemic feelings in offering an account of the experience of understanding an utterance in a language in which one is fluent. And, finally, Jacques-Henri Vollet (Reference Vollet2022) appeals to epistemic feelings in his analysis of epistemic excuses.
So, what types of epistemic feelings play a role in guiding a search of long-term memory? In attempting to recall some event, set of facts, etc., an initial feeling of knowing will occur before any information is consciously accessed from long-term memory.Footnote 20 The gradable strength of the feeling of knowing constitutes, for an agent, an assessment of the relative likelihood that a memory search will be successful (Reder Reference Reder and Bower1988). This initial feeling of knowing, thus, guides an agent's choice to search long-term memory. For instance, in determining the product of two integers agents will use a feeling of knowing to determine whether they need to explicitly calculate the product using an algorithm like long multiplication, or whether they can just recall the product from a rote memorized multiplication table, thus forgoing calculation (Paynter et al. Reference Paynter, Reder and Kieffaber2009). As a search unfolds, feelings of processing fluency, that is, the experience of the demandingness of the cognitive task, are taken by the agent to represent whether further search will (continue to) produce results or whether search should be terminated. As representations are accessed from long-term memory they may be accompanied by, what Johnson et al. (Reference Johnson, Hashtroudi and Lindsay1993) call, a feeling of pastness that indicates to the agent that the representations are of remembered events or facts as opposed to, for example, merely imagined or unrelated events or facts. For instance, when attempting to recall a previously seen list of words – a commonly used task in cognitive psychological research on memory – the activation of a representation of one word may activate representations of semantically associated words, even if those semantically associated words were not on the originally observed list.
Agents may use the accompanying feeling of pastness to determine the source of the activated word representation, for example, whether the represented word was previously observed on the list or whether the word is merely semantically associated with a word on the list. As the search continues and requires greater attentional demands on working memory, eventually the gradable feeling of processing fluency will be taken by the agent to indicate that continued search will no longer be successful and ought to be terminated.
So, what makes these epistemic feelings commitments? Broadly speaking, the fact that a mental representational state constitutes a commitment to the truth of its content – or a taking to be true – is grounded in how that state (or states of that type) functions in cognition and guiding behavior. For instance, believing that p constitutes a commitment to the truth of p, whereas imagining that p doesn't. What distinguishes believing that p from imagining that p has nothing to do with the propositional contents (or format of representation) of the representational states. Instead, they differ in the functional role states of the respective types play in cognition and in guiding behavior. We needn't settle on an exact analysis of the functional profile of belief or imagination to recognize that believing p constitutes a commitment to the truth of p, whereas imagining p doesn't. In turn, it's the fact that believing p constitutes a commitment to the truth of p that makes belief the proper subject of theoretical rational evaluation, unlike imagination which involves no such commitment given its functional profile.
Given how the feeling of knowing, feeling of processing fluency, feeling of pastness, and other metacognitive representations guide memory search, the representations constitute evaluative commitments on the part of the agent. For instance, insofar as an agent uses a feeling of knowing to determine whether to initiate and allocate cognitive effort to a memory search, the agent is committed to it being the case (/takes it to be the case) that the search is worth the cognitive effort, given the likelihood of success. The agent cannot rationally believe that the memory search isn't worth the cognitive effort while simultaneously using a feeling of knowing to determine whether to initiate the search, as the agent would, thereby, commit herself to the contradiction that the memory search is worth the cognitive effort, and it's not the case that the memory search is worth the cognitive effort.
Certain mental process types, like the controlled search of long-term memory, constitutively involve an agent adopting commitments. In other words, what, in part, delineates these process types from other, similar processes are the commitments that constitutively guide the processes. The metacognitive states agents use to guide memory search are what differentiates, say, a memory search in which a set of words is recalled in a controlled manner from mere verbal mind wandering in which the same set of words is tokened in working memory without control being exerted by the agent. In turn, it's these metacognitive states that make memory search a process attributable to an agent as opposed to a mental process that is merely happening to her.
It's my contention that (Commitment) inference is, similarly, a commitment constituting process. What differentiates genuine inference from association or other types of state transitions between propositional attitudes are the commitments undertaken by the reasoner, where these commitments manifest as metacognitive monitoring states used to flexibly control the reasoning process. In turn, it's these commitments that make reasoning something attributable to an agent, as opposed to a ballistic cognitive process that merely happens to the agent. Although, as previously noted, much of the work on metacognition focuses on mnemonic processes, more recently, Ackerman and Thompson have generated a model of meta-reasoning, or the metacognitive monitoring and control procedures involved in reasoning (Ackerman and Thompson Reference Ackerman, Thompson, Feeney and Thompson2015, Reference Ackerman and Thompson2017a, Reference Ackerman, Thompson, Ball and Thompson2017b). On their model, meta-reasoning monitoring processes give rise to feelings of certainty and uncertainty throughout deliberation that constitute assessments of the epistemic quality of the attitudinal transitions the reasoner makes. As Jérôme Dokic puts it, feelings of (un)certainty constitute evaluations of “the non-perceptual method [i.e., inference] we have used to reach [our] conclusion” (Dokic Reference Dokic2014: 136). These feelings of (un)certainty are used to control, for example, the allocation of cognitive effort to various processes, the choice of decision procedure to use when problem solving, and whether the agent takes her conclusion to be correct or decides that further reasoning or solution search is necessary. Feelings of (un)certainty are intuitions about the rational status of our inferential transitions. In turn, given the guiding role that feelings of (un)certainty play in reasoning, they constitute commitments (/takings) on the part of the reasoner.
Epistemic feelings of (un)certainty function to guide inferential transitions just like taking beliefs are supposed to guide inferential transitions on the doxastic account of taking. In using epistemic feelings of (un)certainty to guide inference we, thus, commit ourselves to their content. The irrationality of reasoning to an akratic belief on the intuitional account of taking is grounded in our use of certain epistemic feelings to guide inference. Just as it is on the doxastic account of taking, if an agent infers an akratic belief, she will commit herself to a contradiction of the form “p is supported by my evidence, and it's not the case that p is supported by my evidence.” This commitment may not manifest as an explicit belief of the agent, but it is no less a commitment (in virtue of the functional role epistemic feelings play in thought) and no less irrational.
In sum:
(1) Inferring a belief, B(p), requires committing oneself to the claim that the evidence on the basis of which one infers B(p) supports p. (Commitment, which I've defended in this section.)
(2) In a case of misleading HOE, like Flight, an akratic conjunction – a proposition of the form “p, yet p isn't supported by my evidence” – is propositionally rational to believe. (Assumption defended in Section 1.)
(3) Properly basing an akratic belief requires inferring the belief from one's evidence (Assumption defended in Section 1.)
(4) If one infers p, one commits oneself to the claim that p is supported by one's evidence. (From (1).)
(5) If one believes what one's evidence supports in a case of misleading HOE, like Flight, one will believe – and thus commit oneself to – the proposition that p isn't supported by one's evidence. (From (2).)
(6) Thus, (Thesis) basing an akratic belief in one's evidence necessitates committing oneself to a contradiction of the form “p is supported by one's evidence, and it's not the case that p is supported by one's evidence.” (From (3)–(5).)Footnote 21
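Schematically, and using $S(p)$ merely as local shorthand for “p is supported by one's evidence” (this notation is mine, introduced only for illustration), the route from (4) and (5) to (6) can be displayed as follows:

```latex
\begin{align*}
\text{(4)}\quad & \text{inferring a belief in } p \;\Longrightarrow\; \text{commitment to } S(p)\\
\text{(5)}\quad & \text{believing the akratic conjunction} \;\Longrightarrow\; \text{commitment to } \neg S(p)\\
\text{(6)}\quad & \text{hence, jointly: commitment to } S(p) \wedge \neg S(p)
\end{align*}
```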
It's important to note that one commits oneself to the claim that B(p) is supported by one's evidence by inferring B(p). So, in an epistemic circumstance involving misleading HOE, like Flight, in which one's evidence supports an akratic conjunction, one only becomes committed to a contradiction of the form “p is supported by my evidence, and it's not the case that p is supported by my evidence” if one infers an akratic belief. The mere possession of evidence that strongly supports the akratic conjunction doesn't, by itself, commit one to a contradiction – it's the act of inferring the akratic belief that generates the commitment. Therefore, although the akratic conjunction is propositionally rational to believe in virtue of the evidential support for the conjunction, an agent can't properly base an akratic belief without incurring a pair of contradictory commitments. Thus, rational belief, licensed failure, and anti-akrasia can all be true, insofar as we accept that rational belief is a claim about propositional rationality and anti-akrasia is a claim about doxastic rationality.
2.2. Rejecting the taking condition
In the previous section, I provided empirical support for the Taking Condition using work on metacognition (and meta-reasoning in particular) to argue that inference involves:
(1) mentally representing that one's premise attitudes support one's conclusion,
(2) where these representations guide the propositional attitude transitions involved in reasoning.
More specifically, I argued that these representations are epistemic feelings that constitute epistemic evaluations of the propositional attitude transitions we make. However, it's important to note that one needn't accept (1) and (2) – or, more broadly, the Taking Condition – in order to accept Commitment and, thus, be eligible for my solution to Fumerton's puzzle.
For instance, Christopher Blake-Turner (Reference Blake-Turner2022), Christian Kietzmann (Reference Kietzmann2018), and Eric Marcus (Reference Marcus2020) have all recently argued for accounts of inference on which inference constitutively involves representing that one's premise attitudes support one's conclusion in a manner that constitutes a commitment on the part of the reasoner (thus accepting Commitment). However, Blake-Turner, Kietzmann, and Marcus reject (2). In other words, they reject the claim that one's commitment to the proposition that one's premise attitudes support one's conclusion guides the inferential process. Nonetheless, Blake-Turner, Kietzmann, and Marcus could still accept my solution to Fumerton's puzzle. As I demonstrated at the end of the previous section, Thesis follows from Commitment and our assumptions about the nature of misleading HOE. Insofar as Blake-Turner, Kietzmann, and Marcus accept Commitment and our assumptions about the nature of misleading HOE, they also ought to accept Thesis.
Departing even further from the position I advanced in the previous section, McHugh and Way (Reference McHugh and Way2015, Reference McHugh and Way2016, Reference McHugh and Way2018a, Reference McHugh and Way2018b) offer a functional account of reasoning on which reasoning is constitutively aim-directed. Although McHugh and Way would reject (1) and (2) – as they reject the claim that inference must involve any mental representation that one's premise attitudes support one's conclusion – McHugh and Way still accept Commitment. For instance, McHugh and Way write:
Theoretical reasoning is guided by the aim of acquiring fitting beliefs. If p does not support q, then reasoning from p to q is not a good way to pursue this aim. So, reasoning from p to q while judging that p does not support q amounts to taking what you acknowledge to be an unreliable means to your end. That looks plainly irrational…this seems enough to give a sense in which reasoning from p to q commits you to thinking that p supports q…. (McHugh and Way Reference McHugh and Way2018b: 191, emphasis mine)
Insofar as McHugh and Way accept Commitment, advocates of McHugh's and Way's account of inference can, thus, avail themselves of my solution to Fumerton's puzzle.
It's clearly beyond the scope of this paper to discuss all extant accounts of inference in the philosophical literature. However, as I've demonstrated in this section, there are several accounts that accept Commitment while rejecting the particular view I've offered regarding the nature of inference and what makes Commitment true. Although I've argued for a representational reading of Commitment on which inference constitutively involves epistemic feelings that guide the attitudinal transitions we make, ultimately, what matters for my solution to Fumerton's puzzle is that (i) Commitment is true, and (ii) Commitment, along with our assumptions about misleading HOE, entails Thesis.
3. Comparing my view to others
Paul Silva (Reference Silva2017), Declan Smithies (Reference Smithies, Silva and Oliveira2022), and Han van Wietmarschen (Reference Van Wietmarschen2013) all discuss the propositional/doxastic distinction in the context of Fumerton's puzzle or in similar contexts involving misleading HOE. In the following, I briefly discuss, in turn, differences between my position and those offered by Silva, Smithies, and van Wietmarschen. It's beyond the scope of this paper to provide an exhaustive discussion of each view; however, as I make clear, the position I defend is significantly dissimilar to those on offer in the extant literature.
Silva argues for a similar thesis to my own, namely, that the propositional/doxastic distinction is key to resolving Fumerton's puzzle and that, although it can be propositionally rational to believe an akratic conjunction, one cannot doxastically rationally possess an akratic belief. However, Silva assumes (without argument) that a person can properly base an akratic belief in her evidence. Thus, Silva is forced to advocate for the position (recently defended by Turri Reference Turri2010) that epistemic basing is not what distinguishes doxastic and propositional justification. Silva argues for the following necessary condition on doxastic justification:
S's doxastic attitude, D(p), is doxastically justified only if S lacks undefeated propositional justification to believe that S's total evidence does not support taking D(p)
to secure the claim that S cannot be doxastically justified in holding an akratic belief. However, as I've demonstrated, (pace Silva) one cannot rationally base an akratic belief in virtue of the fact that akratic beliefs need to be inferentially based and inference is a commitment constituting process. Inferring an akratic belief would commit oneself to a contradiction. The onus is on Silva to argue that an akratic belief can be rationally based. There is no need to appeal to an additional necessary condition on doxastic justification, like Silva's, to secure the result that akratic beliefs cannot be doxastically justified.
Similarly, Smithies argues that cases of misleading HOE, like Flight, are such that there is a proposition that you are propositionally justified to believe, yet you cannot hold a doxastically justified belief in the proposition. However, as previously mentioned, Smithies is an advocate of (a version of) the fixed-point thesis; thus, he doesn't allow that it is ever rational to be mistaken about the demands of (propositional) rationality. According to Smithies, in Flight your evidence would support the proposition <you have enough fuel to make it to your destination, and your total evidence supports the proposition that you have enough fuel to make it to your destination>, but you can't doxastically rationally believe the proposition or either of its conjuncts. In order to secure this result, Smithies argues for a condition on doxastic justification according to which a belief is properly based “only if it manifests a more general disposition to believe what the evidence supports” (ibid.: 110). Thus, a necessary condition on properly basing a belief on one's evidence is that one's belief manifests a general sensitivity to the evidence. In other words, in nearby worlds where the evidence is relevantly different, one's belief would be relevantly different.
According to Smithies, the issue with, say, maintaining your first-order belief that you have enough fuel to make it to your destination in Flight in the face of the testimony from your co-pilot is that – for non-ideal agents like us – maintaining the first-order belief would constitute manifesting the disposition to dogmatically maintain beliefs despite HOE that those beliefs are the result of poor reasoning. The disposition to dogmatically maintain beliefs despite HOE that those beliefs are the result of poor reasoning would (given Smithies' characterization of the disposition as “dogmatic”) result in you maintaining beliefs unsupported by your evidence in certain nearby worlds. Which nearby worlds? Bad case worlds, that is, worlds in which the HOE isn't misleading and, thus, you haven't respected your first-order evidence. For instance, given the reasoning acumen of your co-pilot, it could easily happen, in a nearby possible world, that you make a mathematical error, and your co-pilot correctly points out the error (this would be a bad case world). So, if in Flight (the good case) you are disposed to remain steadfast in the face of the evidence from your co-pilot, then (according to Smithies) you would be equally disposed to ignore your co-pilot and stick to your guns in a nearby bad case world in which you've made a routine mathematical error and your co-pilot is correct in her assessment of which propositions your evidence supports. According to Smithies, in both the good case and bad case worlds you manifest the same disposition to dogmatically retain your beliefs despite HOE of your reasoning failure. 
Thus, for non-ideal agents like us, cases like Flight are such that a certain proposition is propositionally rational to believe (e.g., that we have enough fuel to make it to our destination, and that our evidence supports this) yet we cannot doxastically rationally believe the proposition because doing so would be to manifest a dogmatic disposition such that we wouldn't be properly sensitive to shifts in our evidence in nearby worlds.
The qualification “for non-ideal agents” is important for Smithies. It's not in principle impossible to doxastically rationally believe <you have enough fuel to make it to your destination, and your total evidence supports the proposition that you have enough fuel to make it to your destination> in Flight; it's just impossible for non-ideal agents. In fact, according to Smithies, “[b]ecause ideally rational agents are perfectly sensitive to what their evidence supports, they can remain steadfast in good cases without thereby manifesting any disposition to remain steadfast in bad cases where their reasoning dispositions are held constant” (ibid.: 112, emphasis mine). However, this is an odd remark by Smithies. If we hold ideally rational agents' reasoning dispositions fixed – where these dispositions are characterized as “perfectly sensitive to what their evidence supports” – then our modal assessment of the sensitivity to shifts in evidence of ideally rational agents' beliefs won't include any bad case worlds. Bad cases are, by stipulation, worlds in which one isn't perfectly sensitive to what one's evidence supports. Trivially, ideally rational agents will never manifest a disposition to remain dogmatically steadfast in nearby possible worlds in which we hold fixed their perfect evidential sensitivity.
It should be clear that Smithies' result crucially depends on how we characterize the dispositions of ideal and non-ideal agents and, thus, what we hold fixed in examining the evidential sensitivity of agents' beliefs in nearby worlds. Different characterizations of the relevant dispositions for both ideal and non-ideal agents would yield different results. Regardless, it should be obvious that Smithies' position is distinct from my own. I make no appeal to dispositions, sensitivity to shifts in evidence, etc. Again, my argument solely depends on what constitutively distinguishes inference from other types of transitions between propositional attitudes.
Finally, in his (Reference Van Wietmarschen2013), van Wietmarschen discusses the distinction between propositional and doxastic rationality in the context of assessing conciliatory views of peer disagreement on an evidentialist framework. Although van Wietmarschen is specifically focused on peer disagreement, his remarks could be generalized to other types of HOE. Van Wietmarschen concludes that conciliatory views are false when understood to be claims about propositional rationality but true when understood to be claims about doxastic rationality. In order to establish this result, van Wietmarschen invokes the following claim about doxastically rational belief:
for S's belief that p to be [inferentially] well-grounded in S's evidence E [i.e., doxastically rational]: the argument on the basis of which S in fact believes p is or resembles a good argument from E to p. (ibid.: 415)
where a good argument for p given E is an argument that S would find convincing on ideal reflection.
In arguing for his position, van Wietmarschen discusses a case adapted from David Christensen (Reference Christensen2007) in which you are out to lunch with a friend whom you rationally believe to be equally as mathematically competent as yourself (and, thus, your peer when it comes to mathematical matters). You and your friend agree to split the $46.00 lunch bill evenly and tip 20 percent. You both calculate your respective shares in your heads. You rightly conclude that your shares are $27.60 each while your friend claims that the shares are $27.10. According to van Wietmarschen, your disagreement with your friend presents you with a potential defeater. Responding to this potential defeater would require demonstrating that the best explanation for your disagreement is that your friend made a mistake while you reasoned correctly from the first-order evidence. However, given the disagreement, a good argument for your conclusion “can no longer simply be a calculation from E to the conclusion that your shares are $27.60; a good argument must also respond to [your friend's] disagreement as a potential defeater” (ibid.). In addition, van Wietmarschen invokes the following independence principle, also adapted from Christensen:
when we determine what a subject is justified in believing about the explanation of his or her disagreement with S about p, we should bracket the subject's original reasoning about p. (ibid.: 416)
Therefore, given the disagreement with your friend and the above independence principle, you are no longer doxastically rational in believing that your lunch shares are each $27.60. You lack a good argument for the claim that your lunch shares are each $27.60 and, thus, a belief in this claim wouldn't be well-grounded. So, although you are propositionally rational in believing that your lunch shares are each $27.60 (this proposition is entailed by your evidence, properly construed), you aren't doxastically rational in believing the proposition.
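As a quick check on the figures in the adapted Christensen case: a $46.00 bill with a 20 percent tip, split two ways, does come to $27.60 per person, so your calculation (not your friend's) is the correct one:

```latex
\frac{\$46.00 \times 1.20}{2} \;=\; \frac{\$55.20}{2} \;=\; \$27.60
```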
Again, I don't have the space to engage with van Wietmarschen's arguments, but it should be clear how van Wietmarschen's arguments differ from my own. I don't invoke an independence principle or any claims about when our original reasoning ought to be bracketed in the face of HOE. More broadly, the positions of Silva, Smithies, and van Wietmarschen all invoke additional claims about what is required for proper basing, that is, there needs to be a lack of undefeated HOE (in Silva's case), one must exhibit a dispositional sensitivity to shifts in evidence (in Smithies' case), or one's reasoning must meet a Christensen style independence principle (in van Wietmarschen's case). My strategy is different. Instead of discussing what's required for proper basing in general, I shift our attention to the nature of inference. By invoking (what I take to be) a very plausible claim about what constitutively separates inference from other types of transitions between propositional attitudes, I am able to defuse Fumerton's puzzle.
4. Propositional rationality does not entail doxastic rationality
Accepting my solution to Fumerton's puzzle commits the evidentialist to the possibility of epistemic circumstances (e.g., circumstances, like Flight, in which one gains misleading HOE) in which a proposition, p, is propositionally rational to believe, but it's not possible that one (doxastically) rationally believe p without committing oneself to a contradiction. It might be objected that if a proposition is propositionally rational to believe, it must be possible for one to rationally believe the proposition. In other words, propositional rationality ought to entail the possibility of doxastic rationality.Footnote 22
As I demonstrate in Section 4.1, cases of misleading HOE are not the only epistemic circumstances in which (on an evidentialist framework) a proposition is propositionally rational to believe, yet one won't be able to rationally (doxastically) believe the proposition, without enmeshing oneself in some further form of irrationality. Epistemic circumstances involving finkish evidence – to borrow an expression from Smithies (Reference Smithies2016, Reference Smithies2019) – are cases involving purely first-order evidence in which propositional rationality does not entail the possibility of doxastic rationality. Regardless of how we handle cases of misleading HOE, the evidentialist is committed to accepting that propositional rationality does not entail the possibility of doxastic rationality. Thus, the evidentialist does not incur an additional theoretical cost by accepting my solution to Fumerton's puzzle.
4.1. Anti-expertise, finkish evidence, and finkish epistemic circumstances
One's evidence is finkish if “it is destroyed or undermined in the process of attempting to form a doxastically rational belief that is properly based on the evidence” (Smithies Reference Smithies2016: 205). There are several cases of finkish evidence discussed in the literature, but cases of anti-expertise are a particularly stark example. In a case of anti-expertise, one gains compelling evidence that one is an anti-expert with respect to some proposition (or class of propositions), p, where an anti-expert, S, with respect to p is one for whom the following holds:
p iff it's not the case that S believes (or judges that) p.
Take the following oft cited case of anti-expertise from Earl Conee (Reference Conee1982) (I've altered the case in several non-essential ways for ease of discussion):
After repeated and flawless trials using the best in brain-scanning technology with a massive and diverse sample of people, a thirtieth century brain physiologist, Dave, discovers that a person's N-fibers fire iff it's not the case that the person believes they are all firing. Dave begins to wonder about the following proposition: (q) All of Dave's N-fibers are firing.
Given Dave knows that a person's N-fibers fire iff it's not the case that the person believes they are all firing, Dave knows the following:
(1) If Dave believes q is false, q is true.
(2) If Dave believes q is true, q is false.
(3) If Dave refrains from judgment or holds no doxastic attitude with respect to q, q is true.
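Dave's predicament can be put compactly. Writing $B(q)$ for “Dave believes q” (shorthand introduced only for this sketch), the discovery amounts to the biconditional below, from which (1)–(3) follow on the assumption that believing q false, refraining from judgment, and holding no attitude each entail not believing q:

```latex
\begin{align*}
&\text{Stipulation:}\quad q \,\leftrightarrow\, \neg B(q)\\
&\text{(2)}\quad B(q) \,\rightarrow\, \neg q\\
&\text{(1), (3)}\quad \neg B(q) \,\rightarrow\, q
\end{align*}
```

On this rendering, whatever attitude Dave adopts toward q undermines itself: adopting belief makes q false, and anything short of belief makes q true.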
Assuming Dave has access to his propositional attitudes about N-fibers, there will be a proposition that is propositionally rational for Dave to believe, given his evidence, but that Dave cannot (doxastically) rationally believe. For example, if Dave has access to the fact that he believes that all of his N-fibers are firing, then Dave's evidence strongly supports the proposition that it's not the case that all of Dave's N-fibers are firing. Thus, the proposition that it's not the case that all of Dave's N-fibers are firing is propositionally rational to believe. But Dave cannot rationally believe the proposition in virtue of the fact that his evidence is finkish. Once Dave believes that it's not the case that all of his N-fibers are firing, his evidence will support the proposition that all of his N-fibers are firing.Footnote 23
As cases of anti-expertise (like Dave's) demonstrate, propositional rationality does not entail the possibility of doxastic rationality, at least on an evidentialist framework. Cases of misleading HOE are thus not unique: cases involving finkish evidence also require the evidentialist to accept that propositional rationality does not entail the possibility of doxastic rationality. Although cases of misleading HOE are not cases of finkish evidence, they are, more broadly, what I will call finkish epistemic circumstances. Let an epistemic circumstance be the total evidence and set of commitments an agent possesses at a time. An epistemic circumstance, c, is finkish in my sense just in case
(Finkish Epistemic Circumstance) at least one proposition, p, is such that p is propositionally rational to believe in c, but attempting to form a doxastically rational belief in p would shift c – either by shifting one's evidence or commitments – in a manner that would make a belief in p irrational.
Dave's case counts as a finkish epistemic circumstance because attempting to form a doxastically rational belief about his N-fibers would relevantly shift his epistemic circumstance by shifting his evidence. Cases of misleading HOE also count as finkish, in my sense, because attempting to form a doxastically rational belief in an akratic conjunction would relevantly shift one's epistemic circumstance by shifting one's commitments. Attempting to form a doxastically rational belief in a proposition of the form “p, but my evidence doesn't support p” would involve undertaking a commitment to the truth of the proposition that one's evidence does support p. Undertaking this commitment would shift one's epistemic circumstance such that one could not (doxastically) rationally believe the akratic conjunction without being committed to a contradiction.
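Put schematically, writing E(p) for “one's evidence supports p” (notation I introduce here for illustration), the commitment shift runs as follows:

```latex
% Akratic conjunction:  p \land \lnot E(p)
%
% Properly basing a belief in the akratic conjunction on one's evidence
% involves undertaking a commitment to one's evidence supporting p,
% i.e., a commitment to:  E(p)
%
% But believing the conjunction involves a commitment to its second
% conjunct:  \lnot E(p)
%
% So the attempt leaves one committed to  E(p) \land \lnot E(p),
% a contradiction; the akratic belief cannot be doxastically rational.
```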
What allows for the possibility of finkish epistemic circumstances on an evidentialist framework is the fact that the conditions on possessing a total body of evidence that strongly supports some proposition, p, (thus making p propositionally rational to believe) don't guarantee that one can engage in the cognitive activity constitutive of rationally reasoning to or properly basing a belief in p (thus making one's belief in p doxastically rational). In other words, it's not built into the conditions on possessing strong evidence for p that one be able to engage in the constitutive cognitive activity required to rationally reason to or base a belief in p. Dave meets the conditions for possessing very strong evidence for the proposition that it's not the case that all of Dave's N-fibers are firing, but the fact that Dave meets these conditions clearly doesn't entail that he can do what is constitutively required to adopt a doxastically rational belief in the proposition. Similarly, in Flight you meet the conditions for possessing very strong evidence for an akratic conjunction, but the fact that you meet these conditions doesn't entail that you can do what is constitutively required to adopt a doxastically rational akratic belief.
If we want our theory of rationality to make it the case that the (propositional) rationality of believing a proposition, p, entails the possibility of (doxastically) rationally believing p, then we need a theory on which the conditions for propositional rationality entail that the conditions for rationally believing p can be met. Evidentialism just isn't such a theory (Munroe 2023). The objection that propositional rationality ought to entail the possibility of doxastic rationality is an objection to the overarching evidentialist framework that we've assumed for discussion, as opposed to a pointed objection to my solution to Fumerton's puzzle.
5. Conclusion
To take stock: I've argued that if we accept
(Rational Belief) It is rational to adopt a belief in a proposition, p, iff p meets some condition(s) c
we can also accept the following:
(Licensed Failure) It is possible that p and the proposition that p doesn't meet c both meet c.
(Anti-akrasia) It's not the case that belief in the proposition <p, yet p does not meet c> is ever rational
as long as we interpret Rational Belief as a claim about propositional rationality and Anti-akrasia as a claim about doxastic rationality. Fumerton's puzzle is defused with the appropriate understanding of Rational Belief and Anti-akrasia.
One might worry that my solution still involves a conflict of rational injunctions, as it allows that there are epistemic circumstances in which one won't be able to adopt the doxastic attitudes required from the standpoint of propositional rationality while also doing what is required for doxastic rationality. Of course, this worry assumes a deontological conception of (propositional and doxastic) rationality on which rationality is not merely an epistemic evaluative notion but also involves a set of epistemic norms for governing one's attitudes. If we take rationality to be a purely evaluative notion, there will be no conflict of injunctions.Footnote 24 But even assuming a deontological conception of rationality, doxastic and propositional rationality are two different epistemic notions. For example, under a deontological reading of evidentialism, propositional rationality deals with the doxastic attitudes one ought to have given one's evidence, whereas doxastic rationality governs how one ought to hold these attitudes (e.g., one ought to base one's attitudes in one's evidence). There is nothing untoward about conflicts between different types of injunctions.
Analogously, in the moral domain it's not uncommon for philosophers to argue that there are objective and subjective senses of the moral “ought” (Dorsey 2012; Olsen 2017). The objective-ought deals with what one morally ought to do given the normative and non-normative facts, whereas the subjective-ought deals with what one morally ought to do given one's evidence. For example, assuming a simple act utilitarianism is true, there may be circumstances in which one's evidence strongly – but misleadingly – suggests that performing some action, a, will maximize utility. However, as a matter of fact, a-ing won't maximize utility whereas performing some other action, b, will. In this scenario, one objectively ought to b, as b-ing will in fact maximize utility, but one subjectively ought to a, as a-ing is the action that one's evidence suggests will maximize utility. There is no intra-level conflict of injunctions when what one morally objectively ought to do conflicts with what one morally subjectively ought to do. Similarly, there is no worrisome intra-level conflict of requirements in the case of conflicts between propositional and doxastic rationality.
One might want to know what one all-things-considered rationally ought to believe in cases of misleading HOE, which would require an account of how we resolve conflicts between propositional and doxastic rationality. However, discussions of all-things-considered oughts are beyond the scope of this paper. It should be sufficient to note that conflicts between different types of requirements aren't unique to the epistemic domain. Not only are there other domains, for example, the moral domain, in which philosophers posit conflicting types of oughts, but there are also conflicts of requirements across domains. What one is legally required to do may very well conflict with prudential, moral, or aesthetic norms. There is nothing unique to the epistemic domain that would require there to be no conflicts between propositional and doxastic rationality.
I've demonstrated in detail how my solution functions under an evidentialist account of rationality on which it is rational to adopt a belief in a proposition p iff p is adequately supported by one's evidence. Misleading HOE raises no problems for an evidentialist account of propositional rationality. The mere fact that one possesses sufficient evidence to believe a proposition doesn't entail that one can do what is necessary to reason to or properly base a belief in the proposition in one's evidence without engendering some further form of irrationality. As argued in Section 4, cases of misleading HOE are not the only types of cases that force evidentialists to accept that propositional rationality does not entail the possibility of doxastic rationality. There are no new problems raised by misleading HOE that weren't already present in cases involving purely first-order evidence.