Here I elaborate on some of the points that De Neys discussed only briefly. These concern the nature of deliberative thinking, and the sources of individual differences. This comment is largely a summary of some assertions that I have elaborated elsewhere, and some of them are speculative (Baron, 2019; Baron, Isler, & Yilmaz, in press).
On the first point, the division of thinking into these two systems fits well in explaining most of the laboratory experiments discussed, where “deliberation” simply means that some controlled (i.e., not automatic or immediately intuitive) deliberation is going on. But this deliberation often involves a series of steps, each of which may draw on some automatic/intuitive processes (Ackerman & Thompson, 2017). The steps may be understood as consisting of search and inference (Baron, 1985). The goal is to find the best possibility, the best answer to the question that led to the thinking. Each step may involve the addition or deletion of a possibility from a short list of candidates (which may start with none, in the case of stumper problems), the search for relevant evidence or arguments bearing on the strength of the various possibilities, and, in many cases, the search for additional goals.
At each step, the thinker makes inferences about the strength of each possibility. Thus, at each step, the thinker must make another “switch” decision, namely, whether to produce the current strongest possibility as the answer or to continue searching and making inferences. This cycle of search, inference, and deciding whether to continue is clear in such ordinary tasks as consumer purchases, some of which may require days or weeks of deliberation (e.g., buying a house). It is also part of most real-life problem solving, where one possibility is something like, “give up trying to fix it yourself and call the electrician.” And it is part of thinking about moral and political issues, thinking that often occurs on the scale of years.
We can think of the switch decision at the end of each step (including the first, which is the focus of the target article) as based on a summary measure of “confidence” in the results so far (essentially the “feeling of rightness” described by Ackerman and Thompson, 2017). Confidence will be high when one possibility is very strong and the others are weak. The strength of the favored option is itself a function of how the thinking so far was conducted and of how the thinker responds to its various determinants. Individuals may differ in how they respond, for example:
(1) A thinker trusts her intuition. Her confidence in the initial intuitive response may be high enough to stop at the end of the first step, thus not making the switch that De Neys discusses.
(2) A thinker accepts the standards of “actively open-minded thinking” (AOT; Baron, 2019; Baron et al., in press). Possibilities will not be considered strong unless a search has been made for other possibilities and for evidence both favoring and opposing the initially favored possibility (and for possible goals that were neglected so far).
(3) A thinker begins with low strength but suffers from “uncertainty aversion.” Uncertainty in this case can be a result of not doing much thinking. To remove the uncertainty, the thinker searches for evidence favoring the initial intuition, bolstering it, so that its strength is artificially high. This bolstering leads to the sorts of apparent failures of system 2 that are noted in the target article.
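The search–inference cycle with a confidence-based stopping rule sketched above can be illustrated as a toy simulation. Everything here is an illustrative assumption of my own (the uniform evidence model, the particular thresholds, the parameter names), not a claim about any model in the target article; it merely shows how a single stopping threshold can capture the individual differences just listed.

```python
import random

def deliberate(initial_strength, threshold, max_steps, seed=0):
    """Toy search-inference cycle (illustrative assumption, not an
    established model): begin with an intuitive response of some
    strength, then repeatedly search for evidence and update
    confidence, stopping when confidence reaches the threshold or
    the effort budget runs out."""
    rng = random.Random(seed)
    confidence = initial_strength
    steps = 0
    while confidence < threshold and steps < max_steps:
        # Search may strengthen or weaken the current possibility.
        evidence = rng.uniform(-0.1, 0.2)
        confidence = min(1.0, max(0.0, confidence + evidence))
        steps += 1
    return confidence, steps

# Thinker (1), trusting intuition: a low threshold means the initial
# response already suffices, so no switch is made (zero search steps).
conf_i, steps_i = deliberate(initial_strength=0.7, threshold=0.6, max_steps=20)

# Thinker (2), holding AOT standards: a high threshold forces further
# search before any possibility is accepted as strong.
conf_a, steps_a = deliberate(initial_strength=0.7, threshold=0.95, max_steps=20)
```

Thinker (3) could be modeled here by biasing the evidence search toward positive values, which drives confidence up artificially, which is the bolstering pattern discussed below.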
Thinking does not always have to proceed to get a conclusion with high confidence. It is often reasonable to stop thinking just because thinking is not making progress or because the answer is not worth more time and effort. At this point, an honest answer to a question about confidence, without self-deception, would be that confidence is low. Scientists, when speaking to the public, often qualify their statements with expressions of low confidence. People with uncertainty aversion could think that the scientists are bad thinkers; these people think that good thinkers should always be confident (and that is also why they are inclined to bolster their own confidence, when that is needed).
Alternatively, when thinking is not making progress, it is often reasonable to “outsource” it: for example, consult a professional. In matters like politics, most people outsource their thinking to trusted sources. The problem then becomes how they determine who is trustworthy.
Individual differences can result in part from acceptance/rejection of AOT as a standard, and trying to conform to it (or not). Rejection of this standard consists of myside bias (confirmation bias, looking for support for an initially favored conclusion) and “uncertainty aversion,” which is a belief that uncertainty itself is undesirable. These two properties work together. One way to avoid uncertainty is to try to bolster initial conclusions so that confidence will increase. Shynkaruk and Thompson (2006) found support for such bolstering. Subjects judged the validity of each of 12 syllogisms intuitively (within 10 s) and then deliberatively, rating their confidence in each judgment. Of interest, many subjects showed increased confidence after deliberation even though they did not change their (incorrect) answer. Other evidence, reviewed by De Neys, indicates that deliberation can serve to rationalize initial conclusions, an example of myside bias.
It would be nice to put all the pieces together through studies of individual differences in tasks like that used by Shynkaruk and Thompson.
Competing interest
None.