
Explaining normative–deliberative gaps is essential to dual-process theorizing

Published online by Cambridge University Press:  18 July 2023

Edward J. N. Stupple
Affiliation:
School of Psychology, College of Health, Psychology and Social Care, University of Derby, Derby, UK. e.j.n.stupple@derby.ac.uk; https://www.derby.ac.uk/staff/ed-stupple/
Linden J. Ball
Affiliation:
School of Psychology & Computer Science, University of Central Lancashire, Preston, UK. lball@uclan.ac.uk; https://www.uclan.ac.uk/academics/professor-linden-ball

Abstract

We discuss significant challenges to assumptions of exclusivity and highlight methodological and conceptual pitfalls in inferring deliberative processes from reasoning responses. Causes of normative–deliberative gaps are considered (e.g., disputed or misunderstood normative standards, strategy preferences, task interpretations, cognitive ability, mindware and thinking dispositions) and a soft normativist approach is recommended for developing the dual-process 2.0 architecture.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Dual-process 2.0 accounts are increasingly compelling, and we welcome De Neys's proposed model, which we bolster here by noting further challenges to assumptions of “exclusivity” (the notion that intuition and deliberation generate unique responses). We additionally argue for considerable methodological care when exploring the nature of deliberative processing.

Among the most crucial considerations when devising thinking, reasoning, and decision-making tasks is determining what constitutes a "correct" answer and what it means when participants produce this answer. Indeed, De Neys cautions against an "ought-is fallacy" (Elqayam & Evans, 2011), which arises when responses aligning with "normative" theories (e.g., predicate logic or Bayes' theorem) are viewed as being diagnostic of deliberation. We contend that normative standards, although useful for performance benchmarking, can present blind spots for experimental design and theory building. As such, we concur with Elqayam and Evans (2011) that constructing theories of reasoning around normative standards is problematic for understanding psychological processes.

To evaluate deliberative processing successfully, it seems prudent to adopt a "soft normativist" (Stupple & Ball, 2014) or "descriptivist" (Elqayam & Evans, 2011) approach. Accordingly, research programs should acknowledge the distorting lens of normative standards (while also avoiding the trap of relativism), recognizing that although normative standards may be correlated with deliberation, they are not causally linked to it (Stupple & Ball, 2014). From a soft-normativism perspective, "normative–deliberative gaps" are expected for many reasons (e.g., disputed or misunderstood norms, strategy preferences, alternative task interpretations, cognitive ability and mindware constraints, and impoverished thinking dispositions), necessitating careful consideration.

Normative standards should also be contested and evaluated whenever multiple candidate standards exist (Stenning & Varga, 2018). For some tasks, the normative response is uncontroversial, but for others, participants must make sense of task requirements and may not construe the task as intended. For example, Oaksford and Chater (2009) proposed an alternative normative standard for the Wason selection task based upon "information gain," which is consistent with the most common responses (contrasting with Wason's [1966] logicist proposals). Oaksford and Chater (2009) extend this perspective to demonstrate that logical fallacies can be rationally persuasive. Indeed, caution is advised for researchers who associate endorsement of fallacies with a lack of deliberation. It is prudent not to equate standard normative responses simplistically with deliberative thinking without also considering individual goals.
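To make the information-gain idea concrete, the following is a minimal sketch of how expected information gain can be computed for each card in the selection task. It loosely follows the spirit of Oaksford and Chater's optimal data selection analysis, but the two hypotheses, the uniform prior, and the "rarity" parameter values (P(p) = 0.1, P(q) = 0.2) are simplifying assumptions of our own for illustration, not the published model.

```python
# Illustrative sketch (not the published model): expected information gain for
# each card in the Wason selection task, comparing a "dependence" hypothesis
# (the rule 'if p then q' holds) with an "independence" hypothesis.
import math

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def joint(hypothesis, a, b):
    """Joint probabilities over (antecedent, consequent); assumes b >= a."""
    if hypothesis == "dependence":        # P(q | p) = 1
        return {("p", "q"): a, ("p", "nq"): 0.0,
                ("np", "q"): b - a, ("np", "nq"): 1 - b}
    return {("p", "q"): a * b, ("p", "nq"): a * (1 - b),
            ("np", "q"): (1 - a) * b, ("np", "nq"): (1 - a) * (1 - b)}

def expected_information_gain(card, a=0.1, b=0.2, prior=0.5):
    """Expected reduction in uncertainty about the rule from turning `card`."""
    hyps = {"dependence": prior, "independence": 1 - prior}
    side = 0 if card in ("p", "np") else 1            # which face is visible
    hidden = ("q", "nq") if side == 0 else ("p", "np")
    prior_H = entropy(hyps.values())
    eig = 0.0
    for outcome in hidden:
        likes = {}
        for h in hyps:
            j = joint(h, a, b)
            visible_total = sum(v for k, v in j.items() if k[side] == card)
            pair = (card, outcome) if side == 0 else (outcome, card)
            likes[h] = (j[pair] / visible_total) if visible_total else 0.0
        p_outcome = sum(likes[h] * hyps[h] for h in hyps)
        if p_outcome == 0:
            continue
        posterior = [likes[h] * hyps[h] / p_outcome for h in hyps]
        eig += p_outcome * (prior_H - entropy(posterior))
    return eig

for card in ("p", "np", "q", "nq"):
    print(card, round(expected_information_gain(card), 4))
```

Under such rarity assumptions, the p and q cards yield the greatest expected information gain, mirroring the modal selections that a falsificationist standard treats as erroneous.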

In most thinking tasks, participants are not explicitly prescribed a goal or norm. Indeed, Cohen (1981) famously argued that reasoning research presents "trick" questions with minimalist instructions to naïve participants. The assumption that participants identify tasks as requiring deliberation may itself be naïve. Stupple and Ball (2014) proposed that when naïve participants attempt novel reasoning problems, they determine an appropriate normative standard and select a strategy through a process of "informal reflective equilibrium." Through this process, increasing familiarity with problem forms – even in the absence of feedback – can result in participants aligning with normative responses assumed to require deliberation (Ball, 2013; Dames, Klauer, & Ragni, 2022). This alignment need not be deliberative, however, but could instead entail detection of patterns in problems and increasing intuitive strength for normatively aligned heuristic responses.

These variations in participants' goals and strategies are captured by Markovits, Brisson, and de Chantal (2017) (cf. Verschueren, Schaeken, & d'Ydewalle, 2005), who demonstrated individual differences in strategy preferences (probabilistic vs. counterexample) that are orthogonal to preferences for intuitive versus deliberative thinking. These strategies have implications for the interplay between deliberation and normative standards. Participants adopting a counterexample strategy (based on mental models) versus a probabilistic strategy (based on information gain or probability heuristics; Beeson, Stupple, Schofield, & Staples, 2019; Oaksford & Chater, 2009; Verschueren et al., 2005) may differ in their task construal and understanding of "correct" answers. Although it is unclear whether strategies necessarily entail adoption of particular normative standards, responding to a problem in terms of information gain versus a necessary truth derived from a mental model would reasonably be assumed to require differing degrees of deliberation and differing use of intuitive cues.
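As a minimal illustration of how the two strategies can dissociate, consider the sketch below for the inference "if the key is turned, the car starts; the key is turned; therefore the car starts." The toy knowledge base, the endorsement threshold, and the function names are hypothetical simplifications for exposition, not an implementation of the dual-strategy model itself.

```python
# Toy contrast between a counterexample strategy and a probabilistic strategy
# for a conditional inference. Cases and threshold are invented for illustration.
from typing import List, Tuple

# Hypothetical background knowledge: (key_turned, car_started) observations.
cases: List[Tuple[bool, bool]] = [
    (True, True), (True, True), (True, True), (True, False),   # one flat battery
    (False, False), (False, False),
]

def counterexample_strategy(cases) -> bool:
    """Endorse the conclusion only if no retrieved case has p true and q false."""
    return not any(p and not q for p, q in cases)

def probabilistic_strategy(cases, threshold: float = 0.7) -> bool:
    """Endorse the conclusion if the estimated P(q | p) exceeds a threshold."""
    q_given_p = [q for p, q in cases if p]
    return (sum(q_given_p) / len(q_given_p)) > threshold if q_given_p else False

print(counterexample_strategy(cases))   # False: a single counterexample blocks it
print(probabilistic_strategy(cases))    # True: P(q | p) = 3/4 clears the threshold
```

The same premises thus yield different "correct" answers depending on the strategy adopted, which is one reason why equating any single response with deliberation is hazardous.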

When judging whether deliberation has occurred, we also suggest that responses can be less reliable than response times. For example, for the lily-pad cognitive reflection test (CRT) problem, incorrect non-intuitive answers averaged longer response times than incorrect intuitive or correct answers (Stupple, Pitchford, Ball, Hunt, & Steel, 2017), which is inconsistent with "cognitive miserliness" and the absence of deliberation. Such outcomes can arise from task misinterpretation, lack of mindware, or the strategy selected. When relying on responses to judge a process, we cannot know if a participant has reasoned deliberatively unless we presume the task was understood as intended, and we cannot know they understood the task as intended unless we presume they reasoned deliberatively (cf. Smedslund, 1990).
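For readers unfamiliar with the item: in its standard wording, a patch of lily pads doubles in size every day and covers the whole lake on day 48, and participants are asked when it covered half the lake. A minimal check of the response categories assumed above:

```python
# Working backwards from full coverage on day 48: because the patch doubles
# daily, it must have been half-covered exactly one day earlier.
coverage, day = 1.0, 48
while coverage > 0.5:
    coverage /= 2     # undo one doubling
    day -= 1
print(day)            # 47 = correct answer; 48 / 2 = 24 is the intuitive lure;
                      # any other response counts as incorrect and non-intuitive
```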

We also note that meta-reasoning studies offer vital insights into individual differences in uncertainty monitoring, facilitating a more nuanced understanding of deliberative processing on a task. For example, when participants determine how long to persevere, they may be optimizing or satisficing, and those of a miserly disposition may simply be looking to bail out through a "computational escape hatch" (Ackerman, Douven, Elqayam, & Teodorescu, 2020; Ball & Quayle, 2000). Low-confidence responses after an "impasse" can also decouple the link between response time and deliberative thinking, as can uncertainty about the intended "correct" answer. As such, an array of individual-difference measures is necessary to understand the nature of deliberative processes. Furthermore, unpicking such deliberative processes goes beyond the observation of fast and slow thinking. Intuitive processes are always necessary for a participant to respond, and sometimes they are sufficient. Participants who understand a task as requiring the alleged system 1 response will not be prompted into deliberation by an awareness of an alleged system 2 response (as this is not necessarily the normative response or the participant's goal).

In sum, we advocate for an approach that follows the dual-process 2.0 model, but which triangulates task responses with response times, metacognitive measures, and individual-difference variables, while aligning with a soft or agnostic view of normative standards. When deciding which responses are the product of deliberative thinking, researchers must be mindful of the myriad individual differences in task interpretations, strategies, and perceived "normative" responses.

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

None.

References

Ackerman, R., Douven, I., Elqayam, S., & Teodorescu, K. (2020). Satisficing, meta-reasoning, and the rationality of further deliberation. In Elqayam, S., Douven, I., Evans, J. St. B. T., & Cruz, N. (Eds.), Logic and uncertainty in the human mind: A tribute to D. E. Over (pp. 10–26). Routledge. https://doi.org/10.4324/9781315111902-2
Ball, L. J. (2013). Microgenetic evidence for the beneficial effects of feedback and practice on belief bias. Journal of Cognitive Psychology, 25(2), 183–191. https://doi.org/10.1080/20445911.2013.765856
Ball, L. J., & Quayle, J. D. (2000). Alternative task construals, computational escape hatches, and dual-system theories of reasoning. Behavioral & Brain Sciences, 23(5), 667–668. https://doi.org/10.1017/S0140525X00243434
Beeson, N., Stupple, E. J. N., Schofield, M. B., & Staples, P. (2019). Mental models or probabilistic reasoning or both: Reviewing the evidence for and implications of dual-strategy models of deductive reasoning. Psihologijske Teme, 28(1), 21–35. https://doi.org/10.31820/pt.28.1.2
Cohen, L. J. (1981). Can human irrationality be experimentally demonstrated? Behavioral & Brain Sciences, 4(3), 317–331. https://doi.org/10.1017/S0140525X00009092
Dames, H., Klauer, K. C., & Ragni, M. (2022). The stability of syllogistic reasoning performance over time. Thinking & Reasoning, 28(4), 529–568. https://doi.org/10.1080/13546783.2021.1992012
Elqayam, S., & Evans, J. St. B. T. (2011). Subtracting "ought" from "is": Descriptivism versus normativism in the study of human thinking. Behavioral & Brain Sciences, 34(5), 233–248. https://doi.org/10.1017/S0140525X1100001X
Markovits, H., Brisson, J., & de Chantal, P. L. (2017). Logical reasoning versus information processing in the dual-strategy model of reasoning. Journal of Experimental Psychology: Learning, Memory, & Cognition, 43(1), 72–80.
Oaksford, M., & Chater, N. (2009). Précis of Bayesian rationality: The probabilistic approach to human reasoning. Behavioral & Brain Sciences, 32(1), 69–84. https://doi.org/10.1017/S0140525X09000284
Smedslund, J. (1990). A critique of Tversky and Kahneman's distinction between fallacy and misunderstanding. Scandinavian Journal of Psychology, 31(2), 110–120. https://doi.org/10.1111/j.1467-9450.1990.tb00822.x
Stenning, K., & Varga, A. (2018). Several logics for the many things that people do in reasoning. In Ball, L. J., & Thompson, V. A. (Eds.), International handbook of thinking and reasoning (pp. 523–541). Routledge.
Stupple, E. J. N., & Ball, L. J. (2014). The intersection between descriptivism and meliorism in reasoning research: Further proposals in support of "soft normativism." Frontiers in Psychology, 5(1269), 1–13. https://doi.org/10.3389/fpsyg.2014.01269
Stupple, E. J. N., Pitchford, M., Ball, L. J., Hunt, T. E., & Steel, R. (2017). Slower is not always better: Response-time evidence clarifies the limited role of miserly information processing in the cognitive reflection test. PLoS ONE, 12(11), e0186404, 1–18. https://doi.org/10.1371/journal.pone.0186404
Verschueren, N., Schaeken, W., & d'Ydewalle, G. (2005). A dual process specification of causal conditional reasoning. Thinking & Reasoning, 11(3), 239–278. https://doi.org/10.1080/13546780442000178