
Sequential and simultaneous multiple explanation: Implications for alternative consideration when response options are not provided

Published online by Cambridge University Press:  01 January 2023

Robert C. Litchfield*
Affiliation:
Economics & Business Department, Washington & Jefferson College
Jinyan Fan*
Affiliation:
Psychology Department, Hofstra University
* Address: Robert C. Litchfield, Economics & Business Department, Washington & Jefferson College, 60 S. Lincoln St., Washington, PA 15301. Email: rlitchfield@washjeff.edu

Abstract

This paper reports two experiments comparing variants of multiple explanation applied in the early stages of a judgment task (a case involving employee theft) where participants are not given a menu of response options. Because prior research has focused on situations where response options are provided to judges, we identify relevant dependent variables that an intervention might affect when such options are not given. We use these variables to build a causal model of intervention that illustrates both the intended effects of multiple explanation and some potentially competing processes that it may trigger. Although multiple explanation clearly conveys some benefits in the present experiments (e.g., willingness to delay action to engage in information search; increased detail, quality, and confidence in alternative explanations), we also found evidence that it may initiate or enhance processes that attenuate its advantages (e.g., feelings that one does not need more data if one has multiple good explanations).

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2007]. This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

People often create explanations of aspects of their social worlds when they wish to understand complex events or justify their beliefs (Hastie & Dawes, 2001; Hastie & Pennington, 2000; Koehler, 1991). Because explanations provide answers to “why” questions about causal structures, they can be influential in judgments about the presence and viability of different response options. Thus, prompting individuals to create multiple explanations early in a judgment process may increase the likelihood that they will consider alternative response options. Since those who advise managers and other applied professionals on the process of judgment routinely encourage consideration of alternatives, multiple explanation may have benefits in a variety of contexts (e.g., Arkes, 2001; Einhorn & Hogarth, 1987; Hammond, Keeney, & Raiffa, 1998; Lovallo & Kahneman, 2003; Plous, 1993; Russo & Schoemaker, 1992).

Most research on multiple explanation examines situations in which individuals are given a fixed menu of response options and have no real opportunity for additional information search. Examples include predicting the winner of a sporting event or reaching a verdict in a criminal case (Footnote 1). In these types of situations, explanations may be used to organize and match the data at hand to the available options to justify a response (Hastie & Dawes, 2001; Hastie & Pennington, 2000; Hirt & Markman, 1995). In such cases, multiple explanations may spark additional associations or different strategies of searching knowledge in order to see if the first explanation can survive the challenge that these provide (Anderson, 1982; Hastie & Pennington, 2000).

Sometimes life does not provide fixed menus of response options. Consider the situation described in Figure 1. Although simplified, this description is similar to the initial presentation of a problem that a colleague or consultant might hear from a manager. In such situations, explanations may be used to develop response options. Here, the value of multiple explanations may be to provide a sense that it is better not to rush to judgment, that more than one potentially likeable option exists, and that there are some directions available for search (Wack, 1985a, 1985b). Although the ultimate goal of multiple explanation remains the same whether or not a fixed menu of response options is provided (i.e., to make judgments “better” in some way), these interventions may target somewhat different dependent variables in situations lacking a set of pre-ordained alternatives.

Figure 1: Trouble at Big Bite.

In this paper, we report two experiments designed to investigate how multiple explanation might aid alternative generation and consideration processes in the task depicted in Figure 1. Our work contributes to research on multiple explanation in three ways. First, we draw on managerial and psychological research to develop some ideas about what dependent variables an intervention should try to affect when response options are not provided, and we present a model of how multiple explanation may influence these variables. Second, because psychology and management literatures have evolved somewhat different interventions, we test variants of multiple explanation derived from each. Our studies provide the first comparison of these different strategies of intervention. Third, our model explicitly outlines competing psychological processes that could occur, and we discuss when and why such processes are likely.

1.1 Sequential vs. simultaneous multiple explanation

1.1.1 Psychological research: Sequential multiple explanation

Although psychological research has proposed a number of somewhat different mechanisms by which multiple explanation interventions affect judgment, a common focal point is the functioning of associative memory (e.g., Arkes, 1991; Hirt & Markman, 1995; Koehler, 1991; Lord, Lepper, & Preston, 1984). It is widely accepted that the process of generating and considering an explanation activates stored knowledge (see Arkes, 1991; Koehler, 1991). A second explanation may be of assistance because it activates a new, somewhat different web of associations that can undo the effects of the initial explanation (Arkes, 1991; Koehler, 1991). These “undoing” effects may occur because subsequent explanations challenge the initial explanation (Anderson, 1982), with successful challenges leading to broader search (Hirt & Markman, 1995). Thus, prompting individuals to undo their initial explanation can “break the inertia” of the cognitive process (Koehler, 1991).

Conceptualizing second explanations as “undoing” (Koehler, 1991) or “challenging” (Anderson, 1982) initial explanations has influenced the design of interventions, leading to what we label sequential multiple explanation (SeME). Individuals receiving these interventions are typically asked to make an initial explanation, then prompted for a different explanation. Often, this second explanation requires them to imagine a different outcome that is incompatible with the first explanation (e.g., a different team winning a game), implicitly rendering the causal structure of their initial explanation incorrect (but, for an example of explaining the same outcome twice, see Hirt & Markman, 1995, Study 2). Dependent variables are assessed after completion of the second explanation. Generally, such interventions are deemed a success when individuals who generate two explanations differ on some judgment or decision outcome measure from others who offer only a single explanation. Thus, these studies typically infer that alternative consideration occurred by examining effects of interventions on distal judgments.

Although earlier interventions asked participants to list reasons for and against a particular judgmental possibility (e.g., Koriat, Lichtenstein, & Fischhoff, 1980), Anderson (1982) provided the first explicit test of a prompt to organize ideas into multiple explanations. In his study, Anderson asked some participants to explain either a positive or negative relationship between risk preferences and success in firefighting after examining fictitious data on individual firefighters' risk preferences and job performance. After participants were told that the data were fictional, some were asked to write a second explanation of the opposite relationship from the one they initially explained. Anderson then had the participants rate their beliefs about the true relationship between risk preferences and job performance in firefighting. Individuals who made a single explanation tended to rate the relationship between risk preference and job performance as stronger than control participants who were not asked to write an explanation. Multiple explanation significantly reduced this bias, though Anderson still observed a preference for the initially explained option.

Research has continued to support the idea that considering alternative reasons or explanations can be valuable (e.g., Anderson & Sechler, 1986; Hoch, 1985; Lord et al., 1984; Mussweiler, Strack, & Pfeiffer, 2000). Hirt and Markman (1995) demonstrated that any plausible alternative may be a sufficient challenge to stimulate multiple explanation effects (for a replication of the plausibility effect, see Hirt et al., 2004). Yet, what makes an alternative plausible is not well established. Hirt and colleagues (Hirt et al., 2004; Hirt & Markman, 1995) used sports prediction tasks in which a degree of objective plausibility could be established based on past performance and expert predictions. Since they also used knowledgeable participants who were not disproportionately likely to be fans of any particular team, their research probably comes reasonably close to equating objective and subjective plausibility. This procedure makes sense for their research, but it does not specifically address situations where the objective quality of options cannot be readily established. In such cases, what Anderson (1982) called the theory formation stage of explanation might loom larger.

Anderson (1982) examined the role of multiple explanation in theory formation by testing a second debiasing method he labeled inoculation. In this approach, individuals described both possible general relationships between the variables to be examined (i.e., risk and firefighting) before looking at any of the data. Although Anderson observed that counterexplanation and inoculation were equally effective in subsequent judgment, he found that inoculation enhanced the subjective plausibility of multiple explanations in the theory formation stage, when objective plausibility data were lacking. This goal has also been pursued by management researchers, to whom we now turn.

1.1.2 Management research: Toward simultaneous multiple explanation

Although management interventions have not ignored cognition (e.g., Schoemaker, 1993), the main aim of this literature has been to improve managers' motivation to consider alternatives. One example of this type of intervention is scenario planning (Wack, 1985a, 1985b). Although rigorous studies of scenario planning are scarce (but see Grant, 2003; Phelps, Chan, & Kapsalis, 2001; Schoemaker, 1993), a general consensus appears to have developed among academics and practitioners that, if done properly, the technique may increase managers' motivation to consider alternatives (for reviews, see Schoemaker, 2004; Wright, 2005).

Scenario planning is designed for use in the initial stages of judgments about future states of factors that could be important to organizational success (Wack, 1985a, 1985b). Managers are urged to imagine and develop multiple explanations without granting preferential status to a particular scenario until they have examined more than one (Schoemaker, 1993; Wack, 1985a, 1985b), leading us to label it simultaneous multiple explanation (SiME). Scenario planning does not attempt to create a “correct” representation of the future (Hodgkinson & Wright, 2002; Schoemaker, 1993; Wack, 1985a). Instead, the process aims to develop multiple, good explanations that reflect different assumptions about important variables (Wack, 1985a, 1985b).

Scenario planning pursues two seemingly contradictory goals. On one hand, a clear aim is to generate good alternative explanations, where goodness is usually conceived as the extent to which decision makers find the alternatives subjectively likeable (Wack, 1985a, 1985b). All else equal, one might expect this to create momentum for finalizing a judgment. After all, if one has multiple good ideas, why wait to act? On the other hand, scenario planning aims to encourage more detailed consideration of ideas, which requires delaying judgment (Wack, 1985a, 1985b). To build momentum for detailed consideration, it is important to create multiple scenarios that managers will find realistic and that will provide them with multiple directions for exploration (Wack, 1985a, 1985b). In extracting from scenario planning a more general tool for multiple explanation, perceptions of explanation strength and willingness to delay action are reasonable success measures because, like Anderson's (1982) inoculation strategy in psychology, they aim to impact the theory formation stage of judgment.

1.2 Extending past research: Effects of multiple explanation when response options are not given

The proximal goals of multiple explanation differ depending upon whether or not fixed menus of response options are given. When response options are provided, multiple explanation challenges the match of an initial explanation to judgment options (Anderson, 1982). For such tasks, the sequential (SeME) strategy makes sense because it clearly differentiates the initial explanation from the challenger. In contrast, multiple explanation has two major goals in tasks lacking response menus: (1) to justify information search, and (2) to provide a platform for it. Here, the simultaneous (SiME) approach makes intuitive sense to motivate multiple directions of search from a “big” platform provided by multiple explanations that are considered as equals. Yet, the challenges supplied by the SeME approach may also have these effects. This leads to two of the three questions that frame our investigation. First, what effects should we expect multiple explanation, in general, to have on individuals' tendencies to justify and develop a platform for information search? Second, are there areas in which the SeME and SiME variants might be expected to differ?

Multiple explanation is usually considered beneficial, but there might also be some costs associated with applying these interventions. For example, multiple explanation may trigger competition between tendencies to act on acceptable explanations and tendencies to delay judgment to develop and consider alternatives in more detail. In psychological research, concepts like “undoing” (Koehler, 1991) and “challenging” (Anderson, 1982) suggest competition between first and second explanations. In management research, the dual goals of generating multiple, subjectively likeable options and of delaying for detailed consideration also seem to constitute competing processes (Wack, 1985a, 1985b). Fixed menus of response options may force a resolution of competing processes. If there is no option to withhold judgment, get more information, or develop new alternatives beyond those given in the task, then people will likely comply by making some judgment. Without such restrictions, competing processes may find new means of expression. Taking these ideas into account, the third question guiding our investigation is: What competing processes might multiple explanation unleash in tasks lacking fixed menus of response options? We examine this question in the context of the other two.

1.2.1 General effects of multiple explanation

Figure 2 depicts a causal model of multiple explanation in the early stages of the task described in Figure 1. Relative to generating a single explanation, multiple explanations are likely to be valuable because they enhance the level of detail in alternative explanations (where zero detail indicates the absence of an explanation), the desired amount and breadth of information search, and the willingness to delay action to engage in that search. Figure 2 depicts these effects in a causal chain, such that later consequences of multiple explanation interventions are at least partially mediated by the most proximal effects on explanation detail. The solid lines in Figure 2 indicate the hypothesized chain of effects of multiple explanation. The dashed lines indicate additional relationships that could occur. These relationships are, in a mundane sense, statistical controls for the hypothesized mediation processes. However, in a second, important sense, they delineate additional psychological processes that, as we will show, can compete with the hypothesized effects of the interventions to add a layer of unpredictability to the effectiveness of multiple explanation. We discuss these issues as we develop our hypotheses on explanation detail, the amount and breadth of desired information search, and willingness to act.

Figure 2: Causal model of multiple explanation in the early stages of judgments when response options are not provided.
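Stated algebraically, the hypothesized paths in Figure 2 can be summarized as a set of linear structural equations. The symbols below are introduced here for exposition only and are not part of the original figure: let X be an indicator for receiving a multiple explanation prompt, D1 and D2 the detail of the first and second explanations, Q and B the desired amount and breadth of information search, and W willingness to act.

```latex
\begin{aligned}
D_1 &= a_1 X + e_1, \qquad D_2 = a_2 X + e_2,\\
Q   &= b_1 D_1 + b_2 D_2 + c_Q X + e_Q,\\
B   &= b_3 D_1 + b_4 D_2 + c_B X + e_B,\\
W   &= d_1 Q + d_2 B + d_3 D_1 + d_4 D_2 + c_W X + e_W.
\end{aligned}
```

The solid lines in Figure 2 correspond to the a, b, d1, and d2 paths (intervention to detail to search to willingness to act); the dashed lines correspond to the remaining direct paths (the c coefficients and d3 and d4), which is where both the additional hesitation discussed under willingness to act and the competing processes discussed below could surface.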

Explanation detail. Individuals sometimes spontaneously construct multiple explanations (Dougherty, Gettys, & Thomas, 1997; see Footnote 2), but multiple explanation interventions are expected to make this process more likely and/or more detailed. For instance, an individual responding to the Big Bite case depicted in Figure 1 might, if prompted for a single explanation, suggest that the cause of the problem resides in lower level managers, who may not care about enforcing the policy because their own evaluation is not focused on their prevention of theft and because they are not well informed about the scope and importance of the problem. If prompted for multiple explanations, the same person might break out appraisal and training of managers as separate causal explanations. However, since merely breaking a more comprehensive explanation into parts might not signal much of a change in the amount of alternative consideration that is likely, it is important in specifying our explanation detail hypothesis to examine whether improvements in second explanations come at the expense of first explanations.

Explanation Detail Hypothesis: Relative to individuals prompted to make only a single explanation, those asked to create multiple explanations will produce more detailed second explanations without affecting the level of detail in the first explanation offered.

Desired information search. Scenario planning suggests that more detailed options facilitate information search by revealing what variables are unknown and require data to estimate (Wack, 1985a, 1985b). However, we know of no definitive evidence that developing more detailed explanations leads individuals to desire more information search. Indeed, one might imagine that more detailed explanations could lead individuals to desire less information search. Such an alternative hypothesis would make sense if developing more detailed explanations requires individuals to estimate the likely values of more variables, causing them to feel less need for search. We see two important task variables that are likely to affect the balance between these possibilities: whether response options are provided and the amount of data available at the time of explanation formation.

When response options are provided, more detailed explanations might lessen search tendencies by facilitating comparisons between the existing data and the explanations (Hastie & Pennington, 2000). However, in situations such as the one at Big Bite (Figure 1), where response options are not provided, we expect that individuals who create more detailed explanations will desire more search. Since more detailed explanations are more complete or precise causal statements, individuals may find it easier to formulate questions to test them out. Thus, multiple explanation should increase the amount of desired information search.

Both cognitive and motivational research supports this view. From a cognitive perspective, more detail likely means that more associations are being stimulated in memory, illuminating avenues for search (Arkes, 1991; Koehler, 1991). From a motivational perspective, more detail creates a more specific web of goals, which facilitates recognition of areas where search is needed to provide perceptual input to comparisons between goals and current states (see, e.g., Vancouver, 2000). Certainly, such search may not be balanced (Klayman & Ha, 1987). Yet, to the extent that multiple explanations are detailed, individuals might recognize more avenues for testing, leading them to desire to increase the breadth of their search. In the Big Bite case, individuals who formulate detailed explanations about why the problem could be due to poor selection systems for employees and poor training of managers may be more likely to think of questions they would like to ask that span a broader range compared to individuals who develop only one of these explanations. Thus, we propose that explanation detail mediates relationships between interventions and the amount and breadth of desired information search.

Desired Amount of Search Hypothesis: Positive effects of multiple explanation on the amount of search desired will be mediated by explanation detail. Detail from both explanations will be positively related to the amount of information search desired.

Desired Search Breadth Hypothesis: Positive effects of multiple explanation on desired breadth of information search will be mediated by the amount of detail in the second explanation.

However, the amount of information available at the time of initial explanation formation could unleash a competing process. As noted above, to the extent that individuals have more data available at the time that they form their initial explanation, they may experience a lower desire to search. In such an environment, the mere possibility of multiple explanations, regardless of their level of detail, may serve to confirm that a lot of data are already on hand. This would result in direct effects of multiple explanation on the amount and breadth of search. Importantly, such effects would be opposite in sign, competing with our hypotheses. Competing processes should be less likely in an extremely information-impoverished environment, but, in accordance with research showing that individuals make judgments more easily when they have a greater volume of data available, competing processes may be more likely to occur as the information available in the judgment environment increases (Tversky & Kahneman, 2002).

Willingness to act. If people will not take time to consider alternatives, then any benefits of generating them are likely to be quite limited. As the amount of information desired rises, we expect that people will be more willing to delay action to engage in search (i.e., less willing to act without search). However, this effect may not occur exclusively through desired information search. Multiple explanation might have a direct effect on willingness to act if the intervention induces a general hesitation that comes from the simple realization that multiple possibilities exist (cf. Anderson, 1982; Wack, 1985a). As the dashed lines in Figure 2 show, the level of detail in each explanation could also exert direct effects on willingness to act. Since any direct effect of the first explanation on willingness to act is likely to be positive in sign, multiple explanation interventions must be strong enough to overcome primacy effects (Anderson, 1982).

Willingness to Act Hypothesis: Relative to individuals prompted to make only a single explanation, those asked to create multiple explanations will be less willing to act without additional information search.

1.2.2 Possible differences between sequential and simultaneous multiple explanation

SeME and SiME differ in their framing of the prompt for a second explanation. The SiME approach emphasizes generating subjectively good explanations (Schoemaker, 1993; Wack, 1985a, 1985b), rather than the merely plausible explanations used in the SeME variant (Hirt & Markman, 1995). This reflects the more motivational roots of the SiME intervention strategy. If “good” is a higher goal level than “plausible,” then SiME may have an advantage. Indeed, under SeME, it seems possible that individuals interpret the prompt for an alternative explanation as an instruction to provide a “next best” explanation. If so, SeME may unwittingly privilege the first explanation. Since the managerial literature explicitly suggests developing multiple good explanations without giving one a privileged status, we expect that SiME will lead to more detailed second explanations. However, our SiME detail hypothesis requires that this detail effect not come at the cost of reduced detail in the initial explanation, in order to guard against the possibility that SiME simply shifts individuals from a preference for the first option to a preference for the second (i.e., from primacy to recency). We propose:

SiME Detail Hypothesis: SiME will lead to more detail in second explanations relative to SeME without degrading the level of detail in first explanations.

SiME and SeME might also have different effects on confidence in explanations. Prior research has measured individuals' confidence in a focal explanation (e.g., Hirt et al., 2004; Hirt & Markman, 1995), but it has not examined how confident people feel about other explanations. When individuals need to search out response options, boosting their perceived strength of alternative explanations might foster a more appropriate balance of confidence in the response options that ultimately follow from them. This could be important because research shows that perceived alternative strength affects final judgments. For example, McKenzie et al. (2002) found that effects of sequentially presented legal arguments depended on their perceived strength. A weak second alternative did not cause individuals to change their confidence in a defendant's guilt in the direction of the second alternative when the first alternative was strong, and such a pattern could even lead to bolstering of the first option (McKenzie et al., 2002). Thus, although confidence may not have much direct impact on search, it likely plays an important role in the eventual weighting of search results (McKenzie et al., 2002). Since SiME encourages development of good alternatives, we propose that interventions based on this approach will convey greater confidence in the second explanation. As with the SiME detail hypothesis above, it is important for our SiME confidence hypothesis to establish that such effects are not mere transfers from primacy to recency. Accordingly, we propose:

SiME Confidence Hypothesis: SiME will lead to more confidence in second explanations relative to SeME without degrading the level of confidence in first explanations.

Despite the likely benefits for the final judgment of developing confidence in multiple alternatives, it is not clear whether confidence is a good thing when the goal is to justify search. Although some confidence is clearly necessary to spur search (Wack, 1985a), and too little confidence in an alternative can lead people to spurn it (McKenzie et al., 2002), much research suggests that overconfidence could also develop (Arkes, 2001). Although these conflicting impulses make it difficult to offer a directional hypothesis, we model effects of confidence on willingness to act in order to explore the downstream consequences of the SiME confidence hypothesis.

To summarize, we predict that prompting people to engage in multiple explanation will affect explanation and information search tendencies in the early stages of judgment tasks where fixed menus of response options are not provided. If multiple explanation is beneficial in such tasks, we expect that it will improve detail and confidence in second explanations, increase the desired amount and breadth of information search, and reduce willingness to act without additional search. Although such effects might not always result in improved judgment, they are consistent with the general prescriptions available in the psychological and managerial literatures we reviewed. However, these interventions may also trigger competing psychological processes that could undermine some of their effects. For example, people who develop multiple good explanations may see them as evidence that a lot of data are already on hand, reducing their desire for search. We also predict differences between sequential and simultaneous multiple explanation based on their framing of the second explanation prompt. If SiME improves judgment relative to SeME, we would expect it to increase detail and confidence in second explanations.

Investigating these similarities and differences could have both theoretical and practical significance. Although all interventions aim to impact distal judgment outcomes, more explicit investigation of proximal dependent variables might ultimately help in understanding how these interventions work. For practice, a better understanding of alternative generation and consideration might aid applied professionals in deciding when and how to intervene.

1.3 Overview of the Present Research

We conducted two experiments using a task designed to model the front end of a judgment where response options are not provided (Figure 1). Thus, we model only the beginning of the diagnostic process, but we assume that it has implications for the eventual outcome (for a detailed discussion of this argument, see Koehler, 1991). Although the management literature on multiple explanation has ranged freely across levels of analysis, we follow Schoemaker (1993) and focus on individuals in this comparison.

In Study 1, we used a simple problem description that introductory psychology students found realistic but not overwhelming (in pilot studies). This operationalization results in a task environment with a low volume of data. Thus, we expected that fewer problems with competing processes would emerge in Study 1. To see how our results might change with the information environment, we augmented the problem presentation substantially in Study 2. We included a variety of supporting materials so that undergraduate business students (our sample in Study 2) would find the task realistic but not overwhelming. We balanced the added information to avoid changing the typical conclusion that it will be necessary to search for more information. This change in materials results in a task environment with a higher volume of data and, therefore, might be more likely to bring out competing psychological processes.

We operationalized sequential (SeME) and simultaneous (SiME) multiple explanation in each study. In Study 1, our SiME intervention asked participants to pick a winner from the two explanations they generated. One could argue that this goes beyond the focus of scenario planning as an intervention. In Study 2, we kept the first study's operationalization for consistency, but we also added a second method that was identical except that it did not ask participants to pick a winner.

We analyzed results of both studies using structural equation modeling in the statistical software Amos. We constructed two primary models in each study. The first model compared the effects of the interventions to the control group. The second model omitted the control group and compared the experimental groups to each other; this model enabled us to add our measure of confidence (Footnote 3). An anonymous reviewer suggested that we also model effects of the interventions on explanation quality. We added these tests as a separate analytical step to preserve the integrity of our initial hypotheses.
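As a rough illustration of the structure of the first model (a sketch only: the analyses reported below were conducted in Amos, and the variable names, data file, and use of the Python package semopy are assumptions introduced here, not the original setup), the specification might look as follows:

```python
# Illustrative only: the original models were estimated in Amos, not semopy.
# Column names and the data file are hypothetical stand-ins for the Study 1 data.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
WTA =~ wta_p1 + wta_p2 + wta_p3 + wta_p4
exp1_detail ~ multi
exp2_detail ~ multi
n_questions ~ exp1_detail + exp2_detail + multi
breadth ~ exp1_detail + exp2_detail + multi
WTA ~ n_questions + breadth + exp1_detail + exp2_detail + multi
exp1_detail ~~ exp2_detail
n_questions ~~ breadth
"""
# multi: 0 = control, 1 = multiple explanation; wta_p1..wta_p4: random item parcels;
# exp1_detail/exp2_detail: coded detail; n_questions/breadth: desired search measures.

data = pd.read_csv("study1.csv")   # hypothetical file with one row per participant
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())             # parameter estimates
print(calc_stats(model))           # fit statistics such as CFI and RMSEA
```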

Since individuals likely differ in the overall degree of detail and quality they are willing or able to put into each explanation, we allowed the errors between the explanation detail and explanation quality variables to be correlated (i.e., explanation 1 detail and explanation 2 detail; explanation 1 quality and explanation 2 quality). In addition, we allowed errors for detail and quality for a given explanation to be correlated when both variables were included in an analysis (i.e., explanation 1 detail and quality; explanation 2 detail and quality). Likewise, because breadth is likely to increase with the amount of search, we modeled correlated errors between the total number of desired questions and the breadth of questions participants wished to ask. Since researchers consistently find individual differences in confidence (Klayman, Soll, Gonzalez-Vallejo, & Barlas, 1999; Pallier et al., 2002), we allowed correlated errors between the two confidence ratings in these analyses. All of these error covariances were significant (p < .05) in all analyses. Finally, because we expected to see evidence of competing processes, we computed bias-corrected, 95% confidence intervals for the standardized estimates of total, direct, and indirect effects in each analysis using the procedures recommended by Shrout and Bolger (2002). As they suggested, we used 1000 bootstrap samples. We examined mediation using the indirect effect, which is the effect of the predictor variable on the mediator multiplied by the effect of the mediator on the dependent variable, as calculated by Amos; Amos provides indirect effect estimates calculated over and above direct effects (Amos 5.0.1 documentation, 2003).
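The logic of the bias-corrected bootstrap for an indirect effect can be sketched in a few lines (a simplified, single-mediator illustration using ordinary least squares; it assumes generic numeric arrays x, m, and y and is not the Amos procedure used in the analyses reported below):

```python
# Sketch of a bias-corrected bootstrap CI for an indirect effect a*b
# (cf. Shrout & Bolger, 2002). x: intervention indicator, m: mediator
# (e.g., explanation 2 detail), y: outcome (e.g., number of questions desired).
import numpy as np
from scipy.stats import norm

def indirect_effect(x, m, y):
    """a*b from two regressions: m on x, then y on m controlling for x."""
    a = np.polyfit(x, m, 1)[0]                        # slope of m on x
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # slope of y on m, holding x constant
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    estimate = indirect_effect(x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(x), len(x))         # resample participants with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    # Bias correction shifts the percentile endpoints by z0, the normal quantile
    # of the proportion of bootstrap estimates falling below the point estimate.
    z0 = norm.ppf(np.mean(boots < estimate))
    lower = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    upper = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return estimate, np.quantile(boots, [lower, upper])
```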

2 Study 1

2.1 Method

Participants were 204 introductory psychology students from a large university, who participated in return for course credit. The study was designed as an experiment with one independent variable, explanation type. Participants were randomly assigned to make either a single explanation (control, n = 69), sequential multiple explanation (SeME, n = 67), or simultaneous multiple explanation (SiME, n = 68).

Procedure. Participants received a self-contained packet, which they completed and returned two days later. All participants read the vignette “Trouble at Big Bite,” which presents a brief description of what may be a major problem with employee theft at a fast food chain. Figure 1 presents the full text of the case. Participants were asked to imagine that Mary had asked them to help her solve this problem, and then respond to the questions in the packet.

Manipulation of independent variable. Participants were asked to make an explanation based on the information given. The independent variable (explanation type) was manipulated through the wording of this question. Participants in the control group (single explanation) were asked, “Based on the information given, what do you think is happening?” These participants wrote an explanation, and then responded to the dependent measures. Participants in the SeME condition responded to the same first question as the control group, but were then prompted to provide a reason why their diagnosis may be correct. After providing their first explanation and reason, these participants were asked to imagine that their first alternative was incorrect, and to provide an alternative explanation. Participants in this condition were also asked to provide a reason why this second diagnosis could be correct.

Participants assigned to the SiME condition were given a somewhat different opening prompt. They read: “When managers are faced with a complex problem, one recommended strategy is to develop two alternative explanations simultaneously. Please develop two explanations that you think could be good diagnoses of the problem in the Big Bite case (in other words, try to develop two explanations that you think could be correct). Based on the information given, what do you think is happening?” These participants wrote their explanations in spaces labeled “Explanation 1” and “Explanation 2,” and they were asked to write a reason below each explanation.

Measures of dependent variables: Confidence and willingness to act. Participants' responses to survey items were used to measure confidence and willingness to act (WTA). Answers for all confidence items used the same 13-point scale adopted by Hirt and Markman (1995), with anchors “Not at all Confident” and “Extremely Confident.” Control condition participants were asked “How confident are you that your assessment is correct?” Participants in the SeME condition answered the same question after each explanation (the second question asked specifically about the second explanation; Footnote 4). Participants in the SiME condition were asked about confidence in each explanation after they had completed both (i.e., “How confident are you that Explanation 1 (2) is correct?”). Because we could not be sure which hypothesis was favored from the confidence ratings alone, we asked these participants: “If you had to choose between your explanations, which explanation do you think is more likely to be the correct one?” Participants circled either “Explanation 1” or “Explanation 2.” Finally, we asked: “How confident do you feel that the explanation you've just chosen is correct, rather than your alternative explanation?” (Footnote 5)

Willingness to act was measured with eight items (e.g., “Without getting more information, I would be hesitant to take action in this case,” “If this were my decision, I'd be willing to act without searching for more information”). Answers used a seven-point response scale anchored from “Strongly Disagree” to “Strongly Agree.” The WTA measure was reliable (α = .90 in Study 1, .88 in Study 2). In our analyses, we collapsed the eight items from this scale into four random parcels that were modeled as indicators of a latent variable, WTA (Little, Cunningham, Shahar, & Widaman, 2002). To avoid capitalizing on chance in the composition of the parcels, we created different parcels in each of the two experiments.
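To make the two scale-handling steps just described concrete, the sketch below shows one way the reliability estimate and the random parcels could be computed (illustrative only: the item responses are simulated, and the helper names are introduced here rather than taken from the original analysis):

```python
# Sketch: Cronbach's alpha for the eight WTA items and four random item parcels.
import numpy as np

def cronbach_alpha(items):
    """items: participants x items array of scale responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def random_parcels(items, n_parcels=4, seed=0):
    """Average randomly grouped items into parcels used as latent-variable indicators."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(items.shape[1])
    groups = np.array_split(order, n_parcels)
    return np.column_stack([items[:, g].mean(axis=1) for g in groups])

# Simulated 7-point responses from 200 participants on the 8 WTA items
wta_items = np.random.default_rng(1).integers(1, 8, size=(200, 8)).astype(float)
print(cronbach_alpha(wta_items))      # scale reliability (the reported alphas were .90 and .88)
parcels = random_parcels(wta_items)   # a different seed per study yields different parcels
```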

Measures of dependent variables: Quality, detail, and desired information search. We measured explanation quality, explanation detail, desired amount of information search, and desired breadth of information search via coding of open-ended responses. An independent rater (unaware of hypotheses or conditions) coded all explanations for detail (scale from 1, “not detailed,” to 3, “very detailed”) using a scoring guide (lack of an explanation was also coded as “0”). A different, independent rater (unaware of hypotheses, conditions, or the first rater's scoring for explanation detail) used word counts recorded by the first rater to identify each explanation and then rated its quality (scale from 1, “low quality,” to 5, “high quality”; see Markman & Hirt, 2002).

Participants were also asked: “Imagine that you can now ask some additional questions about the situation at Big Bite. What questions would you want to ask?” The first rater recorded the number of questions and indicated whether they seemed to target the first explanation, second explanation, both, or neither. We used the total number of questions asked as our measure of desired amount of information search, and the number of categories as a measure of desired breadth of information search. The first author independently coded a 10% sample of the data as a reliability check. No discussion was held between the author and the blind raters, and results reported here use only the independent rater data.
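Computationally, the reliability check described above reduces to correlating the two coders' values on the overlapping 10% sample; a minimal sketch with hypothetical codes:

```python
# Sketch: inter-rater reliability (Pearson r) on the double-coded 10% sample.
import numpy as np

independent_rater = np.array([2, 3, 1, 2, 0, 3, 2, 1, 2, 3], dtype=float)  # hypothetical detail codes
first_author      = np.array([2, 3, 1, 1, 0, 3, 2, 1, 2, 2], dtype=float)  # hypothetical check codes

r = np.corrcoef(independent_rater, first_author)[0, 1]
print(f"inter-rater r = {r:.2f}")
```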

2.2 Results

Means and standard deviations for quantitative variables are listed in Table 1. Overall, reliability for the qualitative coding was adequate. Very good reliability was obtained for the level of detail in the explanations (r = .90 and .93 for explanations 1 and 2, respectively). Judgments of the number of questions asked were also very reliable (r = .99), but ratings of breadth were just adequate in reliability (r = .72). The reliability of explanation quality ratings was also adequate (r = .87 and .62 for explanations 1 and 2, respectively).

Table 1: Study 1 descriptive statistics.

Note: Con 1 = Initial confidence in first explanation (multiple explanation groups only), Con 2 = Confidence in second explanation (multiple explanation groups only), WTA = Willingness to act. * p < .05.

Figure 3 shows the results of the analysis comparing the two multiple explanation conditions to the control group. The overall fit of the model was good (χ2 = 24.10, df = 19, N = 204, CFI = .99, RMSEA = .04, CI (.00, .08), pclose = .67). Figure 3 shows that the path from multiple explanation to explanation 2 detail is significant (β = .53), but the path to explanation 1 detail is not (β = .09). The explanation detail hypothesis was supported.

Figure 3: Study 1, Multiple Explanation (SiME and SeME combined) vs. Control.

Turning to desired breadth, the indirect effect of multiple explanation on the desired breadth of information search was significant (β = .18, 95% CI for the standardized effect (.10, .27), p < .05), as was the total effect (β = .24, CI (.11, .36), p < .05). The direct effect of multiple explanation on desired breadth was not significant (β = .06, CI (–.08, .19), n.s.). The desired search breadth hypothesis was supported.

Consistent with Figure 3, the indirect effect of multiple explanation on the number of questions asked was significant (β = .15, CI (.05, .26), p < .05). To clarify the nature of the indirect effect of multiple explanation on the desired amount of search, we tested an alternative model with the path from multiple explanation to explanation 1 detail fixed to zero. This allowed us to verify that the effect occurred through explanation 2 detail. In this model, the indirect effect of multiple explanation on the desired amount of search remained significant (β = .12, CI (.06, .27), p < .05). To confirm that the effect of multiple explanation on the desired amount of search did not occur through explanation 1 detail, we tested a second alternative model fixing the path from multiple explanation to explanation 2 detail to zero. In this analysis, the indirect effect of multiple explanation on the number of questions asked was not significant (β = –.02, CI (–.06, .01), n.s.). The desired amount of search hypothesis was supported.

Turning to willingness to act, neither the direct (β = –.06, CI (–.23, .10), n.s.) nor indirect (β = –.11, CI (–.22, .02), n.s.) effects of multiple explanation on WTA were significant on their own. Examination of Figure 3 suggests that the lack of significance for the indirect effect of the interventions on WTA may be a result of the number of different (and differently signed) pathways through which multiple explanation affects WTA. To better understand these results, we tested a restricted model that fixed the paths from explanation 1 detail and explanation 2 detail to WTA to zero. This analysis revealed the expected indirect effect of multiple explanation through desired amount of search (β = –.03, CI (–.09, .00), p < .05). Interestingly, the direct effect of multiple explanation on WTA in this analysis was also significant (β = –.13, CI (–.29, –.01), p < .05). Thus, although multiple explanation had the predicted effects, these were obscured by noise from competing processes. Fortunately, the total effect of the intervention on WTA in the unrestricted analysis was significant and negative (β = –.16, CI (–.31, –.04), p < .05). Although each component of the effect was weak, multiple explanation reduced WTA.

Adding explanation quality to the model (with the same effect pathways as those for detail) generally weakened effects involving explanation detail. However, since we found no change in the total, direct, or indirect effects in the model, we attribute observed instability in the path coefficients to the high correlation between quality and detail (see Table 1). Similar to explanation detail, multiple explanation improved the quality of second explanations (β = .54, p < .05) without degrading the quality of first explanations (β = .13, p < .07).

Figure 4 shows the comparison of SiME and SeME approaches. The fit of this model was good (χ2 = 54.18, df = 33, N = 135, CFI = .96, RMSEA = .07, CI (.03, .10), pclose = .16). As depicted in Figure 4, simultaneous multiple explanation improved confidence in the second explanation generated (β = .21) without reducing confidence in the first (β = –.07). The SiME confidence hypothesis was supported. However, no effect was observed on explanation detail. The SiME detail hypothesis was not supported. Interestingly, when compared to SeME, explanation detail no longer completely mediated the effects of the SiME intervention on desired search. Instead, SiME caused individuals to desire less information search. A non-significant trend (p < .08) was also detected towards less search breadth among those receiving the SiME intervention. Exploratory analysis of confidence effects on WTA, also shown in Figure 4, revealed a positive effect of confidence in the first explanation on WTA and no effect of confidence in the second explanation. No effects related to explanation quality were found when these measures were added.

Figure 4: Study 1, SiME vs. SeME.

2.3 Summary of Study 1

Study 1 gave us reason to believe that multiple explanation techniques have effects in the early stages of judgment in tasks where response options are not provided, but it also offered evidence of the complexity of processing that these techniques may encourage. On the bright side, observed effects were consistent with the general thesis that multiple explanation has some benefits. However, we also found evidence of suppression effects from competing processes even in this low-information environment.

The SiME and SeME approaches to intervention differed in their effects on confidence as expected, suggesting that SiME increases confidence in the second explanation. Yet, it is interesting to note that the SiME participants desired less information search than SeME participants. Thus, effects on consideration processes do not uniformly favor either technique. This study suffers from limitations of sample size and task simplicity. In addition, as we noted above, the fact that we asked simultaneous multiple explainers to pick a winner is arguably not completely faithful to the idea of scenario planning. Accordingly, we conducted a replication, used a beefed-up task, and added a second operationalization of SiME.

3 Study 2

3.1 Method

Participants were 219 business students from a large university, who participated in return for course credit. The study was designed as an experiment with one independent variable, explanation type. Participants were randomly assigned to make either a single explanation (control, n = 58), SeME (n = 58), or SiME with (n = 51) or without (n = 52) picking a winner.

Procedures, manipulations, and measures were identical to Study 1, with two exceptions. First, we added a second SiME condition. The only difference between this condition and the previously described version was that participants in the “no winner” variant of SiME were not asked to pick a winner from among their explanations. Second, we altered the task to make it a more complex and realistic problem presentation. The Study 2 version of the task retained the Study 1 text in its entirety but added material, more than doubling the original length (324 words in Study 2 vs. 135 words in Study 1). In addition, we added a 125-word meal policy (since the problem deals with possible employee theft of food), a one-page accounting statement, and a one-page diagram of the standard store layout. As in the original, participants were asked to imagine that the focal manager in the case had asked them to help her solve this problem.

3.2 Results

Means and standard deviations for study variables are listed in Table 2. Reliability for the qualitative coding was lower than in Study 1 for the level of detail (r = .79 and .81 for explanations 1 and 2, respectively) and quality of explanations (r = .72 and .70 for explanations 1 and 2, respectively). Judgments of the number of questions asked were very reliable (r = .99), but ratings of breadth (r = .62) exhibited marginal reliability.

Table 2: Study 2 descriptive statistics.

Note: One sequential group participant did not complete the WTA scale. Con 1 = Initial confidence in first explanation (multiple explanation groups only), Con 2 = Confidence in second explanation (multiple explanation groups only), WTA = Willingness to act. * p < .05.

Figure 5 shows the results of the general analysis comparing multiple explanation to the single explanation group. The model fit the data well (χ2 = 35.02, df = 19, N = 218, CFI = .98, RMSEA = .06, CI (.03, .10), pclose = .24). As indicated in Figure 5, the path from multiple explanation to explanation 2 detail was significant (β = .42), but the path to explanation 1 detail was not (β = –.09). The explanation detail hypothesis was supported.

Figure 5: Study 2, Multiple Explanation (both SiME conditions and SeME combined) vs. Control.

Although Figure 5 suggests that multiple explanation affects desired search, these effects were complex. Direct (β = –.16, CI (–.29, –.02), p < .05) and indirect (β = .15, CI (.09, .22), p < .05) effects of multiple explanation on desired search breadth are significant, but in opposite directions. This is classic evidence of suppression (Shrout & Bolger, 2002). A similar effect occurs with the desired amount of search. Here, the direct effect of multiple explanation is negative (β = –.17, CI (–.30, –.03), p < .05), although the indirect effect is opposite in sign, though not significant (β = .07, CI (–.02, .14), n.s.).
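The suppression pattern follows directly from the decomposition of a standardized total effect into its direct and indirect components. Using the breadth estimates just reported (the total here is inferred from the two components and is approximate because of rounding):

```latex
\text{total} \;=\; \text{direct} + \text{indirect} \;\approx\; -.16 + .15 \;=\; -.01
```

Because the components are of nearly equal magnitude and opposite sign, the total effect is close to zero even though each pathway is individually significant, which is the signature of suppression.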

The total effect of multiple explanation on WTA remained significant in Study 2 (β = –.26, CI (–.39, –.12), p < .05), but its composition was different. The direct effect of multiple explanation on WTA was significant (β = –.29, CI (–.42, –.14), p < .05), but the indirect effect was not (β = .03, CI (–.05, .11), n.s.). Again, we note the conflicting direction and lack of overlap in the confidence intervals of the direct and indirect effects.

When we added explanation quality to the model, we observed decay in the results relating to detail similar to that found in Study 1. The standardized total, direct, and indirect effects remained essentially unchanged when explanation quality was added, leading us to attribute fluctuation in path estimates to the high correlation between quality and detail (see Table 2). As in Study 1, multiple explanation improved the quality of second explanations (β = .41, p < .05) without hurting the quality of first explanations (β = –.04, n.s.).

Before we compared SiME and SeME techniques, we first fit a model comparing the two SiME techniques to see if they were equivalent. This model fit the data well (χ2 = 38.86, df = 33, N = 103, CFI = .99, RMSEA = .04, CI (.00, .09), pclose = .57), and there were no significant differences in total, direct, or indirect effects between the two versions of SiME. We therefore considered them as a single intervention in the subsequent analysis.

Figure 6 shows the results of the analysis comparing the two SiME manipulations to the SeME technique. Model fit was good (χ2 = 39.04, df = 33, N = 160, CFI = .99, RMSEA = .04, CI (.00, .07), pclose = .73). Figure 6 shows that SiME increased confidence in the second explanation relative to SeME (β = .22) without degrading confidence in first explanations (β = .03), supporting the SiME confidence hypothesis. SiME also increased detail in second explanations (β = .18) without harming first explanations (β = .09), supporting the SiME detail hypothesis. Support of the SiME detail hypothesis created significant indirect effects of SiME on the desired amount (β = .04, CI (.00, .09), p < .05) and breadth (β = .05, CI (.01, .12), p < .05) of search. However, suppression was again observed, with opposing direct effects of SiME relative to SeME on both amount (β = –.10, CI (–.25, .05), n.s.) and breadth (β = –.10, CI (–.24, .04), n.s.) of search. The path from confidence in the second explanation to WTA was significant in Study 2, but there was no difference between the two techniques in terms of total effect on WTA (CI (–.22, .12), n.s.).

Figure 6: Study 2, SiME (both conditions combined) vs. SeME.

Adding explanation quality resulted in a model that did not fit the data well (χ2 = 131.68, df = 46, N = 160, CFI = .90, RMSEA = .11, CI (.09, .13), pclose = .00). Examination of the effects of SiME on detail and quality suggested that SiME may improve detail and quality in second explanations (for detail, β = .19; for quality, β = .25, both p < .05) without harming first explanations (for detail, β = .10; for quality, β = .11, both n.s.). Since the fit of this model was poor, we urge caution in interpreting these results.

3.3 Summary of Study 2

Study 2 extended the results of Study 1 to a more detailed version of the same vignette in order to examine multiple explanation in a higher-information environment. We also investigated a second form of SiME. Overall, Study 2 supports the same general message that multiple explanation has effects in early stages of tasks where response options are not provided. Yet, these effects were more complicated in Study 2, as evidenced by the conflicting effects on desired search. In our analysis of differences between the SiME and SeME approaches, we found evidence that SiME improves detail and confidence in the second explanation generated.

4 Discussion

The studies reported here articulate and examine multiple explanation's effects in the initial stages of a judgment task where response options are not provided. Our research examined three aspects of intervening in these situations. First, since the ultimate judgment is distal to the intervention in such tasks, one contribution of this paper is to articulate some intermediate dependent variables that these techniques can reasonably be expected to affect. Toward that end, we proposed that multiple explanation should focus on aspects of justifying information search and providing a platform for it. We identified a causal model that suggested explanation detail, desired information search, confidence in explanations, and willingness to act as dependent variables that might be affected by multiple explanation in the focal task environment, and we tested this model in two studies. We predicted that if multiple explanation improves judgment, it should increase detail and confidence in alternative explanations, increase the amount and breadth of desired information search, and reduce willingness to act without information search.

Second, although we believed that the two variants of intervention (SiME and SeME) would lead to mostly similar results, we proposed that SiME might produce stronger effects on detail and confidence in second explanations. To assess this, we compared the two types of intervention. Third, we noted that intervening early in this judgment task seemed to offer considerable potential to unleash competing psychological processes, and we suggested that these might be affected by the amount of information available in the task environment. As a first step toward understanding the degree to which this might be true, we conducted Study 1 with a task that was relatively low in the volume of information it contained and added substantially to that information in Study 2. We recap our results for each of these three areas of interest below.

In general, we found that our model fit the data well in each study and that multiple explanation had some potentially beneficial effects. Multiple explanation consistently increased the detail and quality of second explanations without degrading first explanations, and it increased willingness to delay action. Delaying action is not a positive outcome in all tasks, but without it, consideration of alternatives is likely to be trivial. Multiple explanation also had some positive effects on desired information search, though the net effect on desired search was positive only in Study 1. Thus, we believe that our first goal, developing a model for understanding the effects of multiple explanation in the early stages of tasks where fixed menus of response options are not provided, was largely accomplished.

Our predictions about differences between the two types of multiple explanation met with more mixed results. As expected, the SiME approach produced a consistent advantage over SeME with regard to confidence in the second alternative generated. Importantly, this advantage did not come at the cost of reduced confidence in the initial alternative. When response options are provided, it may be desirable to reduce confidence in the focal alternative (Hirt & Markman, 1995). In the present experiments, where response options were not provided, it may be beneficial to develop multiple, likeable possibilities for further investigation (Wack, 1985a, 1985b), so this increase in confidence could be a desirable outcome. However, three caveats must be noted. First, confidence in the second explanation increased willingness to act in Study 2, which could be a detrimental effect. The benefits of confidence may be felt later on, after additional information is collected and individuals begin to compare explanations more actively (see McKenzie et al., 2002), but that remains to be established by future research. Second, SiME increased detail in second explanations only in Study 2, and we did not find evidence that this variant improved explanation quality over and above detail in either study. Third, whereas previous SeME research often implied to participants that their initial explanation could be incorrect, our SeME intervention explicitly told participants to imagine that their first explanation was incorrect. Thus, our operationalization may have exaggerated the difference between SiME and SeME.

One other issue that deserves attention is our choice to focus on explanation detail rather than explanation quality. Our analyses showed that multiple explanation affects detail and quality similarly, but there appeared to be little added benefit for the downstream variables in our model from employing both measures. To guard against the possibility that we erred by choosing detail over quality, we re-ran our analyses substituting explanation quality for explanation detail. In each study, the models fit the data well (CFI > .96, RMSEA < .07 in all four models), and we found a very similar pattern of results, with support for an “explanation quality hypothesis” paralleling our explanation detail hypothesis in each study (see Footnote 6). Thus, there is some evidence that multiple explanation improves explanation quality over and above detail, but the large shared variance between quality and detail appears to drive the downstream effects we modeled. Of course, increased explanation quality might translate into an important downstream benefit we did not model: better judgment. Therefore, effects of multiple explanation on explanation quality are still likely to be important.

A critical message of these studies is that multiple explanation is likely to trigger competing psychological processes when response options are not provided. Using techniques for the analysis of mediation developed by Shrout and Bolger (2002), we found repeated evidence of suppression. Table 3 summarizes total, direct, and indirect effects across studies for the three effects of multiple explanation that we expected to involve mediation. As we expected, the evidence of suppression was stronger in Study 2, where the volume of information available within the task was higher.

Table 3: Summary of effects by study: bias-corrected confidence intervals based on 1000 bootstraps.

*p < .05.
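For readers who want to see the mechanics behind the intervals in Table 3, the sketch below illustrates the general bias-corrected bootstrap procedure for an indirect effect in the style recommended by Shrout and Bolger (2002). It is written in Python with simulated data; the variable names, the simple ordinary-least-squares mediation model, and the simulated coefficients are all assumptions made for illustration and do not reproduce the structural equation models reported above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """Estimate a*b for a simple x -> m -> y mediation model using two OLS regressions."""
    a = np.polyfit(x, m, 1)[0]  # slope of mediator on predictor
    design = np.column_stack([x, m, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # slope of outcome on mediator, controlling predictor
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=1000, alpha=0.05):
    """Bias-corrected bootstrap confidence interval for the indirect effect."""
    est = indirect_effect(x, m, y)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                      # resample cases with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    # Bias correction: shift the percentile cutoffs according to how far the
    # bootstrap distribution's median sits from the original estimate.
    p_below = np.clip((boots < est).mean(), 1.0 / n_boot, 1.0 - 1.0 / n_boot)
    z0 = norm.ppf(p_below)
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return est, np.quantile(boots, [lo, hi])

# Simulated data: a positive indirect path whose direct path roughly cancels it (suppression).
n = 160
x = rng.standard_normal(n)                       # hypothetical condition contrast (e.g., SiME vs. SeME)
m = 0.3 * x + rng.standard_normal(n)             # hypothetical mediator (e.g., detail in explanation 2)
y = 0.4 * m - 0.12 * x + rng.standard_normal(n)  # hypothetical outcome (e.g., desired amount of search)
print(bc_bootstrap_ci(x, m, y))
```

With coefficients like these, the interval for the indirect effect typically excludes zero even though the opposing direct path keeps the total effect near zero, mirroring the suppression pattern summarized in Table 3.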

Our results suggest several potentially interesting directions for future research. One we would like to highlight is the need to give more attention to the early stages of judgment tasks where response options are not provided. We have offered one way to investigate these types of judgments, but other contexts may provide other measurement opportunities. For instance, future research might investigate the resources that individuals wish to allocate to different search options after generating multiple explanations. A second topic that may deserve attention concerns the mechanisms underlying these effects. Management research suggests that such interventions operate largely through motivation rather than cognition, yet our studies were not designed to make this distinction. Future research might operationalize goals more explicitly (Locke & Latham, 1990) to see whether the mechanisms can be pinned down more successfully. Finally, future research might examine how these techniques fare in light of individual differences such as need for closure (Hirt et al., 2004; see Footnote 7).

In conclusion, our research has shown that multiple explanation may have important, but heretofore largely uninvestigated, effects in the early stages of tasks where response options are not provided to individuals. We wish to note that nothing in our investigation shows directly that multiple explanation will improve the eventual judgment. However, given the long causal chains involved in so many applied judgment processes, we think it is important to investigate whether these interventions affect intermediate dependent variables, since such effects are surely a precondition for any distal effects. We argue that a better understanding of the more proximal effects of multiple explanation will ultimately offer researchers and interveners more options for understanding and improving judgment.

Footnotes

* Portions of the data from Study 1 were presented at the 2004 annual meeting of the Society for Industrial and Organizational Psychology. Portions of the data from Study 2 were presented at the 2005 annual meeting of the Society for Industrial and Organizational Psychology. We thank Samantha Abbott for data coding and Jennifer Corbin for library support. We are very grateful to Angela Henderson for additional coding assistance.

1 Of course, one could search for more information in sports or jury tasks. However, researchers in laboratory studies of sports prediction, like friends or fellow sports fans, commonly request a prediction on the spot. Juries, at least in the United States, are generally prohibited from conducting their own external information search.

2 Consistent with this finding, a few individuals in each of the studies we report below offered multiple explanations despite receiving only a single explanation prompt.

3 Because no measure of confidence in a second explanation could be collected from groups asked to create only one explanation, including these confidence measures in the overall model would have created an unacceptable level of missing data.

4 These participants also answered a final question about their confidence in the initial explanation. Prior research on whether these interventions reduce confidence in the first explanation after considering alternatives is equivocal (Hirt et al., 2004; Hirt & Markman, 1995). Similar to Hirt et al. (2004), we found no indication that any manipulation reduced confidence in the first explanation in either study, and we omit further discussion of this variable.

5 Picking a winner and rating confidence in it were designed to facilitate the “final confidence” rating (see note 4 above).

6 A parallel explanation quality hypothesis would read: Relative to individuals prompted to make only a single explanation, those asked to create multiple explanations will produce higher quality second explanations without degrading the quality of the first explanation offered. In analyses with explanation detail removed, multiple explanation improved the quality of second explanations (β = .52 in Study 1, β = .40 in Study 2, both p < .05) without degrading first explanations (β = .13 in Study 1, p < .07; β = .01 in Study 2, n.s.). In a similar fashion, these analyses revealed partial support for a parallel “SiME quality hypothesis.” SiME improved the quality of second explanations relative to SeME in Study 2 (β = .23, p < .05; for explanation 1, β = .11, n.s.), but not in Study 1 (β = .09, n.s.; for explanation 1, β = .01, n.s.).

7 We thank an anonymous reviewer for suggesting this potentially important research direction.

References

Anderson, C. A. (1982). Inoculation and counterexplanation: Debiasing techniques in the perseverance of social theories. Social Cognition, 1, 126–139.
Anderson, C. A., & Sechler, E. S. (1986). Effects of explanation and counterexplanation on the development and use of social theories. Journal of Personality and Social Psychology, 50, 24–34.
Arkes, H. R. (1991). Costs and benefits of judgment errors: Implications for debiasing. Psychological Bulletin, 110, 486–498.
Arkes, H. R. (2001). Overconfidence in judgmental forecasting. In J. S. Armstrong (Ed.), Principles of forecasting: A handbook for researchers and practitioners (pp. 495–515). Boston: Kluwer Academic.
Dougherty, M. R. P., Gettys, C. F., & Thomas, R. P. (1997). The role of mental simulation in judgments of likelihood. Organizational Behavior and Human Decision Processes, 70, 135–148.
Einhorn, H. J., & Hogarth, R. M. (1987). Decision making: Going forward in reverse. Harvard Business Review, January–February, 66–70.
Grant, R. M. (2003). Strategic planning in a turbulent environment: Evidence from the oil majors. Strategic Management Journal, 24, 491–517.
Hammond, J. S., Keeney, R. L., & Raiffa, H. (1998). The hidden traps in decision making. Harvard Business Review, September–October, 47–58.
Hastie, R., & Dawes, R. (2001). Rational choice in an uncertain world: The psychology of judgment and decision making. Thousand Oaks: Sage.
Hastie, R., & Pennington, N. (2000). Explanation-based decision making. In T. Connolly, H. R. Arkes, & K. R. Hammond (Eds.), Judgment and decision making: An interdisciplinary reader (2nd ed., pp. 212–228). Cambridge: Cambridge University Press.
Hirt, E. R., Kardes, F. R., & Markman, K. D. (2004). Activating a mental simulation mind-set through generation of alternatives: Implications for debiasing in related and unrelated domains. Journal of Experimental Social Psychology, 40, 374–383.
Hirt, E. R., & Markman, K. D. (1995). Multiple explanation: A consider-an-alternative strategy for debiasing judgments. Journal of Personality and Social Psychology, 69, 1069–1086.
Hoch, S. J. (1985). Counterfactual reasoning and accuracy in predicting personal events. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 719–731.
Hodgkinson, G. P., & Wright, G. (2002). Confronting inertia in a top management team: Learning from failure. Organization Studies, 23, 949–977.
Klayman, J., & Ha, Y. (1987). Confirmation, disconfirmation and information in hypothesis testing. Psychological Review, 94, 211–228.
Klayman, J., Soll, J. B., Gonzalez-Vallejo, C., & Barlas, S. (1999). Overconfidence: It depends on how, what, and whom you ask. Organizational Behavior and Human Decision Processes, 79, 216–247.
Koehler, D. J. (1991). Explanation, imagination, and confidence in judgment. Psychological Bulletin, 110, 499–519.
Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107–118.
Little, T. D., Cunningham, W. A., Shahar, G., & Widaman, K. F. (2002). To parcel or not to parcel: Exploring the question, weighing the merits. Structural Equation Modeling, 9, 151–173.
Locke, E. A., & Latham, G. P. (1990). A theory of goal-setting and task performance. Englewood Cliffs, NJ: Prentice-Hall.
Lord, C. G., Lepper, M. R., & Preston, E. (1984). Considering the opposite: A corrective strategy for social judgment. Journal of Personality and Social Psychology, 47, 1231–1243.
Lovallo, D., & Kahneman, D. (2003). Delusions of success: How optimism undermines executives' decisions. Harvard Business Review, July, 56–63.
Markman, K. D., & Hirt, E. R. (2002). Social prediction and the “allegiance bias.” Social Cognition, 20, 58–86.
McKenzie, C. R. M., Lee, S. M., & Chen, K. K. (2002). When negative evidence increases confidence: Change in belief after hearing two sides of a dispute. Journal of Behavioral Decision Making, 15, 1–18.
Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26, 1142–1150.
Pallier, G., Wilkinson, R., Danthiir, V., Kleitman, S., Knezevic, G., Stankov, L., & Roberts, R. D. (2002). The role of individual differences in the accuracy of confidence judgments. Journal of General Psychology, 129, 257–299.
Phelps, R., Chan, C., & Kapsalis, S. C. (2001). Does scenario planning affect performance? Two exploratory studies. Journal of Business Research, 51, 223–232.
Plous, S. (1993). The psychology of judgment and decision making. New York: McGraw-Hill.
Russo, J. E., & Schoemaker, P. J. H. (1992, Winter). Managing overconfidence. Sloan Management Review, 32, 7–17.
Schoemaker, P. J. H. (1993). Multiple scenario development: Its conceptual and behavioral foundation. Strategic Management Journal, 14, 193–213.
Schoemaker, P. J. H. (2004). Forecasting and scenario planning: The challenges of uncertainty and complexity. In D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 274–296). Malden, MA: Blackwell.
Shrout, P. E., & Bolger, N. (2002). Mediation in experimental and nonexperimental studies: New procedures and recommendations. Psychological Methods, 7, 422–445.
Tversky, A., & Kahneman, D. (2002). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 19–48). Cambridge: Cambridge University Press.
Vancouver, J. B. (2000). Self-regulation in organizational settings: A tale of two paradigms. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 303–341). San Diego: Academic Press.
Wack, P. (1985a). Scenarios: Uncharted waters ahead. Harvard Business Review, September–October, 73–89.
Wack, P. (1985b). Scenarios: Shooting the rapids. Harvard Business Review, November–December, 139–150.
Wright, A. (2005). Using scenarios to challenge and change management thinking. Total Quality Management, 16, 87–103.