
The environment matters: Comparing individuals and dyads in their adaptive use of decision strategies

Published online by Cambridge University Press:  01 January 2023

Juliane E. Kämmer*
Affiliation:
Max Planck Institute for Human Development, Center for Adaptive Behavior and Cognition, Lentzeallee 94, 14195, Berlin, Germany
Wolfgang Gaissmaier
Affiliation:
Max Planck Institute for Human Development, Harding Center for Risk Literacy
Uwe Czienskowski
Affiliation:
Max Planck Institute for Human Development, Center for Adaptive Behavior and Cognition

Abstract

Individuals have been shown to adaptively select decision strategies depending on the environment structure. Two experiments extended this research to the group level. Subjects (N = 240) worked either individually or in two-person groups, or dyads, on a multi-attribute paired-comparison task. They were randomly assigned to two different environments that favored one of two prototypical decision strategies—weighted additive or take-the-best (between-subjects design in Experiment 1 and within-subject design in Experiment 2). Performance measures revealed that both individuals and dyads learned to adapt over time. A higher starting and overall performance rate in the environment in which weighted additive performed best led to the conclusion that weighted additive served as a default strategy. When this default strategy had to be replaced, because the environment structure favored take-the-best, the superior adaptive capacity of dyads became observable in the form of a steeper learning rate. Analyses of nominal dyads indicate that real dyads performed at the level of the best individuals. Fine-grained analyses of information-search data are presented. Results thus point to the strong moderating role of the environment structure when comparing individual with group performance and are discussed within the framework of adaptive strategy selection.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2013] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Imagine a group of geologists searching for profitable oil-drilling sites for an oil company. Before this group can pick one of several possible sites, it has to decide how to make this decision. First, it needs to decide what information to search for and in what order. Different methods are available for inferring the quality of the available sites, such as chemical and seismic analyses, which differ in their success rate. Second, the group needs to decide when to stop searching for information and, third, how to integrate the pieces of information to make a decision. For example, it could commission all available analyses and weight and add the results. Alternatively, it could proceed sequentially, starting with the most successful method and deciding as soon as one result clearly favors one site.

This example illustrates the idea that decision makers can choose from a repertoire of different decision strategies, for which Gigerenzer, Todd, and the ABC Research Group (1999) coined the term “adaptive toolbox”. This idea goes back to Herbert A. Simon (1956), who saw cognition as an adaptation to the environment. Different environments require the use of different decision strategies to be successful, as no single strategy will be universally superior (Gigerenzer & Gaissmaier, 2011). A strategy is considered ecologically rational to the degree that it matches the environment structure. The important questions are whether people are good at deciding how to decide, and how they do so. This fundamental problem is known in the literature as the strategy selection problem (e.g., Payne, Bettman, & Johnson, 1988, 1993; Rieskamp & Otto, 2006).

Within the existing literature on adaptive strategy selection in humans (e.g., Bröder, 2003; Christensen-Szalanski, 1978, 1980; Marewski & Schooler, 2011; Payne et al., 1988, 1993; Rieskamp & Hoffrage, 2008; Rieskamp & Otto, 2006), most of the research has focused on adaptive decision making in individuals (for rare exceptions see Kämmer, Gaissmaier, Reimer, & Schermuly, 2013; Reimer & Katsikopoulos, 2004). Many decisions in real life, however, are made in a social context, for example, under the advice of another person (e.g., Bonaccio & Dalal, 2006) or in a group of people (Kerr & Tindale, 2004; Levine & Smith, in press). In fact, teams are ubiquitous in all sectors of organizations today, such as in the healthcare system or aviation (Manser, 2009; Waller, 1999). Reasons for this prevalence are mainly seen in (a) their potential superiority to individuals, as they can combine multiple perspectives, areas of expertise, and resources to work on complex problems (Larson, Foster-Fishman, & Keys, 1994; Stasser, 1992) and (b) their large potential for adaptation to a dynamic environment (Burke, Stagl, Salas, Pierce, & Kendall, 2006; Randall, Resick, & DeChurch, 2011). The current study extends research on the adaptive use of decision strategies to the group level and addresses the following questions: Do groups learn to select the decision strategy that fits best to a novel environment structure, and how well do they do so in comparison to individuals?

Although we take the perspective of the adaptive toolbox, there are alternative approaches. For example, a lively debate concerns whether a Bayesian approach to cognition could be a universal strategy (see, e.g., Jones & Love, 2011; for comments see Bowers & Davis, 2012a, 2012b; Griffiths, Chater, Norris, & Pouget, 2012). Other single-strategy process models that are discussed are the parallel constraint satisfaction (PCS) models (Glöckner & Betsch, 2008a; Glöckner, Betsch, & Schindler, 2010; for a debate see Glöckner & Betsch, 2010; Marewski, 2010) and sequential-sampling process models such as the adaptive spanner perspective (Newell, 2005) and decision field theory (Busemeyer & Townsend, 1993). Note that our goal was not to test these perspectives against each other (see, e.g., Newell & Lee, 2011) but to better understand performance differences between individuals and groups in distinctive environments, for which we apply the ecological rationality framework.

1.1 Comparing individuals with groups

Comparing individual with group performance has a long tradition in psychology (e.g., Watson, 1928), which has documented both the superiority of groups to individuals and their inferiority under certain conditions. Some of the inconsistencies can be resolved by taking the specific task context and methodology into account, as performance of individuals and groups is a function of the available resources, strategies for their use, task context, and methodology (Bottger & Yetton, 1988; Hill, 1982) and—as we will show—the environment structure (as also argued by Gigerenzer et al., 1999).

For a fair comparison between individual and group performance, it is also important to specify the dependent measure: The performance of an interactive (i.e., collective) group can be compared to (1) the average individual performance, (2) the most competent member of a statistical aggregate or nominal group (Hill, 1982), and/or (3) a statistically pooled response (e.g., averaging continuous guesses in research on the wisdom of crowds; see, e.g., Lorenz, Rauhut, Schweitzer, & Helbing, 2011). For example, research shows that collective groups outperform the average individual on intellective tasks, which are tasks for which a correct answer exists and is demonstrable (for an overview, see Kerr & Tindale, 2004). In tasks with highly demonstrable answers, groups are likely to adopt the opinion of the best member (“truth wins”) and may perform at the level of that best member. Very few studies have shown that groups may outperform their best members (e.g., Laughlin, Bonner, & Miner, 2002). In brainstorming research, on the other hand, collective groups have been shown to underperform nominal groups in terms of the quantity of generated ideas (for an overview, see Stroebe, Nijstad, & Rietzschel, 2010). In terms of memory capacity, collective groups were shown to remember more than the average individual but less than nominal groups (Betts & Hinsz, 2010). These few examples illustrate that no general conclusion concerning group superiority can be drawn and that the comparison measure matters.

To assess group performance in our experiments, we therefore compared it with the average as well as the best individual of a nominal group. Besides providing a statistical benchmark, nominal groups can be seen as simulating a group decision process, in which members observe each other’s performance on the first trials or receive feedback about each other’s performance in a similar task, and then agree on following the suggestions of the best member instead of deciding on every trial jointly. If collective groups perform below the level of nominal groups, it may be due to coordination difficulties (Steiner, 1972), production blocking (Diehl & Stroebe, 1987), or distraction (Baron, 1986). (A more comprehensive list of factors influencing group performance positively as well as negatively can be found in Lamm & Trommsdorff, 2006, and Steiner, 1972.)

By studying how well groups learn to use the appropriate strategy in an unknown task environment, we extend research that compares individual with group performance to a strategy-learning task. At the same time we aim to broaden the decision-making literature, which has focused on adaptive strategy selection in individuals (Bröder, 2003; Rieskamp & Otto, 2006). For example, task characteristics such as costs of information search or time pressure were found to foster limited information search and noncompensatory ways of integrating information (e.g., Bröder, 2003; Christensen-Szalanski, 1978, 1980; Payne et al., 1988, 1993). Moreover, environment characteristics such as the dispersion of cue validities and information redundancy have been found to influence decision making in a systematic way (e.g., Dieckmann & Rieskamp, 2007; Rieskamp & Hoffrage, 1999; Rieskamp & Otto, 2006). As groups can be conceptualized as information-processing entities where cognition is distributed across individuals (De Dreu, Nijstad, & van Knippenberg, 2008; Hinsz, Tindale, & Vollrath, 1997; Levine & Smith, in press), and groups and individuals face similar conditions when making decisions, we expect that the same principles found for individuals also hold for groups. Our first hypothesis is therefore that groups are able to learn to use appropriate decision strategies contingent on the task environment.
We also ground this prediction on research on group decision making showing that groups apply decision strategies similar to those applied by individuals (Reimer, Hoffrage, & Katsikopoulos, 2007; Reimer & Katsikopoulos, 2004). Last, we base our prediction on organizational psychology research on the adaptive capacity of teams (i.e., the capacity to gather information from the environment and “to make functional adjustments”; Randall et al., 2011, p. 526), which documents groups’ adaptive performance when encountering novel conditions in a number of applied settings (such as by airline crews, Waller, 1999; see also Burke et al., 2006; LePine, 2003). We ran exploratory analyses to test whether groups would perform as well as the best individual.

How quickly do groups learn to adapt their decision strategy? One important mechanism behind strategy selection is learning from feedback (Rieskamp & Otto, 2006). Although feedback generally enhances learning and motivation (Nadler, 1979), studies in psychology (e.g., Davis, 1969; Laughlin & Shippy, 1983; Tindale, 1989; see Hill, 1982, and Hinsz et al., 1997, for reviews) and behavioral economics (Kocher & Sutter, 2005; Maciejovsky, Sutter, Budescu, & Bernau, 2010) have shown that groups require fewer feedback trials than the average individual to reach asymptotic levels of learning. Reasons for this superiority of groups may be a stronger reliance on memorization (Olsson, Juslin, & Olsson, 2006) and better processing of feedback information (Hinsz, 1990). This leads us to our second hypothesis that groups will learn to adapt their decision strategy to an unfamiliar environment faster than the average individual.

1.2 Two prototypical decision strategies

To investigate these hypotheses, we conducted two experiments with a two-alternative forced-choice task, in which subjects had to select the more profitable oil-drilling site. Each alternative (i.e., oil-drilling site) was described on a range of attributes (henceforth: cues), such as the results of seismic analysis. In line with research on individuals (e.g., Rieskamp & Otto, 2006), our focus was on environments in which two prototypical decision strategies work well: take-the-best (Gigerenzer & Goldstein, 1999) and weighted additive (WADD). Both strategies make predictions about information search and choice behavior (Bröder, 2003; Payne et al., 1988; Rieskamp & Otto, 2006), and their success depends on the environment structure.

Take-the-best looks up the best (i.e., most valid) cue for both alternatives. If this cue discriminates between them (i.e., is positive for one but negative for the other), take-the-best selects the alternative with the positive cue value and ignores all other cues (Gigerenzer & Goldstein, 1999). Think of our introductory example: if the group considers seismic analysis the most valid cue and this indicates high quality for oil-drilling site X but not for Y, the group would administer no further tests and would choose site X. But if seismic analysis showed positive results for both sites, a group using take-the-best would acquire the next-best cue, and so on, until a discriminating cue was found. A frequent criticism is that people violate the stopping rule and search for more information than necessary, that is, acquire information after the first discriminating cue (Newell & Shanks, 2003; Newell, Weston, & Shanks, 2003). This is particularly common when information search does not incur any costs (e.g., Dieckmann & Rieskamp, 2007). However, others have argued that looking up too many cues does not rule out take-the-best as long as the final choice is based on a single cue (see Hogarth & Karelaia, 2007). In this regard, our experiment constitutes a challenging test bed, as information search did not incur any costs. We report a method for testing whether unnecessarily acquired information influenced the decision, which would speak more strictly against a consistent use of take-the-best than the mere number of acquired cues (the measure usually taken; e.g., Newell & Shanks, 2003; Rieskamp & Dieckmann, 2012).
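The search, stopping, and decision rules of take-the-best just described can be sketched in a few lines of code. This is an illustrative reading, not the authors' implementation; cue values are encoded as +1/−1 and the function name and signature are our own:

```python
def take_the_best(cues_x, cues_y, validities):
    """Choose between alternatives X and Y with take-the-best.

    cues_x, cues_y: cue values (+1 / -1), indexed like `validities`.
    Returns ('X' | 'Y' | None, number of cues inspected);
    None means no cue discriminates, so the strategy must guess.
    """
    # Search rule: inspect cues in order of decreasing validity.
    order = sorted(range(len(validities)), key=lambda i: -validities[i])
    for n, i in enumerate(order, start=1):
        # Stopping/decision rule: the first discriminating cue decides.
        if cues_x[i] != cues_y[i]:
            return ('X' if cues_x[i] > cues_y[i] else 'Y'), n
    return None, len(order)  # all cues tie: guess
```

Note that the stopping rule is what makes the strategy noncompensatory: once a cue discriminates, no later (less valid) cue is even looked at.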

In contrast, WADD looks up all cues for both alternatives, multiplies each cue value by its weight, and then selects the alternative with the larger weighted sum. Variants of WADD take—instead of the validities—chance-corrected validities (Glöckner & Betsch, 2008b) or log odds as weights (termed naïve Bayes; Bergert & Nosofsky, 2007; Katsikopoulos & Martignon, 2006; Lee & Cummins, 2004). Strictly speaking, WADD is assumed to integrate all available cues (e.g., Czerlinski, Gigerenzer, & Goldstein, 1999). However, WADD also works with limited information search, if one assumes that WADD searches cues sequentially according to their validity and stops search as soon as no additional cue can overrule a preliminary decision (as suggested by Rieskamp & Dieckmann, 2012). On this basis, we can define “necessary information” as the minimum number of cues WADD has to search so that no additional cue could possibly compensate for the decision based on the acquired cues. Searching for fewer cues than necessary would violate the search rule of WADD (Hogarth & Karelaia, 2007), but the predictions for choice do not change. The advantage of these two models is that they formulate testable predictions on information search, stopping, and choice rules, which can also be tested in groups.
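The limited-search variant of WADD described above (stop as soon as the remaining cues cannot overturn the preliminary decision) can likewise be sketched. Again this is an illustrative reading with our own +1/−1 encoding and naming, not the authors' code:

```python
def wadd_limited(cues_x, cues_y, weights):
    """WADD with validity-ordered search and the 'necessary information'
    stopping point: stop once the weight mass of the unopened cues can
    no longer overturn the preliminary weighted-sum decision.
    Returns ('X' | 'Y' | None, number of cues inspected)."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    score = 0.0
    for n, i in enumerate(order, start=1):
        # Each cue contributes weight * (cue_x - cue_y), i.e. +/-2w or 0.
        score += weights[i] * (cues_x[i] - cues_y[i])
        # Maximum evidence the remaining cues could still contribute.
        remaining = 2 * sum(weights[j] for j in order[n:])
        if abs(score) > remaining:
            break  # decision can no longer be overruled
    if score > 0:
        return 'X', n
    if score < 0:
        return 'Y', n
    return None, n  # weighted sums tie: guess
```

The cue count at the break point is exactly the "necessary information" defined in the text; the choice is identical to full-information WADD.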

As this is the first study that examines the adaptive use of take-the-best and WADD in groups, we also explored how groups apply strategies as compared to individuals. Is accordance with the strategy’s search and stopping rules higher in groups than in individuals? Do groups apply strategies more consistently than individuals (Chalos & Pickard, 1985)? We will explore these questions on the basis of process and outcome data.

2 Experiment 1

Experiment 1 constitutes a first test bed for our assumptions on adaptive strategy selection in groups as opposed to individuals. To investigate whether subjects learn to select strategies adaptively, that is, contingent on the environment structure, we randomly assigned them to one of two environments, which were constructed to discriminate between the use of take-the-best and WADD: Take-the-best led to the highest performance in the take-the-best-friendly environment and WADD in the WADD-friendly environment. In such environments, people’s accordance with the best-performing (i.e., adaptive) strategy has been shown to increase over time when working alone (Bröder, 2003; Bröder & Schiffer, 2006; Rieskamp & Otto, 2006). The task in each case was to select the more profitable of two oil-drilling sites based on a range of cues, with outcome feedback after each trial. Subjects were randomly assigned to work alone or in same-sex two-person groups (hereafter: dyads).

2.1 Method

2.1.1 Subjects

Subjects included 120 people (60 females; M age = 26.3 years, SD = 3.7), of whom 77% indicated being a student. Subjects received €12.96 on average (SD = 0.83; €1 ≈ $1.37 at the time). To complete the experimental task, individuals took on average 36 min (SD = 12) and dyads 50 min (SD = 21).

2.1.2 Design and procedure

The experiment had a 2 × 2 × 3 factorial design: (Subject [individual, dyad] × Environment [take-the-best-friendly, WADD-friendly] × Block). The first two factors (Subject, Environment) were between subjects, the third (Block) within subject. Upon arrival, subjects were randomly assigned to one of the four between-subjects conditions, forcing equal cell sizes of 20 units. Of the 120 subjects, 80 were assigned to the dyad condition and 40 to the individual condition. For data analysis, each dyad was counted as a unit, since the two subjects worked together.

Subjects were seated in front of a touch screen either individually or in dyads. After answering demographic questions, subjects completed a practice trial and then worked on the experimental task. Dyads were encouraged to discuss their information search and to agree on a joint decision (see Appendix A for instructions).

2.1.3 Experimental task

The oil-drilling task (Czienskowski, 2004) is a MouseLab-like task (Payne et al., 1988) that asks subjects to choose the more profitable of two oil-drilling sites in a sequence of trials. Each oil-drilling site was described by six cues and their validities (which correspond to the actual validities in the set; see Figure 1). Validities in decreasing order in both environments were (in percentages, with the discrimination rates for the take-the-best-friendly and WADD-friendly environment in parentheses): 78% (.35; .69), 71% (.54; .65), 65% (.65; .77), 60% (.58; .58), 56% (.69; .69), and 53% (.58; .58).Footnote 1 Cues appeared in alphabetical order. Cue validities and cue names were randomly paired once before the experiment and stayed fixed throughout the experiment and for all subjects. “Validity” was described as the proportion of correct answers using that cue alone when the cue was applicable (in German the word for “success” was used). The cues were framed as tests that could be commissioned (i.e., clicked on) to inform choice. Figure 1 illustrates the two decision strategies, WADD and take-the-best, with screenshots of the task interface. At the beginning of each trial, all boxes contained question marks. They could be clicked on separately to reveal whether the cue had a positive (“+”) or a negative (“−”) value, which remained visible until a choice was made. Clicking on cues was cost free. Outcome feedback followed each trial. For each correct choice, the subject’s account increased by 1,000 petros, a fictitious currency, equivalent to €0.10.

Figure 1: Screenshots of the task interface including six cues for each oil-drilling site (X and Y) illustrating the search behavior of a weighted additive strategy (WADD, left) and take-the-best (right). WADD required looking up all cues to calculate the weighted sum for each alternative. Take-the-best looked up the cue with the highest validity (here: seismic analysis) first, and, as this one did not discriminate, it looked up the cue with the second highest validity (geophones) next. As this cue discriminated, take-the-best reached a decision and ignored the remaining cues, which is why they are still hidden (“?”).

The task comprised three blocks, each consisting of the same set of 2 × 26 items (adapted from Rieskamp & Otto, 2006, Study 2; for the complete item sets see Tables A.1 and A.2 in Appendix A). The items within each block were randomly ordered for each subject with the restriction that the oil-drilling sites on the left and right were equally often correct. Overall, 50% of the total item set were critical items, that is, items for which the two strategies make opposing predictions. To create a WADD-friendly environment, items were constructed by means of genetic algorithms such that WADD reached an accuracy of 88%, while take-the-best reached an accuracy of only 62%. In the take-the-best-friendly environment, accuracies were reversed: 88% for take-the-best and 62% for WADD.Footnote 2 Footnote 3
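As an illustration of what an environment accuracy such as "88% for WADD" means operationally, a strategy's choices can be scored against the criterion over an item set, with unresolved trials counted as chance. The item encoding and names here are our own, not the authors' materials:

```python
def strategy_accuracy(items, strategy):
    """Score a strategy on an item set.

    items: list of (cues_x, cues_y, correct) with correct in {'X', 'Y'}.
    strategy(cues_x, cues_y) -> 'X', 'Y', or None (guess, scored 0.5).
    Returns the expected proportion of correct choices."""
    score = 0.0
    for cues_x, cues_y, correct in items:
        choice = strategy(cues_x, cues_y)
        score += 0.5 if choice is None else float(choice == correct)
    return score / len(items)
```

Constructing an environment then amounts to searching (here, via genetic algorithms) for an item set on which this score is high for one strategy and low for the other.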

2.2 Results

The results section is structured as follows: We first investigate whether subjects learned to adapt their strategy to the environment by analyzing performance changes over the three trial blocks. If dyads were faster than individuals, the performance difference should manifest itself from the first to the second block. We thus compared the first with the second and third block combined with a planned contrast. Performance was measured as the percentage of correct trials out of the 156 trials. To better compare performance between individuals and dyads, we also report analyses on nominal dyads. To evaluate the adaptivity of strategy use, we focus on accordance rates with the most appropriate strategy in each environment. Last, we test how subjects conformed to the corresponding search and stopping rules. Note that we have additionally analyzed the correspondence with a range of alternative strategies (Tally, chance-corrected WADD, and naïve Bayes). For clarity, we report the results of these extended classification analyses only in Appendix C but summarize and discuss them in the main text.

2.2.1 Performance

To investigate performance changes over the three blocks, we conducted a repeated-measures analysis of variance (ANOVA) with block as a within-subject factor and environment and individuals vs. dyads as between-subjects factors, and the accuracy per block as dependent variable. Figure 2 depicts the results. Accuracy generally increased over time, F block (1.65, 125.594) = 28.294, p < .001, η2p = .27 (Greenhouse-Geisser corrected). This improvement was more pronounced in the take-the-best-friendly environment, F Block × Environment (2, 152) = 15.341, p < .001, η2p = .17. Most importantly, we observed a Block × Ind. vs. Dyad interaction, F Block × Ind. vs. Dyads (2, 152) = 4.588, p = .01, η2p = .06. A planned contrast comparing block 1 with blocks 2 and 3 combined revealed that individuals and dyads started from the same level, but dyads then improved more quickly than individuals, F (1, 76) = 5.313, p = .02, η2p = .07. Overall, dyads were not better than the average individual, however, F ind. vs. dyads (1, 76) = 1.84, p = .18, η2p = .02. Last, mean performance was lower in the take-the-best-friendly environment (M take-the-best = .81, SD = .05) than in the WADD-friendly environment (M WADD = .85, SD = .05), F environment (1, 76) = 11.779, p = .001, η2p = .13.

Figure 2: Mean performance per block of dyads (n = 20) and individuals (n = 20), in the WADD-friendly (left) and take-the-best-friendly (TTB; right) environments. Error bars: ±1 SE.

2.2.2 Comparison with the best individual

To create nominal dyads, all 20 individuals of the individual condition in each environment were exhaustively paired, leading to 190 nominal dyads per environment. To determine the performance of each nominal dyad, we took the performance of the “best” (i.e., most accurate) member of a nominal dyad. “Best” was operationalized in two ways: The best individual was the one who made more accurate choices either (a) overall (“best member overall”) or (b) in the first 26 trials, which equals half a block (“best member in 26 trials”). Measure (a) has been criticized for being accessible to the researcher only a posteriori (Miner, 1984); Measure (b) is supposed to reflect the idea that groups first determine their best member and afterward adopt this person’s choices (Henry, 1995).
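The construction of nominal dyads (exhaustive pairing, then scoring the best member under either operationalization) can be sketched as follows; with 20 individuals this yields the 190 pairs reported here. Function and variable names are our own:

```python
from itertools import combinations

def nominal_dyads(accuracies, first26):
    """Build all exhaustive pairings of individuals.

    accuracies: overall accuracy per individual.
    first26: accuracy of the same individuals on the first 26 trials.
    Returns two lists of nominal-dyad scores, one per operationalization:
    (a) best member overall, (b) best member identified after 26 trials."""
    best_overall, best_early = [], []
    for i, j in combinations(range(len(accuracies)), 2):
        # (a) a-posteriori best member: take the higher overall accuracy.
        best_overall.append(max(accuracies[i], accuracies[j]))
        # (b) pick the member leading after 26 trials, then score that
        # member's overall accuracy for the whole experiment.
        lead = i if first26[i] >= first26[j] else j
        best_early.append(accuracies[lead])
    return best_overall, best_early
```

Comparing the mean of either list with the real dyads' mean accuracy gives the benchmark comparison reported below.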

We found that in both environments real dyads (M take-the-best = .82, SD = .05; M WADD = .85, SD = .05) reached the benchmark provided by the nominal dyads, be it by the best member overall (M take-the-best = .83, SD = .04; M WADD = .87, SD = .03) or by the best member in 26 trials (M take-the-best = .82, SD = .05; M WADD = .86, SD = .04), but did not exceed it.Footnote 4

2.2.3 Strategy use

To understand the reasons for the different learning curves, we next explored the rates of accordance with the two best performing strategies, take-the-best and WADD, in their respective environments. Accordance rates measure how often the strategy predictions match the actual choices and can be interpreted as a measure of consistency of using a certain strategy. Accordance is highly correlated with performance but differs conceptually: To illustrate, a consistent (100%) use of the most appropriate strategy in each environment would have resulted in a performance level of only 88%. Performance, on the other hand, is a more neutral measure, being directly observable and allowing for comparisons with other learning tasks.

Again, we conducted a repeated-measures ANOVA to study strategy use over time. The three blocks were entered as the within-subject factor, the two environments and individuals vs. dyads as between-subjects factors, and the rate of accordance with the adaptive strategy as dependent variable (Figure B.1 in Appendix B). Mirroring performance, accordance generally increased over time, F block (1.74, 132.40) = 41.530, p < .001, η2p = .35 (Greenhouse-Geisser corrected). This increase was more pronounced in the take-the-best-friendly environment, F Block × Environment (2, 152) = 22.695, p < .001, η2p = .23. Again, we observed a Block × Ind. vs. Dyad interaction, F Block × Ind. vs. Dyads (2, 152) = 3.284, p = .04, η2p = .04. A planned contrast comparing block 1 with blocks 2 and 3 combined revealed that dyads adapted more quickly than individuals in the take-the-best-friendly environment, F (1, 76) = 4.899, p = .03, η2p = .06. A contrast comparing block 2 with block 3 revealed in addition a three-way interaction: dyads were more in accordance with WADD in the last block of the WADD-friendly environment, F (1, 76) = 6.799, p = .01, η2p = .08. No overall differences between individuals and dyads were revealed, F ind. vs. dyads (1, 76) = 2.195, p = .14, η2p = .03.

2.2.4 Information search and stopping rule

As accordance rates have been criticized for being too imprecise to reveal cognitive processes from behavioral data (Bröder & Schiffer, 2003), we provide in the following some additional measures to validate the conclusion that subjects improved over time because they learned to use the most appropriate strategy. In particular, we looked at information search behavior and investigated how it accorded with the information search and stopping rules predicted by take-the-best and WADD. Before we could do that, however, we had to determine the decision strategy each individual and dyad most likely used. For this, we used Bröder and Schiffer’s (2003) maximum-likelihood method of strategy classification. With this method, the best-fitting model from take-the-best, WADD, Tally, and guessingFootnote 5 can be determined, where fit is measured by the likelihood of the data given the model (see Bröder & Schiffer, 2003, for details).
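The classification idea can be illustrated with a simplified maximum-likelihood comparison: each candidate strategy is assumed to be executed with a constant application error rate, estimated from the data, and the model with the highest likelihood wins. The published method additionally distinguishes item types and handles a guessing baseline; this stripped-down version is our own illustration, not the authors' procedure:

```python
import math

def classify_strategy(choices, predictions):
    """Pick the strategy whose predictions best fit the observed choices.

    choices: observed choices per trial (e.g., 'X' or 'Y').
    predictions: dict mapping strategy name -> list of predicted choices.
    Each strategy is assumed to be applied with a constant error rate,
    estimated as its observed error proportion (clamped away from 0/1).
    Returns (best-fitting strategy name, its log-likelihood)."""
    def log_lik(pred):
        n = len(choices)
        errors = sum(c != p for c, p in zip(choices, pred))
        eps = max(min(errors / n, 1 - 1e-9), 1e-9)  # estimated error rate
        return errors * math.log(eps) + (n - errors) * math.log(1 - eps)
    return max(((s, log_lik(p)) for s, p in predictions.items()),
               key=lambda t: t[1])
```

A subject is then classified as a user of, say, take-the-best when that model's likelihood exceeds those of the competing models.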

In the take-the-best-friendly environment, 13 individuals and 18 dyads were classified as adaptively using take-the-best, while in the WADD-friendly environment 16 individuals and 18 dyads were classified as adaptively using WADD (Footnote 6). On the surface, they did not differ in their information search, as these subjects searched on average for 81.2% (SD = 15.6) of the available information in both environments (ANOVA: all Fs < 2.9). The number of cues acquired exceeded what take-the-best requires (on average, 4.46 boxes [SD = 2.01] were opened in addition to the first discriminating cue in the take-the-best-friendly environment), indicating that cost-free cues triggered extensive cue acquisition. This is congruent with previous findings showing that people may learn different strategies and apply different choice rules even though they do not differ in their stopping rule when there are no search costs (but they do differ as soon as search costs are introduced; see Dieckmann & Rieskamp, 2007; Rieskamp & Dieckmann, 2012). In fact, searching for cues does not necessarily imply that the cues are integrated; search is often continued to enhance confidence in decisions already made (Harvey & Bolger, 2001; Newell et al., 2003; Svenson, 1992).

In a next step, we analyzed information search over time and now introduce two more fine-grained measures of strategy use: (1) To validate WADD as a choice rule, we checked how often subjects who were classified as adaptive WADD users opened fewer cues than necessary, in short "too few" (recall that necessary means that no further evidence could overrule the decision based on the acquired cues). (2) To validate take-the-best as a choice rule, we analyzed those cases in which subjects who were classified as adaptively using take-the-best opened less valid cues that contradicted the first discriminating (more valid) cue, and checked whether this less valid evidence overruled their decision, which, according to take-the-best, it should not. In other words, we counted how often the decision of take-the-best users was overruled by compensatory evidence ("compensatory choices").
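In code, the two measures amount to simple per-trial checks (a sketch under assumed representations, not the authors' implementation: opened cues as (weight, direction) pairs with direction +1/−1/0 for which option they favor, unopened cues by their weights, and the first discriminating cue and the choice coded as +1/−1):

```python
def stopped_too_early(opened, unopened_weights):
    """Measure 1 ("too few"): True if the unopened cues could still have
    overruled the decision implied by the opened cues, i.e., a WADD user
    stopped searching before the decision was safe.
    opened: list of (weight, direction) pairs, direction in {-1, 0, +1}."""
    margin = abs(sum(w * d for w, d in opened))
    return sum(unopened_weights) > margin

def compensatory_choice(first_discriminating_direction, choice_direction):
    """Measure 2 ("compensatory choices"): True if the choice went against
    the option favored by the first discriminating (most valid) cue,
    which take-the-best forbids."""
    return choice_direction != first_discriminating_direction
```

The reported proportions are then simply the mean of these booleans over the relevant trials of each classified subject.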

Figure 3 depicts the results for these two measures. In the left panel, the results concerning the WADD users can be seen. It shows that in the first block, WADD users opened fewer boxes than necessary in about 30% of cases, which decreased over blocks to 16%, F block (1.454, 46.531) = 16.907, p < .001, η2p = .35 (Greenhouse-Geisser corrected), with no differences between individuals and dyads, F ind. vs. dyads (1, 32) = 3.104, p = .09, η2p = .09. In other words, all WADD users became more consistent with their search rule but still showed some deviations.

Figure 3: Two measures of strategy use concerning the stopping rule, in the WADD-friendly environment (left) and in the take-the-best-friendly environment (right). The left panel depicts the relative frequency of cases in which too few cues were looked up, that is, in which cues that should have been opened (so that the decision could not be overruled by additional evidence) were not. This measure was calculated for the 16 individuals and 18 dyads who were classified as adaptive WADD users. The right panel depicts the proportion of those trials in which people decided against the first discriminating cue based on less valid cues that were additionally opened, although, according to take-the-best, these less valid cues should not have overruled the first discriminating cue. This measure was calculated for the 13 individuals and 18 dyads who were classified as adaptive take-the-best users. Error bars: ±1 SE.

In the right panel of Figure 3, the results concerning the individuals classified as adaptive take-the-best users can be seen. It shows the percentage of cases in which subjects saw contradictory evidence (Footnote 7) that overruled the decision suggested by take-the-best. In the first block, individuals and dyads decided against take-the-best in around 35% of the cases in which they saw contradictory evidence. Over time, this proportion decreased, indicating a growing consistency in using take-the-best, F block (2, 52) = 29.909, p < .001, η2p = .54, and it did so more strongly for dyads (where it decreased to about 15%) than for individuals (where it decreased to about 25%), F Block × Ind. vs. Dyads (2, 52) = 3.654, p = .03, η2p = .12. Again, dyads were faster, as revealed by a planned contrast comparing block 1 with blocks 2 and 3 combined, F (1, 26) = 5.744, p = .02, η2p = .18.

2.3 Summary

In Experiment 1 we sought to test how well individuals and dyads performed in an unknown task environment and whether they learned to select the appropriate strategy. It provided some evidence that not only individuals but also dyads are able to adapt to different but stable environment structures.

Dyads even showed a faster adaptation process, but they did not surpass the best individual, on average. The high performance rates were supported by the finding that the majority of subjects were classified as using the adaptive strategy. When looking at only the two prototypical strategies (WADD and take-the-best), accordance rates mirrored performance results and indicated a more consistent (though not a perfect) use of take-the-best by dyads. Convergent evidence came from process measures: information search became more consistent over time, and again to a greater extent for dyads in the take-the-best-friendly environment. Still, deviations from strategy predictions concerning information search amounted to 15% and 25% for those being classified as using the appropriate strategy in the two environments, respectively.

To summarize the extended classification results for six strategies (see Appendix C), we found again that, in the take-the-best-friendly environment, more dyads (n = 17) than individuals (n = 14) were classified as using one of the three best performing strategies, though not the very best one (n = 7 dyads, n = 6 individuals were classified as take-the-best users). In the WADD-friendly environment, all individuals and dyads were classified as using one of the three best performing strategies, though more dyads than individuals were classified as using the very best (n = 17 dyads, n = 10 individuals classified as WADD users).

3 Experiment 2

In Experiment 2 we sought to replicate the findings of Experiment 1 and extend them to a task in which the environment structure changed over time so that a new strategy had to be learned. Experiment 2 thus comprised two phases: the learning phase, which was identical to Experiment 1 and varied the environment structure between subjects, and the relearning phase, in which subjects were confronted with the alternative environment. Consequently, each subject encountered both environments from Experiment 1 (the take-the-best-friendly and the WADD-friendly), one after the other. Experiment 2 thus provides a stricter test of adaptive strategy selection by varying the environment structure within subjects, as Payne et al. (1988) suggested.

Because Experiment 2 contained a change in the environment that rendered another strategy adaptive, it differed in some important aspects from Experiment 1. While the learning phase of Experiment 2 was equivalent to Experiment 1 (with the difference that people were informed at the beginning that there would be two phases), the relearning phase of Experiment 2, though structurally corresponding to the learning phase, required additional subtasks. These subtasks were (a) to detect the need for change, (b) to find and apply a new and better strategy than the one selected in the learning phase, and (c) to overcome a—now maladaptive—routine established in the learning phase.

When people are faced with familiar problems, routinized decision behavior has many advantages, such as allowing them to deal with a situation efficiently, to react immediately, and to perform well. On the group level, having developed a routine reduces the need for consideration, coordination, and negotiation (Gersick & Hackman, 1990). When a situation changes unnoticed, however, and some novel decision behavior is required, routines become maladaptive. In fact, individuals as well as groups have difficulty overcoming maladaptive routines, especially with increasing routine strength or when they are under time pressure (e.g., Betsch, Fiedler, & Brinkmann, 1998; Betsch, Haberstroh, Glöckner, Haar, & Fiedler, 2001; Bröder & Schiffer, 2006; Reimer, Bornstein, & Opwis, 2005; for a review of theories, see Betsch, Haberstroh, & Höhle, 2002). These additional requirements make the relearning phase more difficult than the learning phase of Experiment 2 and than Experiment 1 as a whole. We thus expected an overall lower performance in the relearning phase. This enhanced difficulty has one advantage, though, as it leaves more room for learning to take place. In fact, one could argue that in Experiment 1 the lack of learning in the WADD-friendly environment was due to a ceiling effect, as subjects, both individuals and dyads, had started out with already very high accordance with WADD. If performance is already high and people do not know the upper benchmark of performance, they might not see any need to change their strategy, which might be one reason for the lack of further improvement in the WADD-friendly environment in Experiment 1.

3.1 Methods

3.1.1 Subjects

Subjects included 120 people (60 females; M age = 24.2 years, SD = 3.7), of whom 83% indicated being a student. Subjects received €24.40 on average (SD = 1.55). To complete the oil-drilling task, individuals took on average 53 min (SD = 15) and dyads 72 min (SD = 24).

3.1.2 Design and procedure

Again, the experiment had a 2 × 2 × 3 (Subject [individual, dyad] × Starting Environment [take-the-best friendly, WADD friendly] × Block) factorial design, with phase as an additional factor (Phase 1, Phase 2). The first two factors were between subjects, the third and fourth within subject. Upon arrival, subjects were randomly assigned to one of the four between-subjects conditions, forcing equal cell sizes of 20 units. As in Experiment 1, subjects worked with a touch screen either individually or in same-sex dyads, and, again, dyads were treated as single subjects for purposes of analysis. After answering demographic questions, subjects completed a practice trial and then worked on the experimental task, which was exactly the same in each phase as in Experiment 1. The difference was that this time all subjects worked on the two environments consecutively: one half worked first on the take-the-best-friendly environment and then on the WADD-friendly environment, with a break in between; the other half worked in the reverse order. Subjects were told at the very beginning that they would work in two phases, finding profitable oil-drilling sites first in the United States and then in Argentina (or vice versa, counterbalanced per environment). We provided this country hint in all conditions to suggest to subjects that something might have changed and to thereby secure a minimum level of adaptivity; it has previously been shown that without a hint almost no adaptivity is observed in a changing environment, resulting in a floor effect (Bröder & Schiffer, 2006).

3.2 Results

3.2.1 Performance

To study performance differences between the two environments and between individuals and dyads over the two phases, we conducted a repeated-measures ANOVA with the three blocks and the two phases as within-subject factors, the order of environments and individuals vs. dyads as independent variables, and the percentage of correct trials as the dependent variable. As can be seen in Figure 4, performance generally increased over time in both phases, F block (1.82, 138.57) = 90.458, p < .001, η2p = .54 (Greenhouse-Geisser corrected). Dyads were on average better than individuals, F ind. vs. dyads (1, 76) = 3.939, p = .05, η2p = .05. This difference was moderated by phase and order of environments, F Phase × Ind. vs. Dyads × Order (1, 76) = 3.601, p = .06, η2p = .05: Dyads who started with the take-the-best-friendly environment achieved a higher performance than individuals in this environment (M dyads = .81, SD = .07 vs. M ind. = .76, SD = .10), but did not differ in the second (WADD-friendly) phase (M dyads = .78, SD = .08 vs. M ind. = .78, SD = .06). Individuals and dyads who started with the WADD-friendly environment achieved a similarly high performance in this environment (M dyads = .85, SD = .06 vs. M ind. = .85, SD = .06), but individuals' performance then dropped to a larger degree in the second (take-the-best-friendly) phase than that of dyads (M dyads = .73, SD = .04 vs. M ind. = .69, SD = .05). Moreover, different learning curves were observable: individuals improved mainly from the first to the second block, though this time not to a lesser degree than dyads (a planned contrast comparing block 1 with blocks 2 and 3 combined revealed no difference, F (1, 76) = 0.282, p = .60, η2p = .004). Dyads, however, kept on improving to reach a higher final level, F Block × Ind. vs. Dyads (2, 152) = 3.617, p = .03, η2p = .05, which was supported by a contrast comparing the second with the third block, F (1, 76) = 9.166, p = .003, η2p = .11.

Figure 4: Individuals' and dyads' average performance in the two experimental orders: The left panel depicts the rates of performance in the experimental order of first the WADD-friendly and then the take-the-best-friendly environment; the right panel depicts the results for the reverse order. Error bars: ±1 SE.

As expected, average performance of all subjects dropped from the first to the second phase, F phase (1, 76) = 63.416, p < .001, η2p = .46. In other words, subjects suffered from the change in the environment. However, the direction of change played an important role. Learning to apply WADD in the second (relearning) phase when it had not been adaptive before was more likely than adopting take-the-best as a novel strategy. In both phases, performance was higher in the WADD-friendly environment than in the take-the-best-friendly environment. Thus, the drop from the first to the second phase was much less pronounced when the WADD-friendly environment constituted the second environment than when the take-the-best-friendly environment came second, F Phase × Environment (1, 76) = 52.855, p < .001, η2p = .41, indicating a preference for WADD. As a result, when the take-the-best-friendly environment constituted the starting environment, subjects’ performance did not differ between the phases. This was not the case in the reverse experimental order.

3.2.2 Comparison with the best individual

Again we compared the performance of real dyads with that of nominal dyads. Nominal dyads were composed by exhaustively pairing the 20 individuals of the individual condition of each environment, and performance was determined by giving each nominal dyad the score obtained by the better of the two individuals (“best member overall” and “best member in 26 trials”). In the take-the-best-friendly environments, real dyads (M phase 1 = .81, SD = .07; M phase 2 = .73, SD = .04) reached the baseline provided by the nominal dyads in both phases, be it by the best member overall (M phase 1 = .82, SD = .05; M phase 2 = .73, SD = .03) or the best member in the first 26 trials (M phase 1 = .81, SD = .05; M phase 2 = .71, SD = .04). Also in the WADD-friendly environments, real dyads (M phase 1 = .85, SD = .06; M phase 2 = .78, SD = .08) were close to the performance of the best member overall (M phase 1 = .88, SD = .03; M phase 2 = .81, SD = .04) and of the best member in 26 trials (M phase 1 = .87, SD = .03; M phase 2 = .79, SD = .05).
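The best-member baseline can be computed by exhaustive pairing, as described above (a minimal sketch; the scores in the example are hypothetical proportions correct):

```python
from itertools import combinations
from statistics import mean

def nominal_dyad_baseline(individual_scores):
    """Pair every individual with every other individual and credit each
    nominal dyad with the score of its better member; return the average
    of these best-member scores as the baseline."""
    return mean(max(a, b) for a, b in combinations(individual_scores, 2))
```

Real dyads matching this baseline suggests that they performed at the level of their best member, but not above it.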

3.2.3 Strategy use

Strategy use over time (i.e., accordance rate of the adaptive strategy in each environment) was entered into a repeated-measures ANOVA with the three blocks and two phases as within-subject factors, and the environment of the first phase and individuals vs. dyads as independent variables (see Figure B.2 in Appendix B).

Within each phase, accordance generally increased over time, F block (1.693, 128.705) = 119.992, p < .001, η2p = .61 (Greenhouse-Geisser corrected). Like performance, average accordance with the adaptive strategy dropped from the first phase to the second, F phase (1, 76) = 100.145, p < .001, η2p = .57; this drop was particularly deep when subjects were confronted with the take-the-best-friendly environment in the second phase, F Phase × Environment (1, 76) = 28.770, p < .001, η2p = .28; and the increase in accordance was steepest in this environment and phase, too, F Block × Phase × Environment (2, 152) = 12.594, p < .001, η2p = .14. Overall, accordance with the adaptive strategy was lower in the take-the-best-friendly environment than in the WADD-friendly environment, F environment (1, 76) = 7.132, p = .01, η2p = .09.

In both phases, dyads achieved higher accordance rates with take-the-best in the take-the-best-friendly environment than individuals did, but slightly lower accordance rates with WADD in the WADD-friendly environment, F Phase × Ind. vs. Dyads × Environment (1, 76) = 8.201, p = .01, η2p = .10, so that dyads only slightly surpassed individuals in overall accordance with the most adaptive strategy (M individuals = .77, SD = .06 vs. M dyads = .80, SD = .06), F ind. vs. dyads (1, 76) = 3.454, p = .07, η2p = .04.

3.2.4 Information search and stopping rule

Again we used the maximum-likelihood method of Bröder and Schiffer (2003) to classify subjects as using one of the following strategies: take-the-best, WADD, Tally, or guessing (for results concerning the classification with six strategies, see Tables C.2 and C.4 in Appendix C). In the first phase, 15 individuals and 17 dyads were classified as adaptively using take-the-best in the take-the-best-friendly environment. In the WADD-friendly environment, 18 individuals and 18 dyads were classified as using WADD. In the second phase, no individual and only seven dyads were classified as adaptively using take-the-best in the take-the-best-friendly environment. In the WADD-friendly environment, more subjects, namely 13 individuals and 13 dyads, were classified as adaptively using WADD, probably indicating that WADD was either easier to learn or a default strategy when encountering a changing environment, as others have argued before (e.g., Bröder & Schiffer, 2006).

We then restricted the sample to the adaptively classified subjects and entered individuals vs. dyads and the environment as independent variables and the percentage of acquired cues as the dependent variable into an ANOVA for the first phase. It revealed that in the first phase all subjects searched for more information in the WADD-friendly environment (M = 84.3%, SD = 14.0) than in the take-the-best-friendly environment, where search was still quite extensive (M = 69.1%, SD = 20.4), F environment (1, 66) = 12.899, p = .001, η2p = .16. Due to the lack of individuals classified as take-the-best users in the second phase, only a comparison within dyads was possible. Here, the mean number of acquired cues was not an indicator of strategy use, as no differences between environments emerged (overall M = 77.8%, SD = 14.3). This amount of information acquisition again exceeded what take-the-best requires (on average, 3.75 boxes [SD = 2.12] were opened after the first discriminating cue in the first phase and 6.59 boxes [SD = 1.72] in the second phase in the take-the-best-friendly environment).

We next analyzed how often the adaptive WADD users opened fewer cues than necessary. The left panel of Figure 5 depicts the results for the first phase. A repeated-measures ANOVA revealed that individuals and dyads became more consistent with the WADD stopping rule over time, opening fewer boxes than necessary in around 27% of trials in the first block, which decreased over blocks to 18% of trials, F block (2, 68) = 11.354, p < .001, η2p = .25. In the second phase (right panel of Figure 5), subjects started by opening too few cues in 42% of trials on average, which decreased to around 29% in the last block, again indicating increasing consistency with WADD, though the absolute numbers were higher than in the first phase, F block (1.220, 29.277) = 5.808, p = .01, η2p = .20 (Greenhouse-Geisser corrected).

Figure 5: Mean percentage of trials in which “too few” cues were opened by subjects who were classified as WADD users in the WADD-friendly environment, in the first phase (left; n = 18 individuals and n = 18 dyads) and in the second phase (right; n = 13 individuals and n = 13 dyads). Error bars: ±1 SE.

The two panels of Figure 6 depict the proportion of trials in which subjects classified as adaptive take-the-best users saw contradictory evidence after the first discriminating cue and, influenced by this evidence, chose the option not favored by the first discriminating cue. As in Experiment 1, a steady decrease in these compensatory choices was observable in the first phase, though without differences between individuals and dyads, reaching a final level of about 20%, F block (2, 60) = 26.985, p < .001, η2p = .47. In phase 2, no comparison between individuals and dyads was possible, as seven dyads but no individuals were classified as adaptive take-the-best users. Dyads showed a decreasing trend similar to that in phase 1, though at a higher absolute level, with a final level of around 29%, F block (2, 12) = 39.148, p < .001, η2p = .87.

Figure 6: Average proportion of those trials in which people decided against the first discriminating cue based on less valid cues that were additionally opened (i.e., contradictory evidence), in the first (left) and in the second (right) phase in the take-the-best-friendly environment. This measure was calculated for those subjects who were classified as adaptive take-the-best users (phase 1: n = 15 individuals and n = 17 dyads; phase 2: n = 7 dyads). Note that no individuals were classified as take-the-best users in the second phase, so no results can be displayed for individuals in the right panel. Error bars: ±1 SE.

3.3 Summary

In sum, Experiment 2 largely replicated the findings of Experiment 1 and tested them in a relearning phase. In the learning phase, dyads were superior to individuals in learning to adaptively follow take-the-best but did not differ in following WADD. The relearning phase apparently constituted a much harder test bed, with performance much lower than in the learning phase. Again, dyads were superior to individuals in learning to adaptively follow take-the-best but did not differ in following WADD. Dyads performed at the level of their best members. Strategies were used more consistently in the first phase than in the second, and dyads applied take-the-best more consistently than individuals, as indicated by accordance rates and shown more clearly by the classification, which revealed that not a single individual was classified as using take-the-best in the second phase. However, and similar to Experiment 1, consistency was not perfect, as deviations from the predicted information search were observed in 18% to 29% of trials.

To summarize the extended classification results for six strategies (see Appendix C): in the WADD-friendly environment, again the vast majority of subjects were classified as using one of the three best performing strategies. This holds true for both phases (phase 1: n = 20 dyads, n = 19 individuals; phase 2: n = 18 dyads, n = 18 individuals). In the take-the-best-friendly environment in phase 1, more dyads (n = 12) than individuals (n = 8) were classified as using one of the three best performing strategies. However, the proportion of individuals and dyads classified as using take-the-best was equal (and low, with n = 4 out of 20 each). In the take-the-best-friendly environment in phase 2, again more dyads (n = 6) than individuals (n = 1) were classified as using one of the three best performing strategies, and 2 dyads but no individuals were classified as using take-the-best. In other words, these analyses suggest that most people were not able to find the very best strategy when WADD was not adaptive, but that dyads learned to apply one of the three most successful strategies relatively more often than individuals.

4 Discussion

Applying the appropriate decision strategy in a given environment can have direct implications for one's payoff. Two experiments were conducted to investigate whether and how well two-person groups (dyads), as opposed to individuals, adaptively select decision strategies that exploit the structure of two unfamiliar task environments. Specifically, the two task environments were designed so that the most successful decision strategies differed in their information-search, stopping, and choice rules: The take-the-best-friendly environment required subjects to limit their collection of evidence, to ignore less valid information that contradicted more valid information, and to base their decisions on the most valid discriminating cue. The WADD-friendly environment, in contrast, required subjects to collect all the available pieces of information about both alternatives, at least until no further evidence could overrule the decision based on the acquired information, and to base their decisions on the weighted sum of the collected information. Thus, the use of the most appropriate strategy secured a high performance in the respective environment.
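The two prototypical strategies can be stated compactly (an illustrative sketch with binary cue values ordered by descending validity, not the experimental implementation; the example validities are hypothetical):

```python
def take_the_best(a_vals, b_vals):
    """Search cues in order of descending validity and decide on the first
    cue that discriminates; all less valid cues are ignored."""
    for va, vb in zip(a_vals, b_vals):
        if va != vb:
            return "A" if va > vb else "B"
    return None  # no cue discriminates: guess

def wadd(validities, a_vals, b_vals):
    """Weighted additive rule: integrate all cues, weighted by validity."""
    score = sum(v * (a - b) for v, a, b in zip(validities, a_vals, b_vals))
    return "A" if score > 0 else ("B" if score < 0 else None)
```

On a pair where option A wins only on the most valid cue (e.g., validities .9, .7, .68, .6 with A = 1, 0, 0, 0 and B = 0, 1, 1, 1), the two rules disagree: take-the-best picks A, WADD picks B. Disagreements of this kind are what make the two environments diagnostic of strategy use.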

4.1 Performance differences between individuals and groups, and between environments

We hypothesized that groups would be able to adapt their strategy selection as well as the average individual did and explored whether they would even surpass the level of the best individual. We further expected to find a faster learning rate in groups, taking research on other learning tasks as a benchmark (e.g., Hinsz et al., 1997).

In fact, we found that groups were on average as good as the average individual in Experiment 1 and somewhat better in Experiment 2. We can thus conclude that no process losses, such as from distraction or social inhibition (e.g., Steiner, 1972), hindered group performance in this strategy selection task. How well did groups perform in comparison to the best individual? Recall that a purely statistical reason for high group performance might be that groups have a higher probability of containing at least one individual who is above the mean ability level of people working alone (Lorge & Solomon, 1955). To look into this, we compared the performance levels of the interacting groups with that of the best member of nominal groups. We found that real groups performed by and large as well as the best individuals in both environments and both experiments. In other words, one possible mechanism behind the high group performance we observed could be that groups identified their best member and adopted this person's choices (and hence could not become better than the best). This finding might be used to argue against investing in (time-consuming) group interaction. Some caution is warranted, though, because another conclusion could be that it is of sufficient interest for groups to reach the potential given by the performance of their best member, since groups rarely perform better than individuals, according to a vast amount of literature (e.g., Kerr & Tindale, 2004; Laughlin et al., 2002; Tindale & Sheffey, 2002), possibly because groups usually have difficulty identifying their best member without help (e.g., Henry, 1995; Henry, Strickland, Yorges, & Ladd, 1996).
Even more relevant may be that group decision making has other advantages, such as legitimacy and acceptance, which may play an important role in many organizational contexts (see Allen & Hecht, 2004, for more benefits).

Aside from overall performance differences between individuals and groups, there were apparent differences in learning speed, with the type of environment being an important moderator: The learning curve in the take-the-best-friendly environment was steeper for groups than for individuals, with individuals either reaching the same level of performance in the final block (Experiment 1) or groups remaining at a higher level in all blocks (Experiment 2). In the WADD-friendly environment, in contrast, individuals and dyads performed at a similarly high level throughout. Overall, performance was higher in the WADD-friendly environment than in the take-the-best-friendly environment, particularly in the first block, although the difference diminished over time.

In the relearning phase of Experiment 2, routine effects led to an overall decrease in performance, but mostly when the take-the-best-friendly environment was encountered second. Such negative transfer effects have been widely documented before (e.g., Betsch & Haberstroh, 2005). But although individuals and groups started at similarly low performance levels in Phase 2, the groups' superiority again became apparent: Groups' performance was more likely to recover, whereas only the best individuals managed to do the same, as the comparison with nominal groups suggests. In fact, not a single individual was classified as adaptively using take-the-best in the second phase, but seven groups were. Our finding that most people were able to adapt to the environment when it was new (phase 1) but had difficulties in discovering the most appropriate strategies in the relearning phase replicates previous results in similar tasks with individuals only (see, e.g., Bröder, 2012, for an overview). Bröder (2012) speculated that different cognitive processes might come into play in these two distinct tasks: deliberate and effortful learning in a new situation versus slow reinforcement learning (e.g., Rieskamp & Otto, 2006) in a known situation. Our finding also suggests that giving people many opportunities to encounter a novel task that requires abandoning a routine is especially beneficial for groups, even though they might appear more prone to routines than individuals in the first place (Reimer et al., 2005).

In sum, this study highlights the strong moderating role of the environment when comparing individual with group performance. Two findings stand out that will be considered in more depth in the following: (1) the higher performance from the first block on in the WADD-friendly environment as compared to the take-the-best-friendly environment, and (2) the apparent differences between individuals and groups in the learning curves within the take-the-best-friendly environment.

4.1.1 What explains the higher performance in the WADD-friendly environment?

In fact, the observed asymmetry in favor of WADD is a common finding in research with individual decision makers (Bröder, 2003; Bröder & Schiffer, 2003, 2006; Rieskamp & Otto, 2006), and here it is extended to the group level. It can be interpreted in several ways: First, and with special consideration of the asymmetry from the first block on, it may simply reflect an exploration phase, in which people try to get a sense of which pieces of information are useful before settling on a decision strategy (McAllister, Mitchell, & Beach, 1979). Similarly, it may be attributable to adaptive behavior. Hogarth and Karelaia (2006) argued from a prescriptive perspective that in unknown environments linear models perform better than one-reason decision strategies. In fact, the explorative strategies of novices often look like WADD, while those of experts look more like take-the-best (Garcia-Retamero & Dhami, 2009). On the other hand, the observed asymmetry may reflect a deliberate decision to integrate all pieces of information because of the belief that "more is better" (Chu & Spires, 2003). From a descriptive perspective, it may thus reflect an overgeneralization of the applicability of normally reasonable strategies (Payne et al., 1993, p. 206) and may have been enhanced by leading the subjects to focus on accuracy, which has been found to foster WADD (Creyer, Bettman, & Payne, 1990).

Last, and somewhat critically, one could argue that the experimental setting as such may produce a general demand effect, whereby subjects feel obliged to integrate all pieces of information offered (for a similar argument, see Bröder, 2012). In particular, our MouseLab-like experimental setup probably set conditions that favored, by default, the use of a strategy that integrates all available information, such as WADD (Glöckner & Betsch, 2008b): searching for information did not incur any costs, all pieces of information were clearly presented on the screen upon request, and there was no time pressure (see, e.g., Bröder, 2003). If a setting that triggers WADD as a default strategy also has a WADD-friendly environment structure, applying WADD (or similarly information-intense strategies) is successful from the very beginning, and accordance with it will stay high. In contrast, if the setting has an underlying take-the-best-friendly environment, the default strategy has to be (deliberately) abandoned and a new strategy learned (here: one that ignores information!), leading to performance declines in the beginning. This might be the reason for the observed performance differences between environments. More research is needed on the role of specific features of the setting (such as the time or costs of acquiring information) in performance changes in a strategy selection task by individuals and groups.

4.1.2 Did subjects apply the most appropriate strategies?

Before we elaborate on the within-environment differences between individuals and groups, we briefly review which strategies subjects most likely used in the two environments. First, the high performance level we observed can be seen as an indirect indicator that our subjects actually used the appropriate strategies. Support for this interpretation comes from more direct indicators: the number of subjects classified as using the respectively most appropriate strategy and their rates of accordance with its information search, stopping, and choice rules. The information search measures revealed increasing consistency in the use of the appropriate information search and stopping rules over time and, again, higher consistency among groups than among individuals in the take-the-best-friendly environment. This higher decision consistency in groups is in line with work by Chalos and Pickard (1985). The classification, too, supported the superiority of groups over individuals. The conclusion that most individuals and groups indeed learned to apply the single best strategy requires caution, however. One limiting factor is the observed extent of deviations from the predicted information search rules, which ranged from 15% to 29%, even in the final block. Despite the plausibility of the measures we used and the insights they provide into strategy use, only a restricted evaluation is possible, as no established thresholds exist and no comparison of the observed absolute deviations with previous studies is possible. Future studies should further validate these measures.

Another limiting factor is the result of the extended classification analyses, in which we considered six instead of only four strategies (see Appendix C).Footnote 8 Although take-the-best and WADD have been identified as two prototypical decision strategies (Bröder, 2003; Rieskamp & Otto, 2006), many more decision strategies are assumed to be part of the toolbox (for an overview, see Table A.1-1 in Todd & Gigerenzer, 2012, pp. 8–9), and other strategies besides those two performed well in the two environments (though not as well as take-the-best and WADD, respectively; see Appendix C). Our extended strategy classification analyses give credence to this notion. Here, we found that the majority of subjects learned to adopt one of the three most successful strategies (though not necessarily the single best) in a given environment. While in the WADD-friendly environment the range of classified strategies was rather small and most subjects were classified as WADD users, subjects in the take-the-best-friendly environment were distributed over a wider range of strategies, so that only up to a third were classified as using take-the-best. The distribution of subjects over different strategies can be interpreted as a sign of individual preferences or of the learning states subjects were in (assuming that people learn the more successful strategies over time; see Rieskamp & Otto, 2006). But it can also be seen as a sign that take-the-best in fact played only a minor role in people’s strategy choice, which is further supported by the observed deviations from the information search and stopping rules. Given the explanation that WADD seems to serve as a default strategy, it is plausible that subjects also selected strategies other than take-the-best, as it is not the only alternative to WADD. This holds particularly true because take-the-best was not explicitly favored by apparent environmental characteristics (such as noncompensatory cue weights or costs for information search).

Still another explanation of our data is that subjects kept a single weighted additive strategy in both environments but adapted its cue weights and information search given feedback over time. Future research should therefore test environments that better differentiate between a wider set of decision strategies and between these alternative explanations.

4.1.3 Why were groups better than individuals in the take-the-best-friendly environment?

In fact, the superiority of small groups over individuals has been documented before in other learning tasks (e.g., Hill, 1982). This study demonstrates it in a strategy selection task and thus contributes to research on the adaptive capacity of teams (e.g., Burke et al., 2006; Randall et al., 2011). Plausible explanations for the superiority of groups in the take-the-best-friendly environment can be derived from the literature that discusses reasons for the superiority of groups in intellective tasks in general (e.g., Laughlin, VanderStoep, & Hollingshead, 1991) and for the faster learning rate of groups in particular (e.g., Davis, 1969). These are (a) the greater likelihood of recognizing the correct answer due to a larger sample size; (b) a better joint memory due to better error correction (e.g., Hinsz, 1990; Vollrath, Sheppard, Hinsz, & Davis, 1989) and/or better encoding (Weldon, Blair, & Huebsch, 2000; for an overview of findings on collaborative group memory, see Betts & Hinsz, 2010); and (c) the capacity to process more information and use decision rules more consistently (Chalos & Pickard, 1985). Additionally, articulating the decision procedure during discussion may enhance awareness, foster deeper processing, and promote a more explicit metacognitive thinking style, which may, in turn, make it more likely that the appropriate strategy will be detected (Kerr, MacCoun, & Kramer, 1996; but see Olsson et al., 2006).

The aforementioned reasons, however, would also suggest a superiority of groups over individuals in the WADD-friendly environment, which we did not find. One might argue that a ceiling effect prevented us from finding it or, in other words, that a certain low starting level of performance is needed to trigger learning. This explanation, however, is inconsistent with the results of the second phase of Experiment 2, where performance dropped and levels as high as those in the first phase were not reached again.

What might explain why performance differences prevailed mainly in the take-the-best-friendly environment? Assuming that subjects in fact adopted take-the-best, we speculate that the possibility of social validation in a dyadic setting may be one reason why groups were less influenced by irrelevant cues (i.e., cues that were less valid than the best discriminating cue). The approval of one’s partner may replace looking up, or taking into consideration, additional cues in order to feel reassured in one’s decision. Another reason may be that collaborating with another person leads to better calibrated cue orderings, as exchanging information with others can speed up learning the order in which cues should be considered (Garcia-Retamero, Takezawa, & Gigerenzer, 2009). Because this was helpful only in the take-the-best-friendly environment, it may have produced the observed asymmetry. It may also be the case that groups per se tend to overweight apparently important cues (Gigone & Hastie, 1997), which may be unhelpful in certain environments, such as a WADD-friendly one, but advantageous in others, such as a take-the-best-friendly environment. Last, the information search steps and integration rule of take-the-best might be much easier to verbalize than those of WADD, rendering take-the-best easier to communicate and teach to another person once it has been detected as the appropriate rule (for a related argument that simple, sequential strategies are easier to learn than strategies that weight and add all pieces of information, see Wegwarth, Gaissmaier, & Gigerenzer, 2009).
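To make the procedural contrast concrete, the two prototypical strategies can be sketched in a few lines (our own illustration, not material from the study; the ±1 cue coding and the specific validity values are assumptions for the example). Take-the-best inspects cues one at a time in descending order of validity and decides on the first cue that discriminates, whereas WADD weights every cue by its validity and adds.

```python
def take_the_best(cues_x, cues_y, validities):
    """Check cues in order of validity; decide on the first that discriminates."""
    order = sorted(range(len(validities)), key=lambda i: validities[i], reverse=True)
    for i in order:
        if cues_x[i] != cues_y[i]:
            return "X" if cues_x[i] > cues_y[i] else "Y"
    return "guess"  # no cue discriminates

def wadd(cues_x, cues_y, validities):
    """Weight every cue by its validity and add; choose the larger sum."""
    score_x = sum(v * c for v, c in zip(validities, cues_x))
    score_y = sum(v * c for v, c in zip(validities, cues_y))
    if score_x == score_y:
        return "guess"
    return "X" if score_x > score_y else "Y"

# Hypothetical paired comparison: cue values are +1/-1, validities descending.
validities = [0.78, 0.70, 0.65, 0.60, 0.57, 0.53]
x = [+1, -1, -1, +1, -1, +1]
y = [-1, +1, +1, +1, +1, -1]
print(take_the_best(x, y, validities))  # follows the most valid cue: "X"
print(wadd(x, y, validities))           # swayed by the many cues favoring Y: "Y"
```

In this constructed example the two strategies disagree, which is exactly the kind of item on which a take-the-best-friendly and a WADD-friendly environment pull in different directions.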

A different explanation would be that the groups’ superiority in the take-the-best-friendly environment was not a result of learning take-the-best in particular but of a more general superiority in learning to abandon the default strategy once it ceased to be successful and to adopt another, more successful one (though not necessarily the single best one). Recall that groups may have a greater cognitive capacity, as summarized above. Previous research has found that greater cognitive capacity does not affect the use of any particular strategy (as would be expected from the classical effort-accuracy trade-off perspective; e.g., Christensen-Szalanski, 1978; Payne et al., 1993) but rather the use of the appropriate strategy (e.g., Bröder, 2003; see also Mata, Schooler, & Rieskamp, 2007). A greater cognitive capacity seems to be helpful in the meta-process of detecting the payoff structure and selecting the appropriate strategy, which may be a one-reason decision strategy in some environments (Bröder, 2012). Also, considering one-reason decision making may require some form of deliberate discounting of information (counter to the default use of WADD and the common belief that more is better). Thus, even though subjects may not necessarily have learned take-the-best but some other (though less successful) strategy that exploited features of the environment, our findings provide evidence for the adaptive capacity of individuals and for the somewhat greater adaptive capacity of teams.

4.2 Limitations and open questions

This study is certainly just one step toward studying adaptive strategy selection in groups. Some limitations to its generalizability may rest in its focus on inferences from givens and a rather abstract, unfamiliar experimental task. In everyday life, people probably find that new and old situations bear some resemblance and are thus able to exploit their repertoire of strategies better. However, (perceived) familiarity with the task may not always be beneficial, as shown in Experiment 2, where the task surface stayed the same but the underlying structure changed from Phase 1 to Phase 2. Here, subjects had particular problems finding the best strategy. Other factors that may play a role in real-world tasks are, for instance, strategic interests that may influence information sharing and weighting, having to actively search for and remember information, and having to decide what to search for in the first place. The MouseLab-like experimental setup in our study certainly simplified the task in these respects (Glöckner & Betsch, 2008b). Therefore, more naturalistic settings and a broader set of decision domains should be considered in future studies. On the side of the decision maker, further influencing factors worthy of study include intelligence, working memory load (Bröder, 2003), the size of the group, and group composition (Kämmer et al., 2013).

With regard to WADD, we are aware that more variations than taking the validities as weights are conceivable (e.g., unit weights, log odds, or chance-corrected weights; see, e.g., Bergert & Nosofsky, 2007), and we considered them in the extended classification analyses (Appendix C). Analyses of the take-the-best-friendly environment show that these alternative weighting schemes may play a role in people’s strategy choice, but analyses of the WADD-friendly environment showed that they played a minor role compared to WADD, probably because using validities as weights was fostered by our experimental setup (it was the most successful strategy, and validities were presented on screen). Future studies should consider these different weighting schemes more explicitly in the design of experiments.

Future research should also address the question of whether and to what extent the superiority effect can be found “in the wild”, that is, in real groups that encounter environments where ignoring irrelevant information can facilitate and improve decision making. Admittedly, this is an unusual endeavor in light of much group research that aims at finding ways to foster the quantity of information considered by groups (e.g., Frey, Schulz-Hardt, & Stahlberg, 1996; Larson et al., 1994; Parks & Cowlin, 1996; Stasser, Taylor, & Hanna, 1989). This line of research has been stimulated by the repeated finding that groups do not exhaust their potential to pool more information but mainly discuss shared information known to every member (e.g., Stasser, 1992; Wittenbaum & Stasser, 1996). In those studies, however, the option with the highest overall sum score was usually defined as the best solution (i.e., Tally; Reimer & Hoffrage, 2012; for a critique, see Reimer & Hoffrage, 2006). Therefore, groups that ignored part of the available information necessarily performed worse than the benchmark strategy. This limitation to one type of environment structure restricts the possible findings concerning group adaptivity. Our results draw an optimistic picture: groups are able to adapt to different environments. The lesson here is that it is not the mere quantity of information that determines the success of a group (Reimer & Hoffrage, 2006) but rather the adaptive integration of information, which may mean, in certain environments, ignoring irrelevant information.

4.3 Conclusion

Adaptive capacity is essential for individuals and groups engaged in judgment and decision making (Burke et al., 2006; Gigerenzer et al., 1999; Randall et al., 2011). It enables people to adjust their operations to (changing) environments. The selection of an appropriate strategy from the adaptive toolbox, for example, will lead to efficient and effective decision making in an uncertain environment. The current study provides some evidence for the adaptive capacity of individuals and groups, and even for group superiority in a task environment in which the default strategy was not the most successful one. In doing so, it extends research on adaptive strategy selection to the group level, which is necessary not only for theoretical progress but also because of the practical relevance of social interactions for decision making. Despite the common (and partly justified; see Richter, Dawson, & West, 2011) belief of organizations in the superiority of teams (Allen & Hecht, 2004), however, no generalized verdict in favor of groups can be derived from this study. Instead, it demonstrates how important it is to take the environmental structure of the task into account when comparing individual with group strategy learning and performance.

Appendix A: Experimental material for Experiment 1: Screenshots with instructions

(Note that the original instructions were in German, the translation is given below the screenshots.)

  1) Imagine you are a geologist and have a contract with an oil-drilling company to find profitable oil-drilling sites. In the following, you are supposed to choose the more profitable of two oil-drilling sites. In order to make a decision you can commission six different measures (i.e., you can click on them). The six measures can inform you with different levels of certainty (“success”) whether an oil-drilling site is profitable (“+”) or not (“–”).

  2) See, for example, the figure below “seismic analyses”: It allows you in 78% of cases to make a correct prediction about whether you can find oil (“+”) or not (“–”). The measure “chemical analyses” in the lower example, however, allows for only 53% correct predictions.

  3) You are free to choose which measures you “commission” (i.e., which ones you uncover), how many, and in which order, until you choose one of the two oil-drilling sites (X or Y). To see the result of a measure, just click on the corresponding box with the question mark.

  4) In order to choose one of the two oil-drilling sites, just click either on the box with the “X” (left oil-drilling site) or on the box with the “Y” (right oil-drilling site). After your choice, you will receive feedback about the accuracy of your choice. For each correct choice, you will receive 1000 Petros. At the end of the experiment, the experimenter will pay you €0.10 in exchange for 1000 Petros.

    In the following practice trial you can practice how the program works. The result will not be counted.

    Additional oral instructions from the experimenter:

  5) “Please read through the instructions. There will be a practice trial. If you have questions, please ask me. There is no time limit.” [In dyad condition: “Please work jointly on the task and do not leave it to one person to click on the boxes.”]

Table A.1: Item set in the WADD-friendly environment.

Note: C1 = cue 1, C2 = cue 2, etc.; Correct = correct alternative.

Table A.2: Item set in the take-the-best-friendly environment.

Note: C1 = cue 1, C2 = cue 2, etc.; Correct = correct alternative.

Appendix B: Additional results

1) Accordance rates in Experiment 1

Figure B.1. Individuals’ and dyads’ mean rates of accordance with the adaptive strategy in the WADD-friendly (left) and take-the-best-friendly (TTB; right) environments. In both environments, choices were strongly in accordance with the appropriate adaptive strategy. Dyads, however, either reached asymptotic accordance faster (take-the-best-friendly environment) or reached higher final levels of accordance with the adaptive strategy (WADD-friendly environment). Error bars: ±1 SE.

2) Accordance rates in Experiment 2

Figure B.2. Individuals’ and dyads’ mean accordance rates with the adaptive strategy in the WADD-friendly and take-the-best-friendly (TTB) environments. The two left panels depict the rates of accordance with the adaptive strategies in the experimental order of first the WADD-friendly and then the take-the-best-friendly environment (n = 20 individuals, n = 20 dyads); the two right panels depict the results for the reverse order. Error bars: ±1 SE.

Appendix C: Additional results: Classification with six strategies

Here we present the results of the classification according to Bröder and Schiffer (2003) for six strategies: WADD, Tally, chance-corrected WADD, naïve Bayes, take-the-best, and guessing. Strategy predictions were based on the cues observed by each individual and dyad, that is, predictions were tailored to the acquired cues. This results in a stricter classification than when predictions are based on all cues, as was done in the main text.

Table C.1 and Table C.2 show the average accordance and performance rates of the subgroups of subjects who were classified as users of one strategy in Experiment 1 and 2. These tables show that, in the take-the-best-friendly environment, there were (slightly) more dyads than individuals classified as using one of the three most successful strategies (take-the-best, naïve Bayes, chance-corrected WADD), which may be the reason why dyads also achieved on average a slightly higher performance level than individuals. In the WADD-friendly environment, equally many individuals and dyads were classified as using one of the three most successful strategies (WADD, Tally, chance-corrected WADD), and no performance differences were observed on average.

Table C.1: Results of the classification according to Bröder and Schiffer (2003), including six strategies, for Experiment 1.

Note: The columns contain the theoretical accuracy of each strategy in the respective environment, the number of classified subjects in each category (n), the average accordance with the respective strategy of the classified subjects and the observed average performance (SD in parentheses). Strategies are ordered per environment according to their theoretical accuracy in decreasing order. Classification = classification into one of the following strategies: WADD, Tally, WADD (chance) = WADD with chance corrected weights, naïve Bayes = WADD with log odds as weights, TTB = take-the-best. The sixth strategy was guessing. No subject was classified as guessing.

Table C.2: Results of the classification according to Bröder and Schiffer (2003) for six strategies for Experiment 2, for phase 1 (upper part) and phase 2 (lower part).

Note: The columns “accordance” and “performance” contain mean values with SD in parentheses. Classification = classification in one of the following strategies: WADD, Tally, WADD (chance) = WADD with chance corrected weights, naïve Bayes = WADD with log odds as weights, TTB = take-the-best. The sixth strategy was guessing.

* One subject was classified as using guessing. This subject had an average performance of .71 in phase 1 and of .67 in phase 2.

Information search behavior

For comparison with the main results, we also include the results for the two information search measures for those subjects who were now classified as adaptive take-the-best and WADD users in the two environments (see Table C.3 for Experiment 1 and Table C.4 for Experiment 2). These tables show that, similar to the main results, in the take-the-best-friendly environment dyads showed a larger decrease in the proportion of compensatory choices than individuals, indicating faster adaptation, though without any differences in the last block. In the WADD-friendly environment, results were mixed. In Experiment 1 (and phase 2 of Experiment 2), dyads showed a much larger drop from block 1 to block 3 in their proportion of opening “too few” cues and also reached a lower level in the end. However, in phase 1 of Experiment 2, the decrease was proportionally equal between individuals and dyads, though at an overall higher level for dyads. In phase 2 of Experiment 2, individuals and dyads started at a similarly high deviation rate, but the opening rate of dyads decreased to a lower level in the final block, indicating some superiority here.

Table C.3: Average values with SE in parentheses in Experiment 1.

Note: Measure for information search behavior and accordance with stopping rule were the percentage of trials in which “too few” cues were opened by subjects classified as adaptive WADD users and the proportion of compensatory choices for those classified as adaptive take-the-best (TTB) users. Note that the classification was based on the observed cues, for six strategies.

Table C.4: Average values with SE in parentheses in Experiment 2, in phase 1 (upper part) and phase 2 (lower part).

Note: Measure for information search behavior and accordance with stopping rule were the percentage of trials in which “too few” cues were opened by subjects classified as adaptive WADD users and the proportion of compensatory choices for those classified as adaptive take-the-best (TTB) users. Note that the classification was based on the observed cues, for six strategies.

Footnotes

This research was funded by the Max Planck Institute for Human Development, Berlin, Germany. We would like to thank Ulrich Klocke and Torsten Reimer for helpful discussions, Henrik Olsson and two anonymous reviewers for insightful comments on an earlier version of this article, and Anita Todd and Katherine McMahon for editing the manuscript. Thanks are also due to Gregor Caregnato and Jann Wäscher for collecting the data.


1 The different discrimination rates of the most valid cue in the two environments had no effect on how often this cue was opened (i.e., its opening rate): Opening rates (first value for the WADD-friendly, second for the take-the-best-friendly environment) were 98.7% and 98.9% for oil-drilling site X, and 98.6% and 98.1% for oil-drilling site Y. Also in Experiment 2, there were no differences in the opening rates of the most valid cue between environments: Opening rates (first value for the WADD-friendly, second for the take-the-best-friendly environment) were, for oil-drilling site X, 96.4% and 96.3% in phase 1 and 96.2% and 96.1% in phase 2, and, for oil-drilling site Y, 97.6% and 93.2% in phase 1 and 95.5% and 97.1% in phase 2.

2 The theoretical accuracy of alternative strategies such as Tally, WADD with chance-corrected weights (i.e., chance-corrected WADD), and naïve Bayes lay in between these two benchmarks. In detail (first value for the WADD-friendly environment, second value for the take-the-best-friendly environment), theoretical accuracies were Tally: .79, .58; chance-corrected WADD: .73, .77; naïve Bayes: .69, .81.

3 Using the same item sets repeatedly might invite reliance on exemplar processing instead of strategy- or cue-based learning. In this case, decisions are based on the similarity between the cue pattern of the target case and that of previously encountered exemplars. People have been found to rely more on exemplar knowledge when categorizing perceptual objects (Nosofsky & Johansen, 2000) or when making memory-based decisions when cue abstraction is hindered (Platzer & Bröder, in press). Additionally, the type of learning, be it comparison learning (i.e., learning which of two objects in a paired comparison has the higher criterion value) or direct criterion learning (i.e., directly learning an object’s criterion value), has been identified as an important moderating factor (Pachur & Olsson, 2012). In the current study, learning by comparison may occur, and it could foster cue-based mechanisms (Pachur & Olsson, 2012). Moreover, research on exemplar models has provided evidence for a “rule bias”, that is, a tendency to rely on rule knowledge (e.g., validities) whenever possible (e.g., Juslin, Olsson, & Olsson, 2003). We would thus expect subjects to engage in cue-based learning (i.e., learning to use WADD or take-the-best).

4 We did not test these differences statistically because of the very unequal sample sizes (n = 190 nominal dyads vs. n = 40 real dyads; Field, 2009). Moreover, it can be seen from the values that no practically relevant differences are observable.

5 Tally is considered as the fourth alternative after the strategies with the highest expected accuracy in the two respective environments and a baseline guessing model, as is usually done (e.g., Bröder & Schiffer, 2006). Tally (or Dawes’s rule; Dawes, 1979) assumes that people sum up the positive cues and choose the option with the larger total sum. It thus searches for all information. In the WADD-friendly environment, it performed second best (79%), and in the take-the-best-friendly environment it performed worse than take-the-best (58%).
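The summing rule just described can be sketched in a few lines (our own illustration, not material from the study; the ±1 cue coding is an assumption):

```python
# Sketch of Tally (Dawes's rule) as described in this footnote: count the
# positive cue values per option and choose the option with the larger count;
# a tie leads to guessing. The +1/-1 cue coding is assumed for illustration.

def tally(cues_x, cues_y):
    pos_x = sum(1 for c in cues_x if c > 0)
    pos_y = sum(1 for c in cues_y if c > 0)
    if pos_x == pos_y:
        return "guess"
    return "X" if pos_x > pos_y else "Y"

print(tally([+1, +1, -1, +1], [-1, +1, +1, -1]))  # X has 3 positive cues, Y has 2
```

Note that, unlike WADD, Tally ignores the validities entirely; it differs from WADD only in its weighting, not in its exhaustive search.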

6 As the results of the classification procedure depend on the number of competing strategies, we also report all subsequent results for a second, stricter classification procedure with six strategies (see Table C.1 in Appendix C). The results concerning information search for the reduced sample of classified adaptive strategy users can be found in Table C.3.

7 The amount of contradictory evidence can be measured in different ways, for example, by calculating the weighted sum of all those cues that were opened after the first discriminating one, for each option X and Y, and comparing these sums with each other. If the first discriminating cue points to X, for example (i.e., has a positive value for X), but the weighted sum of cues opened after the first discriminating one is larger for Y, this is regarded as contradictory evidence. We report the results for this measure. An alternative way would be to count the number of discriminating cues that follow the first discriminating one and to note the direction in which they point. If, after the first discriminating cue more discriminating cues follow that point in the other direction (Y), this would be regarded as contradictory evidence. These measures yield very similar results.
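The first of the two measures described above might be sketched as follows (our own illustration; the data layout, the ±1 cue coding, and the function name are assumptions, not code from the study):

```python
# Sketch of the weighted-sum measure of contradictory evidence: compare the
# validity-weighted sums of all cues opened after the first discriminating one,
# and flag a trial in which that later evidence points to the other option.

def contradictory_evidence(opened, validities):
    """
    opened: list of (cue_x, cue_y) pairs in the order cues were opened,
            with +1/-1 cue coding (an assumed layout).
    validities: validity of each opened cue, in the same order.
    Returns True if the cues opened after the first discriminating cue
    favor, in weighted sum, the option NOT favored by that cue.
    """
    # locate the first discriminating cue
    first = next((i for i, (cx, cy) in enumerate(opened) if cx != cy), None)
    if first is None or first == len(opened) - 1:
        return False  # no discriminating cue, or nothing opened afterwards
    favored = "X" if opened[first][0] > opened[first][1] else "Y"
    # validity-weighted sums over the cues opened after the first discriminating one
    score_x = sum(validities[i] * opened[i][0] for i in range(first + 1, len(opened)))
    score_y = sum(validities[i] * opened[i][1] for i in range(first + 1, len(opened)))
    later = "X" if score_x > score_y else ("Y" if score_y > score_x else None)
    return later is not None and later != favored
```

A pure take-the-best user would stop at the first discriminating cue, so for such a user this measure can never fire; its frequency thus indexes deviations from the stopping rule.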

8 We are grateful to the action editor Andreas Glöckner and the reviewers for suggesting that we integrate more strategies into our analyses, as this allowed for a more general interpretation of the performance differences between individuals and dyads.

References

Allen, N. J., & Hecht, T. D. (2004). The ’romance of teams’: Toward an understanding of its psychological underpinnings and implications. Journal of Occupational & Organizational Psychology, 77, 439–461. http://dx.doi.org/10.1348/0963179042596469
Baron, R. S. (1986). Distraction-conflict theory: Progress and problems. In Berkowitz, L. (Ed.), Advances in experimental social psychology, Vol. 19 (pp. 1–40). New York: Academic Press.
Bergert, F. B., & Nosofsky, R. M. (2007). A response-time approach to comparing generalized rational and take-the-best models of decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 107–129. http://dx.doi.org/10.1037/0278-7393.33.1.107
Betsch, T., Fiedler, K., & Brinkmann, J. (1998). Behavioral routines in decision making: The effects of novelty in task presentation and time pressure on routine maintenance and deviation. European Journal of Social Psychology, 28, 861–878. http://dx.doi.org/10.1002/(SICI)1099-0992(1998110)28:6<861::AID-EJSP899>3.0.CO;2-D
Betsch, T., & Haberstroh, S. (2005). The routines of decision making. Mahwah, NJ: Erlbaum.
Betsch, T., Haberstroh, S., Glöckner, A., Haar, T., & Fiedler, K. (2001). The effects of routine strength on adaptation and information search in recurrent decision making. Organizational Behavior and Human Decision Processes, 84, 23–53. http://dx.doi.org/10.1006/obhd.2000.2916
Betsch, T., Haberstroh, S., & Höhle, C. (2002). Explaining routinized decision making. Theory & Psychology, 12, 453–488. http://dx.doi.org/10.1177/0959354302012004294
Betts, K. R., & Hinsz, V. B. (2010). Collaborative group memory: Processes, performance, and techniques for improvement. Social and Personality Psychology Compass, 4, 119–130. http://dx.doi.org/10.1111/j.1751-9004.2009.00252.x
Bonaccio, S., & Dalal, R. S. (2006). Advice taking and decision-making: An integrative literature review, and implications for the organizational sciences. Organizational Behavior and Human Decision Processes, 101, 127–151. http://dx.doi.org/10.1016/j.obhdp.2006.07.001
Bottger, P., & Yetton, P. (1988). An integration of process and decision scheme explanations of group problem solving performance. Organizational Behavior and Human Decision Processes, 42, 234–249. http://dx.doi.org/10.1016/0749-5978(88)90014-3
Bowers, J. S., & Davis, C. J. (2012a). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138, 389–414. http://dx.doi.org/10.1037/a0026450
Bowers, J. S., & Davis, C. J. (2012b). Is that what Bayesians believe? Reply to Griffiths, Chater, Norris, and Pouget (2012). Psychological Bulletin, 138, 423–426. http://dx.doi.org/10.1037/a0027750
Bröder, A. (2003). Decision making with the ’adaptive toolbox’: Influence of environmental structure, intelligence, and working memory load. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 611–625. http://dx.doi.org/10.1037/0278-7393.29.4.611
Bröder, A. (2012). The quest for take the best—Insights and outlooks from experimental research. In Todd, P., Gigerenzer, G., & the ABC Research Group (Eds.), Ecological rationality: Intelligence in the world (pp. 216–240). New York: Oxford University Press.
Bröder, A., & Schiffer, S. (2003). Bayesian strategy assessment in multi-attribute decision making. Journal of Behavioral Decision Making, 16, 193–213. http://dx.doi.org/10.1002/bdm.442
Bröder, A., & Schiffer, S. (2006). Adaptive flexibility and maladaptive routines in selecting fast and frugal decision strategies. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 904–918. http://dx.doi.org/10.1037/0278-7393.32.4.904
Burke, C. S., Stagl, K. C., Salas, E., Pierce, L., & Kendall, D. (2006). Understanding team adaptation: A conceptual analysis and model. Journal of Applied Psychology, 91, 1189–1207. http://dx.doi.org/10.1037/0021-9010.91.6.1189
Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100, 432–459. http://dx.doi.org/10.1037/0033-295X.100.3.432
Chalos, P., & Pickard, S. (1985). Information choice and cue use: An experiment in group information-processing. Journal of Applied Psychology, 70, 634–641. http://dx.doi.org/10.1037/0021-9010.70.4.634
Christensen-Szalanski, J. J. (1978). Problem solving strategies: A selection mechanism, some implications, and some data. Organizational Behavior & Human Performance, 22, 307–323. http://dx.doi.org/10.1016/0030-5073(78)90019-3
Christensen-Szalanski, J. J. (1980). A further examination of the selection of problem-solving strategies: The effects of deadlines and analytic aptitudes. Organizational Behavior & Human Performance, 25, 107–122. http://dx.doi.org/10.1016/0030-5073(80)90028-8
Chu, P., & Spires, E. E. (2003). Perceptions of accuracy and effort of decision strategies. Organizational Behavior and Human Decision Processes, 91, 203–214. http://dx.doi.org/10.1016/S0749-5978(03)00056-6
Creyer, E. H., Bettman, J. R., & Payne, J. W. (1990). The impact of accuracy and effort feedback and goals on adaptive decision behavior. Journal of Behavioral Decision Making, 3, 1–16. http://dx.doi.org/10.1002/bdm.3960030102
Czerlinski, J., Gigerenzer, G., & Goldstein, D. G. (1999). How good are simple heuristics? In Gigerenzer, G., Todd, P., & the ABC Research Group (Eds.), Simple heuristics that make us smart (pp. 97–118). New York: Oxford University Press.
Czienskowski, U. (2004). The oil drilling experiment [Computer software]. Berlin, Germany: Max Planck Institute for Human Development.
Davis, J. H. (1969). Individual-group problem solving, subject preference, and problem type. Journal of Personality and Social Psychology, 13, 362–374. http://dx.doi.org/10.1037/h0028378
Dawes, R. M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34, 571–582. http://dx.doi.org/10.1037/0003-066X.34.7.571
De Dreu, C. K. W., Nijstad, B. A., & van Knippenberg, D. (2008). Motivated information processing in group judgment and decision making. Personality and Social Psychology Review, 12, 22–49. http://dx.doi.org/10.1177/1088868307304092
Dieckmann, A., & Rieskamp, J. (2007). The influence of information redundancy on probabilistic inferences. Memory & Cognition, 35, 1801–1813. http://dx.doi.org/10.3758/BF03193511
Diehl, M., & Stroebe, W. (1987). Productivity loss in brainstorming groups: Toward the solution of a riddle. Journal of Personality and Social Psychology, 53, 497–509. http://dx.doi.org/10.1037/0022-3514.53.3.497
Field, A. (2009). Discovering statistics using SPSS. London: Sage Publications Ltd.
Frey, D., Schulz-Hardt, S., & Stahlberg, D. (1996). Information seeking among individuals and groups and possible consequences for decision making in business and politics. In Understanding group behavior, Vol. 2: Small group processes and interpersonal relations (pp. 211–225). Hillsdale, NJ: Lawrence Erlbaum Associates.
Garcia-Retamero, R., & Dhami, M. (2009). Take-the-best in expert-novice decision strategies for residential burglary. Psychonomic Bulletin & Review, 16, 163–169. http://dx.doi.org/10.3758/PBR.16.1.163
Garcia-Retamero, R., Takezawa, M., & Gigerenzer, G. (2009). Does imitation benefit cue order learning? Experimental Psychology, 56, 307–320. http://dx.doi.org/10.1027/1618-3169.56.5.307
Gersick, C. J. G., & Hackman, J. R. (1990). Habitual routines in task-performing groups. Organizational Behavior and Human Decision Processes, 47, 65–97. http://dx.doi.org/10.1016/0749-5978(90)90047-D
Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62, 451–482. http://dx.doi.org/10.1146/annurev-psych-120709-145346
Gigerenzer, G., & Goldstein, D. G. (1999). Betting on one good reason: The take the best heuristic. In Gigerenzer, G., Todd, P. M., & the ABC Research Group, Simple heuristics that make us smart (pp. 75–96). New York, NY: Oxford University Press.
Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple heuristics that make us smart. New York, NY: Oxford University Press.
Gigone, D., & Hastie, R. (1997). The impact of information on small group choice. Journal of Personality and Social Psychology, 72, 132–140. http://dx.doi.org/10.1037/0022-3514.72.1.132
Glöckner, A., & Betsch, T. (2008a). Modeling option and strategy choices with connectionist networks: Towards an integrative model of automatic and deliberate decision making. Judgment and Decision Making, 3, 215–228. http://dx.doi.org/10.1017/S1930297500002424
Glöckner, A., & Betsch, T. (2008b). Multiple-reason decision making based on automatic processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1055. http://dx.doi.org/10.1037/0278-7393.34.5.1055
Glöckner, A., & Betsch, T. (2010). Accounting for critical evidence while being precise and avoiding the strategy selection problem in a parallel constraint satisfaction approach: A reply to Marewski (2010). Journal of Behavioral Decision Making, 23, 468–472. http://dx.doi.org/10.1002/bdm.688
Glöckner, A., Betsch, T., & Schindler, N. (2010). Coherence shifts in probabilistic inference tasks. Journal of Behavioral Decision Making, 23, 439–462. http://dx.doi.org/10.1002/bdm.668
Griffiths, T. L., Chater, N., Norris, D., & Pouget, A. (2012). How the Bayesians got their beliefs (and what those beliefs actually are): Comment on Bowers and Davis (2012). Psychological Bulletin, 138, 415–422. http://dx.doi.org/10.1037/a0026884
Harvey, N., & Bolger, F. (2001). Collecting information: Optimizing outcomes, screening options, or facilitating discrimination? Quarterly Journal of Experimental Psychology: Section A, 54, 269–301. http://dx.doi.org/10.1080/02724980042000110
Henry, R. A. (1995). Improving group judgment accuracy: Information sharing and determining the best member. Organizational Behavior and Human Decision Processes, 62, 190–197. http://dx.doi.org/10.1006/obhd.1995.1042
Henry, R. A., Strickland, O. J., Yorges, S. L., & Ladd, D. (1996). Helping groups determine their most accurate member: The role of outcome feedback. Journal of Applied Social Psychology, 26(13), 1153–1170. http://dx.doi.org/10.1111/j.1559-1816.1996.tb02290.x
Hill, G. W. (1982). Group versus individual performance: Are N + 1 heads better than one? Psychological Bulletin, 91, 517–539. http://dx.doi.org/10.1037/0033-2909.91.3.517
Hinsz, V. B. (1990). Cognitive and consensus processes in group recognition memory performance. Journal of Personality and Social Psychology, 59, 705–718. http://dx.doi.org/10.1037/0022-3514.59.4.705
Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conceptualization of groups as information processors. Psychological Bulletin, 121, 43–64. http://dx.doi.org/10.1037/0033-2909.121.1.43
Hogarth, R. M., & Karelaia, N. (2006). “Take-the-best” and other simple strategies: Why and when they work “well” with binary cues. Theory and Decision, 61, 205–249. http://dx.doi.org/10.1007/s11238-006-9000-8
Hogarth, R. M., & Karelaia, N. (2007). Heuristic and linear models of judgment: Matching rules and environments. Psychological Review, 114, 733–758. http://dx.doi.org/10.1037/0033-295X.114.3.733
Jones, M., & Love, B. C. (2011). Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34, 169–231. http://dx.doi.org/10.1017/S0140525X10003134
Juslin, P., Olsson, H., & Olsson, A.-C. (2003). Exemplar effects in categorization and multiple-cue judgment. Journal of Experimental Psychology: General, 132, 133–156. http://dx.doi.org/10.1037/0096-3445.132.1.133
Kämmer, J. E., Gaissmaier, W., Reimer, T., & Schermuly, C. C. (2013). The adaptive use of recognition in group decision making. Manuscript submitted for publication.
Katsikopoulos, K. V., & Martignon, L. (2006). Naïve heuristics for paired comparisons: Some results on their relative accuracy. Journal of Mathematical Psychology, 50, 488–494. http://dx.doi.org/10.1016/j.jmp.2006.06.001
Kerr, N. L., MacCoun, R. J., & Kramer, G. P. (1996). Bias in judgment: Comparing individuals and groups. Psychological Review, 103, 687–719. http://dx.doi.org/10.1037/0033-295X.103.4.687
Kerr, N. L., & Tindale, R. S. (2004). Group performance and decision making. Annual Review of Psychology, 55, 623–655. http://dx.doi.org/10.1146/annurev.psych.55.090902.142009
Kocher, M. G., & Sutter, M. (2005). The decision maker matters: Individual versus group behaviour in experimental beauty-contest games. The Economic Journal, 115, 200–223. http://dx.doi.org/10.1111/j.1468-0297.2004.00966.x
Lamm, H., & Trommsdorff, G. (2006). Group versus individual performance on tasks requiring ideational proficiency (brainstorming): A review. European Journal of Social Psychology, 3, 361–388. http://dx.doi.org/10.1002/ejsp.2420030402
Larson, J. R., Foster-Fishman, P. G., & Keys, C. B. (1994). Discussion of shared and unshared information in decision-making groups. Journal of Personality and Social Psychology, 67, 446–461. http://dx.doi.org/10.1037/0022-3514.67.3.446
Laughlin, P. R., Bonner, B. L., & Miner, A. G. (2002). Groups perform better than the best individuals on letters-to-numbers problems. Organizational Behavior and Human Decision Processes, 88, 605–620. http://dx.doi.org/10.1016/S0749-5978(02)00003-1
Laughlin, P. R., & Shippy, T. A. (1983). Collective induction. Journal of Personality and Social Psychology, 45, 94–100. http://dx.doi.org/10.1037/0022-3514.45.1.94
Laughlin, P. R., VanderStoep, S. W., & Hollingshead, A. B. (1991). Collective versus individual induction: Recognition of truth, rejection of error, and collective information processing. Journal of Personality and Social Psychology, 61, 50–67. http://dx.doi.org/10.1037/0022-3514.61.1.50
Lee, M. D., & Cummins, T. D. R. (2004). Evidence accumulation in decision making: Unifying the “take the best” and the “rational” models. Psychonomic Bulletin & Review, 11, 343–352.
LePine, J. A. (2003). Team adaptation and postchange performance: Effects of team composition in terms of members’ cognitive ability and personality. Journal of Applied Psychology, 88, 27–39. http://dx.doi.org/10.1037/0021-9010.88.1.27
Levine, J. M., & Smith, E. R. (in press). Group cognition: Collective information search and distribution. In Carlston, D. E. (Ed.), Oxford handbook of social cognition. New York: Oxford University Press.
Lorenz, J., Rauhut, H., Schweitzer, F., & Helbing, D. (2011). How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences, 108, 9020–9025. http://dx.doi.org/10.1073/pnas.1008636108
Lorge, I., & Solomon, H. (1955). Two models of group behavior in the solution of eureka-type problems. Psychometrika, 20, 139–148. http://dx.doi.org/10.1007/BF02288986
Maciejovsky, B., Sutter, M., Budescu, D. V., & Bernau, P. (2010). Teams make you smarter: Learning and knowledge transfer in auctions and markets by teams and individuals. IZA Discussion Paper No. 5105. Available at SSRN: http://ssrn.com/abstract=1659084. http://dx.doi.org/10.2139/ssrn.1659084
Manser, T. (2009). Teamwork and patient safety in dynamic domains of healthcare: A review of the literature. Acta Anaesthesiologica Scandinavica, 53, 143–151. http://dx.doi.org/10.1111/j.1399-6576.2008.01717.x
Marewski, J. N. (2010). On the theoretical precision and strategy selection problem of a single-strategy approach: A comment on Glöckner, Betsch, and Schindler (2010). Journal of Behavioral Decision Making, 23, 463–467. http://dx.doi.org/10.1002/bdm.680
Marewski, J. N., & Schooler, L. J. (2011). Cognitive niches: An ecological model of strategy selection. Psychological Review, 118, 393–437. http://dx.doi.org/10.1037/a0024143
Mata, R., Schooler, L. J., & Rieskamp, J. (2007). The aging decision maker: Cognitive aging and the adaptive selection of decision strategies. Psychology and Aging, 22, 796–810. http://dx.doi.org/10.1037/0882-7974.22.4.796
McAllister, D. W., Mitchell, T. R., & Beach, L. R. (1979). The contingency model for the selection of decision strategies: An empirical test of the effects of significance, accountability, and reversibility. Organizational Behavior and Human Performance, 24, 228–244. http://dx.doi.org/10.1016/0030-5073(79)90027-8
Miner, F. C. (1984). Group versus individual decision making: An investigation of performance measures, decision strategies, and process losses/gains. Organizational Behavior and Human Performance, 33, 112–124. http://dx.doi.org/10.1016/0030-5073(84)90014-X
Nadler, D. A. (1979). The effects of feedback on task group behavior: A review of the experimental research. Organizational Behavior and Human Performance, 23, 309–338. http://dx.doi.org/10.1016/0030-5073(79)90001-1
Newell, B. R. (2005). Re-visions of rationality? Trends in Cognitive Sciences, 9, 11–15. http://dx.doi.org/10.1016/j.tics.2004.11.005
Newell, B. R., & Lee, M. D. (2011). The right tool for the job? Comparing an evidence accumulation and a naive strategy selection model of decision making. Journal of Behavioral Decision Making, 24, 456–481. http://dx.doi.org/10.1002/bdm.703
Newell, B. R., & Shanks, D. R. (2003). Take the best or look at the rest? Factors influencing “one-reason” decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 53–65. http://dx.doi.org/10.1037/0278-7393.29.1.53
Newell, B. R., Weston, N. J., & Shanks, D. R. (2003). Empirical tests of a fast-and-frugal heuristic: Not everyone “takes-the-best”. Organizational Behavior and Human Decision Processes, 91, 82–96. http://dx.doi.org/10.1016/S0749-5978(02)00525-3
Nosofsky, R. M., & Johansen, M. K. (2000). Exemplar-based accounts of “multiple-system” phenomena in perceptual categorization. Psychonomic Bulletin and Review, 7, 375–402.
Olsson, A. C., Juslin, P., & Olsson, H. (2006). Individuals and dyads in a multiple-cue judgment task: Cognitive processes and performance. Journal of Experimental Social Psychology, 42, 40–56. http://dx.doi.org/10.1016/j.jesp.2005.01.004
Pachur, T., & Olsson, H. (2012). Type of learning task impacts performance and strategy selection in decision making. Cognitive Psychology, 65, 207–240. http://dx.doi.org/10.1016/j.cogpsych.2012.03.003
Parks, C. D., & Cowlin, R. A. (1996). Acceptance of uncommon information into group discussion when that information is or is not demonstrable. Organizational Behavior and Human Decision Processes, 66, 307–315. http://dx.doi.org/10.1006/obhd.1996.0058
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. Cambridge, England: Cambridge University Press. http://dx.doi.org/10.1017/CBO9781139173933
Platzer, C., & Bröder, A. (in press). When the rule is ruled out: Exemplars and rules in decisions from memory. Journal of Behavioral Decision Making. http://dx.doi.org/10.1002/bdm.1776
Randall, K. R., Resick, C. J., & DeChurch, L. A. (2011). Building team adaptive capacity: The roles of sensegiving and team composition. Journal of Applied Psychology, 96, 525–540. http://dx.doi.org/10.1037/a0022622
Reimer, T., Bornstein, A.-L., & Opwis, K. (2005). Positive and negative transfer effects in groups. In Betsch, T., & Haberstroh, S. (Eds.), The routines of decision making (pp. 175–192). Mahwah, NJ: Erlbaum.
Reimer, T., & Hoffrage, U. (2006). The ecological rationality of simple group heuristics: Effects of group member strategies on decision accuracy. Theory and Decision, 60, 403–438. http://dx.doi.org/10.1007/s11238-005-4750-2
Reimer, T., & Hoffrage, U. (2012). Ecological rationality for teams and committees: Heuristics in group decision making. In Todd, P. M., Gigerenzer, G., & the ABC Research Group (Eds.), Ecological rationality: Intelligence in the world (pp. 335–359). New York: Oxford University Press.
Reimer, T., Hoffrage, U., & Katsikopoulos, K. V. (2007). Entscheidungsheuristiken in Gruppen [Heuristics in group decision-making]. NeuroPsychoEconomics, 2, 7–29.
Reimer, T., & Katsikopoulos, K. V. (2004). The use of recognition in group decision-making. Cognitive Science, 28, 1009–1029. http://dx.doi.org/10.1016/j.cogsci.2004.06.004
Richter, A., Dawson, J., & West, M. (2011). The effectiveness of teams in organizations: A meta-analysis. The International Journal of Human Resource Management, 22, 2749–2769. http://dx.doi.org/10.1080/09585192.2011.573971
Rieskamp, J., & Dieckmann, A. (2012). Redundancy: Environment structure that simple heuristics can exploit. In Todd, P. M., Gigerenzer, G., & the ABC Research Group (Eds.), Ecological rationality: Intelligence in the world (pp. 187–215). New York: Oxford University Press.
Rieskamp, J., & Hoffrage, U. (1999). When do people use simple heuristics, and how can we tell? In Gigerenzer, G., Todd, P. M., & the ABC Research Group, Simple heuristics that make us smart (pp. 141–167). New York: Oxford University Press.
Rieskamp, J., & Hoffrage, U. (2008). Inferences under time pressure: How opportunity costs affect strategy selection. Acta Psychologica, 127, 258–276. http://dx.doi.org/10.1016/j.actpsy.2007.05.004
Rieskamp, J., & Otto, P. E. (2006). SSL: A theory of how people learn to select strategies. Journal of Experimental Psychology: General, 135, 207–236. http://dx.doi.org/10.1037/0096-3445.135.2.207
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129–138. http://dx.doi.org/10.1037/h0042769
Stasser, G. (1992). Information salience and the discovery of hidden profiles by decision-making groups: A “thought experiment”. Organizational Behavior and Human Decision Processes, 52, 156–181. http://dx.doi.org/10.1016/0749-5978(92)90049-D
Stasser, G., Taylor, L. A., & Hanna, C. (1989). Information sampling in structured and unstructured discussions of three- and six-person groups. Journal of Personality and Social Psychology, 57, 67–78. http://dx.doi.org/10.1037/0022-3514.57.1.67
Steiner, I. D. (1972). Group process and productivity. New York: Academic Press.
Stroebe, W., Nijstad, B. A., & Rietzschel, E. F. (2010). Beyond productivity loss in brainstorming groups: The evolution of a question. Advances in Experimental Social Psychology, 43, 157–203. http://dx.doi.org/10.1016/S0065-2601(10)43004-X
Svenson, O. (1992). Differentiation and consolidation theory of human decision making: A frame of reference for the study of pre- and post-decision processes. Acta Psychologica, 80, 143–168. http://dx.doi.org/10.1016/0001-6918(92)90044-E
Tindale, R. S. (1989). Group vs. individual information processing: The effects of outcome feedback on decision making. Organizational Behavior and Human Decision Processes, 44, 454–473. http://dx.doi.org/10.1016/0749-5978(89)90019-8
Tindale, R. S., & Sheffey, S. (2002). Shared information, cognitive load, and group memory. Group Processes & Intergroup Relations, 5, 5–18. http://dx.doi.org/10.1177/1368430202005001535
Todd, P. M., & Gigerenzer, G. (2012). What is ecological rationality? In Todd, P. M., Gigerenzer, G., & the ABC Research Group (Eds.), Ecological rationality: Intelligence in the world (pp. 3–30). New York: Oxford University Press.
Vollrath, D. A., Sheppard, B. H., Hinsz, V. B., & Davis, J. H. (1989). Memory performance by decision-making groups and individuals. Organizational Behavior and Human Decision Processes, 43, 289–300. http://dx.doi.org/10.1016/0749-5978(89)90040-X
Waller, M. J. (1999). The timing of adaptive group responses to nonroutine events. Academy of Management Journal, 42, 127–137.
Watson, G. B. (1928). Do groups think more efficiently than individuals? The Journal of Abnormal and Social Psychology, 23, 328–336. http://dx.doi.org/10.1037/h0072661
Wegwarth, O., Gaissmaier, W., & Gigerenzer, G. (2009). Smart strategies for doctors and doctors-in-training: Heuristics in medicine. Medical Education, 43, 721–728. http://dx.doi.org/10.1111/j.1365-2923.2009.03359.x
Weldon, M. S., Blair, C., & Huebsch, P. D. (2000). Group remembering: Does social loafing underlie collaborative inhibition? Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1568–1577. http://dx.doi.org/10.1037/0278-7393.26.6.1568
Wittenbaum, G. M., & Stasser, G. (1996). Management of information in small groups. In Nye, J. L., & Brower, A. M. (Eds.), What’s Social about Social Cognition? (pp. 967–978). Thousand Oaks, CA: Sage.
Figure 1: Screenshots of the task interface, including six cues for each oil-drilling site (X and Y), illustrating the search behavior of a weighted additive strategy (WADD, left) and take-the-best (right). WADD required looking up all cues to calculate the weighted sum for each alternative. Take-the-best looked up the cue with the highest validity (here: seismic analysis) first and, as this one did not discriminate, next looked up the cue with the second highest validity (geophones). As this cue discriminated, take-the-best reached a decision and ignored the remaining cues, which is why they are still hidden (“?”).
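The two search rules contrasted in this caption can be sketched in code (a hypothetical sketch, assuming binary cues coded 1/0 and cue validities known in advance; names are illustrative):

```python
def take_the_best(cues_x, cues_y, validities):
    """Look up cues in descending order of validity; stop and decide at
    the first cue that discriminates; ignore all remaining cues."""
    order = sorted(range(len(validities)), key=lambda i: validities[i], reverse=True)
    for i in order:
        if cues_x[i] != cues_y[i]:
            return ("X" if cues_x[i] > cues_y[i] else "Y"), i  # choice, deciding cue
    return "guess", None  # no cue discriminated


def wadd(cues_x, cues_y, validities):
    """Weighted additive rule: look up all cues and compare the
    validity-weighted sums of the two alternatives."""
    sum_x = sum(v * c for v, c in zip(validities, cues_x))
    sum_y = sum(v * c for v, c in zip(validities, cues_y))
    if sum_x == sum_y:
        return "guess"
    return "X" if sum_x > sum_y else "Y"
```

In the caption’s example, the highest-validity cue ties and the second discriminates, so take-the-best stops after two look-ups, whereas WADD opens all six cues.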

Figure 2: Mean performance per block of dyads (n = 20) and individuals (n = 20), in the WADD-friendly (left) and take-the-best-friendly (TTB; right) environments. Error bars: ±1 SE.

Figure 3: Two measures of strategy use concerning the stopping rule, in the WADD-friendly environment (left) and in the take-the-best-friendly environment (right). The left panel depicts the relative frequency of cases in which too few cues were looked up, that is, in which cues remained unopened that would have been needed to ensure that the decision could not be overruled by additional evidence. This measure was calculated for the 16 individuals and 18 dyads who were classified as adaptive WADD users. The right panel depicts the proportion of trials in which people decided against the first discriminating cue on the basis of additionally opened, less valid cues, although, according to take-the-best, these less valid cues should not have overruled the first discriminating cue. This measure was calculated for the 13 individuals and 18 dyads who were classified as adaptive take-the-best users. Error bars: ±1 SE.

Figure 4: Individuals’ and dyads’ average performance in the two experimental orders: The left panel depicts the rates of performance with the adaptive strategies in the experimental order of first the WADD-friendly and then the take-the-best-friendly environment; the right panel depicts the results for the reverse order. Error bars: ±1 SE.

Figure 5: Mean percentage of trials in which “too few” cues were opened by subjects who were classified as WADD users in the WADD-friendly environment, in the first phase (left; n = 18 individuals and n = 18 dyads) and in the second phase (right; n = 13 individuals and n = 13 dyads). Error bars: ±1 SE.

Figure 6: Average proportion of those trials in which people decided against the first discriminating cue based on less valid cues that were additionally opened (i.e., contradictory evidence), in the first (left) and in the second (right) phase in the take-the-best-friendly environment. This measure was calculated for those subjects who were classified as adaptive take-the-best users (phase 1: n = 15 individuals and n = 17 dyads; phase 2: n = 7 dyads). Note that no individuals were classified as take-the-best users in the second phase, so no results can be displayed for individuals in the right panel. Error bars: ±1 SE.

Table A.1: Item set in the WADD-friendly environment.

Table A.2: Item set in the take-the-best-friendly environment.

Figure B.1. Individuals’ and dyads’ mean rates of accordance with the adaptive strategy in the WADD-friendly (left) and take-the-best-friendly (TTB; right) environments. In both environments, choices were strongly in accordance with the appropriate adaptive strategy. Dyads, however, either reached asymptotic accordance faster (take-the-best-friendly environment) or reached higher final levels of accordance with the adaptive strategy (WADD-friendly environment). Error bars: ±1 SE.

Figure B.2. Individuals’ and dyads’ mean accordance rates with the adaptive strategy in the WADD-friendly and take-the-best-friendly (TTB) environments. The two left panels depict the rates of accordance with the adaptive strategies in the experimental order of first the WADD-friendly and then the take-the-best-friendly environment (n = 20 individuals, n = 20 dyads); the two right panels depict the results for the reverse order. Error bars: ±1 SE.

Table C.1: Results for the classification according to Bröder and Schiffer (2003) including six strategies for Experiment 1.

Table C.2: Results for the classification according to Bröder and Schiffer (2003) for six strategies for Experiment 2, for phase 1 (upper part) and phase 2 (lower part).

Table C.3: Average values with SE in parentheses in Experiment 1.

Table C.4: Average values with SE in parentheses in Experiment 2, in phase 1 (upper part) and phase 2 (lower part).

Supplementary material
- Kämmer et al. supplementary material 1 (File, 2.1 MB)
- Kämmer et al. supplementary material 2 (File, 2.1 MB)
- Kämmer et al. supplementary material 3 (File, 12.6 KB)
- Kämmer et al. supplementary material 4 (File, 3.1 KB)
- Kämmer et al. supplementary material 5 (File, 5.2 MB)
- Kämmer et al. supplementary material 6 (File, 5.3 MB)
- Kämmer et al. supplementary material 7 (File, 20.9 KB)
- Kämmer et al. supplementary material 8 (File, 3.8 KB)
- Kämmer et al. supplementary material 9 (File, 6.7 KB)