1.1 Introduction
The topic of the present volume, sampling approaches to judgment and decision-making (JDM) research, is ideally suited to illustrate the power and fertility of theory-driven research and theorizing in a flourishing area of behavioral science. The last two decades of rationality research in psychology, economics, philosophy, biology, and computer science are replete with ideas borrowed from statistical sampling models that place distinct constraints on information transmission processes. These sampling approaches highlight the wisdom, gained from Kurt Lewin and Egon Brunswik, that in order to understand cognitive and motivational processes within the individual, it is first of all essential to understand the structure and distribution of the environmental stimulus input that impinges on the individual’s mind. This is exactly the focus of sampling-theory approaches.
The environmental input triggers, enables, constrains, and biases the information transmission process before any cognitive processes come into play. Because the information offered in newspapers, TV, Internet, textbooks, and literature, or through personal communication is hardly ever an unbiased representative sample of the world, but is inevitably selective and biased toward some and against other topics and sources, a comprehensive theory of judgment and decision-making must take the ecology into account. Importantly, the information input is not only reflective of existing biases of a wicked environment. It is also empowered by the statistical strength and reliability of a distributed array of observations, the statistical properties of which are well understood. So, the challenges of a potentially biased “wicked” environment (Hogarth et al., 2015) come along with normative instruments for debiasing and separating the wheat from the chaff.
For a comprehensive theory of judgments and decisions in a probabilistic world, the cognitive stage of information processing cannot be understood unless the logically antecedent stage of environmental sampling is understood in the first place. Figure 1.1 illustrates this fundamental notion. The left box at the middle level reflects the basic assumption that the distal constructs that constitute the focus of judgment – such as health risks, student ability, a defendant’s guilt, or the profitability of an investment – are not amenable to direct perception. We do not have sense organs to literally perceive risk, ability, or guilt. We only have access to samples of proximal cues (in the middle box) that are more directly assessable and that allow us to make inferences about the distal entities, to which they are statistically related. Samples of accident rates or expert advice serve to infer risk; students’ responses to knowledge questions allow teachers to infer their ability in math or languages; samples of linguistic truth criteria in eyewitness protocols inform inferences of a defendant’s guilt (Vrij & Mann, 2006). A nice feature of these proximal stimulus distributions is that normative rules of statistics allow us to monitor and control the process, inferring the reliability from the sample’s size and internal consistency and – when the proximal data are representative of a domain – even the validity of the given stimulus information.

Figure 1.1 Two stages of information transmission from a cognitive-ecological perspective
Regardless of how valid or reliable the environmental input is, it constrains and predetermines the subsequent cognitive judgment and decision process. The accuracy of a health expert’s risk estimate, a teacher’s student evaluation, and a judge’s guilt assessment depends on the diagnostic value of the cue samples used to infer risk, ability, or guilt. The accuracy and confidence of their judgments and decisions depend primarily on the quality of the sampled data. Resulting distortions and biased judgments need not reflect biases in human memory or reasoning; such biases may already be inherent in the environmental sample with which the cognitive process was fed.
Indeed, the lessons taken from the entire research program of the Kahneman–Tversky tradition can be revisited and revised fundamentally from a sampling-theoretical perspective. Illusions and biases may not, or not always, reflect deficits of human memory or flawed heuristic processes within the human mind. They may rather reflect an information transmission process that is anchored in the environment, prior to all cognitive operations. Samples of risk-related cues may be deceptive or lopsided; too small a sample of student responses may be highly unreliable; the defendant’s sample of verbal utterances may be faked intentionally. Considered from a broader cognitive-ecological perspective, bounded rationality is not merely limited by memory restrictions or cognitive heuristics reflecting people’s laziness. Judgments and decisions in the real world are restricted, and enabled, by cognitive as well as ecological limitations and capacities. For instance, risk estimations – concerning the likelihood of contracting Covid-19 or being involved in car accidents – are not just shaped by wishful thinking or ease of retrieval (Block et al., 2020; Combs & Slovic, 1979). They also depend on a rational answer to the question: What sample affords an unbiased estimate of my personal risk of a disease or accident? Should it be a sample of the entire world population, a sample of people in my subculture, or a biographical sample of my own prior behavior? As the example shows, there is no alternative to devising a heuristic algorithm for risk estimation. Heuristics are sorely needed indeed, not just for the human mind but also for machine learning, and expert and robot systems (Fiedler et al., 2021).
1.2 Historical Review of Origins and Underpinnings of Sampling Approaches
The information transmission process that underlies judgments and decisions can be decomposed into two stages (see Figure 1.1): an ecological sampling stage and a cognitive processing stage. While traditional cognitive research was mainly concerned with the processing of stimulus cues within the individual’s mind (attention, perception, encoding, storage, retrieval, constructive inferences), the ecological input to the cognitive–decision stage reflects a logically antecedent sampling stage, which takes place in the environment. Judgment biases and decision anomalies that were traditionally explained in terms of retrieval or reasoning biases during the cognitive–decision stage may already be inherent in the stimulus input, as a consequence of biased sampling in the environment, before any cognitive operations come into play. Biased judgments and decisions can thus result from fully unbiased mental operations applied to biased sampling input. Conversely, unbiased and accurate estimates may reflect the high quality of information from certain environments.
1.2.1 Methodological and Meta-Theoretical Assets
The causal sequence (of sampling as an antecedent condition of cognitive processing) and the normative-statistical constraints imposed on the sampling stage jointly explain the beauty, fertility, and theoretical success of sampling approaches. As in psychophysics, an analysis of the samples of observations gathered in the information search process imposes strong constraints on the judgments and decisions informed by this input. Statistical sampling theory imposes distinct normative constraints (in terms of sample size, stochastic independence, etc.) on how inferences from the sample should be made. Both sources of constraints together lead to refined hypotheses that can be tested experimentally. Because the causal and statistical constraints are strong and clear-cut, the predictions tested in such experiments are cogent and nonarbitrary, and, not by coincidence, empirical findings often support the a priori considerations. Indeed, replication and validation do not appear to constitute serious problems for sampling research (Denrell & Le Mens, 2012; Fiedler, 2008; Galesic et al., 2018).
1.2.1.1 Recording the Sampled Input
Having a measure of the sampling input, in addition to the judgments and decisions that constitute the ultimate dependent measure, offers a natural candidate for a mediational account of cognitive inferences relying on the sample. Comparing the recorded sample to the ultimate cognitive measure provides a way to disentangle the two processes. A judgment or decision effect that is already visible in the actuarial sample must have its causal origin in the environment, before cognition comes into play. Evidence for a genuine cognitive influence (e.g., selective retrieval or an anchoring bias) requires demonstrating a tendency in the cognitive process that is not yet visible in the recorded sample.
Let us illustrate the methodological advantage of having a record of the sample with reference to recent research on sample-based impression judgments. Prager et al. (2018) had participants provide integrative likeability judgments of target persons described by samples of n = 2, 4, or 8 traits drawn at random from a universe defined by an experimentally controlled distribution of positive and negative traits. Each participant provided 36 impression judgments, nine based on random samples drawn from each of four universes of extremely positive, moderately positive, moderately negative, and extremely negative sets of traits, selected in careful pilot testing. Across all participants and trials, impression judgments were highly predictable from the recorded samples of traits. Not only the positive versus negative valence and extremity of the universe from which the stimulus traits were drawn, but also the deviations of the random samples from the respective universe strongly predicted the ultimate impression judgments. Consistent with Bayesian updating principles, impression extremity increased with increasing n. Altogether, these findings provided strong and regular support for the (actuarial) stimulus sample as a major determinant of person impression (Asch, 1946; Norton et al., 2007; Ullrich et al., 2013).
However, in spite of their close fit to the sampled input, the impression judgments were also highly sensitive to the structure of the environment, specifically, the diagnosticity of the information. The diagnosticity of a trait is determined by the covariation of features in the environment and can be defined in the same way as a likelihood ratio in Bayesian updating; a trait is diagnostic for a hypothetical impression (e.g., for the hypothesis: likable person) to the extent that it is more likely to occur in a likable than in a nonlikable person.
Holding the valence scale value of the sampled traits constant, diagnostic traits exerted a stronger influence on person judgments than nondiagnostic traits. Diagnosticity was enhanced if a trait was negative rather than positive (Rothbart & Park, 1986); if a trait referred to negative morality or positive ability rather than positive morality or negative ability (Fiske et al., 2007; Reeder & Brewer, 1979); if a trait was infrequent rather than frequent (Prager & Fiedler, 2021); or if a trait’s distance from other traits in a semantic network was high (Unkelbach et al., 2008). However diagnosticity was operationalized, the resulting impression of a target person was not fully determined by the average valence scale value of the traits recorded in a sample but depended on the diagnosticity of the sampled traits. Adding a diagnostic trait had a stronger impact on a growing impression than adding a nondiagnostic trait of the same valence.
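To make the likelihood-ratio notion of diagnosticity concrete, the following minimal Python sketch (our illustration with hypothetical numbers, not data from the studies cited above) shows how a single trait updates the probability of the hypothesis "likable person" more strongly the larger its likelihood ratio is.

```python
# Hedged illustration with made-up likelihood ratios; the function implements
# standard Bayesian updating in odds form.

def posterior_prob_likable(prior: float, likelihood_ratio: float) -> float:
    """Update P(likable) after one trait with LR = P(trait | likable) / P(trait | not likable)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Starting from an indifferent prior of .50, a diagnostic trait (LR = 4) moves the
# impression much further than a nondiagnostic trait of the same valence (LR = 1.5).
print(posterior_prob_likable(0.5, likelihood_ratio=4.0))   # 0.80
print(posterior_prob_likable(0.5, likelihood_ratio=1.5))   # 0.60
```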
Further evidence of how people actively interpret the observed samples will be provided later. Suffice it here to point out the advantage of a research design with a twofold measure: of the sampling input on one hand and of the cognitive process output on the other hand. Let us now turn to the second major asset of the sampling-theory approach, namely, the existence of normative constraints imposed by statistical sampling theory on the information transmission process. To the extent that judgments and decisions are sensitive to such distinct normative constraints, which often exceed intuition and common sense, this would provide cogent evidence for the explanatory value of sampling theories.
1.2.1.2 Impact of Sampling Constraints
The keywords in the lower left of Figure 1.1 refer to a number of subtle sampling constraints, which are firmly built into the probabilistic environment. For instance, in a world in which many frequency distributions are inherently skewed, probability theory constrains the probability that a sample reveals a dominant trend, for instance, that a sample reflects the relative frequency of lexical stimuli, animals, or causes of death. Skewed distributions are highly indicative of moral and material value. Rare objects tend to be more precious than common things (Pleskac & Hertwig, 2014); scarcity increases the price of economic goods. Abnormal or norm-deviant behaviors are less frequent than normal or norm-abiding behaviors. Likewise, skewness is indicative of psychological distance. Frequently encountered stimuli more likely belong to temporally, spatially, or socially close and probable origins than infrequent stimuli, which are indicative of distant origins (Bhatia & Walasek, 2016; Fiedler et al., 2015; Trope & Liberman, 2010). In any case, normal variation in distance, density, resolution level, and perspective can open up a variety of environmental information.
Small samples from skewed distributions are often unrepresentative of the underlying distribution and this can lead to seemingly biased judgments. Suppose, for example, that the population probability of a success is 0.9. In a small sample of five trials, an agent will most often observe a proportion larger than 0.9; the probability of observing five successes in five trials is 0.6. It is 0.77 if the probability of a success is 0.95. Thus, if judgments are sensitive to experienced proportions, most agents will overestimate the success probability. To be sure, agents may be more sophisticated and understand that small samples can be unrepresentative. Suppose an agent believes that all probabilities between zero and one are equally likely (a uniform prior distribution) and uses this information in combination with the observed proportion. Such a Bayesian agent will estimate the true success probability to be lower than the true probability of 0.9. Having observed five successes in five trials, this Bayesian agent will estimate the success probability to be only 0.86. Thus, normative-statistical laws not only imply that sample proportions can deviate from true probabilities in the population but also specify how sample-based estimates can be expected to deviate from population parameters.
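To make the arithmetic explicit, the following minimal Python sketch (an illustration of ours, not part of the original studies) reproduces the figures in this paragraph: the probability of observing five successes in five trials and the naïve versus Bayesian (uniform-prior) estimates after such a sample.

```python
# Hedged sketch: a "naive" agent reports the raw sample proportion; a Bayesian
# agent with a uniform Beta(1, 1) prior reports the posterior mean (k + 1) / (n + 2).

def naive_estimate(successes: int, n: int) -> float:
    """Raw sample proportion."""
    return successes / n

def bayesian_estimate(successes: int, n: int) -> float:
    """Posterior mean of p under a uniform prior."""
    return (successes + 1) / (n + 2)

n = 5
print(f"P(5 successes in 5 trials | p = 0.90) = {0.90 ** n:.2f}")   # ~0.59
print(f"P(5 successes in 5 trials | p = 0.95) = {0.95 ** n:.2f}")   # ~0.77
print(f"naive estimate after 5/5 successes    = {naive_estimate(5, n):.2f}")     # 1.00
print(f"Bayesian estimate after 5/5 successes = {bayesian_estimate(5, n):.2f}")  # ~0.86
```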
It is no wonder then that decisions about risk-taking differ substantially between settings where the winning probability of a lottery is described numerically versus when a sample of outcomes is experienced extensionally – the so-called description–experience gap (Hertwig et al., 2004). Statistical sampling theory as an integral part of a cognitive-ecological approach can therefore offer a viable explanation of many findings related to the description–experience gap (Fox & Hadar, 2006; Rakow et al., 2008).
The importance of skewed sampling distributions also inspired a prominent finding by Kareev (2000). Assuming an actually existing (population) correlation ρ, the majority of observed correlations r in restricted samples from this population is higher than ρ. (Undoing this asymmetry of the sampling distribution of r statistics is the purpose of the common Fisher z transformation.) Kareev (2000) showed that the tendency of r to exaggerate existing correlations reaches its maximum at small sample sizes in the range of the human memory span, suggesting that evolution may have prepared Homo sapiens with a memory span that maximally facilitates the extraction of existing regularities. Regardless of the viability of Kareev’s vision (see Juslin & Olsson, 2005, for a critical note), it clearly highlights the fascinating ability of sampling theories to inform creative theorizing in a cognitive-ecological context.
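The skewed sampling distribution of r is easy to demonstrate by simulation. The following Python sketch (our illustration; the sample size and ρ are arbitrary choices, not Kareev's original parameters) draws many small samples from a bivariate normal population and shows that the median sample correlation exceeds the population value.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n, reps = 0.3, 7, 20_000
cov = [[1.0, rho], [rho, 1.0]]

rs = np.empty(reps)
for i in range(reps):
    sample = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # small sample of n pairs
    rs[i] = np.corrcoef(sample[:, 0], sample[:, 1])[0, 1]

print(f"population rho = {rho}")
print(f"median sample r = {np.median(rs):.3f}")                    # slightly above rho
print(f"share of samples with r > rho = {np.mean(rs > rho):.2f}")  # above 0.5
```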
Let us now move from unsystematic sampling error (around p or ρ) derived from statistical sampling theory to systematic sampling biases lying outside the domain of statistics. Some behavioral laws are so obvious and universal that one hardly recognizes their statistical consequences. For example, Thorndike’s (1927) law of effect states that responses leading to pleasant outcomes are more likely to be repeated than responses leading to unpleasant outcomes. In other words, organisms are inclined to sample more from pleasant than from unpleasant sources. A hot-stove effect motivates organisms to stop sampling from highly unpleasant sources (e.g., a restaurant where one got sick). Such a simple and self-evident preference toward hedonically positive stimuli was sufficient to inspire a series of highly influential simulations and experiments that opened up completely novel perspectives on behavior regulation (Denrell, 2005; Denrell & Le Mens, 2007, 2011; Fazio et al., 2004). The tendency to stop sampling from negative targets and to more likely continue sampling from positive targets implies that negative first impressions are less likely to be corrected than positive first impressions. Long-term negativity biases may be the result of such a simple and incontestable hedonic bias.
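A toy simulation conveys the logic. The sketch below (our own minimal illustration, not the cited authors' models) lets an agent resample a target with a probability that increases with the current impression; because negative impressions curtail further sampling, final impressions end up systematically below the true value.

```python
import math
import random

def final_impression(true_value: float = 0.0, noise: float = 1.0, trials: int = 50) -> float:
    """Impression formed by hedonic sampling: resampling is more likely after positive experiences."""
    impression = None
    n_obs = 0
    for _ in range(trials):
        # Always take a first look; afterwards, resample with a logistic function of the impression.
        p_sample = 1.0 if impression is None else 1.0 / (1.0 + math.exp(-2.0 * impression))
        if random.random() < p_sample:
            outcome = random.gauss(true_value, noise)
            n_obs += 1
            impression = outcome if impression is None else impression + (outcome - impression) / n_obs
    return impression

random.seed(1)
finals = [final_impression() for _ in range(5_000)]
print(f"mean final impression = {sum(finals) / len(finals):+.2f} (true value = 0.00)")  # negative
```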
Another effect that has been prematurely taken for a cognitive bias comes from the seminal work on heuristics and biases by Tversky and Kahneman (1973). In their famous availability heuristic, they postulate a cognitive bias to overestimate the frequency or probability of easily retrievable events. Thus, a bias in frequentist judgments is attributed to a cognitive bias to overrate information that easily comes to one’s mind. Yet the bias may already be apparent in the sampling stage, well before a retrieval bias comes into play, a possibility that was acknowledged from the beginning but has hardly ever been examined. For instance, the erroneous tendency to rate murder as more frequent than suicide, to overrate lightning and to underrate coronary disease as causes of death, need not reflect a retrieval bias but a bias in newspaper coverage (Combs & Slovic, 1979). Thus, prior to cognitive retrieval processes, newspapers or the information environment are more likely to report on murder than on suicide, on lightning than on coronary disease, and this preexisting sampling bias may account for availability effects. Even when every cause of death reported in the media is equally likely to be retrieved, biased media coverage may well account for biased probability estimates. A critical examination of the literature reveals, indeed, that countless experiments on the availability heuristic have provided little evidence for memory retrieval proper.
1.2.2 Properties of Proximal Samples
So far, we have seen that merely analyzing the statistical properties or the hedonic appeal of the environment opens up alternative explanations of various psychological phenomena as well as genuine innovations that would never have been discovered without the sampling perspective. The following discussion of the properties of the proximal samples implanted by the distal world leads to further insights into the beauty, fertility, and explanatory power of the sampling approach.
One important and common property of observations based on noisy samples is regression to the mean. If the observed value X′ is higher than the mean of X, then the true value X is likely lower than X′, but if the observed value X′ is lower than the mean of X, then the true value X is likely higher than X′ (see Figure 1.2; for precise definitions see Samuels, 1991, and Schmittlein, 1989). This property holds for many distributions and implies that observed values diverge regularly and in predictable ways from true values. Regressiveness increases with the amount of noise, or error variance. To illustrate this, consider two normally distributed random variables, X′ and X, where X′ is a noisy observation of X (i.e., X′ = X + e, where e is an error term). Suppose, for simplicity, that the variables are standardized to z scores with zero mean and unit variance: z_X = (X − Mean_X)/SD_X. Then the expected z_X given an observed standardized value z_X′ is E[z_X | z_X′] = r_X,X′ · z_X′. Whenever the correlation between X′ and X is less than 1, the best estimate of z_X given z_X′ is less extreme than z_X′. Specifically, if the correlation is r_X,X′ = 0.50, the expected population values are only half as extreme as the observed values; if r_X,X′ = 0.75, the expected population values shrink by one-fourth to 75 percent of the observed deviation from the mean. Thus, observed values diverge from expected values in predictable ways.
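The shrinkage formula is easy to verify numerically. The following Python sketch (our illustration under the stated assumptions of standard-normal X and independent noise) simulates noisy observations and compares the mean true z score among extreme observations with the prediction r · z_X′.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
x = rng.standard_normal(n)            # true values X
x_obs = x + rng.standard_normal(n)    # noisy observations X' = X + e

# Standardize both variables to z scores
z_x = (x - x.mean()) / x.std()
z_obs = (x_obs - x_obs.mean()) / x_obs.std()
r = np.corrcoef(z_x, z_obs)[0, 1]     # here about 1/sqrt(2), i.e. roughly 0.71

# Among observations with z_obs near +2, the true values are regressively less extreme
band = (z_obs > 1.9) & (z_obs < 2.1)
print(f"r = {r:.2f}")
print(f"mean true z given observed z near 2: {z_x[band].mean():.2f} (predicted about {r * 2:.2f})")
```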

Figure 1.2 Expected (population) probability p as a function of observed sample proportion P, at two sample sizes n, 5 and 20 (assuming a uniform prior for p). An agent who myopically reports the sample proportion P as their estimate of probability p will make estimates that are too extreme, as indicated by the deviation from the identity line. An agent who takes the sample size into account dampens the observed proportion according to n.
While regression is not a bias but a reflection of noise in the probabilistic world, it can create what appears to be a bias. An agent who reports the raw observed sample value as their estimate – thus ignoring the effects implied by regression to the mean – will make systematically too extreme judgments, as compared to the long-run expected value. Alternatively, an agent may take expectable regression effects into account and make estimates that are systematically less extreme than the observed sample values. Because regression effects decrease with increasing reliability, and reliability increases with sample size n, it follows that small samples should inform more regressive (“dampened”) estimates than larger samples drawn from the same population.
Figure 1.2 illustrates this by plotting the expected population probability p, conditional on the observed sample proportion P at sample size n, assuming that any value of p is equally likely prior to the observation (a uniform prior for the population probability p). If an agent naïvely takes the observed sample proportions P as estimates of p, the resulting estimates of p will be profoundly too extreme, and more so for smaller samples. A more sophisticated agent may take the regression effect into account and make estimates of p that are less extreme than P, as captured by the function for the relevant sample size in Figure 1.2, making the estimates a function both of the observed sample proportion P and the sample size n. Both the “naïve” or “myopic” use of the sample and the more sophisticated use of the sample content P make clear and identifiable a priori predictions for the judgments.
The regression slopes of less than 1 in Figure 1.2 simply reflect that the real world hardly ever provides us with perfect correlations. Under most reasonable conditions, a larger sample is less affected by regression and thus motivates more extreme and informative estimates. The difference in regression slopes in Figure 1.2, which can be termed differential regression, implies that even normatively correct estimates can vary with sample size (Costello & Watts, 2019).
The principle of differential regression – taking into account that estimates based on small samples are less reliable and more regressive than estimates based on large samples – provides alternative accounts of a number of alleged cognitive biases, in the absence of any cognitive bias, simply because small and large samples differ in regressiveness. For instance, to rationally justify the abundantly cited phenomenon of confirmation biases, it is sufficient to assume that the same high rate of, say, 80 percent confirmation holds for one’s favorite or focal hypothesis H_focal as for a rival hypothesis H_rival, but that scientists, consumers, or politicians typically gather larger samples of evidence for H_focal than for H_rival.
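A worked example makes the point. The Python sketch below (our numbers, assuming the uniform-prior dampening from Figure 1.2) applies the same 80 percent confirmation rate to a focal hypothesis tested with a large sample and to a rival hypothesis tested with a small sample; the dampened estimate is higher for the focal hypothesis, without any biased processing.

```python
def dampened_estimate(successes: int, n: int) -> float:
    """Posterior mean of a confirmation rate under a uniform prior (cf. Figure 1.2)."""
    return (successes + 1) / (n + 2)

# The same true confirmation rate of 80 percent, but unequal amounts of evidence
focal = dampened_estimate(successes=32, n=40)   # large sample gathered for H_focal
rival = dampened_estimate(successes=4, n=5)     # small sample gathered for H_rival

print(f"dampened estimate for H_focal: {focal:.2f}")   # ~0.79
print(f"dampened estimate for H_rival: {rival:.2f}")   # ~0.71
```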
Similarly, in a norm-abiding world, in which norm-deviant behaviors are the exception, the same high rate of positive behaviors may hold for oneself as for other people, but because self-referent experience samples are typically larger than other-referent samples, the difference in sample size justifies the so-called self-serving bias toward more positive judgments of the self than of others. The same argument holds for ingroup favoritism; larger samples about familiar ingroups than about remote outgroups enable us to develop more positive impressions of ingroups than of outgroups (Fiedler et al., 1999; Linville et al., 1989).
The assumption of larger self-related than other-related samples was the inspiration of a refined theory of overconfidence proposed by Moore and Healy (2008). While regression is ubiquitous, such that performance is overestimated for difficult and underestimated for easy tasks, this “hard–easy effect” is more pronounced for others than for the self, because other-referent samples tend to be smaller than self-referent samples. Consequently, people claim to be better than others on easy tasks but presume to be worse than others on difficult tasks.
1.2.2.1 Theoretical Progress and Explanatory Power
This sketch of research on regressiveness highlights the explanatory power and the theoretical progress of sampling approaches. Their explanatory potential was impressively demonstrated in a twofold manner. On one hand, sampling perspectives have brought about enlightening and challenging alternative explanations of a growing number of well-established phenomena that had been traditionally explained in terms of intrapsychic (cognitive or motivational) principles (Fiedler, 2014). On the other hand, modeling and theorizing from a sampling perspective have inspired the generation of completely novel predictions and implications that were never anticipated before the new theoretical sampling perspectives were articulated and formalized.
1.2.3 Strategies of Information Search
Although the insight that many apparent cognitive biases are predetermined in the environment is at the heart of many innovative theoretical approaches (e.g., Gigerenzer et al., 2012), examining individuals’ information search strategies is equally enlightening. As indicated by the keywords in the lower right part of Figure 1.1, the cognitive-ecological interplay depends on a genuine interaction between the environment’s properties and the individual’s active search strategies. As evident from the work of Denrell and Le Mens (2007, 2011, 2017; Chapter 4 in the present volume), the individual’s sampling preferences have a profound influence on sampled observations and their impact on judgments and decisions. It is worthwhile considering at least three sources of variation in social information search. In addition to hedonic influences, individuals are subject to majority influence (conformity) and interdependent sampling. Let us briefly discuss all three issues in turn.
To the extent that agents adhere to the law of effect (Thorndike, 1927), they follow a hedonic motive to sample more readily from pleasant than from unpleasant or painful sources (see Figure 1.3). As a consequence of a hot-stove effect (Denrell & March, 2001), people fall prey to long-term negativity biases. Thus, if negative experience causes a sufficiently fast and sudden drop in the likelihood of sampling (see the abrupt drop of the curve in Figure 1.3), there will be no (or very little) subsequent opportunity to correct for a misleading negative impression. In contrast, if sampling of positive information is only slightly more likely than sampling of negative information, then a larger sample of positive stimuli may cause a polarization of positive evaluations (Fiedler et al., 2013).

Figure 1.3 Increasing likelihood of sampling at t + 1 as a function of valence experienced at t
The latter contention already suggests that assumptions about sampling rates can inform refined theories. Consider the reinforcement-learning assumption about the acquisition of illusory correlations between two hedonically charged variables (Denrell & Le Mens, 2011), say, political orientation and humor. Let us assume that people only continue to interact with people who are liberal or humorous, and that they stop interacting with people who are both conservative and humorless. Given such a disjunctive sampling scheme, they update all combinations of political orientation and humor except the combination conservative and humorless, which cannot be corrected. A disjunctive scheme will thus create an illusory positive correlation between being conservative and being humorless. In contrast, if people follow a conjunctive scheme and only continue interacting with people who are both liberal and humorous, then only this combination can be corrected. All other combinations go uncorrected, causing an illusory negative correlation between being liberal and humorous.
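The following toy simulation (our own minimal sketch, not the cited authors' model) illustrates the disjunctive case: two attributes are truly uncorrelated, but impressions of people observed as both conservative and humorless are never corrected, which is enough to produce an illusory positive correlation in the final beliefs.

```python
import numpy as np

rng = np.random.default_rng(7)
n_targets = 100_000
accuracy = 0.75   # probability that a single first observation matches the truth

# Truly independent binary attributes: [conservative, humorless]
truth = rng.integers(0, 2, size=(n_targets, 2))
flip = rng.random((n_targets, 2)) > accuracy
first_impression = np.where(flip, 1 - truth, truth)

# Disjunctive rule: stop interacting only if observed as conservative AND humorless;
# everyone else is sampled further, so those impressions are corrected to the truth.
stop = (first_impression[:, 0] == 1) & (first_impression[:, 1] == 1)
belief = np.where(stop[:, None], first_impression, truth)

r = np.corrcoef(belief[:, 0], belief[:, 1])[0, 1]
print(f"true correlation = 0.00, believed correlation = {r:+.2f}")   # positive
```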
Even in the absence of any hedonic preferences for pleasant over unpleasant stimuli, social individuals are inevitably subject to conformity effects; that is, they are likely exposed to majority influences more than to minority influences. As delineated by Denrell and Le Mens (2017), “When people learn about the alternatives from their own experiences but tend to adopt the behaviors of others, they will mistakenly learn to believe that a popular alternative is superior to a better, but unpopular alternative.” In a similar vein, mutual social influence and contagion can be due to the co-sampling experience of people who jointly live in the same environment, like yoked controls. Regardless of the hedonic value and validity of the information sampled by such a yoked pair (of colleagues, classmates, or consumers of the same media), their judgments will reflect the same sampling influences (Denrell & Le Mens, 2007).
1.2.4 Specifying Computational Assumptions
Under the naïve-sampling assumption that decision makers take a given sample at face value, an open question is what specific aspect of a given sample determines the sample-based decision. The decision-by-sampling approach (Stewart et al., 2006), for instance, makes the strong assumption that decisions are sensitive to the relative rank rather than the expected outcome value of decision options. As a consequence, in a right-skewed distribution of (predominantly low) prices, a focal price of $100 occupies a higher percentile rank and therefore appears more expensive than the same price with a lower percentile rank in a left-skewed distribution of (predominantly high) prices. Likewise, gaining $100 can subjectively amount to less than losing $100 if the distribution of credits on one’s bank account is left-skewed (predominantly large credits) whereas the distribution of debits is right-skewed (predominantly small debits; Walasek & Stewart, 2015).
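The rank-based intuition can be illustrated with a few hypothetical prices. The Python sketch below (our made-up comparison samples) shows that the same $100 occupies a high percentile rank among mostly low prices but a low rank among mostly high prices.

```python
def percentile_rank(value: float, comparison_sample: list) -> float:
    """Share of sampled comparison values that do not exceed the focal value."""
    return sum(v <= value for v in comparison_sample) / len(comparison_sample)

right_skewed_prices = [20, 30, 40, 50, 60, 80, 120, 300]        # predominantly low prices
left_skewed_prices = [60, 250, 300, 340, 360, 380, 390, 400]    # predominantly high prices

print(f"rank of $100 among predominantly low prices:  {percentile_rank(100, right_skewed_prices):.2f}")  # 0.75
print(f"rank of $100 among predominantly high prices: {percentile_rank(100, left_skewed_prices):.2f}")   # 0.12
```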
A computational model of sample-based decision processes must rely on distinct assumptions about which specific statistical sample properties are supposed to determine cognitive inferences. Are decision makers sensitive to the percentile rank, the expected value, the maximal or minimal outcomes of a sample, or the items experienced in primacy or recency positions? Are they sensitive to zero-order information or to dynamic changes described by the first or second derivative, to the first (mean), second (dispersion), third (skewness), or fourth moment (kurtosis) of a sampled distribution?
There are countless ways of specifying computational models as a function of multiple factors: sampling sources, errors, and biases, sample size and reliability, search strategies, hedonic motives, and cognitive assumptions about sample statistics. The variety is immense but – given an actuarial sample measure and a set of powerful normative rules – even refined assumptions can be tested empirically.
To illustrate this asset, consider the fascinating issue of stopping rules or sample-truncation criteria. When will consumers, diagnostic interviewers, or personnel managers stop sampling and feel that they have gathered enough information to make a decision or choice? Do they stop when the sample has exceeded some threshold, when a sample has settled to a stable value, when a critical sample size is reached, or when time pressure or impatience has reached a limit (Ackerman, 2014; Prager & Fiedler, 2021)? An analysis of actuarial records of self-truncated sequences of sampled information enables researchers to develop taxonomies and to test distinct hypotheses about truncation decisions.
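As a minimal sketch of how such truncation criteria can be formalized and compared (our own toy assumptions, not the paradigms of the cited studies), the following Python code contrasts an evidence-threshold rule with a fixed-sample-size rule on a stream of positive and negative observations.

```python
import random

def threshold_rule(p_positive: float, threshold: int = 3, cap: int = 19):
    """Sample +1/-1 items until the evidence reaches +/- threshold (or a hard cap)."""
    evidence, n = 0, 0
    while abs(evidence) < threshold and n < cap:
        evidence += 1 if random.random() < p_positive else -1
        n += 1
    return n, evidence > 0

def fixed_n_rule(p_positive: float, n: int = 9):
    """Always sample exactly n items, then decide by majority."""
    evidence = sum(1 if random.random() < p_positive else -1 for _ in range(n))
    return n, evidence > 0

random.seed(3)
for rule in (threshold_rule, fixed_n_rule):
    results = [rule(0.7) for _ in range(10_000)]
    mean_n = sum(size for size, _ in results) / len(results)
    accuracy = sum(decision for _, decision in results) / len(results)
    print(f"{rule.__name__}: mean sample size = {mean_n:.1f}, share of correct decisions = {accuracy:.2f}")
```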
1.3 New Developments in Recent Sampling Research
The sampling research we have reviewed so far is roughly twenty years old. Precursors had already appeared in the 1960s and 1970s (Edwards, 1965; Laughlin & Ellis, 1986; Peterson & Beach, 1967). Yet, the cognitive-ecological framework of Figure 1.1 and the notion that biased judgments and decisions may originate in unbiased processes became the focus of rapidly increasing research in the last two decades. At the heart of this book project are the goals to identify the conceptual challenges current sampling research is facing, and to elaborate on how to interpret and extrapolate the most recent developments. What major trends in research and theorizing are visible on the horizon, after more and more cognitive psychologists and students of adaptive cognition have discovered the beauty and fertility of sampling theories?
1.3.1 Conceptual Challenges
1.3.1.1 The Limiting Conditions on “Metacognitive Myopia”
The earlier sampling volume by Fiedler and Juslin (2006) was largely embedded in the post-Tversky and Kahneman landscape of documenting and explaining fallacious judgments and decisions. In the sampling approach, this often (but not always) takes the form of an emphasis on the “metacognitive myopia,” or “naïvety,” with which people interpret and use the proximal samples they experience. In other words, they accurately extract and describe the content of the samples they experience, but they are oblivious to – or take insufficient account of – biases that exist in the samples. This is still part of the explanatory toolbox of the sampling approach, but the toolbox has expanded.
The chapters of this new volume also identify and address conditions where people’s behavior is explained by their efforts to adaptively correct for sampling biases, demonstrating nontrivial abilities. In the Social Sampling Model (Chapter 16 by Pachur & Schulze), for example, people’s judgments actively counteract the homogeneity of their proximal social circle of friends and family, in effect correcting for the biased proximal sample of people that one is likely to meet on a day-to-day basis. As noted above, differences in the judgments of in- and outgroups can be explained by an inertia of beliefs influenced by small samples that can be functionally accounted for as Bayesian updating (Fiedler, 2000).
The challenge for sampling research – and perhaps the ultimate challenge for any research program in psychology – is to develop it beyond isolated examples or “existence proofs” to a systematic understanding of the limiting conditions for the phenomena, allowing us to predict and understand both when and why people sometimes are “myopic” with regard to the samples, yet sometimes engage in sophisticated corrections of the sample content. In the research tradition after Tversky and Kahneman on heuristics and biases, this meta-theoretical role has been carried by so-called dual systems theories (e.g., Evans & Stanovich, 2013), which purport to account for both our (allegedly mainly analytic) competencies and our fallible judgments. As illustrated by the research on dual systems theories, this step from existence proofs to theoretical systematicity, in order to account for both cognitive competencies and limitations, is a nontrivial challenge for any research program in psychology. The sampling approach is already making progress toward this goal, and the pursuit of this systematicity will be crucial in the years to come.
1.3.1.2 Beyond the Idealizations of Probability
Theorizing in cognitive psychology (not to speak of its methods) is entrenched in idealized concepts drawn from probability theory and statistics, like “population,” “reference class,” “random sample,” “representative sample,” “unbiased sample,” “independent events,” etc., that are conceptually useful for theorizing, but have exact meanings only under limited and specific conditions. Many results in statistics hold under the assumption of an infinite and independent replication of the same stochastic event (“in the limit”). This is an idealized conception of the world.
Without a clear conception about what the relevant population actually is, or what “random sampling” would concretely mean in a real-world situation, theorizing based on probability theory risks accumulating a “conceptual deficit,” an increasing abyss between the theories and the processes they purport to describe, which sooner or later has to be “cashed in.” Indeed, if there is anything we know, it is that information search is not random and that the situations we encounter are not representative random samples from well-defined populations of events. Fortunately, the research on Bayesian sampling algorithms (Zhu et al., Chapter 20 this volume) has started to address this limitation by exploring sampling procedures from machine learning as candidates for realistic and psychologically plausible sampling algorithms.
This shift from idealized to computationally realistic sampling algorithms has indeed been forced by attempts in machine learning to apply probability theory to real-world problems and these are, of course, the problems that people have to confront. The computationally most demanding sampling algorithm is that of random sampling, routinely assumed in applications of probability theory, typically making unreasonable demands on knowledge of the complete outcome spaces of the world. By contrast, people (and machines) typically need to engage in “local,” successive explorations of the sample space, which will gradually allow them to build more and more correct representations of the distributions in the world. Sundh et al. (Chapter 21 this volume) review how taking such psychologically plausible sampling procedures into account provides a new understanding of important psychological phenomena. An important challenge for future research is to continue this quest to understand sampling, not only as it occurs in the context of dice and urns, but in the exploration and learning of real environments.
1.3.2 One, Two Decades Later …
… after the new sampling perspectives were introduced, contemporary cognitive and social psychology and behavioral economics are still dominated by “mentalistic approaches,” explaining psychological phenomena by postulating hypothetical mental mechanisms driven by motives, desires, and cognitive biases. The sampling approach does not ignore the existence and the potential importance of such mental constructs. Both approaches can be combined in fruitful ways (as in Denrell and Le Mens’s, 2012, notion of hedonic sampling or in Oaksford and Chater’s, 1994, work on positive testing). However, crucially, the core of sampling theory construction does not assign the primary role to mentalistic constructs. The basic meta-theoretical assumption is, rather, that an analysis of the statistical distribution, the contingent structures, and the semiotic properties of the stimulus samples affords a most promising way to understand the chances and the limits of cognitive information processing. The default assumption is that people are principally sensitive to sampled information; they are equipped with sensory and cognitive tools to accurately assess many different aspects of these samples. As in psychophysics, the point here is not to deny that people sometimes misrepresent the content or transform stimulus attributes in nonlinear fashion.
From this gradually unfolding meta-theoretical perspective, and in the light of an increasing repertoire of sampling paradigms, one can analyze the current trends and speculate about future developments in behavioral science. Frontiers of cognitive-ecological research seem to be moving in two major directions. On one hand, functional-level approaches have led to new perspectives on a variety of experimental paradigms, resulting in theoretical progress and many novel insights. On the other hand, pioneering researchers have unfolded the underpinnings of sampling mechanisms and outlined the gist of refined computational models of adaptive behavior. The following preview of the contents of this volume, beyond the already outlined chapters, testifies to both exciting developments.
1.3.3 Preview of All Chapters of the Present Volume
Part I, Historical Review of Sampling Perspectives and Major Paradigms, provides an introduction to the origins and meta-theoretical foundations of the sampling approach. Starting with the historical review by Fiedler, Juslin and Denrell in Chapter 1 (that you are currently reading), it also includes state-of-the-art reviews of three of the most influential frameworks. Specifically, Brown and Walasek outline the decision-by-sampling model in Chapter 2; the description–experience gap is the focus of Chapter 3 by Pleskac and Hertwig, and Chapter 4 by Denrell and Le Mens is devoted to research inspired by the hot-stove effect.
Rather than representing competing models or theoretical rivals, these approaches complement each other in providing theoretical ground for different facets of sample-based judgment and decision-making, emphasizing theoretical and practical implications of different aspects of an overarching sampling framework. While decision-by-sampling proposes a psychophysical model that focuses on the percentile rank of stimuli or thresholds in a sample rather than their scale values on underlying attribute dimensions, the description–experience gap is concerned with the encoding format of a sample, which can be either experienced extensionally, as a series of elementary raw observations, or described in terms of numerical or linguistic summary statistics. Decision by description relies on explicitly stated risks (known probabilities), whereas decision by experience relies on uncertainty (unknown probabilities that have to be inferred from a sample of raw events). The hot-stove effect, finally, highlights the active role played by adaptive agents in information search. The tendency to avoid sampling from unpleasant sources implies an asymmetric sampling process that favors positive over negative stimuli.
The four chapters of Part II constitute attempts to illuminate the underlying Sampling Mechanisms, that is, the black box of mental processes that mediate between causes and consequences of sample-based inferences. Mechanistic research is typically juxtaposed to functionalist research that merely clarifies the functional relationships between antecedent and consequent conditions, independent of the black box containing the mediating algorithms.
Chapter 5, titled The J/DM Separation Paradox and the Reliance on the Small Samples Hypothesis, deals with mechanisms of the description–experience gap introduced in Chapter 3. The authors, Erev and Plonsky, try to find a mechanistic account in the analysis of apparently paradoxical findings from three paradigms: an apparent over-sensitivity to rare events in probability assessment and in decisions from description, but an apparent under-sensitivity to rare events in decisions from experience. The authors note that in the first two paradigms, judgment (J) and decision-making (DM) are separable, whereas in the latter paradigm, J and DM become integral parts of an overarching process. In other words, decisions from experience encompass both judgment of the probability from experience (rather than reading of a stated number) and decision between the alternatives. The authors point out that all three findings of the paradox can be parsimoniously derived from the assumption that the “mere presentation” of an outcome in the task increases its perceived probability, together with a robust reliance on small samples. For example, we may only be able to consider a small sample of candidate diseases in a medical diagnosis task, but the mere mentioning of a particular disease in the task formulation immediately tends to raise the salience of this specific possibility.
Chapter 6, in turn, offers a refined sampling mechanism of the evaluative-conditioning phenomenon, conceptualizing Sampling as Preparedness in Evaluative Learning. Drawing on their own recent experimental research, Hütter and Niese delineate a completely novel theory of conditioning, giving up the restriction that conditioning involves a fixed sequence of experimentally controlled pairings of conditioned stimuli (CS) and unconditioned stimuli (US). When conditioning takes place as an active sampling task, such that participants can themselves click on what stimulus face they want to include as a CS paired in a trial with pleasant versus unpleasant photographs (IAPS pictures; Lang et al., 1988) serving as US, conditioning turns out to depend crucially on the learners’ sampling strategy. A clear-cut preference to sample predominantly CSs paired with positive USs served to charge CSs with positive valence. Frequently choosing a face was as effective as pairing a face with positive USs. Merely sampling a CS face functioned like a reinforcer. Because most conditioning processes under natural conditions allow organisms to selectively attend to distinct stimuli, this stimulus–response mechanism constitutes an important innovation in the conditioning literature.
Chapter 7 examines how beliefs about how a sample was selected impact judgments from the sample. In The Dog that Didn’t Bark: Bayesian Approaches to Reasoning from Censored Data, Hayes, Desai, Ransom, and Kemp review research showing that judgments differ depending on whether people believe the sample was randomly or selectively chosen. Suppose you are shown a sample of small rocks that contain the mineral “plaxium.” Do large rocks also contain plaxium? Participants who are told that only small rocks were sampled are more likely to believe that large rocks contain plaxium than participants who are told that any rock containing plaxium was sampled, a pattern consistent with Bayesian updating. Hayes et al. review past research demonstrating these effects and show that similar effects occur in more complex scenarios. What are the cognitive mechanisms involved in such inferences? Hayes et al. show that it is important that participants are made aware of how the sample was constructed before they see the data, suggesting that sampling information impacts encoding and memory organization. Furthermore, there are significant individual differences in how well individuals take sample bias into account.
In Chapter 8, Unpacking Intuitive and Analytic Memory Sampling in Multiple-Cue Judgments, Collsiöö, Sundh, and Juslin explore the cognitive mechanisms involved in one canonical form of sampling – the sampling of memory for similar previous instances or exemplars. While models assuming sampling of exemplars abound in the literature (including in this volume), the exact nature of the cognitive processes involved often remains elusive. The research reported in Chapter 8 demonstrates that both rule-based cognitive inferences and sampling from memory can come in both intuitive and analytic disguises, distinguishable by cognitive modeling.
Part III covers four chapters dealing with functional-level analyses of Consequences of Selective Sampling. In Chapter 9, Harris and Custers are concerned with persistent biases as a consequence of selective sampling that varies on the exploration versus exploitation dimension. The major take-home message points to Biased Preferences through Exploitation. Thus, in reward-rich environments, when both choices in a two-armed bandit task are rewarded frequently, participants continue to exploit an apparently superior choice option. In contrast, a reward-poor environment facilitates shifting to an exploration strategy as a precondition for noticing that a seemingly inferior option produces the same (or even more) payoff. This is why exploitation (in reward-rich situations) stabilizes transient biases whereas exploration entails a chance to get rid of cognitive biases and illusions.
Evaluative Consequences of Sampling Distinct Information is the focus of Chapter 10 by Alves, Koch, and Unkelbach. Granting that impressions, attitudes, and judgments necessarily rely on samples of behavioral information, they elaborate on the assumption that rare and diverse information is more diagnostic than common and normatively expected information. Because in many ecological contexts, negative information is less frequent and more diverse than positive information, resulting impressions, attitudes, and judgments often exhibit a clear-cut negativity bias.
Chapter 11 by Bott and Meiser deals with Information Sampling in Contingency Learning: Sampling Strategies and their Consequences for (Pseudo-) Contingency Inferences. In a pseudo-contingency illusion, participants infer a positive contingency between the prevalent values or the infrequent values of two uncorrelated skewed attributes X and Y. Although alignment of marginal distributions, without access to cell distributions, does not allow one to compute the mathematical contingency between two variables, people tend to infer such pseudo-contingencies. Upon scrutiny, however, this appears to be more than an arbitrary folly. A Bayesian analysis shows that alignment of the marginal distributions actually increases the posterior probability that there exists a positive contingency between the variables, thus providing a normative justification of inferring pseudo-contingencies.
In Chapter 12, Le Mens, Kovács, Avrahami, and Kareev examine the consequences of sampling decisions based on evaluations and opinions of others. Product reviews and ratings, based on the opinions of people who have tried a product, influence whether new consumers try the product. Le Mens et al. show that this leads to The Collective Hot-Stove Effect. Objects with low ratings are avoided and their ratings are less likely to be revised. As a result, negative ratings are more persistent than positive ratings are (positive ratings attract more consumers, who may have a less positive opinion). Le Mens et al. demonstrate this effect in data on Amazon Reviews and in an experiment where participants can use ratings to decide which objects to sample.
Truncation and Stopping Rules afford the joint topic of three mechanistic accounts covered in Part IV. Whenever the length and content of stimulus samples are not predetermined experimentally but depend on decision makers’ feeling of having gathered sufficient information and of being ready to make a decision eventually, the stopping rule becomes a crucial aspect of the cognitive mechanism. In Chapter 13, titled Sequential Decisions from Sampling: Inductive Generation of Stopping Decisions Using Instance-Based Learning Theory, Gonzalez and Aggarwal review prior research that explains when people stop sampling and offer a new model that fits the data well. Their model builds on instance-based learning theory but adds a process where decision makers track the change in prediction errors. The decision to stop is based on a comparison of the prediction errors of the available alternatives and how these errors have changed. The intuition is that the marginal value of additional samples is low when the relative prediction error has not changed much. Gonzalez and Aggarwal show that their model fits well to sampling decisions made by more than 4,000 participants in experiments on decisions from experience.
In Chapter 14, Thurstonian Uncertainty in Self-Determined Judgment and Decision Making, Prager, Fiedler, and McCaughey discuss a series of experiments on person impressions informed by samples of traits. The freedom to stop sampling traits when sufficient information has been accrued to make an impression judgment regularly produced a less-is-more effect. Samples remain small when the first few traits are highly diagnostic but, in the absence of such a primacy effect, initially indeterminate and conflicting samples grow larger and often remain equivocal and conflict-prone (see also Prager et al., 2018). Closer inspection shows that the truncation decision not only depends on Brunswikian sampling of stimulus traits in the environment but also on Thurstonian sampling of oscillating states of mind within the individual (substantiating a conceptual distinction borrowed from Juslin & Olsson, 1997).
In Chapter 15, McCaughey, Prager, and Fiedler propose a paradigm suited to study The Information Cost–Benefit Trade-Off as a Sampling Problem in Information Search. To investigate speed–accuracy tradeoffs in sample-based choices (between pairs of investment funds), they develop a paradigm in which the total payoff is the product of the average choice accuracy and the number of choices completed in a given time period. In this paradigm, speed (number of completed choices) decreases linearly with increasing sample size, whereas accuracy increases only in a clearly sublinear fashion. Because the normative requirement to give more weight to speed than to accuracy is counterintuitive, participants exhibit a persistent oversampling bias, which is at variance with the often-cited tendency to draw (too) small samples in decisions from experience. To account for this memorable oversampling bias, the authors refer to the higher evaluability (Hsee & Zhang, 2010) of accuracy than speed.
A sprawling new line of sampling research on Sampling as a Tool in Social Environments, reviewed in Part V, applies sampling models to capture the cognitive processes through which people infer distributions of properties in their social environments, as well as to understand the reasons for well-known biases in people’s social judgments. Prominent examples would be judgments of whether more people in the USA identify as Republicans or as Democrats, or whether most people prefer red to white wine. A common finding is false consensus – overestimation of the prevalence of one’s own beliefs and preferences.
Chapter 16 by Pachur and Schulze offers a review and tutorial of Heuristic Social Sampling, advocating a Social Circle Model where search in memory for examples in one’s social circle is structured by social categories (i.e., self, family, friends, acquaintances), sequential, and truncated as soon as the innermost social circle provides good enough evidence for a decision. The authors evaluate the empirical support for the assumptions of structured and truncated memory search and the ecological rationality of heuristic social sampling. By raising issues of structured (“nonrandom”) sampling of memory, Chapter 16 connects to the work discussed in Part VI below on computational modeling.
In Chapter 17, Social Sampling for Judgments and Predictions of Societal Trends, Olsson, Galesic, and de Bruin relate the Social Sampling Model to a theoretical anomaly and to a truly counterintuitive application of social judgments. The anomaly is that, although human life is inherently social and depends heavily on accurate social judgments, the pertinent literature focuses on their biases and shortcomings. The authors propose that this anomaly can be resolved by closer attention to how social sampling processes interact with the social and task environments, allowing a reinterpretation of phenomena previously understood in terms of motivational or cognitive biases. The counterintuitive application – given the blemished reputation of social judgments’ accuracy – uses individuals’ micro-level knowledge of their social circles to improve the predictive accuracy of macro-level distributional judgments, such as predictions of national election results.
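The basic idea of pooling micro-level circle reports into a macro-level estimate can be illustrated very simply. The sketch below is a deliberately simplified aggregation, not the full Social Sampling Model, and the survey data are invented.

```python
# Hypothetical illustration: each respondent reports how many people in their
# own social circle intend to vote for candidate A; pooling these reports
# yields an estimate of the macro-level vote share.
reports = [
    # (circle size, number in circle intending to vote for A)
    (8, 5),
    (12, 4),
    (5, 3),
    (10, 6),
    (7, 2),
]

total_contacts = sum(size for size, _ in reports)
votes_for_a = sum(a for _, a in reports)
estimated_share = votes_for_a / total_contacts
print(f"estimated vote share for A: {estimated_share:.1%}")
```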
Derreumaux, Bergh, Lindskog, and Hughes, the authors of Chapter 18, Group-Motivated Sampling: From Skewed Experiences to Biased Evaluations, review experiments conducted in paradigms designed to unpack at what stages of the cognitive process selective sampling arises and how these effects interact with observers’ group-based social beliefs and motivations. The results suggest a distinct interplay between internal beliefs and motives, on the one hand, and biases already present in the exogenously given input experiences, on the other, when people perceive and evaluate ingroup and outgroup members. The authors argue that intergroup research would greatly benefit from more crosstalk between researchers emphasizing selective sampling and those emphasizing mental biases.
Chapter 19 by Konovalova and Le Mens shows how a sampling approach can help to understand opinion polarization. Many have blamed polarization on biased exposure to news and ideas on social media. In their chapter, Opinion Homogenization and Polarization: Three Sampling Models, Konovalova and Le Mens examine different mechanisms through which biased exposure to information generates polarization or homogenization. Their models illustrate that sampling accounts that do not assume motivated cognition are sufficient to account for polarization.
The Computational Approaches portrayed in Part VI reflect modern developments inspired by probability theory and Bayesian statistics, applied to complex real-life problems in machine learning. Here sampling processes take on a slightly different role: they are invoked not primarily as explanations of alleged judgment errors but as enablers of complex computational tasks encountered in many real-world situations, tasks that would otherwise be intractable (often for humans and machines alike). Because the mind has to solve adaptive problems of this kind, it may rely on similar computational strategies. Although these strategies are adaptive, they also imply idiosyncratic sampling patterns that are sometimes strikingly similar to those observed in humans.
As noted by Zhu, Chater, León-Villagrá, Spicer, Sundh, and Sanborn in Chapter 20, An Introduction to Psychologically Plausible Sampling Schemes for Approximating Bayesian Inference, “independent random sampling” is something of a default assumption, often stated without hesitation in statistical and cognitive modeling, although it turns out to be a very demanding assumption. Essentially, it requires perfect knowledge of a population of events with equal or known probabilities, to which some completely stochastic selection process (whatever that is) can be applied. Outside a narrow space of “aleatory devices” (dice, coins, roulette wheels, etc.), this knowledge is typically lacking or has to be generated online at the time of problem solving. Chapter 20 reviews sampling schemes that – in various ways – approximate full knowledge of a distribution by generating a sample of observations from it. The authors elaborate on how insights from this research can inform psychological research.
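One widely discussed example of such a scheme is a random-walk Metropolis-Hastings sampler; the specific schemes reviewed in Chapter 20 may differ, so the sketch below is only a generic illustration. Its point is that the sampler needs to compare the unnormalized density at the current state with that at a nearby proposal – no full knowledge of the distribution and no exhaustive enumeration of outcomes is required.

```python
# Generic random-walk Metropolis-Hastings sketch: approximate a distribution
# known only up to a normalizing constant by local comparisons. The target
# (a normal with mean 2) and step size are chosen arbitrarily for illustration.
import math
import random

def unnormalized_density(x):
    """Target belief distribution, known only up to a constant (here: N(2, 1))."""
    return math.exp(-0.5 * (x - 2.0) ** 2)

def metropolis_sample(n_samples=5000, step=1.0, start=0.0):
    samples = []
    current = start
    for _ in range(n_samples):
        proposal = current + random.gauss(0.0, step)  # local perturbation
        accept_prob = min(1.0, unnormalized_density(proposal) / unnormalized_density(current))
        if random.random() < accept_prob:
            current = proposal
        samples.append(current)
    return samples

if __name__ == "__main__":
    random.seed(0)
    draws = metropolis_sample()
    print("sample mean:", sum(draws) / len(draws))  # should be close to 2
```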
In Chapter 21, Approximating Bayesian Inference through Internal Sampling, Sundh, Sanborn, Zhu, Spicer, León-Villagrá, and Chater discuss how the application of such approximate sampling schemes even to simple judgment problems can provide a coherent alternative account of many well-known judgment biases. They propose that many of the biases reported in the judgment and decision-making literature can be reinterpreted as side effects of sampling procedures that, in the limit, are adaptive solutions to complex real-life problems.
In Chapter 22, Sampling Data, Beliefs, and Actions, finally, Brockbank, Holdaway, Acosta-Kane, and Vul point to the future by exploring an organizing framework for model integration. Crucial to their approach is the assumption that samples serve to approximate calculations that would otherwise be too complex to carry out. Much research on judgment and decision-making can be organized around three steps in expected utility maximization, each of which can be subjected to a small-sample approximation. Updating beliefs about the world can be approximated by considering a small sample of observations (Data). The expected utility of an action can be approximated by considering beliefs about a small sample of world-states (Beliefs). Identification of the utility-maximizing action can be approximated by evaluating a small set of candidate actions (Actions). These three small-sample approximations make expected utility maximization – often claimed to be intractable – a psychologically plausible behavioral option. The authors discuss how research on sampling in these three steps can cross-fertilize and inform psychological theorizing.
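The three approximations can be strung together in a bare-bones sketch. Everything below – the observation distribution, the utility function, and the sizes of the three samples – is invented for illustration; the framework itself is far more general.

```python
# Hypothetical sketch of small-sample approximations at all three steps of
# expected utility maximization: Data, Beliefs, and Actions.
import random

random.seed(3)

# Data: a handful of observations stands in for full belief updating.
observations = [random.gauss(1.0, 1.0) for _ in range(5)]
belief_mean = sum(observations) / len(observations)

# Beliefs: a few sampled world-states approximate the expectation.
world_states = [random.gauss(belief_mean, 1.0) for _ in range(10)]

def utility(action, state):
    """Invented utility: payoff grows with the state but acting is costly."""
    return action * state - 0.5 * action ** 2

# Actions: evaluate only a small sampled subset of the action space.
candidate_actions = [random.uniform(0.0, 3.0) for _ in range(4)]

def approx_expected_utility(action):
    return sum(utility(action, s) for s in world_states) / len(world_states)

best_action = max(candidate_actions, key=approx_expected_utility)
print(f"chosen action: {best_action:.2f} "
      f"(approx. EU = {approx_expected_utility(best_action):.2f})")
```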
1.3.4 Outlook … Two More Decades Later
An open question, to be sure, is what theoretical developments and what empirical research programs will dominate the future of behavioral science on judgment and decision-making, two more decades later. Will progress in future research and theorizing be most visible at the functional level, in applications of sampling theories to such applied domains as consumer science (Powell et al., Reference Powell, Yu, DeWolf and Holyoak2017), health (Khoury & Ioannidis, Reference Khoury and Ioannidis2014), intergroup affairs (Azzi & Jost, Reference Azzi and Jost1997; Konovalova & Le Mens, Reference Konovalova and Le Mens2020), politics and democracy (Ohtsubo & Masuchi, Reference Ohtsubo and Masuchi2004; Van Hiel & Franssen, Reference Van Hiel and Franssen2003), or the current pandemic (Block et al., Reference Block, Hoffman and Raabe2020)?
Or will the future lie in theoretical cross-fertilization, combining sampling models with such topics as connectionist modeling approaches (Thomas & McClelland, Reference Thomas, McClelland and Sun2008), serial reproduction and collective knowledge acquisition (Lyons & Kashima, Reference Lyons and Kashima2003; Moussaïd et al., Reference Moussaïd, Herzog, Kämmer and Hertwig2017), the power of intuition (Ambady & Rosenthal, Reference Ambady and Rosenthal1993; Olivola & Todorov, Reference Olivola and Todorov2010), and advice-taking and crowd wisdom (Yaniv, Reference Yaniv2004)?
Or maybe the new competence in machine learning and computational modeling will dominate the field. To illustrate this notion with a pioneering example, Brown et al. (Reference Brown, Lewandowsky and Huang2022) showed in an agent-based simulation setting (involving sampled contacts between stochastically selected pairs of agents on a 100 × 100 grid) how an impressive variety of social psychological phenomena can unfold from decision by sampling. No extra assumptions were required to explain attitude similarity, extremeness aversion, group polarization, and backfire effects as natural products of decision by sampling, independent of the ad hoc postulation of any social motives or biases.
A generalized computational model, which combines decision by sampling with elements of Parducci’s (Reference Parducci1965) range-frequency model, was recently proposed by Bhui and Gershman (Reference Bhui and Gershman2018). For another example, in a revised social-circle model, called the social sampling model (SSM), Galesic, Olsson, and Rieskamp (Reference Galesic, Olsson and Rieskamp2018) offer a comprehensive account of a whole variety of well-known social-psychological phenomena, including false consensus and false uniqueness, treating memory as a genuine part of the environment from which social information is sampled.
In any case, however the future of sampling approaches may look, we expect it to remain a success story of a fruitful and enlightening theoretical approach. By combining the scrutiny of statistical sampling principles with emergent insights about the cognitive-ecological interface, we are convinced that future research and theorizing will uncover the major constraints imposed on judgment and decision-making in a probabilistic world.