One of the most vexing problems in cluster analysis is the selection and/or weighting of variables in order to include those that truly define cluster structure, while eliminating those that might mask such structure. This paper presents a variable-selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. The heuristic was subjected to Monte Carlo testing across more than 2200 datasets with known cluster structure. The results indicate the heuristic is extremely effective at eliminating masking variables. A cluster analysis of real-world financial services data revealed that using the variable-selection heuristic prior to the K-means algorithm resulted in greater cluster stability.
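The adjusted Rand index at the core of this heuristic is easy to illustrate. The following sketch is not the authors' heuristic itself; it simply scores two candidate variable subsets by how well a K-means partition built on them recovers a known cluster assignment. The synthetic data, the scikit-learn calls, and the subset choices are illustrative assumptions.

```python
# Illustrative sketch only: score candidate variable subsets by the adjusted
# Rand index between a K-means partition and known cluster labels. The data,
# the masking variable, and the subsets are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
true_labels = np.repeat([0, 1, 2], 50)
signal = rng.normal(loc=true_labels[:, None] * 4.0, scale=1.0, size=(150, 2))
masker = rng.normal(scale=10.0, size=(150, 1))     # high-variance noise variable
X = np.hstack([signal, masker])

def recovery(columns):
    """Adjusted Rand index of a K-means partition built on the selected columns."""
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[:, columns])
    return adjusted_rand_score(true_labels, km.labels_)

print("signal variables only:", round(recovery([0, 1]), 3))
print("with masking variable:", round(recovery([0, 1, 2]), 3))  # typically lower
```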
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical applications in the behavioral sciences. We compared the performances of nine prominent heuristic procedures for WCSS partitioning across 324 simulated data sets representative of a broad spectrum of test conditions. Performance comparisons focused on both percentage deviation from the “best-found” WCSS values and recovery of true cluster structure. A real-coded genetic algorithm and a variable neighborhood search heuristic were the most effective methods; however, a straightforward two-stage heuristic algorithm, HK-means, also yielded exceptional performance. A follow-up experiment using 13 empirical data sets from the clustering literature generally supported the results of the experiment using simulated data. Our findings have important implications for behavioral science researchers, whose theoretical conclusions could be adversely affected by poor algorithmic performance.
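For readers unfamiliar with the criterion, the quantity all nine heuristics attempt to minimize can be written down in a few lines. The sketch below, with made-up data, simply evaluates WCSS for a given partition; it is not any of the heuristics compared in the paper.

```python
# Evaluate the within-cluster sum of squares (WCSS) for a given partition.
# Data and labels are illustrative.
import numpy as np

def wcss(X, labels):
    """Sum of squared Euclidean deviations from each cluster's centroid."""
    total = 0.0
    for k in np.unique(labels):
        members = X[labels == k]
        centroid = members.mean(axis=0)
        total += ((members - centroid) ** 2).sum()
    return total

X = np.array([[1.0, 2.0], [1.2, 1.8], [5.0, 5.1], [4.8, 5.3]])
labels = np.array([0, 0, 1, 1])
print(wcss(X, labels))  # small, since each point lies close to its centroid
```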
To date, most methods for direct blockmodeling of social network data have focused on the optimization of a single objective function. However, there are a variety of social network applications where it is advantageous to consider two or more objectives simultaneously. These applications can broadly be placed into two categories: (1) simultaneous optimization of multiple criteria for fitting a blockmodel based on a single network matrix and (2) simultaneous optimization of multiple criteria for fitting a blockmodel based on two or more network matrices, where the matrices being fit can take the form of multiple indicators for an underlying relationship, or multiple matrices for a set of objects measured at two or more different points in time. A multiobjective tabu search procedure is proposed for estimating the set of Pareto efficient blockmodels. This procedure is used in three examples that demonstrate possible applications of the multiobjective blockmodeling paradigm.
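The notion of Pareto efficiency underlying the procedure can be sketched generically. The fragment below is not the multiobjective tabu search itself; it filters a set of candidate blockmodels, each scored on two criteria to be minimized, down to the non-dominated ones, and the scores are invented for illustration.

```python
# Generic Pareto filter: keep candidates that no other candidate matches or
# beats on every objective (all objectives to be minimized). Scores are made up.
def pareto_efficient(scores):
    efficient = []
    for i, s in enumerate(scores):
        dominated = any(
            all(t[k] <= s[k] for k in range(len(s))) and t != s
            for j, t in enumerate(scores) if j != i
        )
        if not dominated:
            efficient.append(i)
    return efficient

candidates = [(10, 4), (8, 7), (12, 3), (9, 9)]
print(pareto_efficient(candidates))  # [0, 1, 2]; (9, 9) is dominated by (8, 7)
```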
Two-mode binary data matrices arise in a variety of social network contexts, such as the attendance or non-attendance of individuals at events, the participation or lack of participation of groups in projects, and the votes of judges on cases. A popular method for analyzing such data is two-mode blockmodeling based on structural equivalence, where the goal is to identify partitions for the row and column objects such that the clusters of the row and column objects form blocks that are either complete (all 1s) or null (all 0s) to the greatest extent possible. The predominant approach for tackling this problem is multiple restarts of an object relocation heuristic that seeks to minimize the number of inconsistencies (i.e., 1s in null blocks and 0s in complete blocks) with the ideal block structure. As an alternative, we propose a fast and effective implementation of tabu search. Computational comparisons across a set of 48 large network matrices revealed that the new tabu-search heuristic always provided objective function values that were better than those of the relocation heuristic when the two methods were constrained to the same amount of computation time.
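The objective function being minimized is simple to state in code. In the sketch below (a toy illustration, not the tabu search or the relocation heuristic), each block defined by a row cluster and a column cluster is treated as ideally complete or ideally null, whichever requires fewer changes, and the inconsistencies are summed; the matrix and partitions are hypothetical.

```python
# Count inconsistencies with ideal block structure for a two-mode binary matrix,
# given row and column partitions. Matrix and partitions are illustrative.
import numpy as np

def blockmodel_inconsistencies(A, row_labels, col_labels):
    total = 0
    for r in np.unique(row_labels):
        for c in np.unique(col_labels):
            block = A[np.ix_(row_labels == r, col_labels == c)]
            ones = int(block.sum())
            zeros = block.size - ones
            total += min(ones, zeros)  # cost under the better-fitting ideal block
    return total

A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 1]])
rows = np.array([0, 0, 1, 1])
cols = np.array([0, 0, 1, 1])
print(blockmodel_inconsistencies(A, rows, cols))  # 2 for this toy matrix
```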
Dynamic programming methods for matrix permutation problems in combinatorial data analysis can produce globally optimal solutions for matrices up to size 30×30, but are computationally infeasible for larger matrices because of enormous computer memory requirements. Branch-and-bound methods also guarantee globally optimal solutions, but computation time considerations generally limit their applicability to matrix sizes no greater than 35×35. Accordingly, a variety of heuristic methods have been proposed for larger matrices, including iterative quadratic assignment, tabu search, simulated annealing, and variable neighborhood search. Although these heuristics can produce exceptional results, they are prone to converge to local optima where the permutation is difficult to dislodge via traditional neighborhood moves (e.g., pairwise interchanges, object-block relocations, and object-block reversals). We show that a heuristic implementation of dynamic programming yields an efficient procedure for escaping local optima. Specifically, we propose applying dynamic programming to reasonably sized subsequences of consecutive objects in the locally optimal permutation, identified by simulated annealing, to further improve the value of the objective function. Experimental results are provided for three classic matrix permutation problems in the combinatorial data analysis literature: (a) maximizing a dominance index for an asymmetric proximity matrix; (b) least-squares unidimensional scaling of a symmetric dissimilarity matrix; and (c) approximating an anti-Robinson structure for a symmetric dissimilarity matrix.
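Of the three objectives listed, the dominance index is the easiest to illustrate: under a candidate permutation applied to both rows and columns, it is the sum of the elements above the main diagonal of the reordered matrix. The sketch below evaluates that objective for a toy matrix; it is not the dynamic-programming or simulated annealing procedure itself.

```python
# Evaluate the dominance index of an asymmetric proximity matrix under a
# permutation applied to rows and columns. Matrix and permutations are toy examples.
import numpy as np

def dominance_index(A, perm):
    B = A[np.ix_(perm, perm)]        # reorder rows and columns identically
    return np.triu(B, k=1).sum()     # sum of above-diagonal entries

A = np.array([[0, 3, 1],
              [5, 0, 2],
              [7, 6, 0]])
print(dominance_index(A, [0, 1, 2]))  # 3 + 1 + 2 = 6
print(dominance_index(A, [2, 1, 0]))  # 6 + 7 + 5 = 18, the better ordering
```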
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of “exemplars” as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed a new simulated annealing heuristic for the p-median problem and completed a thorough investigation of its computational performance. The salient findings from our experiments are that our new method substantially outperforms a previous implementation of simulated annealing and is competitive with the most effective metaheuristics for the p-median problem.
Although the K-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The p-median model is an especially well-studied clustering problem that requires the selection of p objects to serve as cluster centers. The objective is to choose the cluster centers such that the sum of the Euclidean distances (or some other dissimilarity measure) of objects assigned to each center is minimized. Using 12 data sets from the literature, we demonstrate that a three-stage procedure consisting of a greedy heuristic, Lagrangian relaxation, and a branch-and-bound algorithm can produce globally optimal solutions for p-median problems of nontrivial size (several hundred objects, five or more variables, and up to 10 clusters). We also report the results of an application of the p-median model to an empirical data set from the telecommunications industry.
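The p-median objective described here is straightforward to evaluate for any candidate set of exemplars. The sketch below performs only that evaluation, with random data and an arbitrary choice of centers; it is not the greedy, Lagrangian relaxation, or branch-and-bound stages of the procedure.

```python
# Evaluate the p-median objective: assign each object to its nearest chosen
# exemplar and sum the Euclidean distances. Data and centers are illustrative.
import numpy as np

def p_median_cost(X, center_indices):
    centers = X[list(center_indices)]
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return dists.min(axis=1).sum()  # each object contributes its nearest-center distance

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
print(p_median_cost(X, [0, 5, 12]))  # cost of one candidate set of p = 3 exemplars
```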
The clique partitioning problem (CPP) requires the establishment of an equivalence relation for the vertices of a graph such that the sum of the edge costs associated with the relation is minimized. The CPP has important applications for the social sciences because it provides a framework for clustering objects measured on a collection of nominal or ordinal attributes. In such instances, the CPP incorporates edge costs obtained from an aggregation of binary equivalence relations among the attributes. We review existing theory and methods for the CPP and propose two versions of a new neighborhood search algorithm for efficient solution. The first version (NS-R) uses a relocation algorithm in the search for improved solutions, whereas the second (NS-TS) uses an embedded tabu search routine. The new algorithms are compared to simulated annealing (SA) and tabu search (TS) algorithms from the CPP literature. Although the heuristics yielded comparable results for some test problems, the neighborhood search algorithms generally yielded the best performances for large and difficult instances of the CPP.
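The CPP objective can likewise be evaluated in a few lines. The sketch below (not the NS-R, NS-TS, SA, or TS heuristics) sums the edge costs of all within-cluster pairs for a given partition; the cost matrix and labels are invented, with negative costs rewarding and positive costs penalizing the grouping of a pair.

```python
# Evaluate the clique partitioning objective: sum the edge costs of all pairs
# of vertices placed in the same cluster. Costs and partition are illustrative.
import numpy as np

def cpp_cost(C, labels):
    same = labels[:, None] == labels[None, :]   # within-cluster pair indicator
    return C[np.triu(same, k=1)].sum()          # count each edge once

C = np.array([[ 0, -2,  3,  1],
              [-2,  0,  4, -1],
              [ 3,  4,  0, -3],
              [ 1, -1, -3,  0]])
labels = np.array([0, 0, 1, 1])
print(cpp_cost(C, labels))  # -2 + (-3) = -5 for this partition
```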
When considering the implications of the shareholder-stakeholder debate in defining the purpose of a company, epistemological clarity is vital in this emerging theory of the firm. Such clarity can prevent the debate from recurring merely through a rephrasing of its key terms. To understand how various stakeholders develop and interpret a shared purpose, I argue for the necessity of a pragmatist approach that is normative and process-oriented. Mental models play a crucial role in interpretive processes that define decision-making, where individual perspectives converge. The figures of Milton Friedman and Ed Freeman serve as “beacons,” as artefacts, in the transmission of knowledge through which we, as individuals, shape a shared understanding. In current societies, profound polarization obstructs solutions to grand challenges. Pragmatism starts by questioning the underlying values of everyone involved. It assumes that sound deliberative processes are the only way to reach real solutions—not only for the mind but, above all, for the heart.
During the Cold War, logical rationality – consistency axioms, subjective expected utility maximization, Bayesian probability updating – became the bedrock of economics and other social sciences. In the 1970s, logical rationality came under attack from the heuristics-and-biases program, which interpreted the theory as a universal norm of how individuals should make decisions, although such an interpretation is absent in von Neumann and Morgenstern’s foundational work and was dismissed by Savage. Deviations in people’s judgments from the theory were thought to reveal stable cognitive biases, which were in turn thought to underlie social problems, justifying governmental paternalism. In the 1990s, the ecological rationality program entered the field, based on the work of Simon. It moves beyond the narrow bounds of logical rationality and analyzes how individuals and institutions make decisions under uncertainty and intractability. This broader view has shown that many supposed cognitive biases are marks of intelligence rather than irrationality, and that heuristics are indispensable guides in a world of uncertainty. The passionate debate among the three research programs became known as the rationality wars. I provide a brief account from the ‘frontline’ and show how the parties understood in strikingly different ways what the war entailed.
In March 2024, Daniel Kahneman – the man who did perhaps more than anyone else to shape the field of behavioural public policy – died. He is among a small handful of scholars who have had a huge effect on my own career, and in this essay – the first in a series of essays in a special section of the Journal that honour him – I reflect on how his work inspired much of my own.
How does the general public perceive immigrants, whom do they think of when thinking about “immigrants,” and to what extent are these perceptions related to the actual composition of immigrant populations? We use three representative online surveys in the United States, South Africa, and Switzerland (total N = 2,778) to extend existing work on the perception of immigrants in terms of geographic coverage and individual characteristics. We also relate these responses to official statistics on immigration and integration patterns. In all three countries, there are significant discrepancies between perceptions of immigrants and immigrants’ actual proportion in the population and their actual characteristics. Although we observe clear country differences, there is a striking pattern of people associating “immigrants” with “asylum seekers” in all three countries. We consider two possible explanations for the differences between perceptions and facts: the representativeness heuristic and the affect heuristic. In contrast to previous research, we find only partial support for the representativeness heuristic, whereas the results are consistent with the affect heuristic. We conclude that images of “immigrants” are largely shaped by pre-existing attitudes.
Nudging is a policy tool that steers people’s behavior through noncoercive psychological pushes. This has consequences for people’s lives to varying degrees. For example, the nudge of a sticker of a fly in a urinal encourages peeing inside the urinal, while an organ donation default leads people to agree to donate their organs after their death. Governments do not yet systematically examine which nudges have to be subjected to all safeguards of the rule of law—for example, parliamentary control, judicial review, or compliance with legal principles such as proportionality. This article argues that a legal doctrine is necessary to carry out this examination. Moreover, it contributes to the development of such a doctrine, using the approach of the European Court of Human Rights as a source of inspiration. The doctrine consists of a “de minimis” principle for nudges: Public institutions only need to ensure that a nudge complies with rule of law safeguards when the nudge has substantial consequences. In addition, the doctrine includes a criterion to determine which nudges have such substantial consequences. In particular, it is argued that a nudge should be subjected to at least some safeguards when it has a serious effect on people’s autonomy.
Forecasting elections is a high-risk, high-reward endeavor. Today’s polling rock star is tomorrow’s has-been. It is a high-pressure gig. Public opinion polls have been a staple of election forecasting for almost ninety years. But single-source predictions are an imperfect means of forecasting, as we detailed in the preceding chapter. One of the most telling examples of this in recent years is the 2016 US presidential election. In this chapter, we will examine public opinion as an election forecast input. We organize election prediction into three broad buckets: (1) heuristics models, (2) poll-based models, and (3) fundamentals models.
This paper presents a toolkit of heuristics for enabling non-professionals to design for wellbeing, merging design, psychology, and ergonomics. It demystifies design, focusing on happiness and long-term wellbeing, making design principles accessible to all. This toolkit narrows the divide between design theory and practice, advocating design as a tool to enhance life for individuals and society.
Insight experiences are powerful: They feel true, they are remembered, and they can shift our decisions and our beliefs. Feelings of insight are also accurate most of the time. However, recent work shows that it is possible to systematically induce false insights and even misattribute our Aha! moments to make false facts seem true. Insights, therefore, seem to be adaptive on average but error prone. This chapter suggests that these results can be integrated by thinking of insights as a metacognitive heuristic for selecting ideas from the stream of consciousness (dubbed the “Eureka heuristic”), reviews key findings about the accuracy of insights and where and why insights go wrong, and discusses implications for our understanding of the development of delusions, false beliefs, and misinformation. Ultimately, understanding the role that feelings of insight play in human cognition may make us better decision-makers in an uncertain world.
The Element begins by claiming that Imre Lakatos (1922–74) in his famous paper 'Proofs and Refutations' (1963–64) was the first to introduce the historical approach to philosophy of mathematics. Section 2 gives a detailed analysis of Lakatos' ideas on the philosophy of mathematics. Lakatos died at the age of only 51, and at the time of his death had plans, never carried out, to continue his work on philosophy of mathematics. However, Lakatos' historical approach to philosophy of mathematics was taken up by other researchers in the field, and Sections 3 and 4 of the Element give an account of how they developed this approach. Then Section 5 gives an overview of what has been achieved so far by the historical approach to philosophy of mathematics and considers what its prospects for the future might be.
Political scientists have proposed that party cues can be used to compensate for the public's well-documented lack of substantive political knowledge, but some critics have argued that applying party cues is more difficult than assumed. We argue that this debate has proven intractable in part because scholars have used ambiguous normative criteria to evaluate judgments. We use a unique task and clear normative criteria to evaluate the use of party cues in making political judgments among two samples: a sample of state legislators and an online sample of the public. We find that the public sample performs poorly when using cues to make judgments. State legislators make much more accurate judgments on average than even the most attentive segment of the public and are more likely to discount irrelevant cues when making judgments, although there is evidence that both samples performed worse with the inclusion of non-diagnostic cues. We conclude by discussing the relevance of the results, which we interpret as showing that party cue use is more difficult than theorized, and some practical implications of the findings.
To develop a scientific perspective on intuition, we first need to dispense with the old and misleading dualistic opposition of intuition and reason. Rather, intuition and reason go hand in hand: when a doctor feels that something is wrong with a patient, intuition comes first, followed by a deliberate search for what is wrong. Even abstract disciplines like mathematics need both intuition and reasoning. As George Pólya emphasized, finding a problem or discovering a proof requires intuition and heuristics; checking whether the proof is correct requires logic and analysis. The theoretical framework for understanding the nature of intuition is that of ecological rationality, the study of how mental processes are adapted to their environments, based on Herbert Simon’s notion of bounded rationality, i.e., how people make decisions under uncertainty – where the best action cannot be calculated. Good intuitions rely on adaptive heuristics that are not logically, but ecologically, rational. The fluency heuristic, the recognition heuristic, and satisficing exemplify tools of the adaptive toolbox. Under uncertainty, they can lead to better decisions than complex algorithms.
Chapter 1 begins with the distinction between reasoning from associations and reasoning from rules – a distinction that will resurface in subsequent chapters on creativity and innovation. The associative system is reproductive, automatic, and emphasizes similarity. The rule-based system is productive, deliberative, and emphasizes verification. Daniel Kahneman’s (2011) best-selling book Thinking, Fast and Slow introduced readers to how associative and rule-based reasoning influence the speed of responses. The third section on biases in reasoning describes Kahneman’s classic research with Amos Tversky on how the use of heuristics such as availability and representativeness influences frequency estimates. The final section discusses monitoring reasoning, in which people use knowledge to improve their thinking skills. Monitoring reasoning is a metacognitive skill that controls the selection, evaluation, revision, and abandonment of cognitive tasks, goals, and strategies.