Formal enforcement punishing defectors can sustain cooperation by changing incentives. In this paper, we introduce a second effect of enforcement: it can also affect the capacity to learn about the group's cooperativeness. Indeed, in contexts with strong enforcement, it is difficult to tell apart those who cooperate because of the threat of fines from those who are intrinsically cooperative types. Whenever a group is intrinsically cooperative, enforcement will thus have a negative dynamic effect on cooperation because it slows down the learning about prevalent values in the group that would occur under weaker enforcement. We provide theoretical and experimental evidence in support of this mechanism. Using a lab experiment with independent interactions and random rematching, we observe that, in early interactions, having faced an environment with fines in the past decreases current cooperation. We further show that this results from the interaction between enforcement and learning: having met cooperative partners has a stronger effect on current cooperation when this happened in an environment with no enforcement. Replacing one signal of deviation without fine by a signal of cooperation without fine in a player's history increases current cooperation by 10%, while replacing it by a signal of cooperation with fine increases current cooperation by only 5%.
Online experiments allow researchers to collect datasets at times not typical of laboratory studies. We recruit 2336 participants from Amazon Mechanical Turk to examine whether participant characteristics and behaviors differ depending on whether the experiment is conducted during the day versus at night, and on weekdays versus weekends. Participants make incentivized decisions involving prosociality, punishment, and discounting, and complete a demographic and personality survey. We find no time or day differences in behavior, but do find that participants at night and on weekends are less experienced with online studies; on weekends are less reflective; and at night are less conscientious and more neurotic. These results are largely robust to finer-grained measures of time and day. We also find that those who participated earlier in the course of the study are more experienced, reflective, and agreeable, but less charitable than later participants.
Is there a connection between pro-social behavior and well-being? This question has long been of interest, with Aristotle famously suggesting a nexus between virtues and well-being. To delve into this relationship, I conducted an extensive study encompassing multiple classical economic games and nearly 100 well-being questions. My findings confirm that different patterns of pro-sociality are robustly correlated with each other. In addition, I find reliable correlations between well-being and pro-social behavior, as well as certain forms of punishment. In terms of underlying explanations, I observe that pro-sociality is particularly associated with a form of long-term well-being known as eudaimonia, suggesting that pro-social behavior plays a fundamental role in people perceiving their life as meaningful.
The hypothesis that intuition promotes cooperation has attracted considerable attention. Although key results in this literature have failed to replicate in pre-registered studies, recent meta-analyses report an overall effect of intuition on cooperation. We address the question with a meta-analysis of 82 cooperation experiments, spanning four different types of intuition manipulations—time pressure, cognitive load, depletion, and induction—including 29,315 participants in total. We obtain a positive overall effect of intuition on cooperation, though substantially weaker than that reported in prior meta-analyses, and between studies the effect exhibits a high degree of systematic variation. We find that this overall effect depends exclusively on the inclusion of six experiments featuring emotion-induction manipulations, which prompt participants to rely on emotion over reason when making allocation decisions. Upon excluding from the total data set experiments featuring this class of manipulations, between-study variation in the meta-analysis is reduced substantially—and we observe no statistically discernible effect of intuition on cooperation. Overall, we fail to obtain compelling evidence for the intuitive cooperation hypothesis.
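The pooling of study-level effects with between-study variation described above is commonly done with an inverse-variance random-effects model. The abstract does not specify the estimator used, so the following is only an illustrative sketch of the standard DerSimonian–Laird approach; the effect sizes and variances in the example are made up.

```python
def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling (illustrative sketch).
    effects: per-study effect sizes; variances: their sampling variances."""
    k = len(effects)
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q measures between-study heterogeneity.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    # Re-weight each study by total (sampling + between-study) variance.
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Hypothetical data: three homogeneous studies pool to their common effect.
pooled, tau2 = random_effects([0.2, 0.2, 0.2], [0.01, 0.01, 0.01])
```

When the studies agree, Q falls below its expectation and the estimated between-study variance is truncated to zero, reducing the model to a fixed-effect average.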
This paper examines cooperation and punishment in a public goods game in Istanbul. Unlike prior within-subject designs, we use a between-subject design with separate no-punishment and punishment conditions. This approach reveals that punishment significantly increases contributions, demonstrating the detrimental effect of having prior experience without sanctions. We highlight two critical factors—heterogeneous initial contributions across groups and how subjects update their contributions based on prior contributions and received punishment. An agent-based model verifies that the interaction between these two factors leads to a strong persistence of contributions over time. Analysis of related data from comparable cities shows similar patterns, suggesting that our findings are likely to generalize when a between-subject design is used. We conclude that overlooking within-group heterogeneity biases cross-society comparisons and subsequent policy implications.
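The interaction the abstract describes—heterogeneous starting contributions plus an update rule based on group behaviour and punishment—can be illustrated with a toy agent-based model. The paper's actual model and parameters are not given here; everything below (the conformity weight, the punishment response, the endowment of 20) is an assumption made purely for illustration.

```python
import random

def simulate_group(initial, rounds=10, adjust=0.4, punish_boost=0.3, seed=0):
    """Toy agent-based model: each agent moves its contribution toward the
    previous group mean, and punishment received when contributing below
    the mean pushes contributions up. Returns the group average per round."""
    rng = random.Random(seed)
    contribs = list(initial)
    history = [sum(contribs) / len(contribs)]
    for _ in range(rounds):
        mean = sum(contribs) / len(contribs)
        new = []
        for c in contribs:
            target = c + adjust * (mean - c)        # conform toward group mean
            if c < mean:                            # below-mean agents get punished
                target += punish_boost * (mean - c)
            # Keep contributions within the (assumed) endowment of 0-20.
            new.append(min(20, max(0, target + rng.gauss(0, 0.5))))
        contribs = new
        history.append(sum(contribs) / len(contribs))
    return history

# Two groups with different starting contributions: the conformity dynamic
# preserves the initial gap, i.e. contributions persist over time.
low = simulate_group([2, 3, 4, 5], seed=1)
high = simulate_group([15, 16, 17, 18], seed=1)
```

The point of the sketch is the persistence mechanism: because agents anchor on their own group's mean, initial heterogeneity across groups is never washed out.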
A number of recent papers have looked at framing effects in linear public good games. In this comment, I argue that, within this literature, the distinction between give-take and positive–negative framing effects has become blurred, and that this is a barrier to understanding the experimental evidence on framing effects. To make these points, I first illustrate that frames can differ along both an externality and a choice dimension. I then argue that the existing evidence is consistent with a strong positive–negative framing effect but no give-take framing effect on average contributions.
Experiments in economics usually provide subjects with starting capital to be used in the experiment. This practice could affect decisions as there is no risk of loss. This phenomenon is known as the house-money effect. In a repeated public goods game, we test for house-money effects by paying subjects in advance an amount they could lose in the experiment. We do not find evidence of a house-money effect over time.
While people are surprisingly cooperative in social dilemmas, cooperation is fragile to the emergence of defection. Punishment is a key mechanism through which people sustain cooperation, but when are people willing to pay the costs to punish? Using data from existing work on punishment in public goods games conducted in industrialized countries throughout the world (Herrmann et al. in Science, 319(5868):1362–1367, 2008. https://doi.org/10.1126/science.1144237), I find first that those who contribute more are consistently punished less. Second, in many study locations, there is no significant difference between the propensities of contributors and defectors to punish. Finally, contributors and defectors both carry out punishment against defectors. Some defectors do punish cooperators, but less often than they punish other defectors. The determinants of punishment are largely consistent across cities.
Do people discriminate between men and women when they have the option to punish defectors or reward cooperators? Here, we report on four pre-registered experiments that shed some light on this question. Study 1 (N = 544) shows that people do not discriminate between genders when they have the option to punish (reward) defectors (cooperators) in a one-shot prisoner’s dilemma with third-party punishment/reward. Study 2 (N = 253) extends Study 1 to a different method of punishing/rewarding: participants are asked to rate the behaviour of a defector/cooperator on a scale of 1–5 stars. In this case too, we find that people do not discriminate between genders. Study 3a (N = 331) and Study 3b (N = 310) conceptually replicate Study 2 with a slightly different gender manipulation. These latter studies show that, in situations where they do not have specific beliefs about the gender of the defector/cooperator’s partner, neither men nor women discriminate between genders.
Evidence shows that the willingness of individuals to avenge punishment inflicted upon them for transgressions they committed constitutes a significant obstacle toward upholding social norms and cooperation. The drivers of this behavior, however, are not well understood. We hypothesize that ulterior motive attribution—the tendency to assign ulterior motives to punishers for their actions—increases the likelihood of counter-punishment. We exogenously manipulate the ability to attribute ulterior motives to punishers by having the punisher be either an unaffected third party or a second party who, as the victim of a transgression, may be driven to punish by a desire to take revenge. We show that survey respondents consider second-party punishment to be substantially more likely to be driven by ulterior motives than an identical, payoff-equalizing punishment meted out by a third party. In line with our hypothesis, we find that second-party punishment is 66.3% more likely to trigger counter-punishment than third-party punishment in a lab experiment. The loss in earnings due to counter-punishment is 64.6% higher for second-party punishers than third-party punishers, all else equal.
People behave much more cooperatively than predicted by the self-interest hypothesis in social dilemmas such as public goods games. Some studies have suggested that many decision makers cooperate not because of genuine cooperative preferences but because they are confused about the incentive structure of the game—and therefore might not be aware of the dominant strategy. In this research, we experimentally manipulate whether or not decision makers receive explicit information about which strategies maximize individual income and group income. Our data reveal no statistically significant effects of the treatment variation, neither on elicited contribution preferences nor on unconditional contributions and beliefs in a repeated linear public goods game. We conclude that it is unlikely that confusion about optimal strategies explains the widely observed cooperation patterns in social dilemmas such as public goods games.
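The tension between individually and collectively optimal strategies in a linear public goods game follows directly from its payoff function. A minimal sketch, using illustrative parameters (endowment of 20, four players, marginal per-capita return of 0.4—not taken from the paper):

```python
def payoff(own, others, endowment=20, mpcr=0.4):
    """Payoff in a linear public goods game: keep what you do not
    contribute, plus a share (the MPCR) of the group's total contribution."""
    return endowment - own + mpcr * (own + sum(others))

others = [10, 10, 10]

# With 0 < MPCR < 1, each extra token contributed costs 1 but returns only
# MPCR, so contributing nothing is the dominant strategy for the individual.
best_individual = max(range(21), key=lambda c: payoff(c, others))

# With MPCR * group size > 1, every contributed token raises total group
# income, so full contribution by everyone maximizes group income.
group_income_full = 4 * payoff(20, [20, 20, 20])
group_income_zero = 4 * payoff(0, [0, 0, 0])
```

Here `best_individual` is 0 regardless of what the others do, while the group earns strictly more under full contribution—exactly the dominant-strategy information the treatment in the abstract makes explicit.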
Economists conducting laboratory experiments on cooperation and peer punishment find that a non-negligible minority of punishments is directed at cooperators rather than free riders. Such punishments have been categorized as ‘perverse’ or ‘antisocial,’ using definitions that partially overlap, but not entirely so. Which approach better identifies punishment that discourages cooperation? We analyze the data from 16 sites studied by Herrmann et al. (Science 319(5868):1362–1367, 2008) and find that when subjects are uninformed about who punished them, the recipient’s contribution relative to the group average (whether it is ‘perverse’) is a better predictor of negative impact on contribution than is her contribution relative to the punisher’s (whether it is ‘antisocial’). Regression estimates nevertheless suggest that punished subjects attempt to take the punisher’s relative contribution into account, even if only by conjecture.
We propose a framework for identifying discrete behavioural types in experimental data. We re-analyse data from six previous studies of public goods voluntary contribution games. Using hierarchical clustering analysis, we construct a typology of behaviour based on a similarity measure between strategies. We identify four types with distinct stereotypical behaviours, which together account for about 90% of participants. Compared with previous approaches, our method produces a classification in which different types are more clearly distinguished in terms of strategic behaviour and the resulting economic implications.
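Hierarchical clustering of strategies can be sketched with a minimal agglomerative procedure: start from singleton clusters and repeatedly merge the closest pair under a similarity measure. The abstract does not specify the linkage rule or distance used, so the average-linkage, Euclidean-distance choice below, and the stylized conditional-contribution schedules, are assumptions for illustration only.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def agglomerate(strategies, n_clusters):
    """Minimal average-linkage agglomerative clustering: merge the two
    closest clusters until n_clusters remain. Returns lists of indices."""
    clusters = [[i] for i in range(len(strategies))]

    def dist(ci, cj):
        # Average pairwise distance between the members of two clusters.
        return sum(euclidean(strategies[i], strategies[j])
                   for i in ci for j in cj) / (len(ci) * len(cj))

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: dist(clusters[p[0]], clusters[p[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Stylized strategy vectors (contribution for each partner average 0..10):
# free riders, conditional cooperators, unconditional cooperators.
free = [0] * 11
cond = list(range(11))
full = [10] * 11
clusters = agglomerate([free, free, cond, cond, full, full], 3)
```

On these stylized profiles the procedure recovers the three intended types; on real data, cutting the dendrogram at different heights yields coarser or finer typologies.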
This paper investigates the effectiveness of peer punishment in non-linear social dilemmas and replicates Cason and Gangadharan (Exp Econ 18:66–88, 2015). The contribution of this replication is that cooperation is quantified across payoff equivalent, strategically symmetric public good and common pool resource experiments. Results suggest that the cooperation-inducing effect of peer punishment is statistically equivalent across conditions. Despite this increase in cooperation, earnings are significantly lower than in the absence of punishment. Institutional features which improve the effectiveness of peer punishment in linear public good experiments may, similarly, make self-governance possible in more complex social dilemmas.
We systematically investigate prisoner’s dilemma and dictator games with valence framing. We find that give versus take frames influence subjects’ behavior and beliefs in the prisoner’s dilemma games but not in the dictator games. We conclude that valence framing has a stronger impact on behavior in strategic interactions, i.e., in the prisoner’s dilemma game, than in allocation tasks without strategic interaction, i.e., in the dictator game.
We examine whether the efficacy of mutual monitoring in fostering cooperation depends on the degree of approval motivation within teams. Approval motivation is defined as the desire to produce positive perceptions in others, the incentive to acquire the approval of others, and the desire to avoid disapproval (Martin in J Personality Assess 48(5):508–519, 1984). Contrary to the theoretical predictions, the results from the experiment suggest that mutual monitoring was not effective in fostering cooperation in teams. Furthermore, the efficacy of mutual monitoring in fostering cooperation was not correlated with the degree of approval motivation within teams.
Punishment plays a role in human cooperation, but it is costly. Prior research shows that people are more cooperative when they expect to receive negative feedback for non-cooperation, even in the absence of costly punishment, which would have interesting implications for theory and applications. However, based on theories of habituation and cue-based learning, we propose that people will learn to ignore expressions of disapproval that are not clearly associated with material costs or benefits. To test this hypothesis, we conducted a between-subjects, 40-round public goods game (i.e. much longer than most studies), where participants could respond to others’ contributions by sending numerical disapproval messages, paying to reduce others’ earnings, or neither. Consistent with previous results, we observed steadily increasing contributions in the costly punishment condition. In contrast, contributions declined after the early rounds in the expressed disapproval condition, and were eventually no higher than the basic control condition with neither costly punishment nor disapproval ratings. In other words, costless disapproval may temporarily increase cooperation, but the effects fade. We discuss the theoretical and applied implications of our findings, including the unexpectedly high levels of cooperation in a second control condition.
Communication and cognition are presented as deeply interrelated aspects of the mind, the means by which animals perceive, respond to, and understand each other as well as their world. This chapter reviews chemosensory, vibrational (acoustic and seismic), visual and tactile sense modalities, the various ways in which people have attempted to exploit these sensory channels to manage problematic behaviors, and the ways in which anthropogenic disturbances and pollutants can interfere with signaling. It then delves into domains such as self-awareness, personality, problem-solving, cooperation, social learning, and culture. The chapter considers intriguing adaptive hypotheses such as that of cognitive buffering, before provoking reflection on the downstream consequences of social disturbance and trauma. Drawing on experimental studies on elephants and a range of other species from honeybees to whales, the comparative perspective positions cognitive abilities within their broader ecological and evolutionary contexts, and highlights why it is crucial to account for phenomena such as social learning and culture in protecting and managing elephant populations.
This chapter considers the coordination of the actions of bionanomachines, such as cluster formation. This task is important to applications such as drug delivery at tumour sites. Mathematical models of cluster formation and system designs are presented, along with computer simulation results demonstrating that bionanomachines can move collectively and form clusters.
Chapter 2 provides the theoretical and methodological foundations for understanding East Asian international relations and demonstrates how facts and theories are constructed. Building on that foundation, the chapter then provides a preliminary review of the merits and demerits of the prevailing theories—realism, liberal institutionalism, constructivism, Marxism, and neo-traditionalism—as they bear on the research questions we are interested in. The chapter also offers an initial connection between the existing IR theories and the theory of evolution. It emphasizes that the theory of evolution does not necessarily replace any existing IR theory but instead offers a different insight and scientific framework, which may be left in the background or be explicitly applied.