How can groups best coordinate to solve problems? The answer touches on cultural innovation, including the trajectory of science, technology, and art. If everyone acts independently, different people will explore different solutions, but there is no way to leverage good solutions across the community. If everyone acts in concert, early successes can lead the group down dead ends and stifle exploration. The challenge is to maintain innovation while also communicating effective solutions once they are found. When solution spaces are smooth – that is, easy – communication is good. But when solution spaces are rugged – that is, hard – the balance should tilt toward exploration. How can we best achieve this? One answer is to place people in social structures that reduce communication but maintain connectivity. But there are other solutions that might work better. Algorithms like simulated annealing are designed to deal with such problems by adjusting collective focus over time, allowing systems to “cool off” slowly as they home in on solutions. Network science allows us to explore the performance of such solutions on smooth and rugged landscapes, and provides numerous avenues for innovation of its own.
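To make the annealing idea concrete, here is a minimal R sketch of simulated annealing on a toy rugged (multi-peaked) objective; the landscape, proposal rule, and cooling schedule are illustrative assumptions, not taken from the book.

set.seed(1)
rugged <- function(x) sin(5 * x) + 0.3 * cos(17 * x)   # toy multi-peaked objective

anneal <- function(f, x0, temp = 1, cooling = 0.995, steps = 5000) {
  x <- x0; best <- x0
  for (i in seq_len(steps)) {
    candidate <- x + rnorm(1, sd = 0.1)         # propose a nearby solution
    delta <- f(candidate) - f(x)
    # always accept improvements; accept setbacks with probability exp(delta / temp)
    if (delta > 0 || runif(1) < exp(delta / temp)) x <- candidate
    if (f(x) > f(best)) best <- x
    temp <- temp * cooling                      # cool off slowly over time
  }
  best
}

anneal(rugged, x0 = 0)                          # returns the best solution found

The slowly decaying temperature plays the role of collective focus: early on, the search tolerates many setbacks (exploration); as the system cools, it increasingly keeps only improvements (exploitation).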
Behavioral Network Science explains how and why structure matters in the behavioral sciences. Exploring open questions in language evolution, child language learning, memory search, age-related cognitive decline, creativity, group problem solving, opinion dynamics, conspiracies, and conflict, readers will learn essential behavioral science theory alongside novel network science applications. This book also contains an introductory guide to network science, demonstrating how to turn data into networks, quantify network structure across scales, and hone one's intuition for how structure arises and evolves. Online R code allows readers to explore the data and reproduce all the visualizations and simulations for themselves, empowering them to make contributions of their own. For data scientists interested in gaining a professional understanding of how the behavioral sciences inform network science, or behavioral scientists interested in learning how to apply network science from the ground up, this book is an essential guide.
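To give a flavor of the workflow the book describes – though this is a minimal sketch, not the book's own online code – here is how one might turn edge data into a network and quantify its structure across scales using the igraph package in R:

library(igraph)
edges <- data.frame(from = c("a", "a", "b", "c"),
                    to   = c("b", "c", "c", "d"))
g <- graph_from_data_frame(edges, directed = FALSE)   # data into a network
degree(g)          # micro scale: how connected is each vertex?
transitivity(g)    # meso scale: how clustered are neighbourhoods?
mean_distance(g)   # macro scale: how far apart are vertices on average?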
Experimental evidence shows that human subjects frequently rely on adaptive heuristics to form expectations, but their forecasting performance in the lab is not as inadequate as macroeconomic theory assumes. In this paper, we use an agent-based model (ABM) to show that the average forecasting error is indeed close to zero, even in a complex environment, if we assume that agents augment the canonical adaptive algorithm with a belief-correction term that takes into account the previous trend of the variable of interest. We investigate the reasons for this result using a streamlined nonlinear macro-dynamic model that captures the essence of the ABM.
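One plausible reading of this mechanism, sketched in R below, augments the canonical adaptive-expectations update with a term proportional to the last observed trend; the paper's exact specification and parameter values may differ.

belief_corrected_forecast <- function(x, lambda = 0.75, beta = 0.5) {
  e <- x[1]                                      # initialize expectation
  for (t in 2:length(x)) {
    trend <- x[t] - x[t - 1]                     # previous trend of the variable
    e <- e + lambda * (x[t] - e) + beta * trend  # adaptive step plus belief correction
  }
  e                                              # one-step-ahead forecast
}

belief_corrected_forecast(c(1.00, 1.10, 1.20, 1.35))   # tracks a rising series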
Decades of research have sought to understand how people form perceptions of risk by modeling either individual-level psychological processes or broader social and organizational mechanisms. Yet little formal theoretical work has focused on the interaction of these two sets of factors. In this paper, I contribute to closing this gap by modifying a psychologically rich individual model of probabilistic reasoning to account for the transmission and collective emergence of risk perceptions. Using data from 357 individuals, I present experimental evidence in support of my main building assumptions and demonstrate the empirical validity of my model. Incorporating these results into an agent-based setting, I simulate over 1.5 billion social interactions to analyze the emergence of risk perceptions within organizations under different information frictions (i.e., limits on the availability and precision of social information). My results show that by focusing on information quality (rather than availability), groups and organizations can more effectively boost the accuracy of their emergent risk perceptions. This work offers researchers a formal framework for analyzing the relationship between psychological and organizational factors in shaping risk perceptions.
The chapter surveys the current disconnect between academic economics and the economic analysis that is conducted at policy institutions. It argues that microfoundations and general equilibrium theory are not useful for thinking about important questions in macroeconomics, and assesses some alternative approaches that have been proposed. The usefulness of aggregate data for empirical analyses is briefly discussed.
This chapter formulates an analytical toolkit that incorporates an intricate – yet realistic – chain of causal mechanisms to explain the expenditure–development relationship. First, we explain several reasons why we take a complexity perspective for modelling the expenditure–development link and why we choose agent-based modelling as a suitable tool for assessing policy impacts in sustainable development. Second, we introduce the concept of social mechanisms and explain how we apply them to measure the impact of budgetary allocations when systemic effects are relevant. Third, we compare different concepts of causality and explain the advantages of an account that simulates counterfactual scenarios where policy interventions are absent.
This chapter describes computational models developed to represent basic and applied phenomena of interest to I-O psychology. The basic phenomena of interest relate to motivational, learning, and decision-making processes. The applied phenomena relate to selecting, training, evaluating, retaining, and managing employees. These employees may work in teams, be leaders of others, or engage in action, information sharing, and decision making relevant to organizational outcomes. A computational control systems architecture is used in many of the more basic models, and agent-based modeling as well as control systems modeling are used for the more applied models.
We build an agent-based model (ABM) of how senior politicians navigate the complex governance cycle using relatively simple heuristics. They first test whether they can form a single-party minority government. If not, they seek coalition partners and negotiate with them. They treat “Gamson’s Law” – government parties get perks payoffs in proportion to their seat shares – as common knowledge. When different politicians attach different importance to the same issue, “logrolling” allows them to realize gains from trade and agree a joint policy position even when they have divergent policy preferences. We allow for the realistic possibility that multiple proposals for government are under consideration at the same time. Nonetheless, there may often be a “Condorcet winner” among the set of proposals, which beats all others in pairwise comparisons. Finally, we specify a model of government survival, which assumes incumbent governments are subject to a stream of unbiased random shocks that may perturb model parameters so much that legislators come to prefer some alternative to the incumbent. For any given government, our model allows us to estimate the probability of this happening.
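Gamson's Law itself is a one-line computation. A sketch in R, with party names and seat counts invented for illustration:

seats <- c(A = 60, B = 25, C = 15)               # hypothetical seat counts
coalition <- c("A", "C")                         # proposed government
perks <- seats[coalition] / sum(seats[coalition])
perks                                            # A gets 0.8, C gets 0.2 of the perks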
While heavy-duty computational methods have revolutionized much empirical work in political science, computational analysis has yet to have much impact on theoretical accounts of politics – in contrast to the situation in many of the natural sciences. We set out here to map a path forward in computational social science. Analyzing the complex and deductively intractable “governance cycle” that plays out in the high-dimensional issue spaces of parliamentary systems, we use two different computational approaches. One models functionally rational politicians who deploy rules of thumb to navigate their complex environment. The other deploys an artificial intelligence algorithm that systematically learns, from massively repeated self-play, to find near-optimal strategies. Future work made possible by greater computational firepower would enable better AI, more realistic ABMs, and the modeling of logrolling under the conditions of incomplete information that characterize most real-world bargaining and negotiation.
A number of theoretical results have provided sufficient conditions for the selection of payoff-efficient equilibria in games played on networks when agents imitate successful neighbors and make occasional mistakes (stochastic stability). However, those results only guarantee full convergence in the long run, which might be too restrictive in reality. Here, we employ a more gradual approach relying on agent-based simulations that avoid the double limit underlying these analytical results. We focus on the circular-city model, for which a sufficient condition on the population size relative to the neighborhood size was identified by Alós-Ferrer & Weidenholzer [(2006) Economics Letters, 93, 163–168]. Using more than 100,000 agent-based simulations, we find that selection of the efficient equilibrium also prevails for a large set of parameters violating the previously identified condition. Interestingly, the extent to which efficiency obtains decreases gradually as one moves away from the boundary of this condition.
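A minimal R sketch of the kind of dynamics studied here: agents on a ring play a stag-hunt-style coordination game with their neighbours, imitate the most successful strategy they observe, and occasionally make mistakes. The payoffs, neighbourhood radius, and mistake rate are illustrative assumptions, not the paper's calibration.

set.seed(1)
n <- 50; k <- 2; eps <- 0.01                  # agents on a ring, radius, mistake rate
payoffs <- matrix(c(4, 3, 0, 3), 2, 2)        # stag hunt: 1 efficient, 2 risk-dominant
strategy <- sample(1:2, n, replace = TRUE)

neighbours <- function(i) ((i - 1 + c(-k:-1, 1:k)) %% n) + 1

for (round in 1:200) {
  score <- sapply(1:n, function(i)
    sum(payoffs[strategy[i], strategy[neighbours(i)]]))
  strategy <- sapply(1:n, function(i) {
    group <- c(i, neighbours(i))
    best <- group[which.max(score[group])]    # imitate the most successful observed
    if (runif(1) < eps) sample(1:2, 1) else strategy[best]
  })
}
mean(strategy == 1)                           # share playing the efficient action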
Building on the Cambridge Element Agent Based Models of Social Life: Fundamentals (Cambridge, 2020), we move on to the next level. We do this by building agent-based models of polarization and ethnocentrism. In the process, we develop: stochastic models, which add a crucial element of uncertainty to human interaction; models of human interactions structured by social networks; and 'evolutionary' models in which agents using more effective decision rules are more likely to survive and prosper than others. The aim is to leave readers with an effective toolkit for building, running, and analyzing agent-based models of social interaction.
Social interactions are rich, complex, and dynamic. One way to understand them is to model the interactions that fascinate us. Some of the more realistic and powerful models are computer simulations. Simple, elegant, and powerful tools are available in user-friendly free software to help you design, build, and run your own models of the social interactions that intrigue you, and to do this on the most basic laptop computer. Focusing on a well-known model of housing segregation, this Element is about how to unleash that power, setting out the fundamentals of what is now known as 'agent-based modeling'.
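For readers who want a feel for the model before opening the Element, here is a minimal R sketch of Schelling-style housing segregation; the grid size, density, and tolerance threshold are illustrative choices, not the Element's own code.

set.seed(1)
side <- 20; threshold <- 0.4                     # want at least 40% like neighbours
grid <- matrix(sample(c(0, 1, 2), side^2, replace = TRUE,
                      prob = c(0.1, 0.45, 0.45)), side, side)   # 0 = empty cell

share_alike <- function(g, i) {
  row_i <- (i - 1) %% side + 1
  col_i <- (i - 1) %/% side + 1
  nb <- g[max(1, row_i - 1):min(side, row_i + 1),
          max(1, col_i - 1):min(side, col_i + 1)]
  nb <- nb[nb != 0]                              # occupied cells, including the agent
  if (length(nb) <= 1) return(1)                 # no occupied neighbours: content
  (sum(nb == g[i]) - 1) / (length(nb) - 1)       # share of like neighbours
}

for (step in 1:5000) {
  i <- sample(which(grid != 0), 1)               # pick a random household
  if (share_alike(grid, i) < threshold) {        # unhappy households move...
    j <- sample(which(grid == 0), 1)             # ...to a random empty cell
    grid[j] <- grid[i]; grid[i] <- 0
  }
}
mean(sapply(which(grid != 0), share_alike, g = grid))   # average share of like neighbours

Even with this mild tolerance threshold, repeated local moves typically produce far more segregation than any individual household demands, which is the model's central lesson.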
Within the seminal asset-pricing model by Brock and Hommes (Journal of Economic Dynamics and Control, 22, 1235–1274, 1998), heterogeneous boundedly rational agents choose between a fixed number of expectation rules to forecast asset prices. However, agents’ heterogeneity is limited in the sense that they typically switch between a representative technical and a representative fundamental expectation rule. Here, we generalize their framework by allowing all agents to follow their own time-varying technical and fundamental expectation rules. Estimating our model using the method of simulated moments reveals that it explains the statistical properties of the daily and monthly behavior of the S&P 500 quite well. Moreover, our analysis reveals that heterogeneity is not only a realistic model property but clearly helps to explain the intricate dynamics of financial markets.
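The switching mechanism at the heart of this family of models is a discrete-choice rule: the better a rule's recent performance, the larger the fraction of agents adopting it. A minimal R sketch with invented numbers:

beta <- 2                                        # intensity of choice
fitness <- c(technical = 0.8, fundamental = 0.5) # invented past performance of each rule
shares <- exp(beta * fitness) / sum(exp(beta * fitness))
shares   # fractions of agents adopting each rule next period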
We study a Markovian agent-based model (MABM) in this paper. Each agent is endowed with a local state that changes over time as the agent interacts with its neighbours. The neighbourhood structure is given by a graph. Recently, Simon, Taylor, and Kiss [40] used the automorphisms of the underlying graph to generate a lumpable partition of the joint state space, ensuring Markovianness of the lumped process for binary dynamics. However, many large random graphs tend to become asymmetric, rendering the automorphism-based lumping approach ineffective as a tool of model reduction. To mitigate this problem, we propose a lumping method based on a notion of local symmetry, which compares only the local neighbourhoods of vertices. Since local symmetry ensures only approximate lumpability, we quantify the approximation error by means of the Kullback–Leibler divergence rate between the original Markov chain and a lifted Markov chain. We prove that the approximation error decreases monotonically. Connections to fibrations of graphs are also discussed.
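For concreteness, the standard Kullback–Leibler divergence rate between a stationary Markov chain with transition matrix $P$ and stationary distribution $\pi$ and an approximating chain with transition matrix $Q$ on the same state space is

$$ R(P \,\|\, Q) \;=\; \sum_{x} \pi(x) \sum_{y} P(x,y) \, \log \frac{P(x,y)}{Q(x,y)}, $$

though the lifted-chain construction in the paper may refine this basic definition.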
UNAIDS established fast-track targets of 73% and 86% viral suppression among human immunodeficiency virus (HIV)-positive individuals by 2020 and 2030, respectively. The epidemiologic impact of achieving these goals is unknown. The HIV-Calibrated Dynamic Model, a calibrated agent-based model of HIV transmission, is used to examine scenarios of incremental improvements to the testing and antiretroviral therapy (ART) continuum in South Africa in 2015. The speed of intervention availability is explored, comparing policies for their predicted effects on incidence, prevalence, and achievement of fast-track targets in 2020 and 2030. Moderate (30%) improvements in the continuum will not achieve the 2020 or 2030 targets and have modest impacts on incidence and prevalence. Improving the continuum by 80% and increasing availability reduces incidence from 2.54 to 0.80 per 100 person-years (−1.73, interquartile range (IQR): −1.42, −2.13) and prevalence from 26.0% to 24.6% (−1.4 percentage points, IQR: −0.88, −1.92) from 2015 to 2030, and achieves the fast-track targets in 2020 and 2030. Achieving 90-90-90 in South Africa is possible with large improvements to the testing and treatment continuum. The epidemiologic impact of these improvements depends on the balance between the survival and transmission benefits of ART and the potential for incidence to remain high.
Although the Paris Agreement arguably made some progress, interest in supplementary approaches to climate change co-operation persists. This article examines the conditions under which a climate club might emerge and grow. Using agent-based simulations, it shows that even with fewer than a handful of major actors as initial members, a club can eventually reduce global emissions effectively. To succeed, a club must be initiated by the ‘right’ constellation of enthusiastic actors, offer sufficiently large incentives for reluctant countries, and be reasonably unconstrained by conflicts between members over issues beyond climate change. A climate club is particularly likely to persist and grow if initiated by the United States and the European Union. The combination of club-good benefits and conditional commitments can produce broad participation under many conditions.
Recent work in regional science, geography, and urban economics has advanced the spatial modeling of land markets and land use by incorporating greater spatial complexity, including multiple sources of spatial heterogeneity, multiple spatial scales, and spatial dynamics. Doing so has required a move away from relying solely on analytical models to partial or full reliance on computational methods that can account for these added features of spatial complexity. In the first part of the paper, we review economic models of urban land development that have incorporated greater spatial complexity, focusing on spatial simulation models with endogenous spatial feedbacks and multiple sources of spatial heterogeneity. The second part of the paper presents a spatial simulation model of exurban land development that uses an auction model to represent household bidding, extending the traditional Capozza and Helsley (1990) model of urban growth to account for spatial dynamics in the form of local land-use spillovers and spatially heterogeneous land characteristics.
We study the persistence of network segregation in networks characterized by the co-evolution of vertex attributes and link structures, in particular where individual vertices form linkages on the basis of similarity with other network vertices (homophily), and where vertex attributes diffuse across linkages, making connected vertices more similar over time (influence). A general mathematical model of these processes is used to examine the relative roles of homophily and influence in the maintenance and decay of network segregation in self-organizing networks. While prior work has shown that homophily is capable of producing strong network segregation when attributes are fixed, we show that adding even minute levels of influence is sufficient to overcome the tendency towards segregation, even in the presence of relatively strong homophily processes. This result is proven mathematically for all large networks and illustrated through a series of computational simulations that account for additional network evolution processes. This research contributes to a better theoretical understanding of the conditions under which network segregation and related phenomena – such as community structure – may emerge, which has implications for the design of interventions that may promote more efficient network structures.
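A hedged R sketch of the co-evolution described here: ties update homophilously (forming between similar vertices and dissolving between dissimilar ones), while a small fraction of events let attributes diffuse across existing ties. All parameters are illustrative; the paper's general model is mathematical rather than tied to this simulation.

set.seed(1)
n <- 100; p_influence <- 0.05; steps <- 20000
trait <- runif(n)                                # continuous vertex attribute
adj <- matrix(rbinom(n * n, 1, 0.05), n, n)      # random initial ties
adj[lower.tri(adj)] <- t(adj)[lower.tri(adj)]    # make the network undirected
diag(adj) <- 0

for (s in seq_len(steps)) {
  if (runif(1) < p_influence) {                  # influence: diffuse across ties
    i <- sample(n, 1); nbrs <- which(adj[i, ] == 1)
    if (length(nbrs) > 0)
      trait[i] <- trait[i] + 0.1 * (mean(trait[nbrs]) - trait[i])
  } else {                                       # homophily: ties track similarity
    ij <- sample(n, 2); i <- ij[1]; j <- ij[2]
    tie <- as.numeric(abs(trait[i] - trait[j]) < 0.2)
    adj[i, j] <- tie; adj[j, i] <- tie
  }
}
var(trait)   # consistent with the paper: even weak influence homogenizes attributes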
In reaction to the criticisms to which analytical sociology has been subject with increasing frequency, the article attempts an overall assessment of this research program by addressing the following questions: Where does contemporary analytical sociology come from? What are the differences between the “old” and the “new” analytical sociology? What does analytical sociology really consist of? Do the critics of analytical sociology have good reasons to be critical? Gross's 2009 ASR article is discussed in depth in order to answer the last question.