The implicit revolution seems to have arrived with the declaration that “explicit measures are informed by and (possibly) rendered invalid by unconscious cognition.” What is the view from survey research, which has relied on explicit methodology for over a century, and whose methods have extended to the political domain in ways that have changed the landscape of politics in the United States and beyond? One survey researcher weighs in. The overwhelming evidence points to the continuing power of explicit measures to predict voting and behavior. Whether implicit measures can do the same, especially beyond what explicit measures can do, is far more ambiguous. The analysis further raises doubts, as others have done before, as to what exactly implicit measures measure, and particularly questions implicit researchers’ co-opting of the word “attitude” when such measures instead represent associations. The conclusion: Keep your torches at home. There is no revolution.
Much of the literature on first (L1) and second language (L2) reading agrees that there are noticeable behavioral differences between L1 and L2 readers of a given language, as well as between L2 speakers with different L1 backgrounds (e.g., Finnish vs German readers of English). Yet this literature often overlooks potential variability between multiple samples of speakers of the same L1. This study examines this intersample variance using reading data from the ENglish Reading Online (ENRO) database of English reading behavior, comprising 27 university student samples from 15 distinct L1 backgrounds. We found that the intersample variance within L2 readers of English with the same L1 background (e.g., two samples of Russian speakers) often overshadowed the difference between samples of L2 readers with different L1 backgrounds (e.g., Russian vs Chinese speakers of English). We discuss these and other problematic methodological implications of representing each L1 background with a single participant sample.
The move from theory to empirics requires figuring out how to collect evidence that could support or disconfirm hypotheses derived from your theory. Empirically studying the network in your theory requires two steps: determining which nodes to include in your data and operationalizing the link type. This chapter helps readers select the boundary that contains the nodes of interest, pointing out some subtle downsides to random sampling in network studies. It also helps readers determine whether they want to measure full networks or ego networks and offers pointers on operationalizing link types.
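As a rough illustration of the full-network versus ego-network distinction discussed above (the graph, node names, and the use of the networkx library are illustrative assumptions, not taken from the chapter):

```python
import networkx as nx

# Hypothetical collaboration network; node names are illustrative only.
G = nx.Graph()
G.add_edges_from([("ana", "ben"), ("ana", "cruz"), ("ben", "cruz"),
                  ("cruz", "dev"), ("dev", "eli")])

# Full network: every node inside the chosen boundary plus all links among them.
full_network = G

# Ego network: one focal node ("ego"), its direct contacts, and the links among them.
ego_network = nx.ego_graph(G, "cruz", radius=1)
print(sorted(ego_network.nodes()))  # ['ana', 'ben', 'cruz', 'dev']
```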
Students are introduced to the logic, foundation, and basics of statistical inference. The need for samples is discussed first, followed by how samples can be used to make inferences about the larger population. The normal distribution is then discussed, along with Z-scores, to illustrate basic probability and the logic of statistical significance.
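A minimal sketch of the Z-score logic mentioned above (all numbers are hypothetical teaching values, not taken from the chapter):

```python
import math

def z_score(x, mu, sigma):
    """Standardize a value: how many standard deviations x lies from the mean."""
    return (x - mu) / sigma

# Illustrative values: a sample mean of 105 from n = 25 observations, drawn from
# a population with mean 100 and standard deviation 15.
mu, sigma, n = 100, 15, 25
sample_mean = 105

# The standard error of the mean shrinks as the sample grows.
se = sigma / math.sqrt(n)
z = z_score(sample_mean, mu, se)
print(f"z = {z:.2f}")  # z = 1.67: the sample mean lies 1.67 standard errors above mu
```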
This chapter gives a brief overview of the book. Notations for signal representation in continuous time and discrete time are introduced. Both one-dimensional and two-dimensional signals are introduced, and simple examples of images are presented. Examples of noise removal and image smoothing (filtering) are demonstrated. The concept of frequency is introduced, and its importance and role in signal representation are explained using musical notes as examples. The history of signal processing, the role of theory, and the connections to real-life applications are mentioned in an introductory way. The chapter also draws attention to the impact of signal processing in digital communications (e.g., cell-phone communications), gravitational-wave detection, deep space communications, and so on.
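For instance, a moving-average filter is one simple way to perform the kind of noise removal and smoothing mentioned above (the signal, noise level, and window length below are illustrative assumptions, not the chapter's examples):

```python
import numpy as np

# Hypothetical noisy signal: a slow sinusoid plus random noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 5 * t)              # a 5 Hz tone, loosely "a musical note"
noisy = signal + 0.3 * rng.standard_normal(t.size)

# Moving-average (low-pass) filter: each output sample averages its neighbours,
# attenuating high-frequency noise while preserving the slow oscillation.
window = np.ones(9) / 9
smoothed = np.convolve(noisy, window, mode="same")
```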
This introductory chapter sets out the aim of the project, which is to reassess the social and cultural relations between the Aegean and the Mediterranean through a new examination of some of the earliest Greek pottery finds overseas. The focus is on Protogeometric and Geometric ceramics from Greek and Phoenician colonies, certain Phoenician metropolises and further Indigenous sites in the Aegean and the Mediterranean, which were analysed by neutron activation analysis. The analytical results are examined against the background of the social and economic relations that were generated through the production, exchange and consumption of the pottery finds under scrutiny.
Chapter 3 focuses on WTO disputes about anti-dumping issues. Anti-dumping, or more specifically zeroing, is the single most litigated issue under WTO law. Although the Appellate Body has found the zeroing method inconsistent with the Anti-Dumping Agreement (ADA) several times, it is still being used with small alterations. Chapter 3 shows that the so-called jewel in the crown is sometimes ineffective. To do so, the role of the DSM in the anti-dumping issue is presented and anti-dumping cases dealing with procedural issues are analysed. The procedural issues covered in this chapter are: calculation methods, transparency, public notice/notification, selection of investigated parties (sampling), submission of evidence and rebuttals, access to non-confidential files, hearings, newcomers and enforcement.
The album Slave to the Rhythm is typical of the exaltation of pop stars but atypical in its presentation and interaction with biographical material. Three crossings are considered in this assessment of the work: technological, cultural, and structural. These are presented with a detailed track-by-track analysis using a range of signal processing techniques, some adapted specifically for this project. This Element focuses on the combination of digital, novel, and analogue technology that was used, and the organisational and transformational treatments of recorded material it offered, along with their associated musical cultures. The way in which studio technology functions, and offers interaction with its users, has a direct influence over the sound of the music that is created with it. To understand how that influence is manifested in Slave, there is considerable focus on the development and use of music technology.
This chapter explores the evolution of the djent subgenre from the perspective of the musical, technological and environmental factors that have shaped its identity. The chapter considers the early circumstances of djent’s emergence during the early-to-mid 2000s, with particular reference to the online culture which contributed to its wider transmission and proliferation. Key musical influences are also discussed, including djent’s roots in progressive metal and the work of bands such as Meshuggah and SikTh, as well as the subgenre’s interaction with electronic music aesthetics and popular music. A principal focus of the chapter is the role of emerging digital technologies, particularly Digital Audio Workstations (DAWs) and digital amplifier and drum kit modelling software, in the formation of djent’s musical and sonic characteristics. Finally, the chapter considers djent’s position as a subgenre within modern metal music and evaluates, with reference to the critical reception literature, the debates that persist concerning its legitimacy within metal.
This chapter describes how relationship scientists conduct research to answer questions about relationships. It explains all aspects of the research process, including how hypotheses are derived from theory, which study designs (e.g., experiments, cross-sectional studies, experience sampling) best suit specific research questions, how scientists can manipulate variables or measure variables with self-report, implicit, observational, or physiological measures, and what scientists consider when recruiting a sample to participate in their studies. This chapter also discusses how researchers approach questions about boundary conditions (when general trends do not apply) and mechanisms (the processes underlying their findings) and describes best practices for conducting ethical and reproducible research. Finally, this chapter includes a guide for how to read and evaluate empirical research articles.
The Introduction sets out how the number of forcibly displaced persons in the world is the highest ever recorded. Violence associated with armed conflict has become the main cause of forced displacement in the twenty-first century, and most refugees are fleeing armed conflicts. Most asylum seekers in the European Union (EU) originate from Syria, Afghanistan and Iraq. However, there are many misconceptions about whether persons fleeing armed conflicts are refugees as defined by the Refugee Convention. This book is thus an enquiry into the continued relevance of the Refugee Convention and examines the extent to which asylum appellate authorities in the EU take into account the changing nature of contemporary armed conflicts. The book also explores how the Refugee Convention may be interpreted in a manner that better responds to the changed nature of contemporary armed conflicts from a gender perspective, thus reconceptualising the concept of the refugee. The Introduction sets out the conceptual notions adopted in the book, such as the importance of distinguishing between violence and armed conflicts, as well as the research methodology and the sampling of 320 asylum appeal decisions from Belgium, Denmark, France, the Netherlands, Spain and the UK. Finally, it sets out the structure of the book.
The majority of research papers in computer-assisted language learning (CALL) report on primarily quantitative studies measuring the effectiveness of pedagogical interventions in relation to language learning outcomes. These studies are frequently referred to in the literature as experiments, although this designation is often incorrect because of the approach to sampling that has been used. This methodological discussion paper provides a broad overview of the current CALL literature, examining reported trends in the field that relate to experimental research and the recommendations made for improving practice. It finds that little attention is given to sampling, even in review articles. This indicates that sampling problems are widespread and that there may be limited awareness of the role of formal sampling procedures in experimental reasoning. The paper then reviews the roles of two key aspects of sampling in experiments: random selection of participants and random assignment of participants to control and experimental conditions. The corresponding differences between experimental and quasi-experimental studies are discussed, along with the implications for interpreting a study’s results. Acknowledging that genuine experimental sampling procedures will not be possible for many CALL researchers, the final section of the paper presents practical recommendations for improved design, reporting, review, and interpretation of quasi-experimental studies in the field.
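A minimal sketch of the two sampling steps discussed in the paper, random selection and random assignment (pool size, sample size, and labels below are hypothetical):

```python
import random

# Hypothetical participant pool; names and group sizes are illustrative only.
population = [f"learner_{i}" for i in range(500)]

# Random selection: draw the study sample from the population of interest.
random.seed(42)
sample = random.sample(population, 40)

# Random assignment: split the selected sample into control and experimental
# conditions. Quasi-experiments typically lack one or both of these steps.
random.shuffle(sample)
control, experimental = sample[:20], sample[20:]
```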
How do people judge the sizes of things? What determines people’s evaluations of quantities such as prices or wages? People’s judgements and evaluations are typically relative; the same quantity will be judged or evaluated differently when it appears in different comparison samples. This chapter describes a simple psychological account – the Decision by Sampling model – of how sample-based judgements and evaluations are made. According to the model, what matters is the relative ranked position of an item within a comparison sample. For example, an income of $50,000 a year will be evaluated more favourably in relation to one higher and five lower incomes than within a context of four lower and two higher incomes. According to Decision by Sampling, estimates of the relative ranked position of items within comparison contexts are made by simple sampling and ordinal comparison processes. These estimates are assumed to underpin choice and valuation. The chapter reviews the Decision by Sampling model, relates it to other models such as Adaptation Level Theory and Range Frequency Theory, and shows how it can explain the shape of utility curves and probability weighting functions. The relation of coding efficiency to rank-based models is also discussed.
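A minimal sketch of the rank-based evaluation idea behind Decision by Sampling (the comparison incomes below are hypothetical, not the chapter's):

```python
def relative_rank(target, comparison_sample):
    """Proportion of the comparison sample that the target value matches or exceeds,
    a simple stand-in for the ordinal-comparison process described above."""
    below_or_equal = sum(1 for x in comparison_sample if x <= target)
    return below_or_equal / len(comparison_sample)

# Illustrative contexts: the same $50,000 income ranks differently against
# mostly lower incomes than against mostly higher ones.
income = 50_000
mostly_lower = [30_000, 35_000, 40_000, 42_000, 45_000, 60_000]
mostly_higher = [40_000, 45_000, 55_000, 60_000, 70_000, 80_000]
print(relative_rank(income, mostly_lower))   # high ranked position -> favourable
print(relative_rank(income, mostly_higher))  # low ranked position -> less favourable
```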
Sequential decisions from sampling are common in daily life: we often explore alternatives sequentially, decide when to stop such exploration process, and use the experience acquired during sampling to make a choice for what is expected to be the best option. In decisions from experience, theories of sampling and experiential choice are unable to explain the decision of when to stop the sequential exploration of alternatives. In this chapter, we propose a mechanism to inductively generate stopping decisions, and we demonstrate its plausibility in a large and diverse human data set of the binary choice sampling paradigm. Our proposed stopping mechanism relies on the choice process of a theory of experiential choice, Instance-Based Learning Theory (IBLT). The new stopping mechanism tracks the relative prediction errors of the two options during sampling, and stops when such difference is close to zero. Our results from simulation are able to accurately predict human stopping decisions distributions in the dataset. This model provides an integrated theoretical account of decisions from experience, where the stopping decisions are generated inductively from the sampling process.
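The following toy sketch illustrates the flavour of such an error-based stopping rule; it is not the authors' IBLT implementation, and the payoff probabilities, learning rate, and threshold are assumptions:

```python
import random

def sample_until_stop(p_a, p_b, threshold=0.05, alpha=0.2,
                      min_samples=10, max_samples=200, seed=1):
    """Toy stopping rule in the spirit described above: sample two binary-payoff
    options alternately, track a smoothed prediction error for each, and stop
    once the two errors have converged to (nearly) the same value."""
    rng = random.Random(seed)
    estimate = {"A": 0.5, "B": 0.5}   # running payoff expectations
    error = {"A": 1.0, "B": 1.0}      # running (smoothed) absolute prediction errors
    for trial in range(max_samples):
        option = "A" if trial % 2 == 0 else "B"
        p = p_a if option == "A" else p_b
        outcome = 1.0 if rng.random() < p else 0.0
        pe = abs(outcome - estimate[option])              # prediction error
        error[option] += alpha * (pe - error[option])
        estimate[option] += alpha * (outcome - estimate[option])
        if trial + 1 >= min_samples and abs(error["A"] - error["B"]) < threshold:
            return trial + 1, estimate                    # stop sampling, then choose
    return max_samples, estimate

n_samples, estimates = sample_until_stop(p_a=0.8, p_b=0.3)
```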
We describe three sampling models that aim to cast light on how some design features of social media platforms systematically affect judgments of their users. We specify the micro-mechanisms of belief formation and interactions and explore their macro implications such as opinion polarization. Each model focuses on a specific aspect of platform-mediated social interactions: how popularity creates additional exposure to contrarian arguments; how differences in popularity make an agent more likely to hear particularly persuasive arguments in support of popular options; and how opinions in favor of popular options are reinforced through social feedback. We show that these mechanisms lead to self-reinforcing dynamics that can result in local opinion homogenization and between-group polarization. Unlike nonsampling-based approaches, our focus does not lie in peculiarities of information processing such as motivated cognition but instead emphasizes how structural features of the learning environment contribute to opinion homogenization and polarization.
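As a generic illustration of how sampling-based social feedback can homogenize opinions (this toy simulation is not one of the chapter's three models; the agent count, sample size, and update rule are assumptions):

```python
import random

def simulate_feedback(n_agents=100, n_rounds=2000, n_sampled=5, seed=7):
    """Toy sketch: each agent holds a binary opinion, samples a few peers each
    round, and adopts the majority view of that sample. Popular opinions are
    sampled more often, so local majorities become self-reinforcing and the
    population drifts toward homogenization."""
    rng = random.Random(seed)
    opinions = [rng.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(n_rounds):
        agent = rng.randrange(n_agents)
        peers = rng.sample(range(n_agents), n_sampled)
        majority = round(sum(opinions[p] for p in peers) / n_sampled)
        opinions[agent] = majority
    return sum(opinions) / n_agents  # share of agents holding opinion 1

print(simulate_feedback())
```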
Research into evaluative learning has focused almost exclusively on passive learning. That is, this research tradition is built on paradigms that minimize participants’ autonomy to exert control over the stimuli they learn about at a certain point in time. These paradigms thereby neglect the individuals’ preparedness to process certain information, although evidence is accumulating that individuals are not merely passive recipients of information but that they enrich stimuli with self-generated information. Moreover, in their daily lives, individuals have plenty of opportunities to create their own learning environments. This chapter first provides a definition of preparedness that embraces a constructivist view on evaluative learning. We then review the method and results of a recently developed sampling approach to evaluative learning and relate our findings back to our definition of preparedness. We show that the sampling approach to evaluative learning generates intriguing new findings and a variety of relevant questions for future research.
People revisit the restaurants they like and avoid the restaurants with which they had a poor experience. This tendency to approach alternatives believed to be good is usually adaptive but can lead to a systematic bias. Errors of underestimation (an alternative is believed to be worse than it is) will be less likely to be corrected than errors of overestimation (an alternative is believed to be better than it is). Denrell & March (2001) called this asymmetry in error correction the “Hot Stove Effect.” This chapter explains the basic logic behind the Hot Stove Effect and how this asymmetry can explain a range of judgment biases. We review empirical studies that illustrate how risk aversion and mistrust can be explained by the Hot Stove Effect. We also explain why even a rational algorithm can be subject to the same bias.
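A minimal simulation sketch of this logic (the parameters and learning rule are illustrative assumptions, not Denrell & March's specification):

```python
import random

def simulate_hot_stove(true_mean=0.0, noise=1.0, prior=0.0,
                       alpha=0.3, n_periods=100, n_agents=10_000, seed=3):
    """Toy Hot Stove simulation: an agent chooses between a risky option and a
    safe option worth 0. The risky option is sampled, and its estimate updated,
    only while the current estimate is >= 0, so overly negative estimates tend
    to freeze while overly positive ones keep getting corrected."""
    rng = random.Random(seed)
    final_estimates = []
    for _ in range(n_agents):
        estimate = prior
        for _ in range(n_periods):
            if estimate >= 0:                      # approach only if it looks good
                payoff = rng.gauss(true_mean, noise)
                estimate += alpha * (payoff - estimate)
        final_estimates.append(estimate)
    return sum(final_estimates) / n_agents

# Even though the risky option's true mean equals the safe payoff (0),
# the average final estimate comes out negative: the Hot Stove Effect.
print(simulate_hot_stove())
```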
The “Hot Stove Effect” pertains to an asymmetry in error correction that affects a learner who estimates the quality of an option based on his or her experience with the option: errors of overestimation of the quality of an option are more likely to be corrected than errors of underestimation. In this chapter, we describe a “Collective Hot Stove Effect,” which characterizes the dynamics of collective valuations rather than individual quality estimates. We analyze settings in which the collective valuation of options is updated sequentially based on additional samples of information. We focus on cases where the collective valuation of an option is more likely to be updated when it is higher than when it is lower. Just as the law of effect implies a Hot Stove Effect for individual learners, a Collective Hot Stove Effect emerges: errors of overestimation of the quality of an object by the collective valuation are more likely to be corrected than errors of underestimation. We test the unique predictions of our model in an online experiment and test assumptions and predictions of our model in analyses of large datasets of online ratings from popular websites (Amazon.com, Yelp.com, Goodreads.com, Weedmaps.com) comprising more than 160 million ratings.
When faced with a choice under incomplete knowledge, people can turn to the practical option of actively collecting information and ultimately deciding from experience. Here we review the dynamic interplay between perceiving and acting that arises during these decisions: What the person sees and experiences depends on how the person acts, and how the person acts depends on what the person has seen and experienced. We also review how this interaction and choice can be crucial to understanding risk-taking and how it can help advance our understanding of human competence. Finally, we contend that a truly successful model of how people make decisions from experience will capture this dynamic interplay.
Inductive reasoning involves generalizing from samples of evidence to novel cases. Previous work in this field has focused on how sample contents guide the inductive process. This chapter reviews a more recent and complementary line of research that emphasizes the role of the sampling process in induction. In line with a Bayesian model of induction, beliefs about how a sample was generated are shown to have a profound effect on the inferences that people draw. This is first illustrated in research on beliefs about sampling intentions: was the sample generated to illustrate a concept or was it generated randomly? A related body of work examines the effects of sampling frames: beliefs about selection mechanisms that cause some instances to appear in a sample and others to be excluded. The chapter describes key empirical findings from these research programs and highlights emerging issues such as the effect of timing of information about sample generation (i.e., whether it comes before or after the observed sample) and individual differences in inductive reasoning. The concluding section examines how this work can be extended to more complex reasoning problems where observed data are subject to selection biases.
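As a toy illustration of how beliefs about sample generation can change a Bayesian learner's inferences (the hypothesis space and observations below are hypothetical, and the "generated from the concept" versus "generated randomly" contrast is the standard strong- versus weak-sampling distinction rather than the chapter's specific materials):

```python
# Two candidate integer-range concepts and two observed positive examples.
hypotheses = {"10-30": range(10, 31), "15-21": range(15, 22)}
observations = [16, 20]

def posterior(strong_sampling: bool):
    scores = {}
    for name, extension in hypotheses.items():
        if not all(x in extension for x in observations):
            scores[name] = 0.0
            continue
        # Strong sampling: examples are drawn from the concept itself, so small,
        # tight hypotheses are favoured (the "size principle").
        # Weak sampling: examples are generated independently of the concept,
        # so mere consistency is all that matters.
        like = (1 / len(extension)) ** len(observations) if strong_sampling else 1.0
        scores[name] = like            # uniform prior over hypotheses
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

print(posterior(strong_sampling=True))   # the tight range "15-21" dominates
print(posterior(strong_sampling=False))  # both consistent hypotheses tie
```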