Edited by Cait Lamberton, Wharton School, University of Pennsylvania; Derek D. Rucker, Kellogg School, Northwestern University, Illinois; Stephen A. Spiller, Anderson School, University of California, Los Angeles
A meta-analysis is a statistical analysis that combines and contrasts two or more studies of a common phenomenon. Its emphasis is on quantifying the heterogeneity in effects across studies, identifying moderators of this heterogeneity, and quantifying the association between such moderators and effects. In line with the growing appreciation in psychological research of heterogeneity not as a nuisance but as a boon for advancing theory, gauging generalizability, identifying moderators and boundary conditions, and planning future studies, we make the assessment of heterogeneity the focus of this chapter. Through two case studies, we illustrate how heterogeneity is assessed and the advantages that contemporary approaches to meta-analysis offer over the traditional approach. Following the case studies, we review several important considerations relevant to meta-analysis and conclude with a brief summation.
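To make the heterogeneity quantities mentioned above concrete, the following minimal sketch pools a handful of made-up effect sizes with a DerSimonian-Laird random-effects model and reports Cochran's Q, the between-study variance tau-squared, and I-squared. It is a hypothetical illustration, not the chapter's own analysis or data.

```python
# Minimal illustration (hypothetical data): DerSimonian-Laird random-effects
# meta-analysis with basic heterogeneity statistics.
import math

effects   = [0.42, 0.18, 0.55, 0.07, 0.31]   # hypothetical study effect sizes
variances = [0.02, 0.05, 0.03, 0.04, 0.02]   # hypothetical within-study variances

# Fixed-effect weights and pooled estimate
w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q, tau^2 (between-study variance), and I^2
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Random-effects weights and pooled estimate
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

print(f"Q = {q:.2f}, tau^2 = {tau2:.3f}, I^2 = {i2:.1f}%")
print(f"Random-effects estimate = {pooled:.3f} (SE = {se:.3f})")
```

In practice, researchers typically rely on dedicated tools (for example, the metafor package in R or statsmodels' meta-analysis utilities in Python) rather than hand-rolled computations; the sketch above only illustrates the quantities the chapter discusses.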
Online platforms such as Amazon's Mechanical Turk (MTurk), CloudResearch, and Prolific have become a common source of data for behavioral researchers and consumer psychologists alike. This chapter reviews contemporary issues associated with online panel research, discussing first how the COVID-19 pandemic affected both the extent to which researchers use online panels and the workers who participate on certain panels. The chapter also explores how factors such as a viral TikTok video can shape who uses these panels and why. A longitudinal study of researcher perceptions and data quality practices finds that many practices do not align with current recommendations. The authors provide several recommendations for conducting high-quality behavioral research online, including applying appropriate prescreens before data collection, preregistering data analysis plans, and avoiding post-screens after data collection that are not preregistered. Finally, the authors recommend that researchers thoroughly report details on recruitment, restrictions, completion rates, and any differences in dropout rates across conditions.
This chapter assesses how consumer research defines a "field experiment," examines trends in field experimentation in consumer research journals, explores the advantages and shortcomings of field experimentation, and assesses the status and value of open science practices for field experiments. These assessments yield four insights. First, the field of consumer research has no consensus definition of a field experiment, though an established taxonomy helps determine the extent to which any given field experiment differs from traditional lab settings. Second, about 7 percent of the published papers in one of the top consumer psychology journals include some form of field experiment, a small but growing proportion. Third, although field experimentation can be useful for providing evidence of external validity and estimating real-world effect sizes, no single lab or field study offers complete generalizable insight; rather, each well-designed, high-powered study adds to the collection of findings that converge to advance our understanding. Finally, open science practices are useful for bridging scientific findings from field experiments with real-life applications.