
12 - Debriefing and Post-Experimental Procedures

from Part II - The Building Blocks of a Study

Published online by Cambridge University Press: 25 May 2023

Austin Lee Nichols, Central European University, Vienna
John Edlund, Rochester Institute of Technology, New York

Summary

The steps social and behavioral scientists take after a study ends are just as important as those taken before and during it. The goal of this chapter is to discuss the practical and ethical considerations that should be addressed before participants leave the physical or virtual study space. We review several post-experimental techniques, including debriefing, manipulation checks, attention checks, mitigating participant crosstalk, and probing for participant suspicion about the purpose of the study. Within this review, we address issues with implementing each post-experimental technique as well as best practices for its use, with emphasis on preventing validity threats and on accurately reporting the steps taken after the experiment ends. Finally, we stress the importance of continuing to develop and empirically test post-experimental practices, and offer suggestions for future research.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2023



Further Reading

For a detailed example of a funnel debriefing procedure and an empirical test of various post-experimental practices, including suspicion probing, we recommend the following article:

For further discussion of the history and progression of manipulation checks, as well as specific recommendations for their use, we recommend Table 4 in the following article:

We are proponents of manipulation checks (with the proper precautions), but criticisms of them deserve serious consideration. For further reading on critiques of manipulation check practices, we recommend the following article:

Blackhart, G. C., Brown, K. E., Clark, T., Pierce, D. L., & Shell, K. (2012). Assessing the adequacy of postexperimental inquiries in deception research and the factors that promote participant honesty. Behavior Research Methods, 44, 24–40. https://doi.org/10.3758/s13428-011-0132-6
Ejelöv, E. & Luke, T. (2020). “Rarely safe to assume”: Evaluating the use and interpretation of manipulation checks in experimental social psychology. Journal of Experimental Social Psychology, 87, 103937. https://doi.org/10.1016/j.jesp.2019.103937
Hauser, D., Ellsworth, P., & Gonzalez, R. (2018). Are manipulation checks necessary? Frontiers in Psychology, 9, 998. https://doi.org/10.3389/fpsyg.2018.00998

