
Running online experiments using web-conferencing software

Published online by Cambridge University Press:  01 January 2025

Jiawei Li, Stephen Leider, Damian Beil, and Izak Duenyas
Affiliation: The University of Michigan, Ann Arbor, USA

Abstract

We report the results of a novel protocol for running online experiments that combines an online experimental platform with web-conferencing software in two formats (with and without subject webcams) to improve subjects' attention and engagement. We compare the results of our online sessions with offline (lab) sessions of the same experiment. We find that both online formats yield subject characteristics and performance comparable to the offline (lab) experiment. However, the webcam-on protocol produces less noisy data, and hence better statistical power, than the protocol without a webcam. The webcam-on protocol can detect reasonable effect sizes with a sample size comparable to that of the offline (lab) protocol.
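The abstract's link between "less noisy data" and "better statistical power" at a fixed sample size can be illustrated with a small Monte Carlo sketch. The numbers below (effect size, noise levels, group size) are hypothetical placeholders, not values from the paper, and the two-sample t-test is only a stand-in for whatever analyses the authors actually run.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(effect, sd, n, reps=2000, alpha=0.05):
    """Estimate power of a two-sample t-test by simulation.

    effect: true mean difference between treatment and control
    sd:     outcome standard deviation (the "noise" in the data)
    n:      subjects per arm
    """
    rejections = 0
    for _ in range(reps):
        control = rng.normal(0.0, sd, n)
        treatment = rng.normal(effect, sd, n)
        _, p = stats.ttest_ind(treatment, control)
        rejections += p < alpha
    return rejections / reps

# Same hypothetical effect size and sample size; only the noise differs.
print(simulated_power(effect=0.5, sd=1.0, n=40))  # lower-noise protocol (e.g., webcam-on)
print(simulated_power(effect=0.5, sd=1.5, n=40))  # higher-noise protocol
```

Under these assumed parameters, the higher-noise arm shows noticeably lower simulated power, which is the sense in which a protocol with tighter data can detect reasonable effect sizes at a lab-comparable sample size.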

Type
Original Paper
Copyright
Copyright © 2021 The Author(s), under exclusive licence to Economic Science Association


Supplementary material
Li et al. supplementary material (File, 524.3 KB)