1 Introduction
A rapidly growing number of behavioural researchers use online experiments to study human decision-making. Online labour markets, such as Amazon Mechanical Turk (MTurk; www.mturk.com) and Prolific (www.prolific.co), allow researchers to conveniently recruit participants for experiments and compensate them for their efforts. The quality of data from online experiments is generally deemed comparable to data obtained in the laboratory (Berinsky et al. 2012; Buhrmester et al. 2011; Hauser and Schwarz 2016; Mason and Suri 2012; Paolacci and Chandler 2014; Paolacci et al. 2010; Snowberg and Yariv 2018; Thomas and Clifford 2017; but see Hergueux and Jacquemet 2015), making online experimentation a promising complement to laboratory research. However, online experiments have typically used non-interactive tasks that participants complete on their own, either using survey software (e.g., SurveyMonkey, Qualtrics) to document decisions or emulating social interactions by using the strategy method and matching participants post hoc. Online studies using designs with live interactions between participants have typically employed tailor-made software (Egas and Riedl 2008; Gallo and Yan 2015; Nishi et al. 2015; Schmelz and Ziegelmeyer 2015; Suri and Watts 2011; Wang et al. 2012).
A number of software platforms are currently available for conducting experiments via the Internet, at varying stages of development. Typically, using these platforms for interactive experiments online requires considerable programming skills and involves substantial installation and setup times. Moreover, these platforms do not provide integrated methods to address the specific logistic and methodological challenges of conducting interactive experiments with participants recruited online (Arechar et al. 2018). As a result, the online use of interactive designs has largely been restricted to experimenters with advanced technical skills or considerable financial resources, limiting the potential of online experimentation for behavioural research.
Here we introduce LIONESS Lab, a free web-based platform that enables experimenters to develop, test, run, and share their interactive experiments online. The software is developed and maintained at the Centre for Decision Research and Experimental Economics (University of Nottingham, UK) and the Chair of Economic Theory (University of Passau, Germany) and can be accessed via https://lioness-lab.org. LIONESS stands for Live Interactive ONline Experimental Server Software. LIONESS Lab provides an intuitive online interface to develop LIONESS experiments. LIONESS experiments include a standardized set of methods to deal with the typical challenges arising when conducting interactive experiments online (Arechar et al. 2018). These methods reflect current ‘best practices’: preventing participants from entering a session more than once, facilitating on-the-fly formation of interaction groups, reducing waiting times for participants, driving down attrition by retaining the attention of online participants and, importantly, adequately handling cases in which participants drop out.
A key distinguishing feature of LIONESS Lab is that researchers need only minimal programming skills to develop and conduct their own interactive online experiments. The LIONESS Lab platform provides a user-friendly environment to create and edit LIONESS experiments in a point-and-click fashion. No installation is needed, and programming is only required for calculations specific to the researcher’s experimental design. At the same time, LIONESS Lab supports adding JavaScript to experiments, allowing for great flexibility in implementing design features (for example, dynamic elements or conditional display). Experimenters can create new experiments from scratch or import existing designs (e.g., created by other researchers) through a repository. The repository enables experimenters to share their experimental designs with co-workers and other colleagues, allowing researchers to base their experiments on the designs of others and facilitating transparency and replicability of research (Camerer et al. 2016; Munafò et al. 2017; Open Science Collaboration 2015).
Testing LIONESS experiments is facilitated by a test mode including a ‘robot’ feature that simulates participant responses (or can be programmed to generate custom responses). Experiments can be downloaded and used on the experimenter’s own server. Participants access the experiment through a link (e.g., posted on MTurk or Prolific). Experimenters can monitor the progress of a session through a control panel. Upon session completion, data can be exported as a spreadsheet ready for analysis. This spreadsheet includes a tab for automating the performance-based payment of participants through online labour markets.
2 Overview
Figure 1 provides a general overview of LIONESS Lab. Experimenters access the platform using a free-of-charge account, where they can develop, test and share their LIONESS experiments. Developing, testing, and sharing are done on the LIONESS Lab server, allowing for an easy start with no need to set up a server. Once a LIONESS experiment has been developed, it can be downloaded and used on a personal server, giving experimenters full control over their experimental data and allowing them to adjust server capacities according to their needs. Each experiment is a standalone program comprising a set of files defining the experimental screens that participants will navigate during a session. This program incorporates standardised methods for dealing with participant dropout, one of the most challenging aspects of interactive online experimentation (Arechar et al. 2018). These methods have been thoroughly tested in a range of different interactive and non-interactive experimental designs, conducted by various research groups and with participants recruited online and in the field (for a list of publications based on LIONESS experiments, see Table 2 in the Appendix; ongoing projects include LIONESS experiments with online groups of up to 16 participants and over 1000 participants per session). For a demo experiment, see https://lioness-lab.org/demo.
Researchers and experimental participants use LIONESS Lab online; no installation procedures are needed. Participants can complete experiments on any device with an internet connection, including laptops, tablets, and smartphones. LIONESS Lab was specifically designed for conducting interactive experiments online with participants recruited from crowd-sourcing platforms or from the participant pools of research institutions. However, it can also be used in the laboratory (Arechar et al. 2018; Molleman and Gächter 2018). An important advantage of this portability is that the screens of experimental participants are exactly the same in the physical lab and online, facilitating comparisons between the two. Furthermore, it enables experimenters to complement online studies with data from the physical lab, for example, to test the robustness of their results under more highly controlled lab conditions (a request that may well occur during the refereeing process for papers based solely on online data).
3 Developing experiments
Experimenters can register for an account at https://lioness-lab.org. Upon login, one can either start developing a LIONESS experiment from scratch or build upon existing experiments imported through the repository (Sect. 6).
LIONESS Lab aims to make the development of experiments as simple and intuitive as possible. To this end, the user interface allows for developing experiments stage by stage. Each stage corresponds to an experimental screen which participants will navigate during a session. Experimenters can define each stage in a point-and-click fashion by adding ‘elements’ in the order they want them to appear on the participants’ screens. The development interface largely displays these elements in the way they will be shown to experimental participants.
Figure 2 illustrates how stages are specified in LIONESS Lab. In many economic experiments, participants are required to provide numerical input (e.g., place a bid in an auction, or make a contribution to a public good). To specify such an input stage, an experimenter needs two elements: a ‘numeric input’ and a ‘button’ to submit the input and continue to the next stage. Experimenters add these elements to a stage with a dropdown menu and specify accompanying texts to show to participants. In the button element, the experimenter specifies which stage participants will be directed to next, and adds a condition upon which participants can continue to that next stage (e.g., ‘as soon as possible’, or ‘wait for others’).
A range of different types of elements can be added to stages. These include text boxes to display information, a chat box, and various input boxes for recording participants’ responses, such as numeric or text input fields, sliders, radio buttons, and discrete choices. Participants’ input is automatically validated before it is written to the database which stores all responses in the experiment. For each input element, experimenters can specify ‘display conditions’ so that the element is only displayed when certain conditions are met. This is useful, for example, when defining treatment variations, as the sketch below illustrates.
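As a concrete illustration of a display condition, the following minimal sketch shows the kind of check involved. The element id and the treatment variable are hypothetical; in LIONESS Lab such conditions are entered in a form field rather than hand-coded.

```javascript
// Minimal sketch of a display condition (hypothetical names; in LIONESS Lab
// this check is specified via the interface, not written by hand).
// Assumes a page element <div id="punishmentInput"> exists.
var treatment = 'punishment'; // would come from the experiment's database

function shouldDisplay() {
  // Show the punishment input only in the 'punishment' treatment
  return treatment === 'punishment';
}

var element = document.getElementById('punishmentInput');
if (element) {
  element.style.display = shouldDisplay() ? 'block' : 'none';
}
```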
Dynamic features can be added to stages with JavaScript elements, which allow for manipulation of variables. JavaScript elements provide predefined functions to access data from (and write data to) the experiment’s database, and can be used to perform any calculations needed for the specific experimental design at hand. An example of a simple JavaScript program is shown in Fig. 3. JavaScript is the most widely used language for programming dynamic webpages and is supported by all modern web browsers. Due to its large user base, solutions for programming issues can be found easily. Experimenters with more advanced programming skills can use the great flexibility of JavaScript to incorporate their own custom-made components into experimental stages.
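Figure 3 is not reproduced here, but a JavaScript element computing payoffs in a public goods game might look roughly like the following sketch. The helpers getValue() and setValue() are stand-ins for the predefined database accessors mentioned above; their exact names and signatures are assumptions on our part.

```javascript
// Hedged sketch of a JavaScript element for a public goods game.
// getValue()/setValue() are stand-ins for LIONESS Lab's predefined database
// functions; treat their names and signatures as assumptions.
var endowment = 20;   // points given to each participant
var mpcr = 0.5;       // marginal per-capita return of the public good

var ownContribution = getValue('contribution');      // this participant's input
var groupTotal = getValue('groupTotalContribution'); // sum over the group

// Standard linear public goods payoff
var payoff = endowment - ownContribution + mpcr * groupTotal;
setValue('payoff', payoff); // store the result for display in a later stage
```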
Stages have two different ‘screens’: an ‘active screen’ and an optional ‘waiting screen’. Many experimental designs require that participants may only proceed to the next stage once all members of their group have submitted their responses in the current stage. In these cases, participants are directed to the waiting screen of that stage after making their decision. As soon as all group members have made their decisions, they are all directed to the next stage (see the conceptual sketch below).
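Conceptually, a waiting screen amounts to polling until the whole group has submitted. The sketch below is a generic illustration of this logic only; in LIONESS experiments the actual coordination is handled by the server and the control panel.

```javascript
// Generic illustration of wait-for-group logic; LIONESS handles this
// server-side, so this sketch is conceptual only.
function waitForGroup(haveAllSubmitted, onReady, intervalMs) {
  var timer = setInterval(function () {
    if (haveAllSubmitted()) { // e.g., a request checking the server's database
      clearInterval(timer);
      onReady();              // direct this participant to the next stage
    }
  }, intervalMs || 500);
}

// Usage sketch: poll every 500 ms with a placeholder readiness check
waitForGroup(
  function () { return Math.random() < 0.1; }, // placeholder check
  function () { console.log('All group members ready; proceeding.'); }
);
```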
4 Test experiments
At any point during the development of their LIONESS experiment, users can choose to compile and test the experiment. Once the compilation process has finished—usually within less than a second—the control panel of the experiment opens in a new browser tab. The control panel is the centre of a LIONESS experiment, responsible for general coordination (Sect. 5). It includes a switch to activate a ‘test mode’ designed to facilitate testing during development. The test mode allows experimenters to start multiple mock participants (called ‘test players’) within the same browser, or to start a ‘robot’ participant generating random responses in each of the stages. Using robots is particularly useful for testing designs for larger groups, so that the researcher can test the experiment from the perspective of one group member, while the responses of the other group members are generated automatically.
While testing, experimenters can check the design of their experimental screens and make adjustments as they go, updating a screen in LIONESS Lab and refreshing the corresponding screen they are viewing as a test player. In the control panel, experimenters can verify whether all variables are correctly recorded in the database. Experimenters can share the link to the experiment with collaborators, who can then instantly view the implemented design. The development interface also allows for downloading all experimental screens as a single webpage which can then be shared with collaborators for checking.
Once experimenters have finished specifying their LIONESS experiment, they can download it by clicking a button. This yields a standalone program that can subsequently be uploaded to a privately owned server. This setup ensures that experimenters have full control over the data generated by their participants and that they can tailor server capacity to the prospective number of participants.
5 Conducting experiments online
Once the experimental screens have been specified and the LIONESS experiment has been uploaded to the experimenter’s server, participants can be invited (e.g., via MTurk or Prolific) and data collection can start. Here, we briefly describe how LIONESS experiments deal with the challenges of conducting interactive experiments, from starting up a session, through the interaction phase, to payment of participants. We focus on the technical aspects of LIONESS Lab; an extensive methodological discussion of conducting interactive experiments online can be found in Arechar et al. (2018).
Start-up phase Before starting a session, experimenters go to the control panel of their LIONESS experiment, where they find a web link through which participants can access the experiment. Once the experiment is ‘activated’ by clicking a button in the control panel, the link directing prospective participants to the experiment can be posted as a ‘job’ on a crowd-sourcing website. By default, LIONESS experiments store participants’ IPs (after one-way encryption, to meet data privacy requirements) to block participants who try to enter a session more than once. After a further check for browser support, a participant can enter the experiment. After reading instructions, participants typically complete control questions. LIONESS Lab allows for adding ‘quiz’ stages, automatically recording the number of attempts participants needed to solve each item. It is also possible to specify a maximum number of failed attempts, after which a participant is excluded from further participation in the experiment.
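To make the duplicate-participant check concrete: one-way encryption means only a hash of each IP is stored, so the raw address is never retained. The following Node.js sketch illustrates the idea; LIONESS implements this server-side, and the names below are our own.

```javascript
// Illustrative one-way hashing of IPs for duplicate detection (Node.js).
// LIONESS implements this server-side; names here are our own.
const crypto = require('crypto');

const seenHashes = new Set(); // in practice: a database table

function isReturningParticipant(ip) {
  // One-way hash: the raw IP is never stored, meeting privacy requirements
  const hash = crypto.createHash('sha256').update(ip).digest('hex');
  if (seenHashes.has(hash)) return true; // block repeated entry
  seenHashes.add(hash);
  return false;
}

console.log(isReturningParticipant('203.0.113.7')); // false (first visit)
console.log(isReturningParticipant('203.0.113.7')); // true  (blocked)
```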
Once participants have read the instructions and successfully completed the quiz, they are ready to be matched. Matching takes place in a ‘lobby’ stage, in which they wait for the other participants with whom they will form a group. Experimenters can choose to inform participants in the lobby about the number of participants needed to form a group. If participants cannot be matched into a group within a predefined time limit, they are directed to a screen where they can choose either to return to the lobby or to proceed to another experimental stage (e.g., one ending the experiment or presenting an alternative task).
Experimenters can choose from various pre-set matching procedures. Because the number of participants in an online session often cannot be determined exactly, and some participants may drop out before being matched, statically predefining the composition of matching groups is often infeasible; each of these procedures therefore forms groups flexibly ‘on the fly’. The default option ‘first-come, first-served’ ensures that groups are formed as soon as a sufficient number of participants are in the lobby, minimising waiting time and reducing dropout (Arechar et al. 2018). Standard matching procedures can also be based on treatments (grouping participants from the same treatment) or unique roles (grouping participants with different roles). Alternative matching procedures, such as stranger matching, rotation, or matching based on responses prior to the lobby, can be implemented with custom JavaScript code; a conceptual sketch of the default procedure follows below. Various implemented examples of custom matching procedures, including (perfect) stranger matching, are publicly available from the LIONESS repository.
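As a conceptual illustration of ‘first-come, first-served’ matching, the sketch below groups waiting participants as soon as enough have arrived. It is a generic stand-in for the server-side logic; all names are our own.

```javascript
// Conceptual sketch of first-come, first-served matching.
// LIONESS runs this logic on the server; all names here are our own.
var lobby = [];        // participants currently waiting
var GROUP_SIZE = 4;    // required group size for this design

function enterLobby(participantId, onGroupFormed) {
  lobby.push(participantId);
  if (lobby.length >= GROUP_SIZE) {
    // Match the first GROUP_SIZE arrivals, minimising everyone's waiting time
    var group = lobby.splice(0, GROUP_SIZE);
    onGroupFormed(group);
  }
}

var report = function (group) { console.log('Group formed:', group); };
enterLobby('p1', report);
enterLobby('p2', report);
enterLobby('p3', report);
enterLobby('p4', report); // -> Group formed: ['p1', 'p2', 'p3', 'p4']
```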
Interaction phase Once a group has been formed, participants are directed to the first stage of a period. The flow of an experiment is centrally regulated by the control panel, ensuring that all group members move through the periods of the experiment in synchrony. In the control panel, experimenters can track the progress of participants during an experimental session (Fig. 4).
Participant dropout is a great challenge for online experimentation (see, e.g., Zhou and Fishbach 2016), particularly for interactive designs (Arechar et al. 2018). Experimenters can reduce dropouts by using attractive pay rates, conditioning payment upon completion, and carefully managing participants’ expectations from the outset (see, e.g., Horton et al. 2011). Complementing these general measures, LIONESS has methods in place to further reduce participant dropout in interactive experiments and, importantly, to deal with logistical issues should dropouts occur. The time participants spend waiting for others (and the associated waning attention, a major source of dropouts) can be reduced by adding timers to experimental screens, keeping up the pace of the experiment. By default, waiting screens display an animated spinning wheel to assure participants that the experiment is still running. Furthermore, if an experiment progresses to a new stage while a participant has the experimental screens in the background of their device (which can occur when participants are waiting for others and get distracted), an overlaying notification is shown; a sketch of this mechanism follows below.
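In a browser, such background-tab detection can be built on the Page Visibility API; the sketch below is a generic illustration under that assumption, not LIONESS’s actual code.

```javascript
// Generic sketch: notify a participant whose tab is in the background
// that the experiment has advanced (Page Visibility API; illustrative only).
var pendingStageChange = false;

// Would be called by the experiment's stage-advance handler
function onStageAdvanced() {
  if (document.hidden) {
    pendingStageChange = true; // stage changed while the tab was hidden
  }
}

document.addEventListener('visibilitychange', function () {
  if (!document.hidden && pendingStageChange) {
    pendingStageChange = false;
    alert('The experiment has continued. Please respond!'); // overlaying notification
  }
});
```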
Despite these measures, dropouts cannot be avoided altogether. The controller algorithm detects when a participant loses their connection or fails to respond in time. Experimenters can select pre-set options defining what happens to an unresponsive participant as well as to the other members of that participant’s group. By default, unresponsive participants are removed from the session and the other members continue in a group of reduced size. This way, the earnings of the remaining participants will likely match their expectations (which would not happen if the whole group were terminated or stalled after the dropout of a group member; managing participant expectations is key to the online reputation of experimenter accounts on crowd-sourcing websites, which can easily be damaged by disappointed participants). Alternative options for handling dropouts are available, e.g., terminating the interaction phase of a group when one member drops out and directing the other group members to an alternative stage or directly to the end of the experiment. Using JavaScript, experimenters can also define programmed responses to replace group members who dropped out (e.g., based on previous responses of other participants); a hedged sketch follows below. A number of example experiments implementing such ‘robots’ are available from the repository (see Sect. 6 below).
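For example, a programmed response might mimic the remaining members’ behaviour; the sketch below returns the rounded mean of the others’ contributions in the current period. It is illustrative only, with names of our own choosing.

```javascript
// Illustrative 'robot' response for a dropped-out member: contribute the
// rounded mean of the remaining members' contributions this period.
// Function and variable names are our own.
function robotContribution(otherContributions) {
  if (otherContributions.length === 0) return 0; // no group data: contribute nothing
  var sum = otherContributions.reduce(function (a, b) { return a + b; }, 0);
  return Math.round(sum / otherContributions.length);
}

console.log(robotContribution([10, 20, 15])); // -> 15
```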
Payment phase At the end of an experiment, participants typically receive a unique completion code, which they can enter on the crowd-sourcing platform to get paid. For performance-based payment, LIONESS experiments link the amount of ‘bonus payments’ to the completion codes.
At the end of a session, experimenters can download their data via the control panel. By clicking a button, the browser downloads an Excel file. This file includes each of the tables of the database underlying the experiment, shown in separate tabs. The downloaded file also contains tabs that help experimenters automate the bonus payment of online participants in a few simple steps; the sketch below illustrates the underlying conversion. An extensive description of how to do this for payments on MTurk can be found in the online documentation (https://lioness-lab.org/documentation).
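In essence, the bonus tab maps each completion code to performance-based earnings. As a hedged illustration, converting experimental points to a dollar bonus might look like the following; the exchange rate and record layout are assumptions.

```javascript
// Illustrative conversion of points to bonuses per completion code.
// Exchange rate and record layout are assumptions for this sketch.
var EXCHANGE_RATE = 0.01; // dollars per point (assumed)

var results = [
  { completionCode: 'A1B2C3', points: 250 },
  { completionCode: 'D4E5F6', points: 180 },
];

var bonuses = results.map(function (r) {
  return {
    completionCode: r.completionCode,
    bonus: (r.points * EXCHANGE_RATE).toFixed(2), // e.g., '2.50'
  };
});

console.log(bonuses); // two records with bonuses '2.50' and '1.80'
```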
6 Sharing experiments through a repository
Users of LIONESS Lab can choose to share their experiments with others through the repository (Fig. 5). Sharing experiments with co-authors facilitates collaboration during development and testing. Moreover, the repository enables other colleagues to view the experiment and replicate results once the data have been collected and a study has been completed. The repository aims to promote transparency and replicability of research, which is essential to the reliability of scientific research in general (Camerer et al. 2016; Munafò et al. 2017; Open Science Collaboration 2015), and to the relatively young field of behavioural online experimentation in particular (Stewart et al. 2017). Finally, by making their experiments publicly available to other LIONESS Lab users, experimenters contribute to the range of experimental designs available for others to view, edit, and build upon. Experimenters can thus avoid reinventing the wheel and copy solutions to common issues when developing their designs, helping speed up the interaction between theory and empirics. In addition, shared experiments are frequently used when asking for help in the online discussion forum (see Sect. 8) and to provide example solutions to raised issues.
7 Data security and privacy
LIONESS Lab runs on a server which is professionally maintained by the University of Passau IT services, guaranteeing minimal downtime and high stability. To register for an account, experimenters have to accept the terms of use, which include references to the EU General Data Protection Regulation (GDPR) regulating the collection of personal data (from experimenters). LIONESS Lab does not store any personal data of participants. LIONESS Lab further implements the standard EU cookie policy, with a disclaimer on the login screen. Communication is secured via standard HTTPS protocols. Intellectual property rights are clearly defined in the terms of use.
Basic aspects of LIONESS experiments, such as the flow of participants, matching into groups, synchronization between participants and screen timers, are controlled on the server side and by the control panel (which runs in the experimenter’s browser). For participant screens, however, LIONESS experiments depend to a large extent on JavaScript, which runs in the participants’ browsers. This has a range of advantages (very high flexibility; most programming issues are easily addressed by a Google search; no reliance on web sockets, which are often blocked by institutions) but also a number of drawbacks. For example, experimental participants with advanced web programming skills might be able to interact with the web pages and view restricted information using the JavaScript console of their browser. LIONESS experiments cannot completely rule out such attempts, but they do make it hard (and generally not worthwhile, or feasible under time pressure) for participants to circumvent such restrictions. Display conditions are enforced every 100 ms, so that manipulation requires more advanced skills than just changing the display setting of an element on a page (see the sketch below). Finally, external data manipulation attempts are curbed on the server side: internal identifiers for the database and its tables are hard-coded into an experiment’s PHP files, and standard measures are in place to prevent SQL injections.
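The repeated enforcement just described can be pictured as a client-side loop that re-applies each display condition. This is a conceptual sketch only, with illustrative names, not the platform’s actual code.

```javascript
// Conceptual sketch of re-enforcing a display condition every 100 ms, so that
// un-hiding an element via the browser console is immediately reverted.
// Element id and condition are illustrative.
function conditionHolds() {
  // Placeholder for the experimenter-specified display condition
  return false;
}

setInterval(function () {
  var el = document.getElementById('restrictedInfo');
  if (el && !conditionHolds()) {
    el.style.display = 'none'; // re-hide if a participant tampered with it
  }
}, 100);
```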
8 Conclusion and future outlook
In this paper we introduce LIONESS Lab, a free platform for the development of interactive online experiments that requires minimal programming experience. By allowing researchers to conveniently develop, test, run, and share their interactive experimental designs, LIONESS Lab aims to help online behavioural research reach its full potential.
LIONESS experiments include thoroughly tested measures to deal with the methodological and logistical challenges of conducting interactive experiments online (Arechar et al. 2018). Most importantly, these measures drive down participant dropouts and adequately deal with situations in which dropouts do occur.
LIONESS Lab enhances the potential of online behavioural research and increases the number of ways in which it can complement experimental research conducted in the physical lab. Research could benefit from systematic comparisons between results obtained in the physical lab and results obtained online. A substantial body of research addresses this topic for non-interactive tasks (Buhrmester et al. 2011; Paolacci et al. 2010; Snowberg and Yariv 2018; Stewart et al. 2017), and first comparisons for interactive designs look encouraging (Arechar et al. 2018). However, it remains to be established to what extent behavioural results of laboratory studies, across the broad range of possible experimental designs, are replicable with participants recruited online, and to what extent methodological and conceptual differences lead to different results.
LIONESS Lab has a growing user base of experimenters who can access and build on each other’s code through the repository. For technical aspects of the software and common coding issues, users can refer to the extensive online documentation. Furthermore, there is an active dedicated forum on which programming issues are discussed (https://lioness-lab.org/forum). Finally, users can ask for help via email (info@lioness-lab.org). As developers of LIONESS Lab, we regularly update the software and its documentation, and will continue to do so for the foreseeable future. This way, LIONESS Lab will accommodate recurrently emerging issues, incorporate solutions and new features (sometimes suggested by users), and respond to the changing needs of the web. Should institutional support for LIONESS Lab unexpectedly cease, the developers will make the full platform open-source, so that it can be deployed elsewhere. This information can also be found in the terms of use.
Acknowledgements
We thank Antonio Alonso Aréchar, Benjamin Beranek, Urs Fischbacher, Johann Graf Lambsdorff, José Guinot Saporta, KyeongTae Lee, Katrin Schmelz, Nina Sedarevic, Shruti Surachita, Jungsun Yoo, and the members of the Ecology, Evolution and Biodiversity Conservation lab of the University of Leuven for useful comments and discussions. We also thank the users of the beta version of LIONESS Lab for suggestions for improvement and Susanna Grundmann for proof-reading our manuscript. We also thank the editor and two anonymous referees for their valuable comments. This work was supported by the European Research Council Grant ERC-AdG 295707—COOPERATION, the Economic and Social Research Council Network for Integrated Behavioural Sciences (ES/K002201/1) and by the Chair in Economic Theory, University of Passau, Germany. Lucas Molleman is supported by Open Research Area grant ASTA ID: 176.
Appendix
Table 1 Comparison of LIONESS Lab with z-Tree and oTree

| | LIONESS Lab | z-Tree (Fischbacher 2007) | oTree (Chen et al. 2016) |
|---|---|---|---|
| Platform independent | Yes | No (stationary PCs only) | Yes |
| Optimized for mobile use | Yes | No | Yes |
| Programming language for user | JavaScript | z-Tree specific | Python |
| Required programming knowledge | Basic | Basic | Advanced |
| Modular backend system | Yes | Yes | No |
| Number of participants | Automatic flexibility | Fixed at the start of the session | Flexibility programmable |
| Experiments with large groups (N > 100) | Yes | Yes | Unknown |
| Run on own server | Yes | Yes | Yes |
| Type of software | Free service | Free proprietary software | Open source |
We focus on comparing our platform to z-Tree (Fischbacher 2007) and oTree (Chen et al. 2016). For a more extensive overview comparing various web-based experimental software platforms, see Chan et al. (2019). Users of LIONESS Lab require only basic programming skills: experiments can be developed with a modular backend system to set up stages and elements, and for simple experiments users can start right away, without having to own a web server. While oTree is distributed as software, LIONESS runs as a service on dedicated servers. LIONESS experiments have been successfully run with hundreds of simultaneous participants recruited from Amazon MTurk (e.g., Molleman et al. 2019); so far, we have found no evidence in the literature of oTree being used with large groups. In addition, LIONESS experiments can accommodate a number of participants that is determined dynamically after the start of a session, whereas this requires complex and deep changes to oTree’s data structure (Konrad 2018).
Table 2 Publications based on LIONESS experiments

Arechar, A. A., Gächter, S., & Molleman, L. (2018). Conducting interactive experiments online. Experimental Economics, 21(1), 99–131.

de Quidt, J., Fallucchi, F., Kölle, F., Nosenzo, D., & Quercia, S. (2017). Bonus versus penalty: How robust are the effects of contract framing? Journal of the Economic Science Association, 3(2), 174–182.

Fallucchi, F., & Nosenzo, D. (2020). The coordinating power of social norms. CeDEx Discussion Paper Series 2020-14.

Fongoni, M., Hepp, J., & Waltl, S. R. (2019). Social comparison, wage inequality and procedural fairness: An experimental investigation. https://www.aiel.it/cms/cms-files/submission/all20190615171533.pdf

Fornwagner, H., Pompeo, M., & Serdarevic, N. (2020). Him or her? Choosing competition on behalf of someone else. CeDEx Discussion Paper Series 2020-13.

Gächter, S., Huang, L., & Sefton, M. (2016). Combining “real effort” with induced effort costs: The ball-catching task. Experimental Economics, 19(4), 687–712.

Glowacki, L., & Molleman, L. (2017). Subsistence styles shape human social learning strategies. Nature Human Behaviour, 1, 0098.

Li, X., Molleman, L., & van Dolder, D. (2020). Conditional punishment: Descriptive norms drive negative reciprocity. CeDEx Discussion Paper Series 2020-05.

Molleman, L., Kanngiesser, P., & van den Bos, W. (2019). Social information use in adolescents: The impact of adults, peers and household composition. PLoS ONE, 14(11).

Molleman, L., Kurvers, R. H., & van den Bos, W. (2019). Unleashing the BEAST: A brief measure of human social information use. Evolution and Human Behavior, 40(5), 492–499.

Molleman, L., Kölle, F., Starmer, C., & Gächter, S. (2019). People prefer coordinated punishment in cooperative interactions. Nature Human Behaviour, 3(11), 1145–1153.

Molleman, L., Tump, A. N., Gradassi, A., Herzog, S., Jayles, B., Kurvers, R., & van den Bos, W. (2020). Strategies for integrating disparate social information. PsyArXiv. https://psyarxiv.com/wgzna

Molleman, L., & Gächter, S. (2018). Societal background influences social learning in cooperative decision making. Evolution and Human Behavior, 39(5), 547–555.

Stagnaro, M. N., Arechar, A. A., & Rand, D. G. (2017). From good institutions to generous citizens: Top-down incentives to cooperate promote subsequent prosociality but not norm enforcement. Cognition, 167, 212–254.

Tonning, H., & Underhaug, M. (2019). In bots we (dis)trust. https://uis.brage.unit.no/uis-xmlui/bitstream/handle/11250/2618905/Underhaug_Martin.pdf

Weidinger, L., Gradassi, A., Molleman, L., & van den Bos, W. (2019). Test–retest reliability of canonical reinforcement learning models. Proceedings of the Annual Conference on Cognitive Computational Neuroscience. https://ccneuro.org/2019/proceedings/0000513.pdf

Yahosseini, K. S., Reijula, S., Molleman, L., & Moussaid, M. (2018). Social information can undermine individual performance in exploration–exploitation tasks. Cognitive Science Conference Proceedings, 17. https://doi.org/10.31234/osf.io/upv8k
These experiments include experiments conducted online (e.g., de Quidt et al. 2017), in the lab (e.g., Gächter et al. 2016) and in the field (e.g., Glowacki and Molleman 2017). The above list was compiled on 11 June 2020. For an up-to-date list, see https://lioness-lab.org/publications.