Introduction
Behavioural science interventions have been implemented in various policy areas, from health and education to justice and sustainability, and used to influence behaviours such as pension savings, tax compliance, or healthy food consumption, to name but a few (e.g., Oliver, 2013, 2019; Halpern, 2015; Sunstein, 2015, 2020; Sanders et al., 2018). Although these interventions are highly diverse and can be based on different theoretical assumptions, an underlying characteristic they share is that they influence behaviour by changing the ‘architecture’ of the context in which people act (Dolan et al., 2012; Vlaev et al., 2016; Mongin & Cozic, 2018). For example, this may involve altering the order of foods in a cafeteria, changing how the information a person considers when deciding is framed, or exposing people to a scent before they are about to act (de Lange et al., 2012; Marteau et al., 2012).
Interventions that behavioural scientists use are typically linked to the concept of libertarian paternalism (Sunstein & Thaler, 2003; Thaler & Sunstein, 2003, 2008; Hansen, 2016; Oliver, 2019). Paternalism in this context means that the interventions aim to influence people's behaviour in a specific direction, and that this behavioural change should be welfare promoting and thus make people ‘better off’ according to some criterion that is established as objectively as possible (Thaler & Sunstein, 2003, 2008; Sunstein, 2014). Not all behavioural science interventions are designed or applied to make people ‘better off’, which means that they can, in principle, be inconsistent with paternalism; when ethically applied, however, they should not violate this principle (Lades & Delaney, 2020). Proponents of libertarian paternalism argue that, despite being paternalistic, behavioural interventions are aligned with liberalism (Thaler & Sunstein, 2003, 2008; Sunstein, 2014), which broadly refers to respecting people's freedom of choice (Gane, 2021). For example, it is claimed that these interventions respect this freedom because, unlike prohibitions or bans, changing the ‘architecture’ of the context in which people act does not forbid an action or take any choice options away; people therefore remain free to select whatever course of action they desire (Thaler & Sunstein, 2008).
However, despite its emphasis on freedom of choice, libertarian paternalism has faced several criticisms arguing that it is not compatible with liberalism (Alberto & Salazar, 2012; Gill & Gill, 2012; Grüne-Yanoff, 2012; Heilmann, 2014; Rebonato, 2014; Barton & Grüne-Yanoff, 2015; Mongin & Cozic, 2018; Le Grand, 2020; Reijula & Hertwig, 2020). First, interventions aligned with libertarian paternalism interfere with choice processes and hence limit negative freedom – freedom from interference by other people (Grüne-Yanoff, 2012; Gane, 2021). Second, these interventions are frequently not transparent: people may not understand how they operate, in which direction they should change their behaviour, and/or to what degree they are supported by sound scientific evidence (Grüne-Yanoff, 2012; Barton & Grüne-Yanoff, 2015). People's freedom of choice is therefore limited because they lack information about how they are being influenced and why, and hence cannot deliberate on this information to make a choice. Third, libertarian paternalism does not respect the subjectivity or plurality of values; in a nutshell, it endorses changing behaviours in a specific direction that is considered welfare promoting (e.g., eating healthily or being physically active), rather than respecting people's individual freedoms by changing behaviour in line with ‘the values that individuals have determined as their values’ (Grüne-Yanoff, 2012, p. 641). To resolve these impediments to freedom, critics of libertarian paternalism have proposed that behavioural interventions should be devised to promote people's capability to make their own choices (i.e., boosting) rather than nudging them to act in a particular direction (Hertwig & Grüne-Yanoff, 2017).
In the present article, we look at this issue from an alternative perspective. We argue that one possible way of making behavioural interventions more compatible with liberalism is to integrate them with cutting-edge developments in technology. More specifically, various promising technological tools from different domains (e.g., social robotics, self-quantification, etc.) have either already been used or could potentially be used to implement behavioural change techniques. Importantly, administering behavioural interventions via these technologies would require that people deliberately choose which behaviour(s) they want to change (if any) and select the desired technological tool(s) and intervention(s) for this purpose. Transparency could also be ensured by creating a summary for potential users of how each intervention operates, in which direction it should change their behaviour, and to what degree it is supported by sound scientific evidence. Overall, this approach would be consistent with liberalism because it would ensure negative freedom, transparency, and the freedom to select interventions and desired behaviours to change in line with one's values and beliefs.
In this article, we first overview the technological domains we find compatible with behavioural interventions and examine both the interventions that have already been implemented within these domains and their potential for future integration with behavioural change techniques. We then explore whether knowing how the interventions operate and which behaviours they target would be an obstacle to the effectiveness of combining cutting-edge technologies with behavioural science. Finally, we discuss new ethical issues that could arise from this approach and address additional policy considerations. To aid the interpretation of the article, Table 1 overviews the technologies we cover and their potential for behaviour change.
Behavioural Science in an Age of New Technology
Virtual and Augmented Reality
Introducing the technological domain
Virtual reality (VR) and augmented reality (AR) share one main characteristic – they can alter the visual environment in which people act. The main difference is that VR immerses people in a virtual world inside a VR headset (Riva et al., 2016), whereas AR changes people's actual physical environment by projecting holograms onto it (Ng et al., 2019). For example, using a VR headset, we can immerse ourselves in a virtual world in which we assume the appearance of an older version of ourselves (Hershfield et al., 2011), whereas AR glasses can project virtual material objects or beings into the space around us, thus blending the virtual and physical worlds into one (Riva et al., 2016). Whereas VR headsets such as Oculus Rift, HTC Vive, or Google Daydream View are relatively affordable and tend to be widely used, AR glasses such as Microsoft Hololens or Magic Leap are still not easily affordable for most individuals and tend to be used by large organizations and research labs (Elmqaddem, 2019; Xue et al., 2019).
Theoretical argument and available evidence
The main benefit of VR and AR for behaviour change is that they can directly alter the visual context of action. A theoretical paradigm that supports the effectiveness of these technologies is construal level theory (CLT). According to CLT, one reason why people sometimes fail to act is that the consequences or circumstances of action are too psychologically distant (Spence et al., 2012; Kim et al., 2013; Jones et al., 2017; Touré-Tillery & Fishbach, 2017; Chu & Yang, 2018; Kogut et al., 2018; Simonovits et al., 2018; Brügger, 2020). That is, the action may concern an event that will not happen immediately, a person who is not close to us, or a place that is not near us. For example, people may not recycle because climate change feels far away, they may not attempt to reduce their prejudice because they do not know what it feels like to be the target of that prejudice, or they may not bother donating to charity because the beneficiary is from a distant country. Construal level theory posits that reducing the psychological distance to these events, circumstances, or individuals by making them more concrete can propel action, given that concreteness is more emotionally arousing and may activate various motivational mechanisms that drive behaviour (Van Boven et al., 2010; Bruyneel & Dewitte, 2012; Kim et al., 2013). This is exactly what AR or VR can achieve: for example, they can visually simulate the consequences of climate change in one's current environment or place people in the body of a person they are prejudiced against, thus making action more likely (Riva et al., 2016).
In accordance with this theoretical paradigm, the effectiveness of VR in changing behaviour has been empirically supported in numerous domains, including pension savings (Hershfield et al., 2011), prejudice and bias reduction (Banakou et al., 2016, 2018), sustainability and the environment (Bailey et al., 2015; Nelson et al., 2020), prosocial behaviour (Rosenberg et al., 2013), domestic violence (Seinfeld et al., 2018), parenting (Hamilton-Giachritsis et al., 2018), and physical activity (Ng et al., 2019). For example, embodying white individuals in the virtual body of a black person reduced their racial prejudice (Banakou et al., 2016). A systematic literature review by Lanier et al. (2019) has shown that, even if VR research is still in its early stages and the quality of studies generally needs to improve, the studies conducted so far have good evidential value and indicate that VR interventions may effectively change psychological and behavioural outcomes. However, the studies have several important limitations. First, they are mostly lab studies, so it is not known to what extent VR can change behaviours in the real world. Second, the studies typically examine short-term effects: the impact of VR on behaviour is assessed immediately after the interventions or up to one week later at most, so it is not known whether they can create sustained behaviour change. Finally, the sample sizes are generally small (34 participants per condition on average; Lanier et al., 2019), which means that the magnitude of the behaviour change observed cannot be estimated with precision. Therefore, to reveal the full potential of VR for behaviour change, researchers will need to focus on field studies that examine long-term effects using larger sample sizes.
In contrast to VR, very few studies have examined the impact of AR on behaviour, given that this technology is not yet as widely used as VR. Therefore, although no well-informed conclusions can be drawn in this regard, researchers agree that this technological innovation has a large untapped potential for behaviour change (Riva et al., 2016; Ng et al., 2019), as we illustrate in the next section.
Future potential
Given that VR is already widely used, its potential applications in behavioural public policy will largely depend on the degree to which behavioural scientists adopt this technology, design interventions for it, and test them. Currently, most research on VR and behaviour change has been conducted outside the realm of behavioural science (see Lanier et al. (2019) and the studies reviewed above). For example, most interventions are not grounded in theories and approaches of behaviour change (e.g., Michie et al., 2011) and/or do not use behavioural science intervention techniques such as defaults, salience, framing, norms, and simplification of complex choices (Dolan et al., 2012; Loewenstein & Chater, 2017; Oliver, 2019). In this regard, we recommend that behavioural scientists interested in policy examine VR as a tool for influencing behaviour and focus on developing VR-based interventions informed by behavioural principles.
Although AR has so far not been comprehensively researched as a vehicle for behavioural interventions, we posit that it has an even greater potential for changing behaviour than VR because it can directly alter the environment in which people act. To illustrate this potential, imagine a scenario in which a person has decided to eat more vegetables and fewer sweets and chocolate. In that case, AR equipment could be programmed to recognize sweets or chocolate in real time, even before the person consciously detects them. It could then redirect the person's attention elsewhere, distract the person with sounds or colours, hide the sweets by altering the visual environment, make the sweets appear disgusting (e.g., by creating the hologram of a worm entering the sweets), or produce verbal prompts or sounds to discourage consumption. Conversely, the equipment could also be programmed to recognize vegetables in real time and make them salient or visually more appealing, produce verbal prompts or sounds to encourage consumption, etc. In other words, AR has the potential to dynamically implement numerous behavioural tools and principles in real time. Whereas the capacity of AR to fulfil this potential will greatly depend on further technological developments, and it may take another 5–10 years before this tool reaches an adequate level of usability and adoption, behavioural scientists can already set the stage by devising and testing AR-based interventions.
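To make this scenario concrete, the sketch below shows, in simplified Python, how such an AR pipeline could be structured: a detector labels food items in each camera frame, and a rule maps each label to an overlay. Everything here is illustrative – `Detection`, `choose_overlay`, and the overlay names are hypothetical stand-ins, not an existing AR API, and a real system would need actual computer vision and rendering components.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # e.g., 'chocolate', 'broccoli' (hypothetical detector output)
    bounding_box: tuple   # (x, y, width, height) in the camera frame

# Behavioural rules the user has chosen in advance: discourage sweets,
# encourage vegetables.
DISCOURAGE = {"chocolate", "sweets"}
ENCOURAGE = {"broccoli", "carrot", "spinach"}

def choose_overlay(detection: Detection) -> str | None:
    """Select an AR overlay for a detected food item, if any."""
    if detection.label in DISCOURAGE:
        return "disgust_hologram"    # e.g., render a worm on the sweets
    if detection.label in ENCOURAGE:
        return "salience_highlight"  # e.g., glow effect to attract attention
    return None

def process_frame(detections: list[Detection]) -> list[tuple[Detection, str]]:
    """Return the overlays to render for one camera frame."""
    overlays = []
    for det in detections:
        overlay = choose_overlay(det)
        if overlay is not None:
            overlays.append((det, overlay))
    return overlays

# Example: one frame containing chocolate and broccoli.
frame = [Detection("chocolate", (120, 80, 40, 30)),
         Detection("broccoli", (300, 90, 50, 45))]
print(process_frame(frame))
```

The essential point is that the behavioural rules are set by the user, while the technology merely executes them in real time.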
Social Robotics
Introducing the technological domain
Social robots are autonomous or semi-autonomous agents that communicate and interact with people, closely imitating human behaviour, looks, and/or emotional expressions (Broadbent, 2017). These robots are typically designed to behave according to the norms expected by the individuals they interact with (Bartneck & Forlizzi, 2004). Simply put, social robots are not user-friendly computers that operate as machines; rather, they are user-friendly computers that operate as humans (Zhao, 2006). They are made to interact with humans as helpers and artificial companions in hospitals, schools, homes, or social care facilities (Broadbent, 2017; Belpaeme et al., 2018). Some examples of social robots include the Nao Humanoid Robot, which can perform various human-like functions such as dancing, walking, speaking, or recognizing faces and objects, and Alyx, which teaches people with autism how to recognize emotional cues. An additional subcategory of social robotics is robopets – robots that appear and behave like companion animals, such as Aibo-dog (Eachus, 2001; Abbott et al., 2019). Importantly, social robots do not necessarily need to resemble living beings like humans or pets – it is sufficient that they can verbally communicate with people in a human-like manner (Broadbent, 2017).
Theoretical argument and available evidence
Several lines of argument indicate that social robots could effectively change behaviour by acting as messengers (Dolan et al., 2012) that prompt people to undertake a certain behaviour of interest. First, these robots can be programmed to possess the characteristics of effective messengers, including credibility, trust, and empathy (Reeves et al., 2003; Cialdini & Cialdini, 2007; Looije et al., 2010, 2012; Dolan et al., 2012; Seo et al., 2015). Second, they can positively impact self-efficacy (Matsuo et al., 2015; El Kamali et al., 2020) and intrinsic motivation (Fasola & Matarić, 2012), both highly important factors in initiating and maintaining behaviour change (Bandura, 1997; Ryan & Deci, 2000). Third, relative to humans, social robots may be less likely to evoke psychological reactance – a motivational state characterized by anger that can occur when people are asked to change their behaviour but react against the request because they feel their freedom of action has been undermined (Brehm, 1966; Brehm & Brehm, 2013). Social agency theory posits that people are more likely to experience psychological reactance as the social agency of the messenger increases (i.e., the more the messenger is characterized by human-like social cues, including human-like face and head movements, facial expressions, affective intonation of speech, etc.; Roubroeks et al., 2011; Ghazali et al., 2018). Although social robots are similar to humans, they are not humans and therefore have lower social agency in comparison. People may thus find robot messengers less threatening to their autonomy than other humans and experience lower reactance in response to prompts delivered by them. An opposite argument can also be made: some people may dislike interacting with robots due to the lack of human connection (e.g., Nomura et al., 2006), which might impede their effectiveness as messengers. However, there is currently no theoretical or empirical support for this premise, especially because there are many situations in which people prefer robots over other humans (Broadbent, 2017; Granulo et al., 2019).
Despite the outlined theoretical arguments, the capacity of social robots to positively impact behaviour as messengers has rarely been investigated. These robots have primarily been studied as assistants in the domains of education, elderly care, and the treatment of autism spectrum disorders (Abdi et al., 2018; Belpaeme et al., 2018; Robinson et al., 2019). In this regard, they have been shown to improve children's experiences of learning and their learning outcomes (Belpaeme et al., 2018); to beneficially influence the wellbeing, cognition, and physical health of the elderly (Abdi et al., 2018); and to enhance the learning of social skills by patients with autism spectrum disorders (Pennisi et al., 2016). Although few studies have examined whether social robots can change behaviour via messages or prompts, which is of interest to behavioural public policy (Oliver, 2013), those studies showed promising findings (Casaccia et al., 2019; Tussyadiah & Miller, 2019; Mehenni et al., 2020; Robinson et al., 2020). For example, Robinson et al. (2020) provided preliminary evidence that motivational messages communicated by a robot can reduce consumption of unhealthy snacks.
Future potential
Several authors have argued that social robots should be used to administer interventions aimed at influencing various behaviours that are beneficial to society, ranging from charitable giving to pro-environmental behaviour (Borenstein & Arkin, 2017; Sequeira, 2018; Tussyadiah & Miller, 2019; Rodogno, 2020). Developments in this regard will be driven by the efforts policy makers invest in creating appropriate messaging interventions that can be implemented by social robots. Indeed, social robots are currently widely available and many of them are relatively affordable (Broadbent, 2017; Belpaeme et al., 2018); the lack of behavioural interventions devised for this technological tool can therefore primarily be explained by the fact that very little research has been done to create and test such interventions. In addition, the effectiveness of social robots as messengers will depend on future advancements in their design, given that the degree to which they are interactive may improve intervention success (Bartneck et al., 2005; Song & Luximon, 2020). Design is also crucial for overcoming one of the main potential issues in human–robot interaction, known as the uncanny valley – a phenomenon whereby robots that are similar to humans but have certain strikingly non-human details can cause eeriness and revulsion (Mathur & Reichling, 2016; Ciechanowski et al., 2019; Kätsyri et al., 2019). Lastly, broad adoption of social robots for administering behavioural interventions may depend on whether these robots and the interventions designed for them can overcome specialization. Currently, the few examples of social robots that were used to implement message interventions typically did so within a single domain, such as healthy eating (Robinson et al., 2020). However, a multipurpose social robot that can help humans change in a variety of domains (e.g., from health to pro-environmental behaviour to financial planning) may be both more cost-effective and more practical from a usability perspective.
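As a rough illustration of what such a multipurpose robot might look like at the software level, the sketch below dispatches messenger-style prompts only across domains the user has explicitly opted into. The domain names, the `PROMPTS` messages, and the `robot.say` call are all hypothetical placeholders; a real deployment would use validated intervention messages and whatever speech API a given robot actually exposes.

```python
import random

# Messenger prompts per domain; users opt in to the domains they care about.
# The messages are illustrative placeholders, not validated interventions.
PROMPTS = {
    "healthy_eating": ["How about a piece of fruit instead of a snack?"],
    "physical_activity": ["You've been sitting for an hour - fancy a short walk?"],
    "finances": ["Payday today: shall we move 5% into savings, as you planned?"],
}

def next_prompt(opted_in_domains: list[str]) -> str | None:
    """Pick a prompt from a domain the user has explicitly opted into."""
    if not opted_in_domains:
        return None  # no consent, no prompting
    domain = random.choice(opted_in_domains)
    return random.choice(PROMPTS[domain])

# robot.say(...) stands in for whatever speech API a given robot exposes.
message = next_prompt(["healthy_eating", "finances"])
if message:
    print(f"robot.say({message!r})")
```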
Gamification
Introducing the technological domain
Simply put, gamification is the process of making a game of something that is not a game. In a more academic sense, it refers to the use of game design elements in non-gaming contexts (Baptista & Oliveira, 2019). These game design elements vary greatly and comprise the use of badges (Hamari, 2017), points (Attali & Arieli-Attali, 2015), levels (Jones et al., 2014), leader boards (Morschheuser et al., 2018), and avatars (Diefenbach & Müssig, 2019), to name but a few. The non-gaming contexts to which the design elements can be applied have a broad range, from learning how to use statistical software to doing household chores (Diefenbach & Müssig, 2019). Some popular examples of gamification include the Forest app, which helps people stay away from their smartphone by planting and growing a virtual tree, or Duolingo, where people can level up as they learn new languages.
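For readers unfamiliar with these elements, the following minimal sketch (ours, not drawn from any cited system) shows how points, levels, and badges could be layered onto a non-game activity such as study sessions; the scoring rules are arbitrary illustrations.

```python
class GamifiedTask:
    """Minimal illustration of points, levels, and badges applied
    to a non-game activity (here: completed study sessions)."""

    LEVEL_SIZE = 100                              # points needed per level
    BADGES = {5: "Persistent", 20: "Dedicated"}   # total sessions -> badge

    def __init__(self):
        self.points = 0
        self.sessions = 0
        self.badges = []

    def complete_session(self, minutes: int) -> None:
        self.sessions += 1
        self.points += minutes  # 1 point per minute studied
        badge = self.BADGES.get(self.sessions)
        if badge:
            self.badges.append(badge)

    @property
    def level(self) -> int:
        return self.points // self.LEVEL_SIZE + 1

task = GamifiedTask()
for _ in range(5):
    task.complete_session(minutes=30)
print(task.points, task.level, task.badges)  # 150 2 ['Persistent']
```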
Theoretical argument and available evidence
Theoretical support for the positive behavioural effects of gamification is grounded in self-determination theory (Deci & Ryan, 2000; Ryan & Deci, 2000). This theory posits that humans have three motivational needs – competence, autonomy, and relatedness (Deci & Ryan, 2000; Ryan & Deci, 2000). If an activity satisfies these needs, it is intrinsically motivating. If, however, this is not the case because the activity is driven by external factors such as money, it is extrinsically motivating. Playing games generally fulfils each of the three needs (Przybylski et al., 2010; Mekler et al., 2017; Koivisto & Hamari, 2019). First, engaging in game playing is typically a voluntary decision undertaken at one's discretion, and it thus promotes autonomy. Game design elements such as creating one's own avatar can further enhance autonomy (Pe-Than et al., 2014). In terms of competence, the key element of games is challenging the player to overcome various obstacles. Numerous game design elements, such as dynamic difficulty adjustment or performance indicators like leader boards, satisfy the need for competence (Pe-Than et al., 2014). Moreover, the need for relatedness is often satisfied via social environments and in-game interactions (Koivisto & Hamari, 2019). The fulfilment of motivational needs should not only enhance the effectiveness of games through intrinsic motivation but also increase their enjoyment (Pe-Than et al., 2014).
Empirical research on gamification and behaviour change has focused primarily on the domains of education, physical exercise, and crowdsourcing: around 70% of all studies were conducted in these domains (Koivisto & Hamari, 2019). Although several studies showed mixed findings, most produced positive evidence in support of gamification's effectiveness (Seaborn & Fels, 2015; Johnson et al., 2016, 2017; Looyestyn et al., 2017; Koivisto & Hamari, 2019). The main limitation in this regard is that the research conducted tends to be of low or moderate quality, with many studies using small sample sizes or non-representative samples, or lacking randomization in treatment allocation (Johnson et al., 2016, 2017; Koivisto & Hamari, 2019; Zainuddin et al., 2020). Furthermore, many studies relied primarily on self-reported measures of the outcome variables capturing behaviour change and lacked theoretical foundations for their hypotheses (Seaborn & Fels, 2015; Johnson et al., 2017; Koivisto & Hamari, 2019; Zainuddin et al., 2020). Lastly, only a few game design elements have been comprehensively investigated (e.g., badges, points, and leader boards; Hamari et al., 2014; Seaborn & Fels, 2015; Koivisto & Hamari, 2019), whereas other, less typical elements have been neglected. Therefore, gamification overall shows considerable promise for effective behaviour change, but more high-quality studies need to be conducted to maximize its potential.
Future potential
For gamification to be effectively used in behavioural public policy, researchers will first need to comprehensively examine which game design elements, and which combinations of them, drive behaviour change. Although significant advancement has been achieved in this regard, as previously indicated only a few of the elements have been extensively and systematically researched so far (Koivisto & Hamari, 2019). Policy makers will need to increasingly collaborate with computer scientists and game designers, because even if many studies on gamification and behaviour change have been conducted, few of them have been grounded in theories of behaviour change. Input from behavioural scientists is, therefore, essential to fulfil the potential of gamification. An additional challenge to making gamification effective is overjustification (Meske et al., 2017). That is, even if games can propel intrinsic motivation as previously discussed, several game design elements such as points can serve as external reinforcements if they are associated with external rewards (e.g., exchanging points won for completing a desired behaviour such as exercise for leisure time or other desirable activities) and thereby diminish intrinsic motivation (Deci, 1971; Deci et al., 2001). The main aim for behavioural scientists should, therefore, be to design games that make the desired behaviours the interventions target rewarding in themselves.
Self-Quantification
Introducing the technological domain
Self-quantification refers to the use of technology to self-track any kind of biological, physical, behavioural, or environmental information (Swan, 2013; Maltseva & Lutz, 2018). Some popular examples of the practice include the automatic tracking of physical exercise through wearable devices like smartwatches and fitness trackers, or the self-logging of dietary information through various smartphone applications. Self-quantification can also be used in many other areas, from sexual and reproductive behaviour (Lupton, 2015) to participation in green consumption activities (Zhang et al., 2020). The practice is prevalent in the health domain – almost 70% of the US adult population tracked their exercise, diet, or weight in 2012 (Fox & Duggan, 2013). The goal of self-quantification is to offer people insight into their own behaviour, given that its underlying assumption is that ‘self-knowledge through numbers’ (Heyen, 2016, p. 283) can both help people realize which behaviours they may want to change and motivate them to undertake the change (Card et al., 1999; North, 2006; Kersten-van Dijk et al., 2017). Self-quantification is, therefore, also referred to as ‘personal science’ because it involves studying one's own behaviour to answer personal questions (Wolf & De Groot, 2020).
Theoretical argument and available evidence
Multiple theoretical arguments suggest that self-quantification can propel behaviour change. Social-cognitive theory outlines two key drivers of this change that are leveraged by self-quantification – self-monitoring and self-reflectiveness (Bandura, 1998, 2001, 2004). Monitoring one's behavioural patterns and the surrounding circumstances is the first prerequisite for modifying a behaviour (Bandura, 1998, 2001). For self-monitoring to be effective in this regard, it is important that the person themselves has selected the behaviours to monitor and the desired end states, rather than these being imposed on them, and that they physically record their behaviour throughout the process of monitoring (Harkin et al., 2016). Then, by employing self-reflectiveness – a metacognitive capacity to reflect on oneself and the adequacy of one's actions and thoughts – they can dwell on the monitored behaviour and examine it in relation to personal goals and standards, which may ultimately lead to insights about changing their behaviour (Bandura, 2001).
Self-quantification supports both self-monitoring and self-reflectiveness. It allows a person to collect data about their behaviour, thus providing an overview of the actions they perform. The person can then reflect on the data by evaluating them against their motives, values, and goals, which may in turn lead to new insights that trigger behaviour change (Ploderer et al., 2014). For example, a person may monitor how much time they spend on different activities on a weekly basis. Then, by reflecting on the data in relation to their goals and values, they may conclude that they do not sufficiently prioritize important personal goals, which may in turn prompt them to incorporate more meaningful activities into their schedule.
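A minimal sketch of this monitoring-and-reflection loop might look as follows; the activities, logged hours, and goal thresholds are invented for illustration, and a real tool would collect the log automatically.

```python
from collections import defaultdict

# Hours logged per activity over one week (self-monitoring).
log = [("work", 45), ("social_media", 14), ("exercise", 2), ("family", 6)]

# Weekly targets the person has set for themselves (personal goals).
goals = {"exercise": 5, "family": 10}

totals = defaultdict(float)
for activity, hours in log:
    totals[activity] += hours

# Self-reflection step: compare monitored behaviour against one's own goals.
for activity, target in goals.items():
    actual = totals[activity]
    if actual < target:
        print(f"{activity}: {actual}h of {target}h target - consider re-prioritizing")
```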
Although there is a reasonable theoretical argument for the positive role of self-quantification in behaviour change, the empirical research on this topic is limited in both quantity and quality. A literature review by Kersten-van Dijk et al. (2017) indicates that, in most of the studies conducted to date, self-quantification improved people's insights into their behaviour. However, only five articles evaluated the impact of self-quantification on behaviour change, and two of these documented positive behavioural effects (Consolvo et al., 2008; Hori et al., 2013). Therefore, whereas self-quantifying one's own behaviour using various technologies is a promising approach to creating behaviour change, policy makers need to further integrate this approach with effective behavioural change techniques to maximize its potential.
Future potential
The use and effectiveness of self-quantification in behavioural public policy will likely depend on two future developments: (1) the extent to which policy makers integrate self-quantification with cutting-edge insights on behaviour change and (2) the advancement of self-tracking technological devices themselves. Concerning the first development, the self-improvement hypothesis at the core of self-quantification posits that gaining insights about one's own behaviour through data should inspire change (Kersten-van Dijk et al., 2017). In behavioural science, however, it is well known that information by itself is not sufficient to modify behaviour (Thaler & Sunstein, 2008; Marteau et al., 2012). Indeed, whereas people may decide to change after seeing data about their activities, it is how the data are presented to them that should eventually determine their motivation and prompt the efforts to change (Johnson et al., 2012; Otten et al., 2015; Congiu & Moscati, 2020). Therefore, to maximize the potential of self-quantification, policy makers should work on developing and testing tools for effective self-tracking data visualization, and these tools should ideally go beyond the most popular domains such as physical activity or eating and apply to the broad range of domains people may be interested in. The tools would then not only help individuals understand their own behaviour but also empower them to change in line with their values and preferences. This implies that the person should be free to choose whether or not to use any of the data visualization tools on offer, and that policy makers should provide information about the behavioural change strategies implemented in these tools to allow the person to make an informed choice.
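As one illustration of what such a visualization tool could do, the sketch below (our example, using the standard matplotlib library) frames tracked step counts against a goal the user has chosen themselves, rather than merely displaying raw numbers; the data and goal values are invented.

```python
import matplotlib.pyplot as plt

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
steps = [4200, 6100, 3900, 8050, 5300, 9900, 7200]  # tracked data (invented)
goal = 7000  # daily step goal chosen by the user

# Highlight days on which the user met their own goal.
colours = ["tab:green" if s >= goal else "tab:gray" for s in steps]
plt.bar(days, steps, color=colours)
plt.axhline(goal, linestyle="--", label=f"Your goal: {goal} steps")
plt.ylabel("Steps")
plt.title("Days on which you met your own goal are highlighted")
plt.legend()
plt.show()
```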
Concerning the second development that can aid the effectiveness of self-quantification in behavioural public policy – the advancement of the technology itself – it will be important to devise tools that can track behaviours and people's psychological states more precisely and reliably. Currently, many quantified-self approaches rely on self-reported data because technologies to track the actual behaviours or experienced emotions are either not sufficiently developed or do not yet exist. This is, however, problematic from a usability perspective, because people may want to use self-quantification but simply lack the time or capacity to manually log their data (Li et al., 2010; Wolf & De Groot, 2020). In fact, this need for constant data logging may interfere with their freedom to engage in activities they enjoy or even create potentially unhealthy obsessions with data collection or the technologies involved (Lupton, 2016). In this respect, it is worth noting that technologies to track behaviour and psychological states are rapidly evolving (e.g., Poria et al., 2017), and more advanced tracking devices are constantly becoming available.
Another potential technological advancement involves developing devices that will not only accurately track behaviours and psychological states but will also make it easier for people to gain insights into which underlying factors shape these behaviours or states. For example, a person may be interested in knowing how different activities, the people they meet, and various contextual factors (e.g., weather; colours, sounds, or smells present in their environment; etc.) shape their future behaviours and emotions. Current technologies can typically track several such factors (e.g., other people present in the situation), but they could potentially evolve to automatically track various other factors of interest to individuals who practise self-quantification. Such data would allow the computation of models that could clarify whether these factors predict future behaviours and emotional states. It is important to emphasize that in this example we are referring to factors, behaviours, and emotional states of interest to the person practising self-quantification; we are not advocating that the devices track data the person is not interested in.
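As a toy illustration of such a model, the sketch below fits a simple linear regression (via scikit-learn) relating a few automatically logged contextual factors to next-day self-reported mood. The data are fabricated and far too small for real inference; an actual platform would need substantially more data and more careful modelling.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [hours_of_sunshine, social_contacts, hours_of_exercise],
# logged automatically; y = self-reported mood (1-10) the next day.
X = np.array([[6, 3, 1], [2, 0, 0], [8, 5, 2], [1, 1, 0], [5, 2, 1], [3, 4, 0]])
y = np.array([7, 3, 9, 2, 6, 5])

model = LinearRegression().fit(X, y)
for name, coef in zip(["sunshine", "contacts", "exercise"], model.coef_):
    print(f"{name}: {coef:+.2f} mood points per unit")
```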
Behavioural Informatics
Introducing the technological domain
Behavioural informatics (BI) is the application of the internet of things (IoT) – the network of interconnected devices (e.g., mobile phones, smart speakers, etc.) that can be used to collect and record any type of data created by some form of human behaviour – for the purpose of creating behavioural change (e.g., Swan, 2012; Pavel et al., 2015; Fu & Wu, 2018; Rahmani et al., 2018). This can be achieved in many ways and requires the use of sophisticated machine learning algorithms. For example, the health coaching platform proposed by Pavel et al. (2015), which helps the elderly to improve and manage their health behaviours, relies on various devices referred to as sensors that collect data from the person's home environment in real time. These sensors include contact switches, passive infrared sensors that capture motion, bed cameras, computer keyboards, smartphones, credit card logs, accelerometers, environmental sensors, 3D cameras, and so on. The data from the sensors, together with the self-reported data generated by users via questionnaires concerning their health goals and motivational states, are continuously processed by inference algorithms that generate estimates of behaviours as well as psychological and physical states. These estimates are then used by the coaching platform to deliver interventions in real time. For example, if the algorithms infer that the person feels sad or depressed, the platform may prompt a family member or carer to call or visit the person to cheer them up.
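The sketch below caricatures this pipeline in a few lines of Python: a snapshot of sensor readings is mapped to an inferred state, which in turn triggers an intervention. The thresholds and state labels are invented stand-ins for the far more sophisticated inference algorithms such a platform would actually use.

```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    motion_events_last_24h: int   # from passive infrared sensors
    hours_in_bed: float           # from bed sensors
    messages_sent: int            # from phone/keyboard logs

def infer_state(s: SensorSnapshot) -> str:
    """Crude stand-in for the inference algorithms described above."""
    if s.motion_events_last_24h < 20 and s.hours_in_bed > 11 and s.messages_sent == 0:
        return "possibly_low_mood"
    return "typical"

def select_intervention(state: str) -> str | None:
    if state == "possibly_low_mood":
        return "notify_carer"  # prompt a family member or carer to check in
    return None

snapshot = SensorSnapshot(motion_events_last_24h=12, hours_in_bed=12.5, messages_sent=0)
print(select_intervention(infer_state(snapshot)))  # notify_carer
```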
Dynamic personalization (Pavel et al., 2015) is, therefore, at the core of BI. In other words, based on the data obtained from various devices in real time, machine learning models can constantly compute different variables that are relevant to the behavioural goals of interest (e.g., motivation levels, barriers to meeting the goals, etc.) and then select the best interventions to implement (i.e., the interventions that work best based on previous data and/or that have been established as effective by previous theories of behaviour change). Although BI is to some degree linked to self-quantification because it relies on tracking devices that capture data about people's behaviour, it goes beyond self-quantification because its core components are sophisticated algorithms that process data from various interconnected devices in real time and deliver appropriate behavioural interventions.
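One common way to implement 'select the interventions that work best based on previous data' is a bandit-style algorithm. The epsilon-greedy sketch below is our illustration of the general idea, not a description of how any cited platform actually chooses among interventions.

```python
import random

class InterventionSelector:
    """Epsilon-greedy selection: mostly pick the intervention that has
    worked best so far, occasionally explore the alternatives."""

    def __init__(self, interventions, epsilon=0.1):
        self.epsilon = epsilon
        self.successes = {i: 0 for i in interventions}
        self.attempts = {i: 0 for i in interventions}

    def choose(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(list(self.successes))  # explore
        # Exploit: highest observed success rate (avoid division by zero).
        return max(self.successes,
                   key=lambda i: self.successes[i] / max(self.attempts[i], 1))

    def record(self, intervention: str, worked: bool) -> None:
        self.attempts[intervention] += 1
        self.successes[intervention] += int(worked)

selector = InterventionSelector(["reminder", "social_prompt", "goal_reframe"])
choice = selector.choose()
selector.record(choice, worked=True)
```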
Theoretical argument and available evidence
One of the advantages of BI is that, rather than being supported by a single specific theory, BI platforms can adopt various theories of behavioural change to guide their interventions. For example, Active2Gether (Klein et al., 2017) is a BI system that encourages physical activity and is based on social-cognitive theory (Bandura, 2001, 2004). According to the theory as implemented in the system, the main determinants of behaviour change are intentions, self-efficacy regarding the behaviour, and outcome expectancies. Other factors that contribute to these main determinants are social norms, long-term goals, potential obstacles, and satisfaction with one's goal progress. Active2Gether tracks how people score on these theoretical components in real time and then selects the appropriate interventions to guide physical activity. For example, if a person currently has low self-efficacy (i.e., low confidence in their ability to undertake the desired behaviour), the platform selects simpler behavioural goals (e.g., climbing only one floor instead of five) that the person can easily accomplish and gradually increases their difficulty until the desired behaviour is accomplished.
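This kind of goal-adjustment logic could be sketched as follows; the thresholds and step sizes are invented for illustration and do not reflect Active2Gether's actual parameters.

```python
def adjust_goal(current_goal: int, self_efficacy: float, achieved: bool) -> int:
    """Adapt goal difficulty (e.g., floors to climb) to self-efficacy,
    echoing the logic described above.

    self_efficacy is assumed to be scaled 0-1 from questionnaire data."""
    if self_efficacy < 0.3:
        return max(1, current_goal - 2)   # scale back to an easy win
    if achieved and self_efficacy > 0.7:
        return current_goal + 1           # gradually increase difficulty
    return current_goal                   # consolidate at the current level

goal = 5
goal = adjust_goal(goal, self_efficacy=0.2, achieved=False)
print(goal)  # 3: an easier goal the person can accomplish
```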
Given that building and testing BI platforms is a highly challenging endeavour – it requires sophisticated programming knowledge, behavioural change expertise, and the opportunity to access or link various sensors – to our knowledge no BI platform has been rigorously researched to date in terms of its effectiveness. Some preliminary findings based on self-reports (e.g., Fu & Wu, 2018), however, indicate that BI has considerable potential to revolutionize behaviour change.
Future potential
Currently, the number of devices connected to the internet that could potentially be used to track behaviour is estimated to be around 30–35 billion (Statista, 2018). This means that each household on average owns several such devices, and the number is likely to be larger in developed countries. Therefore, the potential of BI to contribute to behaviour change is large, given that these devices generate data that could be continuously processed by algorithms and inform real-time interventions. The main obstacle in this regard is likely a lack of collaboration between behavioural change experts and computer scientists, given that all BI platforms need to be a joint effort of researchers and practitioners working in these domains. Therefore, we encourage behavioural scientists to explore current advancements in BI and potentially form collaborations with computer scientists to create effective BI-based behavioural change platforms.
Overcoming Libertarian Paternalism
Administering behavioural interventions via the technological tools overviewed here could overcome libertarian paternalism in several ways. First, this approach would not interfere with people's choice processes and would therefore not limit their negative freedom (Grüne-Yanoff, 2012; Gane, 2021), because people would actively select the technology and the intervention to use only after the choice process has ended (i.e., after they have decided whether and which behaviour they want to change). Beyond this basic contribution, however, technology has the potential to empower people to preserve their negative freedom even in environments where they typically have little control. For example, whenever people are outside their homes, they are at the mercy of policy makers, marketers, and other agents who can change the contexts in which they act to interfere with their choices and influence them. City councils may implement point-of-decision prompts to increase stair climbing (Soler et al., 2010), and supermarkets may implement choice architecture that encourages a particular food choice (Wansink, 2016; Huitink et al., 2020). People may not agree with how the various places they visit daily attempt to change their behaviour, but they have little power to change this. However, VR and AR would empower them to alter the external environment in a way that prompts actions consistent with their goals, values, and beliefs, and therefore to override unwanted contextual influences imposed by other agents that interfere with their choice processes. In this context, instead of implementing nudges that prompt specific choices ‘in the wild’ and thus limit negative freedom, policy makers could focus on producing VR or AR behaviour change apps that people could use to alter their external environment to be consistent with their behavioural preferences.
Transparency would ensure that technological interventions go beyond negative freedom and achieve positive freedom – the possibility of making choices that allow one to take control of one's life and act consistently with one's fundamental purposes (Carter, 2009). For the transparency requirement to be met, a technological intervention would need to be accompanied by a summary that outlines how the intervention operates, whether it is supported by scientific evidence, and in which direction it should change behaviour. Although it is not possible to estimate to what degree different people would utilize this information, its presence would allow them to use reflective processes (Stanovich & West, 2000; Strack & Deutsch, 2004) and deliberate on whether a technological intervention is consistent with their values and gives them enough control. In other words, they would have the option to extensively exercise their positive freedom if they wanted to do so. This option could be further extended by allowing them not only to select desired interventions based on adequate information, but also to determine intervention parameters. For example, a gamification intervention could be designed in such a way that people can determine how points are awarded and when, what behavioural goals need to be achieved to level up, how badges are unlocked, and so on. Given that all the technological interventions we have overviewed would require access to people's data, positive freedom would also necessitate that people have the option to decide which data they are willing to provide. To be able to make this choice, they would ideally need to be presented with a rationale for the relevance of different variables to a given intervention, and it would be mandatory for the technology provider to clarify how their data will be handled.
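One way to operationalize such a transparency summary is as a structured record shown to the user before they opt in, as in the sketch below; all field names and contents are our illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class InterventionSummary:
    """Transparency record shown to the user before they opt in.
    Field contents are illustrative."""
    name: str
    mechanism: str               # how the intervention operates
    target_direction: str        # which behaviour it changes, and how
    evidence: str                # strength of the supporting evidence
    data_required: list = field(default_factory=list)   # data the user must consent to share
    user_parameters: dict = field(default_factory=dict)  # user-adjustable settings

summary = InterventionSummary(
    name="Step-goal gamification",
    mechanism="Awards points and levels for reaching daily step goals",
    target_direction="Increases daily physical activity",
    evidence="Supported by several field studies of moderate quality",
    data_required=["step counts"],
    user_parameters={"points_per_1000_steps": 10, "daily_goal": 7000},
)
print(summary)
```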
It is important to emphasize that we do not view technology as something that should replace behavioural strategies designed to overcome libertarian paternalism, including nudge plus (Banerjee & John, 2020), self-nudging (Reijula & Hertwig, 2020), and boosting (Hertwig & Grüne-Yanoff, 2017). Instead, we see technology as a tool that can complement and extend these approaches, but also go beyond them. First, the technologies we overviewed can be used to administer interventions compatible with any of the three strategies. For example, nudge plus refers to behavioural change techniques that not only alter the context in which people act but also foster reflection and deliberation about the intervention itself and the behaviour to change. As discussed, the technologies we tackled would nurture reflection and deliberation because they would require the person to select the desired behaviour to change and an intervention compatible with their values, to possibly adjust intervention parameters, etc., which is consistent with nudge plus. Second, the technologies overviewed can extend the three intervention techniques by making them more engaging and motivating. For example, self-nudging refers to people applying nudges such as framing or prompts to themselves, which may be difficult to do because it requires extensive self-control that can be depleted (Muraven & Baumeister, 2000; Baumeister et al., 2007). Technology can make self-nudging easier because it can both automate it and make it more interesting and immersive (e.g., gamifying nudges or presenting them in VR or AR). Finally, technology can go beyond the three intervention techniques because, as discussed, it can empower people to preserve their negative freedom even in environments where they typically have little control by overriding or changing contextual influences in those environments (e.g., AR altering the environment's visual appearance).
Knowledge about the Interventions and Their Mechanisms: An Obstacle to Behavioural Change?
Given that making technological interventions compatible with liberalism requires that the person understands the behavioural change techniques implemented and how they operate, the following question arises: would such extensive knowledge and freedom of choice impair intervention effectiveness? Although this has not yet been systematically investigated, several arguments indicate that it should not render interventions ineffective.
The first argument is based on self-determination theory, according to which people's intrinsic motivation to change their behaviour is determined by competence, autonomy, and relatedness (Deci & Ryan, 2000; Ryan & Deci, 2000). Given that the transparency and freedom of choice associated with technological interventions should provide people with a sense of autonomy, such interventions could potentially be more intrinsically motivating than interventions that lack these characteristics and thus produce a more durable and long-lasting behavioural change (e.g., Van Der Linden, 2015; Liu et al., 2019). The second argument comes from research on personalized persuasion. Studies conducted in this regard (Hirsh et al., 2012; The Behavioural Insights Team, 2013; Matz et al., 2017; Lavoué et al., 2018; Mills, 2020) suggest that personalized behavioural interventions are more effective than non-personalized ones. Therefore, because the technologies overviewed in the present article would lend themselves to personalization, given that they would be linked to the user's specific needs, preferences, and behavioural patterns, their effectiveness would likely benefit. As the final argument, we posit that, even if people know how certain interventions operate, this knowledge will not necessarily be salient every time they receive the intervention and therefore need not interfere with how they react to it. For example, even if people are aware that defaults change behaviour by making the decision process less cognitively costly (Blumenstock et al., 2018), this does not mean they will not be influenced by defaults when they encounter them. Indeed, Loewenstein et al. (2015) showed that, even when people were warned they would receive defaults that would attempt to change their behaviour, the effects of these defaults persisted. Overall, our argument that knowing how behavioural interventions operate should not necessarily hamper their effectiveness is consistent with other articles that have tackled this issue (e.g., Banerjee & John, 2020; Reijula & Hertwig, 2020).
New Ethical Issues
Although the new technologies examined in the present article have the potential to create behaviour change while empowering people to make their own choices, they also raise new ethical issues with implications for freedom of choice. For example, personal data collected via self-quantification, social robots, VR and AR, the various sensors involved in behavioural informatics, and gamification platforms might be stored by private companies, which could use them to influence people more effectively, without their knowledge, to buy products or services they would not otherwise be interested in (Zimmer, 2010; Kramer et al., 2014; Verma, 2014; Boyd, 2016; Herschel & Miori, 2017; Gostin et al., 2018; Rauschnabel et al., 2018; Mathur et al., 2019; Mavroeidi et al., 2019). Therefore, although the technological tools would on the surface support liberalism because they would endorse free choice as well as the subjectivity or plurality of values, below the surface they could be used to fulfil goals aligned not with the individual but with the interests of those who control the technology. Indeed, several scandals reflecting this premise have already happened, such as Cambridge Analytica, in which people's data were used for microtargeting without their awareness (Isaak & Hanna, 2018; Hinds et al., 2020). This and associated dangers of using new technologies in behaviour change remain a valid concern, given that it cannot be excluded that people's data collected via these technologies will be used to manipulate them in ethically dubious ways.
Data protection policies are continuously advancing; however, further action is necessary to ensure democratic and liberal protection of data. The EU General Data Protection Regulation (GDPR) introduced data protection standards regarding informed consent and algorithmic transparency (Wachter, Reference Wachter2018) and gave consumers the right to access, delete, and opt out of the processing of their data at any time (Politou et al., Reference Politou, Alepis and Patsakis2018; Mondschein & Monda, Reference Mondschein, Monda, Kubben, Dumontier and Dekker2019). Multiple countries worldwide have since followed, recognizing the need for regulation that matches technological progress and protects citizens' privacy (Lynskey, Reference Lynskey2014; Oettinger, Reference Oettinger2015). However, opt-out clauses may not be sufficient to ensure sustainable protection of individuals' privacy. As Viljoen (Reference Viljoen2020) argues, what drives both the value and the danger of data in the digital economy is their relational aspect – the fact that they put individuals into relationships within a population-wide network. Large companies are not interested in individual-level insights about specific subjects, but rather in population-level knowledge. While the GDPR and similar legislation aim at individual-level privacy protection, population-level protection remains overlooked. To address this gap, governments could move towards more democratic institutions of data governance, following the solution proposed by Viljoen (Reference Viljoen2020).
These suggested advancements in data protection regulation might be supported by increasing public demand for data protection. The privacy paradox – the discrepancy between users' concern about their privacy and the fact that they do little to protect their privacy and personal data – results from individuals' risk–benefit calculations and the perception that the risk is low (Barth & de Jong, Reference Barth and De Jong2017); a stylized formalization of this calculus is sketched below. However, recent scandals such as Cambridge Analytica, as well as popular documentaries such as The Social Dilemma or Terms and Conditions May Apply, which uncover what data corporations and governments collect and how they use them, may help change the perceived risk–benefit ratio. In line with construal level theory (Spence et al., Reference Spence, Poortinga and Pidgeon2012), making data privacy abuse concrete and psychologically close may motivate people to overcome this paradox. A recent report is consistent with this premise: at a time when people are increasingly exposed to information about data privacy abuse through the media, 75% of US adults support more government regulation concerning the personal information that data companies can store and what they can do with it (Auxier et al., Reference Auxier, Rainie, Anderson, Perrin, Kumar and Turner2019). With increasing public demand for data protection, policymakers should offer legislative solutions that not only protect consumers' data but also provide a secure framework for behavioural science interventions supported by new technologies.
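As an illustrative sketch only – a stylized formalization of our own, not one proposed by Barth and de Jong (Reference Barth and De Jong2017) – the privacy calculus can be written as a simple expected-value comparison:

$$\text{share personal data if } B > p \cdot C,$$

where $B$ denotes the perceived benefit of using a service, $C$ the cost of a privacy breach, and $p$ its perceived probability. Media coverage that makes abuses concrete and psychologically close plausibly raises $p$ (and perhaps $C$), so the inequality can reverse even when the benefit $B$ is unchanged – which is how greater exposure to privacy scandals could translate into greater public demand for protection.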
Additional Policy Considerations
Finally, it is important to address the remaining practical challenges that might hamper the application of the new technologies we have overviewed in a policy context. The first challenge is scalability. Using any of the technologies we have discussed to administer behavioural interventions depends, at least to some degree, on a stable and fast internet connection. However, there is currently a significant urban–rural divide in internet coverage. In Europe, for instance, only 59% of households in rural areas have access to high-speed broadband internet, compared with roughly 86% of EU households overall (DiMaggio & Hargittai, Reference DiMaggio and Hargittai2001; European Commission, 2020). Therefore, the extent to which the new technologies will be scalable in the future will depend on how rapidly fast internet technologies (e.g., Fiber-To-The-Premises or 5G) are developed and adopted.
Furthermore, implementation of the new technologies has the potential to create negative spillovers that might outweigh the benefits they create (Truelove et al., Reference Truelove, Carrico, Weber, Raimi and Vandenbergh2014). For example, whereas humanoid social robots can serve as messengers that prompt people to undertake various behaviours, they could also come to replace other humans as companions and intimate partners, which might negatively affect birth rates. This could be problematic for developed countries already struggling with falling birth rates, such as Japan or the United States (Kramer, Reference Kramer2013). Whereas social robots that fulfil people's intimate and/or sexual needs could have a positive impact on health (Döring & Pöschl, Reference Döring and Pöschl2018), they might create further pressure on demographic development if they lead individuals to opt out of reproductive sexual relationships (Scheutz & Arnold, Reference Scheutz and Arnold2016; Danaher et al., Reference Danaher, Earp, Sandberg, Danaher and McArthur2017). This is only one example, and each of the technologies we cover could be linked to other negative spillovers. Therefore, before the new technologies can be implemented to administer behavioural interventions on a large scale, policymakers will need to systematically evaluate their potential negative spillovers.
Finally, the introduction of the new technologies as an alternative policy tool might shift the policy focus from a strategic, contextual approach to a more piecemeal one. For example, we have discussed that VR or AR can empower people to alter the context in which they act and thereby potentially reduce the manipulative influence of external agents, such as marketeers, on their behaviour. Whereas this may be a desirable outcome from the users' point of view, it would constitute only a piecemeal solution because it would shift further responsibility onto the individual, as opposed to the organizations that should provide a cleaner, safer, and better organized context for the population. Moreover, using VR or AR for this purpose could discourage policymakers from undertaking the effortful process of developing a more strategic regulatory framework that would limit the manipulative impact of marketeers and large organizations on the context in which people act. Therefore, it is important that policymakers do not use new technologies as a quick fix for policy challenges that need to be tackled in a more strategic way.
Conclusion
In the present article, we proposed that one way of making behavioural science interventions less paternalistic could be to integrate them with cutting-edge developments in technology. We covered five emerging technological domains – virtual and augmented reality, social robotics, gamification, self-quantification, and behavioural informatics – and examined their current state of development, their potential compatibility with techniques of behaviour change, and how using them to alter behaviour could overcome the limitations of libertarian paternalism. In this regard, we argued that interventions delivered using these technologies would be aligned with liberal principles because they would require that people deliberately choose which behaviours they want to change (if any) and select the desired technological tools and interventions for this purpose. Moreover, the interventions would be described in a user-friendly way to ensure transparency and compatibility with users' values and beliefs. Importantly, we do not expect that the integration of behavioural science with these cutting-edge technologies can be achieved immediately. As discussed, there are several impediments, including that some technologies are not yet fully scalable or usable and that they raise potential ethical issues. The main purpose of this article is to encourage behavioural scientists to explore the technologies we discussed more rigorously and to design testable behavioural change tools for them. This will speed up the integration of the two domains and lead to a new age of liberal behavioural interventions that enable extensive freedom of choice.