Policy challenge and research aims
Social media provides many benefits, but almost 3 in 10 adults in the UK (27%) say they have recently been exposed to potentially harmful content (Ofcom, 2024). Many social media platforms offer users tools to control what appears in their feeds to avoid distressing or harmful content (see example in Figure 1). We refer to such content as ‘sensitive content’ and the tools to adjust these settings as ‘content controls’ or ‘content settings’.

Figure 1. Example of existing content controls.
Only about a quarter of social media users say they have ever used these tools (Ofcom, 2024). Lack of awareness is one factor, but there are other barriers to engagement, such as perceived lack of time, complex settings, difficulty finding them and previous experiences (Ofcom, 2024).
The UK’s Online Safety Act 2023 (‘the Act’) requires certain online services, including some social media platforms, to offer adult users tools to control what content they see ‘at the earliest possible opportunity’. Regulators are interested in how platform design influences user choices and whether it enables users to act in line with their preferences. This research contributes to the evidence base in this policy area.
Behaviour change literature shows that small changes in the environment – the choice architecture – can influence people’s decisions (Thaler, Sunstein, & Balz, 2014). For example, defaults, complicated layouts, long disclosures and biased language can hinder users’ engagement with online controls (Centre for Data Ethics and Innovation, 2020). Exploring the impact of online choice architecture and addressing harmful practices is increasingly important for policymakers and regulators (Busch & Fletcher, 2024). Research on its impact on safety, privacy and data sharing is growing (Bauer, Bergstrøm, & Foss-Madsen, 2021). However, there is limited public research in the context of social media content controls. This research addresses that gap.
This experiment focused on one mechanism by which users can control their feed: users choosing between ‘All content types’ and ‘Reduced sensitive content’ when signing up for a new social media platform. This choice may have a long-lasting impact on their experience due to people’s tendency to stick with the existing option, known as status quo bias (Samuelson & Zeckhauser, 1988).
We explored how choice architecture can help or hinder users’ ability to make informed decisions about the amount of sensitive content on their feeds. Our goal was not to steer users towards a ‘safer’ choice but to help them understand, reflect on and select the option that suits them best.
Methodology
We identified barriers, developed interventions and tested them on a simulated social media platform with UK adults using a randomised controlled trial (RCT).
Diagnosing barriers
To identify barriers to user engagement with content controls, we conducted a literature review and workshops using the Capability, Opportunity, Motivation and Behaviour model (Michie et al., 2011) and the Theoretical Domains Framework (Atkins et al., 2017). See Supplementary materials A for the list of barriers and the prioritisation approach. After prioritising, we selected the following barriers for intervention development:
• Lack of attention to the information and options as users skim through to get to the feed.
• Lack of understanding of the information on content controls or the different options.
• Key information about controls is often buried in submenus or requires additional navigation, making it less visible and accessible to users.
• Tendency to stay with the status quo, such as a pre-selected default option.
Intervention design
To address the identified barriers, we developed interventions focusing on choice information and choice structure, based on the Competition & Markets Authority’s taxonomy of online choice architecture practices (Competition & Markets Authority, 2022). Under choice information, we considered how information is presented, such as its salience, ease of access and visual and design elements, as well as framing, such as how content control options are labelled and described. Under choice structure, we considered pre-selecting a default option, and the granularity and bundling of options.
Ultimately, we prioritised testing (1) the use of defaults, one of the strongest choice architecture interventions (Mertens et al., 2022), effective across different domains (Jachimowicz et al., 2019) and not yet explored in this context; and (2) how information is presented, building on research on video-sharing platforms (Ofcom, 2023).
We developed a five-arm RCT with one control and four interventions. Figure 2 gives an overview of the trial arms.

Figure 2. Trial interventions overview.
To determine whether users’ initial content control choices align with their preferences, we included a ‘review’ stage. We asked participants whether they would like to keep or change their choice after a period of browsing (Figure 3; details about the feed are under ‘Experimental flow and simulated social media platform’). If an intervention leads to a higher degree of change compared to the control, this could indicate that the choice architecture of that intervention is distorting users’ choices at sign-up, leading them to make selections that do not reflect their true preferences (see Figure 4 for the full list of hypotheses).

Figure 3. Review stage message.

Figure 4. Hypotheses.
Trial arms and hypotheses
Basic presentation (Control arm)
The Control arm reflected the content control design used by many popular platforms. None of the options were pre-selected, so participants could make an active choice. Users could access a more detailed definition of sensitive content by clicking ‘Learn more’. The information provided was the same as in all other trial arms (Figure 2).
Default arm
Defaults introduce a barrier to making an active choice because users can proceed without changing the pre-selected option. Defaults are common on social media platforms. Information provided to participants in the Default and Control arms was the same. However, we expected that, compared to the Control, participants in the Default arm would be less motivated to engage with the information and more likely to proceed with ‘All content types’, as it was pre-selected. We expected that this choice would be less active and, therefore, that after seeing the feed, participants would be more likely to change their initial choice. Following the completion of this research, we ran an exploratory mini-experiment to investigate the effect of pre-selecting ‘Reduced sensitive content’, which is reported in Supplementary materials G.
Information salience arm
Reducing the number of steps to access information can increase engagement with it (Rosenkranz et al., 2017). This intervention tested whether users made a more informed choice when the examples of sensitive content were more salient and easier to access.
Non-skippable and skippable microtutorial arms
Microtutorials are short step-by-step online guides. Unlike nudges that steer decisions, microtutorials aim to boost users’ capabilities to make their own choices (Hertwig & Grüne-Yanoff, 2017).
The trial tested whether chunking examples of sensitive content into small segments within an interactive microtutorial could help users align their settings with their preferences. We included both a non-skippable and a skippable microtutorial arm, as both types are used by online platforms. All steps of the microtutorials are in Supplementary materials B1.
Experimental flow and simulated social media platform
We tested these interventions on a simulated social media platform called WeConnect. To improve authenticity and external validity, WeConnect reflected the design of real social media platforms. The platform had two main components: (1) a sign-up process and (2) a content feed. During sign-up, participants chose their content settings, selecting between ‘All content types’ and ‘Reduced sensitive content’. The sign-up included other steps that mirrored the real-world process but were not included in the analysis (Supplementary materials B2).
The feed contained 24 content pieces: six short videos, six long text posts and 12 short text posts. Depending on the setting chosen during sign-up, participants saw either 12 (‘All content types’) or two (‘Reduced sensitive content’) sensitive pieces, covering hate, violence and misinformation (Figure 5). See Supplementary materials C for details on content selection (C1) and ethics and safeguarding (C2).

Figure 5. WeConnect content feed examples.
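For illustration, the feed-assembly rule described above can be sketched as follows. This is a minimal sketch based only on the reported counts (24 items in total; 12 or two sensitive pieces depending on the chosen setting); the function and variable names are our own and are not taken from the WeConnect implementation.

```python
import random

# Feed parameters from the trial design: 24 items in total, of which
# 12 are sensitive under 'All content types' and two under
# 'Reduced sensitive content'.
FEED_SIZE = 24
SENSITIVE_COUNTS = {"all": 12, "reduced": 2}

def build_feed(setting: str, sensitive_pool: list, neutral_pool: list,
               seed: int = 0) -> list:
    """Assemble a 24-item feed whose sensitive share depends on the
    content setting chosen at sign-up."""
    n_sensitive = SENSITIVE_COUNTS[setting]
    n_neutral = FEED_SIZE - n_sensitive
    feed = sensitive_pool[:n_sensitive] + neutral_pool[:n_neutral]
    random.Random(seed).shuffle(feed)  # interleave sensitive and neutral items
    return feed
```

A seeded shuffle keeps the presentation order reproducible across participants in the same arm, which matters when comparing arms.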
Figure 6 illustrates the flow of the experiment.

Figure 6. Experiment flow.
Participant interactions (e.g. liking a post) were not visible to others. After engaging with the feed, participants proceeded to the review stage (Figure 3), and the post-trial survey, which included a comprehension test (Figure 7) and further questions (Supplementary materials E6, E8 and E9).

Figure 7. Comprehension test.
To address potential issues concerning platform functionality, intervention design and data collection, we conducted user testing and a soft launch prior to fieldwork (Supplementary materials D).
Sample and data collection
We recruited a nationally representative sample of adult internet users from the UK. Our final sample comprised 3,500 UK adults (18+) recruited through the panel aggregator Lucid between 24 November and 14 December 2023. Supplementary materials provide further details on power calculations (E1), data collection (E2) and sample demographics (Table S.2).
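The trial’s power calculations are reported in Supplementary materials E1. For intuition only, a standard two-proportion sample-size formula (normal approximation) can be sketched as below; the example inputs are illustrative assumptions, not the parameters used for this trial.

```python
import math
from statistics import NormalDist

def n_per_arm(p0: float, p1: float, alpha: float = 0.05,
              power: float = 0.80) -> int:
    """Approximate participants needed per arm to detect a difference
    between two proportions (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p0 + p1) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return math.ceil(numerator / (p1 - p0) ** 2)

# Illustrative call: detecting a 10 percentage-point shift from a 50% base
# rate at 80% power needs roughly 390 participants per arm.
n = n_per_arm(0.50, 0.60)
```

Smaller minimum detectable effects inflate the required sample quadratically, which is why multi-arm trials like this one need several hundred participants per arm.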
Analysis strategy
We followed a pre-specified analysis framework that was approved before trial data were collected.
For outcomes with binary data (primary outcome and exploratory analyses), we conducted logit regressions. For outcomes with count data (secondary outcomes), we conducted Poisson regressions. For all models, our predictor variable was the treatment variable with the Control arm as the baseline, and we included age, gender, income, education, ethnicity and platform use as covariates. We used a significance level of 5% throughout. To control the false discovery rate, we corrected for four comparisons across the primary outcome and eight comparisons across secondary outcomes using the Benjamini–Hochberg adjustment (no adjustments were made for exploratory analyses). Data analysis was conducted in R. Further details on the analytical strategy are presented in Supplementary materials E.
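For readers unfamiliar with the Benjamini–Hochberg adjustment, the step-up procedure can be sketched with a generic implementation as follows. This is an illustrative sketch, not the authors’ analysis code (which was written in R).

```python
def benjamini_hochberg(p_values: list) -> list:
    """Return Benjamini-Hochberg adjusted p-values, controlling the
    false discovery rate across m comparisons.

    Each raw p-value is scaled by m/rank (rank 1 = smallest), and a
    cumulative minimum is taken from the largest p-value downwards so
    that adjusted values remain monotone in the raw p-values.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):      # walk from largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

An adjusted p-value below 0.05 is then declared significant, as in the paper’s four primary and eight secondary comparisons.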
Results
Primary analysis: whether participants continue with their initial choice
In the Control arm (the ‘Basic presentation’), 87% of participants maintained their initial choice after viewing the feed (Figure 8). After correcting for multiple comparisons, we found no significant differences between the Control and any of the intervention arms (p > 0.05; Table 1). These results did not support our hypotheses: we expected that, compared to the Control, the proportion of participants continuing with their initial choice would be lower in the Default arm and higher in all other intervention arms (Hypotheses 1a–d).

Figure 8. Primary analysis comparing the percentage of participants who chose to continue with their content settings in the Control arm to each intervention arm.
Table 1. Primary analysis model output

Secondary analysis: comprehension of what constitutes sensitive content
After viewing the feed, participants were asked to categorise eight items of content as either sensitive or not sensitive. On average, participants correctly categorised 5.87 pieces of content (Figure 9, Table 2). None of the treatment arms resulted in significant changes from the Control (p > 0.05), even though only five participants in the Control arm clicked ‘Learn more’ and thus had an opportunity to read the sensitive content examples. These results did not support our hypotheses: we expected that the probability of correctly classifying content would be lower in the Default arm and higher in all other intervention arms compared to the Control (Hypotheses 2a–d).

Figure 9. Secondary analysis, comparing the content participants correctly identified as sensitive or not sensitive in the Control arm to each intervention arm.
Table 2. Secondary analysis (comprehension) model output

Exploratory analysis: initial choice
Overall, 24% of participants across all trial arms chose ‘Reduced sensitive content’ at sign-up. The Information salience intervention significantly increased the proportion of participants making this choice compared to the Control (29.4% vs 24.4%, p < 0.05). Conversely, the Default significantly reduced the proportion making this choice compared to the Control (14.9% vs 24.4%, p < 0.01, Figure 10, Table 3), in line with Hypothesis 3a.

Figure 10. Exploratory analysis, comparing the percentage of participants who chose to see reduced sensitive content in the Control arm to each intervention arm.
Table 3. Exploratory analysis (initial choice) model output
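The reported estimates come from covariate-adjusted logit regressions. As a rough, unadjusted cross-check of the exploratory comparisons, a pooled two-proportion z-test on the reported percentages can be sketched as follows, assuming roughly equal arm sizes of about 700 (3,500 participants across five arms). This simplification is ours, not the paper’s method.

```python
from statistics import NormalDist

def two_prop_p(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-sided p-value for the difference between two proportions,
    using the pooled normal approximation."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = abs(p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(z))

# Assumed arm size of ~700 per arm.
p_salience = two_prop_p(0.294, 700, 0.244, 700)  # Information salience vs Control
p_default = two_prop_p(0.149, 700, 0.244, 700)   # Default vs Control
```

Under these assumptions the unadjusted test agrees with the direction and significance thresholds reported above (p < 0.05 and p < 0.01, respectively), though the paper’s regression estimates additionally adjust for demographics and platform use.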

Exploratory descriptives: reasons for decision to change or keep setting
Participants could select multiple response options from the list provided. Among participants who continued with the original choice (n = 3,069), the most popular reasons included thinking it was the right option for them (48%), the content matching their expectations (34%) and liking the content they saw (26%). These were the top three reasons regardless of the choice they decided to keep (Supplementary materials Tables S.5–S.7).
The most popular reason for changing from ‘All content types’ to ‘Reduced sensitive content’ (n = 320) was seeing content that upset them (43%) (Supplementary materials Table S.9), whereas the top reason for changing to ‘All content types’ (n = 111) was curiosity about what would change (65%) (Supplementary materials Table S.10; aggregated data in Table S.8).
Exploratory descriptives: skipping the microtutorial
Of the 664 participants in the Skippable microtutorial arm, 73% skipped the tutorial (reasons in Supplementary materials Table S.11).
Additional results in Supplementary materials F include goodness of fit test (F1), understanding of choice options (F3), sentiment (F2, F4–F7), behaviour on the platform (F8), sentiment to non-skippable microtutorial (F12), comprehension by post topic (F13), previous experience with content controls (F14), results of ordinal models (F15) and post-hoc mini-experiment (G).
Discussion and conclusion
We aimed to determine whether changes to the online choice architecture could help or hinder users’ ability to align content settings with their preferences.
We found that interventions affected the initial choice at the sign-up stage. In the Control arm, where participants made an active choice, 24% chose ‘Reduced sensitive content’. In the Default arm, where ‘All content types’ was pre-selected, only 15% chose ‘Reduced sensitive content’. The difference could be driven by inertia and the ease of staying with the default option, the perception that this is the recommended choice or because it represents the status quo (Jachimowicz et al., 2019).
In contrast, more participants chose ‘Reduced sensitive content’ when examples were shown on the decision page (29%) compared to the Control group (24%). This resonates with information security research showing that threatening wording and visually salient messages increase users’ likelihood of declining cookies (Ebert, Ackermann, & Bearth, 2022). In our case, the perceived threat level may have been higher in the Information salience group, as all participants saw the sensitive content examples, unlike the Control group, where only five clicked ‘Learn more’ and saw the examples.
After viewing the feed, participants were asked if the choice was still working for them (Figure 3). Surprisingly, the proportion of participants maintaining their initial choice remained similar across all interventions and the Control group, despite differences in initial selections. This lack of significant impact may partly reflect the high baseline, with 87% of Control participants staying with their initial choice. The most common reasons for keeping or changing the initial choice aligned with our assumptions when designing the experiment: participants kept their initial choice if it suited them, switched to a ‘safer’ option if the content upset them and switched to a less ‘safe’ option out of curiosity about potential changes.
Notably, the highest proportion of participants keeping their initial choice was in the Non-skippable microtutorial arm (91%), although the difference was not statistically significant after correction for multiple comparisons. We expected the microtutorial to make users pause, reflect on the information and make a more informed initial choice, increasing the likelihood of sticking with it after the feed. Such a finding would have aligned with research demonstrating that microtutorials can be an effective online safety intervention, significantly boosting user reporting (Ofcom, 2023).
In addition to the high baseline, differences in context, nature (capability-building vs information provision) and psychological mechanisms (prompting an action vs shaping choice) could also explain the discrepancy between our findings and previous research. The lack of impact from the Skippable microtutorial likely reflects the fact that 73% of participants skipped it, effectively placing them in the Control condition. Overall, our findings indicate that the initial choice was ‘sticky’ even when participants were given a low-effort opportunity to revise it.
The lack of impact of the interventions on comprehension may be driven by participants having their own pre-existing understanding of what constitutes sensitive content. Another explanation is that our interventions targeted attention rather than comprehension: users may already know what sensitive content is but may not consider it when making online choices. On this account, Information salience was effective because it increased attention, not knowledge, leading to different initial choices.
Limitations
The main limitations relate to the simulated environment, which may not fully replicate real social media users’ incentives and motivations, and to the absence of more personalised or harmful content. Moreover, the short experiment timescale limits conclusions on long-term effects. Thus, our confidence lies more in the relative impact of the interventions than in precise measures of their magnitude.
Conclusion
These findings offer two key lessons for policymakers and regulators to enhance user experience and safety across digital platforms. Firstly, small changes in how content controls are presented may significantly influence user choice. This underscores the importance of platforms designing initial choices to support informed decision-making. Secondly, users tend to stick with their existing setting, even when offered easy opportunities to change. We hypothesise that some users may not have strong preferences regarding the two choice options offered. They may prefer more tailored methods, such as hiding an individual post. Future research could explore such control mechanisms.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/bpp.2025.10016
Acknowledgements
We would like to thank our colleagues at Ofcom and BIT for their support and contributions, including Amor Perez Pavon, Bobby Stuijfzand, Deborah Mc Crudden, Eva Kolker, John Ivory, Johnny Sutton, Jonathan Porter, Rhian Armstrong, Riccardo D’Adamo and Zak Soithongsuk.
Funding statement
Ofcom provided funding for this work.
Competing interests
The authors declare no competing interests.
Data availability statement
Data and code can be provided upon request. Please email the corresponding author.



