1. Introduction
Long before the widespread adoption of the internet, the issue of endurance and proliferation of misinformation such as fake news – ‘misleading information intentionally published and presented as news’ – (Anderau Reference Anderau2021, 210) has posed a challenge to our knowledge-seeking practices (cf. O’Connor and Weatherall Reference O’Connor and Weatherall2019, 1–2; cf. Black and Fullerton Reference Black and Fullerton2020, 73; Manfra and Holmes Reference Manfra and Holmes2020, 129–30).Footnote 1 As noted by Allcott and Gentzkow (Reference Allcott and Gentzkow2017, 214), falsehoods have historically permeated public discourse through various channels within popular media. However, the emergence and utilisation of new information and communication technologies, especially online platforms such as social media (Rini Reference Rini2017, 43; Marin Reference Marin2021, 363; Millar Reference Millar2019, 525–26), are seen as catalysts for the unparalleled spread of misleading content to a significantly broader audience. Without delving into the question of how acute and serious the problems of misinformation proliferation and fake news consumption are in the digital space (for more on this, see: Lazer et al. Reference Lazer, Baum, Benkler, Berinsky, Greenhill, Menczer, Metzger, Nyhan, Pennycook, Rothschild, Schudson, Sloman, Sunstein, Thorson, Watts and Zittrain2018; but also consider: Altay et al. Reference Altay, Berriche and Acerbi2023), it is undeniable that the internet possesses certain characteristics that facilitate the spread of false, misleading, and dubious content. Such ‘epistemically toxic content’ (a phrase borrowed from Record and Miller Reference Record and Miller2022a) can be mediated, both in terms of perpetuation and diffusion, not only through technology but also by the agents themselves.
By technologically mediated digital content, we mean the content that is suggested or delivered to users through algorithms, such as those employed by search engines or social media platforms. With these mechanisms, the following phenomena are associated: (a) personalised searches (Simpson Reference Simpson, Halpin and Monnin2013; Mößner and Kitcher Reference Mößner and Kitcher2017; Black and Fullerton Reference Black and Fullerton2020), (b) page rank (Höchstötter and Lewandowski Reference Höchstötter and Lewandowski2009), (c) autocomplete technology (Miller and Record Reference Miller and Record2017), (d) sponsored content (de Villiers-Botha Reference de Villiers-Botha2022), and (e) information or filter bubble (Nguyen Reference Nguyen2018). For example, the results we see and click on during our Google searches are technology-mediated, as they are selected according to the PageRank algorithm, which lacks transparency (see Brin and Page Reference Brin and Page1998, 109ff.; de Villiers-Botha Reference de Villiers-Botha2022, 331–32) and can be influenced by sponsored content (de Villiers-Botha Reference de Villiers-Botha2022, 330). Similarly, the autocomplete technology (Miller and Record Reference Miller and Record2017) nudges users to search for specific results that are also personalized (Simpson Reference Simpson, Halpin and Monnin2013, 427; Black and Fullerton Reference Black and Fullerton2020, 78). Personalization refers to the curation of content that users are exposed to, based on profiles constructed by algorithms. This carries the risk of sheltering users from diverse information and potentially exposing them to unreliable content simply because it aligns with their past searches (see Mößner and Kitcher Reference Mößner and Kitcher2017). In this way, it is argued that the interaction of algorithms can create an epistemic bubble in which the user is situated (Nguyen Reference Nguyen2018, 4).
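To make the worry about personalisation more concrete, the following toy sketch (written in Python, with invented result names, scores, and a hypothetical personalised_ranking function; it is not a description of Google's actual PageRank or profiling systems) illustrates how blending a generic relevance score with a profile-affinity score can produce different result orderings for two users issuing the same query – the basic logic behind the filter-bubble concern described above.

```python
# A deliberately simplified sketch of personalised ranking. It is NOT Google's
# actual PageRank or profiling pipeline; it only shows how blending a generic
# relevance score with a profile-affinity score can give two users different
# orderings for the same query.

results = {
    # hypothetical search results: a generic relevance score plus topic tags
    "peer-reviewed overview": {"relevance": 0.9, "topics": {"science"}},
    "sceptical blog post": {"relevance": 0.6, "topics": {"alternative"}},
    "mainstream news article": {"relevance": 0.8, "topics": {"news"}},
}

def personalised_ranking(results, profile, weight=0.5):
    """Rank results by a blend of generic relevance and affinity with the user's profile."""
    def score(item):
        _, data = item
        # affinity: share of the result's topics that match the user's past clicks
        affinity = len(data["topics"] & profile) / len(data["topics"])
        return (1 - weight) * data["relevance"] + weight * affinity
    ranked = sorted(results.items(), key=score, reverse=True)
    return [name for name, _ in ranked]

# Two users issue the same query but have different click histories (profiles).
print(personalised_ranking(results, profile={"science", "news"}))
print(personalised_ranking(results, profile={"alternative"}))
```

The point is not the particular numbers but the structure: the more heavily past behaviour is weighted, the more the same query returns belief-congruent sources, which is the mechanism the personalisation literature cited above is concerned with.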
Agent-mediated content refers to content that arises from certain user activities which may lead to the generation of epistemically toxic content. Some of these activities stem from users' selective exposure to a specific circle of sources, perhaps those who share their core beliefs, leading to so-called echo chambers (Nguyen Reference Nguyen2018), while others may result from unregulated sharing practices (Rini Reference Rini2017). While echo chambers actively isolate users from typical sources of dissenting opinions, unregulated sharing norms allow them to evade responsibility for their sharing behaviour.
Given the negative impact that these types of mediation have on the reliability of online content, it is unsurprising that in recent years, there has been a growing tendency to provide a deeper understanding of the trustworthiness of the various agents and technologies engaged in online information dissemination, along with the formulation of strategies for dependable and responsible utilisation of information technologies in seeking advice grounded in sufficient evidence (e.g. Black and Fullerton Reference Black and Fullerton2020). If we follow the classification proposed by Croce and Piazza (Reference Croce and Piazza2021), these endeavours can be categorised into two approaches: educational and structural interventions (a distinction also made in Lazer et al. Reference Lazer, Baum, Benkler, Berinsky, Greenhill, Menczer, Metzger, Nyhan, Pennycook, Rothschild, Schudson, Sloman, Sunstein, Thorson, Watts and Zittrain2018, 1095). Advocates of the educational approach (e.g. Polizzi and Taylor Reference Polizzi and Taylor2019; Croce and Piazza Reference Croce and Piazza2021) argue that the lack of resources and strategies for evaluating the quality of online content (sometimes described as digital media literacy [Guess et al. Reference Guess, Lerner, Lyons, Montgomery, Nyhan, Reifler and Sircar2020] or information/media literacy [Black and Fullerton Reference Black and Fullerton2020, 85; Lichtenberg Reference Lichtenberg2021, 19]) renders users susceptible to social media misinformation. Refining these skills, perhaps through engaging in fact-checking (Nygren et al. Reference Nygren, Guath, Axelsson and Frau-Meigs2021; Eisemann and Pimmer Reference Eisemann and Pimmer2020), broadening sources of information (Croce and Piazza Reference Croce and Piazza2021) or developing strategies to identify trustworthy (Polizzi and Taylor Reference Polizzi and Taylor2019) or reliable online information sources (Wiley et al. Reference Wiley, Goldman, Graesser, Sanchez, Ivan and Hemmerich2009; Herrero-Diz and López-Rufino Reference Herrero-Diz and López-Rufino2021), is believed to be crucial in improving individuals' knowledge-seeking online activities.
On the other hand, proponents of the structural camp believe that the inherent structures and specific features of web-based technology, along with the dynamics of users' interaction with it, place them in a fairly disadvantaged epistemic position that cannot be significantly improved by altering their individual epistemic performances (de Villiers-Botha Reference de Villiers-Botha2022; Millar Reference Millar2019, Reference Millar2021). In their view, effective interventions aimed at enhancing current information-seeking practices should entail structural changes within the information environment itself (Rini Reference Rini2017) so that it promotes truth (Lazer et al. Reference Lazer, Baum, Benkler, Berinsky, Greenhill, Menczer, Metzger, Nyhan, Pennycook, Rothschild, Schudson, Sloman, Sunstein, Thorson, Watts and Zittrain2018) and well-validated claims (cf. Levy Reference Levy2024). In the following two sections (1.1 and 1.2), we will look more closely at these proposals and provide an evaluation of the structuralist interpretation of the internet as an epistemic practice, including their understanding (or lack thereof) of the roles and positions of internet users within it. We will argue that substantial improvements in the reliable utilisation of web-based informational and communicative technologies stem not from reforms at the technological level, but rather from improving users' epistemic performance.
However, as we will see further, proposed educational interventions aimed at this goal mostly focus on developing strategies for users to discern reliable information and its corresponding sources. In section 1.3, we aim to show that while such demarcation strategies can be beneficial, they are only part of the solution. To unlock their full potential, it is necessary to understand the factors that mediate their usage and diminish their effectiveness. This, as we will see in sections 1.4 and 1.5, necessitates establishing different assessment dimensions for users to evaluate the circumstances under which they acquire beliefs they regard as knowledge, as well as their own influence on the epistemic climate. In other words, we will illustrate the necessity for users to comprehend both their passive and active roles, as well as the relationship between their intentions to share and their ability to assess the accuracy of content. Understanding their own epistemic limitations and the external factors that hinder their cognitive efforts is crucial for improving their epistemic position in such a context. Finally, we recommend that social studies teachers and other practitioners actively discuss and integrate this approach as the foundation of a more comprehensive education program aimed at promoting the more reliable use of web technologies in knowledge-seeking practices.
1.1. The motivations behind a structural approach
As mentioned in the previous section, some proponents of the structural camp believe that users' reliance on web-based information and communication channels cannot be effectively improved by educational interventions. For example, Boyd Millar (Reference Millar2021, 9) believes that potential enhancements through certain educational programs would be merely incremental – so minimal as to render almost any educational intervention scarcely justifiable in terms of both cost and exertion. As he argues, the inherent nature of web-based technologies exploits certain human epistemic defects such as uncritical acceptance of information congruent with prior beliefs (confirmation bias) and susceptibility to the (illusory) truth effect. The truth effect refers to the fact that people are more willing to believe a piece of information if they encounter it repeatedly (see Hassan and Barber Reference Hassan and Barber2021). This applies to falsehoods as well, which are ubiquitous on social media (Millar Reference Millar2019, 528). In this way, social media traps individuals in a bubble that shields them from contradictory evidence that could otherwise challenge their beliefs (Millar Reference Millar2019, 529–30), making it unrealistic to expect the average user to overcome these limitations on their own. Therefore, Millar proposes structural changes, which could involve ‘some combination of laws, regulations, and incentives to create an information environment that ordinary human beings can navigate without regularly forming outlandish false beliefs’ (Millar Reference Millar2019, 534). In a similar vein, Tanya de Villiers-Botha argues that users are not epistemically blameworthy for adopting false beliefs via Google searches (de Villiers-Botha Reference de Villiers-Botha2022, 336–37). Since Google lacks transparency about the reliability markers it uses to recommend results, and users have no insight into the parameters behind it (de Villiers-Botha Reference de Villiers-Botha2022, 333, 335–36), they are mostly blameless for acquiring false beliefs through the search engine (de Villiers-Botha Reference de Villiers-Botha2022, 336–37). Similar reasoning can be ascribed to Regina Rini, who asserts that social media users cannot be criticised for believing in falsehoods if their belief results from assigning greater credibility to a testifier who is a member of the same partisan network as they are (Rini Reference Rini2017, 54). The reason for this is, as she argues, that relying on the testimonies of like-minded people is expected and individually reasonable.
For these reasons, proponents of the structuralist approaches believe that the primary focus in ensuring safer and epistemically optimal navigation through online spaces should be on systematic, (infra)structural, or institutional changes. These changes could involve implementing various strategies, including deliberate regulation (Levy Reference Levy2024) and legislative frameworks (O’Connor and Weatherall Reference O’Connor and Weatherall2019, 182–83), labelling (Vosoughi et al. Reference Vosoughi, Roy and Aral2018, Ramezani et al. Reference Ramezani, Rafiei, Omranpour and Rabiee2019) and removing false news (Bronstein and Vinogradov Reference Bronstein and Vinogradov2021; Lotto et al. Reference Lotto, Hanjahanja-Phiri, Padalko, Oetomo, Butt, Boger, Millar, Cruvinel and Morita2023), greater transparency from companies regarding the functioning of their algorithms that underlie personalization and data usage (de Villiers-Botha Reference de Villiers-Botha2022; Lotto et al. Reference Lotto, Hanjahanja-Phiri, Padalko, Oetomo, Butt, Boger, Millar, Cruvinel and Morita2023), modifications to the platform itself such as nudging (Thornhill et al. Reference Thornhill, Meeus, Peperkamp and Berendt2019), diversifying news feeds (Huber Reference Huber2020) and website content (Sunstein Reference Sunstein2007, 208–9, 193–95 in Anderson Reference Anderson2011, 158) and calculating and displaying publicly a Reputation Score for individual users (Rini Reference Rini2017). In the following, we will attempt to show that while some of these proposals suffer from serious shortcomings, others can achieve their purpose only in combination with carefully designed and compatible educational strategies.
1.2. Evaluating key strategies of the structural approach
As we indicated in the previous section, proponents of the structural approach view the internet as an environment that presents significant challenges and offers limited technological possibilities for users in terms of navigation and mastery.Footnote 2 For example, Millar says that ‘the human brain was designed by natural selection to operate best in a specific range of circumstances, and our hardwired cognitive tendencies make it especially difficult to avoid false beliefs in the current information environment’ (Millar Reference Millar2019, 528). Similarly, when it comes to social media activities, there are authors who believe that they possess characteristics that diminish users' inclination to engage in critical thinking or fact-checking procedures (Rini Reference Rini2017, 43). Moreover, given the limitations on users' resources (epistemic, motivational, practical, etc.) and the inherently unfavourable conditions fostered by social networks, it is unreasonable to expect users to track and verify every single claim and its corresponding sources (Rini Reference Rini2017, 54; de Villiers-Botha Reference de Villiers-Botha2022, 337; Millar Reference Millar2019, 531).
It is undeniable that structuralists are correct in asserting that expecting internet users to combat misinformation alone by tracking down and investigating the trustworthiness and reliability of all sources is highly unrealistic, given constraints of time, energy, motivation, attention, competence, skill, and education. However, we are sceptical that fact-checking techniques and myth-busting interventions would be any more effective if entrusted to third-party initiatives. In the following discussion, we will outline several reasons why fact-checking, regardless of its form, is inadequate for ensuring the reliable use of web-based technologies in information-seeking practice.
(a) Problems with fact-checking
There are different methodologies for the fact-checking process. One, illustrated by Steensen and colleagues (Reference Steensen, Kalsnes and Westlund2023), addresses the need for speed by verifying the accuracy of claims made during live political debates or discussions. In this process, most of the material fact-checkers use is prepared in advance, often sourced from official channels. With pre-approved sources in place, any claim deviating from them is likely to be judged as false. It's easy to see how this methodology, if widely adopted, could be used to ‘silence marginal voices’ (Record and Miller Reference Record and Miller2022a). Because fact-checkers in this approach rely on claims already widely accepted as true and on the existing ‘hegemonic view of what constitutes important and reliable information’ (Steensen et al. Reference Steensen, Kalsnes and Westlund2023, 15), they may be pushed toward a confirmatory epistemology (as noted by Steensen et al. Reference Steensen, Kalsnes and Westlund2023). This is undesirable because it can promote a single, uniform perspective – a kind of ‘hivemind’ epistemic conformity – that runs counter to the fundamental human desire for curiosity and exploration. We will explore the drawbacks of this further in the paper.
A different type of fact-checking procedure is illustrated by Lucas Graves (Reference Graves2017), representing traditional fact-checking where most of the effort takes place after a claim has been identified as newsworthy. The process begins with selecting a claim based on its newsworthiness or significance, followed by contacting the source (if possible). The claim is then traced using a search engine or news database, experts are consulted, and finally, the work is reviewed by editors who vote on it (Graves Reference Graves2017, 524–28). However, given the abundance of misinformation on social media and the ubiquity and fast pace of transmission of false content (Vosoughi et al. Reference Vosoughi, Roy and Aral2018), it is not realistic to expect that all, or even most, of the potentially dangerous and false claims can be given such a thorough examination and due process as described by Graves (Reference Graves2017). This kind of fact-checking may have to compromise between epistemic rigour (being thorough and critical) and being ‘efficient’ in terms of the number of fact-checked claims, while still dealing with the unclear methodology and potential latent bias.
Chloe Lim (Reference Lim2018, 6) found that fact-checkers do not agree on selection criteria and that they tend to disagree with each other on the truth value of a claim if the claim is ambiguous. And when they do agree that a claim is false, it is not clear what exactly they find objectionable: the claim itself, the context, or a certain detail (Uscinski Reference Uscinski2015, 3, 4). Researchers such as Uscinski and Butler (Reference Uscinski and Butler2013) and Uscinski (Reference Uscinski2015) further highlight this issue, pointing out the lack of selection criteria for determining which claims should undergo fact-checking (Uscinski and Butler Reference Uscinski and Butler2013, 164ff.), leading to biases prevailing in the fact-checking process. Even though fact-checkers are entrusted with a very serious task, we still need to be aware that they are human beings, who are not always error-free and may be biased (see also Markowitz et al. Reference Markowitz, Levine, Serota and Moore2023, 13).Footnote 3 Ideology can significantly influence how fact-checkers perceive the world and what they consider to be significant, credible, false, or true, regardless of their professionalism or background (Uscinski Reference Uscinski2015, 5). Uscinski and Butler also highlight fact-checkers' tendencies to fact-check statements that are inherently difficult to verify, such as those about the future or causality (Uscinski and Butler Reference Uscinski and Butler2013, 170), and to analyse statements in a manner that strips them of context (Uscinski Reference Uscinski2015, 2). Therefore, effectively monitoring the many claims made on social media and deciding what to delete (as proposed by Bronstein and Vinogradov [Reference Bronstein and Vinogradov2021]) constitutes challenging work that requires time. By the time the relevant claim is adequately checked, the errors and falsehoods may have already become irrelevant or too entrenched to correct. Because of these and similar reasons, which also make fact-checking technologically unfeasible for individual users (more on this in Mattioni Reference Mattioni2024), some propose modifying the epistemic environment itself – such as assigning the demanding task of fact-checking, or at least part of it, to algorithms (for an overview, see Graves Reference Graves2018).
However, there are similar problems with automated fact-checking as well. As Vosoughi and colleagues (Reference Vosoughi, Roy and Aral2018) argued, false claims ought to be labelled as false to discourage users from sharing them; in the case of automated fact-checking, this means that AI systems have to decide, based on evidence, that a claim is false, and false in a significant sense. But to succeed in that, reliable fact-checking requires understanding things like meaning and context (Graves Reference Graves2018; Uscinski and Butler Reference Uscinski and Butler2013, 167–68; for the importance of context see Record and Miller Reference Record and Miller2022b). Understanding context is extremely important in cases of ‘malinformation’, which involves presenting a true statement in a misleading way. For example, in an attempt to criticize a politician, stating that ‘he increased the deficit in 2022’ may be technically true, but it lacks crucial context. The deficit primarily increased to support the population against soaring energy prices – a policy any reasonable leader might have pursued to some extent. It is difficult to determine definitively whether a statement – especially a political one – is entirely true or false, or strictly black or white, as it may fall within shades of grey (Graves Reference Graves2018). Furthermore, some claims do not lend themselves to precise and distinct criteria for evaluating their accuracy or truthfulness. For example, metaphorical descriptions, due to their suggestive and open-ended nature, are challenging to verify using specific, clearly defined conditions (Williams Reference Williams2001, 92). AI may mistake sarcastic statements or irony for fake news or misinformation since it is not developed enough at this point to reliably recognize sarcasm and satire (Santos Reference Santos2023, 683; Zhang et al. Reference Zhang, Zou, Lian, Tiwari and Qin2024). We can express sarcastic attitudes with a mere shift in inflection, and ‘sarcasm detection is often considered a holistic and non-rational cognitive process that does not conform to step-by-step logical reasoning’ (Zhang et al. Reference Zhang, Zou, Lian, Tiwari and Qin2024, 10; see also Wikipedia Contributors 2025a). Although progress has been made in developing algorithms that can detect sarcasm in speech, much more work is needed to develop effective multilingual sarcasm detection. It is important to note that AI systems are typically limited to English and a few other languages, whereas misinformation is not confined to any particular language – potentially compromising their effectiveness in fact-checking (Bontridder and Poullet Reference Bontridder and Poullet2021, e32-11; Quelle and Bovet Reference Quelle and Bovet2024, 12). We also need to acknowledge the use of so-called ‘algospeak’, which refers to using language in such a way as to avoid triggering social media algorithms (Klug et al. Reference Klug, Steen and Yurechko2023; Steen et al. Reference Steen, Yurechko and Klug2023). For example, certain words that could trigger an AI's reaction are intentionally misspelled, or a rhyming word – or even a picture of a rhyming word – is substituted for the intended word. Slang can evolve too quickly for AI to keep up with.
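As a rough illustration of the algospeak and context problems (a toy sketch with an invented flagged phrase and a hypothetical naive_filter function, not a description of any platform's actual moderation pipeline), consider how an exact-match filter both misses a trivially respelled claim and flags a post that merely questions that claim:

```python
# A toy illustration, not any platform's real moderation system: exact-match
# filtering misses 'algospeak' respellings and ignores the context in which a
# flagged phrase appears.

FLAGGED_PHRASES = {"vaccine hoax"}  # hypothetical phrase the filter is told to catch

def naive_filter(post: str) -> bool:
    """Return True if the post contains a flagged phrase verbatim."""
    text = post.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

posts = [
    "The vaccine hoax is finally exposed!",        # caught: exact match
    "The vaxx1ne h0ax is finally exposed!",        # missed: algospeak respelling
    "Is the 'vaccine hoax' story itself a hoax?",  # caught, although it questions the claim
]

for post in posts:
    print(naive_filter(post), "-", post)
```

Real systems are of course far more sophisticated than string matching, but the underlying asymmetry remains the one discussed above: evasion is cheap for determined sharers, while context, irony, and intent remain hard to model.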
Wondering whether something is a fact can be out of place because we do not always communicate facts or care about being factual. Whether some piece of information is factual or not can be beside the point in some cases, like when pointing out what someone else has said (Marin Reference Marin2021, 365) or when exchanging jokes. Consider, for example, the joke: ‘You ever wonder how trains eat? They choo, of course’. Now, imagine we shared this as a caption for a photo of us eating a sandwich on a train. Then, imagine AI fact-checking our post by stating that trains do not eat because they are machines, not living beings, and are powered by steam, electricity, or diesel. We would consider such an action to be way off the mark. Although this is a simplified example of a joke that an AI-driven humour detector would easily recognize, failing to detect more sophisticated jokes could lead to over-labelling them as misinformation. Given that a significant portion of structural approaches heavily rely on the simple false/true distinction, concerns about the effectiveness of fact-checking on social media in mitigating the spread of misleading content (Wasike Reference Wasike2023) are not surprising.
Additionally, AIs are always trained on certain datasets, which may contain biased (Papageorgiou et al. Reference Papageorgiou, Chronis, Varlamis and Himeur2024, 23), unreliable, misleading, or outdated information – a phenomenon known in the literature as data poisoning (Alber et al. Reference Alber, Alber, Yang, Alyakin, Yang, Rai, Valliani, Zhang, Rosenbaum, Amend-Thomas, Kurland and Kremer2025). Even a small percentage of misinformation can affect an algorithm's reliability, and it's important to remember that even the most sophisticated databases may contain misinformation or outdated information (Alber et al. Reference Alber, Alber, Yang, Alyakin, Yang, Rai, Valliani, Zhang, Rosenbaum, Amend-Thomas, Kurland and Kremer2025). The situation seems even darker if an AI is trained on unfiltered information. Faulty information can also appear in the training data as a result of a cyberattack (Eddy Reference Eddy2024). But even with training based entirely on accurate data, generative AI models can still produce hallucinations, generating potentially inaccurate content by unpredictably combining existing patterns (Weise and Metz Reference Weise and Metz2023). The question is how willing we are to entrust such a critical task – assessing the reliability of the content we rely on – to systems that are prone to hallucination.
Also, AI can exhibit political bias, as demonstrated by Rozado (Reference Rozado2023), due to the inherent biases of the humans who train it, which can inadvertently influence the algorithm. Moreover, as the world changes, information about the world changes too, but most training data are static and not updated constantly, which makes AI unreliable because it cannot keep up with the ever-changing information landscape (Papageorgiou et al. Reference Papageorgiou, Chronis, Varlamis and Himeur2024, 23–24). Additionally, if we were to permit AI trained on predetermined authoritative data sets, we would be endangering dissenting voices, or even the very plurality of voices and opinions (Marsden and Meyer Reference Marsden and Meyer2019, 45 in Bontridder and Poullet Reference Bontridder and Poullet2021, e32-11), which, as history shows us, have been very important in the pursuit of truth.
Finally, it should be pointed out that AI can, in effect, play a negative role in shaping our information environment, as it can be easily and cheaply used to create misinformation by manipulating various forms of digital media content, such as audio and video (e.g. AI-generated fake content or ‘deepfakes’) (Bontridder and Poullet Reference Bontridder and Poullet2021, Section 2.1). This is quite concerning, as large generative models, utilizing pre-trained networks with diffusion or transformer frameworks and vast datasets, can perform multi-modal tasks and generate high-quality, convincing content (Xu et al. Reference Xu, Fan and Kankanhalli2023, 9292). When it comes to the transparency of content regulation, we should keep in mind that the inner workings of AI systems are, to a great extent, epistemically opaque to us. If AI is to be used as a tool to differentiate truth from fiction, it must be understood by its users (Hsieh et al. Reference Hsieh2024, 11). AI is sometimes described as a ‘black box’ due to its obscurity and high complexity. Models are extremely intricate, with a tremendous number of parameters (Hsieh et al. Reference Hsieh2024, 12), which may compromise users' trust in them (Mishima and Yamana Reference Mishima and Yamana2022, 1249). Although efforts to improve AI explainability are underway, this field of research is still in its infancy, raising doubts about how well we can understand these processes and how capable AI is of explaining itself to us. Some authors argue that the inner workings of AI algorithms, which operate independently, may be beyond our comprehension (Matthias Reference Matthias2004; Humphreys Reference Humphreys2009; Mittelstadt et al. Reference Mittelstadt, Allo, Taddeo, Wachter and Floridi2016; Koskinen Reference Koskinen2024). Thus, even if many of the AI performance limitations we have highlighted were to be overcome, we would still be left with the issue of interpretability, as users' trust in a model largely depends on how effectively they understand the reasoning process and its decisions (cf. Mishima and Yamana Reference Mishima and Yamana2022).
(b) Transparency and diversification
As indicated previously, the term ‘structural approach’ is an umbrella term that encompasses various ideas whose core focus is the information environment. Structurally changing the environment is not necessarily equivalent to censorship, nor conducive to it. That does not mean that there are no researchers calling for censorship of social media. For example, Bronstein and Vinogradov (Reference Bronstein and Vinogradov2021, 1) call for the ‘deletion of false and dangerous information’ and hail censorship as a valuable tool in fighting online health misinformation.
But if the overarching institution censoring dissenting voices is the same one propagating misinformation, who will keep that institution in check (cf. Niemiec Reference Niemiec2020)? What constitutes false and dangerous content? And who makes the call on this? And, given the extensive presence of epistemically toxic content online, how are we supposed to manage its overwhelming volume? It's essential to recognize that entrusting truth determination to government or private companies grants immense power to already powerful entities, which may not prioritise truth when it is inconvenient (cf. Record and Miller Reference Record and Miller2022a). In such cases, these entities may vaguely define what they deem dangerous or misleading content (Niemiec Reference Niemiec2020) and misuse it to suppress criticism. It can also happen that media platforms treat facts as secondary, spreading misinformation without regard for the truth of what they communicate (i.e. bullshitting) due to a special collaborative relationship between the state and the media, where parties work together in mutually beneficial partnerships (Gibbons Reference Gibbons2024). Additionally, this attitude may be driven by state pressure or financial incentives from corporate sponsors, financial partners, and other private entities. As Adam Gibbons (Reference Gibbons2024, Section 2.2) notes in his interesting incentives-based analysis of bullshit in politics, ‘as long as there are sufficiently many who will reward the media even when they bullshit (whether by tuning in, paying for subscriptions, or what have you), bullshitting will remain rational’. By the same token, the economic structures that support key infrastructure elements like search engines, databases, and social networks can silence certain voices or even overshadow them entirely (Mößner and Kitcher Reference Mößner and Kitcher2017, 30). As Mößner and Kitcher emphasize, while it is true that the internet, as an epistemic environment, allows all perspectives to be expressed, it cannot be said that all perspectives are equally heard. But dissenting voices are crucial for advancing our epistemic practice (de Melo-Martín and Intemann Reference de Melo-Martín and Intemann2012). The world is complex and uncertain, and just as previously accepted justifications can be undermined by new evidence (Williams Reference Williams2001, 161), what is believed to be wrong today may prove true tomorrow.
Taking these considerations into account, the question arises: what stance can we adopt regarding structuralists’ proposals, such as diversifying users’ news feeds (Huber Reference Huber2020) and employing nudging techniques on the systematic level like presenting original posts alongside alternative views (Thornhill et al. Reference Thornhill, Meeus, Peperkamp and Berendt2019)? Although interventions of this kind sound promising at first glance, the question is how effective such measures would truly be. As pointed out by Duncan Pritchard (Reference Pritchard2013, 237), access to information is futile if individuals lack the cognitive abilities to interpret it effectively. Without these abilities, it is far from obvious whether increased access to information sources, and consequently exposure to unreliable content that could otherwise be avoided, would render users better informed.
Granted, the question arises whether these interventions can be solely entrusted to media algorithms and Big Tech companies. Just as in the case of human fact-checking procedures, relying on the government, corporations, or technology itself to regulate information flow through diversification is not only questionable in terms of whether implemented mechanisms will contribute to greater knowledge production but also too risky as these entities may not want or be able to prioritise truth. However, what structuralists propose and what we believe can truly help users achieve greater technological possibility for epistemic vigilance (Mattioni Reference Mattioni2024) is greater transparency of companies regarding their operations and data usage (cf. Lotto et al. Reference Lotto, Hanjahanja-Phiri, Padalko, Oetomo, Butt, Boger, Millar, Cruvinel and Morita2023; de Villiers-Botha Reference de Villiers-Botha2022). Such changes are always welcome, not only because they expand users’ knowledge horizons and control, but also because we believe that, in the context of social media, these strategies can help restore users’ already damaged trust in certain platforms (Huber Reference Huber2020, 41; Spence Reference Spence2021, 3). On the other hand, one must be realistic about the extent to which demands for transparency can be met, if for no other reason than the following: (a) companies have financial interests in protecting their business models from competitors; (b) explanations of technical processes – such as algorithmic filtering procedures, targeting systems, and advertising practices – are often lengthy and filled with technical jargon, making them difficult for laypeople to understand; and (c) ultimately, the complexity and rapid evolution of modern machine learning models make it challenging even for their designers and companies to develop standardized evaluation measures that can reliably assess their performance and decision-making processes.
So even structuralist proposals advocating for technology-level source diversification and transparency should be approached with caution and limited expectations, as none of them is comprehensive. One of the reasons why this is the case lies in the fact that most proponents of the structural approach focus solely on the inherent characteristics and qualities of digital infrastructures, neglecting the influence of user interactions on the online epistemic climate. In other words, most structuralists have an overly simplistic view of the human user, social media, the language games played there, and the internet in general. They view it as a single-level field churned up with both misinformation and genuine information, through which a subject with several orientation-related imperfections attempts to navigate, often stumbling and getting hurt along the way. As the subject wanders through the treacherous landscape, obstacles need to be either removed or, in some cases, established so that it travels along a ‘correct’ path. The only viable way to help it is to shape the environment around it. Thus, one accusation that can be levelled against structuralist interventions is the denial of the (epistemic) agency of the user (cf. Lazer et al. Reference Lazer, Baum, Benkler, Berinsky, Greenhill, Menczer, Metzger, Nyhan, Pennycook, Rothschild, Schudson, Sloman, Sunstein, Thorson, Watts and Zittrain2018, 1096), which may be attributed to the current methodology of researching the user's online activities, usually accomplished by accumulating and analysing vast amounts of data. The consequence of this is the datafication of audiences (Livingstone Reference Livingstone2019). When we reduce users of media to mere data (and there is a lot of data about their online activities), they look like undifferentiated abstract objects and not individuals. Consequently, we lose a great deal of information about their motivations, interpretations, and concerns (Couldry and Kallinikos Reference Couldry, Kallinikos, Burgess, Marwick and Poell2017 in Livingstone Reference Livingstone2019).
But, when we remove ourselves from this intellectual oversimplification, we can see that the internet resembles a maze of vast size and complexity with different segments (or rooms) in which different games are played and in which different rules are followed, and at least one source of these rules is the human user themselves (see also Marin Reference Marin2020, Reference Marin2021). It is the active and creative role of the user that some structuralists fail to see. Even structuralists, like Rini, who acknowledge the active role of social media users in shaping the digital epistemic landscape, dismiss the need for individual-level alterations in social media sharing practice by treating technology-level reforms as a sufficient solution. Namely, as an effective strategy for reducing the propagation of epistemically toxic content online, Rini (Reference Rini2017) proposes that Facebook calculate a Reputation Score for individual users based on the frequency with which each user chooses to share disputed stories. Setting aside the myriad problems associated with verifying the accuracy of shared content that we discussed earlier, this proposal only addresses symptoms rather than the underlying cause of the issue. It still leaves room for users to manipulate the content; for instance, profiles with previously attained high Reputation Scores could be used to engage in selective and misleading presentations of well-supported facts and the spread of malinformation.
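Rini does not specify how such a score would be computed; the following hypothetical sketch (invented formula and data, for illustration only) shows why even a straightforward implementation would not address the problem just described: malinformation is technically accurate, so it is unlikely to be marked as disputed and leaves a ‘perfect’ score intact.

```python
# A hypothetical rendering of a Reputation Score (Rini gives no formula):
# the share of a user's shares that fact-checkers have not marked as disputed.

def reputation_score(shares):
    """shares: list of dicts with a 'disputed' flag assigned by fact-checkers."""
    if not shares:
        return 1.0
    undisputed = sum(1 for share in shares if not share["disputed"])
    return undisputed / len(shares)

history = [
    # true but context-free claim (malinformation) - never flagged as disputed
    {"claim": "He increased the deficit in 2022", "disputed": False},
    {"claim": "Link to a peer-reviewed study", "disputed": False},
]

print(reputation_score(history))  # 1.0 - a 'perfect' score despite the misleading first share
```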
So, in tailoring any kind of reforms or interventions, it’s crucial to bear in mind that users are often the ones who significantly contribute to the propagation of epistemically toxic content through reposting or sharing (cf. Marin Reference Marin2021, 363). That users play a crucial role as both witnesses and sources of factual reports is an aspect acknowledged by some proponents of the educational approach (Black and Fullerton Reference Black and Fullerton2020). Yet, even for them, the main focus remains on users perceived as receivers of testimony, raising pressing questions about what kinds of information sources to consult, what evidence-gathering procedures to consider, and how to devise practical and easy-to-use guidelines for discerning reliable/legitimate information/voices from unreliable/illegitimate ones. In the next section, we aim to show that not only are these strategies insufficient for identifying information firmly grounded in the best available evidence but also for understanding and facilitating users’ meaningful and epistemically useful interaction with online content.
1.3. Educational approach
Given the enormous influence that technology has on the ways in which knowledge is acquired in today's everyday practices, numerous research studies have emphasised the importance of developing media literacy/digital skills and introducing ‘critical thinking’ courses into school programs. This body of work reflects a fluid terminology around the question of what constitutes literacy and critical thinking in the digital space. However, it seems that by these terms most authors mean the practice of something that can be described as epistemic vigilance. Epistemic vigilance refers to a set of mechanisms that can be readily deployed to continuously calibrate trust (cf. Mercier and Sperber Reference Mercier and Sperber2017).
Despite considerable debate regarding the precise nature of trust, there is widespread recognition that in deciding whom to trust and identifying trustworthy information, both epistemological and ethical considerations are equally important (Intemann Reference Intemann2023, Section 1). That is to say, the abilities required to evaluate the credibility of various information sources involve two key aspects. First, the ability to evaluate the strength of the given evidence and the validity of relevant claims. Second, it involves assessing whether the information provider is benevolent and honest and has moral integrity (Anderson Reference Anderson2011, 145; de Melo-Martín and Intemann Reference de Melo-Martín and Intemann2018, 90), and whether they maintain an appropriate attitude toward the potential epistemic consequences of their work (Intemann Reference Intemann2023, Section 1). However, achieving the former requires a significant level of competence in terms of skill and a genuine understanding of the relevant issue from the trustor themselves, something they often lack in many domains. For this reason, philosophers have proposed criteria that focus on the credibility of the testifier (i.e. potential trustee) rather than the trustworthiness of their assertions (Levy Reference Levy2024). Acknowledging the challenges that lay people face when it comes to evaluating evidence and the merits of scientific research, Elizabeth Anderson (Reference Anderson2011, 145) argues that laypersons have ‘second-order capacity’ to judge trustworthiness of information sources in making well-informed decisions, emphasising that individuals decide what to believe by evaluating whom to believe. This decision depends on three types of assessment. The first is an assessment of the expertise or competence of the agents who create, endorse, and distribute the relevant claim. The second is an assessment of their honesty, and the third is their responsiveness to counterarguments to their belief. The guidelines provided by Anderson can certainly be useful, but in cases of persistent substantive disagreements between apparently equally honest parties, they may be of limited significance. As Neil Levy (Reference Levy2024) pointed out, traditional criteria for evaluating expertise – such as credentials and institutional affiliations – can be useful, but they often fall short in persistent disputes. In many cases, both sides of a debate can present impressive credentials, leaving laypeople struggling to identify the more trustworthy source. Levy cites figures like Frederick Seitz, a former President of the National Academy of Sciences, who lent his support to the ‘merchants of doubt’ on issues such as tobacco and climate change. Similarly, Robert Malone, a figure embraced by critics of COVID-19 vaccination, promotes himself as the inventor of mRNA technology, giving his position credibility despite the controversies surrounding it (Levy Reference Levy2024). In addition to experts who may intentionally promote false narratives, some unintentionally overstep the boundaries of their competence when offering advice – boundaries that laypeople often cannot clearly discern. Furthermore, online content found on blogs, social media, and forums, frequently relied upon by internet users, may come from anonymous sources, making it impossible to verify their credibility (Heersmink Reference Heersmink2018). 
In reality, when two experts disagree, only in rare high-stakes situations might we expect the average internet user to thoroughly examine their track records, assess the merits of their academic achievements, or search for signs of dishonesty in their research and advising history. However, even then, if they are presented with a convincingly clear or familiar explanation that dismisses the opposing side as corrupt, most people will not realize they’ve been misled.
When it comes to the second criterion, honesty, it is important to remember that testimony recipients tend to view testifiers who align with their own views – whether those are pre-existing beliefs, political affiliations, ideologies, or values – as honest, well-intentioned, and reliable (cf. Rini Reference Rini2017; Braman et al. Reference Braman, Grimmelmann and Kahan2005). In that regard, it could be said that individuals not only decide what to believe by evaluating whom to trust, as Anderson emphasizes, but also that ‘who you believe depends on what you believe’ (Herzog Reference Herzog2006, 105 in Hardoš Reference Hardoš2018, 276). For example, an individual's perspective on existential risks associated with AI – such as concerns about its potential to pose an existential threat to humanity – may reduce their confidence in the competence of AI researchers to discuss the associated risks and benefits. Specifically, individuals with a strong fear of AI may distrust experts who present testimony that challenges their readiness to view advancements in AI as safe. A person's previous beliefs play a significant role in determining how they attribute epistemic authority to others (Hardoš Reference Hardoš2018, 276), while the way we perceive those we trust is often emotionally charged (Furman Reference Furman2020). Given that individuals naturally consider their own judgement as truthful, they tend to see themselves as following the standard of relying on reliable sources when they, in fact, only interact with information that mirrors their existing beliefs (Worsnip Reference Worsnip, Fox and Saunders2019, Section 2). With these tendencies in mind, it becomes clearer why the dishonesty behind public scientific testimony is difficult for laypersons to uncover. Here, we cannot expect much help from the third line of assessment introduced by Anderson, which appeals to the testifier's responsiveness to counterevidence and counterarguments. Of course, in situations where a testifier fails to respond rationally to counterarguments and merely repeats their claim, laypeople can recognize that the testifier does not adhere to the standards of dialogic rationality and dismiss them as untrustworthy. But we can expect the parties in a debate to arm themselves with much more sophisticated arguments and persuasive tactics, some of which offer a subtly manipulative yet unified and easy-to-accept explanation for the disagreement. If these debates are filled with technical terms and epistemically opaque jargon, testimony recipients are left with few options in forming their judgment. This judgment will either be shaped by their prior beliefs, by what they can somewhat recognize as aligning with their value system, or by whatever evokes in them what C. Thi Nguyen (Reference Nguyen2021) describes as a ‘sense of clarity’. It is precisely the allure of clarity, as Nguyen argues, that can influence our judgments of expertise, leading us to accept an explanation and view its proponents and creators as more credible.
So, it is questionable whether an effective assessment of the reliability of information and its corresponding sources can be based solely on credentials and assessment of trustworthiness of information sources. Keeping this in mind, opponents of the educational approach might argue that second-order assessments of the trustworthiness of online sources are precisely the ones that pose problems in our quest for knowledge as they are either not readily available for implementation or are ineffective in cases in which previous beliefs and predispositions significantly influence one’s decision on whom to trust. Moreover, second-order assessment of honesty can reinforce practices that prioritise sources with a partisan orientation, perpetuating epistemic bubbles.Footnote 4 As mentioned earlier, individuals within these bubbles lack exposure to diverse information and arguments, hindering their ability to verify accuracy or validity (Nguyen Reference Nguyen2018), which is precisely what second-order assessments are intended to ensure.
Furthermore, there's a concern that these demarcation guidelines disregard the contribution of lay expertise to a proper understanding of different phenomena (more on such contribution in: Wynne 1998) and, related to that, the successful involvement of public opinion in science (more on this: Bedessem and Ruphy Reference Bedessem and Ruphy2020). Like the fact-checking procedure, as Noortje Marres (Reference Marres2018, 428–29) effectively summarises, demarcation strategies risk reinforcing stereotypical divisions between the discerning and the non-discerning, exacerbating the dichotomy between knowledge-capable individuals and others. Nevertheless, in a democratic society, citizens have a legitimate right to critique scientific findings and recommendations presented to them if they perceive them as lacking certain values or incorporating detrimental ones (de Melo-Martín and Intemann Reference de Melo-Martín and Intemann2018, 126–27), and as emphasised by Pavličić and colleagues (2023), there are certain epistemic benefits in terms of scientific discoveries and the provision of reliable testimony when expertise boundaries are transcended.
Does that mean that we should abandon the pursuit of a satisfactory resolution to the epistemological challenges confronting reliable usage of web-based technology? Some might infer from our argument against the fact-checking procedure that we should discard the distinction between claims supported by solid evidence and those that lack such support, and from our argument against credibility indicators that we should disregard expertise and not designate any voice as inherently illegitimate. But that is not the case. Science stands as our most reliable source of empirical knowledge, and the concept of equal epistemic status – suggesting all individuals and internet sources are equally reliable in terms of knowledge and information – is a myth (Mößner and Kitcher Reference Mößner and Kitcher2017, 4). Most of the points outlined above have been discussed in the context of our everyday fallible, complex, and ever-evolving epistemic practices, where there is a need to leave room for reforms that recognize the importance of marginalised voices. Like any other practice – whether everyday, scientific, legal, etc. – online interaction practices have specific characteristics and limitations that can lead to epistemic harms. Following some proposals, we aim to emphasise the importance of understanding these limitations (Lichtenberg Reference Lichtenberg2021, 17; see also: van Dijck Reference van Dijck2010). Gaining insight into these mechanisms and understanding the nature of the underlying processes behind our online activities – as well as the various political, social, and psychological factors involved in the production, distribution, and acquisition of online knowledge – is crucial for improving our epistemic performance and engaging more effectively in the exchange of ideas. How this can be achieved will be explored, to some extent, in the next section of our paper.
1.4. Toward a more comprehensive educational approach
As previously highlighted, people today generally have access to a much wider range of information than in pre-digital times. However, the question remains: does this greater access make them better informed? One may still find it challenging to integrate conflicting viewpoints into a cohesive perspective and develop an adequate understanding. So, although exposure to a variety of information is necessary, it alone is simply not enough. Students should understand why diversifying – considering alternative viewpoints and widening information sources – is important, what the limitations are, and accordingly, what can be achieved over time. Without this understanding, it is difficult to see how diversification alone could ‘ensure that social media users get better at distinguishing good from bad news’ (Croce and Piazza Reference Croce and Piazza2021, 10). One epistemically unfavourable consequence of exposure to more varied information sources and even-handed reporting may be users' mistaken perception that unequally justified claims are equally justified (cf. Anderson Reference Anderson2011). And, when people step out of their epistemic bubbles, they may still encounter false claims, perhaps even a greater number.
On the other hand, we believe that mindful engagement with a variety of information, accompanied by training to discuss controversial political issues (Kahne and Bowyer Reference Kahne and Bowyer2017), can be a valuable tool for students to make well-informed decisions based on the best available evidence. But what do we mean by mindful engagement with information? One cannot assess their own responsibilities and competencies until they reach a certain level of willingness and ability to critically evaluate their own beliefs (cf. Pongiglione Reference Pongiglione2021, 72).Footnote 5 Building on Elizabeth Anderson's (2011) approach, this broader perspective – which includes an understanding of one's own psychological tendencies and cognitive limitations (cf. Russo and Schoemaker Reference Russo and Schoemaker1992, 8) as well as their exploitation by algorithms and business models (Niemiec Reference Niemiec2020; Maréchal and Biddle Reference Maréchal and Biddle2020; de Villiers-Botha Reference de Villiers-Botha2022) – can be formulated as a ‘third-order assessment’. Understanding the underlying mechanisms of information-gathering procedures, both internal and external, psychological and algorithmic, is a necessary step toward a deeper understanding of our own epistemic position from a proverbial bird's-eye view. Many proposed strategies to mitigate the proliferation of misleading content overlook these mechanisms, as they often focus solely on evaluating the reliability of online information sources – essentially, on what happens on the screen.
1.4.1. On the screen
Some researchers argue that educational interventions about fake news detection ought to take place as soon as children start to use media (Eisemann and Pimmer Reference Eisemann and Pimmer2020). Certainly, teaching users to recognize credibility markers is one of the initial steps in addressing the problem highlighted by the study conducted by the Stanford History Education Group. This study found that young people struggle to assess the reliability of information presented to them online (Wineburg et al. Reference Wineburg, McGrew, Breakstone and Ortega2016, 4; see also Phippen et al. Reference Phippen, Bond and Buck2021, 40; and Polizzi Reference Polizzi2020, Section 1). However, as previously discussed, we should not rely too heavily on such strategies alone for ensuring the reliable use of web-based technology. Our ability to do so also depends on understanding how capable we, as users in front of the screen, are of assessing the reliability of others, as well as recognizing the extent to which social, political, financial, and other factors operating behind the screen shape our judgments of reliability and, more broadly, our knowledge acquisition and dissemination in web-based practices.
1.4.2. Behind the screen
The piece of information delivered to a social media user results from various unseen and opaque processes, which are a complex amalgamation of algorithmic, financial, social, and psychological factors (Niemiec Reference Niemiec2020). With that in mind, it is important to understand, to a reasonable extent, how these external factors and content-shaping algorithms mediate our knowledge-seeking activities and influence our judgments about whom to trust and consider reliable, potentially compromising our ability to make well-informed decisions. The digital ecosystem is populated by malicious stakeholders, whether individuals, political parties, military leaders, policymakers, or governments, who manage troll farms (fake accounts and websites), information operations (gathering of strategic intelligence on an opponent combined with the spread of propaganda), social bots (programs that automatically like, share posts, and send messages), and similar entities to spread deceptive information and manipulate relevant narratives. However, while AI techniques for combating misinformation can partially identify false, inaccurate, or misleading content, they cannot determine the intent of the sharer (Bontridder and Poullet Reference Bontridder and Poullet2021, e32-10). Since detecting such intentions is crucial for avoiding reliance on misleading information designed to manipulate, students should develop the ability to accurately determine whether a speech act is intended to inform, mock, manipulate, or serve another purpose (Mattioni Reference Mattioni2024). They should also be familiar with sophisticated forms of deception, the fabrication process, and other possible malicious behaviours on social media.
Another reason highlighting the limitations of educational interventions focused solely on identifying accurate information and reliable sources stems from the recognition that not all information shared on social media serves the purpose of informing or misinforming, nor is it consistently taken seriously or understood as factual or testimonial (cf. Marin Reference Marin2021, 367). The primary goal of social media platforms is to maximize profit by exploiting our attention rather than to provide high-quality information. They achieve this through an ad-driven business model that keeps users engaged, captivated, and fixated, leveraging micro-targeting to deliver highly specific advertisements (Maréchal and Biddle Reference Maréchal and Biddle2020). We believe that educating new generations about the ‘backend’ of online platforms – and about the extent to which these platforms make it technologically possible for users to evaluate the reliability of their own sources, at least as far as our knowledge allows – is just as crucial as helping them understand the environment and society they live in (see also Mattioni Reference Mattioni2024, Section 3).
1.4.3. In front of the screen
Just as it is important to understand the extent to which technological affordances contribute to the formation of truth-conducive beliefs, it is equally important to recognize our own role in shaping an epistemically toxic environment. First, it must be acknowledged that we can never hope to become ultra-reliable fact-checkers ourselves, as the very challenges that undermine formal fact-checking efforts also apply to individual users. The vast array of information encountered on social media will often exceed our own areas of expertise – if we possess any relevant expertise at all – and the techniques required for verification may lie beyond our cognitive and technological capacities (see Mattioni Reference Mattioni2024, Section 3.2). This does not mean that any investigation of a claim is futile. But it means that we need to take a step back and keep ourselves in check. Our moods and emotions play a crucial role in shaping false beliefs and driving the spread of misinformation (Xu et al. Reference Xu, Fan and Kankanhalli2023; Horner et al. Reference Horner, Galletta, Crawford and Shirsat2021). At the same time, our perception of the world is influenced by our existing beliefs and societal factors. Confirmation bias, a well-documented and robust psychological phenomenon, significantly affects how we process information. Moreover, we are more likely to believe and share false information when we see others accepting or spreading it (Xu et al. Reference Xu, Fan and Kankanhalli2023).
The awareness of the above mechanisms and psychological features will also assist us in determining the appropriateness of our sharing activities. We can be very careful about what information we adopt as a belief and yet, on the other hand, extremely careless when sharing it. We can care deeply about truth and be perfectly well-meaning yet unreliable (Lichtenberg Reference Lichtenberg2021, 18) by, let’s say, believing that we are fully competent in some subject matter and thereby overstepping our true competences without grounds. People who make judgments about a field in which they have no expertise often do so unwittingly. Moreover, in our interconnected and interdisciplinary world, even experts can struggle to determine where the boundaries of their expertise end and where someone else’s begin (Gerken Reference Gerken2018; Pavličić et al. Reference Pavličić, Dimitrijević, Vučković, Đorđević, Nedeljković and Tešić2023).
And, as we suggested above, we are not always interested in the facts or the truth (Record and Miller Reference Record and Miller2022a). As some studies suggest, the veracity of headlines has minimal influence on sharing intentions (Pennycook et al. Reference Pennycook, Epstein, Mosleh, Arechar, Eckles and Rand2021). The reason for this probably lies in social media platforms encouraging individuals to prioritise other factors, such as whether sharing will draw attention and approval from followers and friends (see also Bronstein and Vinogradov Reference Bronstein and Vinogradov2021). One can share a piece of information that one is very sceptical of, or that one knows is not true, to draw attention or to point and say ‘Look how silly this is!’ (cf. Marin Reference Marin2021, 365). One may also want to provoke, agitate, and annoy (troll) people in one’s friend group (Record and Miller Reference Record and Miller2022b). And while non-linguistic signals like emoticons, reaction options, and typographical conventions on social media can shape the interpretation of posts, they are far less useful than facial expressions or tone of voice in assessing the trustworthiness or sincerity of testimonies (Mattioni Reference Mattioni2024, Section 3.1).
For example, let’s say we regularly post flat-earth content (even though we don’t believe in it) just to tease our close friend, who is an astrophysicist, because his reaction amuses us. This is simply banter or trolling, not an attempt to misinform anyone. However, let’s now imagine that our friend is equally entertained by pretending to find the arguments in those posts quite plausible, and we continue this playful exchange on our public profile. To people inclined to believe in this theory, our playful conversation could lend it credibility. The unforeseen consequences of our posting and sharing practices can turn particularly dark if something known as the collapse of context occurs (Record and Miller Reference Record and Miller2022b). Context collapse refers to situations where multiple (different) audiences are flattened into a single context (Brandtzaeg and Lüders Reference Brandtzaeg and Lüders2018). Due to its nature, social media presents the same content (for example, a post we shared) to different audiences, including people who do not know us, which can lead to significant misunderstandings (Marwick and Boyd Reference Marwick and Boyd2011, in Record and Miller Reference Record and Miller2022b). We can act very responsibly as receivers of information yet mislead as disseminators if we are unaware of the epistemic constraints and contextual nature of social media and do not consider the more local norms that exist on different social media platforms (like private groups).Footnote 6 Therefore, awareness of context and of multiple perspectives online (Marin Reference Marin2021; Record and Miller Reference Record and Miller2022b) is something that must be thoroughly researched and taught when it comes to responsible sharing and engaging with information. We believe that taking these considerations into account in order to standardise the practice of sharing will not only help mitigate epistemically toxic content but also help prevent the abandonment of public posting that is currently occurring, namely the migration of users to closed groups, leaving open networks behind (The Economist 2024).
That would be a pity, since technology is an integral part of our everyday epistemic practice, in which the internet – and social media by extension – can serve as a new learning environment that contributes immensely to the dissemination and consumption of valuable information. As Paul Thagard (Reference Thagard1997) argued some time ago, although seeking information on the World Wide Web is not always entirely reliable, in the hands of careful users, especially when guided by scientists, posting information can increase its reliability. We believe that Thagard’s remark still applies today. As has likely become evident by now, knowing where to look and whom to listen to is simply not enough. The literature offers various frameworks for evaluating online posts, some of which are presented as acronyms for mnemonic purposes.Footnote 7 However, assessing the credibility or truthfulness of online content is only part of the responsibility of an informed internet user. Equally important is the mindful and responsible sharing of information. Drawing inspiration from these existing frameworks, we propose our own model designed to encourage more conscientious sharing practices. Below, we outline a potential structure for such a framework, along with its corresponding acronym.
1.5. ACCEDE
A – Audience: One of the first questions that we should ask ourselves before posting some material is: Who is the intended audience for this post? Answers to this question can help us be mindful of the tone and language; inappropriate or offensive language can alienate the audience.
C – Collapse of the context: After determining who the intended audience is, we should ask ourselves: Is the post likely to spread virally? In other words, does the post relate to any current events, trends, or viral topics that could increase its visibility and shareability, thus opening the possibility of a collapse of context?
C – Contextualizing: Can we prevent the potential collapse of context by providing context to the post that we are about to share? As we have already discussed, context (to a certain extent) helps prevent individuals interacting with our posts from making assumptions or jumping to conclusions based on incomplete information.
E – Explaining away: If, despite all our estimations and precautions, our post still becomes subject to misinterpretation, misuse, or misrepresentation of its intended message, can we clarify its content and explain the misunderstanding away?
D – Detriments: Apart from the undesirable consequences for our own reputation that we aim to prevent when things go wrong online, we should try to anticipate, understand, and acknowledge the real-world consequences of our online activities.
E – Evaluation: As mentioned earlier, we are not always solely concerned with being factual. However, when we do aim for factual accuracy, it’s essential to be aware of our evidence-seeking duties and epistemic limitations. This necessitates an examination of the specific topic we want to comment on, accompanied by metaknowledge of our own competence and its limits.
This outlook may not be the most optimal one, perhaps far from it; however, developing a framework within which users can navigate the online space more securely and efficiently, comprehend it better, and find ways to express their ideas responsibly is of paramount importance. We require a free market for our ideas, and the withdrawal of individuals into closed groups threatens to close it off.
1.6. Concluding remarks
At the beginning of the paper, we introduced the distinction between agent-mediated and technology-mediated digital content to highlight that human users, through insufficiently informed and reflective, or malevolent, sharing of information, are responsible for creating and disseminating misleading content (cf. Marin Reference Marin2021, 363; Lichtenberg Reference Lichtenberg2021, 17). Our goal was to demonstrate that neither proponents of structural approaches nor advocates of educational strategies for building a more reliable web-based epistemic landscape fully recognize the proactive (both creative and destructive) role that users play across various web-based information and communication channels. In fact, they tend to view internet users as passive recipients of information who need external protection against misinformation and clear guidance to distinguish between reliable and unreliable online information for making well-informed judgments. However, responsible and reliable use of web-based technology in knowledge-seeking practices involves more than understanding and improving testimonial knowledge practices, where information is received from others. A significant portion of unreliable information on the web arises precisely from the average internet user’s creation, propagation, and dissemination of content, which appears to stem from largely unregulated social media sharing practices. However, we expressed doubt that these practices can be regulated institutionally from the top down without integrating knowledge from research on user tendencies online into educational programs. To the best of our knowledge, there is limited research on sharing behaviour and the consequences of lacking clear sharing norms (Rini Reference Rini2017; Arielli Reference Arielli2018; Record and Miller Reference Record and Miller2022b; Marsili Reference Marsili2021; Marin Reference Marin2021), and only a few studies have explored the relationship between the ability to discern the truthfulness of content and sharing intentions (e.g. Pennycook et al. Reference Pennycook, Epstein, Mosleh, Arechar, Eckles and Rand2021). What we need, then, is more research on these complex and interconnected questions as the foundation for a comprehensive educational approach to responsible and meaningful engagement with information and communication technology. It may seem like a gargantuan task, but only an educational approach aligned with thoughtful engagement with research evidence (Mouthaan and Révai Reference Mouthaan and Révai2023) can significantly contribute to reducing epistemically toxic content on various informational and communicative channels and to mastering their reliable use.