I. Introduction
In recent years, the issue of online misinformation has prompted many countries to consider content moderation as a means of curtailing harmful content on the internet. These debates mark a significant departure from the approach of minimizing interference with the content transmitted by internet intermediaries, which characterized regulation in the early 2000s. Content regulation is crucial to human rights, as it shapes how freedoms are exercised online and how democratic debate can be preserved in an increasingly digitally connected and networked society.
This piece examines the shift from models of least interference in content creation and dissemination to an active ‘duty of care’ that requires intermediaries to monitor and take action against harmful content on their platforms. Specifically, we explore three existing models of intermediary liability in large democracies in Europe, North America and South America, and describe their key characteristics. We also examine how these models are evolving under new provisions, such as the European Union’s Digital Services Act, the British Online Safety Bill, the German NetzDG, and the anticipated Brazilian Fake News Bill.
Intermediary liability refers to the legal responsibility of intermediaries such as internet service providers (ISPs), social media platforms, search engines, web hosting companies, and content delivery networks for the content transmitted or stored on their platforms.Footnote 1 If the content is found to be illegal or infringing on the rights of others, these intermediaries can be held liable.Footnote 2 In this piece, we focus on the responsibility of content providers like social media platforms and search engines.
Content moderation involves reviewing, monitoring and managing user-generated content on online platforms to identify and remove content that violates the platform’s policies or community guidelines. This process uses automated tools and human moderators to eliminate hate speech, bullying, spam and illegal content.Footnote 3
Regulating content removal is a critical issue that affects online freedoms and democratic debate. As the internet continues to evolve, it is essential to maintain a balance between protecting individuals’ rights and minimizing harmful content. The intermediary liability and content moderation models are evolving, and policymakers must continue to consider their effectiveness and impact on human rights.
The ‘fake news’ debate exposes a deeper issue: the reorganization of the existing business models of the communications ecosystem. In the past, media conglomerates produced and distributed information in society.Footnote 4 However, data-driven advertising business models now dominate the digital ecosystem, pushing communications companies to produce attention-grabbing and identity-confirming content. This results in highly segmented information spaces, with echo chambers that promote disparate patterns of information consumption, political views, and even understandings of reality.Footnote 5
To respond to these fast-growing and far-reaching informational challenges, countries have decided to review their models of intermediary responsibility to encompass mandatory rules for content moderation. While the traditional understanding of human rights espoused in Article 19 of the International Covenant on Civil and Political Rights (ICCPR)Footnote 6 was one of non-interference and of permitting speech, new mechanisms for limiting and removing content are now seen as necessary to sustain freedom of expression and balance it against other rights. The emerging understanding is that content providers should have obligations to moderate content more actively, and even be held accountable when they fail to contain harmful content disseminated over their services.Footnote 7
In the next section, we will explore three existing models in large democracies in North America, Latin America and Europe. These models address the issues of intermediary responsibility and content moderation and provide different approaches to maintaining freedom of expression while limiting the circulation of harmful information.
II. Models of Limited Intermediary Liability
We will look at three models of intermediary liability. The American model provides immunity both for third-party content and for content moderation decisions.Footnote 8 The European model provides immunity conditioned on a ‘notice and takedown’ approach.Footnote 9 Meanwhile, the Brazilian model offers immunity for third-party content, but content providers may be liable for wrongfully removing content.Footnote 10 These models, which we will explore in greater detail below, demonstrate that intermediary liability is centred on shielding content providers from liability for harm caused by third-party content. However, the methods of content removal differ across these models.
The American Model: Section 230 of the Communications Decency Act of 1996
In the US model, established in Section 230 of the Communications Decency Act of 1996 (CDA), ‘no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider’.Footnote 11 This provision assigns responsibility for published content to its authors rather than to the content providers. It allows companies to build their digital products and services with less risk and was considered essential for the economic development of the internet economy.Footnote 12 This reflects a vision that content providers are also part of a ‘dumb pipe’ system, which favours freedom of expression because the protection granted to intermediaries extends to the users of their services.Footnote 13
This immunity, however, is not unlimited. Federal criminal law, rules on certain illegal or harmful content, and copyright lawFootnote 14 are examples of norms that still assign duties to platforms. Moreover, the law also authorizes intermediaries to moderate content and protects the removal of content carried out in good faith. This is the ‘Good Samaritan’ principle set out in Section 230(c)(2): operators of interactive computer services are exempt from liability when, in good faith, they remove or moderate third-party material that they deem ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected’.
The Previous EU Model: Articles 12 to 15 of the European Union’s 2000 E-Commerce Directive
The Directive on Electronic Commerce, also known as the E-commerce Directive,Footnote 15 was adopted by the European Union in 2000. Its main purpose is to establish a legal framework for electronic commerce in the EU and facilitate cross-border online transactions. The directive applies to a wide range of online services, including e-commerce platforms, social media platforms and search engines. One of the key provisions of the directive is the safe harbour provision, which protects intermediaries from liability for the content that they transmit or store on their platforms.
Similar to the American model, this provision is intended to encourage innovation and free expression on the internet by limiting the legal liability of intermediaries. However, the directive also establishes conditions under which intermediaries can be held liable for illegal content transmitted or stored on their platforms. These include cases where the intermediary has actual knowledge of illegal activity or information on their platform or where they fail to promptly remove such content once they become aware of it. Additionally, the directive provides for a notice and takedown procedure, which allows individuals or organizations to request the removal of illegal content from intermediaries’ platforms.
The Brazilian Model: Article 19 of the ‘Internet Bill of Rights’
The Brazilian model of intermediary liability, as described in Article 19 of the Brazilian Civil Rights Framework for the Internet,Footnote 16 also establishes that internet intermediaries are not responsible for content generated by third parties. However, intermediaries can be required to remove content that has been deemed illegal by a court order, that violates intellectual property rights, or that contains unauthorized nudity.
The Brazilian model has gained international relevance because it relies on judicial review to assess issues related to freedom of expression. Unlike the American model, which grants content providers immunity for their acts of content moderation, the Brazilian model recognizes that these practices can violate rights and may give rise to legal liability. This is why Articles 19 and 21 of the Marco Civil set out the standards to be met in order to balance the moderation of harmful content against freedom of expression, a fundamental right reinforced several times in the law.
Interestingly, these mechanisms are now converging towards more stringent obligations to monitor and remove content, as we will see in the following section.
III. The Rise of a Duty of Care
Intermediary liability solutions have aimed to mitigate the liability of content providers for the third-party content they host. However, the methods for content removal were highly localized, with each jurisdiction building its own approach to striking down harmful content based on its national view of how to balance freedom of expression. The emergence of issues like misinformation and hate speech online, which pose threats to democracy, physical and mental safety, and public health, has prompted countries to reconsider their stance. These developments, along with debates about the role of internet content providers in distributing online content, have led to new regulatory arrangements that place greater responsibilities on content providers for removing harmful content. However, this model places the burden on content providers to make legal judgements about the content circulating online. This raises concerns about their lack of legitimacy to adjudicate political speech and about the challenge of handling the volume of online communications with the aid of automated tools, which can suppress speech at a large scale if left unchecked.
The German NetzDG of 2017
In Europe, for instance, Germany passed the Network Enforcement Act (Netzwerkdurchsetzungsgesetz, or simply ‘NetzDG’) in 2017, a regulation that explicitly ‘aims to fight hate crime, criminally punishable fake news and other unlawful content on social networks more effectively’Footnote 17 and increases the rigour with which intermediaries are held accountable. In general terms, the law creates a list of situations that oblige platforms to carry out the summary removal of content.
Among its innovations, the new rules require more transparency from content providers and expedited responses to users’ complaints. Under the rules on effective complaints management, which establish a standard of greater transparency and efficiency, operators of social networks ‘must offer users an easily recognizable, directly accessible and permanently available procedure for reporting criminally punishable content’, as well as ‘immediately take notice of content reported to them by users and examine whether that content might violate criminal law’.
From a practical perspective, the added rigour that NetzDG brought to pre-existing legal obligations did not necessarily lead to the desired changes. In fact, in the same year it entered into force, a study showed that the law neither resulted in widespread calls for takedowns nor compelled intermediaries to adopt a ‘take down, ask later’ approach. However, uncertainty as to whether it would effectively prevent hate speech remained.Footnote 18
The stringent law was immediately criticized for posing threats to online free speech, since its strict content moderation rules, applied to social media companies, could incentivize intermediaries to over-police speech.Footnote 19 For these reasons, Germany passed the Act to Amend the Network Enforcement Act, which entered into force on 28 June 2021,Footnote 20 with notable changes to the user-friendliness of complaint procedures,Footnote 21 the appeals and arbitration procedures, transparency reports, and an expansion of the powers of the Federal Office of Justice.Footnote 22
The European Union’s Digital Services Act (DSA) of 2022
Recently, the European Union approved a digital strategy consisting of two instruments: the Digital Services ActFootnote 23 and the Digital Markets Act.Footnote 24 Together, these laws form a normative framework that seeks to establish a safe digital environment, one that can both address competition concerns and protect users’ fundamental rights.
The DSA, updating European regulation, sought to bring more incisive rules to digital services in general, which ‘include a large category of online services, from simple websites to internet infrastructure services and online platforms’.Footnote 25 In this way, the digital strategy reaches a variety of providers, even though its primary focus is on intermediaries and digital platforms – online marketplaces, social networks, content-sharing platforms, and others.
Two critical provisions in the new European regulation are the due diligence obligations and systemic risk monitoring. The due diligence obligations, which can be interpreted as a duty of care, impose new legal obligations on digital services by establishing a list of measures considered reasonable to avoid damage to users; failure to adopt them may be recognized as negligence and thereby expose these platforms to liability. Systemic risk monitoring, meanwhile, is an obligation directed at very large online platforms (VLOPs) that requires them to monitor and survey harmful trends such as disinformation and hate speech.Footnote 26 Along the same lines, VLOPs will be subject to annual independent audits and will be obliged to offer users at least one recommendation system option that is not based on user activity on the network (profiling).
So far, all that can be said is that the changes to European regulation have made the rules governing accountability for digital services more rigid, with the aim of creating a more secure and competitive digital environment that benefits both the economy and individual freedoms. Whether and how well these goals will be achieved, and what new problems may arise, remain open questions, as the new rules have not yet come into force.
Duty of Care in Other Countries: USA, UK and Brazil
In the US, Section 230 rules have been challenged across the political spectrum. Republicans oppose content moderation because they see it as causing significant damage to freedom of expression, despite its intention of managing harmful content. Democrats, on the other hand, argue that the immunity granted to platforms fails to make the digital environment safer precisely because it promotes inertia in the face of the most damaging material circulating freely on networks. Still, despite political divergences, a survey showed that most US citizens favour eradicating harmful misinformation over protecting free speech.Footnote 27
Several civil society organizations argue that freedom of expression on the internet could not be the same without Section 230.Footnote 28 On the other hand, the provision is not free from controversy: today, many claim that Section 230 grants undue immunities to platforms and disregards social media’s ability to stop the spread of false information and hate speech. Section 230 has already been the subject of more than 25 legislative measures that attempt to abolish or alter it in response to demands for greater platform accountability.Footnote 29
In Brazil, Bill no. 2630/2020 (nicknamed PL das Fake News)Footnote 30 offered a proposal to combat online disinformation. Over the last three years, the Bill has evolved, particularly after the storming of the seats of the three branches of government in Brasília in 2023.
Proposed during the pandemic, PL2630 emerged in a context of informational disorder and had as its initial focus the fight against disinformation. Over time, the engagement of various social sectors in the legislative discussion transformed the project into something quite different from the initial proposal, and today PL2630 has taken on the contours of a broader platform regulation bill. The core of the project remains centred on user rights and transparency in content moderation, which can be understood on three fronts: broad transparency, as in the general reports and statistics made available by platforms; systemic transparency, covering the analysis of the risks posed by digital services to fundamental freedoms, which relates to issues of algorithmic transparency; and individual transparency, with clarifications to the user about content moderation decisions and their grounds. However, the project still contains controversial parts and remains a contested agenda item in the National Congress. The Bill was granted urgency status in AprilFootnote 31 and is still awaiting a vote,Footnote 32 amid a contentious debate that includes intense publicity and public positioning by digital platforms regarding PL2630.Footnote 33
In the United Kingdom, the Online Safety Bill places a duty of care on internet service providers to keep users safe. In doing so, it also notes that companies should have regard for privacy and freedom of expression concerns.Footnote 34 Although framed in broad terms, the duty of care consists of three distinct duties, defined in sections 9 to 11 of the Bill: the protection of users from illegal content (section 9); additional measures to protect children’s online safety (section 10); and, for services with broader reach and magnitude, the protection of all users from content that is harmful although not illegal (section 11). Like other regulations, the Online Safety Bill has faced criticism for imposing duties that can burden providers and potentially facilitate censorship.Footnote 35
IV. Conclusion
Intermediary liability is undergoing significant changes that will force internet content providers to take on greater responsibility for the risks associated with their activities, including the dissemination of harmful content like misinformation and hate speech. However, there are concerns that this will erode the relative immunity these providers have enjoyed for hosting third-party content, with impacts on both economic incentives and human rights. The economic burden of developing automated tools and sustaining teams capable of carrying out content moderation within such expedited time frames is also a concern.
While combating harmful content is a valid goal, the challenge lies in identifying illegal forms of speech and creating mechanisms to remove them without over-moderating content. The economic incentives to suppress speech more actively that arise from stricter rules may make it harder for people to communicate online, potentially chilling speech on these platforms.
Financial support
No funding was received for preparation of this article.
Competing interest
Authors Caio C. V. Machado and Thaís Helena Aguiar declare none.