A. Introduction
Systemic risk assessments have the potential to significantly transform the landscape of content moderation. By requiring platforms to engage in a “proactive and continuous form of governance,” they promise to address content moderation where it really matters.Footnote 1 While in the United States scholarship on systemic approaches flourishes—though any such efforts depend on the goodwill of platforms to function—the European Union has introduced obligations for platforms to assess and mitigate systemic risks in its Digital Services Act (DSA).Footnote 2 The EU has, yet again, passed an ambitious law regulating the digital future, likely with vast extraterritorial effect.Footnote 3 It is still uncertain whether the DSA will effectively strengthen platform accountability, or whether the EU has merely released yet another European paper tiger. Much depends on how the Commission will implement the new law.Footnote 4
Systemic risk assessments constitute a new regulatory approach to tame the power of platforms.Footnote 5 They are supposed to complement individual rights and remedy mechanisms, which are part of what is often described as the “rule of law” approach.Footnote 6 Individual rights and remedy mechanisms, which allow users to challenge enforcement decisions, already exist, but are often viewed as ineffective. Many of the most important components of content moderation are out of reach for individual remedy mechanisms, and individual rights fail to capture all relevant societal interests. Articles 34 and 35 of the DSA aim to address these shortcomings by establishing obligations for platforms to assess and mitigate systemic risk.Footnote 7 Platforms must analyze the harms of their content moderation practices, including, according to the wording of Article 34, the “amplification of content,” the “design of recommender systems” and generally their “content moderation systems.”Footnote 8 They must analyze how their moderation practices systemically affect areas such as civic discourse, electoral processes, public health, and security. The DSA vaguely defines the sources of risk, Article 34 (2) DSA, the kinds of risks, Article 34 (1) DSA, and the sorts of mitigation measures platforms must consider.Footnote 9 Beyond that, systemic risk assessments in content moderation are unknown territory for industry, academics, and regulators alike.
A systemic approach to regulating content moderation could be a huge leap forward, making it possible to hold platforms accountable for their societal impact. It is also a jump into the unknown, and it remains uncertain how the obligations will be implemented. The first round of risk assessments was due at the end of August 2023, four months after the EU Commission designated platforms as “very large online platforms” (VLOPs).Footnote 10 These risk assessments were then sent to auditors, the Commission and the Board for Digital Services, which review the risk assessments, develop best practices and guidelines on specific risks—Article 35 (2) b. and (3) DSA—and will potentially require platforms to take alternative mitigation measures or even fine them.Footnote 11 The risk assessment and audit reports have not yet been published and will only be published three months after the Commission has received the audit report.Footnote 12 As the audit period is one year, the first reports are not expected until Autumn 2024.Footnote 13 Meanwhile, the Commission has announced that it has opened an investigation into TikTok. Other than that, little is known about its approach to enforcing systemic risk assessment obligations. As the DSA only outlines the basic parameters of the processes by which risk assessments will be developed and reviewed, much uncertainty remains. The success of systemic risk assessments as a regulatory approach for content moderation will ultimately depend on how the process is implemented, on questions of who takes what role in that process, and on how those involved execute their particular functions.
This Article develops a proposal which contributes to the process of transforming systemic risk assessments into a meaningful mechanism. By providing some foundations and outlining one of many possible visions, it hopes to instigate a debate in which many more proposals for a successful implementation of the risk assessment provisions can be developed.
The Article begins by describing the potential and limitations of individual remedy mechanisms under the DSA. It argues that the DSA significantly strengthens individual remedy by expanding what kind of content moderation decisions can be challenged by users. However, despite this expansion, individual rights and remedy mechanisms remain structurally limited. Responding to these limitations requires systemic approaches, such as the ones outlined in the DSA.
The Article then lays out the challenges of systemic approaches to regulating content moderation. It explains why public actors should refrain from defining the concrete standards governing systemic risk assessments and argues that terms of service and contractual freedom do not provide a legitimate normative basis for assessing systemic risks, either. It describes the challenges of relying on fundamental rights and expertise to specify systemic risk obligations.
Responding to the identified challenges, Section D proposes an approach which focuses on civil society involvement and could contribute to making systemic risk assessments a success. It conceptualizes the processes laid out in the DSA as a “virtuous loop”—a loop involving platforms, the Commission, the Board for Digital Services and auditors—which ultimately aims at empowering civil society organizations. It engages with the shortcomings of conventional “multistakeholderism” and explains how systemic risk assessments provide solutions to these shortcomings. It argues that systemic risk assessments could make civil society involvement mandatory, thus doing away with the flaws of platforms’ extensive discretion as to when and how to involve civil society stakeholders, and pressuring platforms to account for positions articulated through civil society engagement. The Article finally proposes a role for social media councils and argues that they could help make civil society involvement fairer and could translate various positions into concrete and implementable solutions.
B. The Future of Content Moderation Regulation is Systemic
This Section describes the idea underlying existing individual remedy mechanisms. It explains why individual remedy might become significantly more impactful because of the DSA and why it still remains a structurally limited form of platform accountability.
I. The Idea Behind Individual Remedy
Individual remedy mechanisms allow users to contest enforcement decisions that platforms impose on them for violating the policies platforms establish for speech. The punishments levied on a user range from the removal of a post to temporary or even permanent account suspensions. Individual remedy can be conceived as part of a broader approach to hold platforms accountable, which has been characterized as the rule of law approach.Footnote 14 Individual remedy has been celebrated as an important step in protecting users’ rights to free expression, constituting an important component of what has been described as “digital constitutionalism.”Footnote 15 Conceptually, the justification for requiring platforms to provide individual remedy mechanisms is straightforward: As platforms increase in size and power, their decisions to sanction users become comparable to decisions a state imposes on its citizens, and their policies resemble laws,Footnote 16 making them the “new governors.”Footnote 17 Therefore, similar to citizens having rights to contest actions of the state, users should have rights to contest actions of platforms.
From this analogy follows the idea that, although fundamental rights were originally intended to constrain the power of the state and conventionally only apply in the vertical relation between a citizen and the state, the extraordinary power of platforms justifies applying them in the horizontal relation between a user and a platform. More concretely, the idea is to apply the negative dimension of fundamental rights in a horizontal relationship, meaning the dimension which allows users to contest sanctions which platforms impose on them for the alleged violation of a rule. This idea has shaped self-regulatory approaches of platforms, which often provide appeal mechanisms to users for decisions to remove their content. The idea that fundamental rights may have a horizontal dimension is not new and is a common subject of debate in European constitutional law.Footnote 18 National courts, such as the German Federal Court of Justice, have also begun relying on a horizontal dimension of national fundamental rights to address the overwhelming power of platforms vis-à-vis their users.Footnote 19 The DSA appears to enshrine this practice in European secondary law.Footnote 20 It is still largely unclear if and how secondary law can command the application of European fundamental rights, whose scope is defined in the Charter of Fundamental Rights—Article 51 (1) Charter of Fundamental RightsFootnote 21 —and what follows from a potential application with respect to the substance of decisions taken by platforms.Footnote 22 Nonetheless, the DSA in any case creates clear procedural obligations: Platforms will have to provide a textual foundation for interventions against users in their terms and conditions, per Article 14 DSA; platforms will need to notify users with a statement of reasons when intervening, per Article 17 DSA; platforms will need to provide internal remedy, per Article 20 DSA; and platforms will be required to cooperate with external remedy mechanisms, per Article 21 DSA, meaning that users can appeal decisions to sanction them for violative content to independent bodies.Footnote 23
II. The New Potential of Individual Remedy: Demotion of Content
The DSA obliging platforms to provide individual remedy mechanisms is a step forward for two reasons: One, providing remedy is no longer a courtesy of platforms but a legal requirement. Two, the DSA defines very broadly—perhaps shockingly so—what counts as a sanction which triggers the individual remedy framework, meaning a measure which requires a foundation in the terms and conditions and a statement of reasons, and which triggers internal and external remedy mechanisms. Any “restrictions that they impose in relation to the use of their service,”Footnote 24 including the demotion of contentFootnote 25 and shadow-banning,Footnote 26 count as sanctions.Footnote 27 The DSA’s definition of what is eligible as a “sanction,” and subsequently able to be challenged, is broad by design. This approach creates grounds for contestation for all the measures, explicit or covert, which platforms employ to enforce against content violating their policies.Footnote 28 The DSA essentially codifies an idea that courts often took years or decades to develop: Measures that count as sanctions need not take the form of a formal intervention or be carried out with the intent to sanction an individual. Instead, courts have championed an understanding of sanctions that includes measures which practically and detrimentally affect individuals.
This is the “Dassonville” moment for platforms, comparable to the pathbreaking decision of the European Court of Justice declaring that any measure which is “capable of hindering directly or indirectly, actually or potentially intra-community trade are to be considered as measures having an effect equivalent to quantitative restrictions,” meaning that these measures constitute encroachments on trade rights and require a justification.Footnote 29 The ECJ’s broad understanding of what counts as an encroachment on individuals’ free trade rights paved the way for the ECJ to become the EU’s “engine of integration,” as it could declare inapplicable any national measure that it found to fall under this extremely broad definition.Footnote 30 Any such measure would need to serve a legitimate interest and satisfy necessity and proportionality requirements, and the ECJ is the final authority to assess whether such measures are indeed justified. Applying a broad definition as to what counts as an encroachment on individual rights changed the European legal order, and it will now change content moderation.
The DSA’s definition opens the demotion of content to user challenge, providing an urgently needed remedy: Demotion without justification or notification is a powerful and opaque form of content moderation that has lacked effective accountability mechanisms.Footnote 31 For example, Elon Musk explained that as CEO of Twitter, he would no longer remove content but “max deboost[] & [sic] demonetize[]” it instead, likely to avoid all the scrutiny that comes with removal.Footnote 32 The EU proposes its own answer to the question of whether “freedom of speech” is also “freedom of reach,”Footnote 33 introducing for the first time requirements for platforms to justify reducing the reach of content. This means that in the EU, demotion is treated as a “minus” relative to removal: The normative framework is identical, except that the threshold to justify demotion will likely be lower than the threshold to justify removal. Besides strengthening individual remedy, this also raises difficult questions, such as how to delimit demotion from amplification.Footnote 34
III. Persistent Shortcomings
While individual remedy will become significantly more meaningful due to the broad definition of what counts as a restriction, its contribution to rectifying the societal impact of content moderation remains structurally limited.Footnote 35 The structural deficits of individual remedy consist, one, in its still limited scope: Crucial components of content moderation, such as the amplification of content and the design of recommender systems and the newsfeed, clearly of significant importance for social media,Footnote 36 remain out of reach for individual remedy. Individual remedy does not address the algorithmic infrastructure, which ultimately makes a platform what it is. Two, many societal harms, such as impacts on civic discourse, electoral processes or public health and security, do not manifest themselves in the violation of individual rights. Individual remedy only empowers users of platforms, although non-users equally suffer the detrimental consequences of content moderation. Three, content moderation at scale happens through automated means, and the availability of individual remedy, understood as an additional review through humans, cannot match the scale of automated decisions. Four, a regulatory approach which builds solely on individual remedy ultimately puts the burden of holding platforms accountable on the individual.
The transition from individual rights to systemic risk assessments entails a paradigm change: it introduces a very different approach to regulating tech. Previous legislation, such as the General Data Protection Regulation (GDPR), has focused heavily on individual rights and remedies, building on a neoliberal construction of individual freedom and responsibility.Footnote 37 The privacy theory that underpins the individual-centered regulatory approach that characterizes the GDPR is based on the idea of the “liberal self” that possesses “the capacity for rational deliberation and choice” and is “capable of exercising its capacities.”Footnote 38 This approach finds its continuation in proposals to respond to machine learning-based decision making with transparency, explainability and due process requirementsFootnote 39 —the effectiveness of which can be questioned.Footnote 40 Systemic risk assessments can be distinguished from regulatory approaches that assume that individuals will read terms and conditions, make meaningful choices when clicking through pop-up windows, and that this will keep large corporations in check. They shift responsibility from the individual to the state, taking the burden of holding platforms to account off the shoulders of individuals. In much the same way that the state is expected to ensure that the products we use and the food we eat are safe, the medicines we take are tested, and the toys our children play with are free of toxic substances, the welfare state is now expected to police the harmful effects of social media.
This Section argued that the DSA significantly strengthens individual remedy by expanding what kind of content moderation decisions can be challenged by users. Despite this expansion, individual remedy has its limitations. Responding to these limitations requires an understanding of systemic harms and regulatory approaches to address these harms. The following Section develops such an understanding.
C. The Challenges of Assessing Systemic Risks
To develop an adequate procedural and normative framework for risk assessments, we first need to understand the challenges that a systemic approach to content moderation entails. This Section outlines some of these challenges. It first explains why public actors should refrain from defining the concrete standards governing systemic risk assessments themselves. It then argues that terms of service and contractual freedom do not provide a legitimate normative basis for assessing systemic risks. Finally, it describes the challenges of relying on fundamental rights to substantiate systemic risk obligations.
I. Regulating Speech and Public Discourse
A comparison can help illustrate the challenges of regulating content moderation and the role that public actors should and should not play.
The starting point for such a comparison is that there is nothing new about regulating the operation of private companies to mitigate social harm. Environmental regulation, for example, has the goal of limiting the detrimental impact of production processes on our climate. How is regulating industrial processes generating CO2 different from regulating content moderation?Footnote 41
The first point this comparison helps us clarify is the reasons justifying regulators intervening in platforms’ content moderation practices. When a company runs a factory and emits CO2, it cannot argue that the way it conducts its business is a purely private matter. Emissions affect everyone, which makes the processes creating them a public matter and justifies regulating them. The regulation of content moderation and systemic risks is based on the same idea. Because social media, and more particularly the content on social media including its curation, affects public discourse, elections and public health, and produces real-world violence, content moderation is not a private matter. The impact of content moderation on our society justifies regulating content moderation.
Secondly, the comparison illustrates the particularities of public institutions regulating speech and public discourse.Footnote 42 We may take no issue with the state defining concrete standards on how much CO2 a company may emit. When the state regulates content moderation, however, it indirectly regulates speech and shapes public discourse as it takes place online. By defining concrete standards for content moderation, governments would, in order to limit the power of platforms, unduly expand their own power. This is the conundrum of regulating content moderation.Footnote 43 Regulating content moderation should not be a back door to regulating speech. In a democracy, public discourse shapes elections, which ultimately determine who holds power. Democratic governments, and more broadly public institutions, should not be able to manipulate public discourse, and the content moderation shaping that public discourse, in their favor. There is, therefore, a natural and healthy skepticism about endowing the EU Commission with a mandate to dictate to platforms how they should moderate and curate content.Footnote 44
A third aspect, which the comparison allows us to see more clearly, is why regulating content moderation is so hard—and why we need a complex process to succeed in this task. The comparison helps us specify the challenges of assessing and mitigating the systemic risks of content moderation. When regulating CO2 emissions, emissions can be measured in an objective manner and their effects can be proven in an empiric, scientific process. Regulation can establish clear, objective and easily measurable thresholds for what is and is not allowed—how much CO2 can and cannot be emitted. Regulating content moderation is very different from regulating emissions. The societal harms of content moderation are not easily quantifiable. We lack objective standards and empirical measures to assess these harms; thus, hard questions arise: What are the emissions or public harms produced by social media? How do we measure these emissions and harms? What causes them? What conditions must content moderation fulfil to minimize emissions and mitigate harm? These questions are hard to answer, and there are no objectively correct solutions. And as we deal with speech and public discourse, public institutions should refrain from dictating standards in the way they might dictate emission limitations. This is why the processes around risk assessments are complex and why we need a procedural framework which allows us to define and refine standards over time.
II. The Limited Legitimizing Power of Terms of Service
Another point the comparison illustrates is that the reference point for systemic risk assessments is not the user base of a platform, but the public more broadly. One important idea underlying systemic risk assessments is that the way platforms moderate content affects not only the users of platforms, but also all other people and society at large. As the impact of content moderation is not only internal but also external, producing real-world harms affecting fundamental rights and the functioning of democracy, problems pertaining to content moderation cannot simply be resolved between a company and its users. It is for that very reason that the idea that user choice pertaining to the content users see on their newsfeeds solves the problem of content moderation is flawed, and why relying on user polls to decide hard questions is not convincing either. Anyone involved in systemic risk assessments, including social media councils, must focus on the impact of content moderation on the general public, rather than only on the platforms’ users.
Based on this observation, we can identify important differences between the normative framework which underlies individual remedy and a normative framework we might find suitable for systemic risk assessments. Individual remedy allows users to contest platforms’ enforcement decisions. The starting point of any claim brought under the individual remedy framework is that an enforcement decision did not align with the terms and conditions. Platforms, in turn, invoke their terms and conditions to justify decisions that affect their users. The underlying assumption here is that basing enforcement decisions on terms and conditions is legitimate because the user agreed to these terms and conditions. Being bound by them is a consequence of users exercising their contractual freedom. One can contest this assumption, arguing that contractual freedom only legitimizes the relation between two parties if both contracting parties are actually on an equal footing, and that, in fact, platforms are often so big and market-dominating that this balance does not exist at all, leaving the user no real choice. However, one does not have to accept this argument, or engage with it at all, to see why terms and conditions are largely irrelevant when it comes to justifying systemic risks. Systemic risks affect everyone, not just users—including all the people who never agreed to the terms and conditions. In the normative framework for systemic risk assessments, terms and conditions are not the solution to the problem but very much a part of it.
The regime established in Articles 34 and 35 DSA requires platforms to assess whether their terms and conditions cause systemic risks.Footnote 45 While the DSA does not directly interfere with platforms’ ability to write their own terms and conditions, systemic risk assessments constrain their discretion.Footnote 46 Cementing an everything-goes approach in X’s terms and conditions would not shelter the platform from obligations under the DSA. Platforms will invoke their right to conduct a business to insist that they can design their terms and conditions as they please. But this does not prevent EU institutions from addressing terms and conditions as a source of risks, as explicitly named in Article 34(2) DSA, and it presents us with the challenge of balancing a platform’s right to conduct a business against the objective of mitigating societal harms.
III. The Challenges of Applying Human and Fundamental Rights
We can look at individual remedy processes to reflect on if and how human or fundamental rights could provide foundations for systemic risk assessments. Over the last few years, the idea that international human rights law should guide content moderation practices has been discussed by academics, civil society organizations and UN rapporteurs alike.Footnote 47 Social media councils involved in individual remedy mechanisms have developed approaches to review enforcement decisions based on human rights.Footnote 48 Exploring how international human rights law evolves, scholars analyze if and how it imposes obligations on intermediaries, including platforms.Footnote 49 Relatedly, the DSA requires platforms to account for fundamental rights in their enforcement processes—Article 14 DSA—and requires them to assess negative effects on fundamental rights on a systemic level—Article 34 DSA.Footnote 50 With secondary law aiming to define the scope of application of the EU Charter, which has the status of EU primary law, it is still largely unclear what the application of European fundamental rights to content moderation will look like.Footnote 51
While relying on human or fundamental rights in the individual remedy framework typically means applying them horizontally in a negative dimension, applying them to systemic risks would mean applying them horizontally in a positive dimension. That means that platforms would not only be expected not to violate the freedom of expression of their users by removing their content, but that they would be expected to take active steps to ensure that their content moderation practices do not have detrimental effects on fundamental rights generally, including effects on the fundamental rights of people who are not their users but who can still be affected by online content, such as hate speech. This entails even more uncertainties. It is difficult to assess whether a post should be removed based on fundamental rights, and it is even more difficult to assess whether a particular way to amplify content “impacts fundamental rights.” Examining international human rights law, scholars have assessed whether states have positive obligations, meaning that they should pass laws that protect human rights.Footnote 52 The question the DSA raises is whether similar obligations should be extended to private actors, such as social media platforms. In the world of international human rights law, the idea that businesses have human rights responsibilities is already well established and is addressed in the non-binding UN Guiding Principles on Business and Human Rights.
Civil society organizations, human rights consultancies and UN agencies and officials have worked to specify human rights due diligence obligations in the context of freedom of expression and social media for years.Footnote 53 To better grasp what the obligations pertaining to systemic risks could entail for platforms, we can thus draw a comparison between due diligence obligations under the UN Guiding Principles on Business and Human Rights and the DSA’s obligations to assess systemic risks. The EU can learn from the efforts originating in the human rights and corporate social responsibility world, and the global struggle can leverage hard legal obligations originating in the EU to hold platforms accountable to human rights responsibilities.
However, while holding platforms accountable based on human rights responsibilities is seen as an important pathway to constraining platforms’ power, there is also persistent skepticism about whether human rights actually provide an adequate basis to assess platforms’ decisions,Footnote 54 and there are questions around how this development, in turn, affects international law.Footnote 55 There are, indeed, many good reasons for doubt, as the criticism focusing on individual remedy already illustrates: Human rights responsibilities are rather general and do not say anything specific about how to moderate content; perhaps framing a discussion on what should happen to content in terms of human rights disguises more than it reveals; and perhaps it is impossible to resolve individual cases based on human rights responsibilities through legal reasoning, and social media councils should not purport that the outcomes they suggest are somehow determined by international law. The skepticism about human rights providing an adequate framework for the existing individual remedy framework is likely to become even more pronounced when human rights—or, in the context of the DSA, fundamental rights—are relied upon to assess systemic risks. We do not have to discuss the merits of this skepticism in detail, and need not share a skeptical view of the role of human rights in content moderation, to understand that human or fundamental rights will not provide objective, detailed standards against which systemic risks could be measured and mitigated.
This also means that human rights experts can only play a limited role in defining the standards that should govern systemic risks. Platforms can, and already do, work with human rights experts such as BSR or Article 1, who can develop human rights impact assessments.Footnote 56 They also work with Trust & Safety experts, such as the Trust & Safety Professional Association or the Integrity Institute.Footnote 57 These organizations aim at developing and publicly sharing knowledge, and at developing processes and standards which platforms can apply. While strengthening the role of outside experts may be one way to substantiate the systemic risk assessment process, and we should develop proposals for how they can contribute in an impactful, independent and more transparent way, this approach has its inherent limitations.
Human rights experts alone cannot decide some of the hardest questions pertaining to systemic risks, nor resolve the tradeoffs involved in choosing between mitigation measures. As acknowledged above, they cannot claim that their particular assessment of systemic risks directly follows from international human rights law or European fundamental rights.Footnote 58 They, as well as Trust & Safety and any other involved experts, ultimately make choices that, while based on expertise, are not neutral. Platforms may attempt to frame the work with outside experts “as ostensibly neutral acts of consultations or advice,” but doing so “obfuscate(s) the fact that each entity has its own goal and agenda, further complicated by the fact that many of these entities offer their ‘expertise’ for a fee.”Footnote 59 Surely, different experts will come to different solutions when it comes to assessing and mitigating systemic risks and answering hard questions on the impact of content moderation practices.
As we lack a clear normative basis which could govern systemic risks, and regulators should not define standards in detail, we need to develop a procedural approach which allows systemic risk obligations to be concretized over time. The following Section proposes such an approach.
D. A Way Forward: A Virtuous Loop to Assess Systemic Risks
This Section explains the process as outlined in the DSA, conceptualizing it as a “virtuous loop”—a process which empowers civil society and will help solve the conundrum of regulating content moderation. It analyzes what role different players should and should not play in that loop, including the Commission, the Board for Digital Services, platforms and auditors, and explains why civil society stakeholders should play a central role in the process of assessing and mitigating risks. It describes how civil society involvement already constitutes the core of another approach to tame the power of platforms, an approach referred to as “multistakeholderism.” To test the proposed model, the Section lays out the standard criticism of conventional multi-stakeholder consultancy processes, and then argues that the “virtuous loop” of risk assessments provides responses to this criticism. Finally, focusing on social media councils, it outlines in more detail one among many possible visions of how to strengthen civil society involvement in risk assessments.Footnote 60 It describes how social media councils could strengthen civil society involvement in risk assessments and further mitigate conventional shortcomings of multistakeholderism.
I. The Process as Established in the DSA
Beyond defining sources of risk, kinds of risks, and the mitigation measures platforms must consider, the DSA outlines responsibilities and processes around systemic risk assessments. It creates a loop in which, once a year and before launching a new product, platforms’ efforts to assess and mitigate risks are evaluated and recalibrated.Footnote 61 Platforms are supposed to work with civil society organizations and independent experts to assess risks and develop mitigation measures, and are obliged to submit a yearly report on their efforts to the EU Commission, the Board for Digital Services and auditors. These three will then assess whether platforms comply with their obligations under the DSA, or whether they need to change and improve their risk assessment efforts and take alternative mitigation measures. After the Commission and the Board for Digital Services have received the first round of risk assessments from platforms, they are expected to begin developing best practices and guidelines which platforms, in their future risk assessments, will have to account for. In addition, the Commission and the Board for Digital Services are tasked with facilitating the development of codes of conduct, including on issues pertaining to systemic risks.Footnote 62
Some roles, namely those of the platforms, the Commission and the Board for Digital Services, are formally set. However, all three are somewhat ill-suited to ultimately decide what systemic risks are and how they should be mitigated. Platforms are ill-suited because self-assessments by corporations whose ultimate goal is to maximize profits lack credibility.
The Commission and the Board for Digital Services, consisting of the national Digital Services Coordinators, are ill-suited because they are both public institutions, which raises the concerns explained above in Section B.
With platforms, the Commission and the Board for Digital Services not being well suited, the question arises as to which other actors can contribute to transforming systemic risk assessments into something meaningful and legitimate. Three actors might be especially relevant: Auditors, civil society organizations and, not outlined in the DSA but always relevant in the EU, national courts and the European Court of Justice (ECJ).
Despite the rather ambitious aspirations of the EU Commission laid out in its recent delegated act,Footnote 63 it is still unclear how audits can add value to risk assessments rather than merely creating revenue for profit-oriented consultancy firms.Footnote 64 Auditors are usually large companies, likely with no significant expertise in content moderation and certainly with no particular legitimacy with regard to defending the public interest. Until the shortcomings of auditing processes are overcome, we cannot rely on auditing to create true accountability. Many of the submissions the Commission received in the public consultation on its delegated act make this very point.Footnote 65 Even platforms themselvesFootnote 66 and large consulting companiesFootnote 67 urge the Commission to provide more concrete standards and argue that it should not be the auditors who define these standards.
Courts, and the ECJ in particular, might, over time, play their role in defining standards which need to be considered in systemic risk assessments. The text of the DSA obliges platforms to assess the effects on fundamental rights of curation practices such as recommendations, the newsfeed and, more generally, amplification. National courts and the ECJ have both the competence and the expertise to weigh in on questions pertaining to the horizontal application of fundamental rights. How they might do so, however, remains rather unpredictable. As explained in Section C.III, one can construe the question of assessing the systemic fundamental rights implications of curation practices as a matter of concretizing positive fundamental rights obligations on a horizontal level. This is not entirely implausible, as it is only a step further from the analogy which led to applying negative rights on a horizontal level. However, such a construction entails uncertainties, faces objections from the horizontal as well as the federal separation of powers within the EU, and might ignore the fundamental rights protections which platforms themselves enjoy.Footnote 68 In the near future, the role of the ECJ in addressing systemic fundamental rights implications may remain constrained to its assessments of competing interests which justify individual rights encroachments, such as when it considered, in its ruling on the “right to be forgotten,” whether the general public had a legitimate interest in accessing information provided by search engine providers.Footnote 69
This leaves open one more role which could contribute to solving some of the challenges of systemic risk assessments: that of civil society organizations. This role is established only in the recitals of the DSA, namely Recital 90, indicating minor relevance. However, taking into account the practical and theoretical concerns raised here, stakeholders might be best suited to concretize the standards and processes based on which systemic risks should be measured and mitigated.Footnote 70 Many submissions on the Commission’s delegated act on independent auditing make that point with a view to both systemic risk assessments and independent auditing.Footnote 71
II. “Multistakeholderism” and its Flaws
The idea that civil society organizations should fill the gap which regulators cannot close is not new, and has previously been described as the “‘least-worst’ option.”Footnote 72 It was dubbed with the ugly yet somehow fitting term “multistakeholderism.”Footnote 73 Before systemic risk assessments were introduced, multistakeholderism was conceived as the second large approach, besides rule of law and individual remedy mechanisms, to hold platforms accountable. It aims at increasing civil society’s influence in platform governance by increasing transparency and assuring consultation and participation of stakeholders.Footnote 74 It “pursues democratic accountability . . . but reflects concerns about direct state regulation of online communications.”Footnote 75 Usually building only on voluntary commitments, it does not raise the same concerns as direct state regulation.
However, as indicated above, not only is the idea of civil society organizations helping regulate content moderation not new, there is also persistent skepticism that it indeed provides useful solutions. Let us look at four major objections against multistakeholderism. One, the voluntary nature of civil society involvement leaves platforms too much discretion on when, how and on what questions they should consult stakeholders. Two, even where platforms involve civil society, and assuming for a moment that this involvement creates concrete solutions, they are not required to implement these solutions. Three, challenging that assumption, civil society participation often does not produce compromises and implementable solutions in the first place. Different civil society organizations will bring different views to the table—which is their very task—and often favor different results. Stakeholder consultation helps to inform decision making, which is immensely important, but does not preempt the choice between different possible solutions. Platforms remain the “single point of authority” and ultimately decide what to make of consultations.Footnote 76 Four, although multistakeholderism aims at introducing public interest and purports to contribute to democratic accountability, it can lead to unequal participation, the underrepresentation of poorly funded or marginalized groups, and unfair outcomes which only represent the interests of those who organized more effectively.Footnote 77
The conclusion often drawn is thus that multistakeholderism cannot provide democratic accountability and, therefore, has limited legitimizing capacity.Footnote 78 Perhaps, one could distill the criticism, the ugliness of the term captures well the thing itself: complicated, impractical, unfair and ultimately useless. Why, then, should multistakeholderism help respond to the challenges posed by systemic risk assessments under the DSA? Because, maybe, systemic risk assessments can fix multistakeholderism, and in turn, multistakeholderism can fix what would otherwise be performative and ineffective systemic risk assessments. This leads us to the decisive question: How would that work? The final Sections of this Article propose an answer to that question, first focusing on the mandatory nature of risk assessments and then outlining a potential role for social media councils.
III. Voluntary and Selective Involvement Versus Mandatory and Pre-Defined Involvement
The Commission could empower civil society organizations—emphasis on could, as it is by no means certain that it will. It could decide to give civil society involvement significant weight in its evaluations of the submissions of platforms, and it could, together with the Board for Digital Services, require substantial civil society involvement in its best practices and guidelines. Perhaps this is the true ingenuity of the DSA’s systemic risk assessment provisions: They create a loop in which the Commission can use its enforcement powers to empower civil society and independent expertise. This loop provides solutions to some of the most important objections against multistakeholderism—and might ultimately help solve the conundrum of regulating content moderation.
Best practices and guidelines, in which the Commission and the Board for Digital Services specify what the civil society involvement outlined in the recitals of the DSA concretely entails, would not create legal obligations, but would have quasi-legal effects: Whoever ignores these guidelines and best practices risks breaching their due diligence obligations with a view to assessing and mitigating risks. Best practices and guidelines would thus ultimately remove the cardinal flaw of multistakeholderism, its voluntary nature. Platforms could no longer simply ignore the outcomes of civil society engagement, as this might indicate to the Commission that they fail to adequately assess and mitigate risks. They could also no longer arbitrarily decide in which areas to involve civil society, as Article 34 DSA defines the sources and kinds of risks that risk assessments must address—and, if these are refined by best practices and guidelines, platforms would be expected to involve civil society in all major areas.
The loop conveys the task of concretizing and applying the substantive standards based on which systemic risks should be assessed and mitigated to civil society actors, while the task of enforcing obligations remains with the Commission. It assures that platforms will have to account for the public interest as articulated in the stakeholder process, as imperfect as it might be, without public actors having to define themselves what that concretely means. While multistakeholderism has the potential to respond to a difficult issue raised by systemic risk assessments, namely how to define and apply substantive standards and reconcile competing interests, it will by no means provide a panacea, understood as a comprehensive solution to all the challenges posed by risk assessments. Instead, we should see it as one piece of the puzzle, one component of what makes up an effective risk assessment ecosystem. With good reason, academics and regulators are working to develop data-driven analyses, thorough empirical methodologies and clear audit standards to measure harm and the effectiveness of mitigation measures.Footnote 79 This work is crucial to the success of risk assessments, too. However, we should be careful not to confuse the hard substantive questions underlying risk assessments—such as how to draw red lines between acceptable and unacceptable risk, how to reconcile competing fundamental rights, and what forms of content moderation harm democracy—with questions about how to measure harm. We should be cautious not to drown difficult normative questions in methodological and technocratic standard-setting.
The DSA already foresees that civil society should contribute to the development of codes of conduct.Footnote 80 While codes of conduct may play an increasingly important role over time in concretizing obligations in certain risk areas, developing these codes is a tedious and time-consuming process, and they, too, remain relatively abstract and thus have limited impact on the concrete handling of risks.Footnote 81 Providing civil society with a role in the process of assessing risks would give them a voice where it really matters—in the evaluation of concrete risks and the decision on mitigation measures—and would ensure the practical impact of their work. Best practices and guidelines on risk assessments issued by the Commission and the Board for Digital Services can do what codes of conduct likely cannot: shape a procedural framework for risk assessments and help create the virtuous loop outlined here.
Given the above-described feedback the Commission received on its delegated act on auditing, there is reason to hope that the Commission might strengthen the role of civil society stakeholders in risk assessments, potentially through best practices and guidelines. However, we also need to consider the consequences if it does not: What would happen if the Commission did not encourage a role for civil society stakeholders? The DSA and its risk assessment obligations would likely, contrary to their objective, marginalize civil society’s influence on platforms’ content moderation practices. Before the DSA, assessing and mitigating societal harms was a matter of corporate social responsibility for platforms.Footnote 82 They could work with civil society stakeholders to develop solutions but were under no legal pressure to implement solutions proposed by stakeholders if they believed these would harm their business practices.Footnote 83 As assessing and mitigating societal harms becomes a legal requirement, the dynamic between platforms and civil society actors changes. Working with civil society suddenly creates compliance risks. Platforms run the risk of having to justify before regulators why they did not implement solutions proposed by stakeholders, and could even be fined for not doing so. The consequence of transforming corporate social responsibility matters into hard law is that the legal rather than the policy departments of platforms take the lead. The task of legal teams is to minimize compliance risks. To minimize compliance risks, legal teams will likely advise against any civil society engagement that is not required by regulators. Because of the DSA, civil society stakeholders risk losing the impact they had gained during the era when assessing the societal risks of platforms was a matter of corporate social responsibility.
The demise of policy and governance teams, evidenced by the recent waves of layoffs at platforms, would be followed by the demise of civil society stakeholders’ impact.Footnote 84 The proposed loop is necessary not only to strengthen civil society involvement but also to prevent its marginalization.
To sum up, the proposed loop would respond to two of the most important objections against multistakeholderism: that platforms can arbitrarily decide where and how to involve civil society stakeholders, and that platforms are not bound by the results. Two major objections remain: the problem that civil society involvement does not actually produce implementable solutions and leaves the discretion to decide between different proposals to platforms, and the problem that stakeholder involvement often fails to guarantee equal participation. The following and final Section develops a proposal for how to respond to these objections. It proposes that platforms should work with social media councils to translate civil society involvement into concrete solutions.
IV. Situating Social Media Councils in the Systemic Risk Framework
The term “social media council” is broadly used to describe bodies which typically include a committee of experts or civil society representatives and which work with platforms to improve their content moderation practices.Footnote 85 The idea that such councils—whatever form or shape they might take—can contribute to content moderation accountability has been supported not only by academics but also by the former UN Special Rapporteur on Freedom of Expression, David Kaye,Footnote 86 and they feature in government programs to regulate platforms.Footnote 87 Social media councils have been extensively discussed in academia in recent years, especially the most prominent one, the Oversight Board.Footnote 88 Discussions centered on questions including whether they actually are credible mechanisms to hold platforms accountable and, with a view to the Oversight Board, whether it is truly independent from Meta, whether its human rights approach is compelling, and whether its composition and processes indeed provide an additional and legitimate way to take important decisions and account for the public interest. Originally described as Meta’s “Supreme Court,” social media councils tended to be contextualized in the world of individual remedy. This Section revisits that characterization and situates social media councils in the world of systemic risk assessments and the regulatory landscape the DSA creates.
It argues that, with the DSA, we have a clear language and framework to talk about systemic issues and the societal harms of these platforms, and that this allows us to strengthen the added value of social media councils. It paints a vision for a contribution social media councils could make, and outlines how they could help respond to the two outstanding shortcomings of multistakeholderism: its unfairness and its inability to reconcile competing positions and produce implementable solutions.
Let us begin by briefly reassessing what social media councils do based on the systemic risks assessment framework, taking the Oversight Board as an example. The Oversight Board has been described as “a body of experts, drawn from geographically, culturally, and professionally diverse backgrounds, funded by an independent trust created under Delaware law, which Meta has empowered to make decisions on content that will affect billions of people worldwide.”Footnote 89 The goal of the Oversight Board is to decide emblematic cases, cases which are difficult and significant. When deciding a case, the Board digs deep, analyzing the processes, automated systems and policies which decided the particular case, but which equally affect other cases. The Board ultimately aims at discovering systemic issues and flaws in Meta’s content moderation and takes individual cases as a vehicle to advocate for systemic changes. Based on the analyses of an individual case, it then recommends changes to matters such as policies, enforcement processes and transparency. And it tracks whether Meta implements the recommended changes. While the overall number of cases the Board decides is low, it is not really the number that matters but the systemic changes the Board achieves through cases. Engaging with individual cases, the Board is still part of the world of individual remedy, yes, but it is also part of the world of systemic risk assessments, as it aims at structurally improving the way Meta treats users. The confusing part here is that we can also talk about individual rights and remedy on a systemic level, and that is what the Board is doing in its cases.
However, we wanted to look beyond individual remedy and focus on more systemic issues such as amplification or recommendations. How could social media councils, as one player among many, contribute to addressing the systemic risks of content moderation? Perhaps it is not that big of a leap. Perhaps it is already happening, in a modest way. Let us look at the Oversight Board again as an example. When deciding individual cases, the Board can inquire about the algorithmic systems Meta uses to moderate content, and make recommendations pertaining to those systems.Footnote 90 Additionally, the Board not only decides individual cases but also publishes “Policy Advisory Opinions.”Footnote 91 Meta can decide to send difficult content moderation questions to the Board, which then, after a few months of acquiring relevant information, analyzing public comments, engaging with civil society stakeholders, deliberating and drafting, answers the questions and recommends a concrete way forward. The recommendations are, again, not binding, but the Board tracks how Meta reacts to them. In terms of substance, the scope of Policy Advisory Opinions is not limited. They can focus on issues which the DSA describes as sources of systemic risks, such as how Meta amplifies or demotes content. Building on the DSA, we can conceive of this process as a potential way to assess the systemic risks of the most important components of platforms’ content moderation, including the amplification of content and recommender systems. Mechanisms in which platforms work with social media councils, such as Policy Advisory Opinions or what we can perhaps, more generally, call “risk advisory opinions,” may offer a way forward for social media councils to address all kinds of hard questions pertaining to systemic risks.
Moving away from the concrete example and existing practices, we can more generally assert that one credible way for platforms to conduct risk assessments could be to rely on social media councils to analyze risks, answer hard questions and decide between different risk mitigation measures. Let us leave the example of the Oversight Board behind. The potential skepticism about the credibility and independence of the Oversight Board, the endless discussion about whether it is a legitimate actor or a PR stunt, should not distract us from the task ahead: Envisioning a future in which social media councils, however you want them to be structured and funded, could help fix the shortcomings of multistakeholderism and make systemic risk assessments a mechanism of effective accountability.
The cooperation between platforms and social media councils can take different shapes and different degrees of intensity. Enabled by the Commission’s implementation of the DSA, different models could emerge. On one end of the spectrum could be cooperation which leaves control largely in the hands of the platforms, allowing them to decide what questions the social media council should answer and what access to information it gets, and leaving them discretion as to if and how to implement solutions suggested by the council. On the other end of the spectrum, the cooperation could consist of platforms conferring certain competencies on social media councils: giving them a say on priorities for risk assessments, giving them access to relevant data, and allowing them to choose the questions they wish to engage with, to develop their own frameworks and to make recommendations on some of the most important questions. The platforms would not necessarily have to follow all recommendations that social media councils make, but would at least commit to respond to them—and to justify before the Commission and the Board for Digital Services if they did not follow the recommendations the social media councils made. Social media councils could track whether platforms implement the proposed measures and assess the effectiveness of those measures.
While we focus on the potential of social media councils to contribute to the virtuous loop of risk assessments, we also need to account for their limitations. Social media councils would fulfill what can be conceived as a public function, and they, too, require legitimacy.Footnote 92 The topics that social media councils will have to deal with are so diverse and complex that we cannot seriously assume that the members of these councils will be experts on all of these issues, or that the staff of a social media council could develop the necessary expertise to assess systemic risks in a variety of fields. Expertise and fundamental rights, we found earlier, provide an imperfect basis for resolving hard questions in a legitimate way.
This is why social media councils will have to work with civil society organizations who do the hard work of empirically and qualitatively assessing the practical effects of curation. The added value of social media councils as envisioned in the “virtuous loop” proposed here consists not so much in expertise as in providing a process which involves a variety of civil society organizations, thus enabling participation and consultation of affected groups.Footnote 93 Multistakeholderism can help fix the shortcomings of an otherwise expert-led, and ultimately illegitimate, process of concretizing how risks should be assessed and mitigated. Social media councils, in turn, can help make civil society involvement fairer and translate it into concrete solutions.
Scholars have criticized multistakeholderism because the impact that civil society organizations have in such processes depends on their funding and ability to organize effectively.Footnote 94 Civil society involvement, so the concern goes, thus does not actually represent the public interest and does not actually provide democratic accountability. The deliberative processes of social media councils could help mitigate this concern. We can imagine social media councils working like courts, where parties are expected to present all facts and concerns, but where the complex task of ultimately assessing their weight and relevance for the legal evaluation falls to the judge. The idea that “the court knows the law” implies that the parties presenting their concerns need not, and that the court should give equal consideration to all issues raised, even if they are not presented in a comprehensive and formal manner. Social media councils should, in turn, know the basic legal parameters governing systemic risk assessments and situate relevant facts and concerns wherever they are most relevant. Where needed, they could commission additional research to dive deeper into underrepresented positions. Social media councils could serve as a corrective for otherwise potentially skewed civil society involvement and could thus help to make it fairer, providing a solution to another shortcoming of conventional multistakeholderism.
Social media councils would not only oversee the civil society involvement but would also serve as the new “point of authority”Footnote 95 which decides what conclusions to draw from the various positions presented in stakeholder engagement. The remaining objection against “multistakeholderism” consists in the problem that civil society involvement ultimately only surfaces various divergent positions, failing to reconcile them and produce concrete, implementable measures. The solution to this problem lies in the deliberative processes of social media councils and the reflection of these deliberations in their reasoned decisions.
We acknowledged that human rights, legal reasoning, and expertise do not actually relieve anyone of making hard calls, deciding between different tradeoffs and resolving conflicting interests. The next best thing to legal reasoning and expertise are deliberative processes: processes in which a group of people discusses and weighs affected interests, to finally come to a conclusion. Yes, the conclusions reached in these deliberations and the solutions proposed by social media councils will be imperfect. But at least a reasonable and well-informed conversation, detached from economic interests, took place. And we can read about the considerations which led to these solutions and criticize the reasoning behind them. This allows for academic and public conversations which can feed into the next round of risk assessments the following year. This solution is not perfect, but, if we get the set-up of social media councils and their cooperation with civil society right, it could be less bad than the existing “least-worst” options.
For social media councils to become credible actors and add value in the loop of risk assessments, we need to think more about how they should be structured and funded, how the decision-makers should be chosen and how the body should operate. We can build on the experiences of, and the debate on, the Oversight Board to develop proposals. We also need to think about how the research and the work of contributing civil society organizations should be funded. Much speaks in favor of the EU providing resources for this work, as the quality of risk assessments will largely depend on the quality of this research. This would be money well spent, and the EU has sufficient resources based on the fee it charges platforms for the implementation of the DSA, which consists of 0.05% of their worldwide annual net income.Footnote 96 The delegated act by the Commission on methodologies and procedures regarding the supervisory fees mentions “studies and external consultants referring to a given designated service, including . . . or analyzing a given category of risk resulting from the risk assessment” as one potential expenditure.Footnote 97 This may provide starting points for funding required under the proposed model.
Many open questions remain, and there is not much time before the Commission and the Board for Digital Services will begin issuing best practices and guidelines which will likely shape risk assessments for years to come. How should social media councils be set up for them to be independent from both platforms and European regulators? Who should sit on these councils, and how exactly will their processes integrate civil society organizations? How exactly should they be plugged into the risk assessment process, and what should their decision-making process look like? How can they base decisions on fundamental rights, and legitimately exercise the discretion which unavoidably remains?
The modest goal of this Article is not to answer these questions but to demonstrate that it is worth pursuing answers, for platforms, civil society and the regulators implementing the DSA. Relying on social media councils has significant advantages for all parties involved: Platforms would benefit from involving social media councils in their process of assessing and mitigating risks, as social media councils could ultimately help decide the hardest questions platforms face, weighing the tradeoffs presented by other stakeholders. If platforms choose to implement the recommendations of the social media council and incorporate them into their strategy to assess and mitigate risks, this conveys credibility to the systemic risk assessments they submit to the EU. In return, platforms could expect positive evaluations and non-interference from the EU institutions and would minimize the risk of fines.
The involvement of social media councils could make civil society stakeholder engagement fairer and more effective. Councils could help strengthen underrepresented voices, reconcile diverse positions, and translate them into concrete solutions. Their deliberative processes and reasoned decisions could lead to reasonable and transparent outcomes.
The involvement of social media councils allows the EU Commission to refrain from defining the concrete standards against which to measure content moderation and curation practices, as any public institution should, while catalyzing a productive, virtuous loop which empowers civil society. Through the platforms’ work with external experts and social media councils, the Commission and the Board for Digital Services receive high-quality risk assessments, on the basis of which they can further define substantive standards governing systemic risk assessments, without unduly interfering with the workings of platforms.
E. Conclusion
Systemic risk assessments indeed pose great, but not entirely unprecedented, challenges. From the first pillars of content moderation governance, self-regulation and individual remedy, we have learned a lot about institutional set-ups and normative frameworks that can help solve the conundrum of taming content moderation. We can now leverage this experience to transform systemic risk assessments into a powerful tool of platform accountability which can help mitigate the social harms of social media. This year, in which obligations around risk assessments will be implemented for the first time, will shape the future of content moderation. If done right, systemic risk assessments will defend the public interest in the space of at-scale content moderation and help protect democracy.
This Article addressed one of the issues raised by the new systemic risk assessment regime, namely how to define substantive standards and reconcile competing interests. It described one piece of the puzzle, aiming to contribute to a fuller picture of how systemic risk assessments could become a success. It argued that we should conceive of the process set up by the DSA as a virtuous loop whose aim is to empower civil society stakeholders. The Commission and the Board for Digital Services could fix flaws of “multistakeholderism” by making civil society involvement mandatory through their best practices and guidelines, and social media councils could reconcile the perspectives of a variety of civil society stakeholders and contribute to fairer outcomes. Social media councils could ultimately be better suited than platforms or the Commission to answer hard questions and make difficult choices. They could be involved early in the process of risk assessments, defining priorities, and could then develop risk advisory opinions for the most important areas, assessing risks, selecting mitigation measures, and tracking whether platforms implement the proposed measures. To make the proposed model a reality, the Commission and the Board for Digital Services must empower civil society organizations, and more concretely, models such as social media councils, through their best practices and guidelines, and need to ensure their funding.
Many open questions remain. The window in which answers to these questions can meaningfully influence the implementation of the DSA might close soon. It is high time for academia and civil society to provide answers and come up with more concrete proposals on how to meaningfully involve civil society in risk assessments. The virtuous loop, as outlined in this Article, can provide a framework for the debate. And the reflections on social media councils can serve as a starting point for a detailed model—or at least as provocations for alternative proposals.
Acknowledgements
This paper is based on my participation in the research clinic PLATFORM://DEMOCRACY of the Humboldt Institute for Internet and Society and the short, related paper “Assessing the systemic risks of curation,” published as a result of that clinic in: Kettemann, Matthias C.; Schulz, Wolfgang (eds.) (2023): Platform://Democracy–Perspectives on Platform Power, Public Values and the Potential of Social Media Councils. Hamburg: Verlag Hans-Bredow-Institut. https://doi.org/10.21241/ssoar.86524, pp. 170–172.
I am grateful to Prof. Dr. Matthias Kettemann, Prof. Dr. Mattias Wendel, Amre Metwally, and the entire community of the Information Society Project at Yale for their support of this project. I also want to thank the organizers and participants of presentations at King’s College London, Solvay Brussels School of Economics and Management, the Hans-Bredow-Institut, and the Humboldt Institute for Internet and Society.
Competing Interests
The author declares none, subject to the following disclosure: I am, as explained above, working at the Oversight Board. This research has been conducted independently from my affiliation with the Oversight Board, in my private capacity as a researcher and Affiliated Fellow of the ISP. It is not funded or supported by the Oversight Board and does not express positions taken by the Oversight Board. All views in the article are strictly the author’s own.
Funding Statement
No specific funding has been received for this article. See the competing interests statement above.