
Governing platforms through corporate risk management: the politics of systemic risk in the Digital Services Act

Published online by Cambridge University Press: 04 June 2025

Rachel Griffin*
Affiliation:
Sciences Po Law School, Paris, France

Abstract

The EU’s 2022 Digital Services Act (DSA) requires large online platforms to assess and mitigate risks to various vaguely defined and contestable values, including fundamental rights; public health and security; civic discourse; and people’s mental and physical wellbeing. The scope of these provisions is thus extraordinarily broad, and they have unsurprisingly attracted significant academic interest. So far, most scholarship has taken a broadly functionalist approach: it accepts the basic premise that certain social impacts of platforms constitute risks which must be managed, and evaluates the DSA’s capacity to achieve this ‘effectively’.

In contrast, this article aims to step back and critically reflect on the underlying conceptual and institutional framework of systemic risk management, based on an explicitly social constructionist understanding of risk as a technology of governance shaped by competing political agendas. Drawing on critical literature on risk regulation from various disciplines and regulatory fields, it identifies two key themes. First, discursively framing political problems as risks tends to exclude issues less amenable to the ‘risk’ framing; to privilege technocratic expertise; and to depoliticise distributive and ideological conflicts. Second, the DSA’s institutional approach to risk regulation delegates primary responsibility for identifying and managing risks to regulated companies, creating well-documented risks of capture. Overall, both themes point to similar conclusions: the systemic risk framework will be highly favourable to the interests of the powerful multinational corporations it seeks to regulate.

In conclusion, however, the article notes that platform regulation may pose distinctive challenges that are not well illuminated by scholarship on risk regulation in other fields – notably because of the widely recognised possibilities for political abuse of more prescriptive regulation of online media. It suggests some avenues for further research into risk politics in the specific context of online media platforms, in particular drawing on scholarship on risk management in critical security studies.

Information

Type
Core analysis
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1. Introduction

In a 1942 short story, Argentinian author Jorge Luis Borges informs us that animals can be divided into 14 categories. These include, among others, ‘stray dogs’, ‘sirens’, ‘belonging to the emperor’, ‘fabulous’, ‘included in the present classification’ and ‘etc’.Footnote 1

In 2022, the European Union (EU) passed the Digital Services Act (DSA), overhauling the regulation of digital intermediary services, in particular platforms publishing user-generated content. Article 34(1) of the DSA is not entirely dissimilar to Borges’ taxonomy. It requires companies running ‘very large online platforms’ (VLOPs; those with over 45 million EU users) to regularly assess and mitigate risks associated with the functioning and use of their services in the following areas: dissemination of illegal content; negative effects on fundamental rights; negative effects on civic discourse, electoral processes or public security; and negative effects in relation to gender-based violence, the protection of public health and of minors, and serious negative consequences for people’s physical and mental wellbeing.Footnote 2 Like Borges’ list, these categories are heterogeneous in their apparent purposes, subject matter, and levels of abstraction; sometimes extremely broadly or vaguely defined; and overall somewhat lacking in coherence.Footnote 3

Characteristically, Borges attributes his taxonomy to a fictional translation of a Chinese encyclopaedia by a real German translator, Franz Kuhn.Footnote 4 Many of his stories feature similar (semi-)fictional attributions. One effect of this literary device is to highlight the situated and socially constructed nature of knowledge and narratives. Arrestingly strange ideas, like the library containing every possible book or the map the same size as the territory,Footnote 5 are not just presented as abstract intellectual curiosities – they come from somewhere and are mediated by someone. Borges’ work suggests that texts are never static: as they travel, they are actively interpreted, translated and reconstructed in different contexts. Similarly, legal texts have no fixed meaning: what law means in practice depends on how it is interpreted and applied, by specific people and institutions under particular conditions.Footnote 6 In practice, Article 34(1) cannot mean everything at once, but nor is its meaning static or predetermined. What systemic risk management comes to mean in practice will depend on how regulators, companies and other stakeholders interpret the DSA, in line with their own material interests and political preferences, and how they try to influence and contest each other’s interpretations.

These points are not particularly original, but nor is the DSA’s regulatory approach. Risk management is regarded by many scholars as a central organising principle of the modern regulatory state,Footnote 7 or of modern industrialised societies in general.Footnote 8 Corporate risk management obligations have become ubiquitous in many regulatory fields, including digital law.Footnote 9 Risk-based regulatory approaches have also been extensively considered and critiqued by scholars in fields ranging from law and regulatory studies to political science, sociology, and science and technology studies. Informed by this literature, this article aims to critically reflect on the political implications of framing all kinds of issues related to platform governance as risks to be managed – and specifically, to be managed through internal corporate procedures.

The DSA systemic risk framework has already attracted significant academic attention, much of which situates it in relation to prior scholarship on risk management and risk regulation.Footnote 10 Typically, this literature focuses on evaluating its ability to achieve goals such as accountability, efficiency or fundamental rights protection; highlighting problems, such as risks of corporate capture; and/or providing practical advice on how VLOPs and regulators should approach risk management.Footnote 11 These can all broadly be classed as functionalist analyses,Footnote 12 in the sense that they adopt the basic premise that certain social impacts of large online platforms constitute risks which need to be managed, and focus on how this might be achieved effectively.Footnote 13

Taking a slightly different approach, this article makes two main contributions. First, instead of evaluating how ‘effectively’ the DSA will achieve particular aims, I step back to consider a logically prior question: what are the implications of framing its policy aims in terms of ‘risks’ in the first place? Second, I take an explicitly social constructionist view of risk, as ‘a way of representing events so they might be made governable in particular ways, with particular techniques, and for particular goals’.Footnote 14

Social constructionist perspectives highlight that the concept of risk is fundamentally ambiguous and open to interpretation, but this does not mean that how risks are constructed is entirely contingent or open-ended. Rather, it is shaped by existing material and institutional structures, and by the goals, interests, and relative power and resources of actors deploying risk management techniques in the service of specific political objectives. Crucially, this means ‘some people have a greater capacity to define risk than others’.Footnote 15 Yet often, the emphasis on private-sector technical expertise that characterises risk regulations like the DSA serves to defuse and disguise political disagreements and conflicting interests. By theorising systemic risks as socially constructed, the article offers new insights into how powerful actors – notably including VLOPs, but also regulatory agencies and other state institutions – will enjoy outsized power to determine how risks are understood and managed.

I focus on two broad themes: the discursive effects of the ‘risk’ framing, and the institutional structure of corporate risk management obligations. In each case, I draw comparisons and insights from critical literature on risk regulation in other fields to analyse the legal text of the DSA and available evidence on its implementation so far. This helps situate the DSA’s risk-based regulatory approach in relation to longer-term (de)regulatory trends, suggesting how it could facilitate corporate capture, while sidelining political contestation of the underlying objectives and values shaping technology and its governance.

However, the relatively novel expansion of risk regulation to media and communications platforms calls for scholarship to go beyond existing literature on corporate risk management. Given the interest of state actors in shaping the construction of risks related to online media and communications in ways that justify political repression, the article concludes by suggesting that scholarship on risk in security, law enforcement and counterterrorism could offer useful perspectives for future research. For example, it could help to theorise the co-production of risk by state and corporate actors, the labelling of certain individuals and social groups as ‘risky’, and the use of algorithmic technologies to manage risks, as well as pointing to empirical methodologies that could help shed light on how DSA systemic risks are constructed in practice.

While this analysis focuses on the DSA, these insights and research directions are of broader relevance: many other technology regulation initiatives in the EU and elsewhere,Footnote 16 as well as regulatory efforts in other fields like sustainability,Footnote 17 follow a similar regulatory approach. As corporate due diligence and risk mitigation obligations become an increasingly ubiquitous response to all kinds of political problems, scholarship needs to move beyond internal critiques of these regulations’ ability to achieve their goals ‘effectively’, and ask more fundamental questions about what problems they are solving and whose interests they serve.

2. Risk regulation in the DSA

A. The systemic risk management framework

The DSA systemic risk framework is set out in Chapter III Section 5, which applies only to the largest online platforms (‘VLOPs’, defined in Article 33 as those with over 45 million monthly active users in the EU). Online platforms are defined in Article 3(i) as online intermediary services which make user-generated content available to the public: this notably includes social media, as well as e-commerce and adult content platforms which host third-party content. Section 5 also applies to search engines (defined in Article 3(j) as services allowing users to query websites based on keywords or other inputs) with over 45 million users.

Twenty-five services are currently designated as VLOPs.Footnote 18 Their owners include companies like Meta (owner of Facebook and Instagram), Alphabet (owner of Google and YouTube) and Microsoft (owner of LinkedIn), which are among the largest in the world, as well as companies which are smaller but dominate their respective markets, like TikTok and Pornhub. In recent years, these ‘big tech’ platforms have attracted increasing attention from policymakers, media and the public – spanning specific policy concerns, such as disinformation and child mental health; more structural political-economic issues, such as concentrated corporate power; and conflicts between basic values, such as tradeoffs between (different understandings of) freedom of expression and public safety.

Arguably, Articles 34–5 represent the DSA’s main way of addressing such concerns. Most other DSA obligations have a narrower scope, generally focusing on individual user rights, illegal content removal, or transparency. In contrast, Articles 34–5 explicitly focus on ‘systemic’ issues and collective and social interests. Article 34(1) requires VLOPs to ‘diligently identify, analyse and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services’. This must include the following areas:

  • dissemination of illegal content;

  • negative effects on fundamental rights;

  • negative effects on civic discourse, electoral processes or public security;

  • negative effects in relation to gender-based violence, the protection of public health and of minors, and serious negative consequences for people’s physical and mental wellbeing.

Assessments must be conducted at least yearly, and before launching ‘functionalities that are likely to have a critical impact on the risks identified’. Article 35 then requires VLOPs to implement ‘reasonable, proportionate and effective mitigation measures’. Article 37 requires independent audits of risk assessments and mitigation measures.Footnote 19 Their adequacy will ultimately be overseen by the Commission (which is equipped with an array of investigatory powers, and can impose fines of up to 6 per cent of worldwide annual turnover: see Chapter IV, Section 4) with input from the European Board for Digital Services (EBDS), representing national regulators (Article 63).

As noted in the introduction, the scope of these articles is extraordinarily broad. For example, I struggle to think of any widely discussed issue in platform governance that could not somehow be framed in terms of fundamental rights (which is not to say that this is always the most useful framingFootnote 20). Attempts to define fundamental rights impacts can at least draw on an extensive body of EU and international case law, scholarship and other legal materials. On the other hand, ‘civic discourse’ is neither an established legal term nor defined in the DSA, and what constitutes good or bad civic discourse is hardly an issue where any consensus exists. In Martin Husovec’s words, Article 34(1)’s overall message is that platform companies should be doing something about risks to ‘everything we cherish’.Footnote 21 The problem is not only that people cherish different things, but also that they understand them very differently.

Articles 34–5 provide some limited further guidance. Article 34(2) states that risk assessments should consider:

  • the design of recommendation systems and other algorithmic systems;Footnote 22

  • content moderation systems;Footnote 23

  • terms and conditions and their enforcement;Footnote 24

  • advertising systems;Footnote 25

  • ‘data-related practices’;Footnote 26

  • ‘intentional manipulation’ of the service;Footnote 27

  • dissemination of content which is illegal or violates terms and conditions;Footnote 28

  • ‘regional and linguistic aspects’Footnote 29

Article 35(1) lists 11 indicative types of risk mitigation measure, including – among others – the following extremely broad categories:

  • ‘adapting the design, features or functioning’ of services;Footnote 30

  • adapting content moderation systems,Footnote 31 recommendations and other algorithmic systems;Footnote 32

  • ‘reinforcing the internal processes, resources, testing, documentation, or supervision of any of their activities’Footnote 33

Overall, this additional guidance is also very wide-ranging, general and abstract. It serves less to clarify the interpretation of Articles 34–5 than to broaden the range of potential interpretations even further by making clear that aspects which are not otherwise major focuses of the DSA, like interface design or internal operational processes, are also within scope.

The DSA also creates numerous avenues for the interpretation of Articles 34–5 to be clarified over time through delegated legislation, soft law and industry standards.Footnote 34 The Commission and EBDS can both issue guidance, which is not officially binding but is likely to be influential (for example, setting expectations for the Commission’s enforcement strategy). Article 45 also provides for codes of conduct to be developed primarily by regulated companies, but with supervision and input from the Commission and EBDS. Non-compliance with relevant codes could be a ground for enforcement proceedings.Footnote 35 Finally, auditing standards and evaluation metrics will likely significantly influence how companies approach compliance in practice, and how compliance is assessed by regulators.Footnote 36 Consultancy services, both from the ‘big four’ companiesFootnote 37 and from specialised platform regulation consultancies like Paris-based Tremau,Footnote 38 could also play an important role in shaping VLOPs’ compliance practices.

The DSA also envisages a role for civil society and independent experts.Footnote 39 In some cases these actors will directly give input to VLOPs and regulators: for example, Recital 90 provides that VLOPs should consult with organisations representing relevant stakeholders during their risk assessment processes, while Article 45 provides for civil society to participate in drafting codes of conduct. More generally, independent researchers – especially academics – are envisaged as providing evidence to guide risk management (which Recital 90 provides should be ‘based on the best available information and scientific insights’) and monitoring VLOPs’ and regulators’ implementation of the risk management framework. Notably, Article 40 requires VLOPs to provide data to facilitate research that ‘contributes to the detection, identification and understanding of systemic risks’. Data access has attracted significant attention from researchers and civil society as a means of strengthening platforms’ accountability to the public.Footnote 40 However, questions could be raised about the resources and capacities available for such independent oversight, and about whether the institutional incentive structures guiding academic research are aligned with the objectives of regulatory oversight.

B. Situating the DSA’s approach to risk regulation

It may seem somewhat surprising that the DSA frames so many disparate values, concerns and aspects of platform governance in terms of risks to be managed. Risks are generally understood as possible harmful future events – classically quantified according to probability and severity.Footnote 41 The DSA is generally not concerned with specific harmful events that might or might not happen, but with ongoing, diffuse and often indirect impacts of VLOPs’ business activities. Empirically, such impacts are typically highly contestable. For example, there is significant scientific uncertainty and disagreement around platforms’ impacts on issues like misinformation, political polarisation and child mental health, extending to basic questions of prevalence and causation, as well as the impacts of policy interventions.Footnote 42 More fundamentally, given the essentially indeterminate and contested meaning of concepts such as fundamental rights, civic discourse or public security, probabilistic approaches to assessing negative impacts seem not only value-laden and contestable but on some level arbitrary.
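For illustration only, the classic quantification referenced above can be rendered schematically as a worked equation (this formula appears nowhere in the DSA or its guidance):

R = \sum_{i} p_i \times s_i

where p_i is the estimated probability of a discrete harmful event i and s_i its severity, so that R approximates an expected loss. The contrast between this model of discrete, quantifiable events and Article 34’s diffuse, ongoing and contested ‘negative effects’ is precisely what makes probabilistic assessment appear arbitrary in this context.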

From another perspective, however, the DSA’s regulatory approach is typical. In recent decades, risk management obligations have steadily expanded across a vast array of regulatory fields – from environmental pollution to financial stability to the supply chains of multinational corporations. Often, the risks in question are defined in far more fluid and value-laden terms than the classic probability/severity definition would suggest. For example, various industry best practices, soft law commitments and – increasingly – binding legal frameworksFootnote 43 now oblige corporations to conduct ‘human rights due diligence’ and assess diverse ‘environmental, social and governance’ risks.Footnote 44 Accordingly, risk assessment has expanded beyond quantitative and probabilistic approaches, to encompass a range of more flexible, qualitative and open-ended evaluative techniques.Footnote 45

In regulatory regimes like the DSA, ‘risk’ functions less as a defined approach to identifying, quantifying and preventing harm, and more as a ‘boundary object’ or common language that can help mobilise a variety of institutional practices, techniques and resources in the service of diverse regulatory goals.Footnote 46 Its ambiguity is part of what makes it useful. Unsurprisingly, then, the term ‘risk regulation’ is also used in diverse contexts and with different, sometimes overlapping meanings.Footnote 47 This article highlights three broad regulatory traditions whose influence is particularly evident in the DSA.

First, regulatory goals can be framed in terms of managing risks to the public. Hood, Rothstein and Baldwin influentially defined risk regulation as ‘governmental interference with market or social processes to control potential adverse consequences’.Footnote 48 Nothing in this definition specifies how restrictively governments should regulate businesses; as they show, risk regulation ‘regimes’ – their term for the ensemble of governmental and non-governmental actors, institutions, norms and practices involved in regulating particular areas – have very different policy goals, regulatory tools and institutional arrangements, encompassing a broad spectrum of stricter and more laissez-faire approaches.Footnote 49

Nonetheless, framing the goals of regulation in this way resonates in several ways with deregulatory ideologies.Footnote 50 Understanding regulation as ‘interference’ with markets assumes an a priori separation between markets and regulation, implying that the former would exist in some natural state without the latter – and that the default option should be non-interference, except where regulation is justified to prevent specific harms. This ‘market naturalist’ framing obscures more subtle ways that state policies structure marketsFootnote 51 (often in ways that actively facilitate concentrations of wealth and powerFootnote 52) and can discourage consideration of more structural reforms, by focusing attention on incremental reforms aimed at mitigating specific negative externalities of businesses’ activities. Indeed, in practice, risk regulation is often accompanied by approaches to ‘cost-benefit analysis’ that require extensive evidence to justify regulatory interventions, serving to block many restrictions on business activities.Footnote 53 Finally, in technology regulation, this framing also resonates with dominant narratives framing regulation as a tradeoff between safety and innovation.Footnote 54 It suggests that governments should allow maximum market-driven innovation, as long as adverse consequences are controlledFootnote 55 – as opposed to questioning the underlying goals and interests deciding what technological ‘innovations’ the private sector develops.

These tendencies are visible in Articles 34–5 DSA. Regulatory objectives are framed in terms of preventing specific ‘negative impacts’ of VLOPs’ activities, which implies that having mitigated these specific risks, they can otherwise continue as normal. It appears likely that the enforcement of this particular regulatory regime will be on the more interventionist end of the spectrum: the Commission initiated five enforcement actions in the first year of its operation.Footnote 56 Nonetheless, any enforcement can be challenged in court – by very well-resourced companies with top-tier legal teams – and will thus demand strong evidentiary justifications.

Second, in what is generally called risk-based regulation, risk does not refer to the regime’s objectives but rather to the allocation of public resources. Regulatory obligations and/or oversight focus on companies or activities considered to pose the most risk to the regulatory regime’s objectives – often assessed using standardised indicators inspired by the classic probability/severity approach.Footnote 57 This often overlaps with ‘responsive regulation’ approaches, where oversight varies depending on how compliant or cooperative businesses have been in the past.Footnote 58 As leading regulation scholar Julia Black points out, the fact that agencies have limited resources and cannot achieve perfect compliance is not distinctive to risk-based regulatory regimes; rather, risk-based regulation seeks to recognise this ubiquitous problem explicitly and manage it in a structured way.Footnote 59 Nonetheless, by emphasising evidence-based and efficient use of government resources, risk-based regulation also resonates with the ethos of the neoliberal era, which saw spending cuts across many areas of government, as well as with deregulatory ideologies prescribing that government restrictions on business should be avoided except where clearly justified.

The DSA’s tiered structure, where VLOPs have stricter obligations because of their large user bases, clearly aligns with risk-based regulation. However, the influence of risk-based approaches can also be seen in how the regulatory obligations for VLOPs are being implemented. At least one Digital Services Coordinator (DSC), the Irish media regulator Coimisiún na Meán (CnaM), is taking an explicitly risk-based approach, assessing companies according to formalised risk factors in order to determine monitoring and enforcement priorities.Footnote 60 Commission officials have stated that they are taking a broadly responsive approach to DSA enforcement, prioritising dialogue with VLOPs and voluntary commitments, and treating formal enforcement as a last resort.Footnote 61
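To illustrate what such a formalised prioritisation exercise might look like, the following sketch (in Python) ranks hypothetical services by a weighted risk score. The factor names, weights and ratings are invented for exposition and are not drawn from CnaM’s or the Commission’s actual methodology.

# Purely illustrative sketch of risk-based supervisory prioritisation.
# Factor names, weights and ratings are hypothetical assumptions.
FACTOR_WEIGHTS = {
    "user_reach": 0.4,          # relative size of the EU user base
    "past_compliance": 0.3,     # record of prior findings or non-cooperation
    "exposure_of_minors": 0.2,
    "electoral_salience": 0.1,
}

def priority_score(ratings: dict[str, float]) -> float:
    """Weighted sum of factor ratings, each rated 0-1 by supervisory staff."""
    return sum(FACTOR_WEIGHTS[f] * ratings.get(f, 0.0) for f in FACTOR_WEIGHTS)

platforms = {
    "Platform A": {"user_reach": 0.9, "past_compliance": 0.7,
                   "exposure_of_minors": 0.8, "electoral_salience": 0.6},
    "Platform B": {"user_reach": 0.5, "past_compliance": 0.2,
                   "exposure_of_minors": 0.3, "electoral_salience": 0.9},
}

# Rank services so that scarce supervisory resources go to the highest scores.
for name, ratings in sorted(platforms.items(),
                            key=lambda kv: priority_score(kv[1]),
                            reverse=True):
    print(f"{name}: {priority_score(ratings):.2f}")

Even this toy example makes clear how much turns on which factors are included and how they are weighted – choices that are themselves contestable and value-laden.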

Finally, risk management can be the basis for the institutional structure of regimes which – like the DSA – require companies to manage specified risks to the public. Regulators play the secondary role of overseeing whether these internal risk management systems are adequate, hence this approach is also sometimes called ‘meta-regulation’.Footnote 62 In effect, it seeks to harness existing corporate systems and resources – notably including ‘enterprise risk management’ (ERM) and auditing practices developed to address commercial risksFootnote 63 – and turn them towards a wider variety of public policy goals.Footnote 64

Historically associated with financial regulation,Footnote 65 this approach has become common in numerous fields, prominently including data protection and technology regulation,Footnote 66 as well as business and human rights. ‘Due diligence’ obligations for multinational corporations to assess and mitigate human rights and sustainability risks in their value chains have rapidly become a ‘new global orthodoxy’,Footnote 67 spreading from influential soft-law standards like the UN Guiding Principles on Business and Human Rights to legal regimes in several jurisdictions, including (following the 2024 Corporate Sustainability Due Diligence Directive) the EU. Articles 34–5 DSA can be seen as the result of cross-fertilisation between these regulatory approaches: they use meta-regulatory tools to regulate the design and operation of complex technological systems, but define their objectives in terms of fundamental rights and other abstract values.

Scholarship on meta-regulation identifies several advantages: it is more flexible and context-sensitive than prescriptive industry-wide standards, and allows regulators to take advantage of companies’ technical and financial resources,Footnote 68 as well as their greater industry- and company-specific knowledge.Footnote 69 On the other hand, scholars have pointed out – and empirically documented in detailFootnote 70 – that it creates obvious room for companies to interpret regulatory goals and standards in self-serving ways, as well as potentially eroding public-sector capacities.Footnote 71 From either perspective, meta-regulatory approaches once again resonate with deregulatory agendas, assuming that governments should be deferential to private-sector expertise and that regulatory obligations should be tailored to minimise burdens on business activities.

In summary, then, the DSA’s risk management framework draws from all three regulatory traditions identified – unsurprisingly, since all three have become increasingly popular in diverse regulatory fields, including other areas of EU tech regulation. Each of the three is in principle compatible with a wide range of regulatory goals and strategies, including more restrictive and precautionary approaches. However, they generally show the influence of the neoliberal era of privatisation and deregulation during which they developed: they dovetail neatly with assumptions that private-sector expertise is superior to public oversight and that regulators should ‘interfere’ with corporate freedom and profits only to the minimum extent necessary. As the rest of this article will show in more detail, these ideological influences are also visible in the DSA.

C. The social construction of systemic risks

A core premise of this article is that risks are socially constructed. This is not to say that harms and dangers do not really exist – for example, that VLOPs do not impact mental wellbeing, civic discourse or fundamental rights – but rather that there is no ‘objective’ way of conceptualising, measuring and responding to such impacts.Footnote 72 Drawing on Foucauldian approaches to theorising the interrelationship of knowledge and power, as well as Latourian perspectives on the social construction of scientific knowledge, social constructionist scholarship has shown how the processes through which issues are recognised as problems, framed as ‘risks’, defined, assessed and finally mitigated are shaped by social institutions, material conditions and power dynamics.Footnote 73 Moreover, risk assessment is not about knowledge for its own sake, but fundamentally about informing decision-making.Footnote 74 Accordingly, how risks are defined necessarily depends on the objectives and values of the organisations involved in risk managementFootnote 75 – and, often, on negotiations and conflicts between actors with competing goals. This encompasses both top-down processes in which political actors mobilise evidence and advocate for their preferred problem framings and priorities,Footnote 76 and more bottom-up processes in which the quotidian knowledge production practices of professionals like academics, consultants or data scientists form the basis for shared understandings of risk.Footnote 77

Relatedly, while classic definitions of risk focus on preventing harm, risk also – especially in commercial contexts – connotes positive opportunities.Footnote 78 The dual imperatives for governments and businesses to take risks, but also to control them, mean that risk management itself presents opportunities for actors who can construct risks in ways that serve their own objectives. Insurance, auditing and consultancy businesses frame risks in ways that help them market their services.Footnote 79 Politicians and state institutions construct risks against which they claim to be able to protect the public, as a way to attract political support or institutional resources.Footnote 80 Thus, while existing political and economic institutions shape how risks are understood and managed, the legal and discursive framework of risk management can also alter existing institutional configurations and create new opportunities for political and economic actors to shape platform governance.

To take a concrete example, Article 34(1)(d) DSA mentions risks to public health, wellbeing and the protection of minors. In this context, several of the Commission’s early investigations and enforcement actions have focused on putative risks to children’s mental health.Footnote 81 A social constructionist perspective would emphasise that, while mental health problems are very real, how they are understood as a policy issue is contingent on many social and institutional factors. The policy choice to make this a priority in DSA enforcement reflects discussions and negotiations between politicians, regulators, VLOPs and other stakeholders (like NGOs) at both national and EU levels. In turn, these actors’ perceptions and priorities are influenced by the knowledge and discourses about platforms’ impacts on child mental health produced by journalists, scientists and other actors.Footnote 82 Some of these actors may frame safety risks in terms of children encountering harmful content – itself a very ambiguous concept, which can be understood in many ways, influenced by different political agendas: from feminist critiques of content promoting exclusionary beauty standardsFootnote 83 to homophobic politics framing information about LGBTQIA+ identities as a threat to children.Footnote 84 Others may associate potential mental health impacts with other aspects of platform governance, such as ‘addictive’ design features and recommendation algorithms.Footnote 85 In turn, other actors may contest these interpretations by emphasising risks to freedom of expression and other values posed by overzealous efforts to shelter children.Footnote 86 There is no external standard or objectively correct understanding of systemic risks to minors against which these different approaches could be measured. Instead, risks are produced through the interactions between these different political agendas, narratives and knowledge production practices.

To some degree, this is true of all forms of risk regulation. Even in fields like pollution and chemicals, whose association with ‘hard’ science and quantifiable indicators can make them appear more objective, defining risks involves contingent and context-dependent choices about the production of scientific evidence, as well as political choices about the distribution of harms and benefits between different social groups – both often heavily lobbied and contested by industry and other stakeholders.Footnote 87 However, there is at least a basic degree of consensus around the nature of the harms involved (cancer rates, nuclear accidents). Article 34 leaves infinitely more room for conflicting interpretations. Some risk areas mentioned (eg, public security, civic discourse) are essentially contested concepts, whose meaning necessarily depends on many other ideological premises. Others (eg, dissemination of illegal content) may appear more straightforward. However, even in these cases, there will be frequent conflicts between different risk areas and between different ways of interpreting, measuring and prioritising them, with no obviously correct answers.Footnote 88 Several other aspects of the DSA’s drafting compound this essential ambiguity. This notably includes the absence of any definition of ‘systemic’;Footnote 89 the lengthy and non-exhaustive lists of risk factors and mitigation measures; and the extensive provisions for risk management standards to be developed and supplemented over time.

Overall, it seems clear that the DSA’s drafters were not trying to set out a detailed and prescriptive approach to risk assessment, but rather to create an open-ended framework which could accommodate diverse regulatory goals and priorities, and could be further developed and specified over time.Footnote 90 Reasons behind this approach could include a desire for flexibility to respond to new technological and economic developments, and to changing political priorities. More pragmatically, ‘constructive ambiguity’ is often necessary to reach consensus on a text – especially in the EU’s complex legislative process.Footnote 91 This makes the concept of risk as a ‘boundary object’ that can connect different institutional practices and perspectives particularly relevant. Policymakers with different or incompatible opinions about what is wrong with platform governance and how it should be reformed were able to agree that platforms pose risks which should be mitigated. Yet this agreement does not resolve conflicts over how platforms should be governed, but merely opens up new spaces for ongoing political struggles over the construction of risks.

Given the centrality of systemic risk management in the regulation of VLOPs – some of the world’s best-known, most valuable and most powerful companies – many stakeholders, with different and conflicting agendas, will be interested in the outcomes of these struggles. Evidently, these stakeholders’ capacities to set political agendas, marshal evidence and establish consensus around their preferred risk framings will largely reflect established inequalities of resources and political power.Footnote 92 However, the DSA’s risk regulation regime is also an important piece of the political opportunity structure within which such conflicts will play out.

From the brief overview above,Footnote 93 it is already apparent that while the DSA delegates significant discretion to VLOPs, it does not give them carte blanche, but aims to establish an ecosystem of government, corporate and civil society stakeholders who will all have input into what issues constitute systemic risks and how VLOPs should manage them.Footnote 94 This aligns with scholarship on risk regulation which argues that open-ended or ambiguous regulatory provisions like Articles 34–5 can be resolved as common understandings of risk coalesce within ‘interpretative communities’ involved in implementing a given regulatory regime.Footnote 95 This is easier ‘within sector-specific regulatory regimes where the regulated sector forms a relatively tight-knit community’.Footnote 96 Such a community is already very visible around the DSA. Conferences, events and consultations regularly bring together regulatory agency staff, academic researchers, NGOs and industry experts, providing opportunities for them to repeatedly meet, exchange information and form professional and social connections.

Importantly, however, access to and participation in this ecosystem is far from equal.Footnote 97 Scholarship on the social amplification of risk argues that interest group politics is a crucial factor shaping how people and institutions understand risks. Stakeholder groups produce, mobilise and frame evidence in order to shape risk management in ways that favour their own ideologies or interests, and that help marshal support for their broader political agendas.Footnote 98 These processes are structured by pervasive disparities of power and expertise. Interest groups who have more resources, relationships with other influential actors, and capacities to produce and engage with expert knowledge will be better able to build consensus behind their preferred risk framings. Consequently, risks are often defined and managed in ways that reinforce existing inequalities.Footnote 99

Accordingly, the following sections will analyse the political implications of the DSA’s regulatory approach in more detail, synthesising existing scholarship on two particular aspects of risk regulation: the discursive effects of framing social and political issues in terms of risks to be managed, and the institutional approach of corporate risk management obligations.

3. Risk, discourse and politics

Critical literature on the various forms of risk regulation discussed in section 2(b) has highlighted several ways that the conceptual framework of risk management structures political discourse – shaping how problems are framed, what solutions are seen as feasible, and what kinds of knowledge and arguments are treated as authoritative. This section identifies three broad critiques that appear particularly relevant to the DSA. First, risk discourse reinforces boundaries – between different issues on the regulatory agenda, but also by excluding some issues entirely. Second, risk discourse privileges technocratic expertise over other forms of knowledge. Finally, because risks are conceived as objective harms – as opposed to distributive decisions that harm some and benefit others – risk discourse is depoliticising, obscuring conflicts between competing ideologies and material interests. These three tendencies help explain why, despite the inherent ambiguity of risk as a concept, risk regulation all too often tends to ‘[ratify] existing distributions of resources’.Footnote 100

A. Boundary reinforcement

Critical accounting scholar Michael Power suggests that risk regulation is essentially inspired by the ‘boundary preserving model’ of enterprise risk management (ERM).Footnote 101 These internal corporate processes, on which the DSA now seeks to build, focus on specifically defined problems, internal decision-making procedures, and auditable metrics and documentation – instead of raising more challenging questions about the unpredictable interactions between companies’ internal systems and their external environment, or about whether their overall goals and operating logics serve the public interest.Footnote 102 Similar critiques are regularly raised in scholarship on ESG, sustainability reporting and climate due diligenceFootnote 103 and could also be applied to the DSA.

For example, in the DSA context, the boundary-preserving tendencies of ERM may be exacerbated by the demands from many stakeholders for industry-standard metrics and benchmarks for particular risks. This may certainly have advantages (such as facilitating comparisons between different companies and over timeFootnote 104) but is unlikely to encourage consideration of complex interactions between risks and mitigation measures in different areas, which would demand context-sensitive approaches that are harder to quantify or standardise.Footnote 105 Empirical platform governance research emphasises that harms often result from complex interactions between platforms, their user bases, and the wider economy and digital and media ecosystems.Footnote 106 For example, standardised metrics of algorithmic bias in content moderation overlook how mistaken moderation decisions are disproportionately harmful to users in situations of economic precarity or vulnerability.Footnote 107 Formalised, standardised risk assessment procedures are inapt to capture such context-dependent differential impacts.Footnote 108
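To make concrete the kind of standardised metric at issue, the following sketch (again in Python, with invented groups and figures) computes a group-level false-positive rate for moderation decisions – the sort of indicator that can report near-parity while saying nothing about how costly each wrongful removal is for differently situated users.

# Illustrative only: a standardised 'bias metric' of the kind critiqued above.
# Group labels and numbers are hypothetical.
moderation_outcomes = {
    # group: (wrongful removals, total posts moderated)
    "group_a": (50, 10_000),
    "group_b": (52, 10_000),
}

for group, (false_positives, total) in moderation_outcomes.items():
    rate = false_positives / total
    print(f"{group}: false-positive rate = {rate:.2%}")

# Both groups show roughly 0.5 per cent, so the metric reports 'parity' -
# even if, for users who depend on the platform for income or support,
# each wrongful removal is far more damaging.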

Assessing risks individually may also obscure conflicts and tensions between different goals mentioned in Article 34(1), and make it difficult to recognise that risk mitigation measures can themselves be harmful. For example, mitigating risks by removing harmful content will necessarily restrict users’ freedom of expression and (given well-documented biases in moderation tools) equal access to platforms.Footnote 109 Article 35(1) DSA recognises this to some extent, by providing that VLOPs choosing mitigation measures should consider their impact on fundamental rights. However, with little guidance on how such impacts should be identified and evaluated, or how potential conflicts should be resolved, it is unclear whether this provision will have much concrete impact.Footnote 110

Second, the DSA requires each VLOP to assess and mitigate risks associated with their own platforms. Coordination between companies is promoted to an extent by some other aspects of the DSA, notably codes of conduct.Footnote 111 However, the risk management procedures used to document and evaluate compliance focus on the practices of individual companies.Footnote 112 This could discourage consideration of issues involving the cumulative effects of many companies’ activities, or business practices that appear justifiable within one company but more problematic at the level of the industry as a whole.Footnote 113 For example, for various commercial and regulatory reasons, most major platforms ban content deemed sexually explicit or overly suggestive.Footnote 114 Many scholars and activists consider this generally excessively restrictive of adults’ freedom of expression, and particularly harmful to certain vulnerable and marginalised groups, such as sex workers and LGBTQIA+ people (especially because such policies are typically enforced in overinclusive and discriminatory waysFootnote 115). Any individual VLOP might conclude that such risks are outweighed by benefits in areas like child safety and privacy; indeed, banning sexual content could be portrayed as a positive compliance measure aimed at mitigating risks in these areas. However, this misses an arguably more important question: what are the implications for freedom of expression generally, and for groups particularly affected by these policies, if people are unable to post and interact with sexual content on all major online platforms?

Power also points out that ERM processes are designed to address risks to companies’ commercial interests, which ‘are more or less an exogenous input into the model with the consequence that it is hard to enlist such a framework in challenging the objectives themselves’.Footnote 116 Regulations like the DSA seek to redirect these processes towards a wider range of public-interest goals. However, they are built on the same corporate systems and share the same structural limitation: because they ask how a given company can better manage risks in the course of its business activities, they sideline questions about whether these activities are in themselves problematic. This is what allows fossil fuel companies to highlight in ESG and due diligence reporting that they are reducing emissions produced during fossil fuel extraction, not mentioning that this business activity is as such incompatible with effective climate policy.Footnote 117

Though a less existential threat to society, the business model of today’s leading search engines and social media platforms – aggregating large volumes of data about users in order to target them with personalised adverts – is also regarded by many as inherently socially destructive. This might be because it threatens privacy, because it incentivises and promotes divisive or sensationalist content, and/or because curating content based on its commercial value to advertisers is inherently exclusionary and sidelines other important goals and values for media governance.Footnote 118 Similar points could be made about dominant e-commerce platforms whose business models revolve around maximising consumer spending regardless of its social and environmental impacts.

Regardless of the merits of particular arguments along these lines, they clearly raise important questions touching on many values mentioned in Article 34(1), such as fundamental rights, media pluralism and users’ well-being. Yet the DSA systemic risk framework provides little space to address such questions. First, evidently, no VLOP is likely to conclude that it should mitigate risks by shutting down its core business. It would also be difficult for regulators, auditors or other stakeholders to argue this: the implicit premise of risk regulation is that businesses can and should continue their activities, as long as specific associated risks are addressed. Second, it could reasonably be argued that the surveillance advertising business model is not in itself unacceptable, but that values such as media pluralism or freedom of expression are unacceptably undermined when all major media platforms have the same business model and there are few comparable alternatives. Such questions are also outside the boundaries of Articles 34–5, which exclusively call for individual VLOPs to assess the impacts of their own services. Finally, this boundary reinforcement also circumscribes the role of regulatory agencies, who are charged with overseeing individual VLOPs’ compliance with Articles 34–5 – that is, ensuring more ‘responsible and diligent’ corporate conductFootnote 119 within existing market structures, rather than more fundamentally reforming them.Footnote 120

B. Technocracy

As Section 2(c) argued, risk management is closely bound up with knowledge production practices which enable the identification and evaluation of risks. Crucially, this also involves practices of ‘authorisation’ which deem some forms of knowledge and evaluative techniques more valid than others.Footnote 121 A consistent theme in scholarship on risk management and risk regulation is that they privilege technical knowledge, scientific evidence and professional expertise over other ways of understanding policy issues. Consequently, risk regulation generally authorises professional classes who have privileged access to scientific and technical expertise to determine how risks should be assessed and managed on everyone else’s behalf.Footnote 122 Insofar as other actors attempt to contest these decisions, they are incentivised to deploy the same discursive framings and forms of evidence that these elite actors deem authoritative.Footnote 123

In fields like chemicals and environmental law, risk regulation traditionally privileged quantitative metrics and ‘hard’ scientific evidence.Footnote 124 As risk regulation has expanded to encompass less quantifiable and more explicitly normative domains, it has also opened up to a wider range of evaluative techniques and forms of evidence.Footnote 125 Importantly, however, these approaches still tend to privilege elite technical knowledge. Corporate human rights due diligence (and its offshoot in the DSA’s concept of systemic risks to fundamental rights) provides a good example. Due diligence obligations not only include obviously unquantifiable normative standards, but explicitly envisage going beyond technical or scientific knowledge, in particular via ‘stakeholder engagement’ with affected communities.Footnote 126 Yet framing such issues in terms of human rights – that is, in the language of a technically complex legal and institutional field, where authoritative claims rely on specialised knowledge of an array of treaties, case law and interpretative practices – still privileges professional experts, and stakeholder groups with the resources to deploy legal expertise.Footnote 127

Reinforcing these tendencies, the DSA’s various mechanisms for risks and mitigation measures to be negotiated and clarified generally seem to assume consensual processes led by technical and professional experts (auditors, consultants, industry experts, academics, etc.) rather than political contestation.Footnote 128 Provisions for civil society consultation might provide more opportunities for discussion of underlying policy goals and values. However, such consultations are framed in terms of gathering ‘evidence’ on objective or self-evident risksFootnote 129 rather than contesting how risks are defined in the first place. For example, Recital 90 recommends that VLOPs consult with organisations representing ‘the groups most affected by the risks’ – obscuring the essentially contestable and value-laden processes involved in determining what ‘the risks’ are and whom they affect. Ultimately, since the DSA’s risk management system centres around technical processes and expert assessments (risk assessments, audit reports, etc.), civil society actors will likely find it easier to influence policy debates if they can also frame their interventions in terms of technical expertise, rather than ideological differences.Footnote 130

C. Depoliticisation

This points to the third group of critiques, which have featured particularly prominently in technology regulation:Footnote 131 risk discourse is depoliticising, obscuring conflicts over how technologies should be designed, owned and operated. To a large extent, this follows from the previous two points. On one hand, privileging technocratic expertise enables regulators, corporations and other powerful actors to present their preferred risk framings as objective or apolitical, making it harder for other actors to contest their political agendas. On the other, many structural policy issues and reforms which are more obviously politically contestable are placed outside the boundaries of regulatory debates.

Many risk regulation measures, including the DSA, elide the question of risks for whom,Footnote 132 instead framing risks in terms of harms to an amorphous, unitary ‘public’ (as in Article 34(1)’s references to public health, security, etc.). Reliance on ‘objective’ technical evidence and metrics can ‘[portray] political decision-making as a neutral pursuit of utility maximization, devoid of winners and losers’.Footnote 133 Yet as noted above, risk management inevitably involves distributive and ideological conflicts. For example, there is no apolitical way to evaluate a mitigation measure that reduces the visibility of political disinformation, but also suppresses independent political news and commentary.Footnote 134

Characteristically, the DSA largely ignores such conflicts. Article 35(1) requires mitigation measures to be ‘reasonable, proportionate and effective’. What these criteria mean fundamentally depends on the goal being pursued: in the absence of consensus around what risk mitigation measures should be aiming to achieve, what is effective or proportionate cannot be evaluated. The DSA offers little guidance, since, arguably intentionally, the listed areas in Articles 34–5 accommodate a huge range of competing goals. The reference to proportionality – a framework classically used to assess and manage conflicts between human rights – might be more relevant to resolving disagreements, along with the reference (also in Article 35(1)) to considering the impact of mitigation measures on fundamental rights. However, these norms still frame conflicts in terms of abstract and universal values, to be appropriately balanced in the interests of society as a whole.Footnote 135

This not only elides questions about who is harmed by risks, but also questions about who benefits from risky activities, and whether they should be undertaken at all.Footnote 136 As section 2(b) argued, risk regulation generally resonates with deregulatory ideologies that assume regulatory intervention should be minimised, given its costs for ‘innovation’Footnote 137 – often framed in political debates as an unqualified good. This sidesteps ‘greater issues of what the proper human meanings, conditions, limits, and purposes of scientific and technological innovation should be’.Footnote 138 Ongoing debates around topics including freedom of political speech and protest,Footnote 139 environmental impacts of digital infrastructure,Footnote 140 and the social (dis)utility of generative AIFootnote 141 make it clear that VLOPs’ business activities implicate important distributive and ideological conflicts: who gets access to (online) media, what kinds of technologies benefit society, and how limited resources should be used. Such questions clearly implicate values mentioned in Article 34(1), such as fundamental rights and public security. Yet framing platform governance in the dry and technocratic terms of risk management provides little space for them to be openly discussed.

4. Risk and institutional power

Depoliticising conflicts over how risks should be understood and managed does not mean they will not be resolved somehow. Article 34 cannot mean everything at once. What it ultimately comes to mean in practice will be determined by the institutions involved in implementing the DSA – most importantly VLOPs and regulators, although other private actors like auditors and civil society organisations will also be influential.Footnote 142 Thus, as well as analysing how the DSA discursively frames policy issues, it is important to consider how risk politics will be shaped by the institutions putting it into practice.

These two aspects are, however, closely related. There is an obvious complementarity between discursively framing policy issues as apolitical technical problems, and delegating their management to private companies, professional auditors and independent experts. In this respect, as section 2(b) showed, the DSA draws on ‘meta-regulatory’ approaches that are well-established in other fields, such as financial and environmental regulation and business and human rights. A correspondingly longstanding body of research has highlighted drawbacks of this approach – in particular, its tendency to shift power to corporations, which are often able to define risks and mitigation measures in ways that serve their own interests.

A. An essentially misguided project?

The strongest critical perspectives would hold that corporate risk management and due diligence obligations are fundamentally unhelpful responses to economic injustice, environmental degradation and human rights violations, because they work within the existing legal and economic structures of corporate capitalism, and thus exclude from the outset the structural reforms necessary to address such issues.Footnote 143 Arguably, risk management obligations not only ignore but actively undermine such structural changes, by legitimising corporations and helping them resist more interventionist regulatory proposals.Footnote 144

These arguments are certainly (and perhaps especially) relevant to the VLOPs regulated by the DSA systemic risk framework. A growing body of scholarship argues that their business models are fundamentally economically unjust and environmentally destructive, in light of factors like their structural reliance on low-paid labour and heavily polluting and resource-intensive digital infrastructure.Footnote 145 Evidently, issues around global justice and sustainability are not the DSA’s primary focus.Footnote 146 However, even focusing purely on the DSA’s core project of regulating the governance of user-generated content, it has been argued – as section 3(a) discussed – that business models revolving around surveillance advertising, engagement maximisation and commercial content curation are inherently socially harmful. On this basis, it could also be argued that the DSA’s risk management system stabilises and legitimises these harmful business models, at the expense of more radical regulatory efforts aiming to establish alternative models of platform governance.

I am generally sympathetic to such arguments. However, they apply not only to risk management, but to any approach focused on regulating rather than dismantling corporate platforms. This does not mean that all regulatory approaches are the same, or unworthy of further analysis. Regulation which does not directly attack corporate power can still meaningfully limit it or create opportunities for contestation.Footnote 147 The following subsections thus aim to reflect on the specific regulatory approach chosen in the DSA – and to present a critical assessment of its approach to risk regulation that should be convincing regardless of the reader’s views on the more radical critiques discussed above.

B. Corporate capture

Considering different actors’ power to shape risk management, scholarship on risk regulation has in particular emphasised the problem of corporate capture.Footnote 148 Given the deregulatory orientation of risk regulation in general, and the discretion accorded to corporations in meta-regulatory regimes like the DSA, risks and mitigation measures are obviously susceptible to being defined in ways that suit corporate interests – especially where the corporations involved are as powerful and well-resourced as most VLOPs.Footnote 149

First, compliance teams can deliberately and strategically construct risks in ways that serve the company’s commercial interests. Given the vast scope of Articles 34–5 DSA, risk assessments will necessarily involve selecting and prioritising a limited number of ‘risks’ from among a nearly infinite number of potentially-relevant issues, then deciding how to define and measure those risks, and finally identifying and selecting among possible mitigation measures. Evidently, as companies are making these decisions, not all of them will be equally appealing from a commercial perspective: some will be more costly or more disruptive to business operations than others. In this situation, VLOPs obviously have an incentive to choose those that are least costly.Footnote 150 There is also plenty of scope to minimise costs by reframing existing business practices (or minor modifications thereof) as DSA risk mitigation measures. For example, existing ‘brand safety’ tools developed to suppress ‘non-family-friendly’ content that is unappealing to advertisersFootnote 151 could be reframed as tools addressing risks to child safety.

Second, more generally, even where compliance staff are sincerely motivated to pursue public-interest regulatory objectives, compliance processes established within for-profit companies cannot be fully insulated from commercial considerations. Representatives of VLOPs have explicitly stated that they want DSA risk management processes to be maximally scalable and efficient, and to build on existing ERM and human rights due diligence processesFootnote 152 – which are essentially designed to mitigate commercial and reputational risks.Footnote 153 Specialist DSA consultancy Tremau has suggested that VLOPs should look at DSA risk assessments as investments that can also be leveraged for business goals, like improving customer satisfaction.Footnote 154 These assessments will also necessarily rely on internal databases and analytics tools originally designed for business purposes. Thus, even if corporate compliance departments are not consciously and strategically seeking to minimise costs, systemic risk management will inevitably be guided by data, resources and decision-making processes geared towards commercial objectives.

Compliance departments will have limited staff and resources (indeed, journalistic investigations and leaks suggest even the largest VLOPs’ ‘trust and safety’ teams are perennially understaffedFootnote 155). They may also find it difficult to persuade other teams whose performance metrics are based on revenue and other commercial goals to allocate resources to regulatory compliance and risk mitigation. These constraints would generally incentivise compliance staff to focus on more superficial and less disruptive mitigation measures: for example, rephrasing content policies, instead of improving automated moderation systems, which would require more work from highly-paid and in-demand software engineers.

Notably, Article 41 DSA requires VLOPs to establish an independent compliance department, structurally separated from other management functions. This has advantages as a way of (somewhat) insulating risk assessments from commercial goals. However, it could also make it easier for VLOPs to structurally sideline compliance staff, such that their activities (assessing risks, issuing guidance, etc.) signal to regulators and other actors that the company takes compliance seriously, but have little impact on other departments’ activities.Footnote 156

Indeed, regulatory and sociolegal scholarship has frequently observed a tendency for internal risk management processes to devolve into ‘box-checking’ or ‘cosmetic compliance’: companies focus on implementing procedures which signal compliance to regulators and other stakeholders, but achieve little substantive change.Footnote 157 This is particularly common in regimes like the DSA which mandate procedures companies should follow (assessing and mitigating risks, publishing reports, commissioning and responding to audits) rather than substantive changes. Often, this encourages what Power calls ‘secondary risk management’: organisations focus on avoiding the risks to themselves if their risk management systems are deemed inadequate, by ensuring that other actors see them following appropriate procedures, more than on the ‘primary’ risks they are actually charged with managing.Footnote 158

Sociolegal scholarship also highlights several other characteristics of regulatory regimes that facilitate cosmetic compliance: vague or ambiguous legal rules, proliferation of regulatory guidance and standards from different sources, and lack of transparency.Footnote 159 All these features are visible in the DSA. For example, the vague and open-ended nature of Articles 34–5 will hinder external accountability. As emphasised above, it is a core feature of this regulatory framework that VLOPs have to choose between vast numbers of possible risk framings, assessment metrics and mitigation measures; these choices can be self-serving without being legally unjustifiable. Moreover, any enforcement decision finding that VLOPs’ preferred interpretation of Articles 34–5 is legally impermissible will be open to legal challenge, and well-resourced large corporations typically have a structural advantage in such legal proceedings.Footnote 160 Indeed, the mere possibility of such litigation could incentivise the Commission to focus on ‘safe’ interpretations of Articles 34–5 and avoid pushing for more radical changes in business practices – especially because, as section 3 argued, this kind of incrementalist approach is generally encouraged by the discursive framing of risk management.

The lack of transparency around risk management is also notable. VLOPs are required to publish only summary reports compiling their risk assessments, audit reports and responses to the audits, and only within three months of receiving the audit report (which could be a year after the original risk assessment).Footnote 161 Civil society organisations have repeatedly stressed that the opacity of risk assessments not only prevents them from scrutinising and criticising particular aspects of VLOPs’ risk management, but also, more generally, from understanding what kinds of risks and mitigation measures are being discussed and where they might want to focus their own resources.Footnote 162 The delay also increases VLOPs’ power to set the agenda for subsequent policy debates, as their risk assessments will highlight issues, formulate problem framings and propose solutions to which other stakeholders must later respond.Footnote 163

In this vein, an important strand of literature on corporate capture argues that corporations can not only approach internal risk management processes in self-serving ways, but can also influence how regulators and external stakeholders understand the regulatory regime. Sociolegal scholars Lauren Edelman and Ari Ezra Waldman theorise this as ‘legal endogeneity’, describing how regulatory interpretation comes to be shaped by the preferences of regulated companies.Footnote 164 Other scholars have discussed the influence of regulated companies on regulatory agencies and broader policy debates in terms of ‘discursive capture’Footnote 165 or ‘cultural capture’.Footnote 166

This influence can operate through several mechanisms.Footnote 167 Technical expertise, ample resources, and prominence at industry events and policy discussions can enable large corporations to establish widely recognised best practices that influence other actors’ expectations of ‘appropriate’ risk management.Footnote 168 They can also directly influence regulators’ priorities and perceptions of policy issues through lobbying and advocacy.Footnote 169 Finally, other actors may rely on their technical knowledge and tools to understand risks. For example, in the DSA, external research and compliance evaluations by regulators, auditors and independent researchers will largely rely on VLOPs’ own reports, databases and APIs, and will thus be influenced by their preferred metrics and problem framings.

Many characteristics of the DSA discussed in this section seem to be intentional and defensible regulatory choices. As section 2(c) noted, favouring flexible standards and procedures over prescriptive rules can accommodate changing regulatory contexts and priorities, and allow an expert community to negotiate shared understandings of appropriate risk management over time. The danger is that because VLOPs bear primary responsibility for defining and managing risks, and because of their financial and technical resources, these shared understandings will ultimately be dominated by industry perspectives and preferences.

C. Risk and state power

Much of the literature discussed in subsection 4(b) suggests, implicitly or explicitly, that the alternative should be greater involvement of state institutions – understood as representing public interests, or at least being more accountable to them than companies. Prescriptions to avoid corporate capture often involve increasing regulatory agencies’ involvement in risk management, and strengthening their capacities to monitor companies and set substantive standards.Footnote 170 More radical proposals aiming to curtail corporate power might focus on setting regulatory ‘red lines’ that clearly prohibit or mandate certain activities.Footnote 171

So far, the Commission’s DSA enforcement strategy seems to take some inspiration from both of these approaches. On the one hand, the Commission (like other regulators in important member states, such as Ireland’s CnaM) has built up a large DSA enforcement team, and claims to be in regular, ongoing dialogue with VLOPs regarding risk management and other DSA obligations.Footnote 172 On the other, it has also started issuing fairly prescriptive official guidance on certain risks (eg, child safetyFootnote 173). From the perspective of the scholarship discussed above, these efforts might be celebrated as strengthening public institutions’ capacities to shape risk management in line with the public interest. In fact, however, they have often met with concern from civil society about state institutions exercising excessive influence over online freedom of expression.Footnote 174 In particular, given the importance of soft law standards and unofficial guidance within the DSA framework, and the Commission’s emphasis on dialogue with VLOPs and voluntary commitments, there are concerns about state actors regulating online speech by informally encouraging VLOPs to manage risks in certain ways, as opposed to legal procedures that are public and open to challenge.Footnote 175

This points to a gap in the existing literature on risk regulation, and an important topic for further research on the DSA: what does it mean when regulatory tools and techniques traditionally used in areas like chemicals, environmental damage or financial stability are redeployed to regulate media and communications? Many (though by no means all) of the VLOPs regulated by the DSA are social media and search engines. These platforms occupy a central place in contemporary media and information ecosystems, exercising significant influence over areas like the production and distribution of news journalism, the organisation of activism and protest, and everyday political discussions. As such, it is widely recognised that prescriptive state regulation – in particular, when it relates to how platforms moderate users’ content – is highly likely to be deployed in ways that limit political freedoms, repress dissent, and/or target politically unpopular minorities. Navigating this tension between unchecked corporate power and excessive state control has long been a central problem of media law.Footnote 176 However, risk management has historically not played a prominent role in media regulation.Footnote 177 Conversely, literature on risk regulation in other fields may not fully illuminate its particular dynamics in the platform regulation context.Footnote 178

First, unlike environmental or financial regulation, the regulation of online media content inevitably raises complex issues around freedom of expression, access to information and other civil liberties.Footnote 179 Importantly, while regulatory obligations (or incentives) are susceptible to being used to intentionally target politically disfavoured speech, this is not the only problematic use of state power.Footnote 180 Even where state-mandated restrictions on content are aimed at (perceived) genuine harms such as hate speech, they are still likely to disproportionately affect marginalised social groups and political perspectives.Footnote 181 For example, if platforms seek to meet the Commission’s expectations around the mitigation of risks associated with specific types of illegal content by developing or adjusting automated moderation software to remove more such content,Footnote 182 this will necessarily also increase rates of ‘false positives’Footnote 183 – which tend to disproportionately affect marginalised social groups, due to structural biases within state institutions and platform companies.Footnote 184
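To make the trade-off just described concrete, the following simplified sketch may be helpful. It is purely hypothetical: the volumes, prevalence figures and classifier operating points are invented for illustration and do not describe any VLOP’s actual systems. It shows how lowering a moderation classifier’s removal threshold to catch more violating content also removes more lawful content, and how the low base rate of genuinely violating material inflates the absolute number of wrongful removals.

```python
# Illustrative sketch (hypothetical figures): how lowering a moderation
# classifier's removal threshold trades false negatives for false positives.

def removal_outcomes(recall, false_positive_rate, base_rate, total_posts):
    """Estimate how many violating and lawful posts are removed at a given
    operating point of a (hypothetical) content classifier."""
    violating = total_posts * base_rate
    lawful = total_posts - violating
    caught = violating * recall                      # violating posts removed
    wrongly_removed = lawful * false_positive_rate   # lawful posts removed in error
    return caught, wrongly_removed

TOTAL_POSTS = 1_000_000   # hypothetical daily volume
BASE_RATE = 0.005         # assume only 0.5% of posts actually violate the rule

# Two hypothetical operating points: lowering the threshold raises recall
# (more violations caught) but also raises the false positive rate.
operating_points = {
    "cautious threshold":   {"recall": 0.60, "false_positive_rate": 0.001},
    "aggressive threshold": {"recall": 0.85, "false_positive_rate": 0.010},
}

for name, point in operating_points.items():
    caught, wrongly_removed = removal_outcomes(
        point["recall"], point["false_positive_rate"], BASE_RATE, TOTAL_POSTS
    )
    print(f"{name}: {caught:,.0f} violating posts removed, "
          f"{wrongly_removed:,.0f} lawful posts wrongly removed")

# Hypothetical output:
#   cautious threshold: 3,000 violating posts removed, 995 lawful posts wrongly removed
#   aggressive threshold: 4,250 violating posts removed, 9,950 lawful posts wrongly removed
```

In this stylised scenario, the more aggressive threshold removes around 1,250 additional violating posts but roughly 9,000 additional lawful ones; which users’ lawful speech bears that cost then depends on the biases embedded in the underlying classifiers and enforcement priorities.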

Second, compared to the other regulatory fields mentioned, online content regulation more prominently involves heavily securitised policy areas such as disinformation, so-called ‘terrorist’ content, and child safety.Footnote 185 Police, security and law enforcement agencies are also involved in platform governance and regulatory enforcement: for example, providing lists of ‘extremist’ organisations whose content should be removed,Footnote 186 or reporting content to platforms for removal – often based on alleged illegality, but also often simply on the basis that it violates companies’ in-house content policies.Footnote 187 This involvement is also recognised and further institutionalised in the DSA. For example, law enforcement agencies can be certified as ‘trusted flaggers’ whose reports to platforms must be prioritised.Footnote 188

As in other securitised policy fields, the ‘risk’ framing and the delegation of risk management to VLOPs shift decision-making power not only towards corporations, but also towards executive institutions. Evidently, prescriptive legal rules restricting speech, expression and media are also open to political abuse (as has been amply demonstrated by recent Europe-wide crackdowns on the pro-Palestine and climate movementsFootnote 189). However, making and interpreting such restrictions heavily involves courts and legislatures, whose decisions are at least relatively transparent and contestable (as illustrated by the extensive documentation, public criticism and legal challenges of the aforementioned crackdowns). In contrast, framing online content regulation in terms of technical problems to be solved by private-sector experts shifts power from courts and legislatures towards the regulatory agencies overseeing corporations’ internal risk management processes, which are themselves opaque and difficult to challenge.

This regulatory oversight may formally focus on procedures rather than substantive standards,Footnote 190 but by developing best practices for risk management and (implicitly or explicitly) threatening enforcement action, regulators can effectively pressure VLOPs to restrict content they deem harmful.Footnote 191 For example, in the context of increasing regulatory attention to platforms’ putative impacts on child mental health (see section 2(c)), several VLOPs recently launched a new tool to coordinate identification and moderation of content deemed potentially harmful to children, such as content encouraging self-harm.Footnote 192 This is notable because there would generally be no legal obligation for them to remove such content. Thus, this development illustrates both the ‘function creep’ of moderation tools beyond their original purposes,Footnote 193 and the possibility for governments to use the construction of systemic risks to encourage platforms to regulate user content in ways that go beyond their strict legal obligations.Footnote 194 The Commission has also publicly suggested that effectively mitigating risks should include promptly removing content reported by law enforcement.Footnote 195 In effect, then, risk management obligations could offer a further incentive for VLOPs to comply with law enforcement requests – even where they do not allege illegality, but only breaches of platforms’ terms and conditions, or where their justifications are less than convincingFootnote 196 – and could thereby facilitate content removals which circumvent formal legal processes. In this context, civil society groups have raised well-founded concerns that governments, regulators and police could use risk mitigation obligations to pressure platforms into suppressing political advocacy and activism.Footnote 197

As such, theorising DSA systemic risks only in terms of deregulatory agendas and corporate power, as this article has mostly done so far, seems incomplete – empirically, because it is insufficiently attentive to the role of state institutions in constructing risks, but also normatively, insofar as it points to alternative regulatory approaches that give state agencies even more power to regulate online content. The expansion of risk regulation to platform governance calls for more research on the intersections between state and corporate power in the construction and management of risk.

Importantly, it would be a mistake to conceptualise the two as opposite poles on a spectrum, such that more state intervention means less corporate capture and vice versa. Regulators’ and politicians’ political objectives and preferred risk framings may sometimes conflict with those of VLOPs, but may also often coincide. In particular, as section 4(b) argued, VLOPs will generally be incentivised to frame risks in terms that require relatively little investment of resources or disruption to business operations. If state actors wish to construct risks in ways that focus on monitoring and controlling political speech, their goals will be largely compatible with those incentives. Both will be served by constructing risks in terms of harmful content and user activities that need to be identified and suppressed – which can be achieved through relatively small adjustments to existing moderation systems – rather than in terms of structural problems in platform design and governance. From this perspective, the boundary-reinforcing, technocratic and depoliticising nature of risk discourse is advantageous for both VLOPs and governments: it can legitimise monitoring and surveillance of ‘risky’ user activity, while also deflecting attention from more structural critiques of corporate platform governance that would be in neither of their interests.Footnote 198

D. Critical security studies perspectives on risk

In this context, one promising avenue for future investigation of the politics of systemic risks in the DSA could be to draw from the extensive and theoretically rich literature on risk management as a mode of political power in fields like critical security studies and criminology.Footnote 199 Space does not permit more than a very brief discussion of this literature. However, a few points suggest potentially generative connections with scholarship on risk regulation and corporate risk management in the DSA.

First, like the sociological and STS literature on risk regulation relied on in this paper, scholarship on risk management in security, counterterrorism and law enforcement has generally been strongly influenced by social constructionist understandings of risk (in particular by Foucauldian perspectives on risk as a mode of governmentalityFootnote 200). On this basis, it provides insights into how politicians and state institutions discursively construct risks in ways that justify state repression, curtail civil liberties and target stigmatised minorities.Footnote 201 There are already examples of risk being mobilised in this way in the DSA context.Footnote 202 More research is needed on how regulators, politicians, professional experts and other stakeholders discursively construct particular social groups or activities as ‘risky’ within the DSA framework.

Second, although scholarship on risk in security studies has effectively illuminated how state institutions like police, counterterrorism and border agencies exercise power through risk management, it is far from exclusively focused on government institutions – instead tending to emphasise how risks are constructed and managed by heterogeneous networks involving different state institutions, political actors, international and supranational organisations, and private actors.Footnote 203 For example, private companies not only provide software and analytics tools that state institutions use to manage security risks,Footnote 204 but may also enforce their own risk management standards,Footnote 205 as well as intervening in policy discussions to promote particular risk framings and narratives.Footnote 206 This underlines the point that state and corporate power to construct risks are not necessarily in tension. Further research could integrate insights from security studies scholarship with scholarship on risk regulation in order to better analyse how knowledge and discourses about DSA systemic risks are co-produced by networks of actors and institutions spanning the public and private sectors.

In this regard, existing scholarship integrating security studies and sociolegal analysis could provide valuable methodological as well as theoretical directions. Such work has used empirical, often multi-sited or mixed-methods, approaches to explore how risk governance plays out in practice and how legal norms, technologies and institutional practices interrelate and reciprocally shape one another.Footnote 207 Especially given the extremely ambiguous and open-ended nature of Articles 34–5 DSA, and the range of state, corporate and non-governmental actors who could potentially be involved in shaping risk management, empirical research that is methodologically eclectic and focused on the sociotechnical practices that translate and materialise legal standards will be essential for understanding how the DSA systemic risk framework plays out in practice, and its political implications.

Third, critical security studies scholarship has in particular explored the use of algorithmic risk assessment technologies in contexts like border controls and counterterrorism.Footnote 208 Here, parallels could be drawn with the automated filtering tools which are now ubiquitous in commercial content moderation.Footnote 209 Given regulators’ apparent focus on removal of harmful content as a key risk mitigation measure, expansion and fine-tuning of automated moderation seems likely to play a central role in VLOPs’ compliance efforts. Broadly, these tools can function by cross-referencing existing databases of banned content, often shared across the industry, or by using machine learning to evaluate signals from the content and its metadata, to estimate an overall probability that the content violates a given rule.Footnote 210 As such, there are clear parallels with risk assessment tools used in border controls – which may rely on relatively simple lists of banned or risky passengers, but may also use more sophisticated algorithmic technologies and integrate data from diverse sources to produce opaque aggregated risk scores.Footnote 211 Security studies scholarship on algorithmic risk governance could help to illuminate the power dynamics, political agendas and understandings of risk encoded and reproduced by automated moderation tools – and conversely, how the functioning of these tools and the data they produce shape perceptions of risk.
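For readers less familiar with these tools, the following minimal sketch indicates how the two mechanisms described above – database matching and signal-based scoring – can fit together. It is a hypothetical illustration only: the signals, weights, threshold and database contents are invented, and real systems rely on trained machine-learning models and perceptual hashing rather than the toy heuristics shown here.

```python
# Illustrative sketch (all identifiers, signals, weights and thresholds are
# hypothetical) of two common automated moderation mechanisms:
#   1. matching content against a shared database of known banned items;
#   2. scoring content against a rule using signals from the item and its metadata.

import hashlib
from dataclasses import dataclass

# Hypothetical stand-in for an industry-shared database of banned content,
# represented here as a set of cryptographic hashes of known items.
BANNED_HASHES = {hashlib.sha256(b"previously banned item").hexdigest()}

@dataclass
class Post:
    content: bytes         # raw content (eg, image or video bytes)
    text: str              # accompanying text
    account_age_days: int  # metadata signal
    prior_strikes: int     # metadata signal: past moderation actions

def matches_banned_database(post: Post) -> bool:
    """Mechanism 1: look the content up in a shared database of banned items.
    Real systems typically use perceptual rather than exact hashing."""
    return hashlib.sha256(post.content).hexdigest() in BANNED_HASHES

def violation_score(post: Post) -> float:
    """Mechanism 2: combine content and metadata signals into a rough score.
    A real system would use a trained machine-learning model; this toy version
    simply adds hand-picked weights for illustration."""
    score = 0.0
    if "buy followers" in post.text.lower():  # hypothetical content signal
        score += 0.6
    if post.account_age_days < 7:             # metadata: very new account
        score += 0.2
    if post.prior_strikes > 2:                # metadata: history of violations
        score += 0.2
    return min(score, 1.0)

def moderate(post: Post, threshold: float = 0.7) -> str:
    """Apply both mechanisms and return a (hypothetical) moderation decision."""
    if matches_banned_database(post):
        return "remove: matched banned-content database"
    if violation_score(post) >= threshold:
        return "remove or queue for human review: high violation score"
    return "keep"

# Example usage with an invented post:
example = Post(content=b"some media bytes", text="Buy followers now!",
               account_age_days=3, prior_strikes=0)
print(moderate(example))  # -> "remove or queue for human review: high violation score"
```

Even in this toy form, the political stakes are visible: what the shared database contains, which signals are selected, how they are weighted and where the threshold sits are all prior judgments about what counts as ‘risky’ – and these are precisely the judgments that the actors and power dynamics discussed in this section shape.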

That said, there are important differences between these two contexts. Scholarship on border controls and state surveillance has most often focused on how risk assessment technologies attach risk to people, enabling the exercise of power in highly individualised and selective ways: for example, barring ‘risky’ individuals from travelling while enabling free movement for others.Footnote 212 One open question is how theorisations of (algorithmic) risk governance in security studies might need to be rethought or developed for a regulatory context where risk management techniques are used to regulate large-scale business operations, rather than individual activities and movements. The DSA could also be framed as attempting to monitor and regulate (indirectly, through risk mitigation obligations for VLOPs) risky online behaviours or patterns of activity. This is reminiscent of research in security and surveillance studies which has deployed Deleuze’s concept of the ‘dividual’ to suggest that algorithmic risk management tools govern not (only) by tracking identified individuals, but through fragmentary, constantly changing statistical representations of characteristics or behaviours deemed relevant at a given moment.Footnote 213 This could be a generative way to understand and critique systemic risk management – as an effort by both states and VLOPs to control online communication by modulating the algorithmic regulation of particular interactions at the level of the platform network, instead of enforcing the law against individual users.Footnote 214 It could also be useful to draw on scholarship on counterterrorism and security risks in financial regulation, which can similarly be located at the intersection between the regulation of people and their movements and the regulation of multinational businesses.Footnote 215

Finally, scholarship on both corporate risk managementFootnote 216 and risk governance in security studiesFootnote 217 has highlighted that while the discourses, techniques and institutions of risk management tend to be mobilised in ways that serve powerful state and corporate interests, these are not unilateral top-down processes, but also present opportunities for politicisation and contestation of dominant understandings of risk. In this regard, scholarship on risk regulation often calls for more institutionalised involvement of affected communities, civil society organisations and other external stakeholders, as a way of democratising risk management and contesting dominant framings.Footnote 218 Such calls for stakeholder participation are also very widespread in the context of platform regulation, both in academic scholarshipFootnote 219 and from industry actors,Footnote 220 where it is often portrayed as the necessary corrective to excessive state and corporate power.Footnote 221

Yet in this literature, multistakeholder participation and deliberation are often framed in terms that reinforce rather than challenge the depoliticising tendencies of risk discourse: as a way to consider everyone’s interests, and especially the interests of the most vulnerable and marginalised groups.Footnote 222 Such framings overlook the significant imbalances of power and resources that mean certain stakeholder groups – typically those representing well-organised and well-funded interest groups, who enjoy privileged access to expert communities and policymaking circles – are far better able to mobilise for their preferred understandings of risk than others.Footnote 223 Future research exploring how civil society groups and other actors contest dominant understandings of systemic risk in the DSA could also benefit from drawing on critical security studies literature, which has tended to place more emphasis on how established institutions and power dynamics structure the construction and contestation of risk.

5. Conclusion

At first glance, the long list of exceedingly broadly defined risk areas in Article 34(1) could suggest that DSA systemic risks can mean anything and everything. In fact, however, the social constructionist analysis presented in this article shows that the DSA’s systemic risk provisions are less open-ended or contingent than they might appear.Footnote 224 As Borges’ work reminds us, texts do not exist in a vacuum: they are interpreted by specific people, in specific places and contexts, with particular objectives which they attempt to pursue within given constraints. Risk regulation is always shaped by pre-existing institutions, norms, resource distributions and power relationships. In practice, this more often than not means that risks are constructed in ways that favour powerful interests.

By situating the DSA’s regulatory strategy in the longer history of critical scholarship on risk regulation, this article has shown how it aligns with a neoliberal, deregulatory ethos, based on the premise that regulation should interfere with corporate freedom and profits only to the minimum extent justified by specific threats to the public interest. In particular, two core features of the systemic risk management regime – its framing of diverse and value-laden political questions as risks to be managed through technical expertise, and its meta-regulatory structure which delegates primary responsibility for defining, prioritising and mitigating risks to regulated companies – will tend to reinforce the status quo, in which platform governance is dominated by the commercial interests of ‘big tech’.

At the same time, using examples from the implementation of the DSA to date, this article has illustrated how risk regulation in the context of online platforms and media may present some new and distinctive problems. In particular, the prominent role of law enforcement institutions and security discourses suggests that state actors may push for risks to be constructed and managed in ways that involve intensified monitoring and control of user communications. This may sometimes conflict with VLOPs’ interests; however, their different but overlapping interests can also work together to co-produce understandings of risk that facilitate both private profits and state security objectives. This potential is already visible in the context of the DSA, as regulators and corporations appear to be favouring understandings of systemic risks and mitigation measures that centre around monitoring and controlling user content. This latest development in the long history of risk regulation calls for more critical normative and empirical research into how risks are constructed and contested in the context of platform governance.

Acknowledgements

Thank you to Riccardo Fornasari, Clara Iglesias Keller, Daphne Keller, Paddy Leerssen, João C. Magalhães, Ljubiša Metikoš, Lukas Seiling, Daniel Sneiss and Gavin Sullivan for their thoughtful comments on earlier drafts. Thank you also to the organisers, participants and staff at the FernUni Hagen Tausend Plattformen conference, the Université de Lille conference on « Law & Political Economy : Droit et rapports de domination – Pour une approche critique », and the Sciences Po conference on Algorithmic Transparency and the Digital Rule of Law, for opportunities to present and discuss this work.

Funding statement

This work was supported by a research grant from the Project Liberty Institute and by a fellowship at the Weizenbaum Institute for the Networked Society. Colleagues from the Weizenbaum Institute offered feedback on a presentation of an early version of this article. Otherwise, these funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests

The author has no conflicts of interest to declare.

References

1 JL Borges, ‘The Analytical Language of John Wilkins’ (Lilia Graciela Vasquez tr, Alamut, 15 July 1999) <https://www.alamut.com/subj/artiface/language/johnWilkins.html> accessed 29 October 2024.

2 Art 34(1), Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance) [2022] OJ L277/1 (‘DSA’).

3 For example, violence of all kinds impacts fundamental rights and physical and mental wellbeing, so it is unclear why gender-based violence must be mentioned specifically (but not violence targeting other vulnerable or marginalised groups). It might be taken as a reference to the new categories of cybercrime, like gender-based online harassment, prescribed by the new Directive (EU) 2024/1385 of the European Parliament and of the Council of 14 May 2024 on combating violence against women and domestic violence (‘Gender-Based Violence Directive’) – but would such categories not be subsumed under illegal content?

4 Borges (n 1).

5 The latter image was memorably deployed by Daphne Keller in an essay on the DSA: D Keller, ‘The DSA’s Industrial Model for Content Moderation’ (Verfassungsblog, 24 February 2022) <https://verfassungsblog.de/dsa-industrial-model/> accessed 23 October 2024.

6 N Tzouvala, Capitalism as Civilisation: A History of International Law (Cambridge University Press 2020).

7 C Hood, H Rothstein and R Baldwin, The Government of Risk: Understanding Risk Regulation Regimes (Oxford University Press 2001); J Black, ‘The Emergence of Risk-Based Regulation and the New Public Risk Management in the United Kingdom’ (2005) Public Law 512.

8 U Beck, Risk Society: Towards a New Modernity (Mark Ritter tr, Sage Publications 1992).

9 ME Kaminski, ‘Regulating the Risks of AI’ 103 (2023) Boston University Law Review 1347.

10 See, eg, Z Efroni, ‘The Digital Services Act: Risk-Based Regulation of Online Platforms’ (2021) Internet Policy Review <https://policyreview.info/articles/news/digital-services-act-risk-based-regulation-online-platforms/1606> accessed 23 October 2024; G De Gregorio and P Dunn, ‘The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age’ 59 (2) (2022) Common Market Law Review 473. <https://doi.org/10.54648/cola2022032>; N Zingales, ‘The DSA as a Paradigm Shift for Online Intermediaries’ Due Diligence’ (Verfassungsblog, 2 November 2022) <https://verfassungsblog.de/dsa-meta-regulation/> accessed 23 October 2024; MC de Carvalho, ‘It will be what we want it to be: Sociotechnical and Contested Systemic Risk at the Core of the EU’s Regulation of Platforms’ AI Systems’ 16 (1) (2025) JIPITEC 35 <https://www.jipitec.eu/jipitec/article/view/420> accessed 16 April 2025.

11 In addition to the just-cited articles see J Laux, S Wachter and B Mittelstadt, ‘Taming the Few: Platform Regulation, Independent Audits, and the Risks of Capture Created by the DMA and DSA’ 43 (2021) Computer Law & Security Review 105613. <https://doi.org/10.1016/j.clsr.2021.105613>; A Mantelero, ‘Fundamental Rights Impact Assessments in the DSA’ (Verfassungsblog, 1 November 2022) <https://verfassungsblog.de/dsa-impact-assessment/> accessed 23 October 2024; P Terzis, M Veale and N Gaumann, ‘Law and the Emerging Political Economy of Algorithmic Audits’ (2024) FAccT ‘24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency 1255. <https://doi.org/10.1145/3630106.3658970>; N Eder, ‘Making Systemic Risk Assessments Work: How the DSA Creates a Virtuous Loop to Address the Societal Harms of Content Moderation’ (2024) German Law Journal. <https://doi.org/10.1017/glj.2024.24>.

12 WR Freudenberg and SK Pastor, ‘Public Responses to Technological Risks: Toward a Sociological Perspective’ 33 (2) (1992) The Sociological Quarterly 389. <https://doi.org/10.1111/j.1533-8525.1992.tb00381.x>

13 Some notable exceptions have generally followed the basic line of argument, contrasting ‘risk-based’ to ‘rights-based’ approaches, and arguing that the latter more effectively protect vulnerable individuals and/or hold companies accountable to the public: A Aloisi and V de Stefano, ‘Between Risk Mitigation and Labour Rights Enforcement: Assessing the Transatlantic Race to Govern AI-Driven Decision-Making Through a Comparative Lens’ 14 (2) (2023) European Labour Law Journal. <https://doi.org/10.1177/20319525231167982>; D Leufer, F Hidvegi and A Zornatta, ‘The Pitfalls of the European Union’s Risk-Based Approach to Digital Rulemaking’ 71 (2024) UCLA Law Review 156 <https://www.uclalawreview.org/the-pitfalls-of-the-european-unions-risk-based-approach-to-digital-rulemaking/> accessed 23 October 2024; A Del Campo, N Zara and RÁ Ugarte, ‘Are Risks the New Rights? The Perils of Risk-based Approaches to Speech Regulation’ 16 (2) (forthcoming 2025) JIPITEC.

14 M Dean, ‘Risk, Calculable and Incalculable’ 49 (1998) Soziale Welt 25, 25 <https://www.jstor.org/stable/40878216> accessed 16 April 2025.

15 U Beck, ‘Living in the World Risk Society’ 35 (3) (2006) Economy & Society 329, 333. <https://doi.org/10.1080/03085140600844902>.

16 Kaminski, ‘Risks of AI’ (n 9).

17 B Christophers, ‘Climate Change and Financial Instability: Risk Disclosure and the Problematics of Neoliberal Governance’ 107 (5) (2016) Annals of the American Association of Geographers 1108. <https://doi.org/10.1080/24694452.2017.1293502>; R Fornasari and V Maccarrone, ‘Mandatory Corporate Sustainability Due Diligence and Its Limitations: The Persistence of Unequal Exchange’ in A Bieler and V Maccarrone (eds), Critical Political Economy of the European Polycrisis (Edward Elgar forthcoming 2025).

18 European Commission, ‘Supervision of the Designated Very Large Online Platforms and Search Engines under DSA’ (Shaping Europe’s Digital Future, 18 October 2024) <https://digital-strategy.ec.europa.eu/en/policies/list-designated-vlops-and-vloses> accessed 23 October 2024.

19 More detailed rules on auditing are set out in Commission Delegated Regulation (EU) 2024/436 of 20 October 2023 supplementing Regulation (EU) 2022/2065 of the European Parliament and of the Council, by laying down rules on the performance of audits for very large online platforms and very large online search engines [2024] OJ L.

20 R Griffin, ‘Rethinking Rights in Social Media Governance: Human Rights, Ideology and Inequality’ 2 (1) (2023) European Law Open 30. <https://doi.org/10.1017/elo.2023.7>.

21 M Husovec, ‘Will the DSA Work?’ (Verfassungsblog, 9 November 2022) <https://verfassungsblog.de/dsa-money-effort/> accessed 16 April 2025.

22 Art 34(2)(a), DSA (n 2).

23 Art 34(2)(b), DSA (n 2).

24 Art 34(2)(c), DSA (n 2).

25 Art 34(2)(d), DSA (n 2).

26 Art 34(2)(e), DSA (n 2).

27 Art 34(2), DSA (n 2).

28 Ibid.

29 Ibid.

30 Art 35(1)(a), DSA (n 2).

31 Art 35(1)(c), DSA (n 2).

32 Art 35(1)(d), DSA (n 2).

33 Art 35(1)(f), DSA (n 2).

34 For an overview see J Jaursch, ‘Overview of DSA Delegated acts, Reports and Codes of Conduct’ (Interface, 12 September 2022) <https://www.stiftung-nv.de/publications/overview-dsa-delegated-acts-reports-and-codes-conduct> accessed 23 October 2024.

35 R Griffin, ‘Codes of Conduct in the Digital Services Act: Functions, Benefits & Concerns’ (2024) Technology & Regulation 167. <https://doi.org/10.26116/techreg.2024.016>.

36 Terzis et al (n 11); D Keller, ‘The Rise of the Compliant Speech Platform’ (Lawfare, 16 October 2024) <https://www.lawfaremedia.org/article/the-rise-of-the-compliant-speech-platform> accessed 23 October 2024. Auditing firms have expressed concern about taking responsibility for auditing compliance with the DSA’s vague standards in the absence of clear external definitions or benchmarks (see European Contact Group, ‘ECG Responds to the EC Call for Feedback on the Digital Services Act Audit Methodology Draft Delegated Regulation’ (European Contact Group, 4 July 2023) <https://www.europeancontactgroup.eu/news/ecg-responds-to-the-ec-call-for-feedback-on-the-digital-services-act-audit-methodology-draft-delegated-regulation/> accessed 23 October 2024) – although the opportunity to provide lucrative new services to ‘big tech’ companies suggests that they will ultimately overcome this reluctance: see Terzis et al (n 11).

37 Terzis et al (n 11).

38 Tremau, ‘Our Trust & Safety Advisory Services’ (Tremau) <https://tremau.com/advisory/> accessed 23 October 2024.

39 For a comprehensive overview of mechanisms for civil society participation see Carvalho (n 10).

40 P Leerssen, ‘Outside the Black Box From Algorithmic Transparency to Platform Observability in the Digital Services Act’ 4 (2) (2024) Weizenbaum Journal of the Digital Society. <https://doi.org/10.34669/wi.wjds/4.2.3>.

41 A Stirling, ‘Risk, Precaution and Science: Towards a More Constructive Policy Debate’ 8 (2007) EMBO Reports 309. <https://doi.org/10.1038/sj.embor.7400953>.

42 A Orben, ‘The Sisyphean Cycle of Technology Panics’ 15 (1) (2020) Perspectives on Psychological Science 1143. <https://doi.org/10.1177/1745691620919372>; L Thorburn, J Stray and P Benganjani, ‘How to Measure the Effects of Recommenders’ (Understanding Recommenders, 20 July 2022) <https://medium.com/understanding-recommenders/how-to-measure-the-causal-effects-of-recommenders-5e89b7363d57> accessed 23 October 2024; S Altay, M Berriche and A Acerbi, ‘Misinformation on Misinformation: Conceptual and Methodological Challenges’ 9 (1) (2023) Social Media + Society. <https://doi.org/10.1177/20563051221150412>; The Lancet, ‘Unhealthy influencers? Social Media and Youth Mental Health’ 404 (10461) (2024) The Lancet 1375. <https://doi.org/10.1016/S0140-6736(24)02244-X>; UKH Ecker et al, ‘Why Misinformation Must Not Be Ignored’ (2024) American Psychologist. <https://doi.org/10.1037/amp0001448>; CJ Ferguson, ‘Do Social Media Experiments Prove a Link with Mental Health: A Methodological and Meta-Analytic Review’ 14 (2) (2025) Psychology of Popular Media 201. <https://doi.org/10.1037/ppm0000541>.

43 Fornasari and Maccarrone (n 17).

44 A Duval, ‘Ruggie’s Double Movement: Assembling the Private and the Public Through Human Rights Due Diligence’ 41 (3) (2023) Nordic Journal of Human Rights 279. <https://doi.org/10.1080/18918131.2023.2171633>; C Parfitt and G Bryant, ‘Risk Politics’ (Phenomenal World, 7 June 2023) <https://www.phenomenalworld.org/analysis/risk-politics/> accessed 23 October 2024.

45 L Amoore, The Politics of Possibility: Risk and Security Beyond Probability (Duke University Press 2013).

46 B Wynne, ‘Risk and Environment as Legitimatory Discourses of Technology: Reflexivity Inside Out?’ 50 (3) (2002) Current Sociology 459, 461. <https://doi.org/10.1177/0011392102050003010>.

47 Black, ‘Emergence of Risk-Based Regulation’ (n 7); Kaminski, ‘Risks of AI’ (n 9).

48 Hood et al (n 7) 3.

49 See Stirling (n 41).

50 What is commonly termed ‘deregulation’ often does not involve an overall reduction in state regulation of a given sector, but rather involves changes in the content and objectives of regulation, aimed at maximising corporate freedoms and profitability (see, eg, T Hathaway, ‘Neoliberalism as Corporate Power’ 24 (3–4) (2020) Competition & Change 315. <https://doi.org/10.1177/1024529420910382>). I deliberately use the term here in the latter sense.

51 M Somers, ‘Legal Predistribution, Market Justice, and Dedemocratization: Polanyi and Piketty on Law and Political Economy’ 3 (2) (2022) Journal of Law & Political Economy 225, 233–4. <https://doi.org/10.5070/LP63259631>.

52 K Pistor, The Code of Capital: How the Law Creates Wealth and Inequality (Princeton University Press 2019); I Kampourakis, ‘Legal Theory in Search of Social Transformation’ 1 (4) (2023) European Law Open 808. <https://doi.org/10.1017/elo.2023.15>.

53 I Kampourakis and KH Eller, ‘Quantifying “Better Regulation”’ (Verfassungsblog, 21 February 2022) <https://verfassungsblog.de/quantifying-better-regulation/> accessed 23 October 2024; F Pasquale, ‘Power and Knowledge in Policy Evaluation: From Managing Budgets to Analyzing Scenarios’ 86 (2023) Law & Contemporary Problems 39; W Boyd, ‘De-Risking Environmental Law’ 48 (2024) Harvard Environmental Law Review 153.

54 R Mansell, ‘Digital Technology Innovation: Mythical Claims about Regulatory Efficacy’ 30 (2) (2023) Javnost 145. <https://doi.org/10.1080/13183222.2023.2198933>.

55 As an example of a legal analysis of the DSA risk management framework framed around this trade-off, see De Gregorio and Dunn (n 10).

56 Commission, ‘Supervision’ (n 18).

57 Black, ‘Emergence of Risk-Based Regulation’ (n 7); M Power, The Risk Management of Everything: Rethinking the Politics of Uncertainty (Demos 2004).

58 I Ayres and J Braithwaite, Responsive Regulation: Transcending the Deregulation Debate (Oxford University Press 1992).

59 Black, ‘Emergence of Risk-Based Regulation’ (n 7).

60 Personal communication to author, 11 June 2024.

61 R Wezenbeek, ‘Opening Keynote – The European Commission and the DSA’ (DSA and Platform Regulation Conference, Amsterdam, 16 February 2024) <https://dsa-observatory.eu/the-dsa-and-platform-regulation-conference-2024/> accessed 23 October 2024.

62 Black, ‘Emergence of Risk-Based Regulation’ (n 7); I Landau, ‘Human Rights Due Diligence and the Risk of Cosmetic Compliance’ 20 (1) (2019) Melbourne Journal of International Law 221.

63 M Power, ‘The Risk Management of Nothing’ 34 (6–7) (2009) Accounting, Organizations and Society 849. <https://doi.org/10.1016/j.aos.2009.06.001>.

64 L Enriques and D Zetzsche, ‘The Risky Business of Regulating Risk Management in Listed Companies’ 3 (2013) European Company and Financial Law Review 271. <https://doi.org/10.1515/ecfr-2013-0271>.

65 Enriques and Zetzsche (n 64).

66 Kaminski, ‘Risks of AI’ (n 9); R Gellert, The Risk-Based Approach to Data Protection (Oxford University Press 2020); K Yeung and LA Bygrave, ‘Demystifying the Modernized European Data Protection Regime: Cross-Disciplinary Insights from Legal and Regulatory Governance Scholarship’ 16 (2022) Regulation & Governance 137. <https://doi.org/10.1111/rego.12401>.

67 Landau (n 62) 222.

68 As in the DSA, risk regulation regimes may also involve diverse other private actors in standard-setting and oversight, such as auditors, consultancies and NGOs.

69 Yeung and Bygrave (n 66); KA Bamberger and DK Mulligan, ‘New Governance, Chief Privacy Officers, and the Corporate Management of Information Privacy in the United States’ 33 (4) (2011) Law & Policy 477. <https://doi.org/10.1111/j.1467-9930.2011.00351.x>.

70 L Edelman, Working Law: Courts, Corporations and Symbolic Civil Rights (University of Chicago Press 2016); AE Waldman, ‘Privacy Law’s False Promise’ 97 (3) (2020) Washington University Law Review 773.

71 J Cohen and AE Waldman, ‘Introduction: Framing Regulatory Managerialism as an Object of Study and Strategic Displacement’ 86 (2023) Law & Contemporary Problems i.

72 Dean (n 14); Wynne (n 46).

73 Freudenberg and Pastor (n 12); Wynne (n 46); Amoore, Politics of Possibility (n 45); P O’Malley, Risk, Uncertainty and Government (Routledge 2004); G Mythen and S Walklate, ‘Criminology and Terrorism: Which Thesis? Risk Society or Governmentality?’ 46 (3) (2005) British Journal of Criminology 379 <https://www.jstor.org/stable/23639354>.

74 Dean (n 14); F Ewald, ‘Insurance and Risk’ in G Burchell, C Gordon and P Miller (eds), The Foucault effect: Studies in Governmentality (University of Chicago Press 1991) pp 197–211; J van der Heijden, Risk Governance and Risk-Based Regulation: A Review of the International Academic Literature (State of the Art in Regulatory Governance Research Paper Series, Victoria University of Wellington/Government Regulatory Practice Initiative, June 2019) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3406998> accessed 23 October 2024.

75 This point is not only appreciated by critical sociologists. It is neatly encapsulated by the International Organization for Standardization’s 2018 standard on risk management, which defines risk as the effect of uncertainty on an organisation’s objectives: ISO, ISO 31000: Risk Management (ISO 2018) <https://www.iso.org/iso-31000-risk-management.html> accessed 29 October 2024.

76 J Adekola, Power and Risk in Policymaking: Understanding Public Health Debates (Springer Nature 2022).

77 D Bigo, ‘Security and Immigration: Toward a Critique of the Governmentality of Unease’ 27 (2002) Alternatives 63 <https://www.jstor.org/stable/45468068>.

78 Amoore, Politics of Possibility (n 45).

79 Ewald (n 74); AS Obendiek and T Seidl, ‘The (False) Promise of Solutionism: Ideational Business Power and the Construction of Epistemic Authority in Digital Security Governance’ 30 (7) Journal of European Public Policy 1305. <https://doi.org/10.1080/13501763.2023.2172060>. This is already apparent in platform regulation, where the DSA and similar regulations in other jurisdictions have helped to create a thriving market for ‘trust and safety’ service providers: T Bernard, ‘The Evolving Trust and Safety Vendor Ecosystem’ (Tech Policy Press, 24 July 2023) <https://www.techpolicy.press/the-evolving-trust-and-safety-vendor-ecosystem/> accessed 23 October 2024.

80 Bigo (n 77).

81 European Commission, ‘Commission Opens Formal Proceedings against TikTok under the Digital Services Act´(Press Corner, 19 February 2024) <https://ec.europa.eu/commission/presscorner/detail/en/ip_24_926> accessed 23 October 2024; European Commission, ‘Commission Opens Formal Proceedings against Meta under the Digital Services Act Related to the Protection of Minors on Facebook and Instagram’ (Press Corner, 16 May 2024) <https://ec.europa.eu/commission/presscorner/detail/en/ip_24_2664> accessed 23 October 2024.

82 Orben (n 42).

83 R Gill, Changing the Perfect Picture: Smartphones, Social Media and Appearance Pressures (City University of London March 2021) <https://www.city.ac.uk/__data/assets/pdf_file/0005/597209/Parliament-Report-web.pdf> accessed 23 October 2024.

84 J Factora, ‘The “Kids Online Safety Act” Is Dead For Now, But Advocates Warn It Will Return’ (Them, 1 August 2024) <https://www.them.us/story/kosa-dead-house-lgbtq-internet-legislation> accessed 23 October 2024.

85 Both interpretations seem to have gained traction within the Commission: see its ongoing investigations against TikTok and Meta (n 81).

86 EDRi et al, ‘Open Letter: The Dangers of Age Verification Proposals to Fundamental Rights Online’ (EDRi, 16 September 2024) <https://edri.org/our-work/open-letter-the-dangers-of-age-verification-proposals-to-fundamental-rights-online/> accessed 23 October 2024; A Marwick et al, Child Online Safety Legislation (COSL) – A Primer (Center for Information Technology Policy, Center for Information, Technology and Public Life and Sanford School of Public Policy, 29 May 2024) especially at 30–3 <https://citap.pubpub.org/pub/cosl/release/5#chilling-effects-on-information-access-and-free-expression> accessed 5 November 2024.

87 Boyd (n 53); M Liboiron, M Tironi and N Calvillo, ‘Toxic Politics: Acting in a Permanently Polluted World’ 48 (3) (2018) Social Studies of Science 331. <https://doi.org/10.1177/0306312718783087>.

88 RÁ Ugarte, ‘Bad Cover Versions of Law: Inescapable Challenges and Some Opportunities for Measuring Human Rights Impacts of Corporate Conduct in the ICT Sector’ (Draft Article on File with Author).

89 Two years after the DSA’s passage, what ‘systemic’ means and how – if at all – it narrows, qualifies or expands the concept of ‘risk’ continues to be debated by scholars, industry actors and civil society organisations, with consensus yet to emerge. See DSA Decoded, ‘Takeaways from the Webinar “Delimiting Systemic Risks in the DSA”’ (DSA Decoded, 2024) <https://www.dsadecoded.com/webinar-summary> accessed 23 October 2024; Global Network Initiative & Digital Trust & Safety Partnership, European Rights & Risks: Stakeholder Engagement Forum Event Summary (Global Network Initiative 2024) <https://globalnetworkinitiative.org/wp-content/uploads/GNI-DTSP-Forum-Summary.pdf> accessed 23 October 2024, and for a detailed legal analysis Carvalho (n 10).

90 Carvalho (n 10).

91 M Jegen and F Mérand, ‘Constructive Ambiguity: Comparing the EU’s Energy and Defence Policies’ 37 (1) (2013) West European Politics 182. <https://doi.org/10.1080/01402382.2013.818325>.

92 Boyd (n 53); Adekola (n 76); J Cohen, Between Truth and Power: The Legal Foundations of Informational Capitalism (Oxford University Press 2019). On the DSA specifically see R Griffin, ‘Public and Private Power in Social Media Governance: Multistakeholderism, the Rule of Law and Democratic Accountability’ 14 (1) (2023) Transnational Legal Theory 46. <https://doi.org/10.1080/20414005.2023.2203538>.

93 For a more comprehensive account of the DSA’s risk management system, see M Husovec, Principles of the Digital Services Act (Oxford University Press 2024); Carvalho (n 10).

94 Carvalho (n 10).

95 J Black, ‘Constructing and Contesting Legitimacy and Accountability in Polycentric Regulatory Regimes’ 2 (2) (2008) Regulation & Governance 137. <https://doi.org/10.1111/j.1748-5991.2008.00034.x>.

96 Yeung and Bygrave (n 66) 140; see also Hood et al (n 7).

97 Beck, ‘Living in the World Risk Society’ (n 15) 333.

98 Adekola (n 76); J Adekola, D Fischbacher-Smith and M Fischbacher-Smith, ‘Light Me Up: Power and Expertise in Risk Communication and Policy-Making in the e-Cigarette Health debates’ 22 (10) (2019) Journal of Risk Research 1294. <https://doi.org/10.1080/13669877.2018.1473463>; B Ram and T Webler, ‘Social Amplification of Risks and the Clean Energy Transformation: Elaborating on the Four Attributes of Information’ 42 (7) (2022) Risk Analysis 1423. <https://doi.org/10.1111/risa.13902>. Stakeholders might have a direct interest in how risk regulations are implemented (for example, regulated companies will typically favour understandings of risk that do not require them to make costly overhauls to their business practices). However, some stakeholders might also have an interest in shaping other actors’ perceptions of risk for other reasons, for example as a way of marketing relevant services (consultancy, software, etc.): Obendiek and Seidl (n 79).

99 Cohen (n 92).

100 Cohen (n 92) 183.

101 Power, ‘Risk Management of Nothing’ (n 63) 854.

102 Power, ‘Risk Management of Nothing’ (n 63).

103 Christophers (n 17); Parfitt and Bryant (n 44); Fornasari and Maccarrone (n 17).

104 Griffin, ‘Codes of Conduct’ (n 35).

105 Power, ‘Risk Management of Nothing’ (n 63); Enriques and Zetzsche (n 64).

106 Thorburn et al (n 42).

107 Anna Lauren Hoffmann, ‘Where Fairness Fails: Data, Algorithms, and the Limits of Antidiscrimination Discourse’ 22 (7) (2019) Information, Communication & Society 900. <https://doi.org/10.1080/1369118X.2019.1573912>.

108 Boyd (n 53).

109 See generally R Griffin, ‘The Sanitised Platform’ 13 (1) (2022) JIPITEC 36. For example, in designing and calibrating automated content moderation tools, there is a well-recognised trade-off between ‘false negatives’ and ‘false positives’, such that attempts to reduce one will inevitably increase the other: E Douek, ‘Content Moderation as Systems Thinking’ 136 (2022) Harvard Law Review 526. Presupposing a clear distinction between ‘correct’ and ‘incorrect’ content removals is overly schematic, but the trade-off nonetheless illustrates the underlying principle that implementing risk mitigation measures in complex systems will generally have unintended side effects.

110 Fundamental rights concerns could however provide a convenient justification for VLOPs not to implement mitigation measures they find costly or inconvenient. See section 4(b) for further discussion.

111 See Arts 45–7 DSA (n 2).

112 Coordination between companies is promoted by some aspects of the regulatory framework, notably codes of conduct developed under Arts 45–7 DSA (n 2).

113 This is ironic, since the terminology of ‘systemic risks’ originates from the financial sector, where these are exactly the types of risks it describes: see Christophers (n 17). There is little in the DSA which suggests this kind of approach to risk regulation; as noted in section 2(c), there is still little clarity about what ‘systemic’ is supposed to mean, if it is more than a buzzword or a synonym for ‘important’.

114 A Monea, The Digital Closet: How the Internet Became Straight (MIT Press 2022).

115 Monea (n 114); Griffin, ‘Sanitised Platform’ (n 109).

116 Power, ‘Risk Management of Nothing’ (n 63) 854.

117 J Dehm, ‘Beyond Climate Due Diligence: Fossil Fuels, “Red Lines” and Reparations’ 8 (2) (2023) Business and Human Rights Journal 151. <https://doi.org/10.1017/bhj.2023.30>.

118 S Vaidhyanathan, Antisocial Media: How Facebook Disconnects Us and Undermines Democracy (Oxford University Press 2018); M Landwehr, A Borning and V Wulf, ‘The High Cost of Free Services: Problems with Surveillance Capitalism and Possible Alternatives for IT Infrastructure’ (2019) LIMITS ‘19: Proceedings of the Fifth Workshop on Computing within Limits 3. <https://doi.org/10.1145/3338103.3338106>; R Griffin, ‘The Law and Political Economy of Online Visibility: Market Justice in the Digital Services Act’ (2023) Technology & Regulation 69. <https://doi.org/10.26116/techreg.2023.007>.

119 Recital 3, DSA (n 2).

120 In this regard the DSA can be contrasted with the 2022 Digital Markets Act (DMA), which does pursue structural market reforms (although some scholars have suggested that it may ultimately do more to facilitate competition between ‘big tech’ platforms than to enable alternative governance models: M Hiltunen, ‘Social Media Platforms within Internal Market Construction: Patterns of Reproduction in EU Platform Law’ 23 (9) (2022) German Law Journal 1226. <https://doi.org/10.1017/glj.2022.80>). Notably, the DMA’s aims are explicitly defined in terms of pursuing positive outcomes and structural changes, rather than mitigating risks, as can be seen from its full title: Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector.

121 Amoore, Politics of Possibility (n 45).

122 Bigo (n 77); Wynne (n 46).

123 R Ahmed, ‘Negotiating Fundamental Rights: Civil Society and the EU Regulation on Addressing the Dissemination of Terrorist Content Online’ (2023) Studies in Conflict and Terrorism. <https://doi.org/10.1080/1057610X.2023.2222890>.

124 Beck (n 15); Boyd (n 53).

125 Amoore, Politics of Possibility (n 45).

126 Dehm (n 117).

127 B Dvoskin, ‘Expert Governance of Online Speech’ 64 (1) (2023) Harvard International Law Journal 85.

128 Terzis et al (n 11); Leerssen (n 40).

129 See, eg, European Commission, ‘Commission Launches Call for Evidence for Guidelines on Protection of Minors Online under the Digital Services Act’ (Shaping Europe’s Digital Future, 31 July 2024) <https://digital-strategy.ec.europa.eu/en/news/commission-launches-call-evidence-guidelines-protection-minors-online-under-digital-services-act> accessed 23 October 2024. For more discussion see Leerssen (n 40).

130 E Siapera and E Ferries, ‘Platform governance and civil society organisations: Tensions between reform and revolution continuum’ 14 (1) (2025) Internet Policy Review. CSOs can already be seen using this kind of rhetorical strategy: a 2024 open letter from a group of leading digital rights organisations demanding more transparency around risk assessments framed their external scrutiny of risk assessments in terms of collaboration and utility for VLOPs, as a way to ‘identify potential gaps and unaddressed risks, assess whether mitigation measures are appropriate and uphold fundamental rights, as well as learning and sharing best practices to raise the general standards of risk assessment and mitigation’. Center for Democracy & Technology et al, ‘Joint Civil Society Statement on Meaningful Transparency of Risk Assessments under the Digital Services Act’ (Center for Democracy & Technology, 8 November 2024) <https://cdt.org/insights/joint-civil-society-statement-on-meaningful-transparency-of-risk-assessments-under-the-digital-services-act/> accessed 8 November 2024.

131 Wynne (n 46); J Sadowski, ‘Rediscovering a Risky Ideology: Technocracy and Its Effects on Technology Governance’ 7 (1) (2020) Journal of Responsible Innovation 112. <https://doi.org/10.1080/23299460.2020.1816345>.

132 Parfitt and Bryant (n 44).

133 I Kampourakis, ‘A Post-Neoliberal European Order? Public Purpose and Private Accumulation in the Green Transition’ (Draft Article on File with Author), 15. This could be connected to the history of risk management in fields such as environmental regulation, where pollution or catastrophic accidents have often been seen as threatening all of society (see, famously, Beck (n 15)) – even though these harms tend to materialise in highly unequal ways, and managing them involves acute distributive conflicts (see Liboiron et al (n 87); G Mythen, ‘From “Goods” to “Bads”? Revisiting the Political Economy of Risk’ 10 (3) (2005) Sociological Research Online 191).

134 Research suggests that anti-disinformation measures implemented by major platforms to promote reliable news sources tend to favour a small number of large publishers: J Schlosberg, ‘Tightening the Grip: Why the Web Is No Haven of Media Plurality’ (Inforrm, 6 March 2016) <https://inforrm.org/2016/03/06/tightening-the-grip-why-the-web-is-no-haven-of-media-plurality-justin-schlosberg/> accessed 28 October 2024; E Nechushtai, R Zamith and SC Lewis, ‘More of the Same? Homogenization in News Recommendations When Users Search on Google, YouTube, Facebook, and Twitter’ (2023) Mass Communication & Society. <https://doi.org/10.1080/15205436.2023.2173609>. The Commission is currently investigating Instagram for suppressing recommendations of all political content as a potential breach of Articles 34–5: European Commission, ‘Commission Opens Formal Proceedings Against Facebook and Instagram under the Digital Services Act’ (Press Corner, 30 April 2024) <https://ec.europa.eu/commission/presscorner/detail/en/ip_24_2373> accessed 23 October 2024. Yet given concerns that algorithmic promotion of political content stokes disinformation, polarisation and conflict, this could equally be framed as a risk mitigation measure.

135 For critical analyses of how the legal discourse of proportionality obscures distributive policy choices, see GCN Webber, ‘Proportionality, Balancing, and the Cult of Constitutional Rights Scholarship’ 23 (1) (2010) Canadian Journal of Law & Jurisprudence 179. <https://doi.org/10.1017/S0841820900004860>; T Marzal, ‘From Hercules to Pareto: Of Bathos, Proportionality, and EU Law’ 15 (3) (2017) International Journal of Constitutional Law 621. <https://doi.org/10.1093/icon/mox055>.

136 Risk regulation in Hood, Rothstein and Baldwin’s (n 7) broad sense can involve regulations that more directly address such questions: for example, banning products entirely (see Stirling (n 41); Kaminski, ‘Risks of AI’ (n 9)). This is less true of meta-regulatory regimes, which presuppose that companies can continue their activities if they implement risk mitigation measures.

137 Mansell (n 54).

138 Wynne (n 46) 472; see also Sadowski (n 131); J Wilsdon and R Willis, See-through Science: Why Public Engagement Needs to Move Upstream (Demos, 2004) <https://demos.co.uk/wp-content/uploads/files/Seethroughsciencefinal.pdf> accessed 23 October 2024.

139 M Forst, State Repression of Environmental Protest and Civil Disobedience: A Major Threat to Human Rights and Democracy (UN Special Rapporteur on Environmental Defenders February 2024) <https://unece.org/sites/default/files/2024-02/UNSR_EnvDefenders_Aarhus_Position_Paper_Civil_Disobedience_EN.pdf> accessed 24 October 2024.

140 P Schütze, ‘The Impacts of AI Futurism: An Unfiltered Look at AI’s True Effects on the Climate Crisis’ 26 (3) (2024) Ethics and Information Technology. <https://doi.org/10.1007/s10676-024-09758-6>.

141 Max Read, ‘Drowning in Slop’ (Intelligencer, 25 September 2024) <https://nymag.com/intelligencer/article/ai-generated-content-internet-online-slop-spam.html> accessed 24 October 2024; J Bradley, ‘AI Isn’t About Unleashing Our Imaginations, It’s About Outsourcing Them. The Real Purpose Is Profit’ (The Guardian, 15 November 2024) <https://www.theguardian.com/technology/2024/nov/16/ai-isnt-about-unleashing-our-imaginations-its-about-outsourcing-them-the-real-purpose-is-profit> accessed 20 November 2024.

142 R Griffin, ‘The Politics of Risk in the Digital Services Act: A Stakeholder Mapping and Research Agenda’ (forthcoming 2025) Weizenbaum Journal of the Digital Society.

143 Dehm (n 117); Fornasari and Maccarrone (n 17).

144 Duval (n 44); G Baars, The Corporation, Law & Capitalism: A Radical Perspective on the Role of Law in the Global Political Economy (Brill 2019).

145 J Muldoon and B Wu, ‘Artificial Intelligence in the Colonial Matrix of Power’ 36 (2023) Philosophy & Technology 80. <https://doi.org/10.1007/s13347-023-00687-8>; M Kwet, Digital Degrowth: Technology in the Age of Survival (Pluto 2024).

146 This is also evidenced by Article 34(1)’s reference to systemic risks ‘in the Union’, although many of the most concerning social impacts of dominant platforms play out in the Global South: Kwet (n 145).

147 Parfitt and Bryant (n 44).

148 Waldman (n 70); Cohen and Waldman (n 71); Kaminski, ‘Risks of AI’ (n 9); ME Kaminski, ‘Voices In, Voices Out: Impacted Stakeholders and the Governance of AI’ 71 (2024) UCLA Law Review 176; AE Waldman, ‘Privacy, Practice and Performance’ 110 (2022) California Law Review 1221.

149 Laux et al (n 11).

150 It might be argued that VLOPs’ size and wealth make resource constraints practically irrelevant. In practice, it is clear that even the largest companies are highly motivated to minimise costs. For example, in 2023 several ‘big tech’ companies laid off numerous staff, resulting in noticeable share price increases: S Patnaik and R Vlastelica, ‘Big Tech’s Job Cuts Spur Rallies Even as an Economic Slowdown Looms’ (Bloomberg, 25 January 2023) <https://www.bloomberg.com/news/articles/2023-01-25/big-tech-s-job-cuts-spur-rallies-even-as-economic-slowdown-looms> accessed 26 September 2024. Reporting consistently documents how regulatory compliance and ‘trust and safety’ teams at VLOPs are understaffed and must carefully prioritise resource allocation: see, eg, JC Wong, ‘How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account’ (Guardian, 12 April 2021) <https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang> accessed 18 January 2023; J Scheck, N Purnell and J Horwitz, ‘Facebook Employees Flag Drug Cartels and Human Traffickers. The Company’s Response Is Weak, Documents Show’ (Wall Street Journal, 16 September 2021) <https://www.wsj.com/articles/facebook-drug-cartels-human-traffickers-response-is-weak-documents-11631812953> accessed 18 January 2023; D O’Sullivan, C Duffy and B Fung, ‘Ex-Twitter exec Blows the Whistle, Alleging Reckless and Negligent Cybersecurity Policies’ (CNN, 23 August 2022) <https://edition.cnn.com/2022/08/23/tech/twitter-whistleblower-peiter-zatko-security/index.html> accessed 18 January 2023; J Koebler, ‘Where Facebook’s AI Slop Comes From’ (404 Media, 6 August 2024) <https://www.404media.co/where-facebooks-ai-slop-comes-from/> accessed 26 September 2024.

151 R Griffin, ‘From Brand Safety to Suitability: Advertisers in Platform Governance’ 12 (3) (2023) Internet Policy Review. <https://doi.org/10.14763/2023.3.1716>.

152 Global Network Initiative & Digital Trust & Safety Partnership (n 89).

153 Duval (n 44).

154 A Kaarlep, in ‘Risk Assessments with Agne Kaarlep’ (Safety is Sexy Podcast, 17 September 2024) <https://www.matthewsoeth.com/safety-is-sexy-podcast/risk-assessments-with-agne-kaarlep> accessed 23 October 2024.

155 See generally (n 150).

156 Waldman, ‘False Promise’ (n 70).

157 Power, ‘Risk Management of Nothing’ (n 63); Edelman (n 70); Landau (n 62); Waldman, ‘False Promise’ (n 70); Ugarte (n 88); Enriques and Zetzsche (n 64).

158 Power, ‘Risk Management of Nothing’ (n 63).

159 Edelman (n 70); Landau (n 62); Waldman, ‘False Promise’ (n 70).

160 Boyd (n 53).

161 Art 42(4) DSA (n 2).

162 Global Network Initiative & Digital Trust & Safety Partnership (n 89).

163 BB Arcila, ‘Systemic Risks in the DSA and its Enforcement’ (DSA Decoded, 2024) <https://www.dsadecoded.com/systemic-risks-in-the-dsa-and-its-enforcement> accessed 23 October 2024.

164 Edelman (n 70); Waldman, ‘False Promise’ (n 70).

165 V Pickard, America’s Battle for Media Democracy: The Triumph of Corporate Libertarianism and the Future of Media Reform (Cambridge University Press 2014); P Popiel, ‘The Tech Lobby: Tracing the Contours of New Media Elite Lobbying Power’ 11 (4) (2018) Communication, Culture & Critique 566. <https://doi.org/10.1093/ccc/tcy027>.

166 J Kwak, ‘Cultural Capture and the Financial Crisis’ in D Carpenter and DA Moss (eds), Preventing Regulatory Capture: Special Interest Influence and How to Limit It (Cambridge University Press 2013) pp 71–98.

167 K Wei et al, ‘How Do AI Companies “Fine-Tune” Policy? Examining Regulatory Capture in AI Governance’ (2024) arXiv. <https://doi.org/10.48550/arXiv.2410.13042>.

168 KA Bamberger, ‘Technologies of Compliance: Risk and Regulation in a Digital Age’ 88 (2010) Texas Law Review 609; AE Waldman, ‘Privacy, Practice and Performance’ 110 (2022) California Law Review 1221.

169 Obendiek and Seidl (n 79); Popiel (n 165).

170 See eg Bamberger (n 168); Cohen and Waldman (n 71).

171 Dehm (n 117); see also Kampourakis, ‘Post-Neoliberal European Order?’ (n 133).

172 Wezenbeek (n 61).

173 European Commission, ‘Commission Launches Call for Evidence’ (n 129); European Commission, ‘Guidelines for Providers of VLOPs and VLOSEs on the Mitigation of Systemic Risks for Electoral Processes’ (Shaping Europe’s Digital Future, 26 April 2024) <https://digital-strategy.ec.europa.eu/en/library/guidelines-providers-vlops-and-vloses-mitigation-systemic-risks-electoral-processes> accessed 23 October 2024.

174 Global Network Initiative & Digital Trust & Safety Partnership (n 89); Access Now et al, ‘Civil Society Open Letter to Commissioner Breton’ (Article 19, 17 October 2023) <https://www.article19.org/wp-content/uploads/2023/10/Civil-society-open-letter-to-Commissioner-Breton.pdf> accessed 23 October 2024; J Barata and J Calvet-Bademunt, ‘The European Commission’s Approach to DSA Systemic Risk is Concerning for Freedom of Expression’ (Tech Policy Press, 30 October 2023) <https://www.techpolicy.press/the-european-commissions-approach-to-dsa-systemic-risk-is-concerning-for-freedom-of-expression/> accessed 23 October 2024.

175 Keller, ‘Compliant Speech Platform’ (n 36). This is a well-recognised problem in platform regulation: see D Keller, Who Do You Sue? State and Platform Hybrid Power Over Online Speech (Aegis Series Paper No. 1902, Hoover Institution, 2019) <https://www.hoover.org/sites/default/files/research/docs/who-do-you-sue-state-and-platform-hybrid-power-over-online-speech_0.pdf> accessed 23 October 2024.

176 AT Kenyon, Democracy of Expression: Positive Free Speech and Law (Cambridge University Press 2021).

177 P Valcke, Risk-Based Regulation in the Media Sector: The Way Forward to Advance the Media Pluralism Debate in Europe? (ICRI Working Paper 2/2011, Interdisciplinary Centre for Law & ICT, 2011).

178 This section focuses on risk management in the DSA. However, the questions it raises about civil liberties, securitisation and the role of law enforcement are not unique to online content regulation: they are also relevant to other fields of digital regulation, such as AI and surveillance technologies, which the EU has increasingly approached through meta-regulatory risk management obligations. For example, civil society organisations have highlighted how securitisation discourses were used to weaken or create exemptions from regulatory obligations in areas like migration, law enforcement and ‘national security’ in the AI Act: #ProtectNotSurveil, ‘Joint statement – A Dangerous Precedent: How the EU AI Act Fails Migrants and People on the Move’ (Access Now, 13 March 2024) <https://www.accessnow.org/press-release/joint-statement-ai-act-fails-migrants-and-people-on-the-move/> accessed 22 March 2025. I am grateful to the second anonymous reviewer for raising this point.

179 Importantly, this is not only because regulation of online speech restricts these freedoms. Non-interventionist regulatory approaches may also limit them, for example when minorities cannot use platforms safely due to unmoderated harassment or abuse: MA Franks, ‘Beyond the Public Square: Imagining Digital Democracy’ 131 (2021) Yale Law Journal Forum 427. Thus, as with risk, understanding how regulation affects freedom of expression necessarily involves social construction, conflicting interests and contestable political judgments.

180 Keller, ‘Compliant Speech Platform’ (n 36).

181 Griffin, ‘Sanitised Platform’ (n 109).

182 This is not a hypothetical scenario. In October 2023, Meta reportedly adjusted its content filters so they would remove content mentioning Palestine where the probability of a policy violation was assessed as over 25 per cent, instead of the usual 80 per cent: 7amleh, Palestinian Digital Rights, Genocide, and Big Tech Accountability (7amleh, September 2024) 10 <https://7amleh.org/storage/genocide/English%20new%20(1).pdf> accessed 23 October 2024.

183 Douek (n 109).

184 Griffin, ‘Sanitised Platform’ (n 109); Griffin, ‘Rethinking Rights’ (n 20).

185 Ahmed (n 123); J Van Hoboken and R Ó Fathaigh, ‘Regulating Disinformation in Europe: Implications for Speech and Privacy’ 6 (9) (2021) UC Irvine Journal of International, Transnational, and Comparative Law 9. Securitisation is a contested term, but generally refers to the construction of policy issues as severe or urgent threats to public safety, requiring a decisive policy response. Scholarship on securitisation has highlighted how such discourses are used to deflect political opposition and/or justify state repression of minorities: see Bigo (n 77); B Buzan, O Wæver and J de Wilde, Security: A New Framework for Analysis (Lynne Rienner 1998); L Liedlbauer, ‘Politicising European Counter-Terrorism: The Role of NGOs’ 30 (3) (2021) European Security 485. <https://doi.org/10.1080/09662839.2021.1947802>.

186 S Biddle, ‘Revealed: Facebook’s Secret Blacklist of “Dangerous Individuals and Organizations”’ (The Intercept, 12 October 2021) <https://theintercept.com/2021/10/12/facebook-secret-blacklist-dangerous/> accessed 23 October 2024.

187 R Griffin, ‘The Politics of Algorithmic Censorship: Automated Moderation and Its Regulation’ in J Garratt (ed), Music and the Politics of Censorship: From the Fascist Era to the Digital Age (Brepols forthcoming 2025) <https://sciencespo.hal.science/hal-04325979v1> accessed 23 October 2024.

188 Art 22 and Recital 61, DSA (n 2).

189 Forst (n 139); L Fekete, ‘Anti-Palestinian Racism and the Criminalisation of International Solidarity in Europe’ 66 (1) (2024) Race & Class 99. <https://doi.org/10.1177/03063968241253708>.

190 M Husovec, ‘The Digital Services Act’s Red Line: What the Commission Can and Cannot Do About Disinformation’ (2024) SSRN <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4689926> accessed 5 November 2024.

191 Keller, ‘Compliant Speech Platform’ (n 36).

192 K Rosenblatt and M Eaglin, ‘Meta Teams up with Snap and TikTok to Address Self-Harm Content’ (NBC News, 12 September 2024) <https://www.nbcnews.com/tech/social-media/meta-teams-snap-tiktok-address-self-harm-content-rcna170838> accessed 28 October 2024.

193 These industry-wide databases were originally developed for child sexual abuse material and terrorist content: see R Gorwa, R Binns and C Katzenbach, ‘Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance’ (2019) Big Data + Society. <https://doi.org/10.1177/2053951719897945>.

194 Keller, ‘Compliant Speech Platform’ (n 36).

195 C Goujard, ‘EU Starts Investigating Meta, TikTok over Hamas Content’ (Politico, 19 October 2023) <https://www.politico.eu/article/eu-starts-investigating-meta-tiktok-over-hamas-content/> accessed 28 October 2024; European Commission, ‘Commission Sends Request for Information to Meta under the Digital Services Act’ (Daily News, 19 October 2023) <https://ec.europa.eu/commission/presscorner/detail/en/mex_23_5145> accessed 7 November 2024.

196 Available evidence suggests that this is already common practice: freedom of information requests and public reports relating to British and Israeli police indicate that platforms remove content on request in around 80–90 per cent of cases. In many cases, the content is not even alleged to be illegal, but only to violate platforms’ in-house content policies. See 7amleh (n 182); Oversight Board, ‘Oversight Board Overturns Meta’s Decision in “UK Drill Music” Case’ (Oversight Board, 22 November 2022) <https://www.oversightboard.com/news/413988857616451-oversight-board-overturns-meta-s-decision-in-uk-drill-music-case/>.

197 Access Now et al (n 174).

198 C Doctorow, ‘Regulating Big Tech Makes Them Stronger, So They Need Competition Instead’ (Economist, 6 June 2019) <https://www.economist.com/open-future/2019/06/06/regulating-big-tech-makes-them-stronger-so-they-need-competition-instead> accessed 5 November 2024.

199 As well as Bigo (n 77); Amoore, Politics of Possibility (n 45); Mythen and Walklate (n 73); see, eg, L Amoore and M de Goede (eds), Risk and the War on Terror (Routledge 2016); C Aradau and T Blanke, ‘Politics of Prediction: Security and the Time/Space of Governmentality in the Age of Big Data’ 20 (3) (2016) European Journal of Social Theory 373. <https://doi.org/10.1177/1368431016667623>.

200 Mythen and Walklate (n 73); Aradau and Blanke (n 199).

201 Bigo (n 77); Mythen and Walklate (n 73); Amoore, Politics of Possibility (n 45).

202 Access Now et al (n 174).

203 Amoore and de Goede (n 199).

204 Amoore, Politics of Possibility (n 45).

205 For example, insurance companies may impose risk management standards on their clients: RV Ericson, ‘The State of Preemption: Managing Terrorism Through Counter Law’ in L Amoore and M de Goede (eds), Risk and the War on Terror (Routledge 2016) pp 57–75.

206 Obendiek and Seidl (n 79).

207 G Sullivan, The Law of the List: UN Counterterrorism Sanctions and the Politics of Global Security Law (Cambridge University Press 2020); G Sullivan, ‘Law, Technology, and Data-Driven Security: Infra-Legalities as Method Assemblage’ 49 (S1) (2022) Journal of Law & Society S31. <https://doi.org/10.1111/jols.12352>. On the application of critical security studies methodologies in platform governance see B Rieder, G Gordon and G Sileno, ‘Mapping Value(s) in AI: Methodological Directions for Examining Normativity in Complex Technical Systems’ 16 (3) (2022) Sociologica 51. <https://doi.org/10.6092/issn.1971-8853/15910>.

208 Sullivan, Law of the List (n 207); L Amoore, ‘Biometric Borders: Governing Mobilities in the War on Terror’ 25 (3) (2006) Political Geography 336. <https://doi.org/10.1016/j.polgeo.2006.02.001>; D Van Den Meerssche, ‘Virtual Borders: International Law and the Elusive Inequalities of Algorithmic Association’ 33 (1) (2022) European Journal of International Law 171. <https://doi.org/10.1093/ejil/chac007>.

209 Gorwa et al (n 193).

210 Ibid.

211 Amoore, Politics of Possibility (n 45); Sullivan, Law of the List (n 207).

212 Amoore, ‘Biometric Borders’ (n 208); Sullivan, Law of the List (n 207); W Larner, ‘Spatial Imaginaries: Economic Globalisation and the War on Terror’ in L Amoore and M de Goede (eds), Risk and the War on Terror (Routledge 2016) pp 41–56.

213 Van Den Meerssche (n 208); K Iveson and S Maalsen, ‘Social Control in the Networked City: Datafied Dividuals, Disciplined Individuals and Powers of Assembly’ 37 (2) (2018) Environment & Planning D: Society and Space. <https://doi.org/10.1177/0263775818812084>.

214 On the relevance of Foucauldian and Deleuzian understandings of surveillance and control in relation to algorithmic moderation generally see J Cobbe, ‘Algorithmic Censorship by Social Platforms: Power and Resistance’ 34 (2021) Philosophy & Technology 739 <https://link.springer.com/article/10.1007/s13347-020-00429-0>.

215 See, eg, Ericson (n 205); M de Goede, ‘Risk, Preemption and Exception in the War on Terrorist Financing’ in L Amoore and M de Goede (eds), Risk and the War on Terror (Routledge 2016) pp 113–27.

216 Parfitt and Bryant (n 44).

217 Amoore, Politics of Possibility (n 45).

218 Van der Heijden (n 74); S Jasanoff, ‘Beyond Calculation: A Democratic Response to Risk’ in A Lakoff (ed), Disaster and the Politics of Intervention (Cambridge University Press 2010) pp 14–41.

219 Eder (n 11); Kaminski, ‘Voices In’ (n 148); Carvalho (n 10).

220 Global Network Initiative & Digital Trust & Safety Partnership (n 89); P Popiel, ‘Digital Platforms as Policy Actors’ in T Flew and FR Martin (eds), Digital Platform Regulation (Palgrave MacMillan 2022) pp 131–50.

221 Griffin, ‘Public and Private Power’ (n 92).

222 Jasanoff (n 218); Landau (n 62).

223 Griffin, ‘Public and Private Power’ (n 92); CW Lee, M McQuarrie and ET Walker (eds), Democratizing Inequalities: Dilemmas of the New Public Participation (NYU Press 2015).

224 See generally S Marks, ‘False Contingency’ 62 (1) (2009) Current Legal Problems 1. <https://doi.org/10.1093/clp/62.1.1>.