1. Introduction
In a 1942 short story, Argentinian author Jorge Luis Borges informs us that animals can be divided into 14 categories. These include, among others, ‘stray dogs’, ‘sirens’, ‘belonging to the emperor’, ‘fabulous’, ‘included in the present classification’ and ‘etc’.Footnote 1
In 2022, the European Union (EU) passed the Digital Services Act (DSA), overhauling the regulation of digital intermediary services, in particular platforms publishing user-generated content. Article 34(1) of the DSA is not entirely dissimilar to Borges’ taxonomy. It requires companies running ‘very large online platforms’ (VLOPs, those with over 45 million EU users) to regularly assess and mitigate risks associated with the functioning and use of their services in the following areas: dissemination of illegal content; negative effects on fundamental rights; negative effects on civic discourse, electoral processes or public security; and negative effects in relation to gender-based violence, the protection of public health and of minors, and serious negative consequences for people’s physical and mental wellbeing.Footnote 2 Like Borges’ list, these categories are heterogeneous in their apparent purposes, subject matter, and levels of abstraction; sometimes extremely broadly or vaguely defined; and overall somewhat lacking in coherence.Footnote 3
Characteristically, Borges attributes his taxonomy to a fictional translation of a Chinese encyclopaedia by a real German translator, Franz Kuhn.Footnote 4 Many of his stories feature similar (semi-)fictional attributions. One effect of this literary device is to highlight the situated and socially constructed nature of knowledge and narratives. Arrestingly strange ideas, like the library containing every possible book or the map the same size as the territory,Footnote 5 are not just presented as abstract intellectual curiosities – they come from somewhere and are mediated by someone. Borges’ work suggests that texts are never static: as they travel, they are actively interpreted, translated and reconstructed in different contexts. Similarly, legal texts have no fixed meaning: what law means in practice depends on how it is interpreted and applied, by specific people and institutions under particular conditions.Footnote 6 In practice, Article 34(1) cannot mean everything at once, but nor is its meaning static or predetermined. What systemic risk management comes to mean in practice will depend on how regulators, companies and other stakeholders interpret the DSA, in line with their own material interests and political preferences, and how they try to influence and contest each other’s interpretations.
These points are not particularly original, but nor is the DSA’s regulatory approach. Risk management is regarded by many scholars as a central organising principle of the modern regulatory state,Footnote 7 or of modern industrialised societies in general.Footnote 8 Corporate risk management obligations have become ubiquitous in many regulatory fields, including digital law.Footnote 9 Risk-based regulatory approaches have also been extensively considered and critiqued by scholars in fields ranging from law and regulatory studies to political science, sociology, and science and technology studies. Informed by this literature, this article aims to critically reflect on the political implications of framing all kinds of issues related to platform governance as risks to be managed – and specifically, to be managed through internal corporate procedures.
The DSA systemic risk framework has already attracted significant academic attention, much of which situates it in relation to prior scholarship on risk management and risk regulation.Footnote 10 Typically, this literature focuses on evaluating its ability to achieve goals such as accountability, efficiency or fundamental rights protection; highlighting problems, such as risks of corporate capture; and/or providing practical advice on how VLOPs and regulators should approach risk management.Footnote 11 These can all broadly be classed as functionalist analyses,Footnote 12 in the sense that they adopt the basic premise that certain social impacts of large online platforms constitute risks which need to be managed, and focus on how this might be achieved effectively.Footnote 13
Taking a slightly different approach, this article makes two main contributions. First, instead of evaluating how ‘effectively’ the DSA will achieve particular aims, I step back to consider a logically prior question: what are the implications of framing its policy aims in terms of ‘risks’ in the first place? Second, I take an explicitly social constructionist view of risk, as ‘a way of representing events so they might be made governable in particular ways, with particular techniques, and for particular goals’.Footnote 14
Social constructionist perspectives highlight that the concept of risk is fundamentally ambiguous and open to interpretation, but this does not mean that how risks are constructed is entirely contingent or open-ended. Rather, it is shaped by existing material and institutional structures, and by the goals, interests, and relative power and resources of actors deploying risk management techniques in the service of specific political objectives. Crucially, this means ‘some people have a greater capacity to define risk than others’.Footnote 15 Yet often, the emphasis on private-sector technical expertise that characterises risk regulations like the DSA serves to defuse and disguise political disagreements and conflicting interests. By theorising systemic risks as socially constructed, the article offers new insights into how powerful actors – notably including VLOPs, but also regulatory agencies and other state institutions – will enjoy outsized power to determine how risks are understood and managed.
I focus on two broad themes: the discursive effects of the ‘risk’ framing, and the institutional structure of corporate risk management obligations. In each case, I draw comparisons and insights from critical literature on risk regulation in other fields to analyse the legal text of the DSA and available evidence on its implementation so far. This helps situate the DSA’s risk-based regulatory approach in relation to longer-term (de)regulatory trends, suggesting how it could facilitate corporate capture, while sidelining political contestation of the underlying objectives and values shaping technology and its governance.
However, the relatively novel expansion of risk regulation to media and communications platforms calls for scholarship to go beyond existing literature on corporate risk management. Given the interest of state actors in shaping the construction of risks related to online media and communications in ways that justify political repression, the article concludes by suggesting that scholarship on risk in security, law enforcement and counterterrorism could offer useful perspectives for future research. For example, it could help to theorise the co-production of risk by state and corporate actors, the labelling of certain individuals and social groups as ‘risky’, and the use of algorithmic technologies to manage risks, as well as pointing to empirical methodologies that could help shed light on how DSA systemic risks are constructed in practice.
While this analysis focuses on the DSA, these insights and research directions are of broader relevance: many other technology regulation initiatives in the EU and elsewhere,Footnote 16 as well as regulatory efforts in other fields like sustainability,Footnote 17 follow a similar regulatory approach. As corporate due diligence and risk mitigation obligations become an increasingly ubiquitous response to all kinds of political problems, scholarship needs to move beyond internal critiques of these regulations’ ability to achieve their goals ‘effectively’, and ask more fundamental questions about what problems they are solving and whose interests they serve.
2. Risk regulation in the DSA
A. The systemic risk management framework
The DSA systemic risk framework is set out in Chapter III Section 5, which applies only to the largest online platforms (‘VLOPs’, defined in Article 33 as those with over 45 million monthly active users in the EU). Online platforms are defined in Article 3(i) as online intermediary services which make user-generated content available to the public: this notably includes social media, as well as e-commerce and adult content platforms which host third-party content. Section 5 also applies to search engines (defined in Article 3(j) as services allowing users to query websites based on keywords or other inputs) with over 45 million users.
Twenty-five services are currently designated as VLOPs.Footnote 18 Their owners include companies like Meta (owner of Facebook and Instagram), Alphabet (owner of Google and YouTube) and Microsoft (owner of LinkedIn), which are among the largest in the world, as well as companies which are smaller but dominate their respective markets, like TikTok and Pornhub. In recent years, these ‘big tech’ platforms have attracted increasing attention from policymakers, media and the public – attention encompassing specific policy concerns, such as disinformation and child mental health; more structural political-economic issues, such as concentrated corporate power; and conflicts between basic values, such as tradeoffs between (different understandings of) freedom of expression and public safety.
Arguably, Articles 34–5 represent the DSA’s main way of addressing such concerns. Most other DSA obligations have a narrower scope, generally focusing on individual user rights, illegal content removal, or transparency. In contrast, Articles 34–5 explicitly focus on ‘systemic’ issues and collective and social interests. Article 34(1) requires VLOPs to ‘diligently identify, analyse and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services’. This must include the following areas:
• dissemination of illegal content;
• negative effects on fundamental rights;
• negative effects on civic discourse, electoral processes or public security;
• negative effects in relation to gender-based violence, the protection of public health and of minors, and serious negative consequences for people’s physical and mental wellbeing.
Assessments must be conducted at least yearly, and before launching ‘functionalities that are likely to have a critical impact on the risks identified’. Article 35 then requires VLOPs to implement ‘reasonable, proportionate and effective mitigation measures’. Article 37 requires independent audits of risk assessments and mitigation measures.Footnote 19 Their adequacy will ultimately be overseen by the Commission (which is equipped with an array of investigatory powers, and can impose fines of up to 6 per cent of worldwide annual turnover: see Chapter IV, Section 4) with input from the European Board for Digital Services (EBDS), representing national regulators (Article 63).
As noted in the introduction, the scope of these articles is extraordinarily broad. For example, I struggle to think of any widely discussed issue in platform governance that could not somehow be framed in terms of fundamental rights (which is not to say that this is always the most useful framingFootnote 20). Attempts to define fundamental rights impacts can at least draw on an extensive body of EU and international case law, scholarship and other legal materials. On the other hand, ‘civic discourse’ is neither an established legal term nor defined in the DSA, and what constitutes good or bad civic discourse is hardly an issue where any consensus exists. In Martin Husovec’s words, Article 34(1)’s overall message is that platform companies should be doing something about risks to ‘everything we cherish’.Footnote 21 The problem is not only that people cherish different things, but also that they understand them very differently.
Articles 34–5 provide some limited further guidance. Article 34(2) states that risk assessments should consider:
• the design of recommendation systems and other algorithmic systems;Footnote 22
• content moderation systems;Footnote 23
• terms and conditions and their enforcement;Footnote 24
• advertising systems;Footnote 25
• ‘data-related practices’;Footnote 26
• ‘intentional manipulation’ of the service;Footnote 27
• dissemination of content which is illegal or violates terms and conditions;Footnote 28
• ‘regional and linguistic aspects’.Footnote 29
Article 35(1) lists eleven indicative types of risk mitigation measure, including – among others – the following extremely broad categories:
• ‘adapting the design, features or functioning’ of services;Footnote 30
• adapting content moderation systems,Footnote 31 recommendation systems and other algorithmic systems;Footnote 32
• ‘reinforcing the internal processes, resources, testing, documentation, or supervision of any of their activities’.Footnote 33
Overall, this additional guidance is also very wide-ranging, general and abstract. It serves less to clarify the interpretation of Articles 34–5 than to broaden the range of potential interpretations even further by making clear that aspects which are not otherwise major focuses of the DSA, like interface design or internal operational processes, are also within scope.
The DSA also creates numerous avenues for the interpretation of Articles 34–5 to be clarified over time through delegated legislation, soft law and industry standards.Footnote 34 The Commission and EBDS can both issue guidance, which is not officially binding, but likely to be influential (for example, setting expectations for the Commission’s enforcement strategy). Article 45 also provides for codes of conduct to be developed primarily by regulated companies, but with supervision and input from the Commission and EBDS. Non-compliance with relevant codes could be a ground for enforcement proceedings.Footnote 35 Finally, auditing standards and evaluation metrics will likely significantly influence how companies approach compliance in practice, and how compliance is assessed by regulators.Footnote 36 Consultancy services, both from the ‘big four’ companiesFootnote 37 and from specialised platform regulation consultancies like Paris-based Tremau,Footnote 38 could also play an important role in shaping VLOPs’ compliance practices.
The DSA also envisages a role for civil society and independent experts.Footnote 39 In some cases these actors will directly give input to VLOPs and regulators: for example, Recital 90 provides that VLOPs should consult with organisations representing relevant stakeholders during their risk assessment processes, while Article 45 provides for civil society to participate in drafting codes of conduct. More generally, independent researchers – especially academics – are envisaged as providing evidence to guide risk management (which Recital 90 provides should be ‘based on the best available information and scientific insights’) and monitoring VLOPs’ and regulators’ implementation of the risk management framework. Notably, Article 40 requires VLOPs to provide data to facilitate research that ‘contributes to the detection, identification and understanding of systemic risks’. Data access has attracted significant attention from researchers and civil society as a means of strengthening platforms’ accountability to the public.Footnote 40 However, questions could be raised about the resources and capacities available for such independent oversight, and about whether the institutional incentive structures guiding academic research are aligned with the objectives of regulatory oversight.
B. Situating the DSA’s approach to risk regulation
It could be somewhat surprising that the DSA frames so many disparate values, concerns and aspects of platform governance in terms of risks to be managed. Risks are generally understood as possible harmful future events – classically quantified according to probability and severity.Footnote 41 The DSA is generally not concerned with specific harmful events that might or might not happen, but with ongoing, diffuse and often indirect impacts of VLOPs’ business activities. Empirically, such impacts are typically highly contestable. For example, there is significant scientific uncertainty and disagreement around platforms’ impacts on issues like misinformation, political polarisation and child mental health, extending to basic questions of prevalence and causation, as well as the impacts of policy interventions.Footnote 42 More fundamentally, given the essentially indeterminate and contested meaning of concepts such as fundamental rights, civic discourse or public security, probabilistic approaches to assessing negative impacts seem not only value-laden and contestable but on some level arbitrary.
From another perspective, however, the DSA’s regulatory approach is typical. In recent decades, risk management obligations have steadily expanded across a vast array of regulatory fields – from environmental pollution to financial stability to the supply chains of multinational corporations. Often, the risks in question are defined in far more fluid and value-laden terms than the classic probability/severity definition would suggest. For example, various industry best practices, soft law commitments and – increasingly – binding legal frameworksFootnote 43 now oblige corporations to conduct ‘human rights due diligence’ and assess diverse ‘environmental, social and governance’ risks.Footnote 44 Accordingly, risk assessment has expanded beyond quantitative and probabilistic approaches, to encompass a range of more flexible, qualitative and open-ended evaluative techniques.Footnote 45
In regulatory regimes like the DSA, ‘risk’ functions less as a defined approach to identifying, quantifying and preventing harm, and more as a ‘boundary object’ or common language that can help mobilise a variety of institutional practices, techniques and resources in the service of diverse regulatory goals.Footnote 46 Its ambiguity is part of what makes it useful. Unsurprisingly, then, the term ‘risk regulation’ is also used in diverse contexts and with different, sometimes overlapping meanings.Footnote 47 This article highlights three broad regulatory traditions whose influence is particularly evident in the DSA.
First, regulatory goals can be framed in terms of managing risks to the public. Hood, Rothstein and Baldwin influentially defined risk regulation as ‘governmental interference with market or social processes to control potential adverse consequences’.Footnote 48 Nothing in this definition specifies how restrictively governments should regulate businesses; as they show, risk regulation ‘regimes’ – their term for the ensemble of governmental and non-governmental actors, institutions, norms and practices involved in regulating particular areas – have very different policy goals, regulatory tools and institutional arrangements, encompassing a broad spectrum of stricter and more laissez-faire approaches.Footnote 49
Nonetheless, framing the goals of regulation in this way resonates in several respects with deregulatory ideologies.Footnote 50 Understanding regulation as ‘interference’ with markets assumes an a priori separation between markets and regulation, implying that the former would exist in some natural state without the latter – and that the default option should be non-interference, except where regulation is justified to prevent specific harms. This ‘market naturalist’ framing obscures more subtle ways that state policies structure marketsFootnote 51 (often in ways that actively facilitate concentrations of wealth and powerFootnote 52) and can discourage consideration of more structural reforms, by focusing attention on incremental reforms aimed at mitigating specific negative externalities of businesses’ activities. Indeed, in practice, risk regulation is often accompanied by approaches to ‘cost-benefit analysis’ that require extensive evidence to justify regulatory interventions, serving to block many restrictions on business activities.Footnote 53 Finally, in technology regulation, this framing also resonates with dominant narratives framing regulation as a tradeoff between safety and innovation.Footnote 54 It suggests that governments should allow maximum market-driven innovation, as long as adverse consequences are controlledFootnote 55 – as opposed to questioning the underlying goals and interests deciding what technological ‘innovations’ the private sector develops.
These tendencies are visible in Articles 34–5 DSA. Regulatory objectives are framed in terms of preventing specific ‘negative impacts’ of VLOPs’ activities, which implies that, having mitigated these specific risks, VLOPs can otherwise continue business as normal. It appears likely that the enforcement of this particular regulatory regime will be on the more interventionist end of the spectrum: the Commission initiated five enforcement actions in the first year of its operation.Footnote 56 Nonetheless, any enforcement can be challenged in court – by very well-resourced companies with top-tier legal teams – and will thus demand strong evidentiary justifications.
Second, in what is generally called risk-based regulation, risk does not refer to the regime’s objectives but rather to the allocation of public resources. Regulatory obligations and/or oversight focus on companies or activities considered to pose the most risk to the regulatory regime’s objectives – often assessed using standardised indicators inspired by the classic probability/severity approach.Footnote 57 This often overlaps with ‘responsive regulation’ approaches, where oversight varies depending on how compliant or cooperative businesses have been in the past.Footnote 58 As leading regulation scholar Julia Black points out, the fact that agencies have limited resources and cannot achieve perfect compliance is not distinctive to risk-based regulatory regimes; rather, risk-based regulation seeks to explicitly recognise this ubiquitous problem and manage it in a structured way.Footnote 59 Nonetheless, by emphasising evidence-based and efficient use of government resources, risk-based regulation also resonates with the ethos of the neoliberal era, which saw spending cuts across many areas of government, as well as with deregulatory ideologies prescribing that government restrictions on business should be avoided except where clearly justified.
The DSA’s tiered structure, where VLOPs have stricter obligations because of their large user bases, clearly aligns with risk-based regulation. However, the influence of risk-based approaches can also be seen in how the regulatory obligations for VLOPs are being implemented. At least one national Digital Services Coordinator (DSC), the Irish media regulator Coimisiún na Meán (CnaM), is taking an explicitly risk-based approach, assessing companies according to formalised risk factors in order to determine monitoring and enforcement priorities.Footnote 60 Commission officials have stated that they are taking a broadly responsive approach to DSA enforcement, prioritising dialogue with VLOPs and voluntary commitments, and treating formal enforcement as a last resort.Footnote 61
Finally, risk management can be the basis for the institutional structure of regimes which – like the DSA – require companies to manage specified risks to the public. Regulators play the secondary role of overseeing whether these internal risk management systems are adequate, hence this approach is also sometimes called ‘meta-regulation’.Footnote 62 In effect, it seeks to harness existing corporate systems and resources – notably including ‘enterprise risk management’ (ERM) and auditing practices developed to address commercial risksFootnote 63 – and turn them towards a wider variety of public policy goals.Footnote 64
Historically associated with financial regulation,Footnote 65 this approach has become common in numerous fields, prominently including data protection and technology regulation,Footnote 66 as well as business and human rights. ‘Due diligence’ obligations for multinational corporations to assess and mitigate human rights and sustainability risks in their value chains have rapidly become a ‘new global orthodoxy’,Footnote 67 spreading from influential soft-law standards like the UN Guiding Principles on Business and Human Rights to legal regimes in several jurisdictions, including (following the 2024 Corporate Sustainability Due Diligence Directive) the EU. Articles 34–5 DSA can be seen as the result of cross-fertilisation between these regulatory approaches: they use meta-regulatory tools to regulate the design and operation of complex technological systems, but define their objectives in terms of fundamental rights and other abstract values.
Scholarship on meta-regulation identifies several advantages: it is more flexible and context-sensitive than prescriptive industry-wide standards, and allows regulators to take advantage of companies’ technical and financial resources,Footnote 68 as well as their greater industry- and company-specific knowledge.Footnote 69 On the other hand, scholars have pointed out – and empirically documented in detailFootnote 70 – that it creates obvious room for companies to interpret regulatory goals and standards in self-serving ways, as well as potentially eroding public-sector capacities.Footnote 71 From either perspective, meta-regulatory approaches once again resonate with deregulatory agendas, assuming that governments should be deferential to private-sector expertise and that regulatory obligations should be tailored to minimise burdens on business activities.
In summary, then, the DSA’s risk management framework draws from all three regulatory traditions identified – unsurprisingly, since all three have become increasingly popular in diverse regulatory fields, including other areas of EU tech regulation. Each of the three is in principle compatible with a wide range of regulatory goals and strategies, including more restrictive and precautionary approaches. However, they generally show the influence of the neoliberal era of privatisation and deregulation during which they developed: they dovetail neatly with assumptions that private-sector expertise is superior to public oversight and that regulators should ‘interfere’ with corporate freedom and profits only to the minimum extent necessary. As the rest of this article will show in more detail, these ideological influences are also visible in the DSA.
C. The social construction of systemic risks
A core premise of this article is that risks are socially constructed. This is not to say that harms and dangers do not really exist – for example, that VLOPs do not impact mental wellbeing, civic discourse or fundamental rights – but rather that there is no ‘objective’ way of conceptualising, measuring and responding to such impacts.Footnote 72 Drawing on Foucauldian approaches to theorising the interrelationship of knowledge and power, as well as Latourian perspectives on the social construction of scientific knowledge, social constructionist scholarship has shown how the processes through which issues are recognised as problems, framed as ‘risks’, defined, assessed and finally mitigated are shaped by social institutions, material conditions and power dynamics.Footnote 73 Moreover, risk assessment is not about knowledge for its own sake, but fundamentally about informing decision-making.Footnote 74 Accordingly, how risks are defined necessarily depends on the objectives and values of the organisations involved in risk managementFootnote 75 – and, often, on negotiations and conflicts between actors with competing goals. This encompasses both top-down processes in which political actors mobilise evidence and advocate for their preferred problem framings and priorities,Footnote 76 and more bottom-up processes in which the quotidian knowledge production practices of professionals like academics, consultants or data scientists form the basis for shared understandings of risk.Footnote 77
Relatedly, while classic definitions of risk focus on preventing harm, risk also – especially in commercial contexts – connotes positive opportunities.Footnote 78 The dual imperatives for governments and businesses to take risks, but also to control them, mean that risk management itself presents opportunities for actors who can construct risks in ways that serve their own objectives. Insurance, auditing and consultancy businesses frame risks in ways that help them market their services.Footnote 79 Politicians and state institutions construct risks against which they claim to be able to protect the public, as a way to attract political support or institutional resources.Footnote 80 Thus, while existing political and economic institutions shape how risks are understood and managed, the legal and discursive framework of risk management can also alter existing institutional configurations and create new opportunities for political and economic actors to shape platform governance.
To take a concrete example, Article 34(1)(d) DSA mentions risks to public health, wellbeing and the protection of minors. In this context, several of the Commission’s early investigations and enforcement actions have focused on putative risks to children’s mental health.Footnote 81 A social constructionist perspective would emphasise that, while mental health problems are very real, how they are understood as a policy issue is contingent on many social and institutional factors. The policy choice to make this a priority in DSA enforcement reflects discussions and negotiations between politicians, regulators, VLOPs and other stakeholders (like NGOs) at both national and EU levels. In turn, these actors’ perceptions and priorities are influenced by the knowledge and discourses about platforms’ impacts on child mental health produced by journalists, scientists and other actors.Footnote 82 Some of these actors may frame safety risks in terms of children encountering harmful content – itself a very ambiguous concept, which can be understood in many ways, influenced by different political agendas: from feminist critiques of content promoting exclusionary beauty standardsFootnote 83 to homophobic politics framing information about LGBTQIA+ identities as a threat to children.Footnote 84 Others may associate potential mental health impacts with other aspects of platform governance, such as ‘addictive’ design features and recommendation algorithms.Footnote 85 In turn, other actors may contest these interpretations by emphasising risks to freedom of expression and other values posed by overzealous efforts to shelter children.Footnote 86 There is no external standard or objectively correct understanding of systemic risks to minors against which these different approaches could be measured. Instead, risks are produced through the interactions between these different political agendas, narratives and knowledge production practices.
To some degree, this is true of all forms of risk regulation. Even in fields like pollution and chemicals, whose association with ‘hard’ science and quantifiable indicators can make them appear more objective, defining risks involves contingent and context-dependent choices about the production of scientific evidence, as well as political choices about the distribution of harms and benefits between different social groups – both often heavily lobbied and contested by industry and other stakeholders.Footnote 87 However, there is at least a basic degree of consensus around the nature of the harms involved (cancer rates, nuclear accidents). Article 34 leaves infinitely more room for conflicting interpretations. Some risk areas mentioned (eg, public security, civic discourse) are essentially contested concepts, whose meaning necessarily depends on many other ideological premises. Others (eg, dissemination of illegal content) may appear more straightforward. However, even in these cases, there will be frequent conflicts between different risk areas and between different ways of interpreting, measuring and prioritising them, with no obviously correct answers.Footnote 88 Several other aspects of the DSA’s drafting compound this essential ambiguity. This notably includes the absence of any definition of ‘systemic’;Footnote 89 the lengthy and non-exhaustive lists of risk factors and mitigation measures; and the extensive provisions for risk management standards to be developed and supplemented over time.
Overall, it seems clear that the DSA’s drafters were not trying to set out a detailed and prescriptive approach to risk assessment, but rather to create an open-ended framework which could accommodate diverse regulatory goals and priorities, and could be further developed and specified over time.Footnote 90 Reasons behind this approach could include a desire for flexibility to respond to new technological and economic developments, and to changing political priorities. More pragmatically, ‘constructive ambiguity’ is often necessary to reach consensus on a text – especially in the EU’s complex legislative process.Footnote 91 This makes the concept of risk as a ‘boundary object’ that can connect different institutional practices and perspectives particularly relevant. Policymakers with different or incompatible opinions about what is wrong with platform governance and how it should be reformed were able to agree that platforms pose risks which should be mitigated. Yet this agreement does not resolve conflicts over how platforms should be governed, but merely opens up new spaces for ongoing political struggles over the construction of risks.
Given the centrality of systemic risk management in the regulation of VLOPs – some of the world’s best-known, most valuable and most powerful companies – many stakeholders, with different and conflicting agendas, will be interested in the outcomes of these struggles. Evidently, these stakeholders’ capacities to set political agendas, marshal evidence and establish consensus around their preferred risk framings will largely reflect established inequalities of resources and political power.Footnote 92 However, the DSA’s risk regulation regime is also an important piece of the political opportunity structure within which such conflicts will play out.
From the brief overview above,Footnote 93 it is already apparent that while the DSA delegates significant discretion to VLOPs, it does not give them carte blanche, but aims to establish an ecosystem of government, corporate and civil society stakeholders who will all have input into what issues constitute systemic risks and how VLOPs should manage them.Footnote 94 This aligns with scholarship on risk regulation which argues that open-ended or ambiguous regulatory provisions like Articles 34–5 can be resolved as common understandings of risk coalesce within ‘interpretative communities’ involved in implementing a given regulatory regime.Footnote 95 This is easier ‘within sector-specific regulatory regimes where the regulated sector forms a relatively tight-knit community’.Footnote 96 Such a community is already very visible around the DSA. Conferences, events and consultations regularly bring together regulatory agency staff, academic researchers, NGOs and industry experts, providing opportunities for them to repeatedly meet, exchange information and form professional and social connections.
Importantly, however, access to and participation in this ecosystem is far from equal.Footnote 97 Scholarship on the social amplification of risk argues that interest group politics is a crucial factor shaping how people and institutions understand risks. Stakeholder groups produce, mobilise and frame evidence in order to shape risk management in ways that favour their own ideologies or interests, and that help marshal support for their broader political agendas.Footnote 98 These processes are structured by pervasive disparities of power and expertise. Interest groups who have more resources, relationships with other influential actors, and capacities to produce and engage with expert knowledge will be better able to build consensus behind their preferred risk framings. Consequently, risks are often defined and managed in ways that reinforce existing inequalities.Footnote 99
Accordingly, the following sections will analyse the political implications of the DSA’s regulatory approach in more detail, synthesising existing scholarship on two particular aspects of risk regulation: the discursive effects of framing social and political issues in terms of risks to be managed, and the institutional approach of corporate risk management obligations.
3. Risk, discourse and politics
Critical literature on the various forms of risk regulation discussed in section 2(b) has highlighted numerous ways in which the conceptual framework of risk management structures political discourse – shaping how problems are framed, what solutions are seen as feasible, and what kinds of knowledge and arguments are seen as authoritative. This section identifies three broad critiques that appear particularly relevant to the DSA. First, risk discourse reinforces boundaries – between different issues on the regulatory agenda, but also by excluding some issues entirely. Second, risk discourse privileges technocratic expertise over other forms of knowledge. Finally, because risks are conceived as objective harms – as opposed to distributive decisions that harm some and benefit others – risk discourse is depoliticising, obscuring conflicts between competing ideologies and material interests. These three tendencies help explain why, despite the inherent ambiguity of risk as a concept, risk regulation all too often tends to ‘[ratify] existing distributions of resources’.Footnote 100
A. Boundary reinforcement
Critical accounting scholar Michael Power suggests that risk regulation is essentially inspired by the ‘boundary preserving model’ of enterprise risk management (ERM).Footnote 101 These internal corporate processes, on which the DSA now seeks to build, focus on specifically-defined problems, internal decision-making procedures, and auditable metrics and documentation – instead of raising more challenging questions about the unpredictable interactions between companies’ internal systems and their external environment, or about whether their overall goals and operating logics serve the public interest.Footnote 102 Similar critiques are regularly raised in scholarship on ESG, sustainability reporting and climate due diligenceFootnote 103 and could also be applied to the DSA.
For example, in the DSA context, the boundary-preserving tendencies of ERM may be exacerbated by the demands from many stakeholders for industry-standard metrics and benchmarks for particular risks. This may certainly have advantages (such as facilitating comparisons between different companies and over timeFootnote 104) but is unlikely to encourage consideration of complex interactions between risks and mitigation measures in different areas, which would demand context-sensitive approaches that are harder to quantify or standardise.Footnote 105 Empirical platform governance research emphasises that harms often result from complex interactions between platforms, their user bases and the wider economy and digital and media ecosystems.Footnote 106 For example, standardised metrics of algorithmic bias in content moderation overlook how mistaken moderation decisions are disproportionately harmful to users in situations of economic precarity or vulnerability.Footnote 107 Formalised, standardised risk assessment procedures are inapt to capture such context-dependent differential impacts.Footnote 108
Assessing risks individually may also obscure conflicts and tensions between different goals mentioned in Article 34(1), and make it difficult to recognise that risk mitigation measures can themselves be harmful. For example, mitigating risks by removing harmful content will necessarily restrict users’ freedom of expression and (given well-documented biases in moderation tools) equal access to platforms.Footnote 109 Article 35(1) DSA recognises this to some extent, by providing that VLOPs choosing mitigation measures should consider their impact on fundamental rights. However, with little guidance on how such impacts should be identified and evaluated, or how potential conflicts should be resolved, it is unclear whether this provision will have much concrete impact.Footnote 110
A second form of boundary reinforcement is that the DSA requires each VLOP to assess and mitigate risks associated with its own platform. Coordination between companies is promoted to an extent by some other aspects of the DSA, notably codes of conduct.Footnote 111 However, the risk management procedures used to document and evaluate compliance focus on the practices of individual companies.Footnote 112 This could discourage consideration of issues involving the cumulative effects of many companies’ activities, or business practices that appear justifiable within one company but more problematic at the level of the industry as a whole.Footnote 113 For example, for various commercial and regulatory reasons, most major platforms ban content deemed sexually explicit or overly suggestive.Footnote 114 Many scholars and activists consider such bans excessively restrictive of adults’ freedom of expression, and particularly harmful to certain vulnerable and marginalised groups, such as sex workers and LGBTQIA+ people (especially because such policies are typically enforced in overinclusive and discriminatory waysFootnote 115). Any individual VLOP might conclude that such risks are outweighed by benefits in areas like child safety and privacy; indeed, banning sexual content could be portrayed as a positive compliance measure aimed at mitigating risks in these areas. However, this misses an arguably more important question: what are the implications for freedom of expression generally, and for groups particularly affected by these policies, if people are unable to post and interact with sexual content on all major online platforms?
Power also points out that ERM processes are designed to address risks to companies’ commercial interests, which ‘are more or less an exogenous input into the model with the consequence that it is hard to enlist such a framework in challenging the objectives themselves’.Footnote 116 Regulations like the DSA seek to redirect these processes towards a wider range of public-interest goals. However, they are built on the same corporate systems and share the same structural limitation: because they ask how a given company can better manage risks in the course of its business activities, they sideline questions about whether these activities are in themselves problematic. This is what allows fossil fuel companies to highlight in ESG and due diligence reporting that they are reducing emissions produced during fossil fuel extraction, not mentioning that this business activity is as such incompatible with effective climate policy.Footnote 117
Though less of an existential threat to society, the business model of today’s leading search engines and social media platforms – aggregating large volumes of data about users to target them with personalised adverts – is also considered by many to be inherently socially destructive. This might be because it threatens privacy, because it incentivises and promotes divisive or sensationalist content, and/or because curating content based on its commercial value to advertisers is inherently exclusionary and sidelines other important goals and values for media governance.Footnote 118 Similar points could be made about dominant e-commerce platforms whose business models revolve around maximising consumer spending regardless of its social and environmental impacts.
Regardless of the merits of particular arguments along these lines, they clearly raise important questions touching on many values mentioned in Article 34(1), such as fundamental rights, media pluralism and users’ well-being. Yet the DSA systemic risk framework provides little space to address such questions. First, evidently, no VLOP is likely to conclude that it should mitigate risks by shutting down its core business. It would also be difficult for regulators, auditors or other stakeholders to argue this: the implicit premise of risk regulation is that businesses can and should continue their activities, as long as specific associated risks are addressed. Second, it could reasonably be argued that the surveillance advertising business model is not in itself unacceptable, but that values such as media pluralism or freedom of expression are unacceptably undermined when all major media platforms have the same business model and there are few comparable alternatives. Such questions are also outside the boundaries of Articles 34–5, which exclusively call for individual VLOPs to assess the impacts of their own services. Finally, this boundary reinforcement also circumscribes the role of regulatory agencies, who are charged with overseeing individual VLOPs’ compliance with Articles 34–5 – that is, ensuring more ‘responsible and diligent’ corporate conductFootnote 119 within existing market structures, rather than more fundamentally reforming them.Footnote 120
B. Technocracy
As Section 2(c) argued, risk management is closely bound up with knowledge production practices which enable the identification and evaluation of risks. Crucially, this also involves practices of ‘authorisation’ which deem some forms of knowledge and evaluative techniques more valid than others.Footnote 121 A consistent theme in scholarship on risk management and risk regulation is that they privilege technical knowledge, scientific evidence and professional expertise over other ways of understanding policy issues. Consequently, risk regulation generally authorises professional classes who have privileged access to scientific and technical expertise to determine how risks should be assessed and managed on everyone else’s behalf.Footnote 122 Insofar as other actors attempt to contest these decisions, they are incentivised to deploy the same discursive framings and forms of evidence that these elite actors deem authoritative.Footnote 123
In fields like chemicals and environmental law, risk regulation traditionally privileged quantitative metrics and ‘hard’ scientific evidence.Footnote 124 As risk regulation has expanded to encompass less quantifiable and more explicitly normative domains, it has also opened up to a wider range of evaluative techniques and forms of evidence.Footnote 125 Importantly, however, these approaches still tend to privilege elite technical knowledge. Corporate human rights due diligence (and its offshoot in the DSA’s concept of systemic risks to fundamental rights) provides a good example. Due diligence obligations not only include obviously unquantifiable normative standards, but explicitly envisage going beyond technical or scientific knowledge, in particular via ‘stakeholder engagement’ with affected communities.Footnote 126 Yet framing such issues in terms of human rights – that is, in the language of a technically complex legal and institutional field, where authoritative claims rely on specialised knowledge of an array of treaties, case law and interpretative practices – still privileges professional experts, and stakeholder groups with the resources to deploy legal expertise.Footnote 127
Reinforcing these tendencies, the DSA’s various mechanisms for risks and mitigation measures to be negotiated and clarified generally seem to assume consensual processes led by technical and professional experts (auditors, consultants, industry experts, academics, etc.) rather than political contestation.Footnote 128 Provisions for civil society consultation might provide more opportunities for discussion of underlying policy goals and values. However, such consultations are framed in terms of gathering ‘evidence’ on objective or self-evident risksFootnote 129 rather than contesting how risks are defined in the first place. For example, Recital 90 recommends that VLOPs consult with organisations representing ‘the groups most affected by the risks’ – obscuring the essentially contestable and value-laden processes involved in determining what ‘the risks’ are and whom they affect. Ultimately, since the DSA’s risk management system centres around technical processes and expert assessments (risk assessments, audit reports, etc.), civil society actors will likely find it easier to influence policy debates if they can also frame their interventions in terms of technical expertise, rather than ideological differences.Footnote 130
C. Depoliticisation
This points to the third group of critiques, which have featured particularly prominently in technology regulation:Footnote 131 risk discourse is depoliticising, obscuring conflicts over how technologies should be designed, owned and operated. To a large extent, this follows from the previous two points. On one hand, privileging technocratic expertise enables regulators, corporations and other powerful actors to present their preferred risk framings as objective or apolitical, making it harder for other actors to contest their political agendas. On the other, many structural policy issues and reforms which are more obviously politically contestable are placed outside the boundaries of regulatory debates.
Many risk regulation measures, including the DSA, elide the question of risks for whom,Footnote 132 instead framing risks in terms of harms to an amorphous, unitary ‘public’ (as in Article 34(1)’s references to public health, security, etc.). Reliance on ‘objective’ technical evidence and metrics can ‘[portray] political decision-making as a neutral pursuit of utility maximization, devoid of winners and losers’.Footnote 133 Yet as noted above, risk management inevitably involves distributive and ideological conflicts. For example, there is no apolitical way to evaluate a mitigation measure that reduces the visibility of political disinformation, but also suppresses independent political news and commentary.Footnote 134
Characteristically, the DSA largely ignores such conflicts. Article 35(1) requires mitigation measures to be ‘reasonable, proportionate and effective’. What these criteria mean fundamentally depends on the goal being pursued: in the absence of consensus around what risk mitigation measures should be aiming to achieve, what is effective or reasonable cannot be evaluated. The DSA offers little guidance, since, arguably intentionally, the listed areas in Articles 34–5 accommodate a huge range of competing goals. The reference to proportionality – a framework classically used to assess and manage conflicts between human rights – might be more relevant to resolving disagreements, along with the reference (also in Article 35(1)) to considering the impact of mitigation measures on fundamental rights. However, these norms still frame conflicts in terms of abstract and universal values, to be ‘appropriately’ balanced in the interests of society as a whole.Footnote 135
This not only elides questions about who is harmed by risks, but also questions about who benefits from risky activities, and whether they should be undertaken at all.Footnote 136 As section 2(b) argued, risk regulation generally resonates with deregulatory ideologies that assume regulatory intervention should be minimised, given its costs for ‘innovation’Footnote 137 – often framed in political debates as an unqualified good. This sidesteps ‘greater issues of what the proper human meanings, conditions, limits, and purposes of scientific and technological innovation should be’.Footnote 138 Ongoing debates around topics including freedom of political speech and protest,Footnote 139 environmental impacts of digital infrastructure,Footnote 140 and the social (dis)utility of generative AIFootnote 141 make it clear that VLOPs’ business activities implicate important distributive and ideological conflicts: who gets access to (online) media, what kinds of technologies benefit society, and how limited resources should be used. Such questions clearly implicate values mentioned in Article 34(1), such as fundamental rights and public security. Yet framing platform governance in the dry and technocratic terms of risk management provides little space for them to be openly discussed.
4. Risk and institutional power
Depoliticising conflicts over how risks should be understood and managed does not mean they will not be resolved somehow. Article 34 cannot mean everything at once. What it ultimately comes to mean in practice will be determined by the institutions involved in implementing the DSA – most importantly VLOPs and regulators, although other private actors like auditors and civil society organisations will also be influential.Footnote 142 Thus, as well as analysing how the DSA discursively frames policy issues, it is important to consider how risk politics will be shaped by the institutions putting it into practice.
These two aspects are however closely related. There is an obvious complementarity between discursively framing policy issues as apolitical technical problems, and delegating their management to private companies, professional auditors and independent experts. In this respect, as section 2(b) showed, the DSA draws on ‘meta-regulatory’ approaches that are well-established in other fields, such as financial and environmental regulation and business and human rights. A correspondingly longstanding body of research has highlighted drawbacks of this approach – in particular, its tendency to shift power to corporations, who are often able to define risks and mitigation measures in ways that serve their own interests.
A. An essentially misguided project?
The strongest critical perspectives would hold that corporate risk management and due diligence obligations are fundamentally unhelpful responses to economic injustice, environmental degradation and human rights violations, because they work within the existing legal and economic structures of corporate capitalism, and thus exclude from the outset the structural reforms necessary to address such issues.Footnote 143 Arguably, risk management obligations not only ignore but actively undermine such structural changes, by legitimising corporations and helping them resist more interventionist regulatory proposals.Footnote 144
These arguments are certainly (and perhaps especially) relevant to the VLOPs regulated by the DSA systemic risk framework. A growing body of scholarship argues that their business models are fundamentally economically unjust and environmentally destructive, in light of factors like their structural reliance on low-paid labour and heavily polluting and resource-intensive digital infrastructure.Footnote 145 Evidently, issues around global justice and sustainability are not the DSA’s primary focus.Footnote 146 However, even focusing purely on the DSA’s core project of regulating the governance of user-generated content, it has been argued – as section 3(a) discussed – that business models revolving around surveillance advertising, engagement maximisation and commercial content curation are inherently socially harmful. On this basis, it could also be argued that the DSA’s risk management system stabilises and legitimises these harmful business models, at the expense of more radical regulatory efforts aiming to establish alternative models of platform governance.
I am generally sympathetic to such arguments. However, they are applicable not only to risk management, but to any approach focused on regulating rather than dismantling corporate platforms. This does not mean all regulatory approaches are the same, or unworthy of further analysis. Regulation which does not directly attack corporate power can still meaningfully limit it or create opportunities for contestation.Footnote 147 The following subsections thus aim to reflect on the specific regulatory approach chosen in the DSA – and to present a critical assessment of its approach to risk regulation that should be convincing regardless of the reader’s views on the more radical critiques discussed above.
B. Corporate capture
Considering different actors’ power to shape risk management, scholarship on risk regulation has in particular emphasised the problem of corporate capture.Footnote 148 Given the deregulatory orientation of risk regulation in general, and the discretion accorded to corporations in meta-regulatory regimes like the DSA, risks and mitigation measures are obviously susceptible to being defined in ways that suit corporate interests – especially where the corporations involved are as powerful and well-resourced as most VLOPs.Footnote 149
First, compliance teams can deliberately and strategically construct risks in ways that serve the company’s commercial interests. Given the vast scope of Articles 34–5 DSA, risk assessments will necessarily involve selecting and prioritising a limited number of ‘risks’ from among a nearly infinite number of potentially-relevant issues, then deciding how to define and measure those risks, and finally identifying and selecting among possible mitigation measures. Evidently, as companies make these decisions, not all of the available options will be equally appealing from a commercial perspective: some will be more costly or more disruptive to business operations than others. In this situation, VLOPs obviously have an incentive to choose those that are least costly.Footnote 150 There is also plenty of scope to minimise costs by reframing existing business practices (or minor modifications thereof) as DSA risk mitigation measures. For example, existing ‘brand safety’ tools developed to suppress ‘non-family-friendly’ content that is unappealing to advertisersFootnote 151 could be reframed as tools addressing risks to child safety.
Second, more generally, even where compliance staff are sincerely motivated to pursue public-interest regulatory objectives, compliance processes established within for-profit companies cannot be fully insulated from commercial considerations. Representatives of VLOPs have explicitly stated that they want DSA risk management processes to be maximally scalable and efficient, and to build on existing ERM and human rights due diligence processesFootnote 152 – which are essentially designed to mitigate commercial and reputational risks.Footnote 153 Specialist DSA consultancy Tremau has suggested that VLOPs should look at DSA risk assessments as investments that can also be leveraged for business goals, like improving customer satisfaction.Footnote 154 These assessments will also necessarily rely on internal databases and analytics tools originally designed for business purposes. Thus, even if corporate compliance departments are not consciously and strategically seeking to minimise costs, systemic risk management will inevitably be guided by data, resources and decision-making processes geared towards commercial objectives.
Compliance departments will have limited staff and resources (indeed, journalistic investigations and leaks suggest even the largest VLOPs’ ‘trust and safety’ teams are perennially understaffedFootnote 155). They may also find it difficult to persuade other teams whose performance metrics are based on revenue and other commercial goals to allocate resources to regulatory compliance and risk mitigation. These constraints would generally incentivise compliance staff to focus on more superficial and less disruptive mitigation measures: for example, rephrasing content policies, instead of improving automated moderation systems, which would require more work from highly-paid and in-demand software engineers.
Notably, Article 41 DSA requires VLOPs to establish an independent compliance department, structurally separated from other management functions. This has advantages as a way of (somewhat) insulating risk assessments from commercial goals. However, it could also make it easier for VLOPs to structurally sideline compliance staff, such that their activities (assessing risks, issuing guidance, etc.) signal to regulators and other actors that the company takes compliance seriously, but have little impact on other departments’ activities.Footnote 156
Indeed, regulatory and sociolegal scholarship has frequently observed a tendency for internal risk management processes to devolve into ‘box-checking’ or ‘cosmetic compliance’: companies focus on implementing procedures which signal compliance to regulators and other stakeholders, but achieve little substantive change.Footnote 157 This is particularly common in regimes like the DSA, which mandate the procedures companies should follow (assessing and mitigating risks, publishing reports, commissioning and responding to audits) rather than substantive outcomes. Often, this encourages what Power calls ‘secondary risk management’: organisations focus less on the ‘primary’ risks they are actually charged with managing than on the risk to themselves should their risk management systems be deemed inadequate – a risk they mitigate by ensuring that other actors see them following appropriate procedures.Footnote 158
Sociolegal scholarship also highlights several other characteristics of regulatory regimes that facilitate cosmetic compliance: vague or ambiguous legal rules, proliferation of regulatory guidance and standards from different sources, and lack of transparency.Footnote 159 All these features are visible in the DSA. For example, the vague and open-ended nature of Articles 34–5 will hinder external accountability. As emphasised above, it is a core feature of this regulatory framework that VLOPs have to choose between vast numbers of possible risk framings, assessment metrics and mitigation measures; these choices can be self-serving without being legally unjustifiable. Moreover, any enforcement decision finding that VLOPs’ preferred interpretation of Articles 34–5 is legally impermissible will be open to legal challenge, and well-resourced large corporations typically have a structural advantage in such legal proceedings.Footnote 160 Indeed, the mere possibility of such litigation could incentivise the Commission to focus on ‘safe’ interpretations of Articles 34–5 and avoid pushing for more radical changes in business practices – especially because, as section 3 argued, this kind of incrementalist approach is generally encouraged by the discursive framing of risk management.
The lack of transparency around risk management is also notable. VLOPs are only required to publish summary reports compiling their risk assessments, audit reports and audit responses, and only within three months of receiving the audit report (which could be a year after the original risk assessment).Footnote 161 Civil society organisations have repeatedly stressed that the opacity of risk assessments not only prevents them from scrutinising and criticising particular aspects of VLOPs’ risk management, but also, more generally, from understanding what kinds of risks and mitigation measures are being discussed and where they might want to focus their own resources.Footnote 162 The publication delay also increases VLOPs’ power to set the agenda for subsequent policy debates, as their risk assessments will highlight issues, formulate problem framings and propose solutions to which other stakeholders must later respond.Footnote 163
In this vein, an important strand of literature on corporate capture argues that corporations can not only approach internal risk management processes in self-serving ways, but can also influence how regulators and external stakeholders understand the regulatory regime. Sociolegal scholars Lauren Edelman and Ari Ezra Waldman theorise this as ‘legal endogeneity’, describing how regulatory interpretation comes to be shaped by the preferences of regulated companies.Footnote 164 Other scholars have discussed the influence of regulated companies on regulatory agencies and broader policy debates in terms of ‘discursive capture’Footnote 165 or ‘cultural capture’.Footnote 166
This influence can operate through several mechanisms.Footnote 167 Technical expertise, ample resources, and prominence at industry events and policy discussions can enable large corporations to establish widely recognised best practices that influence other actors’ expectations of ‘appropriate’ risk management.Footnote 168 They can also directly influence regulators’ priorities and perceptions of policy issues through lobbying and advocacy.Footnote 169 Finally, other actors may rely on corporations’ technical knowledge and tools to understand risks. For example, under the DSA, external research and compliance evaluations by regulators, auditors and independent researchers will largely rely on VLOPs’ own reports, databases and APIs, and will thus be influenced by their preferred metrics and problem framings.
Many characteristics of the DSA discussed in this section seem to be intentional and defensible regulatory choices. As section 2(c) noted, favouring flexible standards and procedures over prescriptive rules can accommodate changing regulatory contexts and priorities, and allow an expert community to negotiate shared understandings of appropriate risk management over time. The danger is that, because VLOPs bear primary responsibility for defining and managing risks and possess vast financial and technical resources, these shared understandings will ultimately be dominated by industry perspectives and preferences.
C. Risk and state power
Much of the literature discussed in section 4(b) suggests, implicitly or explicitly, that the alternative should be greater involvement of state institutions – understood as representing public interests, or at least being more accountable to them than companies are. Prescriptions to avoid corporate capture often involve increasing regulatory agencies’ involvement in risk management, and strengthening their capacities to monitor companies and set substantive standards.Footnote 170 More radical proposals aiming to curtail corporate power might focus on setting regulatory ‘red lines’ that clearly prohibit or mandate certain activities.Footnote 171
So far, the Commission’s DSA enforcement strategy seems to take some inspiration from both of these approaches. On the one hand, the Commission (like other regulators in important member states, such as Ireland’s CnaM) has built up a large DSA enforcement team, and claims to be in regular, ongoing dialogue with VLOPs regarding risk management and other DSA obligations.Footnote 172 On the other, it has also started issuing fairly prescriptive official guidance on certain risks (eg, child safetyFootnote 173). From the perspective of the scholarship discussed above, these efforts might be celebrated as strengthening public institutions’ capacities to shape risk management in line with the public interest. In fact, however, they have often been met with concern from civil society about state institutions exercising excessive influence over online freedom of expression.Footnote 174 In particular, given the importance of soft law standards and unofficial guidance within the DSA framework, and the Commission’s emphasis on dialogue with VLOPs and voluntary commitments, there are concerns that state actors will regulate online speech by informally encouraging VLOPs to manage risks in certain ways, rather than through legal procedures that are public and open to challenge.Footnote 175
This points to a gap in the existing literature on risk regulation, and an important topic for further research on the DSA: what does it mean when regulatory tools and techniques traditionally used in areas like chemicals, environmental damage or financial stability are redeployed to regulate media and communications? Many (though by no means all) of the VLOPs regulated by the DSA are social media platforms and search engines. These platforms occupy a central place in contemporary media and information ecosystems, exercising significant influence over areas like the production and distribution of news journalism, the organisation of activism and protest, and everyday political discussions. As such, it is widely recognised that prescriptive state regulation – in particular, when it relates to how platforms moderate users’ content – is highly likely to be deployed in ways that limit political freedoms, repress dissent, and/or target politically unpopular minorities. Navigating this tension between unchecked corporate power and excessive state control has long been a central problem of media law.Footnote 176 However, risk management has historically not played a prominent role in media regulation.Footnote 177 Conversely, literature on risk regulation in other fields may not fully illuminate its particular dynamics in the platform regulation context.Footnote 178
First, unlike environmental or financial regulation, the regulation of online media content inevitably raises complex issues around freedom of expression, access to information and other civil liberties.Footnote 179 Importantly, while regulatory obligations (or incentives) are susceptible to being used to intentionally target politically disfavoured speech, this is not the only problematic use of state power.Footnote 180 Even where state-mandated restrictions on content are aimed at (perceived) genuine harms such as hate speech, they are still likely to disproportionately affect marginalised social groups and political perspectives.Footnote 181 For example, if platforms seek to meet the Commission’s expectations around the mitigation of risks associated with specific types of illegal content by developing or adjusting automated moderation software to remove more such content,Footnote 182 this will necessarily also increase rates of ‘false positives’Footnote 183 – which tend to disproportionately affect marginalised social groups, due to structural biases within state institutions and platform companies.Footnote 184
Second, compared to the other regulatory fields mentioned, online content regulation more prominently involves heavily securitised policy areas such as disinformation, so-called ‘terrorist’ content, and child safety.Footnote 185 Police, security and law enforcement agencies are also involved in platform governance and regulatory enforcement: for example, providing lists of ‘extremist’ organisations whose content should be removed,Footnote 186 or reporting content to platforms for removal – often based on alleged illegality, but also often simply on the basis that it violates companies’ in-house content policies.Footnote 187 This involvement is also recognised and further institutionalised in the DSA. For example, law enforcement agencies can be certified as ‘trusted flaggers’ whose reports to platforms must be prioritised.Footnote 188
As in other securitised policy fields, the ‘risk’ framing and the delegation of risk management to VLOPs shift decision-making power not only towards corporations, but also towards executive institutions. Evidently, prescriptive legal rules restricting speech, expression and media are also open to political abuse (as has been amply demonstrated by recent Europe-wide crackdowns on the pro-Palestine and climate movementsFootnote 189). However, making and interpreting such restrictions heavily involves courts and legislatures, whose decisions are at least relatively transparent and contestable (as illustrated by the extensive documentation, public criticism and legal challenges to the aforementioned crackdowns). In contrast, framing online content regulation in terms of technical problems to be solved by private-sector experts shifts power from courts and legislatures towards the regulatory agencies overseeing corporations’ internal risk management processes, which are themselves opaque and difficult to challenge.
This regulatory oversight may formally focus on procedures rather than substantive standards,Footnote 190 but by developing best practices for risk management and (implicitly or explicitly) threatening enforcement action, regulators can effectively pressure VLOPs to restrict content they deem harmful.Footnote 191 For example, in the context of increasing regulatory attention to platforms’ putative impacts on child mental health (see section 2(c)), several VLOPs recently launched a new tool to coordinate identification and moderation of content deemed potentially harmful to children, for example because it encourages self-harm.Footnote 192 This is notable because there would generally be no legal obligation for them to remove such content. Thus, this development illustrates both the ‘function creep’ of moderation tools beyond their original purposes,Footnote 193 and the possibility for governments to use the construction of systemic risks to encourage platforms to regulate user content in ways that go beyond their strict legal obligations.Footnote 194 Commission statements have also publicly suggested that effectively mitigating risks should include promptly removing content reported by law enforcement.Footnote 195 In effect, then, risk management obligations could offer a further incentive for VLOPs to comply with law enforcement requests – even where they do not allege illegality, but only breaches of platforms’ terms and conditions, or where their justifications are less than convincingFootnote 196 – and could thereby facilitate content removals which circumvent formal legal processes. In this context, civil society groups have raised well-founded concerns that governments, regulators and police could use risk mitigation obligations to pressure platforms into suppressing political advocacy and activism.Footnote 197
As such, theorising DSA systemic risks only in terms of deregulatory agendas and corporate power, as this article has mostly done so far, seems incomplete – empirically, because it is insufficiently attentive to the role of state institutions in constructing risks, but also normatively, insofar as it points to alternative regulatory approaches that give state agencies even more power to regulate online content. The expansion of risk regulation to platform governance calls for more research on the intersections between state and corporate power in the construction and management of risk.
Importantly, it would be a mistake to conceptualise the two as opposite poles on a spectrum, such that more state intervention means less corporate capture and vice versa. Regulators’ and politicians’ political objectives and preferred risk framings may sometimes conflict with those of VLOPs, but may also often coincide. In particular, as section 4(b) argued, VLOPs will generally be incentivised to frame risks in terms that require relatively little investment of resources or disruption to business operations. If state actors wish to construct risks in ways that focus on monitoring and controlling political speech, the two sets of goals will be largely compatible. Both will be served by constructing risks in terms of harmful content and user activities that need to be identified and suppressed – something which can be achieved through relatively small adjustments to existing moderation systems – rather than in terms of structural problems in platform design and governance. From this perspective, the boundary-reinforcing, technocratic and depoliticising nature of risk discourse is advantageous for both VLOPs and governments: it can legitimise monitoring and surveillance of ‘risky’ user activity, while also deflecting attention from more structural critiques of corporate platform governance that would be in neither of their interests.Footnote 198
D. Critical security studies perspectives on risk
In this context, one promising avenue for future investigation of the politics of systemic risks in the DSA could be to draw from the extensive and theoretically rich literature on risk management as a mode of political power in fields like critical security studies and criminology.Footnote 199 Space does not permit more than a very brief discussion of this literature. However, a few points suggest potentially generative connections with scholarship on risk regulation and corporate risk management in the DSA.
First, like the sociological and STS literature on risk regulation relied on in this paper, scholarship on risk management in security, counterterrorism and law enforcement has generally been strongly influenced by social constructionist understandings of risk (in particular by Foucauldian perspectives on risk as a mode of governmentalityFootnote 200). On this basis, it provides insights into how politicians and state institutions discursively construct risks in ways that justify state repression, curtail civil liberties and target stigmatised minorities.Footnote 201 There are already examples of risk being mobilised in this way in the DSA context.Footnote 202 More research is needed on how regulators, politicians, professional experts and other stakeholders discursively construct particular social groups or activities as ‘risky’ within the DSA framework.
Second, although scholarship on risk in security studies has effectively illuminated how state institutions like police, counterterrorism and border agencies exercise power through risk management, it is far from exclusively focused on government institutions – instead tending to emphasise how risks are constructed and managed by heterogeneous networks involving different state institutions, political actors, international and supranational organisations, and private actors.Footnote 203 For example, private companies not only provide software and analytics tools that state institutions use to manage security risks,Footnote 204 but may also enforce their own risk management standards,Footnote 205 as well as intervening in policy discussions to promote particular risk framings and narratives.Footnote 206 This underlines the point that state and corporate power to construct risks are not necessarily in tension. Further research could integrate insights from security studies scholarship with scholarship on risk regulation in order to better analyse how knowledge and discourses about DSA systemic risks are co-produced by networks of actors and institutions spanning the public and private sectors.
In this regard, existing scholarship integrating security studies and sociolegal analysis could provide valuable methodological as well as theoretical directions. Such work has used empirical – often multi-sited or mixed-methods – research designs to explore how risk governance plays out in practice and how legal norms, technologies and institutional practices interrelate and reciprocally shape one another.Footnote 207 Especially given the extremely ambiguous and open-ended nature of Articles 34–5 DSA, and the range of state, corporate and non-governmental actors who could potentially be involved in shaping risk management, methodologically eclectic empirical research focused on the sociotechnical practices that translate and materialise legal standards will be essential to understanding how the DSA systemic risk framework plays out in practice, and what its political implications are.
Third, critical security studies scholarship has in particular explored the use of algorithmic risk assessment technologies in contexts like border controls and counterterrorism.Footnote 208 Here, parallels could be drawn with the automated filtering tools which are now ubiquitous in commercial content moderation.Footnote 209 Given regulators’ apparent focus on removal of harmful content as a key risk mitigation measure, expansion and fine-tuning of automated moderation seems likely to play a central role in VLOPs’ compliance efforts. Broadly, these tools can function by cross-referencing existing databases of banned content, often shared across the industry, or by using machine learning to evaluate signals from the content and its metadata, to estimate an overall probability that the content violates a given rule.Footnote 210 As such, there are clear parallels with risk assessment tools used in border controls – which may rely on relatively simple lists of banned or risky passengers, but may also use more sophisticated algorithmic technologies and integrate data from diverse sources to produce opaque aggregated risk scores.Footnote 211 Security studies scholarship on algorithmic risk governance could help to illuminate the power dynamics, political agendas and understandings of risk encoded and reproduced by automated moderation tools – and conversely, how the functioning of these tools and the data they produce shape perceptions of risk.
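To make the mechanism described above more concrete, the following is a minimal, purely illustrative sketch (in Python) of a hybrid automated moderation check: first an exact match against a shared database of hashes of known banned items, then a probabilistic ‘classifier’ score compared against a removal threshold. All names and values here (KNOWN_BANNED_HASHES, classifier_score, REMOVAL_THRESHOLD) are hypothetical and do not describe any particular VLOP’s system.

```python
# Illustrative sketch only: a toy hybrid moderation check combining
# (1) hash-matching against a shared database of known banned items and
# (2) a probability score from a stand-in 'classifier'.
# All names and values are hypothetical and chosen purely for illustration.

import hashlib
from dataclasses import dataclass

# Hypothetical industry-shared database of hashes of known banned content.
KNOWN_BANNED_HASHES = {
    hashlib.sha256(b"previously identified banned item").hexdigest(),
}

REMOVAL_THRESHOLD = 0.9  # assumed policy threshold for automated removal


@dataclass
class ModerationDecision:
    remove: bool
    reason: str
    score: float


def classifier_score(text: str, metadata: dict) -> float:
    """Stand-in for a machine-learning model estimating the probability
    that content violates a given rule. A real system would combine many
    content and metadata signals; this keyword count is purely illustrative."""
    risky_terms = {"banned", "prohibited"}
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, 0.5 * hits)


def moderate(text: str, metadata: dict) -> ModerationDecision:
    # Step 1: exact match against the shared hash database.
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest in KNOWN_BANNED_HASHES:
        return ModerationDecision(remove=True, reason="hash match", score=1.0)
    # Step 2: probabilistic scoring; removal only above the policy threshold.
    score = classifier_score(text, metadata)
    return ModerationDecision(remove=score >= REMOVAL_THRESHOLD,
                              reason="classifier", score=score)


if __name__ == "__main__":
    print(moderate("an ordinary post", {"language": "en"}))
```

Even in this toy form, the sketch shows where contestable choices are embedded: which items enter the shared database, which signals the classifier weighs, and where the removal threshold is set – precisely the kinds of decisions that the scholarship discussed above treats as constructions of risk rather than neutral engineering.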
That said, there are important differences between these two contexts. Scholarship on border controls and state surveillance has most often focused on how risk assessment technologies attach risk to people, enabling the exercise of power in highly individualised and selective ways: for example, barring ‘risky’ individuals from travelling while enabling free movement for others.Footnote 212 One open question is how theorisations of (algorithmic) risk governance in security studies might need to be rethought or developed for a regulatory context in which risk management techniques are used to regulate large-scale business operations, rather than individual activities and movements. The DSA could also be framed as attempting to monitor and regulate (indirectly, through risk mitigation obligations for VLOPs) risky online behaviours or patterns of activity. This is reminiscent of research in security and surveillance studies which has deployed Deleuze’s concept of the ‘dividual’ to suggest that algorithmic risk management tools govern not (only) by tracking identified individuals, but through fragmentary, constantly changing statistical representations of characteristics or behaviours deemed relevant at a given moment.Footnote 213 This could be a generative way to understand and critique systemic risk management – as an effort by both states and VLOPs to control online communication by modulating the algorithmic regulation of particular interactions at the level of the platform network, instead of enforcing the law against individual users.Footnote 214 It could also be useful to draw on scholarship on counterterrorism and security risks in financial regulation, which can similarly be located at the intersection between the regulation of people and their movements and the regulation of multinational businesses.Footnote 215
Finally, scholarship on both corporate risk managementFootnote 216 and risk governance in security studiesFootnote 217 has highlighted that while the discourses, techniques and institutions of risk management tend to be mobilised in ways that serve powerful state and corporate interests, these are not unilateral top-down processes, but also present opportunities for politicisation and contestation of dominant understandings of risk. In this regard, scholarship on risk regulation often calls for more institutionalised involvement of affected communities, civil society organisations and other external stakeholders, as a way of democratising risk management and contesting dominant framings.Footnote 218 Such calls for stakeholder participation are also very widespread in the context of platform regulation, both in academic scholarshipFootnote 219 and from industry actors,Footnote 220 where such participation is often portrayed as the necessary corrective to excessive state and corporate power.Footnote 221
Yet in this literature, multistakeholder participation and deliberation are often framed in terms that reinforce rather than challenge the depoliticising tendencies of risk discourse: as a way to consider everyone’s interests, and especially the interests of the most vulnerable and marginalised groups.Footnote 222 Such framings overlook the significant imbalances of power and resources that mean certain stakeholder groups – typically those representing well-organised and well-funded interest groups, who enjoy privileged access to expert communities and policymaking circles – are far better able to mobilise for their preferred understandings of risk than others.Footnote 223 Future research exploring how civil society groups and other actors contest dominant understandings of systemic risk in the DSA could also benefit from drawing on critical security studies literature, which has tended to place more emphasis on how established institutions and power dynamics structure the construction and contestation of risk.
5. Conclusion
At first glance, the long list of exceedingly broadly defined risk areas in Article 34(1) could suggest that DSA systemic risks can mean anything and everything. In fact, however, the social constructionist analysis presented in this article shows that the DSA’s systemic risk provisions are less open-ended or contingent than they might appear.Footnote 224 As Borges’ work reminds us, texts do not exist in a vacuum: they are interpreted by specific people, in specific places and contexts, with particular objectives which they attempt to pursue within given constraints. Risk regulation is always shaped by pre-existing institutions, norms, resource distributions and power relationships. In practice, this more often than not means that risks are constructed in ways that favour powerful interests.
By situating the DSA’s regulatory strategy in the longer history of critical scholarship on risk regulation, this article has shown how it aligns with a neoliberal, deregulatory ethos, based on the premise that regulation should interfere with corporate freedom and profits only to the minimum extent justified by specific threats to the public interest. In particular, two core features of the systemic risk management regime – its framing of diverse and value-laden political questions as risks to be managed through technical expertise, and its meta-regulatory structure which delegates primary responsibility for defining, prioritising and mitigating risks to regulated companies – will tend to reinforce the status quo, in which platform governance is dominated by the commercial interests of ‘big tech’.
At the same time, using examples from the implementation of the DSA to date, this article has illustrated how risk regulation in the context of online platforms and media may present some new and distinctive problems. In particular, the prominent role of law enforcement institutions and security discourses suggests that state actors may push for risks to be constructed and managed in ways that involve intensified monitoring and control of user communications. This may sometimes conflict with VLOPs’ interests; however, states’ and VLOPs’ different but overlapping interests can also converge to co-produce understandings of risk that serve both private profits and state security objectives. This potential is already visible in the context of the DSA, as regulators and corporations appear to be favouring understandings of systemic risks and mitigation measures that centre on monitoring and controlling user content. This latest development in the long history of risk regulation calls for more critical normative and empirical research into how risks are constructed and contested in the context of platform governance.
Acknowledgements
Thank you to Riccardo Fornasari, Clara Iglesias Keller, Daphne Keller, Paddy Leerssen, João C. Magalhães, Ljubiša Metikoš, Lukas Seiling, Daniel Sneiss and Gavin Sullivan for their thoughtful comments on earlier drafts. Thank you also to the organisers, participants and staff at the FernUni Hagen Tausend Plattformen conference, the Université de Lille conference on « Law & Political Economy : Droit et rapports de domination – Pour une approche critique », and the Sciences Po conference on Algorithmic Transparency and the Digital Rule of Law, for opportunities to present and discuss this work.
Funding statement
This work was supported by a research grant from the Project Liberty Institute and by a fellowship at the Weizenbaum Institute for the Networked Society. Colleagues from the Weizenbaum Institute offered feedback on a presentation of an early version of this article. Otherwise, these funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests
The author has no conflicts of interest to declare.