I. Introduction

Robotics and AI have become among the most prominent technological trends of our century.Footnote 1
Ensuring the safety of human operators in human–robotic interactions is a key requirement for the adoption and implementation of robots in a variety of settings. Robotics currently seeks synergies that can optimise the sustainable circularity of standard robotic outcomes while adding quality inputs to more customised small-scale projects, without discounting renewed calls for promoting ethical robotics to frame the quality of working life within the so-called “Industry 4.0” or “Fourth Industrial Revolution”.Footnote 2 Against this backdrop, and despite outstanding barriers,Footnote 3 robots are starting to replace or accompany humans in performing job tasks that, far from being repetitive and/or standardised only, might even touch upon higher spheres of operational and conceptual complexity – if not yet “creativity”.Footnote 4 It is no longer the case that machines simply act upon human instructionFootnote 5 : as robots become “suitable to act in unstructured scenarios and [interact] with unskilled and undertrained users”,Footnote 6 robots and humans might be considered peers, or it might even be the other way round, with humans acting in accordance with robotic directionFootnote 7 – in either case performing tasks in close collaboration. Think, for instance, of lightweight surgical and caregiving robots, robot-enabled aerospace repair missions, rehabilitation and recovery robots that help patients overcome their psychological resistance to, for example, walkingFootnote 8 or automated defence applications where human attendees serve subordinate functions, or where their safety and survival depend upon robots whose indications they could not double-check rapidly enough.Footnote 9 As a growing market,Footnote 10 collaborative robots can be used in different sectors and applications.Footnote 11 In manufacturing, a collaborative robot “bridges the gap between totally manual and fully automated production lines”.Footnote 12 Accordingly, working-space seclusion and compartmentalisation are long gone, too: the idea that (massively similar) robots are allocated a siloed working space in which to intensively perform their tasks while humans work in other company environments or departments is falling into obsolescence.
The aim of this paper is to situate the issue of occupational health and safety (OHS) and smart collaborative robotics (cobotics) within the current and prospective European Union (EU) regulatory landscape and to formulate a proposal that could steer meaningful debate among EU policymakers so as to improve the regulatory dialogue between the (existing) robotics and (prospective) artificial intelligence (AI) policy frameworks,Footnote 13 with specific reference to the OHS implications of smart cobotics. The current OHS frameworks applicable to robotics within the EU derive from a bundle of Directives as well as from an extensive network of international industry standards understood as quasi-binding. This will be accompanied by the emerging AI regulatory framework in the form of a Regulation (AI Act) and a Directive (AI Liability Directive) – both of which are still drafts at the time of writing. We aim to demonstrate that the dialogue between these two frameworks (robotics OHS on the one hand and AI on the other) is not yet fertile, grounded and sophisticated enough – both terminologically and substantially – to accommodate the specific challenges arising from smart cobots (SmaCobs) in ways which would be helpful to engineers and adequate for the EU’s position in a global “regulatory race” in this area vis-à-vis China and other technologically pioneering jurisdictions.Footnote 14 Upon inspecting the shortcomings of the current and forthcoming rules, we will advance a proposal for a comprehensive EU Regulation to combine the two fields into a unitary piece of legislation, with the purpose of covering smart cobotics and taking account of possible future developments such as quantum computing (QC) technology. We intend to assist EU policymakers in pursuing a smart “Brussels Effect”Footnote 15 in this domain of OHS and SmaCobs and in turn contribute to ongoing global discussions as to how to secure operational safety in AI-driven cobotics through enhanced interdisciplinary integration, informed policy effectiveness and balanced regulatory scrutiny across applicable industries, markets and regions.
Our work is situated against the backdrop of challenges arising from automation as the regulatory chessboard of our time. A remarkable amount of scholarly and policy work has been produced in recent years with regard to, inter alia, robots and taxation, robot-assisted alternative dispute resolution, intellectual property (IP) adaptation to new forms of intangibles, consumer protection from subliminal cues,Footnote 16 the relationship between AI-aided neuroscientific inquiry and emotional lawyering,Footnote 17 privacy and data protection issues arising from algorithmic interference with one’s thoughts, choices, preferences, aspirations and behaviourFootnote 18 and even the legal personality of robots themselves.Footnote 19 Nonetheless, one area that has received very little attention is the regulation of automation vis-à-vis OHS in smart robot-populated workspaces. In fact, as humans come back into the picture, human–robot collaborative environments are in need of a safety overhaul – most crucially in so-called “developing countries”Footnote 20 – in order to overcome safety-dependent barriers to wider cobotic adoption.Footnote 21 It is this issue that represents the focus of this article: how the EU regulatory framework currently addresses OHS issues in smart cobotic use and the need for a reform of the legislation in this area.
Furthermore, underexplored legal areas concern inter alia the application of AI and QC to collaborative robotics – or integration with them. Indeed, robotics is increasingly intertwined with AI applications, towards “a future that will increasingly be invented by humans and [AI systems] in collaboration”.Footnote 22 Extensive literature does exist on the topic, but almost exclusively from an engineering perspective and mostly focusing on medical surgeryFootnote 23 : there is some safety discussion within this literature.Footnote 24 Yet, no comprehensive cross-sector analysis of safety legal–ethical dilemmas raised by autonomous robots working with humans has ever been accomplished. The present work is not about whether “good protection of workers’ health in the performance of their duties using robots, AI”Footnote 25 is afforded per se. Rather, it appraises the extent to which safety rules can be encoded into machines, in such a way that workers’ safety could be entrusted to robots regardless of human supervision, especially in contexts of human–robot collaboration (HRC) – not least through embodied (and yet removable) applications. Indeed, HRC has been questioning “the traditional paradigm of physical barriers separating machines and workers”, and has been enabled by “the use of multi-modal interfaces for more intuitive, aware and safer human–robot interaction”.Footnote 26 How should (EU) lawmakers envision the regulation of these collaborations from an OHS perspective? How are we to account for the complexity of “human factors” in collaborative roboticsFootnote 27 and encode SmaCobs with safety rules that consider all such factors? To what extent does this require a new OHS framework?
To be clear, our aim is not limited to addressing “the integration of safety-related features in the robot architectures and control systems” (which is concerned with solutions to enhance robots’ operational safety),Footnote 28 but to expound the legal–ethical implications of having safety rules – be they in the form of industry standards, technical protocols, company policies, soft regulations or binding laws – algorithmically encoded into (and thus enforced by) robotic machines, as if they were working environments’ safety guardians as regards their own conduct,Footnote 29 and that of their human collaborators as well. Would it be legal to entrust them with supervisory OHS duties, as is already being experimented with in Europe and around the world?Footnote 30 And even so, would it prove ethically sound, in the absence of validation by humans? In a scramble towards robotic morals-free “efficiency”, “[w]e are increasingly faced with ethical issues[,] particularly w[ith] self-modifying AI that might change its optimisation goals without humans being able to interfere or even discover that this was the case”.Footnote 31 This issue stands beyond that of deliberative safety, described as “the capability of the system to optimize speed, degrees of freedom, and safety to correspond to the particular task at hand”.Footnote 32 It is about robots being programmed to “smartly” take initiative about safety vis-à-vis both themselves and those humans they collaborate or share a working environment with, with the purpose of enforcing safety rules adaptively, responsively, sophisticatedly and comprehensively, in a horizontal and yet customised manner.
The managerial turn is hardly new in the robotics literature. Entrusting robots with management tasks has been suggested by scientists for quite a few years already.Footnote 33 Yet, far from the mundane responsibility of organising schedules and arranging deliverables, the more demanding safety managerialism as applied to robots has not been addressed in the literature so far, even though it has been identified as a need. Indeed, scholars have already noted that “[t]he problem of charging AI-assisted robots with specific OHS duties, and then enforcing such obligations from them, […] requires additional research”.Footnote 34 It is one thing for managers to identify workplace safety risks using risk-assessment software and then decide where to allocate resources and refine strategies; but it is quite another to posit that robots themselves should run such software and make choices based on its appraisal. This work intends to fill this gap.
Considering that robots will increasingly perform better than humans in certain workstations, and taking stock of the distancing “new normal” inaugurated with the COVID-19 pandemicFootnote 35 (and unlikely to completely backtrack, not least subconsciously, especially at certain latitudes), the identification of appropriate rules for robots to be entrusted with OHS procedures and decision-making is a useful and increasingly urgent endeavour. Where should thresholds be set in order to mark the boundaries between human supervisors’ liability and automated (thus “autonomous”) decision-making? Can the two be reconciled? Should we allow machines to even reach the point of taking independent decisions that can impact workers’ safety? On the other hand, has this not always already been the case, that robots do make these decisions or are designed in a way that does encode certain normative OHS values, although in less “smart” a way? And were we to accept that machines can independently decide on safety issues, should vicarious liability rules for relevant corporate officers be envisioned? How to apportion them between programmers, managers, owners, shareholders, operators, supervisors and other relevant functions? These, and more, are all interdependent issues we explore in the present article.
This article narrowly addresses choices on and about OHS, not those broadly impacting OHS, which could encompass virtually any robotic action within a professional environment. At the same time, this work fits with the most general scholarly discourse on legal automation and robotised rule enforcement, which has mostly taken a transactional path inspired by smart contracts and the blockchain, while – it seems to us – neglecting certain fundamental aspects of measure-enforcement automation in the workplace, from a labour law and health law standpoint. Other relevant bodies of literature are: “disaster literature”, concerned with the potentially catastrophic consequences of experimenting too ambitiously with robots’ ability to transform themselves through self-learningFootnote 36 ; and the debate on algorithmic governmentality.Footnote 37
From here, we offer some definitions and background on SmaCobs, before turning to entrusting robots with enforcing safety rules. From there, we provide an overview of the current (and forthcoming) regulatory framework on this issue in the EU and set forth our proposal for reform. Overall, we find that the long-standing EU framework exhibits profound gaps when it comes to ensuring OHS in SmaCobs; the most recent binding Regulation in this area,Footnote 38 approved by the EU Council in May 2023, did endeavour to systematise the regulatory landscape and bridge some of those gaps, but we will argue that it still lies far from satisfactorily filling those lacunae. We outline certain key points and topics that the Regulation should have covered but failed to address. As summarised in Fig. 1, we present a contrasting overview of: (1) the traditional, long-standing EU framework applicable to robotics safety; (2) the most recent regulatory effort by the EU in this area (ie the aforementioned Regulation – hereinafter “New Machinery Regulation”); and (3) our arguments as to why the Regulation is still far from achieving its stated objectives and cannot be satisfactorily applied to operations with smart collaborative robots more specifically.
II. Defining robots, collaboration and related safety risks
Before delving into the substance of our argument, it is necessary to make some clarifications and provide definitions. To begin with, what do we mean by safety rules? These are prescriptions that address potential threats to, inter alia, the physical health of workers due to accidents (eg collisions or undue contact with hazardous materials) involving robots.
As for robots, they may be machines or software, but either way, in this paper we consider only smart (or AI-assisted) robots and particularly SmaCobs. SmaCobs are robots that are both “powered” by AI (and thus endowed with a certain degree of decision-making autonomy through machine learning (ML),Footnote 39 such as deep learning’s improvement by trial and error) and collaborative with humans. Defining “AI” itself is a long-running source of disagreement.Footnote 40 ML is generally considered a core characteristic of AI. ML routes and techniques do varyFootnote 41 (eg supervised, unsupervised, reinforcement, etc.,Footnote 42 as well as interactiveFootnote 43). Here, we consider all ML paths whereby a robot can autonomously take decisions that are not directly retrievable from instructions and possibly learn from patterns of mistakes – but without necessarily adapting comprehensively to each human being they collaborate with, which is plausibly a behavioural ability retained by humans only. In keeping with the European Parliament (EP), intelligent robots are able to exchange positional and sensorial data with the environment, learn from environmental responses to their behavioural patterns and analyse changes in the environment – but all of this without biologically delivering on vital functions.Footnote 44 Our working definition broadly subscribes to the EP’s formulation, with a focus on the decision-making follow-up to environmental learning, as well as on the collaborative features of SmaCobs.
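To make this working definition more tangible, the following minimal sketch (entirely hypothetical; the class name, thresholds and update rule are our own illustrative assumptions, not drawn from any cited source or standard) shows in what sense a SmaCob’s operative safety behaviour is learned rather than directly retrievable from its initial instructions: the slow-down distance it ends up enforcing exists only as a value shaped by experience.

```python
# Illustrative sketch only: a cobot that adapts its own slow-down distance
# from experience, so the operative value is learned rather than read from code.
# All names and numbers are hypothetical and not drawn from any standard.

class AdaptiveCobot:
    def __init__(self, initial_slowdown_m: float = 1.0, learning_rate: float = 0.1):
        self.slowdown_m = initial_slowdown_m   # distance at which the cobot slows down
        self.learning_rate = learning_rate

    def observe_shift(self, human_distance_m: float, near_miss: bool) -> None:
        """Update the learned threshold after one observed interaction."""
        if near_miss:
            # A near-miss pushes the threshold outward (more caution).
            self.slowdown_m += self.learning_rate * (human_distance_m - self.slowdown_m + 0.5)
        else:
            # Uneventful interactions let the threshold drift slowly towards the observed distance.
            self.slowdown_m += self.learning_rate * 0.1 * (human_distance_m - self.slowdown_m)

    def should_slow_down(self, human_distance_m: float) -> bool:
        return human_distance_m < self.slowdown_m


cobot = AdaptiveCobot()
for distance, near_miss in [(0.8, True), (1.2, False), (0.9, True), (1.5, False)]:
    cobot.observe_shift(distance, near_miss)
print(round(cobot.slowdown_m, 3))  # the enforced threshold now exists only as a learned value
```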
As for the “collaboration” aspect,Footnote 45 it might take a variety of shapes, “spanning from teleoperation to workspace sharing and synergistic co-control”.Footnote 46 Here we focus on OHS procedures over whose compliance a robot could have tangible control. These might also encompass aspects of quality control and repairing of indoor environments as well as optimisation of production chains so as to minimise human distraction (and risks emanating therefrom) caused by over-reiteration of elementary tasks.
There are a myriad of threats and risks involving robots, with varied causes, sources and effects on the health and safety of humans. Here we offer an overview of them. The precise incidence of such risks and threats will depend upon various factors, including how the cyber-physical robotic system is set up, what other technologies it may integrate and the kind of interaction it pursues with humans in a particular workplace.
One cause of accidents may be substantive miscommunication between robots about their “peers” or about humansFootnote 47 : for example, human intervention is often assumed as corrective, but for it to be received as indeed correction rather than noise, the right parameters for robots to decode human intentions must be in place.Footnote 48 Miscommunication might equally occur between humans about robots, or between robots and humansFootnote 49 (individually or group-wiseFootnote 50 ), coming from time-related, space-related or goal-related misunderstandings,Footnote 51 but also from, for example, the misdetection, misappreciation or misinterpretation of humans’ chronic or extemporary pain.Footnote 52 While very experimental for now, the deployment of brain–machine (or machine-mediated brain-to-brain) interfaces not only for remote control as today,Footnote 53 but also to encode rules via direct transfer, might represent a miscommunication-intensive risk factor.
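By way of a purely hypothetical illustration of the “right parameters” problem just mentioned, the sketch below shows how the very same human gesture can be decoded either as a correction or as noise depending solely on designer-chosen thresholds (the function name and the force and duration values are invented for illustration):

```python
# Hypothetical sketch: whether a human intervention is decoded as a correction
# or dismissed as noise depends on designer-chosen thresholds. Values are invented.

def classify_intervention(force_newtons: float, duration_s: float,
                          force_threshold: float = 15.0,
                          duration_threshold: float = 0.3) -> str:
    """Return 'correction' only if the signal is strong AND sustained enough."""
    if force_newtons >= force_threshold and duration_s >= duration_threshold:
        return "correction"   # the robot treats the push as an instruction to yield
    return "noise"            # the robot ignores it and continues its trajectory

# The same physical gesture is read differently under different parameterisations.
print(classify_intervention(12.0, 0.5))                          # 'noise'
print(classify_intervention(12.0, 0.5, force_threshold=10.0))    # 'correction'
```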
Other incident causes may well involve concurrent action over shared working tools (fallacious instrumental interchangeability) or technical dysfunction of the machine on either the software or hardware side – and most frequently both. On the software side, one may face programming errors, including those leading to miscalculations, misplaced manoeuvring schemes, and mis-calibrated human-detection or real-time data-capture systems, but also misconfigured learning from assumed-safe test-aimed digital twins,Footnote 54 overdemanding interfaces to other subsystems, dependencies upon operating modes, ineffective response times and suboptimal selection of objects’ model flows within the applicable environment. On the hardware side, frequent causes of concern are defective appliances, including sensors as well as measuring and protective components and force calibration tools. There is, however, a software aspect to this, too: the harmful action taken by cobots as a result of misleading interaction and poor integration between sensing/awareness/perception algorithms and self-learning OHS-encoding algorithms. This is all the more relevant today “given the recent rise in interest in sensing technology for improving operational safety by tracking parameters such as mental workload, situation awareness, and fatigue”.Footnote 55 The process through which multiple algorithms are combined into one single algorithmic outcome through opaque interactions remains high on the legal and regulatory agenda.Footnote 56 Still in the hardware domain, corrupted or unexpectedly encrypted data, or partly missing/altered/inaccessible data due to data breachesFootnote 57 (including breached cloud-stored data)Footnote 58 and cyber-misappropriation,Footnote 59 which may also be accomplished through encryption-disabling QCs themselves,Footnote 60 are further risks. They may implicate misreadings in machine-to-machine interfaces, and they are primed or aggravated by a wide range of external circumstances, including poor Internet connectivity, environmental noise, toxic fumes, non-standard heat or radiation, power outages, inappropriate lighting, misleading and/or contradictory and/or asynchronous feedback and damaged environmental sensors. All these “physical” factors – be they hardware- or software-disrupting – might be exacerbated by remoteness and virtuality; for example, by interaction through immersive virtual reality (VR) and augmented reality (AR)Footnote 61 interfaces, due to both representational inaccuracies in the VR/AR itself and sensorial detachment that leads to accidents in the real world – without discounting cognitive interferences as well.Footnote 62 Defective or inappropriate external protections in working spaces may also contribute to risks turning into actual harm.
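To visualise how the software/hardware interplay listed above can translate into harm, the following deliberately simplified sketch (all values invented) shows a miscalibrated sensor feeding the perception layer, whose estimate then silently drives the safety decision:

```python
# Deliberately simplified sketch (invented values): a miscalibrated sensor feeds
# the perception layer, and the safety layer decides on the estimate it receives,
# not on the true state of the workspace.

def perceived_distance(true_distance_m: float, sensor_bias_m: float) -> float:
    """Perception layer: distance as estimated by a (possibly miscalibrated) sensor."""
    return true_distance_m + sensor_bias_m

def safety_decision(estimated_distance_m: float, stop_threshold_m: float = 0.5) -> str:
    """Safety layer: stop only if the *estimated* distance breaches the threshold."""
    return "STOP" if estimated_distance_m < stop_threshold_m else "CONTINUE"

true_distance = 0.45  # the human is actually inside the stop zone
print(safety_decision(perceived_distance(true_distance, sensor_bias_m=0.0)))   # STOP
print(safety_decision(perceived_distance(true_distance, sensor_bias_m=0.2)))   # CONTINUE: harm can follow
```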
Risks to workers’ mental health can also arise. One overarching cause of mental harm to workers is identified with their interaction with robots,Footnote 63 expressed, for instance, as frustration at robots’ non-sentience and inability to “understand each other” and accommodate mood change. However, there are also risks from the converse, when robots appear too sentient: alienation and loneliness may surface when robots understand and adapt so well that they are prematurely “humanised” but later fail to deliver the same responses as a human for more complex tasks or in emotional terms.Footnote 64 Human workers may be left with a misplaced sense of attachment, responsiveness, recognition, solidarity, affinity and trust,Footnote 65 all the way up to betrayed feelings of emotional safety and even intellectual comfort. Robots with reassuring, facially expressive, human-friendly appearances (typical of “humanoids”, “androids”, humanised chimeras, “cyborgs”, etc.) may well mislead inexpert users into acting as if robots were committed to being sympathetic and trustworthy.Footnote 66 Confirming the importance of these debates, the European Commission (EC) has recently advised that “[e]xplicit obligations for producers could be considered also in respect of mental safety risks of users when appropriate (ex. collaboration with humanoid robots)”,Footnote 67 and that “[e]xplicit obligations for producers of, among others, AI humanoid robots to explicitly consider the immaterial harm their products could cause to users […] could be considered for the scope of relevant EU legislation”.Footnote 68
Addressing mental aspects of human–cobot interaction is key to ensuring their successful adoption. Indeed, the subjective experiences of fear and disorientation triggered by robots’ AI-powered apparent humanisation risk offsetting the benefits of adopting SmaCobs for OHS oversight and enforcement. To draw on a (perhaps rudimentary, but fit-for-purpose) taxonomy, one may postulate that humans tend to require and seek dispositional, situational and learnt trust before productively engaging with robots in the workplace.Footnote 69 While situational trust obviously varies with the circumstances, dispositional trust is “installed” in the human individual and resistant to change. Enforcing robot-encoded rules is thus a transient workaround to ensuring a level of trust in robots. Meaningful and fundamental change can only be attained through humans’ “learnt trust” stemming from long-term effectiveness in SmaCobs’ “care” and management of health risks, including common mental health disorders such as the anxiety–depression spiral. Self-evident as it may sound, “individuals who ha[ve] learned about robots in positive contexts, such as safety-critical applications, [exhibit] higher levels of positivity toward the technology”,Footnote 70 particularly with ageing,Footnote 71 and this seems yet another good reason to win human coworkers’ trust. This is especially the case when the human’s trust is mediated by psychiatrically diagnosed conditions: over time, the trust acquired with “familiar” robots will likely enhance the dispositional trust towards robotic applications more broadly (we may call this a dynamic trust transferability variable). Trust is easier to gain – and lasts longer, disseminating to new workers as well – when robotic ethics resonates with human ethical precepts, and it is here that most trust-enhancing attempts fail miserably. The hindrance is ethically uninformed robot decision-making – ethics can only be encoded into emotionless robots up to a certain point (it might reflect programmers’ moral horizon at a specific point in time but will never take on a life of its own),Footnote 72 whereas it develops gradually and spontaneously in (most) healthy humans living socially, and its basic tenets are at least in theory widely shared even if they moderately adjust over one’s lifetime. Hence, while the legal lexicon of liability can be employed with regard to robots as well, ethical guidance on what a “right” action is morality-wise should be conceived for the human side of the collaboration only.Footnote 73
Continuing with the mental health discussion, certain aspects relate to the broader sociology and politics of HRC more than they impact the individual worker per se. Concern arises with the robotised monitoring of work performance, the chilling surveillance of workers, hyper-scheduled micromanagement and data analyticsFootnote 74 – worse still when performed supposedly for OHS purposes. All of this occurs within the datafication of health policing and tracking in the workplace and the increasingly pervasive algorithmic management of welfare, wellness and well-being.Footnote 75 Against this “attention economy” backdrop, cognitive overload, too, threatens human workers in their attempt to situate themselves between human and robotic expectations, demands, practices and objectives.Footnote 76 Somewhat linked to this, quality competition with and the race to perfectionism against robots (either in the absolute or with certain robots against some other human–robot working units, even within the same department) are two more issues for today’s policymakers. What the quest towards neoliberal “maximisation” entails for workers is an erroneous sense that no final objective or potential end exists on the “efficiency” side.Footnote 77 An exhausting pace of working, mechanisation, sense of alienationFootnote 78 (and inattentiveness stemming therefrom), isolation, exclusion, disconnection and interchangeability (replaceability, permutability, anonymousness) are all manifestations or consequences of such a misbelief. For instance, “psychosocial risk factors [arise] if people are driven to work at a cobot’s pace (rather than the cobot working at a person’s pace)”,Footnote 79 worsened by rhythmic noise pollution and other discomforting, interfering and/or distressing factors. Conversely, disappointment may be unleashed at robots whose efficiency has been severely constrained. Indeed, unlike traditional industrial robots, cobots’ performance is “continually in a state of becoming […,] in conflict with the planned day-to-day efficiencies and predictability of automated production systems”.Footnote 80 They might be just as efficient, but performance will occasionally drop, readjust over time and in any event convey a disrupted sense of absolute reliability efficiency-wise.Footnote 81 On another note, neoliberal ideas of “disruptive innovation”Footnote 82 guiding SmaCobs’ implementation may result in negative social repercussions from robots’ takeover of certain tasks (ie uncertainty, precarity, unpredictability, contractual casualisation, skill obsolescence, deskilling,Footnote 83 tax injustice,Footnote 84 etc.), which need to be accounted for, scrutinised and avoided.
III. Entrusting robots with enforcing safety requirements for human benefit
Despite the aforementioned health and safety risks that robots may cause, “[w]hen working with robots equipped with artificial intelligence, the topic of health protection has not yet been comprehensively described in the literature”.Footnote 85 While in automation engineering and industrial occupational psychology the “main clusters of interest [are] contact avoidance, contact detection and mitigation, physical ergonomics, cognitive and organizational ergonomics”,Footnote 86 regulatory scholarship on health and safety has failed to keep pace with developments related to the challenges of ML in redefining what a cobot “is” and what it “does” over time and through experience – that is, upon learning. This calls into question the apportionment of liabilities, raising doubts as to whether robots’ behaviour can be assessed ex ante and accurate legal mitigations of labour risks designed.
Notwithstanding this paucity of literature, we are not the first to identify challenges with the existing (EU) framework:
[g]iven the new psychological and physical threats related to the use of AI robots, it is necessary to expand the EU legislation with general guidelines for the use of intelligent robots in the work environment. Indeed, such robots must be defined in the applicable legal framework. Employers should also define, as part of their internal regulations, the procedures for employee communication with artificial intelligence, and relevantly update their training in the OHS area.Footnote 87
Other authors have explored how to apportion liability in the event of robot failures, most often in the medical context.Footnote 88 In other words, those papers are concerned with liability arising from robots’ failure to act in conformity to external guidance or expectations (ie explicit commands activated to deal with specific situations and not activated or deactivated autonomously by the robot beyond very elementary and straightforward options that are arguably not characterisable as “smart”).
Here, instead, we are concerned with robotic failures to assess, validate, instruct, maintain and act upon safety rules whose “enforcement” has been entrusted to them through design and coding. We will thus ponder whether robots can be sophisticated enough to do this, and who exercises responsibility for ensuring that they are duly designed and encoded in this way. Another debate that is already explored in the literature is whether robots can account for their own safety.Footnote 89 Here, however, we will focus on the safety of human workers exclusively – though unsafe robots are unlikely to protect robot co-workers and make them safe.
If we do seek to design and encode SmaCobs with the ability to enforce OHS rules, several issues arise.
One issue is the extent to which robots can “understand” us, especially our mental health: can awareness of mental states of dysfunction or well-being be algorithmically encoded into SmaCobs for the SmaCob to recognise them as they manifest? Progress might be made towards robots’ ability to capture biochemical and signalling dysfunctions subsumed under somatisation symptoms, but robots will likely remain unable to interpret the experiential components that factor into complex mental disabilities. Worse still, the risks at stake may aggravate rather than mitigate the emergence or relapse of such symptoms; this is because robots are unlikely to alleviate disorders that they themselves contribute to engendering (eg anxiety) or whose shades and contaminations they struggle to appreciate; therefore, it can be expected that their response will be depersonalised and stereotyped. Yet again, though, are humans any better in treating and addressing these conditions from an occupational health perspective?
A related issue, where perceptions play a key role, concerns whether the robot itself could be the one to decide whether safety measures are correctly implemented and rules/procedures properly satisfied; in other words, whether it could serve not only as an enforcer, but also as the equivalent of an internal auditor or health inspector. In the affirmative, how would human–robot or robot–robot conflicts of views on the appropriateness of certain enforcement outcomes/strategies be handled – for instance, in the event of humans maintaining they are being treated unfairly (eg overstrictly or overdemandingly), or of multiple robots entrusted with similar/complementary functions that respond to the environment slightly differently despite similar encoding? Should we arrange workstations so as to ensure that workers are matched with “their” machines in the long run, accommodating mutual long-term customisation and encouraging adaptation?Footnote 90 Notably, the feelings of belonging and “expertise” thereby created do not need to (and would probably never) be scientifically tested: it is about cognitive adjustment to recurring action-schemes, the creation of interactional habits whose safeguarding would plausibly enhance trust and reduce friction while delivering on production targets. And if things do not work out eventually, could a mirrored “right to repair” be exercised by robots themselves? Could robots self-repair, or would they be required to cease operations to have a human intervene in the conflict?
Conflicts are conceivably more frequent if gestures, behaviours, physical expressions, commands, directions and even entire individuals are misallocated into boxes that will then function as addressees of group-customised sets of actions by cobots. If a worker has adhered to a recurrent behavioural pattern and the robot has accordingly assigned them to a certain category for rule-enforcement purposes, but the worker one day refrains from walking the same path (for any mental, physical and/or environmental reason) and such deviation causes OHS incidents, who is liable for them? We may look to engineers, for them to just “find a way” to adjust said robot’s categorisations. But the conundrum remains that we are unaware of what those boxes look like and who lies therein. Even assuming we could know what the categories were, we would still lack cognisance of how (and why?) robots opted for them (along robotic pathways towards information accumulation, at some point a cumulation effect manifests) over alternative options, and with that we would not know what elements from the relevant pattern of behaviour truly contributed to the machine’s overall assessment, or to what extent/percentage. Even upon guessing, we would be left with a granularity of decision-making that does not reveal most of the shades (“weighing options”) plausibly composed by the algorithm to formulate its sorting-into-boxes response.Footnote 91
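The opacity we are describing can be illustrated with a toy example. In the sketch below (feature values and centroids are invented), a worker is sorted into the nearest learned “box”; nothing in the code states why the boxes sit where they do, which past observations shaped them, or how much weight each behavioural element carried:

```python
# Hypothetical sketch of the "boxes" problem: workers are sorted into learned
# categories (here, nearest-centroid groups) whose boundaries are not stated
# anywhere as rules and are not directly inspectable. All values are invented.

def assign_box(worker_features, centroids):
    """Assign a worker to the nearest learned centroid (the opaque 'box')."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(range(len(centroids)), key=lambda i: distance(worker_features, centroids[i]))

# Centroids are whatever the system learned from past behaviour; nothing in them
# says why they sit where they do or which observations shaped them.
learned_centroids = [(0.2, 0.9, 1.4), (0.7, 0.3, 2.1)]

usual_pattern = (0.25, 0.85, 1.5)   # the worker's recurrent behaviour
deviation_day = (0.6, 0.4, 2.0)     # the day the worker takes a different path

print(assign_box(usual_pattern, learned_centroids))   # box 0: the 'expected' enforcement regime
print(assign_box(deviation_day, learned_centroids))   # box 1: a different regime applies, unannounced
```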
Let us refrain from further viability comments for now and instead turn to matters of purpose. Besides the feasibility issue, why should we encode smart robots with safety rules, for them to enforce those rules on our behalf? To begin with, having robots enforce rules and supervise their application on our behalf seems important because “human oversight may be used detrimentally to assign guilt and responsibility to humans”Footnote 92 even when the issue lies with machines’ unpredictability. And while additional potential reasons abound, which would require more comprehensive research beyond the limits of this paper, we shall outline a few of them here. One reason is that in extensive collaborative environments, the reverse (humans enforcing safety rules on robots) might paradoxically prove more challenging logistically, as well as time-consuming and economically inconvenient. Human response might prove slower and will definitely exhibit elements of instinctiveness that make it suboptimal as either over-responsive (and thus costly due to overcautious work interruption) or under-responsive (and thus costly given the human and non-human losses potentially materialising). Economic considerations aside, an even more compelling and forward-looking reason draws from QC, and smart robots’ forecasted ability to be powered by it:
Imagine the progress of AI if the power of Quantum Computing was readily available with potential processing power a million times more powerful than today’s classical computers with each manufacturer striving to reach not just Quantum Supremacy but far and beyond this revolutionary breakthrough. Humanity can harness this amazing technology with AI and [brain–computer interfaces (BCIs),] generating a new technology revolution […]. The possibility for super strength and super intelligent humans to match the huge advancement in AI intelligence is attainable. [… H]umans and AI will strive for Artificial General Intelligence (AGI). This potential for Humans and AI to grow and develop together is staggering but the question of Security, Regulations and Ethics alongside AI and future human BCI advancements highlights the need for standard security and regulations to be put in place.Footnote 93
Indeed, in endorsing the idea that “[a] legal–ethical framework for quantum technology should build on existing rules and requirements for AI”,Footnote 94 including relevant risk awareness from algorithmic regulation,Footnote 95 we will explain in the paragraphs below that the QC revolution matters for smart cobotics in all possible respects: operationally, conceptually, logistically, energetically and even intellectually. The key takeaway here, though, is that QC may well matter safety-wise as well. If successfully implemented, this innovation could revolutionise the accuracy, adaptation and contextual responsiveness of OHS monitoring to the actual needs of each and every specific task and operator, be they humans or machines – whose cooperative work has never been as intertwined. Indeed, “[i]f one wanted to say what the law is in a particular domain[, … q]uantum computing will enhance the accuracy of the [ML] model by locating the relevant optimal mix of values and weights”.Footnote 96 From a conceptual standpoint, this links to complex-systems physics, in that it networks the complexity of large-scale human–non-human group behavioural responses to hazardous events into mathematical formulas, to then model machine responses based thereupon by successive approximations.Footnote 97 This advantage may be mentally pictured through the concept of quantum entanglement: a physical state where each particle can be neither located nor described independently from all others.Footnote 98 When it comes to technology-intensive safety policing, our complex legal systems may start to resemble just as intricate tangles, and QC-powered SmaCobs could enjoy an edge in sophisticatedly interpreting such complexity in real time.
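For readers less familiar with the physics invoked above, the standard textbook formulation of a maximally entangled two-qubit (Bell) state captures the point: the joint state cannot be decomposed into independent descriptions of its parts.

```latex
% A maximally entangled two-qubit (Bell) state: neither qubit possesses a
% definite state of its own, and measuring one instantaneously fixes the other.
\[
  \lvert \Phi^{+} \rangle \;=\; \frac{1}{\sqrt{2}}\bigl(\lvert 00 \rangle + \lvert 11 \rangle\bigr),
  \qquad
  \lvert \Phi^{+} \rangle \neq \lvert \psi_A \rangle \otimes \lvert \psi_B \rangle
  \ \text{for any single-qubit states } \lvert \psi_A \rangle, \lvert \psi_B \rangle .
\]
```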
Trusting scientists’ own enthusiastic forecasts on QC and AI,Footnote 99 we foresee QC-powered SmaCobs as the most powerful expression of autonomic computing, whereby “the ability to monitor system condition and provide real-time feedback”Footnote 100 is exponentially scaled up compared to non-quantum devices: precisely what is needed in automated/encoded safety enforcement! Needless to say, QC is not going to solve or circumvent the black-box conundrum – if anything, it will worsen it. Yet, on balance, the risk–benefit analysis might make it competitive over non-QC but still black-boxed solutions: the output, still unexplainable, will be exponentially more sophisticated, responsive and adaptive, catering for the interactional complexity of safety needs in contemporary (and future) collaborative workstations. Enhanced computed skillsets will be deployed not only to identify the optimal “regulatory cocktail” to apply to specific contingencies as they arise, as posited above, but also to enhance bodily detection and health monitoring capacity.Footnote 101 Even more profoundly, because quantum bits (abbreviated as “qubits”) are the paradigmatic expression of entities whose probabilistically determined coordinatesFootnote 102 are altered as soon as they are measured,Footnote 103 from a philosophical (but soon also practical) perspective they convey (and indeed capture) the idea that collaborative systems are those in which cobots and humans, by observing and “measuring” one another, alter each other’s spatiotemporal coordinates and action-outcomes. What could be better than QC-powered AI-driven cobots one day appreciating and making the most of these precision-shaping perceptive geometries to collaborate safely on multiple and entangled cognitive and physical planes? Scholars in different domains expect similarly momentous ramifications; for example, psychologists and quantum physicists expect QC-powered smart robots to capture the complexity of humans’ emotional shades and transpose it into quantifiable, encodable oscillations,Footnote 104 factually transfiguring the communication between human beings and computers and recasting it at a superior level of seamlessness and sophistication. The reverse is just as true: however encoded and artificial, robots too have “emotions” to communicate to their human counterparts,Footnote 105 and QC is premised to aid their “decoding” remarkably.Footnote 106 Other researchers have gone so far as to hypothesise quantum creativity,Footnote 107 whose implications are of course unparalleled.
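The measurement point made above can likewise be stated compactly: a qubit’s “coordinates” are probability amplitudes, and observing it irreversibly alters them.

```latex
% A single qubit in superposition: its "coordinates" are the amplitudes
% \alpha and \beta, and the act of measurement collapses them.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1,
\]
\[
  \Pr(\text{outcome } 0) = \lvert \alpha \rvert^{2}, \qquad
  \text{after which the state is no longer } \lvert \psi \rangle \text{ but } \lvert 0 \rangle .
\]
```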
On balance, while staying aware of the unjustified hype surrounding QC as a commercial technology of general use,Footnote 108 we can, however, foresee that, specifically within professional applications in cobotics, the incipient quantum ML revolutionFootnote 109 indeed holds a reasonable degree of potential to radically enhance the convenience of having robots supervise OHS rules over both themselves and humans – rather than the other way round. This will be even more the case if the Penrose–Hameroff hypothesis, suggesting that human consciousness arises from quantum phenomena such as entanglement and superposition,Footnote 110 is ever empirically validated.
IV. The current EU regulatory framework relevant to SmaCobs
We have identified the debates and explained both their significance and what probably lies ahead of us in terms of technological development. Yet how is the regulatory framework in the EU currently catering for such a complexity of techno-policy inputs and scenarios? We offer a review of the state of the art below, and we have devised Fig. 2 to assist the reader in navigating it.
The main regulatory documents applicable to robotics and/or AI are included in Fig. 2. A few of them, like the Medical Device Regulation (EU 2017/745), do not warrant further elaboration here as they do not help elucidate the specific challenges raised by SmaCobs. From here we consider the Product Liability Directive, the Machinery Directive, the Framework Directive, the Product Safety Directive and selected policies and technical standards in this area. At the time of writing, the Machinery Directive has just been repealed by the aforementioned “New Machinery Regulation”, but we will nonetheless analyse the Directive to support our argument that the transition from the “old-school” safety framework, of which the Directive was part, to the much welcome but still somewhat “static” regulatory mode of the Regulation insufficiently caters for the necessities of smart cobotics. Later in the paper, we will further emphasise the reasons why we contend that the Regulation, which definitely represents progress on certain issues, should be further refined in important ways.
1. Product Liability Directive
Council Directive 85/374/EECFootnote 111 (Product Liability Directive) is relevant to SmaCobs in their capacity as products rather than as manufacturers or production supervisors. The preambulatory recital clausesFootnote 112 mention that
liability without fault on the part of the producer is the sole means of adequately solving the problem, peculiar to our age of increasing technicality, of a fair apportionment of the risks inherent in modern technological production [… although] the contributory negligence of the injured person may be taken into account to reduce or disallow such liability[.]
Since we are scrutinising collaborative robots, this is especially applicable here. The preamble moves on to specify that “the defectiveness of the product should be determined by reference not to its fitness for use but to the lack of the safety which the public at large is entitled to expect”.Footnote 113 The obvious dilemma here concerns who counts as the general public and how it could form an informed opinion based on reasonable expectations if the technicalities of cobot-applied ML are somewhat obscure even to the brightest engineering minds. This leads to the third consideration we can make based on this Directive. The just-mentioned lack of foreseeability might induce one to simplistically conclude that whatever wrong OHS decision SmaCobs adopt based on ML, the manufacturer would not be exempted from liability solely because ML makes SmaCobs’ actions unforeseeable. Indeed,
the possibility offered to a producer to free himself from liability if he proves that the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of a defect to be discovered may be felt in certain Member States to restrict unduly the protection of the consumer; whereas it should therefore be possible for a Member State to maintain in its legislation or to provide by new legislation that this exonerating circumstance is not admitted[.]Footnote 114
Nevertheless, this clause refers to unknowable defects rather than to intrinsic technological limitations whose perimeter may still change over time but to which manufacturers are already accustomed. One may thus observe that if the clause does not discount liability for unknown defects, it would accordingly not admit exceptions in the event of incident-causing foreseeable technology limitations. And yet, these choices pertain to the realm of policy and probably fall well beyond the manufacturing stage, meaning that it is probably policymakers who are in charge of such high-level choices as to whether as a society we should accept the inherent unforeseeability in ML. From a legal perspective, the difference is that this is unforeseeability by design, meaning that it does not depend on subsequent technology development (eg new hazard discoveries and techno-scientific unreadiness at “the time when the product was put into circulation”Footnote 115 ) as was typical of the pre-SmaCobs era. Instead, the unforeseeability in SmaCobs is inherent in the design of the machines themselves, whose behaviour is by definition – albeit within certain boundaries – unforeseeable to the programmers who designed and encoded the SmaCobs at the outset. In light of this, the entire rationale for liability clauses warrants rethinking so as to approximate them to the needs of an ML-powered future,Footnote 116 shifting at least part of the burden onto those regulators that assess technological products’ fitness for marketisation.
Analogous rethinking is due with regards to the causality nexus, whereby current legislation provides that “[t]he injured person shall be required to prove the damage, the defect and the causal relationship between defect and damage”.Footnote 117 In fact, consider inappropriate situational evaluations by SmaCobs, leading to an underestimation of health hazards that eventually materialise and harm workers: how can the causal relationship between ML-powered decisional outcomes and workers’ harm be ascertained and demonstrated? If the process leading to that cannot be explained (beyond common sense),Footnote 118 neither can it be satisfactorily proven.Footnote 119 Should we accept court proceedings grounded in prima facie cases for harm? The EU is attempting to address some of these issues in its draft AI Liability Directive:
when AI is interposed between the act or omission of a person and the damage, the specific characteristics of certain AI systems, such as opacity, autonomous behaviour and complexity, may make it excessively difficult, if not impossible, for the injured person to meet [the standard causality-grounded] burden of proof. In particular, it may be excessively difficult to prove that a specific input for which the potentially liable person is responsible had caused a specific AI system output that led to the damage at stake. In such cases, the level of redress afforded by national civil liability rules may be lower than in cases where technologies other than AI are involved in causing damage. Such compensation gaps may contribute to a lower level of societal acceptance of AI and trust in AI-enabled products and services. To reap the economic and societal benefits of AI […], it is necessary to adapt in a targeted manner certain national civil liability rules to those specific characteristics of certain AI systems […], by ensuring that victims of damage caused with the involvement of AI have the same effective compensation as victims of damage caused by other technologies.Footnote 120
Of course, this debate extends well beyond the EU; across the Atlantic, for instance, US and Canadian scholars are facing exactly the same shortcomingsFootnote 121 : “the law as currently established may be useful for determining liability for mechanical defects, but not for errors resulting from the autonomous robot’s ‘thinking’”.Footnote 122 Should we expand liability boundaries so as to address dangerous (as opposed to “defective”) algorithms?Footnote 123 But are most algorithms not intrinsically dangerous? As cobots become smarter, the mechanics-intensive language of “defectiveness” falls deeper and deeper into obsolescenceFootnote 124 : we advise a paradigm shift towards “dangerousness”, with the caveat that any ML-driven cobot is and will remain to some extent dangerous. Indeed, as distinct from defectiveness, which can be “fixed”, dangerousness can only be “lowered” and “coped with”, but hardly “negated”, both operationally and conceptually: it is inherent to the very nature of machines that are allowed some room to learn by themselves and take autonomous decisions stemming from that learning. If dangerousness cannot be dispelled, we can at least learn how to best adapt to (and survive) its potential effects – and this should be the regulatory focus in smart cobotics. Implications for the industry are broad, including as regards the redefinition of insurance schemes.Footnote 125
2. General Product Safety Directive
The General Product Safety Directive (2001/95/EC), too, may concern robots as products rather than as product manufacturers. Of interest here, it provides that
[i]nformation available to the authorities of the Member States or the Commission relating to risks to consumer health and safety posed by products shall in general be available to the public, in accordance with the requirements of transparency and without prejudice to the restrictions required for monitoring and investigation activities.Footnote 126
Leaving aside the question of how broadly the “in general” caveat may be construed, it should be noted that restrictions apply exclusively in the interests of auditors and investigators, while trade secret ownership seems to be of no concern. If “risks to consumer health and safety” reside in the source code of cobots’ AI, such code would warrant disclosure as inherently risky for consumers (ie in this case, those who purchase the robots to then feed them with relevant normative-regulatory-administrative data and redeploy them as “OHS invigilators” within production chains). “Professional secrecy” is cited as a further ground for exceptions,Footnote 127 but it merely captures one of the sub-classes of commercially protected secrets. Moreover, disclosing algorithms’ coding provides unserviceable information when it comes to tracking the entire spectrum of their potential operational outcomes. Knowing the code does little to enhance our knowledge about code-enabled learning patterns and results; disclosure just results in cosmetic, formalistic transparency, which may even be counterproductive in that it might instil a false sense of confidence in robotically encoded safety. Disclosing robot algorithms’ source code may infringe copyright and violate trade secrets protection while proving of only superficial reassurance safety-wise.
Another relevant provision contains the obligation of professional upgrade and “lifelong learning”. Indeed, it is provided that
[w]here producers and distributors know or ought to know, on the basis of the information in their possession and as professionals, that a product that they have placed on the market poses risks to the consumer that are incompatible with the general safety requirement, they shall immediately inform the competent authorities of the Member States […].Footnote 128
This formulation encapsulates a cogent example of ex post obligation that is dependent on technology developments and baseline professionalism, thereby compelling manufacturers to remain accountable for algorithmically caused harms. However, conceptual complexity intervenes here: from a technical perspective only (hence, focusing on the engineering side without entering the realm of theories of legal liability), to what extent are ML failures the fault of an algorithm as it was coded rather than of the data it was fed with? Answering this question is far from trivial. To exemplify, if ML-powered harm is believed to be mainly produced out of “bad data”, then manufacturers are effectively exempted from notifying the authorities of the risks that algorithmic learning might elicit based on deeper understandings of the inner workings of AI.
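The code-versus-data question can be illustrated with a deliberately trivial sketch (the data and learning rule are invented): the learning code is identical in both runs, yet the protective behaviour it produces differs entirely depending on the data it was fed.

```python
# Illustrative sketch (invented data): the *same* learning code produces an
# unsafe decision rule when trained on mislabelled data, which is why the
# "algorithm vs data" question resists easy answers.

def learn_stop_threshold(training_data):
    """Learn a stop distance as the largest distance ever labelled 'incident'."""
    incident_distances = [d for d, incident in training_data if incident]
    return max(incident_distances) if incident_distances else 0.0

clean_data  = [(0.3, True), (0.4, True), (1.5, False), (2.0, False)]
mislabelled = [(0.3, False), (0.4, False), (1.5, False), (2.0, False)]  # incidents never recorded

print(learn_stop_threshold(clean_data))    # 0.4: the robot stops within 0.4 m of a human
print(learn_stop_threshold(mislabelled))   # 0.0: identical code, no protective stop at all
```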
The third and final point we ought to highlight from the General Product Safety Directive regards the obsolescence of the “sampling” criterion for market authorisation: it stipulates that producers are expected to assess risks inter alia by “sample testing of marketed products”,Footnote 129 which is irrelevant when it comes to algorithmic learning, which entails that every machine behaves uniquely. One should also bear in mind that testing ML-powered cobots in realistic conditions with humans can form part of a vicious circle, as it proves riskier (and thus regulatorily harder) precisely due to algorithmic unforeseeability, which means that the machines that most need testing (ie the smart and collaborative ones) are those that are going to be tested the least – or in the least realistic, human-participated and use-proximate conditions.Footnote 130 The debate is thus warranted on meaningful alternatives to or additional safeguards for both human participation and behavioural sampling.Footnote 131 Subtracting humans from the testing equation seems unadvisable:
cognition is said to be “situated”. When applied to the example of collaborative robots, risk cannot only be understood from the techno-centric perspective of mere energy containment in terms of managing speed, force, and separation.Footnote 132
At the same time, awareness must be raised of unsafe testing that could subject humans to unpredictable (and indeed untested) ML-prompted cobotic action. It is true that “difficulties [surface] in identifying the moment in time during a robot’s trajectory where a specific algorithm is the least secure, requiring simulation or testing with the entire system”,Footnote 133 and this is obviously going to prove resource-depleting. Yet, in the long run, it might prevent safety accidents and thus, overall, contribute to optimising costs, starting with insurance premiums.
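For concreteness, what “managing speed, force, and separation” amounts to in practice can be sketched as follows. This is a simplified, illustrative separation check only; the formula and the numbers are our own assumptions and not the calculation prescribed by any standard or source cited above.

```python
# Simplified, illustrative separation check in the spirit of speed-and-separation
# monitoring: the required clearance grows with human and robot speed and with the
# robot's reaction and stopping times. Formula and numbers are illustrative only.

def required_separation_m(human_speed_ms: float, robot_speed_ms: float,
                          reaction_time_s: float, stopping_time_s: float,
                          buffer_m: float = 0.2) -> float:
    """Distance the system should keep so the robot can stop before contact."""
    human_travel = human_speed_ms * (reaction_time_s + stopping_time_s)
    robot_travel = robot_speed_ms * reaction_time_s + 0.5 * robot_speed_ms * stopping_time_s
    return human_travel + robot_travel + buffer_m

def is_safe(current_distance_m: float, **kwargs) -> bool:
    return current_distance_m >= required_separation_m(**kwargs)

print(is_safe(1.3, human_speed_ms=1.6, robot_speed_ms=0.75,
              reaction_time_s=0.1, stopping_time_s=0.5))   # False: slow the robot or stop
print(is_safe(1.3, human_speed_ms=1.6, robot_speed_ms=0.2,
              reaction_time_s=0.1, stopping_time_s=0.5))   # True: reduced speed restores the margin
```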
3. Framework Directive
Council Directive 89/391/EEC (Framework Directive)Footnote 134 sets out general prevention-orientated principles on the health and safety of workers (at work). It exhibits little specificity on most matters of interest here. As with all other Directives, it is worth considering how it has been implemented in practice by Member States (MSs) and readapted to cater for the aforementioned challenges. Paradoxically, flawed or even poor implementation recordsFootnote 135 might turn into an opportunity for “surgical” normative reforms at the MS level without rediscussing and revising the entire EU OHS framework. Of some relevance to us here, this Directive holds that legal persons, too, may be employers,Footnote 136 which brings into play the long-standing and rather complex debate on granting legal personality to (advanced) robots.Footnote 137 Even in the affirmative, could robots take on the role of employers of other robots or even humans – thus becoming responsible for their health and safety, including civil compensation and indemnities? To borrow from the public international law (PIL) lexicon, they might even be charged directly with obligations of conduct or of result. According to the United Nations Educational, Scientific and Cultural Organization (UNESCO):
the agency of robots is not as autonomous as human agency is, since it has its origins in the work of designers and programmers, and in the learning processes that cognitive robotic systems have gone through.Footnote 138
Reflecting on this, we humans, acting upon our own DNA “programming”, behavioural imitation by models, and our own “learning processes” acquired via growth and development, are not exceptionally different; and in any case, for both parties to stay safe, commitment should be reciprocal.Footnote 139
The Directive also mandates that MSs “ensure adequate controls and supervision”.Footnote 140 In light of its general preventative approach, this could plausibly be understood in our case as ensuring that robots entrusted with OHS enforcement are smart (and thus adaptable and responsive to changing circumstances) but not too smart (and thus capable of self-governance and resistance to external deactivation).Footnote 141 It also provides that whenever supervisory capability cannot be found within the firm, outsourcing is allowed, as long as the employer makes external services aware of factors that are actually or potentially affecting the health and safety of workers, and all relevant information thereabout, including measures and activities in the realm of first aid, fire-fighting and evacuation.Footnote 142 Since the encoding of safety rules is a profitable and technologically advanced solution, this may create issues related to algorithms and know-how as trade secretsFootnote 143 and related licensing options. In any case, it is clear that this Directive was conceived for another era:
It does not define levels of autonomy for robots which in the future may play a significant role in the work process of many European employers. This makes the analysis of risks and threats posed by new areas of scientific and technological progress a justified task.Footnote 144
4. Machinery Directive
Directive 2006/42/EC (the Machinery Directive)Footnote 145 occupied a key position within the EU industrial safety rules architecture. As per this Directive’s terminology, in the case scrutinised here, the encoded robot would oversee the application of safety policies as its normal functioning, while “safety components” would be those installed into the robot to ensure that such normal functioning is in fact carried out properly rather than endangering other robots and – most importantly – humans. Indeed, the Directive defined a safety component not merely as that “the failure and/or malfunction of which endangers the safety of persons”, but also that “which is not necessary in order for the machinery to function, or for which normal components may be substituted in order for the machinery to function”.Footnote 146 Tellingly, the Directive added that such a component was one “which is independently placed on the market”; hence, the applicability of this piece of legislation to safety aspects of the case under scrutiny was limited to separate components – mostly mechanical ones – and did not encompass built-in antagonistic algorithmic control systems that enhance the safety of OHS-encoded SmaCobs (eg by constraining the “learning” of the “main” ones).Footnote 147
On a different note, while this Directive mandated an “iterative process of risk assessment and risk reduction”Footnote 148 based on the International Organization for Standardization (ISO)’s 12100 standard, no specific mention was made of ML risk mitigation, namely vis-à-vis strategies to identify, implement and recalibrate over time relevant limitations on robots’ ability to learn from observable patterns and refine their behaviour accordingly. In fact, as per the Directive, machines could be marketed so long as they did “not endanger the health and safety of persons […], when properly installed and maintained and used for [their] intended purpose or under reasonably foreseeable conditions”,Footnote 149 with the latter hardly being identifiable when ML is at stake – unless one reads the “reasonably” qualification liberally. Either way, the Commission declared that it might accommodate amendments related to the Internet of Things (IoT) and smart roboticsFootnote 150 – though the contents of such amendments as well as the Commission’s policy approach in integrating them remain undefined – and they definitely do not feature in the New Machinery Regulation.
5. Policies and technical standards
In touching upon the recently issued civil liability rules, we leave the legal tangle of Directives behind and approach a marginally less formal but possibly even more essential regulatory portfolio, composed of extensive policies and technical standards. EU institutions have long been framing robotics legal developments within initiatives on AI,Footnote 151 with the tandem of the proposed AI Act and AI Liability Directive representing the culmination of this policy journey of countless Recommendations, Briefs, Proposals, Communications, Reports, Surveys and so forth. In the explanatory introduction to its proposed Liability Directive aimed at repealing the 1985 Directive, the Commission asserted that “factors such as the interconnectedness or self-learning functions of products have been added to the non-exhaustive list of factors to be taken into account by courts when assessing defectiveness”,Footnote 152 which resonates well with the general aim of adapting the EU’s liability framework to smart applications.
The next step for us here is to study EU law’s interaction with international industry standards.Footnote 153 These include, for example, those developed by ISO Technical Committees (TCs) 299 “Robotics”, 184/SC 2 “Robots and robotic devices”, and 110/SC 2 “Safety of powered industrial trucks”, including the aforementioned ISO 12100, as well as ISO 3691-4:2020 on safety requirements for and verification of inter alia driverless industrial trucks and their systems, and ISO Technical Specification (TS) 15066, which is dedicated to cobots but is not (yet) a full standard.
Attention should be paid to ISO 10218, originally issued in 2011, whose updated version was due to be released by 2022 – at the time of writing, though, it is still awaiting a compliance check against the theoretically voluntary yet quasi-binding EU harmonised standards (ENs).Footnote 154 The second part of ISO 10218 (ie ISO 10218-2) is dedicated to industrial robot systems, robot applications and robot cells, with a decisively collaborative flavour. It was reported that
[a]s a “draft international standard” (DIS), the ISO/DIS 10218-2 was published in late 2020 and is currently under evaluation. Collaborative applications are identified as characterized by one or more of three technologies: “hand-guided controls” (HGC), “speed and separation monitoring” (SSM), and “power and force limiting” (PFL). Specific risk assessment is envisaged for potential human–robot contact conditions, as well as related passive safety design measures and active risk reduction measures.Footnote 155
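To give non-specialist readers a more concrete flavour of what speed and separation monitoring entails in practice, the following minimal Python sketch illustrates the kind of separation check that SSM implies. It is a simplified illustration under our own assumptions: the parameter names, the buffer factor and the structure of the calculation are ours and do not reproduce the normative formulas of ISO/TS 15066 or ISO/DIS 10218-2.

```python
# Illustrative, simplified sketch of a speed-and-separation-monitoring (SSM) check.
# Parameter names and numbers are hypothetical; they do not reproduce ISO/TS 15066.

def protective_separation(v_human, v_robot, t_reaction, t_stop, stop_distance,
                          intrusion=0.1, uncertainty=0.05):
    """Minimum separation (m) the system should maintain before slowing or stopping.

    v_human:       assumed worst-case human approach speed (m/s)
    v_robot:       current robot speed towards the human (m/s)
    t_reaction:    time for sensing and control to react (s)
    t_stop:        time for the robot to come to a standstill (s)
    stop_distance: distance travelled by the robot while stopping (m)
    intrusion:     allowance for body parts reaching past the sensing field (m)
    uncertainty:   combined sensing/positioning uncertainty (m)
    """
    human_contribution = v_human * (t_reaction + t_stop)
    robot_contribution = v_robot * t_reaction
    return human_contribution + robot_contribution + stop_distance + intrusion + uncertainty


def ssm_action(current_separation, v_human, v_robot, t_reaction, t_stop, stop_distance):
    """Return the mitigation the cobot should apply given the measured separation."""
    d_min = protective_separation(v_human, v_robot, t_reaction, t_stop, stop_distance)
    if current_separation <= d_min:
        return "protective_stop"           # stop before contact can occur
    if current_separation <= 1.5 * d_min:  # illustrative buffer factor
        return "reduce_speed"
    return "continue"
```

The point of the sketch is simply that SSM reduces “risk” to a deterministic geometric and kinematic calculation, which is precisely the techno-centric framing whose limits we discussed above.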
One of the key premises of this paper can be restated here: if one examines the list of “[c]hanges between the 2011 edition and the ongoing revision”,Footnote 156 numerous interesting items can be found – including “local and remote control”, “communications” and “cybersecurity” – but the issue of AI is totally neglected; dossiers such as AI security and liability, algorithmic governance or ML are not tabled as deserving of inclusion. This class of challenges, in its intersection with the collaborativeness of future robots, is left off the agenda. Unfortunate as it may be, it is not entirely surprising: international safety standards are still hampered by priorities that were conceived for high-volume manufacturing. Instead, not only “today [can] the same robot manipulator […] be used for manufacturing, logistics, rehabilitation, or even agricultural applications[, a versatility that] can lead to uncertainty with respect to safety and applicable standards”,Footnote 157 but algorithmic learning warrants even deeper policy specialisation in order to account for the most diverse professional domains.
Another ISO TC, namely number 199 (“Safety of machinery”), has developed standard 11161:2007, specifying the safety requirements for integrated manufacturing systems that incorporate interconnected machines for specific applications. The same Committee, and particularly its Working Group 8 (“Safe Control Systems”), has also developed the two-part standard EN/ISO 13849; this builds on the International Electrotechnical Commission (IEC)’s 62061 standard, which had simplified the original IEC 61508 standard for the machinery sector, and it is of special relevance here as it applies that approach to safety-related parts of control systems – that is, components of control systems that respond to safety-related input signals and generate safety-related output signals.Footnote 158 The 13849 standard adopts the definitions of safety integrity level (SIL) and performance level (PL) so as to rate, through quantifiable and non-quantifiable variables, the probability of harmful events occurring at given overall machine safety levels.Footnote 159 In this sense it displays, we consider, the right approach.Footnote 160 However, the shortcomings of the other standards mentioned previously can also be seen here: no mention is made of QC or AI-related hazards (and opportunities), and expressions such as “artificial intelligence”, “algorithms” or “machine learning” are wholly absent from both the text and the accompanying major techno-policy reports.Footnote 161 The text does incorporate parameters and procedures such as “software safety lifecycle with verification and validation”, “use of suitable programming languages and computer-based tools with confidence from use” or “impact analysis and appropriate software safety lifecycle activities after modifications”.Footnote 162 However, these only cover traditional electronics and software and thus “non-smart” IT programming and coding endeavours, and their simplicity would be frustrated by the intricacies of algorithmic self-“improvement”.
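Purely by way of illustration of how PL ratings quantify risk, the following sketch maps an average probability of dangerous failure per hour (PFHd) onto a performance level in the spirit of EN/ISO 13849-1. The band boundaries shown are indicative only, and the sketch deliberately omits the architecture categories, diagnostic coverage and mean-time-to-dangerous-failure considerations that the standard also takes into account.

```python
# Illustrative mapping from an average probability of dangerous failure per hour
# (PFHd) to a performance level (PL), in the spirit of EN/ISO 13849-1.
# Band boundaries here are indicative; consult the standard for normative values.

def performance_level(pfhd: float) -> str:
    bands = [
        ("e", 1e-8, 1e-7),
        ("d", 1e-7, 1e-6),
        ("c", 1e-6, 3e-6),
        ("b", 3e-6, 1e-5),
        ("a", 1e-5, 1e-4),
    ]
    for level, low, high in bands:
        if low <= pfhd < high:
            return f"PL {level}"
    return "outside PL bands - redesign or reassess"

print(performance_level(5e-7))  # -> "PL d" under the indicative bands above
```

The appeal of this approach, as noted in the main text, is that it forces designers to express safety claims as quantified failure probabilities; its limitation is that nothing in the calculation reflects what a self-learning component might do after certification.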
In September 2020, yet another ISO Committee – number 159, namely Working Group 6 on “Human-centred design processes for interactive systems” from its Sub-Committee 4 on the “Ergonomics of human-system interaction” – published its Technical Report (TR) 9241-810:2020 on “Robotic, intelligent and autonomous systems”. This does fulfil its promise in identifying some of tomorrow’s challenges for AI-driven cobots, but it is far from being translated into policy (ie a technical standard) – let alone action. One worrying aspect is that it strives for human enhancement as much as it calls for human-centric robot design, without elaborating on the risks of expecting humans to “enhance” their performance through moral elevation and physical fitness (including integrated bodily extensions such as prosthetics, implants and powered exoskeletons) at a pace, and with stakes, imposed by robots.
The preceding paragraphs have given a due overview (and SmaCobs-orientated commentary) of the current scenario. On a more socio-legal reading, as “robot manufacturers typically take part in standardization committees, along with integrators, end-users and stakeholders representing public health and health insurance”,Footnote 163 such committees are important for understanding the state of the field and for building consensus around policy harmonisation. In order “to devote attention to the significant influence of human–cobot workplace ethics on the process standardization of collaborative workplaces”,Footnote 164 we call for hybrid committees that could inject insights from the social sciences into these engineering-intensive quasi-normative endeavours. As for their legal authoritativeness, international standards’ salience is first grounded in governmental and “peer” auditors’ expectations about their de facto bindingness, with the case of China (but referring to domestic standards) being an outstanding and normatively influential exemplification of thisFootnote 165 – curiously, those auditors would themselves become subject to SmaCobs’ safety decision-making. Within the EU, for instance, the aforementioned ENs are adopted by the EC, explicitly endorsed by relevant governmental and executive agencies,Footnote 166 as well as referred to by courts through judicial activismFootnote 167 from the domestic to the Union level in adjudicating technically demanding cases.Footnote 168 Furthermore, they
provide manufacturers with means to presume conformity with the requirements, through the legally binding “presumption of conformity”. If the manufacturer decides not to use ENs, it bears the burden of proof to satisfactorily demonstrate that an alternative standard or methodology provides an equivalent or [higher] level of safety than that provided by the harmonised standard.Footnote 169
International technical standards frequently feature in binding laws as an explicit or – most often – implicit reference; this is the case, for example, with the abovementioned Directive 89/391/EEC, which commits the European Council to remain aware of and up to date not only about “the adoption of [other D]irectives in the field of technical harmonisation and standardisation”, but equally about “technical progress, changes in international regulations or specifications and new findings”.Footnote 170 In this way, standards are often “hardened” into enforceable legislation. Several courts, and most prominently the Court of Justice of the European Union (CJEU), are increasingly following suit, extending their jurisdiction over the interpretation and bindingness of technical standards issued by private industry bodies and deciding liability cases based thereupon.Footnote 171 Given their incorporation into law, the extent to which those standards are genuinely international is worth pondering: do they reflect geo-economic power or scientifically validated best practices? Do they account for different (working) cultures, production dynamics, technical capabilities and socio-political orientations? To put it bluntly: are they legitimate?Footnote 172 These comparative socio-legal questions exceed the scope of this paper, but what we preliminarily observed is that jurisdictions around the world (perhaps less so in “non-Western” polities), unlike some scholarly circles,Footnote 173 tend to endorse such standards’ technical validity and “policy legitimacy”. For instance, Japanese scholars have called on the Japanese government to make such standards directly enforceable under Japanese law as per the model of the EU.Footnote 174 Furthermore, certain standards and specifications are directly addressed to jurisdictions beyond the Union.Footnote 175
Lastly, can both rules and standards be encoded into robots, and will robots be capable of untangling normative conflicts – even ones situated at different degrees of bindingness? It has been boldly asserted that “within a conceptual framework of algorithmic law and regulation, classical distinctions of legal theory […] become either redundant or obsolete”,Footnote 176 but this cannot discount smart robots’ ability to output behaviours based on their assessment of the overall legal instruction stemming from multi-level and tangled legal documents whose instructions are situated on a sliding scale of characteristics, including indeed bindingness. This seems topical in a context such as algorithmic regulation that appears to be questioning long-standing dichotomies between, for instance, hard and soft laws.Footnote 177 This is precisely the aforementioned potential added value envisioned for QC in the (near?) future: to provide algorithms governing the robotic enforcement of safety rules with both the computing power and the sophistication to immediately respond to even the most unheard-of scenarios by recourse to the most appropriate combination of legal resources as selected among the applicable hundreds or thousands – from the softest to the hardest ones. However, whether such a future scenario for QC arises remains to be seen and will depend on a number of technical, social, economic, political and – of course – legal factors.
V. Key points for a tentative proposal under EU law
Quantum mechanics and chaos theory have demonstrated that perfect predictability is nothing more than a chimera. Footnote 178
In the preceding section, we have listed and analysed current laws, policies and standards in place internationally and especially within the EU to outline all major elements that, as they stand to date, appear to hinder the effective policing of safe environments where SmaCobs and human workers can thrive together. Drawing on the gaps we identified in these frameworks, accounting for the relevant contexts described earlier and noting that the New Machinery Regulation has failed to deliver on its promise, here we offer some key reasoning and recommendations, with a view to developing a “core” policy set that (EU) policymakers may wish to consider towards the aim of crafting a more coherent, comprehensive and forward-looking regulatory approach to SmaCobs.
Such an approach would be in line with the regulatory turn in technology-intensive sectors, as especially but not exclusively witnessed in the EU.Footnote 179 Over the last decade or so, the idea that market mechanisms would suffice to govern technology and innovation has become less popular in the wake of surveillance and data protection-related scandals such as the Snowden revelations, Cambridge Analytica and others. The EU has also harnessed the “Brussels Effect” of its legislation to (attempt to) assert its power and stance within the multipolar geopolitical scenario and race to dominate AI development: with China and the USA ahead of it in terms of technology development, the EU may be compensating with regulatory development, “persuasiveness” and sophistication.Footnote 180 While market mechanisms may exert a governing force over SmaCobs, these are beyond the scope of this paper and require further research. In the meantime, given the EU’s regulatory turn and the deficiencies of the current legal framework identified in the previous section, we consider that legal reform is warranted and consistent with the EU’s current approach to technology governance.
At the outset, let us stress that after several decades of robotics being regulated through Directives, a legal instrument in the form of a Regulation rather than a Directive, as demanded by the EP itselfFootnote 181 as well as by the Commission in its then-draft New Machinery Regulation,Footnote 182 was indeed warranted. As outlined in the previous section, this field was already Directive-intensive, and it would certainly have benefitted from a bloc-wide harmonised approach. Transposition timelines for Directives into MSs’ domestic legal orders are lengthy, and such a strategic dossier could no longer be fragmented along national dividing lines. The New Machinery Regulation is a move in the right direction, but Fig. 1 supra delineates in detail, comparatively, why its role in potentially regulating smart cobotics will remain limited.
The time has come for a fully-fledged Regulation conceived for the challenges of smart cobotics, particularly if the EU aspires to outpace (or at least keep up with) China and the USA, reinforce the Brussels Effect and establish an overarching and efficient normative framework that prepares companies for the automation age by integrating data protection, IP and OHS standards into a coherent regulatory landscape. Also, this industry needs to be highly standardised because it is by definition an integrated one: SmaCobs are expected to contribute to “high manufacturing” operations, with examples including the assembly of aerospace components or the construction of cutting-edge biomedical facilities – all concerted efforts that mostly involve cooperation across domestic jurisdictions (and, indeed, across the European continent). Related to this, the Regulation should adopt an “omnibus” as opposed to “sectoral” approachFootnote 183 – possibly leaving domestic legislators some autonomy regarding how it would apply to the industry sectors in which each MS is prominent, while nevertheless ensuring compliance with the binding core of our proposed Regulation and accounting for SmaCobs’ use across traditional industry or sectoral boundaries. By “binding core” we mean the (majority) bundle of obligations that would be detailed enough to apply immediately and uniformly across MSs’ domestic legal orders, without further specification as to the means of satisfying the Regulation’s requirements. True, direct applicability without national transposition is the essential advantage of any Regulation over a Directive, but we have already witnessed (eg with biometric data processing as per the General Data Protection Regulation (GDPR)) that when a Regulation promises to exhibit extreme complexity, MSs might negotiate its precise requirements up to a certain extent, leaving a few of the most controversial sections somewhat “open-ended” as to how MSs are to implement specific groups of provisionsFootnote 184 through a sort of “mini-Directive” within the Regulation itself.
Such a Regulation should finally resolve and take a “future-proof” stance on a range of controversial matters (as we surveyed them earlier). The first of these should be whether robots can be granted legal personhood and, if so, be deemed employers and thus entrusted with responsibility to oversee workers, train them, inform regulatory agencies, coordinate with authorities and appoint relevant subordinates (eg OHS managers). In this respect, it should be recalled that Directive 89/391/EEC confers on workers the duty to “cooperate […] with the employer and/or workers with specific responsibility for the safety and health of workers”,Footnote 185 which would establish a whole new human–robot interaction (HRI) field and redefine the parameters of notification and processing of imminent dangers. If we accept robots as legal persons (equivalent to, eg, a corporation), how are we supposed to apportion liability in the event of faults? The issue would deserve extensive analysis on its own, but for now, and within the limits of this paper, we will reason by analogy from data protection law. Under the EU’s GDPR, the users of the data-processing device can generally be assumed to be data controllers, while manufacturers are exempted from liability as they do not process the data directly. However, a recent stream of scholarship advises that
the development of “smart” devices with “local” or “edge” computing upends the assumption that device manufacturers automatically fall outside the scope of the GDPR. Device manufacturers may often have sufficient influence on the processing to be deemed “controllers”, even where personal data is processed on-device only without any direct processing by the manufacturer. This is because device manufacturers may in certain cases “determine the means and purposes of the processing”.Footnote 186
Similar reasoning could apply in the OHS domain. Indeed, if robots are truly “smart” and respond to the environment largely independently of the input of their ultimate user, there would be no reason to burden the ultimate user with civil liability for faults that are, in fact, closer to shortcomings on the programming or market approval side. If a robot is programmed to pursue unsupervised learning, programmers should not face liability for the outcomes themselves of such ML but rather for the very fact that no appropriate limits to this learning were encoded into the robot. By “limits” we mean not mere temporal or contextual limitations, but self-restraint in the number and complexity of interconnections among the pieces of data that the robot feeds itself in order to learn from them. Admittedly, this is easier said than done; however, the technical preliminary discussion before encoding standards should indeed revolve around the conceptual identification, engineering viability, legal definition and ethical boundaries of the mentioned “limits”. Once EU OHS legislation on smart cobots becomes precise enough to set out red lines not to be crossed and mandates programmers to grant machines learning abilities only up to a specified extent, any harm ensuing from the machine’s defiance of such limits should indeed be attributed to the manufacturer rather than to final users.
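What such encoded “limits” might look like in practice is ultimately an engineering question. Purely as a hypothetical illustration of the idea (all names, metrics and thresholds below are invented for the example), a learning component could be wrapped so that updates are refused, and learning frozen pending human review, once a pre-registered complexity or behavioural-drift budget is exceeded:

```python
# Hypothetical sketch of a "bounded learning" wrapper: the robot may keep learning
# only while the complexity of its learned model and the drift of its behaviour
# stay within limits fixed (and documented) before deployment. All names,
# thresholds and metrics are invented for illustration.

class BoundedLearner:
    def __init__(self, max_parameters: int, max_behaviour_drift: float):
        self.max_parameters = max_parameters            # cap on model complexity
        self.max_behaviour_drift = max_behaviour_drift  # cap on deviation from certified behaviour
        self.learning_enabled = True

    def propose_update(self, new_parameter_count: int, measured_drift: float) -> bool:
        """Accept a learning update only within the pre-registered limits."""
        if not self.learning_enabled:
            return False
        if new_parameter_count > self.max_parameters or measured_drift > self.max_behaviour_drift:
            self.learning_enabled = False  # freeze learning; flag for human review/audit
            return False
        return True
```

On the allocation of liability sketched above, the manufacturer’s duty would then be to define, encode and document such limits, whereas harm flowing from learning that defies them would point back to the manufacturer rather than to the end user.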
On more operational grounds, the law should provide for machines’ graduated transitions between smart and non-smart modes, including temporarily shutting down learning when needed (eg during the most critical assembly stages or when potential harm materialises). No matter how smart the machine, there should be a mechanism to switch it off if risks of harm to humans materialise, meaning that at least one available human should always retain a last-resort commanding capacity within the firm at any one time – somewhat on the model of flight commanders versus automatic pilots in civil aviation (see the illustrative sketch after the quotation below). This would offset the unprecedented and discouraging relationship of subordination that certain low-skilled workers may experience vis-à-vis robots if the latter take over the direction of (given chains of) field operation. It is worth emphasising that workers should not necessarily enjoy the capacity to redirect dysfunctional robots’ operations, but just to turn them off; indeed, workers’ substandard (or in any case cognitively untrained) performance in redirecting robotic operations might itself represent a source of hazard,Footnote 187 which could even go unsupervised in this case because OHS standards would be encoded into and enforced by those same robots that exhibit signs of dysfunction – who watches the watchers? In this respect,
regulators tackling the issue of human oversight in the future will need to develop clear criteria that balance the potentially competing interests of human-centrism versus utility and efficiency. Indeed, […] the “robot-friendliness” of the law in different jurisdictions may lead to a new regulatory race to the bottom.Footnote 188
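Purely as an illustrative sketch of the narrow “halt, not redirect” capacity advocated above (all names are hypothetical), the control loop could be arranged so that a human stop signal always takes precedence over any robot-generated decision:

```python
# Hypothetical sketch of a last-resort human override: the designated human can
# always halt the cobot, but cannot (and need not) redirect its operations.
# All names are invented for illustration.

class SafetySupervisor:
    def __init__(self):
        self.human_stop_requested = False

    def request_stop(self) -> None:
        """Called by the designated human supervisor's physical or virtual stop control."""
        self.human_stop_requested = True

    def arbitrate(self, robot_planned_action: str) -> str:
        # The human stop request takes precedence over any robot-planned action,
        # including actions the robot's own safety logic considers acceptable.
        if self.human_stop_requested:
            return "halt_and_hold"
        return robot_planned_action
```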
Even where workers were not originally under-skilled, the protracted entrusting of standards enforcement (eg on safety) to robots could gradually lead to skill obsolescence: “as professionals cease to exercise their skills and end up less proficient when the technology fails”,Footnote 189 the threat of “skill fading” should be factored into any emergency plan contingent on robotic failure.
Related to this, MSs should establish a procedure to make sure that robots with encoded OHS standards remain smart but not too smart, perhaps combining an open-access techno-scientific registry with periodic cycles of review and auditing. This registry should evidence how, within the ethical autonomy continuum, encoded robots stop short of exceeding the point upon which their ethical landscape would become fully automated,Footnote 190 and it should be open for the public to monitor and even submit observations. As regards auditing, given that tomorrow’s robots will number in the hundreds of millions, most of them highly customised, and that tracking how each learns over time would be unsustainably expensive, problems would arise as to what “sample” would stand as representative of all robots from a given market batch, production line or corporate conglomerate. It is emphasised in the literature that “[w]ith the emergence of automation in audit, sampling procedures become obsolete because algorithms allow auditors to audit the entire data population, not just the sample”.Footnote 191 This might hold true for all “algorithms as such” as they are initially programmed, but it will prove unhelpful with regard to both how such algorithms learn over time and how they come to interact with the specific sensing and motor apparatuses of each different cobot. Indeed, the parallel problem arises as to what class of on-the-ground situations would be representative of hazards caused by dysfunctional safety-encoded cobots. All of these matters, in turn, depend on whether safety-encoded robots encompass themselves within their policing scope or are confined to enforcing rules vis-à-vis third parties (ie humans and other robots) only – an issue that also relates to cognate debates around robots’ self-perception and rudimentary degrees of sentience.
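What a minimal entry in such a registry might record is sketched below, purely for illustration; the field names are hypothetical, and the actual content would have to be settled by the Regulation and its implementing acts.

```python
# Hypothetical sketch of one entry in an open-access registry recording periodic
# review and auditing of OHS-encoded cobots. Field names are illustrative only.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class CobotAuditEntry:
    cobot_id: str                      # manufacturer serial or deployment identifier
    deployment_site: str               # firm / plant where the cobot operates
    encoded_ohs_ruleset_version: str   # version of the safety rules encoded into it
    autonomy_ceiling: str              # declared point on the ethical-autonomy continuum
    last_review: date                  # date of the latest periodic review cycle
    learning_frozen: bool              # whether self-learning is currently disabled
    public_observations: list[str] = field(default_factory=list)  # submissions from the public

entry = CobotAuditEntry(
    cobot_id="EX-0001", deployment_site="example plant",
    encoded_ohs_ruleset_version="v1.2", autonomy_ceiling="supervised adaptation only",
    last_review=date(2023, 6, 1), learning_frozen=False,
)
```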
The proper balance between OHS standards’ encoding by design and their on-the-road readjustment should be sought. We should ensure that (most of) cobots’ functions are designed to be safe from the outset, but safeguarding reprogramming flexibility can prove equally wise and salient in certain contexts where design rigidity is more of a barrier to day-to-day safety choices than an enabler, particularly with regard to the neuropsychiatric spectrum of professional health. The legislator is called upon to be pragmatic and to ensure that the “division of labour” between safety encoders at the design stage and those contributing to it subsequently is clearly delineated and mutually subscribed to, in an inclusive and participatory manner, through testbeds attended by physically, socio-culturally and neurologically diverse pools of individuals representative of expected as well as potential future users. Not only are there “difficulties in identifying the point in time during a robot’s trajectory where a specific algorithm is the least safe, requiring either a simulation or a test with the completed system”,Footnote 192 but even once said completed system is assembled, programmed and tested, unbounded ML would make it impossible for regulators to assess its hazardousness. Programmers and inspectors alike are called upon to pursue not maintenance but reassessment and verification from scratch at regular intervals. Mindful of the further caveat that certain “learnings” might trigger exponential rather than linear behavioural changes vis-à-vis the time scale, regulators should also appreciate that scenario-disrupting expectational asymmetries can arise at any time, so that “regular” inspection might not mean much as regards safety.Footnote 193
In some cases, smart adaptation to complex (actual or intended) interactional stimuli and environmental cues will still lead to incidents, albeit fewer in number (and perhaps in magnitude) than non-adaptation would. Think of a robot that is capable of adapting its instruction delivery to the linguistic proficiency, diction (accent, tone), phraseological structure, cultural literacy, situational awareness and frequent vocabulary of relevant users. Such adaptation could breed overconfidence in some users or confuse others (eg temporary shift replacements), and uneven mastery of the language might itself trigger incidents caused by miscomprehension between robots and human speakers – particularly when non-native speakers or specially disadvantaged subgroups of blue-collar workers are involved.
The identification of context-sensitive risk-mitigation strategies is of the essence. These may depend inter alia on the nature (field of activity, public exposure, political susceptibility), location (surroundings, applicable jurisdiction, etc.) and size of the firm, training background of relevant officers, safety equipment and engineering conditions of building spaces such as laboratories and workstations, evacuation plans, substance toxicity, presence of workers in special need of assistance, project flows and proximity to first-aid centres.
In policing robotics, legislators have long tended to reinforce old-fashioned dichotomous divides between “mental” and “physical” health, focusing on the latter. Nonetheless, common (and increasingly prevalent all across “developed” societies) neuropsychiatric conditions such as anxiety and depression – but also panic, bipolar, schizophrenic, post-traumatic and obsessive–compulsive disorders, to mention but a few – should feature right at the core of any assessment strategy and be given due weight by engineers when designing robotic forms of interaction with humans.Footnote 194 More specifically, we recommend that engineers prioritise them within AI-driven checklists when coding SmaCobs’ enforcement of OHS rules. For instance, biomedical engineers should seek the assistance of clinicians and other relevant health professionals in pondering how the understanding of, say, generalised anxiety disorders approximates to real human experience when anxiety identification, mitigation and prevention are encoded into robots as part of safety responses. Would robots be sensitive to said disorders and their inexplicable fluctuations? Would they perceive (clinically relevant) anxiety in any “human-like” fashion – assuming (without conceding) there is any?Footnote 195 Also, the information currently to be shared with potential outsourced safety providers does not read as having been conceived for the automation age, in which hazards tend to blend physicality and psychology with cognition conundrums and mostly relate to machine autonomy, human mental health and well-being and suboptimal techno-managerial expertise. EU legislation has long addressed the issue of workplace-related stress,Footnote 196 requiring supervisors to keep it monitored and intervene when necessary: this is what would be required of robots, too, were they entrusted with health supervisory functions – also vis-à-vis more demanding conditions. On a less clinical note, but still related to the human mind, mitigation strategies should also be devised to reduce psychological (as “alternative” to psychiatric) discomfort (as “opposed” to disorders), including through questionnaires, psychological metrics and behavioural metrics developed by HRI-specialised research institutes.Footnote 197 These should feature as binding (but progressive) requirements within the Regulation we are outlining here.
Just like any other algorithmically driven machines, smart robots are prone to bias.Footnote 198 Mindful of the discriminatory allocational strategies subsumed under the vast majority of algorithmically devised groups,Footnote 199 the legislator should decide whether SmaCobs can and should define specially protected (or “vulnerable”) clusters of individuals for OHS purposes and who retains ultimate legal recourse as a remedy if serious consequences arise from robots’ misallocation of a worker to over- or under-protected categories. Whose interests will SmaCobs protect first?Footnote 200 In human-populated working environments, “[w]hat is advantageous to the firm may be viewed negatively from an individual’s perspective[, as d]iffering groups need to navigate and negotiate their values with respect to the others”.Footnote 201 But what if the negotiation is to be carried out between humans and robots? Would humans accept that safety “paramount interests” be prioritised, calibrated and acted upon by algorithmically fed robotic entities on behalf of a superior collectivity? Also with a view to “democratising” this process (to the extent possible), the most meaningful balance should be drawn between the disclosure of algorithmic codes for public accountability purposes and the protection of trade secrets that extend far beyond professional secrets. This is to ensure that innovation is not jeopardised through trade secrets’ over-disclosure (and thus poor protection)Footnote 202 while little gain is achieved on the accountability side – due to ML outcomes’ inherent inexplicability. Tentatively, specific provisions should be devised to encourage the sharing of best practices among employers and employees from different companies and business districts, without running into trade secret misappropriation, non-compliance with non-compete contractual clauses or prohibitions on collusive conduct in competition law. Furthermore, exactly because interest-maximisation strategies differ so remarkably between smart machines and humans, specific rules should be devised to accommodate dispute prevention, handling and mitigation not only among SmaCobs or among humans, nor only between each SmaCob and its human co-worker, but equally between different SmaCob–co-worker teams within the same working group.
No matter how cohesive, our proposed EU Regulation should, however, incorporate context-sensitive provisions aimed at leaving room for further sector-specific regulatory manoeuvring (depending on the actual job profiles and tasks warranting smart robotic collaboration within Europe), and it should be informed by participatory stakeholder input and processes, especially from workers themselves. Its bindingness and direct applicability throughout the EU will come at a cost: namely, it should be conceived as “framework legislation” whereby certain details are deferred to later legal instruments (while possibly being covered by soft arrangements in the meantime). This is also aligned with the preferences expressed by States during the negotiations towards the first ever international binding instrument on business and human rightsFootnote 203 – which is somewhat relevant here as well.Footnote 204
The Regulation we propose would also strive to integrate industry standards more organically into binding EU law, not as mere expectations of conduct but as compulsory safety requirements where applicable. Industry standards currently feature six main “skills” that machines should exhibit for them to be deemed safe (maintain safe distance; maintain dynamic stability; limit physical interaction energy; limit range of movement; maintain proper alignment; limit restraining energy),Footnote 205 but we advocate for the introduction of a seventh skill: monitor self-learning pace and outcomes and prevent harmful effects thereof on humans (and the environment) through pre-emptive switch-off. This is what should be tested; as for how, a whole-system rather than component-by-component approach should be preferred whenever feasible, accounting for worst-possible-scenario types of unexpected behaviours grounded not so much in the most extreme actions that machines are programmed to deliver as in the most extreme actions they could technically (learn to) accomplish. Not least, interfaces should be explored with PIL and its jurisdictional assertions – for instance, when it comes to safety incidents stemming from inter-jurisdictional VR-mediated interactions between robots and humans, as well as in the humanitarian aid domain with the encoding of rules of engagement within automated health recovery procedures.
VI. Conclusions
Compared to non-collaborative industry robots, cobots (let alone smart ones) are still a market niche, but powering them with AI and possibly QC in the future – even accounting for sustainability challenges related to increased computing power and related energy consumptionFootnote 206 – will see them strategically deployed in the forthcoming smart factories.Footnote 207 Yet, while the EU is assertively legislating across virtually the entire spectrum of policy areas invested in the digital, AI and soon quantum transformations,Footnote 208 including algorithmically intensive industries, no current law addresses the encoding of OHS standards for robots, nor does any ongoing legislative process address the specific risks stemming therefrom – or cobotics more widely, for that matter.
The new EU liability regime for AI applications, which will be regulated through the (currently draft) AI Liability Directive, is not robotics-specific and fails to address most socio-technical complexities arising from human–cobot interactions in collaborative settings – as summarised in Fig. 1 supra. Furthermore, its harmonising momentum will remain limited precisely due to its status as a Directive, especially from a tort law perspective.Footnote 209 Nor does the AI Act resolve this essential issue.Footnote 210
As for occupational safety in robotics, the field had been regulated entirely through Directives. With its most recent New Machinery Regulation, coming into effect in mid-June 2023, the EU did take a qualitative step forward, which is nevertheless too modest and scope-limited to respond to today’s needs in this specific industry domain as well as to withstand the related sector-specific regulatory competition from other world regions. This represents a fracture between the policy arena and industry developments, just as much as it reflects a disconnect between the legal world and engineers’ concerns.Footnote 211 The misalignment is mirror-like: law and technology specialists are aware of the AI revolution elicited by smart machines as regards the ethics of machine–human interactions but fail to translate this awareness into cutting-edge, “frontier” policies; for their part, engineers mostly dismiss these ethical concerns about potentially unforeseeable ML-triggered risks as peripheral, futuristic or at best improbable while progressing fast in assembling algorithmically powered robots. In fact:
we could interpret their reactions as indicative of their lack of interest in such questions, or their lack of exposure to more metaphysical debates on the nature of AI. […] From their perspective, they are only building machines that need to be safe, they are not building machines whose behaviour needs to be safe or lawful. [… Nonetheless,] this question of machine behaviour as a distinct and emergent feature of robotics, which goes beyond the mere sum total safety of the components assembled could become a relevant trope for analysis of the engineering practices.Footnote 212
We agree: overall unpredictability of result for machine behaviour taken as a whole should be the driving concept behind policing efforts and culturally informed lawyer–engineer cooperation in smart collaborative robotics, with dignity for humans and robots at once as their guiding momentum,Footnote 213 and with mutual trainingFootnote 214 as well as contextual ethical awareness as appropriate.Footnote 215
Hence, we surmise that the time has come for EU regulators to either embrace the techno-scientific and regulatory challenge fully (also via the establishment of supercomputers, serendipity-encouraging regulatory sandboxesFootnote 216 (including in VRFootnote 217 ) that can mitigate the over-enforcement of safety compliance – as well as through knowledge support by funded projectsFootnote 218 and the newly signed European Public–Private Partnership in AI, Data, and RoboticsFootnote 219 ) or accept that the EU will soon be outpaced by its East Asian and North American counterparts. The Commission itself acknowledged that “advanced robots and [IoT] products empowered by AI may act in ways that were not envisaged at the time when the system was first put into operation[, so much that g]iven AI’s widespread uses, both horizontal and sectoral rules may need to be reviewed”,Footnote 220 but in a time-compressed policy sector where timing is everything,Footnote 221 it is not yet systematically acting upon this need. Without forward-looking regulatory efforts,Footnote 222 the EU will fail to secure its self-declared “world-leading position in robotics and competitive manufacturing and services sectors, from automotive to healthcare, energy, financial services and agriculture”.Footnote 223 The New Machinery Regulation is “too little, too late”; and it falls short of properly serving the specificity of smart cobotics.
Across a wide portfolio of policy and professional domains, it is often lamented that regulators’ and lawyers’ concerns tend to halt or delay innovation rather than facilitate or enable it,Footnote 224 but OHS standards are so key to cobots’ trusted adoption and diffusion that addressing through regulation the challenges they bring about will only catalyse technical improvements, enhance reliability and unlock trustworthiness in this fast-paced sector over the years to come. We hope that our European proposal will serve as a frontier indication to that end, for the EU and beyond.
Acknowledgments
Earlier drafts of the present work were presented by the authors at the University of Oxford’s Bonavero Institute of Human Rights (“Algorithms at Work” Reading Group, 9 March 2023), University of Aberdeen (2nd Annual SCOTLIN Conference, 27 March 2023), as well as at Belfast’s Titanic Centre during the SPRITE+ Conference on 29 June 2023. We are grateful to the organisers and attendees of these three scholarly gatherings for their challenging questions and comments about our work. We acknowledge funding from the UK Engineering and Physical Sciences Research Council’s “Made Smarter Innovation – Research Centre for Smart, Collaborative Industrial Robotics” Project (2021-2025, EPSRC Reference: EP/V062158/1). We also acknowledge precious inputs and insights from former and current members of the aforementioned Centre, including Professor YAN Xiu-Tian, Dr Tiziana Carmen Callari and Dr NIU Cong. Riccardo Vecellio Segate gratefully acknowledges the superlative learning environment at Politecnico di Milano (Polytechnic University of Milan), where he is currently enrolled as a BEng Candidate in Industrial Production Engineering and without whose inspiring teaching staff and library resources this paper would have never been accomplished. No humans or animals were involved in this research. While the substance of the present article was conceived for the first draft as completed and submitted in early December 2022, the authors have tried their best to keep the manuscript and its references current throughout the extensive peer-review and editorial process.
Competing interests
The authors declare none.