Section IIA - Private and Public Dimensions of Health Research Regulation

Introduction

from Part II - Reimagining Health Research Regulation

Published online by Cambridge University Press:  09 June 2021

Graeme Laurie, University of Edinburgh
Edward Dove, University of Edinburgh
Agomoni Ganguli-Mitra, University of Edinburgh
Catriona McMillan, University of Edinburgh
Emily Postan, University of Edinburgh
Nayha Sethi, University of Edinburgh
Annie Sorbie, University of Edinburgh

Publisher: Cambridge University Press
Print publication year: 2021
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial licence (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/

It is a common trope in discussions of human health research, particularly as to its appropriate regulation, to frame the analysis in terms of the private and public interests that are at stake. Too often, in our view, these interests are presented as being in tension with each other, sometimes irreconcilably so. In this section, the authors grapple with this (false) dichotomy, both by providing deeper insights into the nature and range of the interests in play and by inviting us to rethink attendant regulatory responses and responsibilities. This is the common theme that unites the contributions.

The section opens with the chapter from Postan (Chapter 23) on the question of the return of individually relevant research findings to health research participants. Here an argument is made – adopting a narrative identity perspective – that greater attention should be paid to the informational interests of participants, beyond the possibility that findings might be of clinical utility. Set against the ever-changing nature of the researcher–participant relationship, Postan posits that there are good reasons to recognise these private identity interests and, as a consequence, to reimagine the researcher as an interpretative partner in the disclosure of research findings. At the same time, the implications of all of this for the wider research enterprise are recognised, not only in resource terms but also with respect to striking a defensible balance of responsibilities to participants while seeking to deliver the public value of research itself.

As to the concept of public interest per se, this has been tackled by Sorbie in Chapter 6, and various contributions in Section IB have addressed the role and importance of public engagement in the design and delivery of robust health research regulation. In this section, several authors build on these earlier chapters in multiple ways. For example, Taylor and Whitton (Chapter 24) directly challenge the putative tension between public and private interests, arguing that each is implicated in the other’s protection. They offer a reconceptualisation of privacy through a public interest lens, raising important questions for existing laws of confidentiality and data protection. Their perspective requires us to recognise the common interest at stake. Most distinctively, however, they extend their analysis to show how group privacy interests currently receive short shrift in health research regulation, and they suggest that this dangerous oversight must be addressed adequately because the failure to recognise group privacy interests might ultimately jeopardise the common public interest in health research.

Starkly, Burgess (Chapter 25) uses just such an example of threats to group privacy – the care.data debacle – to mount a case for mobilising public expertise in the design of health research regulation. Drawing on the notion of deliberative public engagement, he demonstrates not only how this process can counter asymmetries of power in the structural design of regulation, but also how the resulting advice about what is in the public interest can bring both legitimacy and trustworthiness to resultant models of governance. This is of crucial importance because, as he states: ‘[i]t is inadequate to assert or assume that research and its existing and emerging regulation is in the public interest’. His contribution allows us to challenge any such assertion and to move beyond it responsibly.

The last two contributions to this section continue this theme of structural reimagining of regulatory architectures, set against the interests and values in play. Vayena and Blasimme (Chapter 26) offer the example of Big Data to propose a model of adaptive governance that can adequately accommodate and respond to the diverse and dynamic interests at stake. Following principles-based regulation as previously discussed by Sethi in Chapter 17, they outline a model involving six principles and propose key factors for their implementation and operationalisation into effective governance structures and processes. This form of adaptive governance mirrors the discussions by Kaye and Prictor in Chapter 10. Importantly, the factors identified by the current authors – social learning, complementarity and visibility – not only lend themselves to full and transparent engagement with the range of public and private interests, they require it. In the final chapter of this section, Brownsword (Chapter 27) invites us to address an overarching question that is pertinent to this entire volume: ‘how are the interests in pushing forward with research into potentially beneficial health technologies to be reconciled with the heterogeneous interests of the concerned who seek to push back against them?’ His contribution is to push back against the common regulatory response when discussing public and private interests: namely, to seek a ‘balance’. While not necessarily rejecting the balancing exercise as a helpful regulatory device at an appropriate point in the trajectory of regulatory responses to a novel technology, he implores us to place this in ‘a bigger picture of lexically ordered regulatory responsibilities’. For him, the morally and logically prior questions are those that ask whether any new development – such as automated healthcare – poses threats to human existence and agency. Only thereafter ought we to consider a role for the balancing exercise that is currently so prevalent in human health research regulation.

Collectively, these contributions significantly challenge the public/private trope in health research regulation, but they leave it largely intact as a framing device for engaging with the constantly changing nature of the research endeavour. This is helpful in ensuring that on-going conversations are not unduly disrupted in unproductive ways. By the same token, individually these chapters provide a plethora of reasons to rethink the nature of how we frame public and private interests, and this in turn allows us to carve out new pathways in the future regulatory landscape. Thus:

  • Private interests have been expanded as to content (Postan) and extended as to their reach (Taylor and Whitton).

  • Moreover, the implications of recognising these reimagined private interests have been addressed, and not necessarily in ways resulting in inevitable tension with appeals to public interest.

  • The content of public interest has been aligned with deliberative engagement in ways that can increase the robustness of health research regulation as a participative exercise (Burgess).

  • Systemic oversight that is adaptive to the myriad of evolving interests has been offered as proof of principle (Vayena and Blasimme).

  • The default of seeking balance between public and private interests has been rightly questioned, at least as to its rightful place in the stack of ethical considerations that contribute to responsible research regulation (Brownsword).

23 Changing Identities in Disclosure of Research Findings

Emily Postan
23.1 Introduction

This chapter offers a perspective on the long-running ethical debate about the nature and extent of responsibilities to return individually relevant research findings from health research to participants. It highlights the ways in which shifts in the research landscape are changing the roles of researchers and participants, the relationships between them, and what this might entail for the responsibilities owed towards those who contribute to research by taking part in it. It argues that a greater focus on the informational interests of participants is warranted and that, as a corollary to this, the potential value of findings beyond their clinical utility deserves greater attention. It proposes participants’ interests in using research findings in developing their own identities as a central example of this wider value and argues that these could provide grounds for disclosure.

23.2 Features of Existing Disclosure Guidance

This chapter is concerned with the questions of whether, why, when and how individually relevant findings, which arise in the course of health research, should be offered or fed-back to the research participant to whom they directly pertain.Footnote 1 Unless otherwise specified, what will be said here applies to findings generated through observational and hands-on studies, as well as those using previously collected tissues and data.

Any discussion of ethical and legal responsibilities for disclosure of research findings must negotiate a number of category distinctions relating to the nature of the findings and the practices within which they are generated. However, as will become clear below, several lines of demarcation that have traditionally structured the debate are shifting. A distinction has historically been drawn between the intended (pertinent, or primary) findings from a study and those termed ‘incidental’ (ancillary, secondary, or unsolicited). ‘Incidental findings’ are commonly defined as individually relevant observations generated through research, but lying outwith the aims of the study.Footnote 2 Traditionally, feedback of incidental findings has been presented as more problematic than that of ‘intended findings’ (those the study set out to investigate). However, the cogency of this distinction is increasingly questioned, to the extent that many academic discussions and guidance documents have largely abandoned it.Footnote 3 There are several reasons for this, including difficulties in drawing a bright line between the categories in many kinds of studies, especially those that are open-ended rather than hypothesis-driven.Footnote 4 The relevance of researchers’ intentions to the ethics of disclosure is also questioned.Footnote 5 For these reasons, this chapter will address the ethical issues raised by the return of individually relevant research results, irrespective of whether they were intended.

The foundational question of whether findings should be fed-back – or feedback offered as an option – is informed by the question of why they should. This may be approached by examining the extent of researchers’ legal and ethical responsibilities to participants – as shaped by their professional identities and legal obligations – the strength of participants’ legitimate interests in receiving feedback, or researchers’ responsibilities towards the research endeavour. The last of these includes consideration of how disclosure efforts might impact on wider public interests in the use of research resources and generation of valuable generalisable scientific knowledge, and public trust in research. These considerations then provide parameters for addressing questions of which kinds of findings may be fed-back and under what circumstances. For example, which benefits to participants would justify the resources required for feedback? Finally, there are questions of how, including how researchers should plan and manage the pathway from anticipating the generation of such findings to decisions and practices around disclosure.

In the past two decades, a wealth of academic commentaries and consensus statements has been published, alongside guidance by research funding bodies and professional organisations, making recommendations about approaches to disclosure of research findings.Footnote 6 Some are prescriptive, specifying the characteristics of findings that ought to be disclosed, while others provide process-focused guidance on the key considerations for ethically, legally and practically robust disclosure policies. It is not possible here to give a comprehensive overview of all the permutations of responses to the four questions above. However, some prominent and common themes can be extracted.

Most strikingly, in contrast to the early days of this debate, it is rare now to encounter the bald question of whether research findings should ever be returned. Rather the key concerns are what should be offered and how.Footnote 7 The resource implications of identifying, validating and communicating findings are still acknowledged, but these are seen as feeding into an overall risk/benefit analysis rather than automatically implying non-disclosure. In parallel with this shift, there is less scepticism about researchers’ general disclosure responsibilities. In the UK, researchers are not subject to a specific legal duty to return findings.Footnote 8 Nevertheless, there does appear to be a growing consensus that researchers do have ethical responsibilities to offer findings – albeit limited and conditional ones.Footnote 9 The justifications offered for these responsibilities vary widely, however, and indeed are not always made explicit. This chapter will propose grounds for such responsibilities.

When it comes to determining what kinds of findings should be offered, three jointly necessary criteria are evident across much published guidance. These are captured pithily by Lisa Eckstein et al. as ‘volition, validity and value’.Footnote 10 Requirements for analytic and clinical validity entail that the finding reliably measures and reports what it purports to. Value refers to usefulness or benefit to the (potential) recipient. In most guidance this is construed narrowly in terms of the information’s clinical utility – understood as actionability and sometimes further circumscribed by the seriousness of the condition indicated.Footnote 11 Utility for reproductive decision-making is sometimes included.Footnote 12 Although some commentators suggest that ‘value’ could extend to the non-clinical, subjectively determined ‘personal utility’ of findings, it is generally judged that this alone would be insufficient to justify disclosure costs.Footnote 13 The third necessary condition is that the participant should have agreed voluntarily to receive the finding, having been advised at the time of consenting to participate about the kinds of findings that could arise and having had the opportunity to assent to or decline feedback.Footnote 14

Accompanying this greater emphasis on the ‘which’ and ‘how’ questions is an increasing focus upon the need for researchers to establish clear policies for disclosing findings that are explained in informed consent procedures, and an accompanying strategy for anticipating, identifying, validating, interpreting, recording, flagging-up and feeding-back findings in ways that maximise benefits and minimise harms.Footnote 15 Broad agreement among scholars and professional bodies that – in the absence of strong countervailing reasons – there is an ethical responsibility to disclose clinically actionable findings is not, however, necessarily reflected in practice, where studies may still lack disclosure policies, or have policies of non-disclosure.Footnote 16

Below I shall advance the claim that, despite a greater emphasis upon, and normalisation of, feedback of findings, there are still gaps, which mean that feedback policies may not be as widely instituted or appropriately directed as they should be. Chief among these gaps are, first, a continued focus on researchers’ inherent responsibilities considered separately from participants’ interests in receiving findings and, second, a narrow conception of when these interests are engaged. These gaps become particularly apparent when we attend to the ways in which the roles of researchers and participants and relationships between them have shifted in a changing health research landscape. In the following sections, I will first highlight the nature of these changes, before proposing what these mean for participants’ experiences, expectations and informational interests and, thus, for ethically robust feedback policies and practices.

23.3 The Changing Health Research Landscape

The landscape of health research is changing. Here I identify three facets of these changes and consider how these could – and indeed should – have an effect on the practical and ethical basis of policies and practices relating to the return of research findings.

The first of these developments is a move towards ‘learning healthcare’ systems and translational science, in which the transitions between research and care are fluid and cyclical, and the lines between patient and participant are often blurred.Footnote 17 The second is greater technical capacity, and appetite, for data-driven research, including secondary research uses of data and tissues – sourced from patient records, prior studies, or biobanks – and linkage between different datasets. This is exemplified by the growth in large-scale and high-profile genomic studies such as the UK’s ‘100,000 Genomes’ project.Footnote 18 The third development is increasing research uses of technologies and methodologies, such as functional neuroimaging, genome-wide association studies, and machine learning, which lend themselves to open-ended, exploratory inquiries rather than hypothesis-driven ones.Footnote 19 I wish to suggest that these three developments have a bearing on disclosure responsibilities in three key respects: erosion of the distinction between research and care; generation of findings with unpredictable or ambiguous validity and value; and a decreasing proximity between researchers and participants. I will consider each of these in turn.

Much of the debate about disclosure of findings has, until recently, been premised on there being a clear distinction between research and care, and what this entails in terms of divergent professional priorities and responsibilities, and the experiences and expectations of patients and participants. Whereas it has been assumed that clinicians’ professional duty of care requires disclosure of – at least – clinically actionable findings, researchers are often seen as being subject to a contrary duty to refrain from feedback if this would encourage ‘therapeutic misconceptions’, or divert focus and resources from the research endeavour.Footnote 20 However, as health research increasingly shades into ‘learning healthcare’, these distinctions become ever less tenable.Footnote 21 It is harder to insist that responsibilities to protect information subjects’ interests do not extend to those engaged in research, or that participants’ expectations of receiving findings are misconceived. Furthermore, if professional norms shift towards more frequent disclosure, so the possibility that healthcare professionals may be found negligent for failing to disclose becomes greater.Footnote 22 These changes may well herald more open feedback policies in a wider range of studies. However, if these policies are premised solely on the duty of care owed in healthcare contexts to participants-as-patients, then the risk is that any expansion will fail to respond adequately to the very reasons why findings should be offered at all – to protect participants’ core interests.

Another consequence of the shifting research landscape, and the growth of data-driven research in particular, lies in the nature of the findings generated. For example, many results from genomic analysis or neuroimaging studies are probabilistic rather than strongly predictive, and produce information of varying quality and utility.Footnote 23 And open-ended and exploratory studies pose challenges precisely because what they might find – and thus its significance to participants – is unpredictable and, especially in new fields of research, may be less readily validated. These characteristics are of ethical significance because they present obstacles to meeting the requirements (noted above) for securing validity and value and for ascertaining what participants wish to receive. And where validity and value are uncertain, robust analysis of the relative risks and benefits of disclosure is not possible. Given these challenges, it is apparent that meeting participants’ informational interests will require more than just instituting clear disclosure policies. Instead, more flexible and discursive disclosure practices may be needed to manage unanticipated or ambiguous findings.

Increasingly, health research is conducted using data or tissues that were collected for earlier studies, or sourced from biobanks or patient records.Footnote 24 In these contexts, in contrast to the closer relationships entailed by translational studies, researchers may be geographically, temporally and personally far removed from the participants. This poses a different set of challenges when determining responsibilities for disclosing research findings. First, it may be harder to argue that researchers working with pre-existing data collections hold a duty of care to participants, especially one analogous to that of a healthcare professional. Second, there is the question of who is responsible for disclosure: is it those who originally collected the materials, those who manage the resource, or those who generate the findings? Third, if consent is only sought when the data or tissues are originally collected, it is implausible that a one-off procedure could address in detail all future research uses, let alone the characteristics of all future findings.Footnote 25 And finally, in these circumstances, disclosure may be more resource-intensive where, for example, much time has elapsed or datasets have been anonymised. These observations underscore the problems of thinking of ‘health research’ as a homogenous category in which the respective roles and expectations of researchers and participants are uniform and easily characterised, and ethical responsibilities attach rigidly to professional identities.

Finally, it is also instructive to attend to shifts in wider cultural and legal norms surrounding our relationships to information about ourselves and the increasing emphasis on informational autonomy, particularly with respect to accessing and controlling information about our health or genetic relationships. There is increased legal protection of informational interests beyond clinical actionability, including the interest in developing one’s identity, and in reproductive decision-making.Footnote 26 For example, European human rights law has recognised the right of access to one’s health records and the right to know one’s genetic origins as aspects of the Article 8 right to respect for private life.Footnote 27 And in the UK, the legal standard for information provision by healthcare professionals has shifted from one determined by professional judgement to that which a reasonable patient would wish to know.Footnote 28

When taken together, the factors considered in this section provide persuasive grounds for looking beyond professional identities, clinical utility and one-off consent and information transactions when seeking to achieve ethically defensible feedback of research findings. In the next section, I will present an argument for grounding ethical policies and practices upon the research participants’ informational interests.

23.4 Re-focusing on Participants’ Interests

What emerges from the picture above is that the respective identities and expectations of researchers and participants are changing, and with them the relationships and interdependencies between them. Some of these changes render research relationships more intimate, akin to clinical care, while others make them more remote. And the roles that each party fulfils, or is expected to fulfil, may be ambiguous. This lack of clarity presents obstacles to relying on prior distinctions and definitions and raises questions about the continued legitimacy of some existing guiding principles.Footnote 29 Specifically, it disrupts the foundations upon which disclosure of individually relevant results might be premised. In this landscape, it is no longer possible or appropriate – if indeed it ever was – simply to infer what ethical feedback practice would entail from whether or not an actor is categorised as ‘a researcher’. This is due not only to ambiguity about the scope of this role and associated responsibilities. It also looks increasingly unjustifiable to give only secondary attention to the nature and specificity of participants’ interests: to treat these as if they are a homogenous group of narrowly health-related priorities that may be honoured, provided doing so does not get in the way of the goal of generating generalisable scientific knowledge. There is a need to revisit the nature and balance of the private and public interests at stake. My proposal here is that participants’ informational interests, and researchers’ particular capacities to protect these interests, should comprise the heart of ethical feedback practices.

There are several reasons why it seems appropriate – particularly now – to place participants’ interests at the centre of decision-making about disclosure. First, participants’ roles in research are no less in flux than researchers’. While it may be true that the inherent value of any findings to participants – whether they might wish to receive them and whether the information would be beneficial or detrimental to their health, well-being, or wider interests – may not be dramatically altered by emerging research practices, their motivations, experiences and expectations of taking part may well be different. In the landscape sketched above, it is increasingly appropriate to think of participants less as passive subjects of investigation and more as partners in the research relationship.Footnote 30 This is a partnership grounded in the contributions that participants make to a study and in the risks and vulnerabilities incurred when they agree to take part. The role of participant-as-partner is underscored by the rise of the idea that there is an ethical ‘duty to participate’.Footnote 31 This idea has escaped the confines of academic argument. Implications of such a duty are evident in public discourse concerning biobanks and projects such as 100,000 Genomes. For example, referring to that project, the (then) Chief Medical Officer for England has said that to achieve ‘the genomic dream’, we should ‘agree to use of data for our own benefit and others’.Footnote 32 A further compelling reason for placing the interests of participants at the centre of return policies is that doing so is essential to building confidence and demonstrating trustworthiness in research.Footnote 33 Without this trust there would be no participants and no research.

In light of each of these considerations, it is difficult to justify the informational benefits of research accruing solely to the project aims and the production of generalisable knowledge, without participants’ own core informational interests inviting corresponding respect. That is, respect that reflects the nature of the joint research endeavour and the particular kinds of exposure and vulnerabilities participants incur.

If demonstrating respect were simply a matter of reciprocal recognition of participants’ contributions to knowledge production, then it could perhaps be achieved by means other than feedback. However, research findings occupy a particular position in the vulnerabilities, dependencies and responsibilities of the researcher–participant relationship. Franklin Miller and others argue that researchers have responsibilities to disclose findings that arise from a particular pro tanto ethical responsibility to help others and protect their interests within certain kinds of professional relationships.Footnote 34 These authors hold that this responsibility arises because, in their professional roles, researchers have both privileged access to private aspects of participants’ lives, and particular opportunities and skills for generating information of potential significance and value to participants to which they would not otherwise have access.Footnote 35 I would add to this that being denied the opportunity to obtain otherwise inaccessible information about oneself not only fails to protect participants from avoidable harms, it also fails to respect and benefit them in ways that recognise the benefits they bring to the project and the vulnerabilities they may incur, and the trust they invest, when doing so.

None of what I have said seeks to suggest that research findings should be offered without restriction, or at any cost. The criteria of ‘validity, value and volition’ continue to provide vital filters in ensuring that information meets recipients’ interests at all. However, provided these three conditions are met, investment of research resources in identifying, validating, offering and communicating individually relevant findings may be ethically justified, even required, when receiving them could meet non-trivial informational interests. One question that this leaves unanswered, of course, is what counts as an interest of this kind.

23.5 A Wider Conception of Value: Research Findings as Narrative Tools

If responsibilities for feedback are premised on the value of particular information to participants, it seems arbitrary to confine this value solely to clinical actionability, unless health-related interests are invariably more critical than all others. It is not at all obvious that this is so. This section provides a rationale for recognising at least one kind of value beyond clinical utility.Footnote 36

It is suggested here that where research findings support a participant’s abilities to develop and inhabit their own sense of who they are, significant interests in receiving these findings will be engaged. The kinds of findings that could perform this kind of function might include, for example, those that provide diagnoses that explain longstanding symptoms – even where there is no effective intervention – susceptibility estimates that instigate patient activism, or indications of carrier status or genetic relatedness that allow someone to (re)assess or understand their relationships and connections to others.

The claim to value posited here goes beyond appeals to ‘personal utility’, as commonly characterised in terms of curiosity, or some unspecified, subjective value. It is unsurprising that, thus construed, personal utility is rarely judged to engage sufficiently significant interests to warrant the effort and resources of disclosing findings.Footnote 37 However, the claim here – which I have more fully discussed elsewhereFootnote 38 – is that information about the states, dispositions and functions of our bodies and minds, and our relationships to others (and others’ bodies) – such as that conveyed by health research findings – is of value to us when, and to the extent that, it provides constitutive and interpretive tools that help us to develop our own narratives about who we are – narratives that constitute our identities.Footnote 39 Specifically, this value lies not in contributing to just any identity-narrative, but one that makes sense when confronted by our embodied and relational experiences and supports us in navigating and interpreting these experiences.Footnote 40 These experiences include those of research participation itself. A coherent, ‘inhabitable’ self-narrative is of ethical significance, because such a narrative is not just something we passively and inevitably acquire. Rather, it is something we develop and maintain, which provides the practical foundations for our self-understanding, interpretive perspective and values, and thus our autonomous agency, projects and relationships.Footnote 41 If we do indeed have a significant interest in developing and maintaining such a narrative, and some findings generated in health research can support us in doing so, then my claim is that these findings may be at least as valuable to us as those that are clinically actionable. As such, our critical interests in receiving them should be recognised in feedback policies and practices.

In response to concern that this proposal constitutes an unprecedented incursion of identity-related interests into the (public) values informing governance of health research, it is noted that the very act of participating in research is already intimately connected to participants’ conceptions of who they are and what they value, as illustrated by choices to participate motivated by family histories of illness,Footnote 42 or objections to tissues or data being used for commercial research.Footnote 43 Participation already impacts upon the self-understandings of those who choose to contribute. Indeed, it may often be seen as contributing to the narratives that comprise their identities. Seen in this light, it is not only appropriate, but vital, that the identity-constituting nature of research participation is reflected in the responsibilities that researchers – and the wider research endeavour – owe to participants.

23.6 Revisiting Ethical Responsibilities for Feeding Back Findings

What would it mean for the responsibilities of researchers and others to refocus ethical feedback of research findings so as to encompass the kinds of identity-related interests described above? I submit that it entails responsibilities both to look beyond clinical utility in anticipating when findings could contribute to participants’ self-narratives, and to act as an interpretive partner when discharging responsibilities for offering and communicating findings.

It must be granted that the question of when identity-related interests are engaged by particular findings is a more idiosyncratic matter than clinical utility. This serves to underscore the requirement that any disclosure of findings is voluntary. And while this widening of the conception of ‘value’ is in concert with increasing emphasis on individually determined informational value in healthcare – as noted above – it is not a defence of unfettered informational autonomy, requiring the disclosure of whatever participants might wish to see. In order for research findings to serve the wider interests described above, they must still constitute meaningful and reliable biomedical information. There is no value without validity.Footnote 44

These two factors signal that the ethical responsibilities of researchers will not be discharged simply by disclosing findings. There is a critical interpretive role to be fulfilled at several junctures, if participants’ interests are to be protected. These include: anticipating which findings could impact on participants’ health, self-conceptions or capacities to navigate their lives; equipping participants to understand at the outset whether findings of these kinds might arise; and, if participants choose to receive these findings, ensuring that these are communicated in a manner that is likely to minimise distress, and enhance understanding of the capacities and limitations of the information in providing reliable explanations, knowledge or predictions about their health and their embodied states and relationships. This places the researcher in the role of ‘interpretive partner’, supporting participants to make sense of the findings they receive and to accommodate – or disregard – them in conducting their lives and developing their identities.

This role of interpretive partner represents a significant extension of responsibilities from an earlier era in which a requirement to report even clinically significant findings was questioned. The question then arises as to who will be best placed to fulfil this role. As noted above, dilemmas about who should disclose arise most often in relation to secondary research uses of data.Footnote 45 These debates err, however, when they treat this as a question focused on professional and institutional duties abstracted from participants’ interests. When we attend to these interests, the answer that presents itself is that feedback should be provided by whoever is best placed to recognise and explain the potential significance of the findings to participants. And it may in some cases be that those best placed to do this are not researchers at all, but professionals performing a role analogous to genetic counsellors.

Even though the triple threshold conditions for disclosure – validity, value and volition – still apply, any widening of the definition of value implies a larger category of findings to be validated, offered and communicated. This will have resource implications. And – as with any approach to determining which findings should be fed-back and how – the benefits of doing so must still be weighed against any resultant jeopardy to the socially valuable ends of research. However, if we are not simply paying lip-service to, but taking seriously, the idea that participants are partners in, not merely passive objects of, research, then protecting their interests – particularly those incurred through participation – is not supererogatory, but an intrinsic part of recognising their contribution to biomedical science, their vulnerability, trust and experiences of contributing. Limiting these interests to the receipt of clinically actionable findings is arbitrary and out of step with wider ethico-legal developments in the health sphere. That these findings arise in the context of health research is not, on its own, sufficient reason for interpreting ‘value’ solely in clinical terms.

23.7 Conclusion

In this chapter, I have argued that there are two shortcomings in current ethical debates and guidance regarding policies and practices for feeding back individually relevant findings from health research. These are, first, a focus on the responsibilities of actors for disclosure that remains insufficiently grounded in the essential questions of when and how disclosure would meet core interests of participants; and, second, a narrow interpretation of these interests in terms of clinical actionability. Specifically, I have argued that participants have critical interests in accessing research findings where these offer valuable tools of narrative self-constitution. These shortcomings have been brought to light particularly by changes in the nature of health research, and addressing them becomes ever more important as the role of participants evolves from that of objects of research to that of active members of shared endeavours. I have proposed that in this new health research landscape, there are strong grounds not only for widening feedback to include potentially identity-significant findings, but also for recognising the valuable role of researchers and others as interpretive partners in the relational processes of anticipating, offering and disclosing findings.

24 Health Research and Privacy through the Lens of Public Interest: A Monocle for the Myopic?

Mark Taylor and Tess Whitton
24.1 Introduction

Privacy and public interest are reciprocal concepts, mutually implicated in each other’s protection. This chapter considers how viewing the concept of privacy through a public interest lens can reveal the limitations of the narrow conception of privacy currently inherent to much health research regulation (HRR). Moreover, it reveals how the public interest test, applied in that same regulation, might mitigate risks associated with a narrow conception of privacy.

The central contention of this chapter is that viewing privacy through the lens of public interest allows the law to bring into focus more things of common interest than privacy law currently recognises. We are not the first to recognise that members of society share a common interest in both privacy and health research. Nor are we the first to suggest that the public is not necessarily in opposition to the private, with public interests capable of accommodating private ones and vice versa.Footnote 1 What is novel about our argument is the suggestion that we might invoke public interest requirements in current HRR to protect group privacy interests that might otherwise remain out of sight.

It is important that HRR takes this opportunity to correct its vision. A failure to do so will leave HRR unable to take into consideration research implications with profound consequences for future society, and will undermine its legitimacy. It is no exaggeration to say that the value of a confidential healthcare system may come to depend on whether HRR acknowledges the significance of group data to the public interest. It is group data that shape health policies, evaluate their success, and determine the healthcare opportunities offered to members of particular groups. Individual opportunity, and entitlement, is dependent upon group classification.

The argument here is three-fold: (1) a failure to take common interests into account when making public interest decisions undermines the legitimacy of the decision-making process; (2) a common interest in privacy extends to include group interests; (3) the law’s current myopia regarding group privacy interests in data protection law and the law of confidence can be corrected, to a varying extent, through bringing group privacy interests into view through the lens of public interest.

24.2 Common Interests, Public Interest and Legitimacy

In this section, we seek to demonstrate how a failure to take the full range of common (group) interests into account when making public interest decisions will undermine the legitimacy of those decisions.

When Held described broad categories into which different theories of public interest might be understood to fall, she listed three: preponderance or aggregative theories, unitary theories and common interest theories.Footnote 2 When Sorauf earlier composed his own list, he combined common interests with values and gave the category the title ‘commonly-held value’.Footnote 3 We have separately argued that a compelling conception of public interest may be formed by uniting elements of ‘common interest’ and ‘common value’ theories of public interest.Footnote 4 It is, we suggest, through combining facets of these two approaches that one can overcome the limitations inherent to each. Here we briefly recap this argument before seeking to build upon it.

Fundamental to common interest theories of the public interest is the idea that something may serve ‘the ends of the whole public rather than those of some sector of the public’.Footnote 5 If one accepts the idea that there may be a common interest in privacy protection, as well as in the products of health research, then ‘common interest theory’ brings both privacy and health research within the scope of public interest consideration. However, it cannot explain how – in the case of any conflict – they ought to be traded off against each other – or against other common interests – to determine the public interest in a specific scenario.

In contrast to common interest theories, commonly held value theories claim that the ‘public interest emerges as a set of fundamental values in society’.Footnote 6 If one accepts that a modern liberal democracy places a fundamental value upon all members of society being respected as free and equal citizens, then any interference with individual rights should be defensible in terms that those affected can both access and have reason to endorseFootnote 7 – with discussion subject to the principles of public reasoning.Footnote 8 Such a commitment is enough to fashion a normative yardstick, capable of driving a public interest determination. However, the object of measurement remains underspecified.

It is through combining aspects of common interest and common value approaches that a practical conception of the public interest begins to emerge: any trade-off between common interests ought to be defensible in terms of common value – for reasons that those affected by a decision can both access and have reason to endorse.Footnote 9

An advantage of this hybrid conception of public interest is its connection with (social) legitimacy.Footnote 10 If a decision-maker fails to take into account the full range of interests at stake, then they undermine not only any public interest claim but also the legitimacy of the decision-making process underpinning it.Footnote 11 Of course, this does not imply that the legitimacy of a system depends upon everyone perceiving the ‘public interest’ to align with their own contingent individual or common interests. Public-interest decision-making should, however, ensure that when the interests of others displace any individual’s interests, including those held in common, it is (ideally) transparent why this has happened and (again, ideally) that the reasons for displacement are acceptable as ‘good reasons’ to the individual.Footnote 12 If the displaced interest is more commonly held, it is even more important for a system practically concerned with maintaining legitimacy to account transparently for that interest within its decision-making process.

Any failure to account transparently for common interests will undermine the legitimacy of the decision-making process.

24.3 Common Interests in (Group) Privacy

In this section, the key claim is that a common interest in privacy extends beyond a narrow atomistic conception of privacy to include group interests.

We are aware of no ‘real definition’ of privacy.Footnote 13 There are, however, many stipulative or descriptive definitions, contingent upon use of the term within particular cultural contexts. Here we operate with the idea that privacy might be conceived in the legal context as representing ‘norms of exclusivity’ within a society: the normative expectation that some states of information separation are, by default, to be maintained.Footnote 14 This is a broad conception of privacy extending beyond the atomistic one that Bennett and Raab observe to be the prevailing privacy paradigm in many Western societies.Footnote 15 It is not necessary to defend a broad conception of privacy in order to recognise a common interest in privacy protection. It is, however, necessary to broaden the conception in order to bring all of the possible common interests in privacy into view. As Bennett and Raab note, the atomistic conception of privacy

fails to properly understand the construction, value and function of privacy within society.Footnote 16

Our ambition here is not to demonstrate an atomistic conception to be ‘wrong’ in any objective or absolute sense, but rather to recognise the possibility that a coherent conception of privacy may extend its reach and capture additional values and functions. In 1977, after a comprehensive survey of the literature available at the time, Margulis proposed the following consensus definition of privacy:

[P]rivacy, as a whole or in part, represents control over transactions between person(s) and other(s), the ultimate aim of which is to enhance autonomy and/or to minimize vulnerability.Footnote 17

Nearly thirty years after the definition was first offered, Margulis recognised that his early attempt at a consensus definition

failed to note that, in the privacy literature, control over transactions usually entailed limits on or regulation of access to self (Allen, 1998), sometimes to groups (e.g., Altman, 1975), and occasionally to larger collectives such as organisations (e.g., Westin, 1967).Footnote 18

The adjustment is important. It allows a conception of privacy to recognise that there may be relevant norms, in relation to transactions involving data, that do not relate to identifiable individuals but are nonetheless associated with normative expectations of data flows and separation. Not only is there evidence that such expectations already exist in relation to non-identifiable data,Footnote 19 but data relating to groups – rather than just individuals – will be of increasing importance.Footnote 20

There are myriad examples of how aggregated data have led to differential treatment of individuals due to association with group characteristics.Footnote 21 Beyond the obvious examples of individual discrimination and stigmatisation due to inferences drawn from (perceived) group membership, there can be group harm(s) to collective interests including, for example, harm connected to things held to be of common cultural value and significance.Footnote 22 It is the fact that data relates to the group level that leaves cultural values vulnerable to misuse of the data.Footnote 23 This goes beyond a recognition that privacy may serve ‘not just individual interests but also common, public, and collective purposes’.Footnote 24 It is recognition that it is not only individual privacy but group privacy norms that may serve these common purposes. In fact, group data, and the norms of exclusivity associated with it, are likely to be of increasing significance for society. As Taylor, Floridi and van der Sloot note,

with big data analyses, the particular and the individual is no longer central. … Data is analysed on the basis of patterns and group profiles; the results are often used for general policies and applied on a large scale.Footnote 25

This challenges the adequacy of a narrow atomistic conception of privacy to account for what will increasingly matter to society. De-identification of an individual does not protect against harms that attach to their membership of a group, including groups that may be created through the research and may not otherwise exist.Footnote 26 In the next part, we suggest that not only can the concept of the public interest be used to bring the full range of privacy interests into view, but that a failure to do so will undermine the legitimacy of any public interest decision-making process.

24.4 Group Privacy Interests and the Law

The argument in this section is that, although HRR does not currently recognise the concept of group privacy interests, through the concept of public interest inherent to both the law of data protection and the duty of confidence, there is opportunity to bring group privacy interests into view.

24.4.1 Data Protection Law

The Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (hereafter, Treaty 108) (as amended)Footnote 27 cast the template for subsequent data protection law when it placed the individual at the centre of its object and purposeFootnote 28 and defined ‘personal data’ as:

any information relating to an identified or identifiable individual (‘data subject’)Footnote 29

This definition narrows the scope of data protection law to something even narrower than data relating to an individual: data relating to unidentified or unidentifiable individuals fall outside its concern. This blinkered view is replicated in data protection instruments from the first through to the most recent: the EU General Data Protection Regulation (GDPR).

The GDPR is only concerned with personal data, defined in a narrow fashion substantively similar to that of Treaty 108. In so far as its object is privacy protection, it is predicated upon a relatively narrow and atomistic conception of privacy. However, if the concerns associated with group privacy are viewed through the lens of public interest, then they may be given definition and traction even within the scope of a data protection instrument like the GDPR. The term ‘the public interest’ appears in the GDPR no fewer than seventy times. It has a particular significance in the context of health research. This is an area, like criminal investigation, in which the public interest has always been protected.

Our argument is that it is through the application of the public interest test to health research governance in data protection law that there is an opportunity to recognise, in part, common interests in group privacy. For example, any processing of personal data within the material and territorial scope of the GDPR requires a lawful basis. The legal bases most likely to be applicable to the processing of personal data for research purposes are that the processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller (Article 6(1)(e)), or that it is necessary for the purposes of the legitimate interests pursued by the controller (Article 6(1)(f)). In the United Kingdom (UK), where universities are considered to be public authorities, they are unable to rely upon ‘legitimate interests’ as a basis for lawful processing. Much health research in the UK will thus be carried out on the basis that it is necessary for the performance of a task in the public interest. Official guidance issued in the UK is that organisations relying upon the necessity of processing to carry out a task ‘in the public interest’

should document their justification for this, by reference to their public research purpose as established by statute or University Charter.Footnote 30

Mere assertion that a particular processing operation is consistent with an organisation’s public research purpose will provide relatively scant assurance that the operation is necessary for the performance of a task in the public interest. A more substantial approach would document justification relevant to the particular processing operation. Where research proposals are considered by institutional review boards, such as university or NHS ethics committees, independent consideration by such bodies of the public interest in the processing operation would provide that rationale. We suggest this provides an opportunity for group privacy concerns to be drawn into consideration. They might also form part of any privacy impact assessment carried out by the organisation. What is more, for the sake of legitimacy, any interference with group interests, or risk of harm to members of a group or to the collective interests of the group as a whole, should be subject to the test that members of the group be offered reasons to accept the processing as appropriate.Footnote 31 Such a requirement might support good practice in consumer engagement prior to the roll-out of major data initiatives.

Admittedly, while this may provide opportunity to bring group privacy concerns into consideration where processing is carried out by a public authority (and the legal basis of processing is performance of a task carried out in the public interest), this only provides limited penetration of group privacy concerns into the regulatory framework. It would not, for example, apply where processing was in pursuit of legitimate interests or another lawful basis. There are other limited opportunities to bring group privacy concerns into the field of vision of data protection law through the lens of public interest.Footnote 32 However, for as long as the gravitational orbit of the law is around the concept of ‘personal data’, the chances to recognise group privacy interests are likely to be limited and peripheral. By contrast, more fundamental reform may be possible in the law of confidence.

24.4.2 Duty of Confidence

As with data protection and privacy,Footnote 33 there is an important distinction to be made between privacy and confidentiality. However, the UK has successfully defended its ability to protect the right to respect for private and family life, as recognised by Article 8 of the European Convention on Human Rights (ECHR), by pointing to the possibility of an action for breach of confidence.Footnote 34 It has long been recognised that the law’s protection of confidence is grounded in the public interest,Footnote 35 but, as Lord Justice Briggs noted in R (W, X, Y and Z) v. Secretary of State for Health (2015),

the common law right to privacy and confidentiality is not absolute. English common law recognises the need for a balancing between this right and other competing rights and interests.Footnote 36

The argument put forward here is consistent with the idea that the protection of privacy and other competing rights and interests, such as those associated with health research, are each in the public interest. The argument here is that when considering the appropriate balance or trade-off between different aspects of the public interest, then a broader view of privacy protection than has hitherto been taken by English law is necessary to protect the legitimacy of decision-making. Such judicial innovation is possible.

The law of confidence has already evolved considerably over the past twenty or so years. Since the Human Rights Act 1998Footnote 37 came into force in 2000, the development of the common law has been in harmony with Articles 8 and 10 of the ECHR.Footnote 38 As a result, as Lord Hoffmann put it,

What human rights law has done is to identify private information as something worth protecting as an aspect of human autonomy and dignity.Footnote 39

Protecting private information as an aspect of individual human autonomy and dignity might signal a shift toward the kind of narrow and atomistic conception of privacy associated with data protection law. This would be as unnecessary as it would be unfortunate. In relation to the idea of privacy, the European Court of Human Rights has itself said that

The Court does not consider it possible or necessary to attempt an exhaustive definition of the notion of ‘private life’ … Respect for private life must also comprise to a certain degree the right to establish and develop relationships with other human beings.Footnote 40

It remains open to the courts to recognise that the implications of group privacy concerns have a bearing on an individual’s ability to establish and develop relations with other human beings. Respect for human autonomy and dignity may yet serve as a springboard toward a recognition by the law of confidence that data processing impacts upon the conditions under which we live social (not atomistic) lives and our ability to establish and develop relationships as members of groups. After all, human rights are due to members of a group and their protection has always been motivated by group concerns.Footnote 41

One of us has argued elsewhere that English law took a wrong turn when R (Source Informatics) v. Department of HealthFootnote 42 was taken to be authority for the proposition that a duty of confidence cannot be breached through the disclosure of non-identifiable data. The ratio in Source Informatics may yet be re-interpreted and recognised to be consistent with a claim that legal duties may be engaged through use and disclosure of non-identifiable data.Footnote 43 In some ways, this would simply be to return to the roots of the legal protection of privacy. In her book The Right to Privacy, Megan Richardson traces the origins and influence of the ideas underpinning the legal right to privacy. As she remarks, ‘the right from the beginning has been drawn on to serve the rights and interests of minority groups’.Footnote 44 Richardson recognises that, even in those cases where an individual was the putative focus of any action or argument,

Once we start to delve deeper, we often discover a subterranean network of families, friends and other associates whose interests and concerns were inexorably tied up with those of the main protagonist.Footnote 45

As a result, it has always been the case that the right to privacy has ‘broader social and cultural dimensions, serving the rights and interests of groups, communities and potentially even the public at large’.Footnote 46 It would be a shame if, at a time when we may need it most, the duty of confidence were to deny its own potential to protect reasonable expectations in the use and disclosure of information simply because misuse had the potential to impact more than one identifiable individual.

24.5 Conclusion

The argument has been founded on the claim that a commitment to the protection of common interests in privacy and the product of health research, if placed alongside the commonly held value in individuals as free and equal persons, may establish a platform upon which one can construct a substantive idea of the public interest. If correct, then it is important to a proper calculation of the public interest to understand the breadth of privacy interests that need to be accounted for if we are to avoid subjugating the public to governance, and a trade-off between competing interests, that they have no reason to accept.

Enabling access to the data necessary for health research is in the public interest. So is the protection of group privacy. Recognising this point of connection can help guide decision-making where there is some kind of conflict or tension. The public interest can provide a common, commensurable framing. When this framing has a normative dimension, it grounds the claim that the full range of common interests ought to be brought into view and weighed in the balance. One must capture all interests valued by the affected public, whether individual or common in nature, to offer them a reason to accept a particular trade-off between privacy and the public interest in health research. To do otherwise is to get the balance of governance wrong and compromise its social legitimacy.

That full range of common interests must include interests in group data. An understanding of what the public interest requires in a particular situation is short-sighted if these interests are not brought into view. An implication is that group interests must be taken into account within any interpretation and application of public interest in data protection law. Data controllers should be accountable for addressing group privacy interests in any public interest claim. With respect to the law of confidence, there is scope for even more significant reform. If the legitimacy of the governance framework applicable to health data is to be assured into the future, then it needs to be able to see – so that it might protect – reasonable expectations in data relating to groups of persons and not just identifiable individuals. Anything else will be a myopic failure to protect some of the most sensitive data about people simply on the grounds that misuse does not affect a sole individual but multiple individuals simultaneously. That is not a governance model that we have any reason to accept, and we have the concept of public interest at our disposal to correct our vision and bring the full range of relevant interests into view.

25 Mobilising Public Expertise in Health Research Regulation

Michael M. Burgess
25.1 Introduction

This chapter will develop the role that deliberative public engagement should have in health research regulation. The goal of public deliberation is to mobilise the expertise that members of the public have, to explore their values in relation to specific trade-offs, with the objective of producing recommendations that respect diverse interests. Public deliberation requires that a small group be invited to a structured event that supports informed, civic-minded consideration of diverse perspectives on the public interest. Ensuring that the perspectives considered include those that might otherwise be marginalised or silenced requires explicitly designing the small group in relation to the topic. Incorporating public expertise enhances the trustworthiness of policies and governance by explicitly acknowledging and negotiating diverse public interests. Trustworthiness is distinct from trust, so the chapter begins by exploring that distinction in the context of the example of care.data and the loss of trust in the English National Health Service’s (NHS) use of electronic health records for research. While better public engagement prior to the announcement might have avoided the loss of trust, subsequent deliberative public engagement may build trustworthiness into the governance of health research endeavours and contribute to re-establishing trust.

25.2 Trustworthiness of Research Governance

Some activities pull at the loose threads of social trust. These events threaten to undermine the presumption of legitimacy that underlies activities directed to the public interest. The English NHS care.data programme is one cautionary tale (NHS, 2013).Footnote 1 The trigger event was the distribution of a pamphlet to households. The pamphlet informed the public that a national database of patients’ medical records would be used for patient care and to monitor outcomes, and that research would take place on anonymous datasets. The announcement was entirely legitimate within the existing regulatory regime. The Editor-in-Chief of the British Medical Journal summarises: ‘But all did not go to plan. NHS England’s care.data programme failed to win the public’s trust and lost the battle for doctors’ support. Two reports have now condemned the scheme, and last week the government decided to scrap it.’Footnote 2

The stimulation of public distrust often begins with the political mobilisation of a segment of the public, but it may lead to a wider rejection of previously non-controversial trade-offs. In the case of care.data, the first response was to ensure better education about benefits and enhanced informed consent. The Caldicott Report on the care.data programme called for better technology standards, publication of disclosure procedures, an easy opt-out procedure and a ‘dynamic consent’ process.Footnote 3

There are good reasons to doubt that improved regulation and informed consent procedures alone will restore the loss, or sustain current levels, of public trust. It is unlikely that the negative reaction to care.data had to do with an assessment of the adequacy of the regulations for privacy and access to health data. Moreover, everything proposed under care.data was perfectly lawful. It is far more likely that the reaction had to do with a rejection of what was presented as a clear case of justified use of patients’ medical records. The perception was that the trade-offs were not legitimate, at least to some of the public and practitioners. The destabilisation of trust that patient information was being used in appropriate ways, even by what should have been an innocuous articulation of practice, suggests a shift in how the balance between access to information and privacy is perceived. Regulatory experts on privacy and informed consent may strengthen protection or recalibrate what is protected. But such measures do not develop an understanding of, and response to, how a wider public might assign proportionate weight to privacy and access in issues related to research regulation. Social controversy about the relative weight of important public interests demonstrates the loss of legitimacy of previous decisions and processes. It is the legitimacy of the programmes that requires public input.

The literature on public understanding of science also suggests that merely providing more detailed information and technical protections is unlikely to increase public trust.Footnote 4 Although alternative models of informed consent are beyond the scope of this chapter, it seems more likely that informed consent depends on relationships of trust, and that trust, or its absence, operates more as a heuristic that provides a context in which people make decisions under conditions of limited time and understanding.Footnote 5 Trust is often extended without assessment of whether the conditions justify trust, or are trustworthy. It also follows that trust may not be extended even when the conditions seem to merit trust. The complicated relationship between trust and trustworthiness has been discussed in another chapter (see Chuong and O’Doherty, Chapter 12) and in the introduction to this volume, citing Onora O’Neill, who encourages us to focus on demonstrating trustworthiness in order to earn trust.

The care.data experience illustrates how careful regulation within the scope of law and current research ethics, and communicating those arrangements to a wide public, was not sufficient for the plan to be perceived as legitimate and to be trusted. Regulation of health research needs to be trustworthy, yet distrust can be stimulated despite considerable efforts and on-going vigilance. If neither trust nor distrust is based on the soundness of regulation of health research, then the sources of distrust need to be explicitly addressed.

25.3 Patients or Public? Conceptualising What Interests Are Important

It is possible to turn to ‘patients’ or ‘the public’ to understand what may stabilise or destabilise trust and legitimacy in health research. There is a considerable literature, and there are funding opportunities, related to involving patients in research projects and to the improved outcomes this can bring.Footnote 6 The distinction between public and patients is largely conceptual, but it is important to clarify what aspects of participants’ lives we are drawing on to inform research and regulation, and then to structure recruitment and the events to emphasise that focus.Footnote 7 In their role as patients, or as caregivers and advocates for family and friends in healthcare, participants can draw on their experiences to inform clinical care, research and policy. In contrast, decisions that allocate across healthcare needs, or broader public interests, require consideration of a wider range of experiences, as well as the values and practical knowledge that participants hold as members of the public. Examples of where it is important to achieve a wider ‘citizen’ perspective include funding decisions on drug expenditures and disinvestment, and balancing privacy concerns against benefits from access to health data or biospecimens.Footnote 8 Consideration of how to involve the public in research priorities is not adequately addressed by involving community representatives on research ethics review committees.Footnote 9

Challenges to trust and legitimacy often arise when there are groups who hold different interpretations of what is in the public interest. Vocal participants on an issue often divide into polarised groups. But there is often also a multiplicity of public interests, so there is no single ‘public interest’ to be discovered or determined. Each configuration of a balance of interests also has resource implications, and the consequences are borne unevenly across the lines of inequity in society. There is a democratic deficit when decisions are made without input from members of the public who will be affected by the policy but have not been motivated to engage. This deficit is best addressed by ‘actively seek(ing) out moral perspectives that help to identify and explore as many moral dimensions of the problem as possible’.Footnote 10 This rejects the notion that bureaucracies and elected representatives are adequately informed by experts and stakeholders to determine what is in the interests of all who will be affected by important decisions. These decisions are, in fact, about a collective future, often funded by public funds with opportunity costs. Research regulation, like biotechnology development and policy, must explicitly consider how, and by whom, the relative importance of benefits and risks is decided.

The distinction between trust and trustworthiness, between bureaucratic legitimacy and perceived social licence, gives rise to the concern that much patient and public engagement may be superficial and even manipulative.Footnote 11 Careful consideration must be given to how the group is convened, informed and facilitated, and to how conclusions or recommendations are formulated. An earlier chapter considered the range of approaches to public and patient engagement, and how different approaches are relevant for different purposes (see Aitken and Cunningham-Burley, Chapter 11).Footnote 12 To successfully stimulate trust and legitimacy, the process of public engagement requires working through these dimensions.

25.4 Conceptualising Public Expertise: Representation and Inclusiveness

The use of the term ‘public’ is normally intended to be as inclusive as possible, but it is also used to distinguish the call to public deliberation from other descriptions of members of society or stakeholders. There is a specific expertise called upon when people participate as members of the public as opposed to patients, caregivers, stakeholders or experts. Participants are sought for their broad life perspective. As perspective bearers coming from a particular structural location in society, with ‘experience, history and social knowledge’,Footnote 13 participants draw on their own social knowledge and capacity in a deliberative context that supports this articulation without presuming that their experiences are adequate to understand those of others, or that there is necessarily a common value or interest.

‘Public expertise’ is what we all develop as we live in our particular situatedness; in structured deliberative events it is blended with an understanding of other perspectives, and directed to develop collective advice related to the controversial choices that are the focus of the deliberation. Adopting Althusser’s notion of hailing or ‘interpellation’ as the ideological construction of people’s role and identity, Berger and De Cleen suggest that calling people to deliberate ‘offers people the opportunity to speak (thus empowering them) and a central aspect of how their talk is constrained and given direction (the exercise of power on people)’.Footnote 14 In deliberation, the manifestation of public expertise is interwoven with the overall framing, the two together co-creating the capacity to consider the issues deliberated from a collective point of view.Footnote 15 Political scientist Mark Warren suggests that ‘(r)epresentation can be designed to include marginalized people and unorganized interests, as well as latent public interests’.Footnote 16 Citizen juries are one form of deliberation; as the name and process suggest, the courts have long drawn on the public to constitute a group of peers who must make sense of, and form collective judgments out of, conflicting and diverse information and alternative normative weightings.Footnote 17

Simone Chambers, in a classic review of deliberative democracy, emphasised two critiques from diversity theory and suggested that these would be a central concern for the next generation of deliberative theorists: (1) reasonableness and reason-giving; and (2) conditions of equality for participants in deliberative activities.Footnote 18 The facilitation of deliberative events is discussed below, but participants can be encouraged and given the opportunity to understand each other’s perspectives in a manner that may be less restrictive than theoretical discussions suggest. For example, the use of narrative accounts to explain how participants come to hold particular beliefs or positions provides important perspectives that might not be volunteered or considered if there were a strong emphasis on justifying one’s views with reasons in order for them to be considered.Footnote 19

The definition and operationalisation of inclusiveness are important because deliberative processes are rarely large scale, focussing instead on the way that small groups can demonstrate how a wider public would respond if they were informed and civic-minded.Footnote 20 Representation or inclusiveness is often the starting place for consideration of an engagement process.Footnote 21 Steel and colleagues have described three different types of inclusiveness that provide conceptual clarity about the constitution of a group for engagement: representative, egalitarian and normic diversity.Footnote 22

Representative diversity requires that the distribution of the relevant sub-groups in the sample reflects the same distribution as in the reference population. Egalitarian inclusiveness requires equal representation of people from each relevant sub-group, so that each perspective is given equal weight; in contrast to representative diversity, it ignores the size of each sub-group in the population. Normic diversity requires the over-representation of sub-groups who are marginalised or overwhelmed by the larger, more influential or mainstream groups in the population. Each of these concepts aims for a symmetry, but the representative approach presumes that symmetry is the replication of the population, while the egalitarian and normic concepts directly consider asymmetry of power and voice in society.
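To make the contrast between these three concepts concrete, the following minimal sketch – in Python, with invented sub-groups, population shares and a stipulated ‘marginalised’ label, none of which come from Steel and colleagues – computes how a panel of fixed size might be composed under each concept. The naive rounding means seat totals may not sum exactly to the panel size; real recruitment would, of course, involve far more than arithmetic.

    # Illustrative only: sub-groups, shares and the 'marginalised' set are invented.
    population_share = {
        "group_A": 0.70,   # dominant/mainstream sub-group
        "group_B": 0.20,
        "group_C": 0.10,   # assumed marginalised sub-group
    }
    marginalised = {"group_C"}
    PANEL_SIZE = 20

    def representative(shares, n):
        """Mirror the population distribution in the panel."""
        return {g: round(s * n) for g, s in shares.items()}

    def egalitarian(shares, n):
        """Equal seats per sub-group, ignoring sub-group size."""
        return {g: n // len(shares) for g in shares}

    def normic(shares, n, boost=2.0):
        """Over-represent marginalised sub-groups by a chosen factor."""
        weights = {g: s * (boost if g in marginalised else 1.0)
                   for g, s in shares.items()}
        total = sum(weights.values())
        return {g: round(w / total * n) for g, w in weights.items()}

    for name, fn in [("representative", representative),
                     ("egalitarian", egalitarian),
                     ("normic", normic)]:
        print(name, fn(population_share, PANEL_SIZE))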

Attempts to enhance the range of perspectives considered in determining the public interest(s) are likely to draw on the normic and egalitarian concepts of diversity, and to de-emphasise the representative notion. The goal of deliberative public engagement is to address a democratic deficit whereby the perspectives of some groups have dominated consideration of the issues, even if none have prevailed over others. It seeks to include a wider range of input from diverse citizens about how to live together, given the different perspectives on what is ‘in the public interest’. Normic diversity suggests that dominant groups should be less present in the deliberating group, and egalitarian diversity suggests that it is important to have similar representation across the anticipated diverse perspectives. The deliberation must be informed about, but not subjugated by, dominant perspectives; one approach is to exclude dominant perspectives, including those of substance experts, from participating in the deliberation, but to introduce their perspectives and related information through materials and presentations intended to inform participants. Deliberative participants must exercise their judgement and critically consider a wide range of perspectives, while stakeholders are agents for a collective identity that asserts the importance of one perspective over others.Footnote 23 It is also challenging to identify the range of relevant perspectives that give particular form to the public expertise for an issue, although demographics may be used to ensure that participants reflect a range of life experiences.Footnote 24 Specific questions may also suggest that particular public perspectives are important to include in the deliberating group. For example, in Californian deliberations on biobanks it was important to include Spanish-only speakers because, despite accounting for the majority of births, they were often excluded from research regulation issues (normic diversity), and because they were an identifiable group who likely had unique perspectives compared to other demographic segments of the California population (egalitarian diversity).Footnote 25

25.5 Mobilising Public Expertise in Deliberation

As previously discussed, mobilising public expertise requires considerable support. To be credible and legitimate, a deliberative process must demonstrate that the participants are adequately informed and consider diverse perspectives. Participants must respectfully engage each other in the development of recommendations that focus on reasoned inclusiveness but fully engage the trade-offs required in policy decisions.

It seems obvious that participants in an engagement convened to advise on research regulation must be informed about the activities to be regulated. This is far from a simple task. An engagement can easily be undermined if the information provided is incomplete or biased. It is important not only to provide complete technical details, but also to ensure that social controversies and stakeholder perspectives are fairly represented. This can be managed by having an advisory group of experts, stakeholders and potential knowledge users. Advisors can provide input into the questions and the range of relevant information that participants must consider to be adequately informed. It is also important to consider how best to provide information to support comprehension across participants with different backgrounds. One approach is to utilise a combination of a background booklet and a panel of four to six speakers.Footnote 26 The speakers, a combination of experts and stakeholders, are asked to be impassioned, explaining how or why they come to their particular view. This will establish that there are controversies, help draw participants into the issues and stimulate interest in the textual information.

Facilitation is another critical element of deliberative engagement. Deliberative engagement is distinguished by collective decisions supported by reasons from the participants – the recommendations and conclusions are the result of a consideration of the diverse perspectives reflected in the process and among participants. The approach to facilitation openly accepts that participants may re-orient the discussion and focus, and that the role of facilitation is to frame the discussion in a transparent manner.Footnote 27 Small groups of six to eight participants can be facilitated to develop fuller participation and articulation of different perspectives and interests than is possible in a larger group. Large-group facilitation can be oriented to giving the participants as much control over topic and approach as they are willing to assume, while supporting exploration of issues and suggesting statements on which the group may be converging. The facilitator may also bring closure, enabling participants to move on to other issues, by suggesting that there is a disagreement that can be captured. Identifying places of deep social disagreement shows where policy will need to resolve controversy about what is genuinely in the public’s interest, and where there may be a need for more nuanced decisions on a case-by-case basis. The involvement of industry and commercialisation in biobanks is a general area that has frequently defied convergence in deliberation.Footnote 28

Even if recruitment succeeds in convening a diverse group of participants, sustaining diversity and participation across participants requires careful facilitation. The deliberative nature of the activity is dynamic. Participants increase their level of knowledge and understanding of diverse perspectives as facilitation encourages them to shift from an individual to a collective focus. Premature insistence on justifications can stifle understanding of diverse perspectives, but later in the event justifications are crucial to produce reasons in support of conclusions. Discussion and conclusions can be inappropriately influenced by participants’ personalities, as well as by the tendency for some participants to position themselves as having authoritative expertise. It is well within the expertise of the public to consider whether claims to special knowledge, or forceful personalities, lack substantive support for their positions. But self-reflective and respectful communication does not occur naturally, and deliberation requires skilled facilitation to avoid dominance by some participants and to encourage critical reflection and participation by quieter participants. The framing of the issues and information, as well as the facilitation, inevitably shapes the conclusions, and participants may not recognise that issues and concerns important to them have been ruled out of scope.

Assessing the quality of deliberative public engagement is fraught with challenges. Abelson and Nabatchi have provided good overviews of the state of deliberative civic engagement, its impacts and its assessment.Footnote 29

There are recent considerations of whether and under what conditions deliberative public engagement is useful and effective.Footnote 30 Because deliberative engagement is expensive and resource-intensive, it needs to be directed to controversies where the regulatory bodies want, and are willing, to have their decisions and policies shaped by public input. Such authorities do not thereby give up their legitimate duties and freedom to act in the public interest or to consult with experts and stakeholders. Rather, activities such as deliberative public engagement are supplemental to the other sources of advice, and not determinative of the outcomes. This point is important for knowledge users, sponsors and participants to understand.

How, then, might deliberative public engagement have helped avoid the negative reaction to care.data? It is first important to distinguish trust from trustworthiness. Trust, sometimes considered as social licence, is usually presumed in the first instance. As a psychological phenomenon, trust is often a heuristic form of reasoning that supports economical use of intellectual and social capital.Footnote 31 There is some evidence that trust is particularly important with regard to research participation.Footnote 32 Based on previous experiences, we develop trust – or distrust – in people and institutions. There is a good chance that many people whose records are in the NHS would approach the use of their records for other purposes with a general sense of trust. Loss of trust often flows from the abrupt discovery that things are not as we presumed, which is what appears to have happened with care.data. Trustworthiness of governance, on the other hand, obtains when the governance system has characteristics that, if scrutinised, would show it to be worthy of trust.

Given this understanding, it might have been possible to demonstrate the trustworthiness of the governance of NHS data by holding deliberative public engagement and considering its recommendations for data management. Public trust might also not have been as widely undermined if the announcement of the extension of access to include commercial partners had provided a basis for finding the governance trustworthy. Of course, distrust among critical stakeholders and members of the public will still require direct responses to their concerns.

It is important to note that trustworthiness that can stand up to scrutiny is the goal, rather than directing efforts at increasing trust. Since trust is given in many cases without reflection, it can often be manipulated. By aiming at trustworthiness, arrived at through considerations that include deliberative public input, authorities demonstrate that their approach merits trust. Articulating how controversies have been considered with input from informed and deliberating members of the public would have demonstrated that the trust presumed at the outset was, in an important sense, justified. Now, after trust has been lost and education and reinforced individual consent have not addressed the concerns, deliberation to achieve legitimate and trustworthy governance may have a more difficult time stimulating wide public trust, but it may remain the best available option.

25.6 Conclusion

Deliberative public engagement has an important role in the regulation of health research. Determining what trade-offs are in the public interest requires a weighing of alternatives and of the relative weights of different interests. Experts and stakeholders are legitimate advocates for the interests they represent, but their interests manifest an asymmetry of power. A well-designed process for including diverse public input can increase the legitimacy and trustworthiness of the resulting policies. Deliberative engagement mobilises a wider public to direct their collective experience and expertise to these questions. The resulting advice about what is in the public interest explicitly builds diversity into the recruitment of the participants and into the design of the deliberation.

Deliberative public engagement is helpful for issues where there is genuine controversy about what is in the public interest, but it is far from a panacea. It is an important complement to stakeholder and expert input. The deliberative approach starts with careful consideration of the issues to be deliberated and of how diversity is to be structured into the recruitment of a deliberating small group. Expert and stakeholder advisors, as well as the decision-makers who are the likely recipients of the conclusions of the deliberation, can help develop the range of information necessary for the deliberation on the issues to be informed. Participants need to be supported by exercises and facilitation that help them develop a well-informed and respectful understanding of diverse perspectives. Facilitation then shifts to support the development of a collective focus and of conclusions with justifications. Diversity and asymmetry of power are respected through the conceptualisation and implementation of inclusiveness, the development of information, and facilitation that respects different kinds of warranting. There must be a recognition that the role of event structure and facilitation means that the knowledge is co-produced with the participants, and that it is very challenging to overcome asymmetries, even in the deliberation itself. Another important feature is the ability to identify persistent disagreements and not force premature consensus on what is in the public interest. In this respect, deliberation mirrors how the regulation of health research must itself struggle with the question of when research is in ‘the public interest’.

It is inadequate to assert or assume that research, and its existing and emerging regulation, are in the public interest. It is vital to ensure wide, inclusive consideration that is not overwhelmed by economic or other strongly vested interests. This is best accomplished by developing, assessing and refining ways to better include diverse citizens in informed reflections about what is in our collective interests, and about how best to live together when those interests appear incommensurable.

26 Towards Adaptive Governance in Big Data Health Research: Implementing Regulatory Principles

Effy Vayena and Alessandro Blasimme
26.1 Introduction

In recent times, biomedical research has begun to tap into larger-than-ever collections of different data types. Such data include medical history, family history, genetic and epigenetic data, information about lifestyle, dietary habits, shopping habits, data about one’s dwelling environment, socio-economic status, level of education, employment and so on. As a consequence, the notion of health data – data that are of relevance for health-related research or for clinical purposes – is expanding to include a variety of non-clinical data, as well as data provided by research participants themselves through commercially available products such as smartphones and fitness bands.Footnote 1 Precision medicine that pools together genomic, environmental and lifestyle data represents a prominent example of how data integration can drive both fundamental and translational research in important domains such as oncology.Footnote 2 All of this requires the collection, storage, analysis and distribution of massive amounts of personal information as well as the use of state-of-the-art data analytics tools to uncover new disease-related patterns.

To date, most scholarship and policy on these issues has focused on privacy and data protection. Less attention has been paid to addressing other aspects of the wicked challenges posed by Big Data health research and even less work has been geared towards the development of novel governance frameworks.

In this chapter, we make the case for adaptive and principle-based governance of Big Data research. We outline six principles of adaptive governance for Big Data research and propose key factors for their implementation into effective governance structures and processes.

26.2 The Case for Adaptive Principles of Governance in Big Data Research

For present purposes, the term ‘governance’ alludes to a democratisation of administrative decision-making and policy-making or, to use the words of sociologist Anthony Giddens, to ‘a process of deepening and widening of democracy [in which] government can act in partnership with agencies in civil society to foster community renewal and development.’Footnote 3

Regulatory literature over the last two decades has formalised a number of approaches to governance that seem to address some of the defining characteristics of Big Data health research. In particular, adaptive governance and principles-based regulation appear well-suited to tackle three specific features of Big Data research, namely: (1) the evolving, and thus hardly predictable, nature of the data ecosystem in Big Data health research – including the fast-paced development of new data analysis techniques; (2) the polycentric character of the actor network of Big Data and the absence of a single centre of regulation; and (3) the fact that most of these actors do not currently share a common regulatory culture and are driven by unaligned values and visions.Footnote 4

Adaptive governance is based on the idea that – in the presence of uncertainty, lack of evidence and evolving, dynamic phenomena – governance should be able to adapt to the mutating conditions of the phenomenon that it seeks to govern. Key attributes of adaptive governance are the inclusion of multiple stakeholders in governance design,Footnote 5 collaboration between regulating and regulated actors,Footnote 6 the incremental and planned incorporation of evidence in governance solutionsFootnote 7 and openness to cope with uncertainties through social learning.Footnote 8 This is attained by planning evidence collection and policy revision rounds in order to refine the fit between governance and public expectations; distributing regulatory tasks across a variety of actors (polycentricity); designing partially overlapping competences for different actors (redundancy); and by increasing participation in policy and management decisions by otherwise neglected social groups. Adaptive governance thus seems to adequately reflect the current state of Big Data health research as captured by the three characteristics outlined above. Moreover, social learning – a key feature of adaptive governance – can help explore areas of overlapping consensus even in a fragmented actor network like the one that constitutes Big Data research.

Principles-based regulation (PBR) is a governance approach that emerged in the 1990s to cope with the expansion of the financial services industry. Just as Big Data research is driven by technological innovation, financial technologies (the so-called fintech industry) have played a disruptive role for the entire financial sector.Footnote 9 Unpredictability, the accrual of new stakeholders and a lack of regulatory standards and best practices characterise this phenomenon. To respond to this, regulators such as the UK Financial Services Authority (FSA), backed up by a number of academic supporters of ‘new governance’ approaches,Footnote 10 have proposed principles-based regulation as a viable governance model.Footnote 11 In this model, regulation and oversight rely on broadly stated principles that reflect regulators’ orientations, values and priorities. Moreover, implementation of the principles is not entirely delegated to specified rules and procedures. Rather, PBR relies on regulated actors to set up mechanisms to comply with the principles.Footnote 12 Principles are usually supplemented by guidance, white papers and other policies and processes to channel the compliance efforts of regulated entities. See further on PBR, Sethi, Chapter 17, this volume.

We contend that PBR is helpful in setting up Big Data governance in the research space because it is explicitly focussed on the creation of some form of normative alignment between the regulator and the regulated; it creates conditions that can foster the emergence of shared values among different regulated stakeholders. Since compliance is rooted neither in box-ticking nor in adherence to precisely specified rules, PBR stimulates experimentation with a number of different oversight mechanisms. This bottom-up approach allows stakeholders to explore a wide range of activities and structures to align with regulatory principles, favouring the selection of more cost-efficient and proportionate mechanisms. Big Data health research faces exactly this need: to create stakeholder alignment and to cope with the wide latitude of regulatory attitudes that is to be expected in an innovative domain with multiple newcomers.

The governance model that we propose below relies on both adaptive governance – for its capacity to remain flexible in the face of future evolutions of the field – and PBR – for its emphasis on principles as sources of normative guidance for different stakeholders.

26.3 A Framework to Develop Systemic Oversight

The framework we propose below provides guidance to actors that have a role in the shaping and management of research employing Big Data; it draws inspiration from the above-listed features of adaptive governance. Moreover, it aligns with PBR in that it offers guidance to stakeholders and decision-makers engaged at various levels in the governance of Big Data health research. As we have argued elsewhere, our framework will facilitate the emergence of systemic oversight functions for the governance of Big Data health research.Footnote 13 The development of systemic oversight relies on six high-order principles aimed at reducing the effects of a fragmented governance landscape and at channelling governance decisions – through both structures and processes – towards an ethically defensible common ground. These six principles do not predefine which specific governance structures and processes shall be put in place – hence the caveat that they represent high-order guides. Rather, they highlight governance features that shall be taken into account in the design of structures and processes for Big Data health research. Equally, our framework is not intended as a purpose-neutral approach to governance. Quite the contrary: the six principles we advance do indeed possess a normative character in that they endorse valuable states of affairs that shall occur as a result of appropriate and effective governance. By the same token, our framework suggests that action should be taken in order to avoid certain kinds of risks that will most likely occur if left unattended. In this section, we will illustrate the six principles of systemic oversight – adaptivity, flexibility, monitoring, responsiveness, reflexivity and inclusiveness – while the following section deals with the effective interpretation and implementation of such principles in terms of both structures and processes.

Adaptivity: adaptivity is the capacity of governance structures and processes to ensure proper management of new forms of data as they are incorporated into health research practices. Adaptivity, as presented here, has also been discussed as a condition for resilience, that is, for the capacity of any given system to ‘absorb disturbances and reorganize while undergoing change so as to still retain essentially the same function, structure, identity and feedbacks.’Footnote 14 This feature is crucial in the case of a rapidly evolving field – like Big Data research – whose future shape, as a consequence, is hard to anticipate.

Flexibility: flexibility refers to the capacity to treat different data types depending on their actual use rather than their source alone. Novel analytic capacities are jeopardising existing data taxonomies, which rapidly renders regulatory categories constructed around them obsolete. Flexibility means, therefore, recognising the impact of technical novelties and, at a minimum, giving due consideration to their potential consequences.

Monitoring: risk minimisation is a crucial aim of research ethics. With the possible exception of highly experimental procedures, the spectrum of physical and psychological harms due to participation in health research is fairly straightforward to anticipate. In the evolving health data ecosystem described so far, however, it is difficult to anticipate upfront what harms and vulnerabilities research subjects may encounter due to their participation in Big Data health research. This therefore requires on-going monitoring.

Responsiveness: despite efforts in monitoring emerging vulnerabilities, risks can always materialise. In Big Data health research, privacy breaches are a case in point. Once personal data are exposed, privacy is lost. No direct remedy exists to re-establish the privacy conditions that were in place before the violation. Responsiveness therefore prescribes that measures are put in place to at least reduce the impact of such violations on the rights, interests and well-being of research participants.

Reflexivity: it is well known that certain health-related characteristics cluster in specific human groups, such as populations, ethnic groups, families and socio-economic strata. Big Data are pushing the classificatory power of research to the next level, with potentially worrisome implications. The classificatory assumptions that drive the use of rapidly evolving data-mining capacities need to be put under careful scrutiny as to their plausibility, opportunity and consequences. Failing to do so will result in harms to all human groups affected by those assumptions. What is more, public support for, as well as trust in, scientific research may be jeopardised by the reputational effects that can arise if reflexivity and scrutiny are not maintained.

Inclusiveness: the last component of systemic oversight closely resonates with one of the key features of adaptive governance, that is, the need to include all relevant parties in the governance process. As more diverse data sources are aggregated, it becomes more difficult for research participants to exert meaningful control over the expanding cloud of personal data implicated by their participation.Footnote 15 Experimenting with new forms of democratic engagement is therefore imperative for a field that depends on resources provided by participants (i.e. data), but that, at the same time, can no longer anticipate how such resources will be employed, how they will be analysed and with what consequences. See Burgess, Chapter 25.

These six principles can be arranged to form the acronym AFIRRM: our model framework for the governance of Big Data health research.
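Because these principles are high-order guides rather than prescriptions, one way an oversight body might begin to operationalise AFIRRM is as a structured review checklist. The minimal sketch below, in Python, illustrates this; the principle names are from the framework, but the prompts attached to each, and the review-tracking helper, are purely illustrative assumptions about how such a checklist might be used.

    # Minimal sketch: AFIRRM principles rendered as a review checklist.
    # Principle names follow the framework; the prompts are illustrative
    # assumptions, not part of the framework itself.
    AFIRRM_CHECKLIST = {
        "adaptivity": "Can governance absorb new forms of data as they arise?",
        "flexibility": "Are data treated by actual use, not source alone?",
        "inclusiveness": "Do affected participants and publics have a voice?",
        "reflexivity": "Have classificatory assumptions been scrutinised?",
        "responsiveness": "Are measures in place to mitigate materialised harms?",
        "monitoring": "Is there on-going surveillance for emerging vulnerabilities?",
    }

    def outstanding(addressed):
        """Return the principles a review has not yet addressed."""
        return [p for p in AFIRRM_CHECKLIST if p not in addressed]

    # Example: a partially completed review of a hypothetical data project.
    print(outstanding({"adaptivity", "monitoring"}))
    # -> ['flexibility', 'inclusiveness', 'reflexivity', 'responsiveness']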

26.4 Big Data Health Research: Implementing Effective Governance

While there is no universal definition of the notion of effective governance, it alludes in most cases to an alignment between purposes and outcomes, reached through processes that fulfil constituents’ expectations and project legitimacy and trust onto the involved actors.Footnote 16 This understanding of effective governance fits well with our domain of interest: Big Data health research. In the remainder of this chapter, drawing on literature on the implementation of adaptive governance and PBR, we discuss key issues to be taken into account in trying to derive effective governance structures and oversight mechanisms from the AFIRRM principles.

The AFIRRM framework endorses the use of principles as high-level articulations of what is to be expected of regulatory mechanisms for the governance of Big Data health research. Unlike the use of PBR in financial markets, where a single regulator expects compliance, PBR in the Big Data context responds to the reality that governance functions are distributed among a plethora of actors, such as ethics review committees, data controllers, privacy commissioners, access committees, etc. PBR within the AFIRRM framework offers a blueprint for such a diverse array of governance actors to create new structures and processes to cope with the specific ethical and legal issues raised by the use of Big Data. Such principles have a generative function in the governance landscape that is in the process of being created to govern those issues.

The key advantage of principles in this respect is that they require making the reasons behind regulation visible to all interested parties, including publics. This amounts to an exercise of public accountability that can bring about normative coherence among actors with different starting assumptions. The AFIRRM principles stimulate a bottom-up exploration of the values at stake and of how compliance with existing legal requirements will be met. In this sense, the AFIRRM principles perform a formal, more than a substantive, function, precisely because we assume the substantive ethical and legal aims of regulation that have already been developed in health research – such as the protection of research participants from the risk of harm – to hold true also for research employing Big Data. What the AFIRRM principles do is provide a starting point for deliberation and action that respects existing ethical standards and complies with pre-existing legal rules.

The AFIRRM principles do not envision actors in the space of Big Data research self-regulating, but they do presuppose trust between regulators and regulated entities: regulators need to be confident that regulated entities will do their best to give effect to the principles in good faith. While some of the interests at stake in Big Data health research might be in tension – such as the interest of researchers in accessing and distributing data, and the interest of data donors in controlling what their personal data are used for – it is to the advantage of all interested parties to begin with conversations based on core agreed principles, so that efficient governance structures and processes that meet stakeholders’ expectations can be developed. Practically, this requires all relevant stakeholders to have a say in the development and operationalisation of the principles at stake.

Adaptive governance scholarship has identified typical impediments to effective operationalisation of adaptive mechanisms. A 2012 literature review of adaptive governance, network management and institutional analysis identified three key challenges to the effective implementation of adaptive governance: ill-defined purposes and objectives, unclear governance context and lack of evidence in support of blueprint solutions.Footnote 17

Let us briefly illustrate each of these challenges and explain how systemic oversight tries to avoid them. In the shift from centralised forms of administration and decision-making to less formalised and more distributed governance networks that occurred over the last three decades,Footnote 18 the identification of governance objectives is no longer straightforward. This difficulty may also be due to the potentially conflicting values of different actors in the governance ecosystem. In this respect, systemic oversight has the advantage of not being normatively neutral. The six principles of systemic oversight determinedly aim at fostering an ethical common ground for a variety of governance actors and activities in the space of Big Data research. What underpins the framework, therefore, is a view of what requires ethical attention in this rapidly evolving field, and of how to prioritise actions accordingly. In this way, systemic oversight can provide orientation for a diverse array of governance actors (structures) and mechanisms (processes), all of which are supposed to produce an effective system of safeguards around activities in this domain. Our framework directs attention to critical features of Big Data research and promotes a distributed form of accountability that will, where possible, emerge spontaneously from the different operationalisations of its components. The six components of systemic oversight, therefore, suggest what is important to take into account when considering how to adapt the composition, mandate, operations and scope of oversight bodies in the field of Big Data research.

The second challenge to effective adaptive governance – unclear governance context – refers to the difficulty of mapping the full spectrum of rules, mechanisms, institutions and actors involved in a distributed governance system or systems. Systemic oversight requires mapping the overall governance context in order to understand how best to implement the framework in practice. This amounts to an empirical inquiry into the conditions (structures, mechanisms and rules) in which governance actors currently operate. In a recent study we showed that current governance mechanisms for research biobanks, for instance, are not aligned with the requirements of systemic oversight.Footnote 19 In particular, we showed that systemic oversight can contribute to improving the accountability of research infrastructures that, like biobanks, collect and distribute an increasing amount of scientific data.

The third and last challenge to effective operationalisation of adaptive mechanisms has to do with the limits of ready-made blueprint solutions to complex governance models. Political economist and Nobel Laureate Elinor Ostrom has written extensively on this. In her work on socio-ecological systems, Ostrom has convincingly shown that policy actors have the tendency to buy into what she calls ‘policy panaceas’,Footnote 20 that is, ready-made solutions to very complex problems. Such policy panaceas are hardly ever supported by solid evidence regarding the effectiveness of their outcomes. One of the most commonly cited reasons for their lack of effectiveness is that complexity entails high degrees of uncertainty as to the very phenomenon that policy makers are trying to govern.

We saw that uncertainty is characteristic of Big Data research too (see Section 26.2). That is why systemic oversight refrains from prescribing any particular governance solution. While not rejecting traditional predict-and-control approaches (such as informed consent, data anonymisation and encryption), systemic oversight does not put all the regulatory weight on any particular instrument or body. The systemic ambition of the framework lies in its pragmatic orientation towards a plurality of tools, mechanisms and structures that could jointly stabilise the responsible use of Big Data for research purposes. In this respect, our framework acknowledges that ‘[a]daptation typically emerges organically among multiple centers of agency and authority in society as a relatively self-organized or autonomous process marked by innovation, social learning and political deliberation’.Footnote 21

Still, a governance framework’s capacity to avoid known bottlenecks to operationalisation is a necessary but not a sufficient condition for its successful implementation. The further question is how the principles of the systemic oversight model can be incorporated into structures and processes in Big Data research governance. By structures we mean actors and networks of actors involved in governance, organised in bodies charged with oversight, organisational or policy-making responsibilities. Processes, by contrast, are the mechanisms, procedures, rules, laws and codes through which actors operate and bring about their governance objectives. Structures and processes define the polycentric, redundant and experimental system of governance that an adaptive governance model intends to promote.Footnote 22

26.5 Key Features of Governance Structures and Processes

Here we follow the work of Rijke and colleaguesFootnote 23 in identifying three key properties of adaptive governance structures: centrality, cohesion and density. While it is acknowledged that centralised structures can be effective as a response to crises and emergencies, centralisation is precisely a challenge in Big Data; our normative response is to call for inclusive social learning among the broad array of stakeholders, subject to challenges of incomplete representation of relevant interests (see further below). Still, this commitment can help to promote network cohesion by fostering discussion about how to implement the principles, while also promoting the formation of links between governance actors, as required by density. In addition, this can help to ensure that governance roles are fairly distributed among a sufficiently diverse array of stakeholders and that, as a consequence, decisions are not hijacked by technical experts.

The governance space in Big Data research is already populated by numerous actors, such as IRBs, data access committees and advisory boards. These bodies are not necessarily inclusive of a sufficiently broad array of stakeholders and therefore they may not be very effective at promoting social learning. Their composition could thus be rearranged in order to be more representative of the interests at stake and to promote continued learning. New actors could also enter the governance system. For instance, data could be made available for research by data subjects themselves through data platforms.Footnote 24

Networks of actors (structures) operating in the space of health research do so through mechanisms and procedures (processes) such as informed consent and ethics review, as well as data access review, policies on reporting research findings to participants, public engagement activities and privacy impact assessment.

Processes are crucial to effective governance of health research and are a critical component of the systemic oversight approach as their features can determine the actual impact of its principles. Drawing on scholarship in adaptive governance, we present three such features (components) that are central to the appropriate interpretation of the systemic oversight principles.

Social learning: social learning refers to learning that occurs by observing others.Footnote 25 In governance settings that are open to participation by different stakeholders, social learning can occur across different levels and hierarchies of the governance structures. According to many scholars, including Ostrom,Footnote 26 social learning represents an alternative to policy blueprints (see above) – especially when it is coupled with, and leads to, adaptive management. Planned adaptations – that is, previously scheduled rounds of policy revision in light of new knowledge – can be occasions for governance actors to capitalise on each other's experience and to learn about evolving expectations and risks. Such learning exercises can reduce uncertainty and lead to adjustments in mechanisms and rules. The premise of this approach is the realisation that in complex systems characterised by pronounced uncertainty, 'no particular epistemic community can possess all the necessary knowledge to form policy'.Footnote 27 Social learning – be it aimed at gathering new evidence, at fostering capacity building or at assessing policy outcomes – is relevant to all six components of systemic oversight. The French law on bioethics, for instance, prescribes periodic rounds of nationwide public consultation – the so-called Estates General on bioethics.Footnote 28 This is one example of how social learning can be fostered. Similar social learning can be triggered at smaller scales – for instance, in local oversight bodies – in order to explore new solutions and alternative designs.

Complementarity: complementarity is the capacity of governance processes both to be functionally compatible with one another and to correspond procedurally to the phenomena they are intended to regulate. Functional complementarity refers to the distribution of regulatory functions across a given set of processes exhibiting partial overlap (see redundancy, above). This feature is crucial for both monitoring and reflexivity. Procedural complementarity, on the other hand, refers to the temporal alignment between governance processes and the activities that depend on such processes. Prominent examples, in this respect, are the timing of ethics review processes and of the processing of data access requests.Footnote 29 For instance, the European General Data Protection Regulation (GDPR) prescribes a maximum delay of 72 hours between the detection and the notification of personal data breaches. This provision is an example of procedural complementarity that is of the utmost importance for the principle of responsiveness.
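As a purely illustrative sketch of such temporal alignment (the function, timestamps and simplified reading of the rule are our own assumptions, not part of the GDPR text), the 72-hour window can be expressed as a simple deadline check:

```python
# Illustrative sketch of procedural complementarity as temporal alignment:
# a simplified GDPR-style 72-hour notification deadline for a data breach.
# All names and timestamps here are invented for illustration.
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(detected_at: datetime) -> datetime:
    """Latest time by which the breach should be notified, on a simplified
    reading of the 72-hour rule (ignoring the 'where feasible' proviso)."""
    return detected_at + NOTIFICATION_WINDOW

detected = datetime(2021, 3, 1, 9, 0)
print(f"Detected {detected}; notify no later than {notification_deadline(detected)}")
```

The point is not the arithmetic but the design choice it encodes: the governance process is timed against the regulated activity, rather than left open-ended.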

Visibility: governance processes need to be visible; that is, procedures and their scope need to be as publicly available as possible to whoever is affected by them or must act in accordance with them. The notion of regulatory visibility has recently been highlighted by Laurie and colleagues, who argue for regulatory stewardship within ecosystems to help researchers clarify values and responsibilities in health research and navigate its complexities.Footnote 30 Recent work also demonstrates that it is currently difficult to access the policies and standard operating procedures of prominent research institutions such as biobanks. In principle, fair scientific competition may militate against disclosure of technical details about data processing, but it is hard to imagine practical circumstances in which the administrators of at least publicly funded datasets would not have incentives to share as much information as possible about the way they handle their data. Process visibility goes beyond fulfilling a pre-determined set of criteria (for instance, for auditing purposes). By disclosing governance processes and opportunities for engagement, actors offer reasons to be trusted by a variety of stakeholders.Footnote 31 This feature is of particular relevance for the principles of monitoring and reflexivity, as well as for improving the effectiveness of inclusive governance processes.

26.6 Conclusion

In this chapter, we have defended adaptive governance as a suitable regulatory approach for Big Data health research, proposing six governance principles to foster the development of appropriate structures and processes for handling its critical aspects. We have analysed key aspects of implementation and identified a number of important features that can make adaptive regulation operational. However, one might legitimately ask: in the absence of a central regulatory actor endowed with clearly recognised statutory prerogatives, how can it be assumed that the AFIRRM principles will be endorsed by the diverse group of stakeholders operating in the Big Data health research space? Clearly, this question does not have a straightforward answer. However, to increase the likelihood of uptake, we have advanced AFIRRM as a viable and adaptable model for the creation of the necessary tools to deliver on common objectives. Our model is based on a careful analysis of regulatory scholarship vis-à-vis the key attributes of this type of research. We are currently undertaking considerable efforts to introduce AFIRRM to regulators, operators and organisations in the space of research and health policy. We are cognisant of the fact that the implementation of a model like AFIRRM need not be temporally linear. Different actors may take the initiative at different points in time. It cannot be expected that a coherent system of governance will emerge in a synchronically orchestrated manner from the uncoordinated action of multiple stakeholders. Such a path could only be imagined if a central regulator had the power and the will to make it happen. Nothing indicates, however, that regulation will assume a centralised character anytime soon. Nevertheless, polycentricity is not in itself a barrier to the emergence of a coherent governance ecosystem. Indeed, the AFIRRM principles – in line with their adaptive orientation – rely precisely on polycentric governance to cope with the uncertainty and complexity of Big Data health research.

27 Regulating Automated Healthcare and Research Technologies: First Do No Harm (to the Commons)

Roger Brownsword
27.1 Introduction

New technologies, techniques and tests in healthcare, offering better prevention or better diagnosis and treatment, are not manna from heaven. Typically, they are the products of extensive research and development, increasingly enabled by high levels of automation and reliant on large datasets. However, while some will push for a permissive regulatory environment that is facilitative of beneficial innovation, others will push back against research that gives rise to concerns about the safety and reliability of particular technologies, as well as their compatibility with respect for fundamental values. Yet how are the interests in pushing forward with research into potentially beneficial health technologies to be reconciled with the heterogeneous interests of those who seek to push back against them?

A stock answer to this question is that regulators, neither over-regulating nor under-regulating, should seek an accommodation or a balance of interests that is broadly ‘acceptable’. If the issue is about risks to human health and safety, then regulators – having assessed the risk – should adopt a management strategy that confines risk to an acceptable level; and, if there is a tension between, say, the interest of researchers in accessing health data and the interest of patients in both their privacy and the fair processing of their personal data, then regulators should accommodate these interests in a way that is reasonable – or, at any rate, not manifestly unreasonable.

The central purpose of this chapter is not to argue that this balancing model is always wrong or inappropriate, but to suggest that it needs to be located within a bigger picture of lexically ordered regulatory responsibilities.Footnote 1 In that bigger picture, the paramount responsibility of regulators is to act in ways that protect and maintain the conditions that are fundamental to human social existence (the commons). After that, a secondary responsibility is to protect and respect the values that constitute a group as the particular kind of community that it is. Only after these responsibilities have been discharged do we get to a third set of responsibilities that demand that regulators seek out reasonable and acceptable balances of conflicting legitimate interests. Accordingly, before regulators make provision for a – typically permissive – framework that they judge to strike an acceptable balance of interests in relation to some particular technology, technique or test, they should check that its development, exploitation, availability and application crosses none of the community’s red lines and, above all, that it poses no threat to the commons.

The chapter is in three principal parts. First, in Section 27.2, we start with two recent reports by the Nuffield Council on Bioethics – one a report on the use of Non-Invasive Prenatal Testing (NIPT),Footnote 2 and the other on genome-editing and human reproduction.Footnote 3 At first blush, the reports employ a similar approach, identifying a range of legitimate – but conflicting – interests and then taking a relatively conservative position. However, while the NIPT report exemplifies a standard balancing approach, the genome-editing report implicates a bigger picture of regulatory responsibilities. Second, in Section 27.3, I sketch my own take on that bigger picture. Third, in Section 27.4, I speak to the way in which the bigger picture might bear on our thinking about the regulation of automated healthcare and research technologies. In particular, in this part of the chapter, the focus is on those technologies that power smart machines and devices, technologies that are hungry for human data but then, in their operation, often put humans out of the loop.

27.2 NIPT, Genome-Editing and the Balancing of Interests

In its report on the ethics of NIPT, the Nuffield Council on Bioethics identifies a range of legitimate interests that call for regulatory accommodation. On the one side, there is the interest of pregnant women and their partners in making informed reproductive choices. On the other side, there are interests – particularly of the disability community and of future children – in equality, fairness and inclusion. The question is: how are regulators to ‘align the responsibilities that [they have] to support women to make informed reproductive choices about their pregnancies, with the responsibilities that [they have] … to promote equality, inclusion and fair treatment for all’?Footnote 4 In response to which, the Council, being particularly mindful of the interests of future children – in an open future – and the interest in a wider societal environment that is fair and inclusive, recommends that a relatively restrictive approach should be taken to the use of NIPT.

In support of the Council's approach and its recommendation, there is a good deal that can be said. For example, the Council consulted widely before drawing up the inventory of interests to be considered; it engaged with the arguments rationally and in good faith; where appropriate, its thinking was evidence-based; and its recommendation is not manifestly unreasonable. If we were to imagine a judicial review of the Council's recommendation, it would surely survive the challenge.

However, if the Council had given greater weight to the interest in reproductive autonomy together with the argument that women have ‘a right to know’ and that healthcare practitioners have an interest in doing the best that they can for their patients,Footnote 5 leading to a much less restrictive recommendation, we could say exactly the same things in its support.

In other words, so long as the Council – and, similarly, any regulatory body – consults widely and deliberates rationally, and so long as its recommendations are not manifestly unreasonable, we can treat its preferred accommodation of interests as acceptable. Yet, in such balancing deliberations, it is not clear where the onus of justification lies or what the burden of justification is; and, in the final analysis, we cannot say why the particular restrictive position that the Council takes is more or less acceptable than a less restrictive position.

Turning to the Council’s second report, it hardly needs to be said that the development of precision gene-editing techniques, notably CRISPR-Cas9, has given rise to considerable debate.Footnote 6 Addressing the ethics of gene editing and human reproduction, the Council adopted a similar approach to that in its report on NIPT. Following extensive consultation – and, in this case, an earlier, more general, reportFootnote 7 – there is a careful consideration of a range of legitimate interests, following which a relatively conservative position is taken. Once again, although the position taken is not manifestly unreasonable, it is not entirely clear why this particular position is taken.

Yet, in this second report, there is a sense that something more than balancing might be at stake.Footnote 8 For example, the Council contemplates the possibility that genome editing might inadvertently lead to the extinction of the human species – or, conversely, that genome editing might be the salvation of humans who have catastrophically compromised the conditions for their existence. In these short reflections about the interests of ‘humanity’, we can detect a bigger picture of regulatory responsibilities.

27.3 The Bigger Picture of Regulatory Responsibilities

In this part of the chapter, I sketch what I see as the bigger – three-tier – picture of regulatory responsibilities and then speak briefly to the first two tiers.

27.3.1 The Bigger Picture

My claim is that regulators have a first-tier ‘stewardship’ responsibility for maintaining the pre-conditions for any kind of human social community (‘the commons’). At the second tier, regulators have a responsibility to respect the fundamental values of a particular human community, that is to say, the values that give that community its particular identity. At the third tier, regulators have a responsibility to seek out an acceptable balance of legitimate interests. The responsibilities at the first tier are cosmopolitan and non-negotiable. The responsibilities at the second and third tiers are contingent, depending on the fundamental values and the interests recognised in each particular community. Conflicts between commons-related interests, community values and individual or group interests are to be resolved by reference to the lexical ordering of the tiers: responsibilities in a higher tier always outrank those in a lower tier. Granted, this does not resolve all issues about trade-offs and compromises because we still have to handle horizontal conflicts within a particular tier. But, by identifying the tiers of responsibility, we take an important step towards giving some structure to the bigger picture.
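To make the lexical ordering concrete, the following minimal sketch is our own illustration (the class, tier numbering and example concerns are invented for the purpose) of how a conflict between concerns located in different tiers would be resolved, while flagging that same-tier conflicts call for horizontal balancing rather than ranking:

```python
# Illustrative sketch (not drawn from the chapter's sources) of lexically
# ordered regulatory responsibilities: any tier-1 (commons) concern outranks
# all tier-2 (community values) concerns, which outrank all tier-3
# (legitimate interests) concerns.
from dataclasses import dataclass

@dataclass
class Concern:
    description: str
    tier: int  # 1 = commons, 2 = community values, 3 = legitimate interests

def resolve(a: Concern, b: Concern) -> Concern:
    """Return the concern that takes lexical priority; same-tier conflicts
    are left to horizontal balancing, which the ordering does not settle."""
    if a.tier != b.tier:
        return a if a.tier < b.tier else b
    raise ValueError("Same tier: requires horizontal balancing, not ranking")

commons = Concern("threat to the conditions of human existence", tier=1)
interest = Concern("researcher interest in data access", tier=3)
print(resolve(commons, interest).description)
```

The sketch simply encodes the point that no accumulation of third-tier interests can outweigh a single first-tier concern; within a tier, the ordering is silent and balancing takes over.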

27.3.2 First-Tier Responsibilities

Regulatory responsibilities start with the existence conditions that support the particular biological needs of humans. Beyond this, however, as agents, humans characteristically have the capacity to pursue various projects and plans whether as individuals, in partnerships, in groups, or in whole communities. Sometimes, the various projects and plans that they pursue will be harmonious; but often – as when the acceptability of the automation of healthcare and research is at issue – human agents will find themselves in conflict with one another. Accordingly, regulators also have a responsibility to maintain the conditions – conditions that are entirely neutral between the particular plans and projects that agents individually favour – that constitute the context for agency itself.

Building on this analysis, the claim is that the paramount responsibility for regulators is to protect, preserve, and promote:

  • the essential conditions for human existence (given human biological needs);

  • the generic conditions for human agency and self-development; and,

  • the essential conditions for the development and practice of moral agency.

These, it bears repeating, are imperatives in all regulatory spaces, whether international or national, public or private. Of course, determining the nature of these conditions will not be a mechanical process. Nevertheless, let me indicate how the distinctive contribution of each segment of the commons might be elaborated.

In the first instance, regulators should take steps to maintain the natural ecosystem for human life.Footnote 9 At minimum, this entails that the physical well-being of humans must be secured: humans need oxygen, they need food and water, they need shelter, they need protection against contagious diseases, if they are sick they need whatever treatment is available, and they need to be protected against assaults by other humans or non-human beings. When the Nuffield Council on Bioethics discusses catastrophic modifications to the human genome or to the ecosystem, it is this segment of the commons that is at issue.

Second, the conditions for meaningful self-development and agency need to be constructed: there needs to be sufficient trust and confidence in one’s fellow agents, together with sufficient predictability to plan, so as to operate in a way that is interactive and purposeful rather than merely defensive. Let me suggest that the distinctive capacities of prospective agents include being able: to form a sense of what is in one’s own self-interest; to choose one’s own ends, goals, purposes and so on (‘to do one’s own thing’); and to form a sense of one’s own identity (‘to be one’s own person’).

Third, the commons must secure the conditions for an aspirant moral community, whether the particular community is guided by teleological or deontological standards, by rights or by duties, by communitarian or liberal or libertarian values, by virtue ethics, and so on. The generic context for moral community is impartial between competing moral visions, values, and ideals; but it must be conducive to ‘moral’ development and ‘moral’ agency in the sense of forming a view about what is the ‘right thing’ to do relative to the interests of both oneself and others.

On this analysis, each human agent is a stakeholder in the commons where this represents the essential conditions for human existence together with the generic conditions of both self-regarding and other-regarding agency. While respect for the commons’ conditions is binding on all human agents, it should be emphasised that these conditions do not rule out the possibility of prudential or moral pluralism. Rather, the commons represents the pre-conditions for both individual self-development and community debate, giving each agent the opportunity to develop his or her own view of what is prudent, as well as what should be morally prohibited, permitted or required.

27.3.3 Second-Tier Responsibilities

Beyond the stewardship responsibilities, regulators are also responsible for ensuring that the fundamental values of their particular community are respected. Just as each individual human agent has the capacity to develop their own distinctive identity, so too do communities of human agents. There are common needs and interests, but also distinctive identities.

In the particular case of the United Kingdom, although there is no general commitment to the value of social solidarity, arguably it is this value that underpins the NHS. Accordingly, if it were proposed that access to NHS patient data – data, as Philip Aldrick has put it, that is 'a treasure trove … for developers of next-generation medical devices'Footnote 10 – should be part of a transatlantic trade deal, there would surely be an uproar, because this would be seen as betraying the kind of healthcare community that we think we are.

More generally, many nation states have expressed their fundamental (constitutional) values in terms of respect for human rights and human dignity.Footnote 11 These values clearly intersect with the commons' conditions, and there is much to debate about the nature of this relationship and the extent of any overlap – for example, if we understand the root idea of human dignity in terms of humans having the capacity freely to do the right thing for the right reason,Footnote 12 then human dignity reaches directly to the commons' conditions for moral agency.Footnote 13 However, those nation states that articulate their particular identities by reference to their commitment to respect for human dignity are far from homogeneous. Whereas in some communities the emphasis of human dignity is on individual empowerment and autonomy, in others it is on constraints relating to the sanctity, non-commercialisation, non-commodification and non-instrumentalisation of human life.Footnote 14 These differences in emphasis mean that communities take very different positions on a range of beginning-of-life and end-of-life questions, as well as on questions of acceptable health-related research, and so on.

Given the conspicuous interest of today’s regulators in exploring technological solutions, an increasingly important question will be whether, and if so, how far, a community sees itself as distinguished by its commitment to regulation by rule and by human agents. In some smaller-scale communities or self-regulating groups, there might be resistance to a technocratic approach because automated compliance compromises the context for trust and for responsibility. Or, again, a community might prefer to stick with regulation by rules and by human agents because it is worried that with a more technocratic approach, there might be both reduced public participation in the regulatory enterprise and a loss of flexibility in the application of technological measures.

If a community decides that it is generally happy with an approach that relies on technological measures rather than rules, it then has to decide whether it is also happy for humans to be out of the loop. Furthermore, once a community is asking itself such questions, it will need to clarify its understanding of the relationship between humans and robots – in particular, whether it treats robots as having moral status, or legal personality, and the like.

These are questions that each community must answer in its own way. The answers given speak to the kind of community that a group aspires to be. That said, it is, of course, essential that the fundamental values to which a particular community commits itself are consistent with (or cohere with) the commons’ conditions.

27.4 Automated Healthcare and the Bigger Picture of Regulatory Responsibility

One of the features of the NHS Long Term PlanFootnote 15 – in which the NHS is described as ‘a hotbed of innovation and technological revolution in clinical practice’Footnote 16 – is the anticipated role to be played by technology in ‘helping clinicians use the full range of their skills, reducing bureaucracy, stimulating research and enabling service transformation’.Footnote 17 Moreover, speaking about the newly created unit, NHSX (a new joint organisation for digital, data and technology), the Health Secretary, Matt Hancock, said that this was ‘just the beginning of the tech revolution, building on our Long Term Plan to create a predictive, preventative and unrivalled NHS’.Footnote 18

In this context, what should we make of the regulatory challenge presented by smart machines and devices that incorporate the latest AI and machine learning algorithms for healthcare and research purposes? Typically, these technologies need data on which to train and improve their performance. While the consensus is that the collection and use of personal data needs governance, and that big datasets (interrogated by state-of-the-art algorithmic tools) need it a fortiori, there is no agreement as to what might be the appropriate terms and conditions for the collection, processing and use of personal data, or how to govern these matters.Footnote 19

In its recent final report, Ethics Guidelines for Trustworthy AI,Footnote 20 the European Commission's (EC) independent High-Level Expert Group on Artificial Intelligence takes it as axiomatic that the development and use of AI should be 'human-centric'. To this end, the group highlights four key principles for the governance of AI, namely: respect for human autonomy, prevention of harm, fairness and explicability. Where tensions arise between these principles, they should be dealt with by 'methods of accountable deliberation' involving 'reasoned, evidence-based reflection rather than intuition or random discretion'.Footnote 21 Nevertheless, it is emphasised that there might be cases where 'no ethically acceptable trade-offs can be identified. Certain fundamental rights and correlated principles are absolute and cannot be subject to a balancing exercise (e.g. human dignity)'.Footnote 22

In line with this analysis, my position is that while there might be many cases where simple balancing is appropriate, there are some considerations that should never be put into a simple balance. The group mentions human rights and human dignity. I agree. Where a community treats human rights and human dignity as its constitutive principles or values, they act – in Ronald Dworkin’s evocative terms – as ‘trumps’.Footnote 23 Beyond that, the interest of humanity in the commons should be treated as even more foundational (so to speak, as a super-trump).

It follows that the first question for regulators is whether new AI technologies for healthcare and research present any threat to the existence conditions for humans, to the generic conditions for self-development, or to the context for moral development. It is only once this question has been answered that we get to the question of compatibility with the community's particular constitutive values and, after that, to a balancing judgment. If governance is to be 'human-centric', it is not enough that no individual human is exposed to an unacceptable risk or actually harmed. To be fully human-centric, technologies must be designed to respect both the commons and the constitutive values of particular human communities.

Guided by these regulatory imperatives, we can offer some short reflections on the three elements of the commons and how they might be compromised by the automation of research and healthcare.

27.4.1 The Existence Conditions

Famously, Stephen Hawking remarked that ‘the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity’.Footnote 24 As the best thing, AI would contribute to ‘[the eradication of] disease and poverty’Footnote 25 as well as ‘[helping to] reverse paralysis in people with spinal-cord injuries’.Footnote 26 However, on the downside, some might fear that in our quest for greater safety and well-being, we will develop and embed ever more intelligent devices to the point that there is a risk of the extinction of humans – or, if not that, then a risk of humanity surviving ‘in some highly suboptimal state or in which a large portion of our potential for desirable development is irreversibly squandered’.Footnote 27 If this concern is well-founded, then communities will need to be extremely careful about how far and how fast they go with intelligent devices.

Of course, this is not specifically a concern about the use of smart machines in the hospital or in the research facility: the concern about the existential threat posed to humans by smart machines arises across the board; and, indeed, concerns about existential threats are provoked by a range of emerging technologies.Footnote 28 In such circumstances, a regulatory policy of precaution and zero risk is indicated; and while stewardship might mean that the development and application of some technologies that we value has to be restricted, this is better than finding that they have compromised the very conditions on which the enjoyment of such technologies is predicated.

27.4.2 The Conditions for Self-Development and Agency

The developers of smart devices are hungry for data: data from patients, data from research participants, data from the general public. This raises concerns about privacy and data protection. While it is widely accepted that our privacy interests – in a broad sense – are ‘contextual’,Footnote 29 it is important to understand not just that ‘there are contexts and contexts’ but that there is a Context in which we all have a common interest. What most urgently needs to be clarified is whether any interests that we have in privacy and data protection touch and concern the essential conditions (the Context).

If, on analysis, we judge that privacy reaches through to the interests that agents necessarily have in the commons' conditions – particularly in the conditions for self-development and agency – it is neither rational nor reasonable for agents, individually or collectively, to authorise acts that compromise these conditions (unless they do so in order to protect some more important condition of the commons). As Bert-Jaap Koops has so clearly expressed it, privacy has an 'infrastructural character': 'having privacy spaces is an important presupposition for autonomy [and] self-development'.Footnote 30 Without such spaces, there is no opportunity to be oneself.Footnote 31 On this reading, privacy is not so much a matter of protecting goods – informational or spatial – in which one has a personal interest, but of protecting infrastructural goods in which there is either a common interest (engaging first-tier responsibilities) or a distinctive community interest (engaging second-tier responsibilities).

By contrast, if privacy – and, likewise, data protection – is simply a legitimate informational interest that has to be weighed in an all things considered balance of interests, then we should recognise that what each community will recognise as a privacy interest and as an acceptable balance of interests might well change over time. To this extent, our reasonable expectations of privacy might be both ‘contextual’ and contingent on social practices.

27.4.3 The Conditions for Moral Development and Moral Agency

As I have indicated, I take it that the fundamental aspiration of any moral community is that regulators and regulatees alike should try to do the right thing. However, this presupposes a process of moral reflection and then action that accords with one’s moral judgment. In this way, agents exercise judgment in trying to do the right thing and they do what they do for the right reason in the sense that they act in accordance with their moral judgment. Accordingly, if automated research and healthcare relieves researchers and clinicians from their moral responsibilities, even though well intended, this might result in a significant compromising of their dignity, qua the conditions for moral agency.Footnote 32

Equally, if robots or other smart machines are used for healthcare and research purposes, some patients and participants might feel that this compromises their ‘dignity’ – robots might not physically harm humans, but even caring machines, so to speak, ‘do not really care’.Footnote 33 The question then is whether regulators should treat the interests of such persons as a matter of individual interest to be balanced against the legitimate interests of others, or as concerns about dignity that speak to matters of either (first-tier) common or (second-tier) community interest.

In this regard, consider the case of Ernest Quintana whose family were shocked to find that, at a particular Californian hospital, a ‘robot’ displaying a doctor on a screen was used to tell Ernest that the medical team could do no more for him and that he would soon die.Footnote 34 What should we make of this? Should we read the family’s shock as simply expressing a preference for the human touch or as going deeper to the community’s constitutive values or even to the commons’ conditions? Depending on how this question is answered, regulators will know whether a simple balance of interests is appropriate.

27.5 Conclusion

In this chapter, I have argued that it is not always appropriate to respond to new technologies for healthcare and research simply by enjoining regulators to seek out an acceptable balance of interests. My point is not that we should eschew either the balancing approach or the idea of ‘acceptability’ but that regulators should respond in a way that is sensitised to the full range of their responsibilities.

To the simple balancing approach, with its broad margin for ‘acceptable’ accommodation, we must add the regulatory responsibility to be responsive to the red lines and basic values that are distinctive of the particular community. Any claimed interest or proposed accommodation of interests that crosses these red lines or that is incompatible with the community’s basic values is ‘unacceptable’ – but this is for a different reason to that which applies where a simple balancing calculation is undertaken.

Most fundamentally, however, regulators have a stewardship responsibility in relation to the anterior conditions for humans to exist and for them to function as a community of agents. We should certainly say that any claimed interest or proposed accommodation of interests that is incompatible with the maintenance of these conditions is totally 'unacceptable' – but it is more than that. Unlike the red lines or basic values to which a particular community commits itself – red lines and basic values that may legitimately vary from one community to another – the commons' conditions are not contingent or negotiable. For human agents to compromise the conditions upon which human existence and agency are themselves predicated is simply unthinkable.

Finally, it should be said that my sketch of the regulatory responsibilities is incomplete – in particular, concepts such as the ‘public interest’ and the ‘public good’ need to be located within this bigger picture; and, there is more to be said about the handling of horizontal conflicts and tensions within a particular tier. Nevertheless, the ‘take home message’ is clear. Quite simply: while automated healthcare and research might be efficient and productive, new technologies should not present unacceptable risks to the legitimate interests of humans; beyond mere balancing, new technologies should be compatible with the fundamental values of particular communities; and, above all, these technologies should do no harm to the commons’ conditions – supporting human existence and agency – on which we all rely and which we undervalue at our peril.

Footnotes

23 Changing Identities in Disclosure of Research Findings

1 This chapter will not discuss responsibilities actively to pursue findings, or disclosures to family members in genetic research, nor is it concerned with feedback of aggregate findings. For discussion of researchers’ experiences of encountering and disclosing incidental findings in neuroscience research see Pickersgill, Chapter 31 in this volume.

2 S. M. Wolf et al., 'Managing Incidental Findings in Human Subjects Research: Analysis and Recommendations', (2008) The Journal of Law, Medicine & Ethics, 36(2), 219–248.

3 L. Eckstein et al., 'A Framework for Analyzing the Ethics of Disclosing Genetic Research Findings', (2014) The Journal of Law, Medicine & Ethics, 42(2), 190–207.

4 B. E. Berkman et al., 'The Unintended Implications of Blurring the Line between Research and Clinical Care in a Genomic Age', (2014) Personalized Medicine, 11(3), 285–295.

5 E. Parens et al., 'Incidental Findings in the Era of Whole Genome Sequencing?', (2013) Hastings Center Report, 43(4), 16–19.

6 For example, in addition to sources cited elsewhere in this chapter, see R. R. Fabsitz et al., 'Ethical and Practical Guidelines for Reporting Genetic Research Results to Study Participants', (2010) Circulation: Cardiovascular Genetics, 3(6), 574–580; G. P. Jarvik et al., 'Return of Genomic Results to Research Participants: The Floor, the Ceiling, and the Choices in Between', (2014) The American Journal of Human Genetics, 94(6), 818–826.

7 C. Weiner, 'Anticipate and Communicate: Ethical Management of Incidental and Secondary Findings in the Clinical, Research, and Direct-to-Consumer Contexts', (2014) American Journal of Epidemiology, 180(6), 562–564.

8 Medical Research Council and Wellcome Trust, ‘Framework on the Feedback of Health-Related Findings in Research’, (Medical Research Council and Wellcome Trust, 2014).

9 Berkman et al., ‘The Unintended Implications’.

10 Eckstein et al., ‘A Framework for Analyzing’.

11 Wolf et al., ‘Managing Incidental Findings’.

13 Eckstein et al., ‘A Framework for Analyzing’.

14 Medical Research Council and Wellcome Trust, ‘Framework on the Feedback’.

16 Berkman et al., ‘The Unintended Implications’.

17 S. M. Wolf et al., 'Mapping the Ethics of Translational Genomics: Situating Return of Results and Navigating the Research-Clinical Divide', (2015) Journal of Law, Medicine & Ethics, 43(3), 486–501.

18 G. Laurie and N. Sethi, 'Towards Principles-Based Approaches to Governance of Health-Related Research Using Personal Data', (2013) European Journal of Risk Regulation, 4(1), 43–57. Genomics England, 'The 100,000 Genomes Project', (Genomics England), www.genomicsengland.co.uk/about-genomics-england/the-100000-genomes-project/.

19 Eckstein et al., ‘A Framework for Analyzing’.

20 A. L. Bredenoord et al., 'Disclosure of Individual Genetic Data to Research Participants: The Debate Reconsidered', (2011) Trends in Genetics, 27(2), 41–47.

21 Wolf et al., ‘Mapping the Ethics’.

22 In the UK, the expected standard of the duty of care is assessed by reference to what reasonable members of the profession would do, as well as to what recipients want to know (see C. Johnston and J. Kaye, 'Does the UK Biobank Have a Legal Obligation to Feedback Individual Findings to Participants?', (2004) Medical Law Review, 12(3), 239–267).

23 D. I. Shalowitz et al., 'Disclosing Individual Results of Clinical Research: Implications of Respect for Participants', (2005) JAMA, 294(6), 737–740.

24 Laurie and Sethi, ‘Towards Principles–Based Approaches’.

25 G. Laurie and E. Postan, 'Rhetoric or Reality: What Is the Legal Status of the Consent Form in Health-Related Research?', (2013) Medical Law Review, 21(3), 371–414.

26 Odièvre v. France (App. no. 42326/98) [2003] 38 EHRR 871; ABC v. St George’s Healthcare NHS Trust & Others [2017] EWCA Civ 336.

27 J. Marshall, Personal Freedom through Human Rights Law?: Autonomy, Identity and Integrity Under the European Convention on Human Rights (Leiden: Brill, 2008).

28 A. M. Farrell and M. Brazier, 'Not So New Directions in the Law of Consent? Examining Montgomery v Lanarkshire Health Board', (2016) Journal of Medical Ethics, 42(2), 85–88.

29 G. Laurie, 'Liminality and the Limits of Law in Health Research Regulation: What Are We Missing in the Spaces In-Between?', (2016) Medical Law Review, 25(1), 47–72.

30 J. Kaye et al., ‘From Patients to Partners: Participant-Centric Initiatives in Biomedical Research’, (2012) Nature Reviews Genetics, 13(5), 371.

31 J. Harris, 'Scientific Research Is a Moral Duty', (2005) Journal of Medical Ethics, 31(4), 242–248.

32 S. C. Davies, ‘Chief Medical Officer Annual Report 2016: Generation Genome’, (Department of Health and Social Care, 2017), p. 4.

33 Wolf et al., ‘Mapping the Ethics’.

34 F. G. Miller et al., 'Incidental Findings in Human Subjects Research: What Do Investigators Owe Research Participants?', (2008) The Journal of Law, Medicine & Ethics, 36(2), 271–279.

36 In Chapter 39 of this volume, Shawn Harmon presents a parallel argument that medical device regulations are similarly premised on a narrow conception of harm that fails to account for identity impacts.

37 Eckstein et al., ‘A Framework for Analyzing’.

38 E. Postan, 'Defining Ourselves: Personal Bioinformation as a Tool of Narrative Self-Conception', (2016) Journal of Bioethical Inquiry, 13(1), 133–151.

39 M. Schechtman, The Constitution of Selves (New York: Cornell University Press, 1996).

40 Postan, ‘Defining Ourselves’.

41 C. Mackenzie, 'Introduction: Practical Identity and Narrative Agency' in K. Atkins and C. Mackenzie (eds), Practical Identity and Narrative Agency (Abingdon: Routledge, 2013), pp. 1–28.

42 L. d'Agincourt-Canning, 'Genetic Testing for Hereditary Breast and Ovarian Cancer: Responsibility and Choice', (2006) Qualitative Health Research, 16(1), 97–118.

43 P. Carter et al., 'The Social Licence for Research: Why care.data Ran into Trouble', (2015) Journal of Medical Ethics, 41(5), 404–409.

44 E. M. Bunnik et al., 'Personal Utility in Genomic Testing: Is There Such a Thing?', (2014) Journal of Medical Ethics, 41(4), 322–326.

45 S. M. Wolf et al., 'Managing Incidental Findings and Research Results in Genomic Research Involving Biobanks and Archived Data Sets', (2012) Genetics in Medicine, 14(4), 361–384.

24 Health Research and Privacy through the Lens of Public Interest: A Monocle for the Myopic?

1 The idea that both privacy and health research may be described as 'public interest causes' is also compellingly developed in W. W. Lowrance, Privacy, Confidentiality, and Health Research (Cambridge University Press, 2012), and the relationship between privacy and the public interest in C. D. Raab, 'Privacy, Social Values and the Public Interest' in A. Busch and J. Hofmann (eds), Politik und die Regulierung von Information [Politics and the Regulation of Information] (Baden-Baden, Germany: Politische Vierteljahresschrift, Sonderheft 46, 2012), pp. 129–151.

2 V. P. Held, The Public Interest and Individual Interests (New York: Basic Books, 1970).

3 F. J. Sorauf, 'The Public Interest Reconsidered', (1957) The Journal of Politics, 19(4), 616–639.

4 M. J. Taylor, 'Health Research, Data Protection, and the Public Interest in Notification', (2011) Medical Law Review, 19(2), 267–303; M. J. Taylor and T. Whitton, 'Public Interest, Health Research and Data Protection Law: Establishing a Legitimate Trade-Off between Individual Control and Research Access to Health Data', (2020) Laws, 9(1), 6.

5 M. Meyerson and E. C. Banfield, cited by Sorauf ‘The Public Interest Reconsidered’, 619.

6 J. Bell, ‘Public Interest: Policy or Principle?’ in R. Brownsword (ed.), Law and the Public Interest: Proceedings of the 1992 ALSP Conference (Stuttgart: Franz Steiner, 1993) cited in M. Feintuck, Public Interest in Regulation (Oxford University Press, 2004), p. 186.

7 There is a connection here with what has been described by Rawls as 'public reasons': limited to premises and modes of reasoning that are accessible to the public at large. L. B. Solum, 'Public Legal Reason', (2006) Virginia Law Review, 92(7), 1449–1501, 1468.

8 ‘The virtue of public reasoning is the cultivation of clear and explicit reasoning orientated towards the discovery of common grounds rather than in the service of sectional interests, and the impartial interpretation of all relevant available evidence.’ Nuffield Council on Bioethics, ‘Public Ethics and the Governance of Emerging Biotechnologies’, (Nuffield Council on Bioethics, 2012), 69.

9 G. Gaus, The Order of Public Reason: A Theory of Freedom and Morality in a Diverse and Bounded World (Cambridge University Press, 2011), p. 19. Note the distinction Gaus draws here between the Restricted and the Expansive view of Freedom and Equality.

10 We here associate legitimacy with 'the capacity of the system to engender and maintain the belief that the existing political institutions are the most appropriate ones for the society': S. M. Lipset, Political Man: The Social Bases of Politics (Baltimore, MD: Johns Hopkins University Press, 1981 [1959]), p. 64. This is consistent with recognition that the 'liberal principle of legitimacy states that the exercise of political power is justifiable only when it is exercised in accordance with constitutional essentials that all citizens may reasonably be expected to endorse in the light of principles and ideals acceptable to them as reasonable and rational': Solum, 'Public Legal Reason', 1472. See also D. Curtin and A. J. Meijer, 'Does Transparency Strengthen Legitimacy?', (2006) Information Polity, 11(2), 109–122, 112, and M. J. Taylor, 'Health Research, Data Protection, and the Public Interest in Notification', (2011) Medical Law Review, 19(2), 267–303.

11 The argument offered is a development of one originally presented in M. J. Taylor, Genetic Data and the Law (Cambridge University Press, 2012), see esp. pp. 29–34.

12 The term 'accept' is chosen over 'prefer' for good reason. M. J. Taylor and N. C. Taylor, 'Health Research Access to Personal Confidential Data in England and Wales: Assessing Any Gap in Public Attitude between Preferable and Acceptable Models of Consent', (2014) Life Sciences, Society and Policy, 10(1), 1–24.

13 A ‘real definition’ is to be contrasted with a nominal definition. A real definition may associate a word or term with elements that must necessarily be associated with the referent (a priori). A nominal definition may be discovered by investigating word usage (a posteriori). For more, see Stanford Encyclopedia of Philosophy, ‘Definitions’, (Stanford Encyclopedia of Philosophy, 2015), www.plato.stanford.edu/entries/definitions/.

14 G. Laurie recognises privacy to be a state of non-access. G. Laurie, Genetic Privacy: A Challenge to Medico-Legal Norms (Cambridge University Press, 2002), p. 6. We prefer the term 'exclusivity' to 'separation' as it recognises that a lack of separation in one aspect does not deny a privacy claim in another. E.g. one's normative expectations regarding use and disclosure are not necessarily weakened by sharing information with health professionals. For more see M. J. Taylor, Genetic Data and the Law: A Critical Perspective on Privacy Protection (Cambridge University Press, 2012), pp. 13–40.

15 See C. J. Bennett and C. D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective (Ashgate, 2003), p. 13.

17 S. T. Margulis, 'Conceptions of Privacy: Current Status and Next Steps', (1977) Journal of Social Issues, 33(3), 5–21, 10.

18 S. T. Margulis, 'Privacy as a Social Issue and a Behavioural Concept', (2003) Journal of Social Issues, 59(2), 243–261, 245.

19 Department of Health, ‘Summary of Responses to the Consultation on the Additional Uses of Patient Data’, (Department of Health, 2008).

20 Our argument has no application to aggregate data that does not relate to a group until or unless that association is made.

21 A number are described for example by V. Eubanks, Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor (New York: St Martin’s Press, 2018).

22 See e.g. Foster v. Mountford (1976) 14 ALR 71 (Australia).

23 An example of the kind of common purpose that privacy may serve relates to the protection of culturally significant information. A well-known example of this is the harm associated with the research conducted with the Havasupai Tribe in North America. R. Dalton, 'When Two Tribes Go to War', (2004) Nature, 430(6999), 500–502; A. Harmon, 'Indian Tribe Wins Fight to Limit Research of Its DNA', The New York Times (21 April 2010). Similar concerns had been expressed by the Nuu-chah-nulth of Vancouver Island, Canada, when genetic samples provided for one purpose (to discover the cause of rheumatoid arthritis) were used for other purposes. J. L. McGregor, 'Population Genomics and Research Ethics with Socially Identifiable Groups', (2007) Journal of Law and Medicine, 35(3), 356–370, 362. Proposals to establish a genetic database on Tongans floundered when the ethics policy focused on the notion of individual informed consent and failed to take account of the traditional role played by the extended family in decision-making. B. Burton, 'Proposed Genetic Database on Tongans Opposed', (2002) BMJ, 324(7335), 443.

24 P. M. Regan, Legislating Privacy: Technology, Social Values, and Public Policy (University of North Carolina Press, 1995), p. 221.

25 L. Taylor et al. (eds), Group Privacy: New Challenges of Data Technologies (New York: Springer, 2017), p. 5.

26 Taylor et al., ‘Group Privacy’, p. 7.

27 Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, Strasbourg, 28 January 1981, in force 1 October 1985, ETS No. 108, Protocol CETS No. 223.

28 To protect every individual, whatever his or her nationality or residence, with regard to the processing of their personal data, thereby contributing to respect for his or her human rights and fundamental freedoms, and in particular the right to privacy.

29 ‘Convention for the Protection of Individuals’, Article 2(a).

31 M. J. Taylor and T. Whitton, 'Public Interest, Health Research and Data Protection Law: Establishing a Legitimate Trade-Off between Individual Control and Research Access to Health Data', (2020) Laws, 9(1), 6, 17–19; J. Rawls, The Law of Peoples (Harvard University Press, 1999), pp. 129–180.

32 E.g. The conception of public interest proposed in this chapter would allow concerns associated with processing in a third country, or an international organisation, to be taken into consideration where associated with issues of group privacy. Article 49(5) of the General Data Protection Regulation, Regulation (EU) 2016/679, OJ L 119, 4 May 2016.

33 Although data protection law seeks to protect fundamental rights and freedoms, in particular the right to respect for a private life, without collapsing the concepts of data protection and privacy.

34 Earl Spencer v. United Kingdom [1998] 25 EHRR CD 105.

35 W v. Egdell [1990] Ch 359, [1990] 1 All ER 835.

36 R (W, X, Y and Z) v. Secretary of State for Health [2015] EWCA Civ 1034, [48].

37 Campbell v. MGN Ltd [2004] UKHL 22, [2004] ALL ER (D) 67 (May), per Lord Nicholls [11].

38 Ibid. [14].

39 Ibid. [50].

40 Niemietz v. Germany (App. no. 13710/88) [1992], [29].

41 Regan, ‘Legislating Privacy’, p. 8.

42 [1999] EWCA Civ 3011.

43 M. J. Taylor, 'R (ex p. Source Informatics) v. Department of Health [1999]' in J. Herring and J. Wall (eds), Landmark Cases in Medical Law (Oxford: Hart, 2015), pp. 175–192; D. Beyleveld, 'Conceptualising Privacy in Relation to Medical Research Values' in S. A. M. McLean (ed.), First Do No Harm (Farnham, UK: Ashgate, 2006), p. 151. It is interesting to consider how English law may have something to learn in this respect from the Australian courts, e.g. Foster v. Mountford (1976) 14 ALR 71.

44 M. Richardson, The Right to Privacy (Cambridge University Press, 2017), p. 120.

45 Ibid., p. 122.

46 Ibid., p. 119.

25 Mobilising Public Expertise in Health Research Regulation

1 NHS, ‘News: NHS England sets out the next steps of public awareness about care.data’, (NHS, 2013), www.england.nhs.uk/2013/10/care-data/.

2 F. Godlee, ‘What Can We Salvage From care.data?’, (2016) BMJ, 354(i3907).

3 F. Caldicott et al., ‘Information: To Share Or Not to Share? The Information Governance Review’, (UK Government Publishing Service, 2013).

4 A. Irwin and B. Wynne, Misunderstanding Science? The Public Reconstruction of Science and Technology (Abingdon: Routledge, 1996); S. Locke, 'The Public Understanding of Science – A Rhetorical Invention', (2002) Science, Technology, & Human Values, 27(1), 87–111.

5 K. C. O’Doherty and M. M. Burgess, ‘Developing Psychologically Compelling Understanding of the Involvement of Humans in Research’, (2019) Human Arenas 2(6), 118.

6 J. F. Caron-Flinterman et al., 'The Experiential Knowledge of Patients: A New Resource for Biomedical Research?', (2005) Social Science and Medicine, 60(11), 2575–2584; M. De Wit et al., 'Involving Patient Research Partners has a Significant Impact on Outcomes Research: A Responsive Evaluation of the International OMERACT Conferences', (2013) BMJ Open, 3(5); S. Petit-Zeman et al., 'The James Lind Alliance: Tackling Research Mismatches', (2010) Lancet, 376(9742), 667–669; J. A. Sacristan et al., 'Patient Involvement in Clinical Research: Why, When, and How', (2016) Patient Preference and Adherence, 10, 631–640.

7 C. Mitton et al., 'Health Technology Assessment as Part of a Broader Process for Priority Setting and Resource Allocation', (2019) Applied Health Economics and Health Policy, 17(5), 573–576.

8 M. Aitken et al., 'Consensus Statement on Public Involvement and Engagement with Data-Intensive Health Research', (2018) International Journal of Population Data Science, 4(1), 1–6; C. Bentley et al., 'Trade-Offs, Fairness, and Funding for Cancer Drugs: Key Findings from a Public Deliberation Event in British Columbia, Canada', (2018) BMC Health Services Research, 18(1), 339–362; S. M. Dry et al., 'Community Recommendations on Biobank Governance: Results from a Deliberative Community Engagement in California', (2017) PLoS ONE, 12(2), 1–14; R. E. McWhirter et al., 'Community Engagement for Big Epidemiology: Deliberative Democracy as a Tool', (2014) Journal of Personalized Medicine, 4(4), 459–474.

9 J. Brett et al., 'Mapping the Impact of Patient and Public Involvement on Health and Social Care Research: A Systematic Review', (2012) Health Expectations, 17(5), 637–650; R. Gooberman-Hill et al., 'Citizens' Juries in Planning Research Priorities: Process, Engagement and Outcome', (2008) Health Expectations, 11(3), 272–281; S. Oliver et al., 'Public Involvement in Setting a National Research Agenda: A Mixed Methods Evaluation', (2009) Patient, 2(3), 179–190.

10 S. Sherwin, 'Toward Setting an Adequate Ethical Framework for Evaluating Biotechnology Policy', (Canadian Biotechnology Advisory Committee, 2001). As cited in M. M. Burgess and J. Tansey, 'Democratic Deficit and the Politics of "Informed and Inclusive" Consultation' in E. Einsiedel (ed.), From Hindsight to Foresight (Vancouver: UBC Press, 2008), pp. 275–288.

11 A. Irwin et al., 'The Good, the Bad and the Perfect: Criticizing Engagement Practice', (2013) Social Studies of Science, 43(1), 118–135; S. Jasanoff, The Ethics of Invention: Technology and the Human Future (New York: W. W. Norton, 2016); B. Wynne, 'Public Engagement as a Means of Restoring Public Trust in Science: Hitting the Notes, but Missing the Music?', (2006) Community Genetics, 9(3), 211–220.

12 J. Gastil and P. Levine, The Deliberative Democracy Handbook: Strategies for Effective Civic Engagement in the Twenty-First Century (Plano, TX: Jossey-Bass Publishing, 2005).

13 I. M. Young, Inclusion and Democracy (Oxford University Press, 2000), p. 136.

14 M. Berger and B. De Cleen, 'Interpellated Citizens: Suggested Subject Positions in a Deliberation Process on Health Care Reimbursement', (2018) Comunicazioni Sociali, 1, 91–103; L. Althusser, 'Ideology and Ideological State Apparatuses: Notes Towards an Investigation' in L. Althusser (ed.), Lenin and Philosophy and Other Essays (Monthly Review Press, 1971), pp. 173–174.

15 H. L. Walmsley, ‘Mad Scientists Bend the Frame of Biobank Governance in British Columbia’, (2009) Journal of Public Deliberation, 5(1), Article 6.

16 M. E. Warren, 'Governance-Driven Democratization', (2009) Critical Policy Studies, 3(1), 3–13, 10.

17 G. Smith and C. Wales, 'Citizens' Juries and Deliberative Democracy', (2000) Political Studies, 48(1), 51–65.

18 S. Chambers, 'Deliberative Democratic Theory', (2003) Annual Review of Political Science, 6, 307–326.

19 M. M. Burgess et al., 'Assessing Deliberative Design of Public Input on Biobanks' in S. Dodds and R. A. Ankeny (eds), Big Picture Bioethics: Developing Democratic Policy in Contested Domains (Switzerland: Springer, 2016), pp. 243–276.

20 R. E. Goodin and J. S. Dryzek, 'Deliberative Impacts: The Macro-Political Uptake of Mini-Publics', (2006) Politics & Society, 34(2), 219–244.

21 H. Longstaff and M. M. Burgess, ‘Recruiting for Representation in Public Deliberation on the Ethics of Biobanks’, (2010) Public Understanding of Science, 19(2), 212–24.

22 D. Steel et al., 'Multiple Diversity Concepts and Their Ethical-Epistemic Implications', (2018) The British Journal for the Philosophy of Science, 8(3), 761–780.

23 K. Beier et al., 'Understanding Collective Agency in Bioethics', (2016) Medicine, Health Care and Philosophy, 19(3), 411–422.

24 Longstaff and Burgess, ‘Recruiting for Representation’.

25 S. M. Dry et al., ‘Community Recommendations on Biobank Governance’.

26 Burgess et al., ‘Assessing Deliberative Design’, pp. 270–271.

27 A. Kadlec and W. Friedman, ‘Beyond Debate: Impacts of Deliberative Issue Framing on Group Dialogue and Problem Solving’, (Center for Advances in Public Engagement, 2009); H. L. Walmsley, ‘Mad Scientists Bend the Frame of Biobank Governance in British Columbia’, (2009) Journal of Public Deliberation, 5(1), Article 6.

28 M. M. Burgess, ‘Deriving Policy and Governance from Deliberative Events and Mini-Publics’ in M. Howlett and D. Laycock (eds), Regulating Next Generation Agri-Food Biotechnologies: Lessons from European, North American and Asian Experiences (Abingdon: Routledge, 2012), pp. 220–236; D. Nicol et al., ‘Understanding Public Reactions to Commercialization of Biobanks and Use of Biobank Resources’, (2016) Social Science & Medicine, 162, 79–87.

29 J. Abelson et al., ‘Bringing “The Public” into Health Technology Assessment and Coverage Policy Decisions: From Principles to Practice’, (2007) Health Policy, 82(1), 37–50; T. Nabatchi et al., Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement (Oxford University Press, 2012).

30 D. Caluwaerts and M. Reuchamps, The Legitimacy of Citizen-led Deliberative Democracy: The G1000 in Belgium (Abingdon: Routledge, 2018).

31 G. Gigerenzer and P. M. Todd, ‘Ecological Rationality: The Normative Study of Heuristics’, in P. M. Todd and G. Gigerenzer (eds), Ecological Rationality: Intelligence in the World (Oxford University Press, 2012).

32 E. Christofides et al., ‘Heuristic Decision-Making About Research Participation in Children with Cystic Fibrosis’, (2016) Social Science & Medicine, 162, 32–40; O’Doherty and Burgess, ‘Developing Psychologically Compelling Understanding’; M. M. Burgess and K. C. O’Doherty, ‘Moving from Understanding of Consent Conditions to Heuristics of Trust’, (2019) American Journal of Bioethics, 19(5), 24–26.

26 Towards Adaptive Governance in Big Data Health Research: Implementing Regulatory Principles

1 Fitbit Inc., ‘National Institutes of Health Launches Fitbit Project as First Digital Health Technology Initiative in Landmark All of Us Research Program (Press Release)’, (Fitbit, 2019).

2 D. C. Collins et al., ‘Towards Precision Medicine in the Clinic: From Biomarker Discovery to Novel Therapeutics’, (2017) Trends in Pharmacological Sciences, 38(1), 25–40.

3 A. Giddens, The Third Way: The Renewal of Social Democracy (New York: John Wiley & Sons, 2013), p. 69.

4 E. Vayena and A. Blasimme, ‘Health Research with Big Data: Time for Systemic Oversight’, (2018) The Journal of Law, Medicine & Ethics, 46(1), 119–129.

5 C. Folke et al., ‘Adaptive Governance of Social-Ecological Systems’, (2005) Annual Review of Environment and Resources, 30, 441–473.

6 T. Dietz et al., ‘The Struggle to Govern the Commons’, (2003) Science, 302(5652), 1907–1912.

7 C. Ansell and A. Gash, ‘Collaborative Governance in Theory and Practice’, (2008) Journal of Public Administration Research and Theory, 18(4), 543–571.

8 J. J. Warmink et al., ‘Coping with Uncertainty in River Management: Challenges and Ways Forward’, (2017) Water Resources Management, 31(14), 4587–4600.

9 R. J. McWaters et al., ‘The Future of Financial Services: How Disruptive Innovations Are Reshaping the Way Financial Services Are Structured, Provisioned and Consumed’, (World Economic Forum, 2015).

10 R. A. W. Rhodes, ‘The New Governance: Governing without Government’, (1996) Political Studies, 44(4), 652–667.

11 J. Black, ‘The Rise, Fall and Fate of Principles Based Regulation’, (2010) LSE Legal Studies Working Paper, 17.

13 Vayena and Blasimme, ‘Health Research’.

14 B. Walker et al., ‘Resilience, Adaptability and Transformability in Social–Ecological Systems’, (2004) Ecology and Society, 9(2), 4.

15 E. Vayena and A. Blasimme, ‘Biomedical Big Data: New Models of Control over Access, Use and Governance’, (2017) Journal of Bioethical Inquiry, 14(4), 501–513.

16 See, for example, S. Arjoon, ‘Striking a Balance between Rules and Principles-Based Approaches for Effective Governance: A Risks-Based Approach’, (2006) Journal of Business Ethics, 68(1), 53–82; A. Kezar, ‘What Is More Important to Effective Governance: Relationships, Trust, and Leadership, or Structures and Formal Processes?’, (2004) New Directions for Higher Education, 127, 35–46.

17 J. Rijke et al., ‘Fit-for-Purpose Governance: A Framework to Make Adaptive Governance Operational’, (2012) Environmental Science & Policy, 22, 73–84.

18 R. A. W. Rhodes, Understanding Governance: Policy Networks, Governance, Reflexivity, and Accountability (Buckingham: Open University Press, 1997); R. A. W. Rhodes, ‘Understanding Governance: Ten Years On’, (2007) Organization Studies, 28(8), 1243–1264.

19 F. Gille et al., ‘Future-Proofing Biobanks’ Governance’, (2020) European Journal of Human Genetics, 28, 989–996.

20 E. Ostrom, ‘A Diagnostic Approach for Going beyond Panaceas’, (2007) Proceedings of the National Academy of Sciences, 104(39), 15181–15187.

21 D. A. DeCaro et al., ‘Legal and Institutional Foundations of Adaptive Environmental Governance’, (2017) Ecology and Society, 22(1), 1.

22 B. Chaffin et al., ‘A Decade of Adaptive Governance Scholarship: Synthesis and Future Directions’, (2014) Ecology and Society, 19(3), 56.

23 Rijke et al., ‘Fit-for-Purpose Governance’.

24 A. Blasimme et al., ‘Democratizing Health Research Through Data Cooperatives’, (2018) Philosophy & Technology, 31(3), 473–479.

25 A. Bandura and R. H. Walters, Social Learning Theory, vol. 1 (Englewood Cliffs, NJ: Prentice-Hall, 1977).

26 Ostrom, ‘A Diagnostic Approach’.

27 D. Swanson et al., ‘Seven Tools for Creating Adaptive Policies’, (2010) Technological Forecasting and Social Change, 77(6), 924–939, 925.

28 D. Berthiau, ‘Law, Bioethics and Practice in France: Forging a New Legislative Pact’, (2013) Medicine, Health Care and Philosophy, 16(1), 105–113.

29 G. Silberman and K. L. Kahn, ‘Burdens on Research Imposed by Institutional Review Boards: The State of the Evidence and Its Implications for Regulatory Reform’, (2011) The Milbank Quarterly, 89(4), 599–627.

30 G. T. Laurie et al., ‘Charting Regulatory Stewardship in Health Research: Making the Invisible Visible’, (2018) Cambridge Quarterly of Healthcare Ethics, 27(2), 333–347.

31 O. O’Neill, ‘Trust with Accountability?’, (2003) Journal of Health Services Research & Policy, 8(1), 3–4.

27 Regulating Automated Healthcare and Research Technologies: First Do No Harm (to the Commons)

1 See, further, R. Brownsword, Law, Technology and Society: Re-imagining the Regulatory Environment (Abingdon: Routledge, 2019), Ch. 4.

2 Nuffield Council on Bioethics, ‘Non-invasive Prenatal Testing: Ethical Issues’, (March 2017); for discussion, see R. Brownsword and J. Wale, ‘Testing Times Ahead: Non-Invasive Prenatal Testing and the Kind of Community that We Want to Be’, (2018) Modern Law Review, 81(4), 646–672.

3 Nuffield Council on Bioethics, ‘Genome Editing and Human Reproduction: Social and Ethical Issues’, (July 2018).

4 Nuffield Council on Bioethics, ‘Non-Invasive Prenatal Testing’, para 5.20.

5 Compare N. J. Wald et al., ‘Response to Walker’, (2018) Genetics in Medicine, 20(10), 1295; and in Canada, see the second phase of the Pegasus project, Pegasus, ‘About the Project’, www.pegasus-pegase.ca/pegasus/about-the-project/.

6 See, e.g., J. Harris and D. R. Lawrence, ‘New Technologies, Old Attitudes, and Legislative Rigidity’ in R. Brownsword et al. (eds), Oxford Handbook of Law, Regulation and Technology (Oxford University Press, 2017), pp. 915–928.

7 Nuffield Council on Bioethics, ‘Genome Editing: An Ethical Review’, (September 2016).

8 Nuffield Council on Bioethics, ‘Genome Editing and Human Reproduction’, paras 3.72–3.78.

9 Compare J. Rockström et al., ‘Planetary Boundaries: Exploring the Safe Operating Space for Humanity’, (2009) Ecology and Society, 14(2); K. Raworth, Doughnut Economics (Random House Business Books, 2017), pp. 43–53.

10 P. Aldrick, ‘Make No Mistake, One Way or Another NHS Data Is on the Table in America Trade Talks’, The Times, (8 June 2019), 51.

11 See R. Brownsword, ‘Human Dignity from a Legal Perspective’ in M. Düwell et al. (eds), Cambridge Handbook of Human Dignity (Cambridge University Press, 2014), pp. 1–22.

12 For such a view, see R. Brownsword, ‘Human Dignity, Human Rights, and Simply Trying to Do the Right Thing’ in C. McCrudden (ed.), Understanding Human Dignity – Proceedings of the British Academy 192 (The British Academy and Oxford University Press, 2013), pp. 345–358.

13 See R. Brownsword, ‘From Erewhon to AlphaGo: For the Sake of Human Dignity, Should We Destroy the Machines?’, (2017) Law, Innovation and Technology, 9(1), 117–153.

14 See D. Beyleveld and R. Brownsword, Human Dignity in Bioethics and Biolaw (Oxford University Press, 2001); R. Brownsword, Rights, Regulation and the Technological Revolution (Oxford University Press, 2008).

15 NHS, ‘NHS Long Term Plan’, (January 2019), www.longtermplan.nhs.uk.

16 Ibid., 91.

18 Department of Health and Social Care, ‘NHSX: New Joint Organisation for Digital, Data and Technology’, (19 February 2019), www.gov.uk/government/news/nhsx-new-joint-organisation-for-digital-data-and-technology.

19 Generally, see Brownsword, Law, Technology and Society, Ch. 12; D. Schönberger, ‘Artificial Intelligence in Healthcare: A Critical Analysis of the Legal and Ethical Implications’, (2019) International Journal of Law and Information Technology, 27(2), 171–203.

For the much-debated collaboration between the Royal Free London NHS Foundation Trust and Google DeepMind, see J. Powles and H. Hodson, ‘Google DeepMind and Healthcare in an Age of Algorithms’, (2017) Health and Technology, 7(4), 351–367.

20 European Commission, ‘Ethics Guidelines for Trustworthy AI’, (8 April 2019).

22 Ibid., emphasis added.

23 R. Dworkin, Taking Rights Seriously, revised edition (London: Duckworth, 1978).

24 S. Hawking, Brief Answers to the Big Questions (London: John Murray, 2018), p. 188.

25 Ibid., p. 189.

26 Ibid., p. 194.

27 See N. Bostrom, Superintelligence (Oxford University Press, 2014), p. 281 (note 1); M. Ford, The Rise of the Robots (London: Oneworld, 2015), Ch. 9.

28 For an indication of the range and breadth of this concern, see e.g. ‘Resources on Existential Risk’, (2015), www.futureoflife.org/data/documents/Existential%20Risk%20Resources%20(2015-08-24).pdf.

29 See, for example, D. J. Solove, Understanding Privacy (Cambridge, MA: Harvard University Press, 2008); H. Nissenbaum, Privacy in Context (Stanford, CA: Stanford University Press, 2010).

30 B. Koops, ‘Privacy Spaces’, (2018) West Virginia Law Review, 121(2), 611–665, 621.

31 Compare, too, M. Brincker, ‘Privacy in Public and the Contextual Conditions of Agency’ in T. Timan et al. (eds), Privacy in Public Space (Cheltenham: Edward Elgar, 2017), pp. 64–90; M. Hu, ‘Orwell’s 1984 and a Fourth Amendment Cybersurveillance Nonintrusion Test’, (2017) Washington Law Review, 92(4), 1819–1904, 1903–1904.

32 Compare K. Yeung and M. Dixon-Woods, ‘Design-Based Regulation and Patient Safety: A Regulatory Studies Perspective’, (2010) Social Science and Medicine, 71(3), 502–509.

33 Compare R. Brownsword, ‘Regulating Patient Safety: Is It Time for a Technological Response?’, (2014) Law, Innovation and Technology, 6(1), 1–29.

34 See M. Cook, ‘Bedside Manner 101: How to Deliver Very Bad News’, Bioedge (17 March 2019), www.bioedge.org/bioethics/bedside-manner-101-how-to-deliver-very-bad-news/12998.
