
Antitrust as a Guardrail for Socially Responsible Neurotechnology Design

Published online by Cambridge University Press: 10 May 2023

Roland Nadler*
Affiliation:
Peter A. Allard School of Law, University of British Columbia, Vancouver, British Columbia, Canada Neuroethics Canada, Division of Neurology, Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
*
Corresponding author: Roland Nadler, Peter A. Allard School of Law, University of British Columbia, 1822 East Mall, Vancouver, BC V6T 1Z1, Canada. Email: nadler@student.ubc.ca

Abstract:

The neurotechnology sector is likely to develop under pressure towards commercialized, nonmedical products and may also undergo market consolidation. This possibility raises ethical, social, and policy concerns about the future responsibility of neurotechnology innovators and companies for high-consequence design decisions. Present-day internet technology firms furnish an instructive example of the problems that arise when providers of communicative technologies become too big for accountability. As a guardrail against the emergence of similar problems, concerned neurotechnologists may wish to draw inspiration from antitrust law and direct efforts, where appropriate, against undue consolidation in the commercial neurotechnology market.

Résumé :

Mesure de sécurité : recours aux lois antitrust pour une organisation socialement responsable du secteur de la neurotechnologie.

Il est bien possible que le secteur de la neurotechnologie cède aux pressions et qu’il se tourne vers la commercialisation de produits non médicaux, voire à la consolidation du marché. Cette dernière possibilité soulève des préoccupations d’ordre éthique et social, ainsi qu’en matière de politique en ce qui concerne la responsabilité future des innovateurs et des entreprises en neurotechnologie au regard de leurs décisions lourdes de conséquence quant au type de structure envisagé. Ainsi, des entreprises spécialisées dans le domaine de la technologie d’Internet sont des exemples éloquents des problèmes que soulève la consolidation lorsque des fournisseurs de technologie en communication deviennent si importants qu’ils se soustraient à leur responsabilité. Des technologistes préoccupés par la situation pourraient s’inspirer de lois antitrust comme mesure de sécurité contre l’apparition de ce type de problème, et diriger leurs efforts de lutte, lorsqu’il y a lieu, contre une consolidation excessive du marché de la neurotechnologie commerciale.

Type
Review Article
Creative Commons Licence: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Canadian Neurological Sciences Federation

Introduction

Scientists, clinicians, and neuroethicists are united not only in spurring ethical practice around present-day neurotechnology, but also in planning thoughtfully for what its future may bring. Because it is foreseeable that neurotechnology developers and companies will encounter pressure towards commercialization and market consolidation, readiness for the future should include a corresponding push to promote accountability at a structural level. Despite unfolding within a system of private enterprise, neurotechnology development is best understood as a means of serving the public; yet no one can engage in genuine public service unless they are answerable to the people they aim to help. If the ideal of accountability is to remain realistic, there may be an eventual need to inhibit undue market consolidation in the commercial neurotechnology sector.

Concerns about unfair competition, market consolidation, and unaccountable corporate power are usually associated with the field of antitrust law, and while this discussion of technology ethics does not involve analysis of legal doctrine or economic data, it does draw on the overall spirit of antitrust. Taking the example of contemporary media-and-technology firms as a case study, I develop an analogy to neurotechnology and ultimately suggest an antitrust-inspired approach for preserving accountability. Large internet technology companies currently enjoy de facto authority to shape key domains of social and expressive activity; if neurotechnological development continues along its structurally foreseeable course, the largest commercial neurotechnology firms may find themselves in a similar position of unsought authority.

This is not to offer a doomsaying vision of commercial neurotechnology purveyors as the inevitable successors of Facebook and Twitter. The evolution of technology in society is not deterministic; it is shaped by policy, material conditions, and social norms. If the combination of high-consequence design decisions and low public accountability seen today in big technology is normatively undesirable, then it makes sense to shape the evolution of neurotechnology towards a market consisting of many small firms rather than one or two dominant ones. This would reduce the likelihood of the same undesirable combination emerging in the neurotechnology space.

Opportunities for Action

  • The downside of contemporary technology titans is the resultant inhibition of accountability, but these titanic firms also present an opportunity for action and advocacy.

  • Advocates for responsible neurotechnology innovation must be vigilant against the lack of accountability that comes with big tech.

  • There is a need to integrate stakeholder values and the public good into neurotechnology development, and the larger technology developers grow, the more challenging this integration becomes.

  • By keeping firms small enough to remain answerable to the public, democratic values may be protected.

  • A decentralized ecosystem of neurotechnology firms is a sustainable course for the future.

Case Study

Context

As reflected in both the primary literature and neuroethical commentaries on neurotechnological innovation, early neuromodulatory interventions have principally focused on the remediation of well-defined disease states and severe, treatment-resistant mental health conditions with established diagnostic criteria.[1,2] Scholarly writing on the future of neuromodulation, however, reveals ambitions to expand its uses into more enhancement-oriented (i.e., non-pathological) and less physician-mediated domains.[3] Another notable tendency in the academic and entrepreneurial outlook on future neuromodulation is to forecast its intertwining with brain–computer interface (BCI) technologies.[4] While BCI and neuromodulation are conceptually distinguishable categories, their complementarity is evident from both the level of excited interest and the attendant scrutiny generated by promises to integrate them.[5] Throughout this discussion of commercial neurotechnology, neuromodulation-capable BCI will be the principal focus.

Visions of a high-neurotechnology future are to some extent imaginary. They should not be treated as reliable factual predictions or inevitabilities. Imagination, however, is not inert: shared ways of thinking about the future provide a nucleus around which research agendas and entrepreneurial funding can cohere. Shared imagination can also influence popular expectations, leading to the motivating promise of profit for delivering on those expectations. Additionally, innovators and users alike will naturally seek to push neuromodulation towards improved safety, reduced invasiveness, and greater accessibility.

It is not guaranteed that technology will develop as imagined. However, given the prevailing market-oriented channels along which societies direct innovation efforts, some broad contours of its developmental trajectory are reasonably foreseeable. To advise planning around a by-default likelihood of increasingly commercialized advanced neurotechnology is not the same as doing neuroethics by speculative fiction. Speculation about the development of science and technology has well-documented pitfalls as a method.[6] At the same time, if one encounters an acorn in the right conditions, it is not improperly speculative to view the eventual growth of an oak in that spot as a default outcome – not a certainty, but the presumptive result, absent other intervening factors.

Method

The above context helps situate a brief comparison to present-day internet technology companies. The comparison takes the form of an informal case study, one meant to better ground the overall argument. This case study furnishes only a loose analogy; it is intended to identify broad structural similarities between the contemporary internet and the foreseeable landscape for consumer neurotechnology. Specifically, the aim is to highlight how dominant technology firms influence society in consequential ways through product design decisions.

Media scholars have observed that discussions of so-called “big tech censorship” often envision an unregulated social internet, devoid of content moderation to curb bigotry, abuse, or disinformation.[7] This case study and neuroethical analysis may appear sympathetic to such a vision, due to a shared emphasis on the value of decentralized power. The similarity is purely superficial. For clarity, I affirmatively oppose the wish for internet deregulation and reject the ideological project it stems from, which embraces discredited laissez-faire approaches to the marketplace of ideas.[8] Rather, the values motivating this work are egalitarian and democratic ones.

Analysis

Proprietors of key internet platforms – such as Alphabet (encompassing Google and YouTube), Meta (Facebook and WhatsApp), and Twitter – have come to wield considerable influence as “custodians of the internet.”[9] These entities actively set the terms, dynamics, and limits of much public discourse and at present are only weakly constrained by popular sentiment or legal authority in how they do so. Some of this term-setting is familiarly overt, for example, via creating and enforcing content moderation policies. Less obvious but equally notable is the ambient influence these companies exert on the public consciousness by overseeing the organization, dissemination, and accessibility of information. Sometimes their power functions through case-by-case human judgment or via deliberately crafted terms of use, but there is increasing reliance on automation as well. Automation can embed value-laden presumptions or harmful social attitudes, with examples ranging from image cropping algorithms recreating the male gaze to autocomplete suggestions perpetuating racial stereotypes.[10,11]

The sheer size and dominant market position of the web’s foremost social hubs and media platforms exacerbate the stakes of proprietor design decisions. These services have become central to many spheres of civic life, and they tend not to be interoperable with competitor platforms. As a result, users of the internet have little meaningful choice among providers of its key communicative and information-organizing services; as legal scholars concerned about privacy have noted, pronounced consolidation in this space magnifies the impact of individual proprietor decisions on societally valued rights and norms.[12,13] This leads to a predicament where privately offered services take on key functions of a public commons, yet in many respects remain governed by private fiat.

Importantly, the enormous size of these platforms also leaves their proprietors with no effective way to opt out from their position of subtle authority. Choosing not to establish rules or automated systems governing content, after all, is still a choice; indeed, social network proprietors have come under especially intense scrutiny for their failures to take sufficient action against dangerous content.[14,15] Some scholars have applied the lens of behavioral psychology to make the case that levels of prosocial or antisocial behavior observed on the social internet are attributable in significant part to proprietor design decisions, rather than being solely a reflection of the user base.[16] For all of these reasons, it is fair to characterize these companies as making unusually high-consequence decisions from a position of low accountability.

This case study on internet technology has focused on widely adopted tools facilitating social and expressive activity, with special attention to how power can be exercised via design choices. In providing the context for the case study, I argued that there is a foreseeable path for neurotechnologies, especially neuromodulatory BCIs, to likewise become widely adopted tools facilitating social and expressive activity. The case study also noted the power-magnifying effect of market consolidation among internet technology firms. While clairvoyance is not possible, other medical device markets have consolidated, and commentators have characterized further consolidation in the neuromodulation industry as likely, with Medtronic and Boston Scientific as early dominant players.[17,18] Having drawn these analogies between the social importance of present-day internet technology firms and neurotechnology firms of the foreseeable future, I turn to the remaining question: Why view any of this as cause for concern?

Challenges and Opportunities

Challenging Scenarios

Consider the following hypothetical examples and associated questions. These are posed not as exercises in ethics by speculative fiction, but rather to bring an underlying principle into sharper relief.

Scenario 1

A clinician who helped to develop a flexibly configurable therapeutic neurostimulation device has worked with a young adult patient to successfully treat severe anxiety disorder using the device. After the conclusion of the therapeutic relationship, the now-former patient contacts the developer’s chief product designer and says, “lately I’ve been questioning my sexual orientation, and that was stressing me out, so I’ve been using my device to try and quiet that part of my mind down. It doesn’t work too well, though, because the device keeps automatically redirecting the stimulation pattern. Do you think the algorithm could be changed to let me do what I want?” Setting aside the designer’s interpersonal ethical responsibilities in this scenario, is there a further obligation to push out a device firmware update that altogether eliminates the potential for this kind of use? Or should the update give consumers greater autonomy over this kind of usage? Does it matter whether the relevant jurisdiction has banned conversion therapy? Whether the ex-patient is a minor?

Scenario 2

A neurotechnology firm is developing a BCI device with an integrated natural language processing algorithm for a variety of potential communication-assistive applications. Should the algorithm be set up to define and classify a category of utterances as hate speech? How should utterances in the category be handled by the device in its operations? What procedure should govern initial creation and quality assurance for the category?

Scenario 3

A group of activists is planning to protest a controversial new law by peacefully occupying a government building. Hoping to gain familiarity with the building’s layout and thereby stage a more organized and effective protest, the organizers navigate a simulation of the building’s interior using wearable neurostimulation devices designed for “augmented reality” applications. Does the device manufacturer have any responsibility to implement surveillance functionality in its wearables that would detect potentially unlawful activity? If so, is there any obligation for the manufacturer to act on the information gathered? Should the answer change if the activists are protesting a law that they believe infringes on fundamental human rights? Does it make a difference if the organizers’ plan to enter the building is clearly unlawful?

These hypothetical situations have a common thread: in each one, a privately maintained neurotechnology poses a challenge with stakes not just for private individuals, but for the public. In each one, it seems that the company in charge of the relevant neurotechnology must make a design decision of considerable social or political significance. These scenarios are challenging by design. They pose questions that will invite disagreement about what the right answer is.

The point of raising these questions is not to answer them directly but to highlight that they are matters about which reasonable members of a society will disagree. By appreciating that these questions raise issues of concern to the democratic public, it becomes easier to appreciate the operative principle: that the rightful authority to answer them belongs, likewise, to the democratic public. Importantly, that authority does not rest with the designers, engineers, and executives who happen to find themselves in charge of making the relevant neurotechnological design decisions. These individuals must still implement decisions, but in a way that is respectful of the public’s ultimate ownership of contested political and social matters. This requires a corporate structure facilitating accountability to all persons affected by the decision at hand – to the immediate stakeholders, to those potentially impacted, and to the polity writ large.[19] The larger the neurotechnology firm, the more difficult it is to establish and maintain accountability to the public.

Opportunities for Action

What is undesirable about the powerful position of contemporary technology titans is not that they engage in consequential decision-making. Realistically, there is no avoiding this. The problem is that their titanic scale inhibits meaningful accountability and insulates them from demands for the same. It remains to be seen whether there is any realistic policy option that would restore accountability within the internet technology sector. Prudent early movers in the neurotechnology space, on the other hand, can take the example of behemoth technology firms as an opportunity for action and advocacy.

The example of big tech teaches that advocates for responsible neurotechnology innovation would be wise to cultivate vigilance against what has been called “the curse of bigness.”[20] There is a recognized need to integrate stakeholder values and the public good into neurotechnology development. Yet, as seen in the social media context, when technology developers grow too big for accountability it becomes significantly more challenging to achieve such integration. The best antitrust scholarship teaches that consolidation has more harmful effects than simply driving up consumer prices; indeed, by keeping firms small enough to remain answerable to the public, antitrust reaches beyond narrow consumer-welfare goals and serves to protect democratic values.[20,21] There is a corresponding neuroethical need to foster a decentralized ecosystem of neurotechnology firms. Doing so will make it less challenging to correct the course of any one firm should its decisions begin to disserve the public interest.

Conclusion

The neurotechnological futures imagined in the literature and in examples above are not inevitable. They are not even necessarily likely – unless society chooses to make them likely. Lawmakers and the public can approach these choices with the benefit of empiricism and expertise, and experts in ethical inquiry can act as helpful mediators in the public uptake of that empiricism. This analogy-driven case study has illustrated a key concern about democratic accountability as neurotechnology manufacturers move toward consolidation in a maturing market. What is required as this process unfolds is to keep the future-producing engines of neuromodulatory innovation answerable to the society that constitutes them.

Acknowledgements

The Peter A. Allard School of Law and Neuroethics Canada are located on the traditional, ancestral, unceded territory of the xʷməθkʷəy̓əm (Musqueam) people.

Funding

RN is generously supported by a Vanier Canada Graduate Scholarship.

Statement of Authorship

RN conceptualized and drafted the manuscript.

Disclosures

None.

References

1. Neren D, Johnson MD, Legon W, Bachour SP, Ling G, Divani AA. Vagus nerve stimulation and other neuromodulation methods for treatment of traumatic brain injury. Neurocrit Care. 2016;24:308–19. DOI 10.1007/s12028-015-0203-0.
2. Illes J, Lipsman N, McDonald J, et al. From vision to action: Canadian leadership in ethics and neurotechnology. Int Rev Neurobiol. 2021;159:241–73. DOI 10.1016/bs.irn.2021.06.012.
3. Tyler WJ. Multimodal neural interfaces for augmenting human cognition. In: Schmorrow D, Fidopiastis C, editors. Augmented cognition. Enhancing cognition and behavior in complex human environments. Cham: Springer; 2017, pp. 389–407. DOI 10.1007/978-3-319-58625-0_29.
4. De Ridder D, Maciaczyk J, Vanneste S. The future of neuromodulation: smart neuromodulation. Expert Rev Med Devices. 2021;18:307–17. DOI 10.1080/17434440.2021.1909470.
5. Dadia T, Greenbaum D. Neuralink: the ethical ‘rithmatic of reading and writing the brain. AJOB Neurosci. 2019;10:187–9. DOI 10.1080/21507740.2019.1665129.
6. Racine E, Rubio TM, Chandler J, Forlini C, Lucke J. The value and pitfalls of speculation about science and technology in bioethics: the case of cognitive enhancement. Med Health Care Philos. 2014;17:325–37. DOI 10.1007/s11019-013-9539-4.
7. Kor-Sins R. The alt-right digital migration: a heterogeneous engineering approach to social media platform branding. New Media Soc. 2021. DOI 10.1177/14614448211038810.
8. Brietzke PH. How and why the marketplace of ideas fails. Valparaiso Univ Law Rev. 1997;31:951–70.
9. Gillespie T. Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press; 2018.
10. Birhane A, Prabhu VU, Whaley J. Auditing saliency cropping algorithms. In: Proc. IEEE/CVF Winter Conf. Applications Comp. Vision; 2022, pp. 4051–59.
11. Baker P, Potts A. Why do white people have thin lips? Google and the perpetuation of stereotypes via auto-complete search forms. Crit Discourse Stud. 2013;10:187–204. DOI 10.1080/17405904.2012.744320.
12. Pasquale F. Privacy, antitrust, and power. George Mason Law Rev. 2013;20:1009–24.
13. Crawford S. Captive audience: the telecom industry and monopoly power in the new Gilded Age. New Haven: Yale University Press; 2013.
14. Fink C. Dangerous speech, anti-Muslim violence, and Facebook in Myanmar. J Int Aff. 2018;71:43–52.
15. Lavi M. Targeting exceptions. Fordham Intellect Prop Media Entertain Law J. 2021;32:651–71.
16. Lavi M. Evil nudges. Vanderbilt J Entertain Technol Law. 2018;21:1–94.
17. Piuzzi NS, Ng M, Song S, et al. Consolidation and maturation of the orthopaedic medical device market between 1999 and 2015. Eur J Ortho Surg Traumatol. 2019;29:759–66. DOI 10.1007/s00590-019-02372-z.
18. Cavuoto J. The birth of an industry. In: Krames ES, Peckham PH, Rezai AR, editors. Neuromodulation: comprehensive textbook of principles, technologies, and therapies, 2nd ed. Cambridge, MA: Academic Press; 2018, pp. 1665–74. DOI 10.1016/B978-0-12-805353-9.00141-8.
19. Fung A. The principle of affected interests: an interpretation and defense. In: Nagel JH, Smith RM, editors. Representation: elections and beyond. Philadelphia: University of Pennsylvania Press; 2013, pp. 237–68.
20. Wu T. The curse of bigness: antitrust in the new Gilded Age. New York: Columbia Global Reports; 2018.
21. Brietzke PH. Book review: Robert Bork, the antitrust paradox — a policy at war with itself. Valparaiso Univ Law Rev. 1979;13:403–21.