Introduction
Part I of this volume explored the novel concerns about privacy and data raised by home-based digital diagnostics. These arguments surrounding data access, rights, and regulation were framed primarily in abstract terms applicable to the very broad category of digital diagnostics. Part II carries these themes forward into three specific disease areas of profound public health, policy, and bioethical importance. The rise of new technology and telemedicine-based diagnostic pathways for these conditions – cardiovascular disease, reproductive health, and neurodegenerative disease – builds on accelerating advances in sensors, data transmission, artificial intelligence (AI), and data science. The COVID-19 pandemic amplified the opportunity and imperative to provide diagnostic and potentially therapeutic services outside of traditional clinical settings. New devices and systems may not only replace traditional care, but also expand the reach of critical screening and diagnosis to patients otherwise unable to access or navigate health systems. The three chapters in this part thus present real-world case studies of the hopes and hazards of applying digital diagnostics with a disease-specific focus at population-wide scale.
Patrik Bächtiger and colleagues introduce this part with their chapter, “Patient Self-Administered Screening for Cardiovascular Disease Using Artificial Intelligence in the Home.” The authors outline a novel attempt in the United Kingdom to address late or missed diagnoses of congestive heart failure, valvular heart disease, and atrial fibrillation – all conditions with high morbidity and mortality that can be substantially mitigated with early treatment. Using electronic health records from general practitioners, patients at high risk for these conditions are invited to use (in their own homes) an electronic stethoscope with the ability to record electrocardiograms (ECGs) as well as heart sounds, which then feed into AI algorithms for near-immediate diagnoses. While the theoretical clinical, public health, and economic benefits of this new pathway may be well-grounded, the authors consider several ethical features of the program to be in need of greater scrutiny. Equity may be either advanced or hindered by AI-enabled cardiovascular screening, which may reduce barriers to accessing traditional clinical evaluation and mitigate cognitive bias, at the cost of exposing patients to the biases of the algorithms themselves. Relatedly, decentralizing clinical screening into the home necessarily creates new roles and responsibilities for patients and families, and establishes new data structures with distinct potential risks and benefits. The authors propose programmatic metrics that might capture empirical evidence to adjudicate these ethical questions.
Equity, agency, and control of data extend into Donley and Rebouché’s contribution, “The Promise of Telehealth for Abortion,” which evaluates the growing but tremulous landscape for abortion services supported by telehealth and related advances. The authors trace the legal and regulatory arc of medical abortion services provided without direct in-person care, and the more recent conflicts raised by new state laws in the wake of the epochal Dobbs decision. In many states, the possibility of digital surveillance supporting abortion-related prosecutions raises the stakes for data rights and digital privacy just as new options expand for consumer- and clinician-driven diagnostic devices or wearables capturing physiologic signals consistent with pregnancy. In theory and in practice, it may already be the case that a smartwatch “knows” someone is pregnant before its wearer does, and that knowledge necessarily lives in a digital health ecosystem potentially accessible to law enforcement and other parties. Donley and Rebouché nimbly forecast the challenges and future conflicts in balancing access and safe provision of abortion services, while posing difficult questions about the legal risks borne by both patients and providers.
This part concludes by moving from the beginning of life toward its twilight, with Erickson and Largent’s exploration of the intersection between digital diagnostics and neurodegenerative diseases, “Monitoring (on) Your Mind: Digital Biomarkers for Alzheimer’s Disease.” Alzheimer’s disease and its related disorders retain their status as classical “clinical diagnoses” – those that cannot be made based on a physical exam, imaging, symptoms, or traditional blood tests alone, but only by an expert amalgamation of individual findings. While Alzheimer’s currently lacks the disease-modifying treatments available for many cardiovascular conditions, facilitating diagnoses through digital means may offer other benefits to patients and their families, and could potentially provide a bridgehead toward studying treatments in the future. The authors outline several novel avenues for leveraging digital diagnostics to identify cognitive impairment, many of which draw insights from everyday activities not usually considered as inputs for health measurement. Increasingly, digitized and wirelessly-connected features of daily life, including driving, appliances, phones, and smart speakers, will enable the algorithmic identification of early cognitive or functional limitations. Erickson and Largent ask how these advances complicate questions of consent and communication outside of traditional clinics, and revisit concerns about equity and either improved or exacerbated disparities in access to care.
Uniquely within this part, however, Erickson and Largent confront a more fundamental question posed by increasingly powerful digital diagnostics: How much do we really want to know about our own health? While fraught in other ways, diagnoses of heart failure or pregnancy generally cannot be ignored or dismissed, and (legal risks aside) patient care can generally be improved with earlier and more precise diagnosis. Identifying early (in particular, very early) cognitive impairment, however, offers more complex trade-offs among patients and their current or future caregivers, particularly in the absence of effective therapies. While genetics can offer similar pre-diagnosis or risk prediction, a critical distinction raised by digital diagnostics is their ubiquity: Anyone who drives, uses a smartphone, or types on a keyboard creates potential inputs to their eventual digital phenotyping, with all the attendant burdens. Digital diagnostics, used in our own homes, applied to more and more disease areas, will require a deeper reconciliation between relentless innovation and the boundaries of individuals’ desire to understand their own health.
I Introduction
The United Kingdom (UK) National Health Service (NHS) is funding technologies for home-based diagnosis that draw on artificial intelligence (AI).Footnote 1 Broadly defined, AI is the ability of computer algorithms to interpret data at human or super-human levels of performance.Footnote 2 One compelling use case involves patient-recorded cardiac waveforms that are interpreted in real time by AI to predict the presence of common, clinically actionable cardiovascular diseases. In this case, both electrocardiograms (ECGs) and phonocardiograms (heart sounds) are recorded by a handheld device applied by the patient in a self-administered smart stethoscope examination, communicating waveforms to the cloud via smartphone for subsequent AI interpretation – an approach principally known as AI-ECG. Validation studies suggest the accuracy of this technology approaches or exceeds that of many established national screening programs for other diseases.Footnote 3 More broadly, the combination of a new device (a modified handheld stethoscope), novel AI algorithms, and communication via smartphone coalesces into a distinct clinical care pathway that may become increasingly prevalent across multiple disease areas.
However, the deployment of a home-based screening program combining hardware, AI, and a cloud-based digital platform for administration – all anchored in patient self-administration – raises distinct ethical challenges for safe, effective, and trustworthy implementation. This chapter approaches these concerns in five parts. First, we briefly outline the organizational structure of the NHS and associated regulatory bodies responsible for evaluating the safety of medical technology. Second, we highlight NHS plans to prioritize digital health and the specific role of AI in advancing this goal with a focus on cardiovascular disease. Third, we review the clinical imperative for early diagnosis of heart failure in community settings, and the established clinical evidence supporting the use of a novel AI-ECG-based tool to do so. Fourth, we examine the ethical concerns with the AI-ECG diagnostic pathway according to considerations of equity, agency, and data rights across key stakeholders. Finally, we propose a multi-agency strategy anchored in a purposefully centralized view of this novel diagnostic pathway – with the goal of preserving and promoting trust, patient engagement, and public health.
II The UK National Health Service and Responsible Agencies
For the purposes of this chapter, we focus on England, where NHS England is the responsible central government entity for the delivery of health care (Scotland, Wales, and Northern Ireland run devolved versions of the NHS). The increasing societal and political pressure to modernize the NHS has led to the formation of agencies tasked with this specific mandate, each of which plays a key role in evaluating and deploying the technology at issue in this chapter. Within NHS England, NHSX was established with the aim of setting national NHS policy and developing best practices across technology, digital innovation, and data, including data sharing and transparency. Closely related, NHS Digital is the national provider of information, data, and IT systems for commissioners, analysts, and clinicians in health and social care in England. From a regulatory perspective, the Medicines and Healthcare products Regulatory Agency (MHRA) is responsible for ensuring that medicines and medical devices (including software) work and are acceptably safe for market entry within the scope of their labelled indications. Post-Brexit, the UK’s underlying risk-based classification system remains similar to that of its international counterparts, categorizing risk into three incremental classes determined by the intended use of the product. In practice, most diagnostic technology (including ECG machines, stethoscopes, and similar) would be considered relatively low-risk devices (class I/II) compared with invasive, implantable, or explicitly life-sustaining technologies (class III). One implication of this risk tiering is that, unlike a new implanted cardiac device, such as a novel pacemaker or coronary stent, the market entry of diagnostic technology (including AI-ECG) would not be predicated on having demonstrated its safety and effectiveness through, for example, a large trial with hard clinical endpoints.
Once a medical device receives regulatory authorization from the MHRA, the UK takes additional steps to determine whether, and how much, the NHS should pay for it. The National Institute for Health and Care Excellence (NICE) evaluates the clinical efficacy and cost-effectiveness of drugs, health technologies, and clinical practices for the NHS. Rather than negotiating prices, NICE makes recommendations for system-wide funding and, therefore, deployment, based principally on tools such as quality-adjusted life years. In response to the increasing number and complexity of digital health technologies, NICE partnered with NHS England to develop standards that aim to ensure that new digital health technologies are clinically effective and offer economic value. The resulting evidence standards framework for digital health technologies aims to inform stakeholders by requiring appropriate evidence, and to be dynamic and value-driven, with a focus on offering maximal value to patients.Footnote 4
Considering the role of the regulatory bodies above, as applied to a novel AI-ECG device, we observe the following: Manufacturers seeking marketing authorization for new digital health tools focused primarily on the diagnosis rather than the treatment of a specific condition (like heart failure) must meet the safety and effectiveness standards of the MHRA – but those standards do not necessarily (and likely will not) require a dedicated clinical trial demonstrating real-world clinical value. By contrast, convincing the NHS to pay for the new technology may require more comprehensive evidence sufficient to sway NICE, which is empowered to take a more holistic view of the costs and potential benefits of novel health tools. The advancement of this evidence generation for digital health tools is increasingly tasked to NHS sub-agencies. All of this aims to align with the NHS Long Term Plan, which defines the key challenges and sets an ambitious vision for the next ten years of health care in the UK.Footnote 5 AI is singled out as a key driver for digital transformation. Specifically, the Plan calls for the “use of decision support and AI to help clinicians in applying best practice, eliminate unwarranted variation across the whole pathway of care, and support patients in managing their health and condition.” Here we already note implicit ethical principles: Reducing unjustified variability in care (as a consideration of justice) and promoting patient autonomy by disseminating diagnostic capabilities that otherwise may be accessible only behind layers of clinical or administrative gatekeeping. Focusing on the specific imperative of heart failure, this chapter discusses whether these or other ethical targets are, on balance, advanced by AI-ECG. To do this, we first outline the relevant clinical and technological background below.
III Screening for Heart Failure with AI-ECG
The symptomatic burden and mortality risks of heart failure – where the heart is no longer able to effectively pump blood to meet the body’s needs under normal pressures – remain worse than those of many common, serious cancers. Among all chronic conditions, heart failure has the greatest impact on quality of life and costs the NHS over £625 million per year – 4 percent of its annual budget.Footnote 6 The NHS Long Term Plan emphasizes that “80% of heart failure is currently diagnosed in hospital, despite 40% of patients having symptoms that should have triggered an earlier assessment.” Accordingly, the Plan advocates for “using a proactive population health approach focused on … earlier detection and intervention to treat undiagnosed disorders.”Footnote 7 While the exact combination of data will vary by context, a clinical diagnosis of heart failure may include the integration of patients’ symptoms, physical exams (including traditional stethoscope auscultation of the heart and lungs), and various cardiac investigations, including blood tests and imaging. Individually, compared with a clinical diagnosis gold standard, the test characteristics of each modality vary widely, with sensitivity generally higher than specificity.
As with most chronic diseases in high-income countries, the burden of heart failure is greatest among the most deprived, and the disease tends to have an earlier age of onset in minority ethnic groups, who also experience worse outcomes.Footnote 8 Therefore, heart failure presents a particularly attractive target for disseminated technology with the potential to speed up diagnosis and direct patients toward proven therapies, particularly if this mitigates the social determinants of health driving observed disparities in care. Given the epidemiology of the problem and the imperative for practical screening, a tool supporting the community-based diagnosis of heart failure has the potential to be both clinically impactful and economically attractive. The myriad diagnostics applicable to heart failure described above, however, variously require phlebotomy, specialty imaging, and clinical interpretation to tie together signs and symptoms into a clinical syndrome. AI-supported diagnosis may overcome these limitations.
The near ubiquity of ECGs in well-phenotyped cardiology cohorts supports the training and testing of AI algorithms among tens of thousands of patients. This has resulted in both clinical and, increasingly, consumer-facing applications where AI can interrogate ECGs and accurately identify the presence, for example, of heart rhythm disturbances. Building on an established background suggesting that the ECG can serve as an accurate digital biomarker for the stages of heart failure, a recent advance in AI has unlocked the super-human capability to detect heart failure from a single-lead ECG alone.Footnote 9
The emergence of ECG-enabled stethoscopes, capable of recording single-lead ECGs during contact for routine auscultation (listening), highlighted an opportunity to apply AI-ECG to point-of-care screening. The Eko DUO (Eko Health, Oakland, CA, US) is one example of such an ECG-enabled stethoscope (see Figure 5.1). Detaching the tubing leaves a small cell phone-sized device embedded with sensors (electrodes and microphone) for recording both ECGs and phonocardiograms (heart sounds). Connectivity via Bluetooth allows the subsequent live streaming of both ECG and phonocardiographic waveforms to a user’s smartphone and the corresponding Eko app. Waveforms can be recorded and transmitted to cloud-based infrastructure, allowing them to be analyzed by cloud-based AI algorithms, such as AI-ECG.
While the current programmatic focus is on identifying community heart failure diagnoses, AI can, in theory, also be applied to ECG and phonocardiographic waveforms to identify the presence of two additional public health priorities: Atrial fibrillation, a common irregular heart rhythm, and valvular heart disease, typified by the presence of heart murmurs. Therefore, taken in combination, a fifteen-second examination with an ECG-enabled smart stethoscope may offer a three-in-one screening test for substantial drivers of cardiovascular morbidity and mortality, and systemically important health care costs.
The authors are currently embarking on the first stage of deploying such a screening pathway, anchored in primary care, given the high rates of undiagnosed heart failure and other cardiovascular diseases, including atrial fibrillation and valvular disease, in communities across England.Footnote 10 The early stages of this pathway involve using NHS general practitioner electronic health records and applying search logic to identify those at risk for heart failure (e.g., risk factors such as hypertension, diabetes, previous myocardial infarction). Patients who consent are mailed a small parcel containing an ECG-enabled stethoscope (Eko DUO) and a simple instruction leaflet on how to perform and transmit a self-recording. Patients are encouraged to download the corresponding Eko app to their own phones (those who are unable to are sent a phone with the app preinstalled as part of the package). Patients whose data, as interpreted by AI, suggests the presence of heart failure, atrial fibrillation, or valvular heart disease are invited for further investigation in line with established NICE clinical pathways.
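To make the front end of this pathway concrete, the sketch below illustrates the kind of rule-based search logic that could flag at-risk patients in coded general practitioner records. It is a minimal sketch only: The field names, code labels, and age threshold are hypothetical assumptions for illustration, not the actual NHS search specification.

```python
# Minimal sketch of rule-based EHR search logic for AI-ECG screening invitations.
# Field names, code labels, and the age threshold are illustrative assumptions.

RISK_FACTOR_CODES = {"hypertension", "type_2_diabetes", "prior_myocardial_infarction"}
EXCLUSION_CODES = {"heart_failure"}  # already diagnosed, so a screening invitation is not needed

def eligible_for_invitation(patient: dict, min_age: int = 50) -> bool:
    """Return True if a (hypothetical) patient record meets the invitation criteria."""
    codes = set(patient.get("coded_diagnoses", []))
    has_risk_factor = bool(codes & RISK_FACTOR_CODES)
    has_exclusion = bool(codes & EXCLUSION_CODES)
    return patient.get("age", 0) >= min_age and has_risk_factor and not has_exclusion

if __name__ == "__main__":
    registry = [
        {"nhs_number": "A1", "age": 67, "coded_diagnoses": ["hypertension"]},
        {"nhs_number": "A2", "age": 45, "coded_diagnoses": ["type_2_diabetes"]},
        {"nhs_number": "A3", "age": 72, "coded_diagnoses": ["hypertension", "heart_failure"]},
    ]
    print([p["nhs_number"] for p in registry if eligible_for_invitation(p)])  # -> ['A1']
```

In a real deployment, equivalent logic would run against coded clinical terminologies within general practice systems, under the consent and governance arrangements described above.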
This sets the scene for a novel population health intervention that draws on a technology-driven screening test, initiated in the patient’s home, by the patient themselves. The current, hospital-centric approach to these common and costly cardiovascular conditions combines clinical expertise with available technologies to screen for disease and, through early diagnosis, unlock substantial clinical and health economic benefits. Opportunities for more decentralized (outside of hospital), patient-activated screening with digital diagnostics will surely follow if AI-ECG proves tractable. Notably, here we have described what we believe to be among the earliest applications of “super-human” AI – accurately inferring the presence of heart failure from a single-lead ECG was previously thought impossible – with the potential for meeting a major unmet need through a clinical pathway that scales access to this potentially transformative diagnostic.
IV Ethical Considerations for Self-Administered Cardiovascular Disease Screening at Home
Having outlined the health policy and stakeholder landscape and specified how this relates to heart failure and AI-ECG, we can progress to discussing the unique ethical challenges posed by patients’ self-administration of this test in their own homes. Enthusiasm for such an approach to community, patient-driven cardiovascular screening is founded not only in clinical expediency, but also in a recognition of the way in which this pathway may support normative public health goals, particularly around equity and patient empowerment. Despite these good-faith expectations, the deployment of such a home-based screening program combining hardware, AI, and a cloud-based digital platform for administration – all hinging on patient self-administration – raises distinct ethical challenges. In this section, we explore the ethical arguments in favor of the AI-ECG program, as well as its potential pitfalls.
A Equity
One durable and compelling argument supporting AI-ECG arises from well-known disparities in cardiovascular disease and treatment. Cardiovascular disease follows a social gradient; this is particularly pronounced for heart failure diagnoses, where under-diagnosis in England is most frequent in the lowest-income areas. This tracks with language skills, a key social determinant of health associated with lower uptake of preventative health care and, in turn, worse health outcomes. In England, nearly one million people (2 percent of the total population) lack basic English language skills. AI-ECG may attenuate these disparities in several ways.
First, targeted screening based on risk factors (such as high blood pressure and diabetes) will, based on epidemiologic trends, necessarily and fruitfully support vulnerable patient groups among whom these conditions are more prevalent. These same patients will also be less able to access traditional facility-based cardiac testing. AI-ECG overcomes these barriers for the patients most in need.
Second, AI-ECG explicitly transfers a key gatekeeping diagnostic screen away from clinicians – and away from the cognitive biases of traditional bedside medicine. Cross-cultural challenges in subjective diagnosis and treatment escalation are well documented, including in heart failure across a spectrum of disease severity, ranging from outpatient symptom ascertainment to referral for advanced cardiac therapies and even transplant.Footnote 11 AI-ECG overcomes the biases embedded in traditional heart failure screening by simplifying a complex syndromic diagnosis into a positive or negative result that is programmatically entwined with subsequent specialist referral.
These supporting arguments grounded in reducing the disparities in access to cardiac care may be balanced by equally salient concerns. Even a charitable interpretation of the AI-ECG pathway assumes a relatively savvy, engaged, and motivated patient. The ability to mail the AI-ECG screening package widely to homes is just the first in a series of necessary steps: Opening and setting up the screening kit, including the phone and ECG-enabled stethoscope, successfully activating the device, and recording a high-quality tracing that is then processed centrally without data loss. While the authors’ early experience using this technology in various settings has been reassuring, it remains uncertain whether the established “digital divide” will complicate the equitable application of AI-ECG screening. Assuming equal (or even favorably targeted) access to the technology, are patients able to use it, and do they want to? The last point is critical: In the UK as well as the United States, trust in health care varies considerably and, in cardiovascular disease broadly, unfortunately tracks inversely with clinical need.
Indeed, one well-grounded reason for suspicion points to another problem for the equity-driven enthusiasm for AI-ECG: The training and validation of the AI algorithms themselves. The “black box” nature of some forms of AI, where the reasons for model prediction cannot easily be inferred, has appropriately led to concerns over insidious algorithmic bias and subsequent reservations around deploying these tools for patient care.Footnote 12 Even low-tech heart failure screening confronts this same problem, as (for example) the most widely used biomarker for heart failure diagnosis has well-known performance variability according to age, sex, ethnicity, patient weight, renal function, and clinical comorbidities.Footnote 13 Conversely, studies to date have suggested that AI-ECG for heart failure detection does not exhibit these biases. It may still be the case that biases do exist, but that they will only manifest with further large-scale deployment.
To address these concerns, we propose several programmatic features that are essential, by design, for reinforcing the potential of wide-scale screening to promote equity. First, it is imperative for program managers to prominently collect self-identified race, ethnicity, and other socioeconomic data (e.g., language, education) from all participants at each level of outreach – screened, invited, agreed, successfully tested, identified as “positive,” referred for specialist evaluation, and through downstream clinical results. Disproportionate representation at each level, and differential drop-out at each step, must be explored, but that can only begin with high-quality patient-level data to inform analyses and program refinement. This is an aspiration dependent on first resolving the outlined issues with trust. Trust in AI-ECG may be further buttressed in several ways, recognizing the resource limitations of screening programs generally. One option may be to accommodate skeptical patients by offering suitable opportunities to participate through alternative means. This could simply involve having patients attend an in-person appointment during which the AI-ECG examination is performed on them by a health care professional.
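As a minimal sketch of the stage-by-stage analysis that such patient-level data would support, the code below computes conversion rates through the screening funnel for each self-identified group; the stage names, group labels, and counts are hypothetical and purely illustrative.

```python
# Minimal sketch of funnel monitoring by self-identified group.
# Stage names, group labels, and counts are hypothetical.

FUNNEL_STAGES = ["invited", "agreed", "successfully_tested", "positive", "referred"]

def stage_conversion(counts: dict) -> dict:
    """Conversion rate from each funnel stage to the next, for each group."""
    rates = {}
    for group, stage_counts in counts.items():
        rates[group] = {}
        for prev, nxt in zip(FUNNEL_STAGES, FUNNEL_STAGES[1:]):
            denom = stage_counts.get(prev, 0)
            rates[group][f"{prev}->{nxt}"] = stage_counts.get(nxt, 0) / denom if denom else None
    return rates

if __name__ == "__main__":
    counts = {
        "group_a": {"invited": 1000, "agreed": 620, "successfully_tested": 560, "positive": 45, "referred": 44},
        "group_b": {"invited": 1000, "agreed": 410, "successfully_tested": 300, "positive": 30, "referred": 22},
    }
    for group, rates in stage_conversion(counts).items():
        print(group, rates)  # differential drop-out between groups flags where to intervene
```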
The patient end-user needs to feel trust and confidence in using the technology. This can be achieved through user-centric design that prioritizes a simple protocol to maximize uptake, while retaining the requisite level of technical detail to ensure adequate recording quality (e.g., positioning the device correctly). The accuracy of AI-ECG depends on these factors, in contrast with other point-of-care technologies where the acquisition of the “input” is less subject to variability (e.g., finger-prick blood drop tests).
The centralized administration of NHS screening programs by NHS England, paired with NHS Digital’s repository of data on screening uptake, offers granular insights to anticipate and plan for regions and groups at risk of low uptake. We propose enshrining a dedicated data monitoring plan into the AI-ECG screening protocol, with prespecified targets for uptake and defined mitigation strategies – monitored in near real-time. This is made possible through the unique connectivity (for a screening technology) of the platform driving AI-ECG, with readily available up-to-date data flows for highlighting disparities in access. However, a more proactive approach to targeting individuals within a population with certain characteristics needs to be balanced against the risk of stigmatization and, ultimately, a potential loss of trust that may further worsen the very cardiovascular outcomes the program seeks to improve.
Lastly, equity concerns around algorithmic performance are necessarily empirical questions that will also benefit from patient-level data collection. We acknowledge that moving from research in the form of prospective validation studies to deployment for patient care requires judgment in the absence of consensus, within the NHS or more globally, around the minimum scrutiny for an acceptable level (if any) of differential performance across – for starters – age, sex, and ethnicity. To avoid these potentially impactful innovations remaining in the domain of research, and to anticipate the wide-reaching implications of a deployment found to exhibit bias retrospectively, one possible solution would be to prospectively monitor, by design, for inconsistent test performance. Specifically, in the context of AI-ECG offering a binary yes or no screening test result for heart failure, it is important to measure the rate of false positive and false negative results. False positives can be measured through the AI-ECG technology platform linking directly into primary care EHR data. This allows positive AI-ECG results to be correlated with the outcomes of downstream gold-standard, definitive investigations for heart failure (e.g., echocardiography – ultrasound scanning of the heart). Such a prospective approach is less feasible for false negatives due to both the potentially longer time horizon for the disease to manifest and the uncertainty around whether AI-ECG truly missed the diagnosis. Instead, measuring the rate of false negatives may require a more expansive approach in the form of inviting a small sample of patients with negative AI screening tests for “quality control” next-step investigations. All of this risks adding complexity and, therefore, cost to a pathway seeking to simplify and save money. However, given this program’s position at the vanguard of AI deployments for health, a permissive approach balanced with checkpoints for sustained accuracy may help to blueprint best practices and build confidence for similar AI applications in additional disease areas.
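To illustrate how the false positive monitoring described above could be operationalized, the sketch below estimates the positive predictive value of AI-ECG by group from records that link screening results to downstream echocardiography outcomes; the record fields, group labels, and values are hypothetical assumptions rather than real program data.

```python
# Minimal sketch: positive predictive value (PPV) of AI-ECG by group, using
# screening results linked to downstream echocardiography outcomes.
# Field names, group labels, and records are hypothetical.

from collections import defaultdict

def ppv_by_group(linked_records: list) -> dict:
    """PPV per group among patients with a positive AI-ECG screen.
    (1 - PPV is the false positive proportion among screen positives.)"""
    positives = defaultdict(int)
    confirmed = defaultdict(int)
    for rec in linked_records:
        if rec["ai_ecg_positive"]:
            positives[rec["group"]] += 1
            if rec["echo_confirms_heart_failure"]:
                confirmed[rec["group"]] += 1
    return {g: confirmed[g] / n for g, n in positives.items() if n}

if __name__ == "__main__":
    linked = [
        {"group": "group_a", "ai_ecg_positive": True, "echo_confirms_heart_failure": True},
        {"group": "group_a", "ai_ecg_positive": True, "echo_confirms_heart_failure": False},
        {"group": "group_b", "ai_ecg_positive": True, "echo_confirms_heart_failure": True},
    ]
    print(ppv_by_group(linked))  # divergent PPV across groups would flag differential performance
```

A mirrored check on a sampled subset of negative screens, as suggested above, would provide the corresponding (if noisier) estimate for false negatives.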
B Agency
Another positive argument for AI-ECG screening aligns with trends in promoting agency, understood here as patient empowerment, particularly around the use of digital devices to measure, monitor, and manage one’s own health care – not least in cardiovascular disease. The enthusiastic commercial uptake of fitness wearables, for example, moved quickly past counting steps to incorporate heart rhythm monitoring.Footnote 14 Testing of these distributed technologies has shown mixed results, with the yield of positive cases necessarily depending on the population at issue.Footnote 15 Recalling the equity concerns above, the devices themselves may be more popular among younger and healthier patients, among whom true positive diagnoses may be uncommon. However, targeted, invitation-based screening with AI-ECG may balance these concerns by enriching the screened population for those at risk.
Realistic concerns about agency extend beyond the previous warnings about digital literacy, access to reliable internet, and language barriers to ask more fundamental questions about whether patients actually want to assume this central role in their own health care. A key parallel here is the advent of mandates for shared decision-making in cardiovascular disease, particularly in the United States where federal law now requires selected Medicare beneficiaries considering certain cardiovascular procedures to incorporate “evidence based shared decision-making tools” in their treatment choices.Footnote 16 However, patients may reasonably ask if screening with AI-ECG should necessarily shift the key role of test administration (literally) into their hands. Unlike the only other at-home national screening test in the UK – simply taking a stool sample for bowel cancer screening – self-application of AI-ECG requires the successful execution of several codependent steps. Here, even a relatively low failure rate may prove untenable for population-wide scaling, risking that this technology may remain in the physician’s office.
Putting such responsibility on patients arguably not only shifts it away from clinicians, but also dilutes learning opportunities. While subtle, shifting the cognitive work of integrating complex signs and symptoms into a syndromic diagnosis like heart failure may have unwelcome implications for clinicians’ diagnostic skills. We emphasize that this is not just whimsical nostalgia for a more paternalistic time in medicine, but a genuine worry about reductionism in algorithmic diagnosis that oversimplifies complex constellations of findings into simple yes or no diagnoses (AI-ECG, strictly speaking, only flags a risk of heart failure, which is not clinically equivalent to a diagnosis). Resolving these tensions may be possible by recognizing the educational opportunities and wider clinical applications of the hardware enabling AI-ECG.
Careful metrics, as described previously, will allow concerns about agency to be considered empirically, at least within the categories of patient data collected. If, for example, the utilization of AI-ECG varies sharply according to age, race, ethnicity, or language fluency, this would merit investigation specifically interrogating whether this variability rests in part on patient preferences for taking on this task rather than an inability to do so. At the same time, early patient experiences with AI-ECG in real-world settings may provide opportunities for patient feedback regarding whether this specific device, or the larger role being asked of them in their own care, is perceived as an appropriate assignment of responsibility or an imposition. If, for example, patients experience this shifting of cardiovascular screening out of the office as an inappropriate deferral of care away from traditional settings, this may suggest the need for either refining the pathway (still using the device, but perhaps keeping it in a clinical setting) or more extensive community engagement and education to ensure stakeholder agreement on roles, rights, and responsibilities.
C Data Rights
A central government, NHS-funded public screening program making use of patients’ own smartphones necessarily raises important questions about data rights. Beyond the expected guardrails required by the General Data Protection Regulation (GDPR) and UK-specific health data legislation, AI-ECG introduces additional concerns. One is whether patient participants should be obligated to contribute their health data toward the continuous refinement of the AI-ECG algorithms themselves or instead be given opt-in or opt-out mechanisms of enrollment. We note that while employed in this context by a public agency, the intellectual property for AI-ECG is held by the device manufacturer. Thus, while patients may carry some expectation of potential future benefit from algorithmic refinement, the more obvious rewards accrue to private entities. Another potential opportunity, not lost on the authors as overseers of the nascent AI-ECG program, is the possibility that AI-ECG data linked to patients’ EHRs might support entirely new diagnostic discovery beyond the core cardiovascular conditions at issue. Other conditions may similarly have subtle manifestations in ECG waveforms, phonocardiography, or their combination – invisible to humans but not AI – that could plausibly emerge from widespread use. Beyond just opt-in or opt-out permissions – known to be problematic for meaningful engagement with patient consentFootnote 17 – what control should patients have over the use of their health data in this context? For example, the NHS now holds a rich variety of health data for each patient – including free text, imaging, and blood test results. Patients may be happy to offer some but not all of this data for application to their own care, and may make different decisions about what can be used for AI product development.
Lastly, AI-ECG will need to consider data security carefully, including the possibility, however remote, of malicious intent or motivated intruders entering the system. Health data can be monetized by cyber criminals. Cyber threat modelling should be performed by the device manufacturer early in the design phase to identify possible threats and their mitigations.Footnote 18 Documentation about embedded data security features adds valuable information for patients who may have concerns about the protection of their personal data, and can help them to make informed decisions on using AI-ECG. Beyond privacy, threat modelling should also account for patient safety, such as the risk posed by an intruder with access that allows the manipulation of code or data. For example, it could be possible to manipulate results to deprive selected populations of appropriate referrals for care. Sabotaging results or causing a denial-of-service situation by flooding the system with incorrect data might also damage the reputation of the system in such a way that patients and clinicians become wary of using it. Overall, anticipating these security and other data rights considerations beyond the relatively superficial means of user agreements remains an unmet challenge for AI-ECG.
V Final Recommendations
This chapter has outlined a novel clinical pathway to screen for cardiovascular disease using an at-home, patient self-administered AI technology that can provide a screening capability beyond human expertise. We set this against a backdrop of: (1) A diverse ecosystem of stakeholders impacted by and responsible for AI-ECG, spanning patients, NHS clinicians, NHS agencies, and the responsible regulatory and health economic bodies; and (2) a health-policy landscape eager to progress the “use of decision support and AI” as part of a wider push to decentralize (i.e., modernize) care. To underscore the outlined considerations of equity, agency, and data rights, we propose two principal recommendations, framed against but generalizable beyond the pathway example of AI-ECG.
First, we advocate for a multi-agency approach that balances permissive regulation and deployment – to align with the speed of AI innovation – against ethical and statutory obligations to safeguard public health. Bodies such as NHS England, the MHRA, and NICE each have unique responsibilities, but with cross-cutting implications. The clinical and health economic case for urgent innovation for unmet needs, such as AI-ECG for heart failure, is obvious and compelling. When agencies work sequentially, the translation of such innovations into clinical practice is delayed, missing opportunities to avert substantial cardiovascular morbidity and mortality. Instead, the identification of a potentially transformative technology should trigger a multi-agency response in which these bodies work together and in parallel to support timely deployment within clinical pathways that positively impact patient care. This approach holds not only during initial deployment, but also as technology progresses. Here, we could consider the challenge of AI algorithms continually iterating (i.e., improving): For a given version of AI-ECG, the MHRA grants regulatory approval, NICE endorses procurement, and NHS England guides implementation. After evaluating a medical AI technology and deeming it safe and effective, should these agencies limit its authorization to market only the version of the algorithm that was submitted, or permit the marketing of an algorithm that can learn and adapt to new conditions?Footnote 19 AI-ECG could continually iterate by learning from the ECG data accumulated during deployment, and also through continuing improvements in machine learning methodology and computational power. Cardiovascular data, including waveforms, imaging, blood tests, and physiological parameters, is generally high volume and repeatedly measured. This, therefore, offers a rich seam for exploiting AI’s defining strength of continual improvement, unlike ordinary “medical devices.” Recalling the ship of Theseus, at what point is the algorithm substantially different from the original, and what prospective validation, if any, is needed if the claims remain the same? Multi-agency collaboration can reach a consensus on such questions, avoiding a situation in which unfamiliarity with the AI lifecycle disrupts the delivery of care through a reactive reset each time a new (i.e., improved) version arrives. For AI-ECG, such a reset could involve the expensive and time-consuming repetition of high-volume patient recruitment for validation research studies. Encouragingly, in a potential move toward multi-agency collaboration, in 2022, NHS England commissioned NICE to lead a consultation for a digital health evidence standards framework that aims to better align with regulators.Footnote 20
Second, both to account for the ethical considerations outlined in this chapter and to balance any faster implementation of promising AI technologies, we recommend that NHS England take centralized responsibility for deploying and thoroughly evaluating programs such as AI-ECG. This chapter has covered some of the critical variables to measure that will be unique to using an AI technology for patient self-administered screening at home. Forming a comprehensive list would, again, be amenable to a multi-agency approach, where NHS England can draw on its existing playbook for monitoring national screening programs. An evaluation framework addressing the outlined considerations around equity, agency, and data rights should be considered not only an intrinsic but a mandatory part of the design, deployment, and ongoing surveillance of AI-ECG. The inherent connectivity and instant data flow of such technology offers, unlike screening programs to date, the opportunity for real-time monitoring and, therefore, prompt intervention, not only for clinical indications, but also for any disparities in uptake, execution, algorithm performance, or cybersecurity. Ultimately, this will bolster the NHS’s position not only as a world leader in standards for patient safety, but also as an exemplar system for realizing effective AI-driven health care interventions.
Looking to the future for AI-ECG, translating the momentum for technological innovation in the NHS into patient benefit will require careful consideration of the outlined ethical pitfalls. This may, in the short term, establish best practices that build confidence for further applications. In the longer term, we see a convergence of commoditized AI algorithms for cardiovascular and wider disease, where increasingly sophisticated sensor technology may make future home-based screening a completely passive act. While moving toward such a reality could unlock major public health benefits, doing so will depend on bold early use cases, such as AI-ECG, that reveal unanticipated ethical challenges and allow them to be resolved. For now, the outlined policy recommendations can serve to underpin the stewardship of such novel diagnostic pathways in a way that preserves and promotes trust, patient engagement, and public health.
VI Conclusion
Patient self-administered screening for cardiovascular disease at home using an AI-powered technology offers substantial potential public health benefits, but also poses unique ethical challenges. We recommend a multi-agency approach to the lifecycle of implementing such AI technology, combined with a centrally overseen, mandatory prospective evaluation framework that monitors for equity, agency, and data rights. Assuming the responsibility to proactively address any observed neglect of these considerations instills trust as the foundation for the sustainable and impactful implementation of AI technologies for clinical application within patients’ own homes.
I Introduction
The COVID-19 pandemic catalyzed a transformation of abortion care. For most of the last half century, abortion was provided in clinics outside of the traditional health care setting.Footnote 1 Though a medication regimen was approved in 2000 to terminate a pregnancy without a surgical procedure, the Food and Drug Administration (FDA) required, among other things, that the drug be dispensed in person at a health care facility (the “in-person dispensing requirement”).Footnote 2 This requirement dramatically limited the medication’s promise to revolutionize abortion because it subjected medication abortion to the same physical barriers as procedural care.Footnote 3
Over the course of the COVID-19 pandemic, however, that changed. The pandemic’s early days exposed how the FDA’s in-person dispensing requirement facilitated virus transmission and hampered access to abortion without any medical benefits.Footnote 4 This realization created a fresh urgency to lift the FDA’s unnecessary restrictions. Researchers and advocates worked in concert to highlight evidence undermining the need for the in-person dispensing requirement,Footnote 5 which culminated in the FDA permanently removing the requirement in December 2021.Footnote 6
The result is an emerging new normal for abortion through ten weeks of pregnancy – telehealth – at least in the states that allow it.Footnote 7 Abortion by telehealth (what an early study dubbed “TelAbortion”) generally involves a pregnant person meeting online with a health care professional, who evaluates whether the patient is a candidate for medication abortion, and, if so, whether the patient satisfies informed consent requirements.Footnote 8 Pills are then mailed directly to the patient, who can take them and complete an abortion at home. This innovation has made earlier-stage abortions cheaper, less burdensome, and more private, reducing some of the barriers that delay abortion and compromise access.Footnote 9
In this chapter, we start with a historical account of how telehealth for abortion emerged as a national phenomenon. We then offer our predictions for the future: A future in which the digital transformation of abortion care is threatened by the demise of constitutional abortion rights. We argue, however, that the de-linking of medication abortion from in-person care has triggered a zeitgeist that will create new avenues to access safe abortion, even in states that ban it. As a result, the same states that are banning almost all abortions after the Supreme Court overturned Roe v. Wade will find it difficult to stop their residents from accessing abortion online. Abortion that is decentralized and independent of in-state physicians will undermine traditional state efforts to police abortion as well as create new challenges of access and risks of criminalization.
II The Early Abortion Care Revolution
Although research on medication abortion facilitated by telehealth began nearly a decade ago, developments in legal doctrine, agency regulation, and online availability over the last few years have ushered in remote abortion care and cemented its impact. This part reviews this recent history and describes the current model for providing telehealth for abortion services.
A The Regulation of Medication Abortion
In 2020, medication abortions comprised 54 percent of the nation’s total abortions, which is a statistic that has steadily increased over the past two decades.Footnote 10 A medication abortion in the United States typically has involved taking two types of drugs, mifepristone and misoprostol, often 24 to 48 hours apart.Footnote 11 The first medication detaches the embryo from the uterus and the second induces uterine contractions to expel the tissue.Footnote 12 Medication abortion is approved by the FDA to end pregnancies through ten weeks of gestation, although some providers will prescribe its use off-label through twelve or thirteen weeks.Footnote 13
The FDA restricts mifepristone under a system intended to ensure the safety of particularly risky drugs – a Risk Evaluation and Mitigation Strategy (REMS).Footnote 14 The FDA can also issue a REMS with Elements to Assure Safe Use (ETASU), which can circumscribe distribution and limit who can prescribe a drug and under what conditions.Footnote 15 The FDA instituted a REMS with ETASU for mifepristone, the first drug in the medication abortion regimen, which historically mandated, among other requirements, that patients collect mifepristone in-person at a health care facility, such as a clinic or physician’s office.Footnote 16 Thus, under the ETASU, certified providers could not dispense mifepristone through the mail or a pharmacy. Several states’ laws impose their own restrictions on abortion medication in addition to the FDA’s regulations, including mandating in-person pick-up, prohibiting telehealth for abortion, or banning the mailing of medication abortion; at the time of writing in 2023, most of those same states, save eight, ban almost all abortion, including medication abortion, from the earliest stages of pregnancy.Footnote 17
In July 2020, a federal district court in American College of Obstetricians & Gynecologists (ACOG) v. FDA temporarily suspended the in-person dispensing requirement and opened the door to the broader adoption of telehealth for abortion during the course of the pandemic.Footnote 18 Well before this case, in 2016, the non-profit organization, Gynuity, received an Investigational New Drug Approval to study the efficacy of providing medication abortion care by videoconference and mail.Footnote 19 In the study, “TelAbortion,” providers counselled patients online, and patients confirmed the gestational age with blood tests and ultrasounds at a location of their choosing.Footnote 20 As the pandemic took hold, patients who were not at risk for medical complications, were less than eight weeks pregnant, and had regular menstrual cycles could forgo ultrasounds and blood tests, and rely on home pregnancy tests and a self-report of the first day of their last menstrual period. The results of the study indicated that a “direct-to-patient telemedicine abortion service was safe, effective, efficient, and satisfactory.”Footnote 21 Since Gynuity’s study, additional research has demonstrated that abortion medication can be taken safely and effectively without in-person oversight.Footnote 22
The ACOG court’s temporary suspension of the in-person dispensing requirement in 2020 relied on this research. The district court held that the FDA’s requirement contradicted substantial evidence of the drug’s safety and singled out mifepristone without providing any corresponding health benefit.Footnote 23 The district court detailed how the in-person requirement exacerbated the burdens already shouldered by those disproportionately affected by the pandemic, emphasizing that low-income patients and people of color, who are the majority of abortion patients, are more likely to contract and suffer the effects of COVID-19.Footnote 24 While the district court’s injunction lasted, virtual clinics began operating, providing abortion care without satisfying any in-person requirements.Footnote 25
The FDA appealed the district court’s decision to the US Court of Appeals for the Fourth Circuit and petitioned the Supreme Court for a stay of the injunction in October and again in December 2020. The briefs filed by the Trump Administration’s solicitor general and ten states disputed that the in-person dispensing requirement presented heightened COVID-19 risks for patients.Footnote 26 Indeed, some of the same states that had suspended abortion as a purported means to protect people from COVID-19 now argued that the pandemic posed little threat for people seeking abortion care.Footnote 27 ACOG highlighted the absurdity of the government’s position. The FDA could not produce evidence that any patient had been harmed by the removal of the in-person dispensing requirement, whereas, in terms of COVID-19 risk, “the day Defendants filed their motion, approximately 100,000 people in the United States were diagnosed with COVID-19 – a new global record – and nearly 1,000 people died from it.”Footnote 28
The Supreme Court was not persuaded by ACOG’s arguments. In January 2021, the Court stayed the district court’s injunction pending appeal with scant analysis.Footnote 29 Chief Justice Roberts, in a concurrence, argued that the Court must defer to “politically accountable entities with the background, competence, and expertise to assess public health.”Footnote 30 Justice Sotomayor dissented, citing the district court’s findings and characterizing the reimposition of the in-person dispensing requirement as “unnecessary, unjustifiable, irrational” and “callous.”Footnote 31
The impact of the Supreme Court’s order, however, was short-lived. In April 2021, the FDA suspended the enforcement of the requirement throughout the course of the pandemic and announced that it would reconsider aspects of the REMS.Footnote 32 In December 2021, the FDA announced that it would permanently lift the in-person dispensing requirement.
Other aspects of the mifepristone REMS, however, have not changed. The FDA still mandates that only certified providers who have registered with the drug manufacturer may prescribe the drug (the “certified provider requirement”), which imposes an unnecessary administrative burden that reduces the number of abortion providers.Footnote 33 An additional informed consent requirement – the FDA-required Patient Agreement Form, which patients sign before beginning a medication abortion – also remains in place despite repeating what providers already communicate to patients.Footnote 34 The FDA also added a new ETASU requiring that only certified pharmacies can dispense mifepristone.Footnote 35 The details of pharmacy certification were announced in January 2023; among other requirements, a pharmacy must agree to particular record-keeping, reporting, and medication tracking efforts, as well as designate a representative to ensure compliance.Footnote 36 This requirement, as it is implemented, could mirror the burdens associated with the certified provider requirement, perpetuating the FDA’s unusual treatment of this safe and effective drug.Footnote 37
Despite these restrictions, permission for providers and, at present, two online pharmacies to mail medication abortion has allowed virtual abortion clinics to proliferate in states that permit telehealth for abortion.Footnote 38 As explored below, this change has the potential to dramatically increase access to early abortion care, but there are obstacles that can limit such growth.
B Telehealth for Abortion
A new model for distributing medication abortion is quickly gaining traction across the country: Certified providers partnering with online pharmacies to mail abortion medication to patients after online intake and counseling.Footnote 39 For example, the virtual clinic, Choix, prescribes medication abortion to patients up to ten weeks of pregnancy in Maine, New Mexico, Colorado, Illinois, and California.Footnote 40 The founders describe how Choix’s asynchronous telehealth platform works:
Patients first sign up on our website and fill out an initial questionnaire, then we review their history and follow up via text with any questions. Once patients are approved to proceed, they’re able to complete the consent online. We send our video and educational handouts electronically and make them available via our patient portal. We’re always accessible via phone for patients.Footnote 41
The entire process, from intake to receipt of pills, takes between two and five days and the cost is $289, which is significantly cheaper than medication abortions offered by brick-and-mortar clinics.Footnote 42 Advice on taking the abortion medication and on possible complications is available through a provider-supported hotline.Footnote 43 Choix is just one of many virtual clinics. Another virtual clinic, Abortion on Demand, provides medication abortion services in twenty-two states.Footnote 44 Many virtual clinics translate their webpages into Spanish but do not offer services in Spanish or other languages, although a few are planning to incorporate non-English services.Footnote 45
As compared to brick-and-mortar clinics, virtual clinics and online pharmacies provide care that costs less, offers more privacy, increases convenience, and reduces delays without compromising the efficacy or quality of care.Footnote 46 Patients no longer need to drive long distances to pick up safe and effective medications before driving back home to take them. In short, mailed pills can untether early-stage abortion from a physical place.Footnote 47
Telehealth for abortion, however, has clear and significant limitations. As noted above, laws in about half of the country prohibit, explicitly or indirectly, telemedicine for abortion. And telemedicine depends on people having internet connections and computers or smartphones, which is a barrier for low-income communities.Footnote 48 Even with a telehealth-compliant device, “[patients] may live in communities that lack access to technological infrastructure, like high-speed internet, necessary to use many dominant tele-health services, such as virtual video visits.”Footnote 49 Finally, the FDA has approved medication abortion only through ten weeks of gestation.
These barriers, imposed by law and in practice, will test how far telehealth for abortion can reach. As discussed below, the portability of medication abortion opens avenues that strain the bounds of legality, facilitated in no small part by the networks of advocates that have mobilized to make pills available to people across the country.Footnote 50 But extralegal strategies could have serious costs, particularly for those already vulnerable to state surveillance and punishment.Footnote 51 And attempts to bypass state laws could have serious consequences for providers, who are subject to professional, civil, and criminal penalties, as well as those who assist providers and patients.Footnote 52
III The Future of Abortion Care
The COVID-19 pandemic transformed abortion care, but the benefits were limited to those living in states that did not have laws requiring in-person care or prohibiting the mailing of abortion medication.Footnote 53 This widened a disparity in abortion access that has been growing for years between red and blue states.Footnote 54
On June 24, 2022, the Supreme Court issued its decision in Dobbs v. Jackson Women’s Health Organization, upholding Mississippi’s fifteen-week abortion ban and overturning Roe v. Wade.Footnote 55 Twenty-four states have attempted to ban almost all abortions, although ten of those bans have been halted by courts.Footnote 56 At the time of writing, pregnant people in the remaining fourteen states face limited options: Continue a pregnancy against their will, travel out of state to obtain a legal abortion, or self-manage their abortion in their home state.Footnote 57 Data from Texas, where the SB8 legislationFootnote 58 effectively banned abortion after roughly six weeks of pregnancy months before Dobbs, suggests that only a small percentage of people will choose the first option – the number of abortions Texans received dropped by only 10–15 percent as a result of travel and self-management.Footnote 59 Evidence from other countries and the United States’s own pre-Roe history also demonstrate that abortion bans do not stop abortions from happening.Footnote 60
Traveling to a state where abortion is legal, however, is not an option for many people.Footnote 61 Yet unlike the pre-Roe era, there is another means to safely end a pregnancy – one that threatens the antiabortion movement’s ultimate goal of ending abortion nationwide:Footnote 62 Self-managed abortion with medication. Self-managed abortion generally refers to abortion obtained outside of the formal health care system.Footnote 63 Thus, self-managed abortion can include a pregnant person buying medication abortion online directly from an international pharmacy (sometimes called self-sourced abortion) and a pregnant person interacting with an international or out-of-state provider via telemedicine, who ships them abortion medication or calls a prescription into an international pharmacy on their behalf.Footnote 64
Because many states have heavily restricted abortion for years, self-managed abortion is not new. The non-profit organization Aid Access started providing medication abortion to patients in the United States in 2017.Footnote 65 Each year, the number of US patients it has served has grown.Footnote 66 Once Texas’s SB8 became effective, Aid Access saw demand for its services increase 1,180 percent, leveling out at 245 percent of pre-SB8 demand a month later.Footnote 67 Similarly, after Dobbs, demand for Aid Access doubled, tripled, or even quadrupled in states with abortion bans.Footnote 68 There are advantages to self-managed abortion: The price is affordable (roughly $105 when using a foreign provider and pharmacy) and the pregnant person can have an abortion at home.Footnote 69 The disadvantages are that receiving the pills can take one to three weeks (when shipped internationally) and that self-management carries the legal risks explored below.
The portability of abortion medication, combined with the uptake of telehealth, poses an existential crisis for the antiabortion movement. Just as it achieved its decades-long goal of overturning Roe, the nature of abortion care has shifted and decentralized, making it difficult to police and control.Footnote 70 Before the advent of abortion medication, pregnant people depended on the help of a provider to end their pregnancies.Footnote 71 They could not do it alone. As a result, states would threaten providers’ livelihood and freedom, driving providers out of business and leaving patients with few options.Footnote 72 Many turned to unqualified providers who offered unsafe abortions that led to illness, infertility, and death.Footnote 73 But abortion medication created safe alternatives for patients that their predecessors lacked. Because abortion medication means providers are no longer necessary to terminate early pregnancies, the classic abortion ban, which targets providers, will not have the same effect.Footnote 74 And out-of-country providers who help patients self-manage abortions remain outside of a state’s reach.Footnote 75
The antiabortion movement is aware of this shifting reality. Indeed, antiabortion state legislators are introducing and enacting laws specifically targeting abortion medication – laws that would ban it entirely, ban its shipment through the mail, or otherwise burden its dispensation.Footnote 76 Nevertheless, it is unclear how states will enforce these laws. Most mail goes in and out of states without inspection.Footnote 77
This is not to suggest that self-management will solve the post-Roe abortion crisis. For one, self-managed medication abortion is generally not recommended beyond the first trimester, meaning later-stage abortion patients, who comprise less than 10 percent of the patient population, will either need to travel to obtain an abortion or face the higher medical risks associated with self-management.Footnote 78 Moreover, pregnant patients may face legal risks in self-managing an abortion in an antiabortion state.Footnote 79 Historically, legislators were unwilling to target abortion patients themselves, but patients and their in-state helpers may become more vulnerable as legislatures and prosecutors reckon with the inability to target in-state providers. These types of prosecutions may occur in a few ways.
First, even if shipments of abortion medication largely go undetected, a small percentage of patients will experience side effects or complications that lead them to seek treatment in a hospital.Footnote 80 Self-managed abortions mimic miscarriage, which will aid some people in evading abortion laws, although some patients may reveal to a health care professional that their miscarriage was induced with abortion medication.Footnote 81 And even with federal protection for patient health information,Footnote 82 hospital employees could report those they suspect of abortion-related crimes.Footnote 83 This will lead to an increase in the investigation and criminalization of both pregnancy loss and abortion.Footnote 84 This is how many people have become targets of criminal prosecution in other countries that ban abortion.Footnote 85
Second, the new terrain of digital surveillance will play an important role. Any time the state is notified of someone who could be charged with an abortion-related crime, the police will be able to obtain a warrant to search their digital life if there is probable cause. Anya Prince has explained the breadth of the reproductive health data ecosystem, in which advertisers and period tracking apps can easily capture when a person is pregnant.Footnote 86 The proliferation of “digital diagnostics” (for instance, wearables that track and assess health data) could make it possible to detect a likely pregnancy based on physiologic signals, such as temperature and heart rate, perhaps without the user’s knowledge. As Prince notes, this type of information is largely unprotected by privacy laws, and companies may sell it to state entities.Footnote 87 Technology that indicates that a person went from “possibly pregnant” to “not pregnant” without a documented birth could signal an abortion worthy of investigation. Alternatively, pregnancy data combined with search histories regarding abortion options, geofencing data of out-of-state trips, and text histories with friends could be used to support abortion prosecutions.Footnote 88 Antiabortion organizations could also set up fake virtual clinics – crisis pregnancy centers for the digital age – to identify potential abortion patients and leak their information to the police.Footnote 89
These technologies will test conceptions of privacy as people voluntarily offer health data that can be used against them.Footnote 90 Law enforcement will, as it has with search engine requests and electronic receipts, use this digital information against people self-managing abortions.Footnote 91 And, almost certainly, low-income people and women of color will be the targets of pregnancy surveillance and criminalization.Footnote 92 This is already true: Even though rates of drug use in pregnancy are the same among white women and women of color, Black women are ten times more likely to be reported to authorities.Footnote 93 And because low-income women and women of color are more likely to seek abortion and less likely to have early prenatal care, any pregnancy complications they experience may be viewed suspiciously.Footnote 94
State legislatures and the federal government can help to protect providers and patients in the coming era of abortion care, although their actions may have a limited reach.Footnote 95 At the federal level, the FDA could assert that its regulation of medication abortion preempts contradictory state laws, potentially creating a nationwide, abortion-medication exception to state abortion bans.Footnote 96 The federal government could also use federal laws and regulations that govern emergency care, medical privacy, and Medicare and Medicaid reimbursement to preempt state abortion laws and reduce hospital-based investigations, though the impact of such laws and regulations would be more limited.Footnote 97 As this chapter goes to press in 2023, the Biden Administration is undertaking some of these actions.Footnote 98
State policies in jurisdictions supportive of abortion rights can also improve access for patients traveling to them. States can invest in telehealth generally and continue to loosen restrictions on telemedicine, as many states have done in response to the pandemic, which can reduce demand at brick-and-mortar abortion clinics and narrow disparities in technology access.Footnote 99 They can also join interstate licensure compacts, which could extend the reach of telehealth for abortion in the states that permit the practice and allow providers to pool resources and provide care across state lines.Footnote 100 States can also pass abortion shield laws to insulate their providers who care for out-of-state residents, refusing to cooperate in out-of-state investigations, lawsuits, prosecutions, or extradition requests related to abortion.Footnote 101 All of these efforts will help mitigate, but by no means stop, the sea change in abortion law and access moving forward. And none of these efforts protects patients or those who assist them in states that ban abortion.
IV Conclusion
A post-Dobbs country will be messy. A right that generations took for granted – even though for some, abortion was inaccessible – disappeared in half of the country. The present landscape, however, is not like the pre-Roe era. Innovations in medical care and telehealth have changed abortion care, thwarting the antiabortion movement’s ability to control abortion, just as it gained the ability to ban it. Unlike patients in past generations, patients now will be able to access safe abortions, even in states in which it is illegal. But they will also face legal risks that were uncommon previously, given the new ways for the state to investigate and criminalize them.
As courts and lawmakers tackle the changing reality of abortion rights, we should not be surprised when unlikely allies and opponents coalesce on both sides of the abortion debate. Laws that seek to punish abortion will become harder to enforce as mailed abortion pills proliferate. This will create urgency for some antiabortion states to find creative ways to chill abortion, while other states will be content to ban abortion in law, understanding that it continues in practice. Who states seek to punish will shift, with authorities targeting not only providers, but also patients, and with the most marginalized patients being the most vulnerable.Footnote 102
I Introduction
What first comes to mind when you hear the words “Alzheimer’s disease?”
For many, those words evoke images of an older adult who exhibits troubling changes in memory and thinking. Perhaps the older adult has gotten lost driving to church, although it’s a familiar route. Perhaps they have bounced a check, which is out of character for them. Perhaps they have repeatedly left the stove on while cooking, worrying their spouse or adult children. Perhaps they have trouble finding words or are confused by devices like iPhones and, as a result, have lost touch with longtime friends.
Alzheimer’s disease (AD) has traditionally been understood as a clinical diagnosis, requiring the presence of symptoms to be detected. The older adult we just envisioned might make an appointment with their physician. The physician would likely listen to the patient’s medical history – noting the characteristic onset and pattern of impairments – and determine that the patient has dementia, a loss of cognitive and functional abilities severe enough to interfere with daily life. Dementia can have numerous causes, and so the physician would also conduct a comprehensive physical and cognitive examination, perhaps ordering lab tests or brain imaging scans, as well as neuropsychological testing. After excluding other causes, the clinician would diagnose the patient with “probable” AD, a diagnosis that can only be confirmed postmortem via autopsy. This approach to diagnosis interweaves the patient’s experience of disabling cognitive and functional impairments (i.e., dementia) with the label of AD.Footnote 1
Yet, our ability to measure the neuropathology of AD is rapidly evolving, as is our understanding of the preclinical and prodromal stages of the disease. Thus, it is now possible to identify individuals who are at risk for developing dementia caused by AD years or even decades before the onset of cognitive decline, through clinical as well as digital monitoring. A key premise of this chapter is that, in the future, the identification of at-risk individuals will continue to occur in clinical settings using traditional biomarker testing, but it will also increasingly occur closer to – or even in – one’s home, potentially using digital biomarkers.
When you hear “Alzheimer’s disease,” in-home monitoring should come to mind.
Here, we argue that because AD affects the mind, the challenges associated with monitoring aimed at understanding the risk for disabling cognitive impairments are heightened as compared to the challenges of monitoring for physical ailments. In Section II, we discuss the biomarker transformation of AD, which is allowing us to see AD neuropathology in living persons and to identify individuals at increased risk for developing dementia caused by AD. In Section III, we outline empirical evidence regarding five different digital biomarkers; these digital biomarkers offer further insights into an individual’s risk for cognitive impairment and could soon be used for in-home monitoring. Finally, in Section IV, we identify six challenges that are particularly pronounced when monitoring for AD.
II The Evolving Understanding of AD
The field of AD research is rapidly moving from a syndromal definition of AD (see, e.g., the diagnostic process described in Section I) to a biological one. This shift reflects a growing understanding of the mechanisms underlying the clinical expression of AD.Footnote 2
Biomarkers can now be used to identify AD neuropathology in vivo. A biomarker is a “defined characteristic that is measured as an indicator of normal biological processes, pathogenic processes or responses to an exposure or intervention.”Footnote 3 Individuals are understood to have a biomarker profile, which (as we’re using it here) describes the presence or absence in their brain of three AD biomarkers: Beta-amyloid, pathologic tau, and neurodegeneration. These biomarkers can be measured using various modalities, including positron emission tomography (PET), cerebrospinal fluid (CSF) sampling, or magnetic resonance imaging (MRI); moreover, blood-based biomarker tests are now available.Footnote 4
In addition to the biomarker profile, a second, independent source of information is the individual’s cognitive stage. An individual may be cognitively unimpaired (within the expected range of cognitive test scores and functioning in daily life), have mild cognitive impairment (MCI; a slight but noticeable decline in cognitive skills), or have dementia. The patient’s biomarker profile can then be used in combination with the patient’s cognitive stage to characterize the patient’s place – and likely progression – along the Alzheimer’s continuum. The continuum spans the preclinical (i.e., clinically asymptomatic with evidence of AD neuropathology) and clinical (i.e., symptomatic) phases of AD.Footnote 5
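As a simplified illustration of how these two sources of information combine, consider the following sketch (in Python; the category labels and logic are our own shorthand and collapse distinctions that matter clinically, so this is an expository aid rather than a diagnostic rule):

from dataclasses import dataclass

@dataclass
class BiomarkerProfile:
    # Presence (True) or absence (False) of each AD biomarker
    beta_amyloid: bool
    pathologic_tau: bool
    neurodegeneration: bool

    def any_pathology(self) -> bool:
        return self.beta_amyloid or self.pathologic_tau or self.neurodegeneration

def place_on_continuum(profile: BiomarkerProfile, cognitive_stage: str) -> str:
    """Very rough placement along the Alzheimer's continuum.

    cognitive_stage is one of: "unimpaired", "MCI", "dementia".
    This is a simplified illustration only.
    """
    if not profile.any_pathology():
        return "no AD biomarker evidence"
    if cognitive_stage == "unimpaired":
        return "preclinical (asymptomatic with AD neuropathology)"
    return f"clinical phase ({cognitive_stage} with AD neuropathology)"

# Example: biomarker-positive but cognitively unimpaired
print(place_on_continuum(BiomarkerProfile(True, False, True), "unimpaired"))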
Individuals in the preclinical stage have AD biomarkers but do not have clinically measurable cognitive impairment. They may be truly cognitively unimpaired, or they may have subjective cognitive decline – a self-experienced decline in cognitive capacity as compared to baseline.Footnote 6 Those with preclinical AD are understood to be at an increased risk of short-term cognitive decline.Footnote 7 An estimated 46.7 million Americans have preclinical AD (defined by amyloidosis, neurodegeneration, or both), though it’s important to emphasize that not all of them will progress to dementia-level impairment.Footnote 8
At present, preclinical AD remains a research construct. It is not yet diagnosed clinically. Researchers hope, however, that intervening earlier rather than later in the course of the disease will allow them to delay or prevent the onset of cognitive and functional impairment. Therefore, they are conducting secondary prevention trials, which recruit individuals who are asymptomatic but biomarker-positive for AD – that is, who have preclinical AD – to test new drugs or novel interventions. It is reasonable to assume that if the preclinical AD construct is validated and if a disease-modifying therapy for AD is found, preclinical AD will move from the research to the clinical context.
In the future, people who receive a preclinical AD diagnosis will have insight into their risk of developing MCI or dementia years or even decades before the onset of impairments.Footnote 9 Monitoring digital biomarkers in the home, the focus of Section III, will likely be complementary to clinical assessment. For instance, monitoring may be used to watch for incipient changes in cognition after a preclinical AD diagnosis. Or, conversely, data generated by in-home monitoring may suggest that it is time to see a clinician for an AD workup.
III Digital Biomarkers of AD
In parallel with our evolving understanding of beta-amyloid, pathologic tau, and neurodegeneration as “traditional” biomarkers of AD, there have been advances in our understanding of digital biomarkers for AD. Efforts to concretely and comprehensively define digital biomarkers are ongoing.Footnote 10 For the purposes of this chapter, we use the following definition: “Objective, quantifiable, physiological, and behavioral data that are collected and measured by means of digital devices, such as embedded environmental sensors, portables, wearables, implantables, or digestibles.”Footnote 11
Digital biomarkers have the potential to flag uncharacteristic behaviors or minor mistakes that offer insights into an older adult’s risk of cognitive and functional decline or to indicate early cognitive decline. As noted above, the preclinical stage of AD is characterized by the presence of biomarkers in the absence of measurable cognitive impairment. Despite going undetected on standard cognitive tests, subtle cognitive changes may be present. There is, in fact, a growing body of evidence that subjective cognitive decline in individuals with an unimpaired performance on cognitive tests may represent the first symptomatic manifestation of AD.Footnote 12 These small changes from the individual’s baseline may have downstream effects on complex cognitive and functional behaviors. Digital biomarkers offer a means of capturing these effects.
Here, we discuss five digital biomarkers for AD, highlighting both promising opportunities for monitoring the minds of older adults and limitations in our current knowledge and monitoring abilities. Crucially, these opportunities primarily reside outside of routine clinical settings. These examples are not meant to be exhaustive, but rather have been selected to highlight a range of monitoring modalities involving diverse actors. Moreover, they reveal a variety of potential challenges, which are the focus of Section IV.
A Driving Patterns
Due to the complex processes involved in spatial navigation and vehicle operation, an assessment of driving patterns offers an avenue for detecting changes in thinking and behavior. Indeed, prior studies demonstrate that those with symptomatic AD drive shorter distances and visit fewer unique destinations.Footnote 13 Research also suggests that detectable spatial navigation deficits may precede AD symptom development in cognitively normal individuals with AD biomarkers.Footnote 14 A limitation of this work is that it was conducted using simulators, which only measure performance in very controlled settings and so are limited in their generalizability.Footnote 15 Studies have, therefore, shifted to a naturalistic approach to data collection to characterize daily driving patterns. Researchers can passively collect data using global positioning system (GPS) devices installed in participant vehicles. The resulting information includes average trip distance, number of unique destinations, number of trips with a speed of six miles per hour or more below the posted limit (i.e., underspeed), and a variety of other measures to quantify driving performance.Footnote 16 These studies have found differing behavior and driving patterns between cognitively unimpaired participants with and without AD biomarkers, including a greater decline in the number of days driving per month for those with AD biomarkers.Footnote 17
These findings suggest that assessing driving patterns – as some insurers already do through standalone devices or appsFootnote 18 – may help identify individuals at risk for cognitive decline due to AD.
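To make the kinds of derived measures described above concrete, the following sketch (in Python, using hypothetical trip records and field names of our own invention, not data or code from the cited studies) computes a few driving-pattern metrics from GPS trip logs:

from statistics import mean

# Hypothetical per-trip records derived from an in-vehicle GPS logger
trips = [
    {"miles": 3.2, "destination": "grocery", "avg_mph": 28, "speed_limit": 35},
    {"miles": 1.1, "destination": "pharmacy", "avg_mph": 22, "speed_limit": 30},
    {"miles": 3.4, "destination": "grocery", "avg_mph": 27, "speed_limit": 35},
]

def driving_metrics(trips):
    """Summarize a set of trips into simple driving-pattern measures."""
    return {
        "average_trip_miles": mean(t["miles"] for t in trips),
        "unique_destinations": len({t["destination"] for t in trips}),
        # "Underspeed" trips: here approximated as an average speed
        # six or more mph below the posted limit
        "underspeed_trips": sum(
            1 for t in trips if t["speed_limit"] - t["avg_mph"] >= 6
        ),
        "total_trips": len(trips),
    }

print(driving_metrics(trips))

In practice, such metrics would be tracked over months and compared against an individual’s own baseline rather than interpreted from a single snapshot.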
B Banking and Finances
Instrumental activities of daily living (IADLs) are complex activities necessary for individuals to live independently, such as managing one’s finances. As AD progresses, IADLs become increasingly impaired. A 2021 study examined longitudinal credit report information for over 80,000 Medicare beneficiaries.Footnote 19 The researchers found that those with an AD or related dementia diagnosis were more likely to have missed bill payments over the six years prior to their dementia diagnosis. They also found that individuals with a dementia diagnosis developed subprime credit scores two-and-a-half years before their diagnosis. In a prospective study of cognitively unimpaired older adults, researchers found that a low awareness of financial and other types of scams was associated with an increased risk for MCI and dementia, though the measure was too weak for prediction at the individual level.Footnote 20
More work is needed to characterize the timeframe of changes in financial management, but detecting changes such as missed payments, bounced checks, or altered purchasing behavior (e.g., repeated purchases) presents another opportunity to identify individuals with preclinical AD. Banking and financial institutions already use algorithms, behavioral analytics, and artificial intelligence (AI)-powered technology to identify unusual transactions or spending behaviors that may be suggestive of fraud.Footnote 21 Similar techniques could be adapted to monitor older adults and notify them of behaviors indicative of dementia risk.
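As a crude illustration of how such techniques might be repurposed, the sketch below (in Python, with entirely hypothetical transactions, field names, and thresholds; it is not any institution’s actual fraud or risk model) flags missed recurring payments and repeated purchases of the kind described above:

from collections import Counter
from datetime import date

# Hypothetical transaction history for one account
transactions = [
    {"date": date(2023, 3, 1), "payee": "electric utility", "amount": 120.0},
    {"date": date(2023, 3, 8), "payee": "hardware store", "amount": 45.0},
    {"date": date(2023, 3, 9), "payee": "hardware store", "amount": 45.0},
    {"date": date(2023, 3, 10), "payee": "hardware store", "amount": 45.0},
]

expected_monthly_payees = {"electric utility", "mortgage servicer", "water utility"}

def flag_concerns(transactions, expected_monthly_payees, repeat_threshold=3):
    """Return simple flags: missed recurring payments and repeated purchases."""
    seen_payees = {t["payee"] for t in transactions}
    missed = expected_monthly_payees - seen_payees

    counts = Counter((t["payee"], t["amount"]) for t in transactions)
    repeats = [key for key, n in counts.items() if n >= repeat_threshold]

    return {"missed_recurring_payments": sorted(missed),
            "repeated_purchases": repeats}

print(flag_concerns(transactions, expected_monthly_payees))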
C Smart Appliances
Sensors can be deployed in the home to detect cognitive changes in older adults.Footnote 22 In a task-based study, individuals with MCI have been shown to spend more time in the kitchen when performing a set of home-based activities.Footnote 23 While in the kitchen, participants with MCI open cabinets and drawers, as well as the refrigerator, more often than cognitively unimpaired participants.Footnote 24 Researchers are exploring whether it is possible to use similar techniques to differentiate between healthy controls and individuals with preclinical AD.Footnote 25 A challenge for such monitoring studies (and, by extension, for real-life uptake) is the need to deploy multiple sensors in the home. One study attempted to circumvent this issue by focusing on passive in-home power usage for large appliances; the team found, on average, lower daily and seasonal usage of appliances among people with cognitive impairment.Footnote 26
Smart appliances, like refrigerators and ovens, connect to the internet and can sync with smartphones or other devices. They are already in many homes and are another alternative to sensor-based systems for detecting early cognitive changes. Smart refrigerators could track the frequency with which they are opened and for how long. Similarly, smart ovens may track the time they are left on. Such usage information could then be shared with the consumer by the appliance itself, for example, via an app.
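A minimal sketch of what such consumer-facing logic might look like follows (in Python; the event format and thresholds are invented for illustration, and real smart appliances expose different interfaces):

# Hypothetical appliance event log: (appliance, event, duration in minutes)
events = [
    ("refrigerator", "door_open", 0.5),
    ("refrigerator", "door_open", 12.0),   # unusually long
    ("oven", "on", 35.0),
    ("oven", "on", 240.0),                 # left on for four hours
]

THRESHOLDS_MINUTES = {"refrigerator": 5.0, "oven": 120.0}

def unusual_events(events, thresholds):
    """Flag events whose duration exceeds a per-appliance threshold."""
    return [
        (appliance, event, minutes)
        for appliance, event, minutes in events
        if minutes > thresholds.get(appliance, float("inf"))
    ]

for appliance, event, minutes in unusual_events(events, THRESHOLDS_MINUTES):
    print(f"Notify user: {appliance} {event} lasted {minutes:.0f} minutes")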
D Speech
Changes in speech have been used to characterize the progression of AD. Studies have often used active data collection in which individuals are recorded on a smartphone or similar device as they complete tasks associated with verbal fluency, picture description, and free speech. The voice recordings are then processed, sometimes using machine-learning techniques. Studies have found that short vocal tasks can be used to differentiate participants with MCI from those with dementia.Footnote 27 It remains an open question whether preclinical AD presents with detectable changes in speech. Yet, one study of speech changes found that cognitively unimpaired participants with AD biomarkers used fewer concrete nouns and content words during spontaneous speech.Footnote 28
The interest in modalities for passive speech data collection – for example, conversations over the phone, communication with digital assistants, and texting-related information – is mounting.Footnote 29 Improvements in machine learning that reduce the burden of speech analysis, coupled with broad access to devices with microphones, are increasing the potential of passive speech data collection. Automatic speech recognition used for digital assistants like Amazon Alexa and Apple Siri has made strides in accuracy. As technological advancements further streamline transcription and analysis, speech data may be used to characterize changes related to preclinical AD. Simply put, Alexa may soon diagnose progression along the Alzheimer’s continuum from preclinical AD to MCI to dementia.Footnote 30
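To illustrate the kind of lexical feature mentioned above, the sketch below (in Python) computes a rough content-word ratio from a transcript using a small stopword list; it is a toy proxy for illustration only, not a validated speech marker, and real pipelines rely on part-of-speech tagging and richer acoustic features:

import re

# Small, illustrative stopword (function word) list
STOPWORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "it",
    "is", "was", "that", "this", "i", "you", "we", "they", "he", "she",
}

def content_word_ratio(transcript: str) -> float:
    """Fraction of words that are not function words (a crude proxy)."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return 0.0
    content = [w for w in words if w not in STOPWORDS]
    return len(content) / len(words)

sample = "I went to the store and bought the thing for the kitchen"
print(f"Content-word ratio: {content_word_ratio(sample):.2f}")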
E Device Use
The ways people use their smartphones – including the amount of time spent on certain apps, login attempts, patterns of use, and disruptions in social interactions – may reveal signs of cognitive decline.Footnote 31 Studies examining patterns of smartphone use in older adults with and without cognitive impairment suggest that app usage is related to cognitive health.Footnote 32 There is much interest in leveraging device use as a potential marker of cognitive decline. The Intuition Study (NCT05058950), a collaboration between Biogen and Apple Inc., began in September 2021 with the aim of using multimodal passive sensor data from iPhone and Apple Watch usage to differentiate normal cognition from MCI; a secondary aim is to develop a model that predicts which individuals will and will not develop MCI. With 23,000 participants, this observational longitudinal study will be the largest study to date collecting passive device use data.
Devices, like smartphones, could soon flag usage patterns that are suggestive of an increased risk of decline. Further, specific apps may be developed to detect concerning behavior changes by accessing meta-data from other apps and devices; this may streamline access to information and improve consumer friendliness.
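A highly simplified sketch of how an app might compare recent usage against a personal baseline follows (in Python; the metric, numbers, and threshold are hypothetical and do not reflect the design of the Intuition Study or any existing product):

from statistics import mean, pstdev

# Hypothetical weekly usage metric, e.g., successful logins per day
baseline_weeks = [14.0, 15.0, 13.5, 14.5, 15.5, 14.0]
recent_weeks = [10.0, 9.5, 9.0]

def sustained_deviation(baseline, recent, z_threshold=2.0):
    """Flag if recent values consistently fall far below the baseline."""
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return False
    z_scores = [(x - mu) / sigma for x in recent]
    return all(z <= -z_threshold for z in z_scores)

if sustained_deviation(baseline_weeks, recent_weeks):
    print("Usage pattern has shifted; consider sharing with a clinician.")

Whether, and how, such a flag should be surfaced to the user is precisely the kind of question taken up in Section IV.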
IV Challenges Ahead
Here, we identify six ethical and legal challenges that will accompany the monitoring of digital biomarkers for AD. The list is not exhaustive, and many issues associated generally with measuring digital biomarkers will apply here as well. Moreover, the challenges outlined herein are not unique to digital biomarkers for AD. Rather, we would argue that they are heightened in this context because AD is a disease not just of the brain but of the mind.
A Consent to Collect and Consent to Disclose
Although we hypothesize that preclinical AD will not be diagnosed clinically (i.e., using traditional biomarkers) until there is a disease-modifying therapy that renders the diagnosis medically actionable, in-home monitoring of digital biomarkers is not subject to this constraint. In fact, potential means of collecting and analyzing digital biomarkers for AD are already in our homes.
Yet, it is unlikely that individuals are aware that the GPS devices in their cars, the smart appliances in their kitchens, and the online banking apps on their phones can provide insights into their risk of cognitive and functional decline. Plugging in the GPS device, using the smart oven, or paying bills online, therefore, does not imply consent to having one’s brain health measured. Nor can consent be presumed. Many individuals do not want to know about their risk of developing dementia caused by AD because there is little to be done about it.Footnote 33 Others eschew learning their dementia risk to avoid existential dread.Footnote 34 This all suggests that, if digital biomarkers for AD are to be collected, there must be explicit consent.
Even if individuals agree to having their digital biomarkers for AD monitored, they may ultimately choose against learning what is revealed therein. Some individuals who undergo testing to learn whether they are at risk for dementia caused by AD – whether due to genes or to biomarkers – subsequently decline to learn the results.Footnote 35
This contrasts with – drawing an analogy to emergency medicine – our ability to presume consent for an Apple Watch to monitor for and alert us to a possibly fatal arrhythmia. But even there, where there is greater reason to presume consent, the evidence suggests we ought to eschew a “more is more” approach to disclosure. Apple Watch monitoring for arrhythmia can unduly worry people who receive a notification and subsequently follow up with doctors, undergoing invasive and expensive tests only for the results to come back normal.Footnote 36 When the rate of false positives is unknown – or remains high – and when there are risks and burdens associated with disclosure, caution must accompany implementation.
B Communicating Digital Biomarker Information
To date, traditional AD biomarker information has been disclosed to cognitively unimpaired adults in highly controlled environments, mostly through research studies and with specialist clinical expertise.Footnote 37 Substantial work has gone into developing methods for disclosure, and the recommended steps include preparing people to learn about their biomarker information, maintaining sensitivity in returning the results, and following up to ensure people feel supported after learning the results.Footnote 38 Digital biomarkers, by contrast, create the possibility that individuals will learn that they are exhibiting subtle signs of cognitive decline or are at risk for dementia in the future from an app, or from their banker or insurance agent – and without the option to speak directly and quickly about the results with a medical professional.
Although the disclosure of AD biomarkers has generally been found to be safe in pre-screened populations,Footnote 39 care should be taken when disclosing digital biomarker information more broadly. Here, the field may learn from discussions of direct-to-consumer genetic or biochemical testing.Footnote 40
Another concern is that the monitoring of digital biomarker data could lead to the inadvertent disclosure of dementia risk. Imagine, for instance, that your changing device usage is flagged and then used to generate targeted advertisements for supplements to boost brain health or for memory games. You could learn you are at risk simply by scrolling through your social media feed. And, in that case, any pretense of thoughtful disclosure is dropped.
C Conflicting Desires for Monitoring
Studies suggest that some cognitively unimpaired older adults share their AD biomarker results with others because they would like to be monitored for – and alerted to – changes in cognition and function that might negatively affect their wellbeing.Footnote 41 Often, these individuals feel it is ethically important to share this information so as to prepare family members who might, in the future, need to provide dementia care or serve as a surrogate decision maker.Footnote 42 Other older adults, however, perceive monitoring as intrusive and unwelcome.Footnote 43
In an interview study of the family members of cognitively unimpaired older adults with AD biomarkers, some family members described watching the older adult more closely for symptoms of MCI or dementia after learning the biomarker results.Footnote 44 This may reflect family members’ evolving understanding of themselves as pre-caregivers – individuals at increased risk for informal dementia caregiving.Footnote 45 Technology can allow family members to remotely monitor an older adult’s location, movements, and activities, in order to detect functional decline and changes in cognition, as well as to intervene if needed. Despite these putative advantages, monitoring may be a source of friction if older adults and their families do not agree on its appropriateness or on who should have access to the resulting information.
D Stigma and Discrimination
Dementia caused by AD is highly stigmatized.Footnote 46 Research with cognitively unimpaired individuals who have the AD biomarker beta-amyloid suggests that many worry that this information would be stigmatizing if disclosed to others.Footnote 47 Unfortunately, this concern is likely justified; a survey experiment with a nationally representative sample of American adults found that, even in the absence of cognitive symptoms, a positive AD biomarker result evokes stronger stigmatizing reactions among members of the general public than a negative result.Footnote 48
Discrimination occurs when stigmatization is enacted via concrete behaviors. Cognitively unimpaired individuals who have beta-amyloid anticipate discrimination across a variety of contexts – from everyday social interactions to employment, housing, and insurance.Footnote 49 It is not yet known whether – and if so to what extent – digital biomarkers will lead to stigma and discrimination. However, we must be aware of this possibility, as well as the scant legal protection against discrimination on the basis of biomarkers.Footnote 50
E Information Privacy
Digital biomarker information is health information. But it is health information in the hands of bankers, insurance agents, or technology companies – individuals and entities that are not health care providers and are therefore not subject to the privacy laws that govern health care data. The Health Insurance Portability and Accountability Act (HIPAA) focuses on data from medical records; it does not, for instance, cover data generated by smartphone apps.Footnote 51 The need for privacy is intensified by the potential for stigma and discrimination, discussed above.
Further, older adults with MCI and dementia are vulnerable – for example, to financial scammers. It is important to ensure that data generated by monitoring is not misused – by those who collect it or by those who subsequently access it – to identify potential targets for abuse and exploitation. Such exploitation may occur at the hands of an unscrupulous app developer but also, or perhaps more likely, at the hands of an unscrupulous family member or friend.
F Disparities in Health and Technology
The older population is becoming significantly more racially and ethnically diverse.Footnote 52 Black and Hispanic older adults are at higher risk than White older adults for developing AD, and they encounter disproportionate barriers to accessing health care generally, and dementia care specifically.Footnote 53 Health disparities are increasingly understood to reflect a broad, complex, and interrelated array of factors, including racism.Footnote 54 There are well-documented concerns about racism in AI.Footnote 55 Those concerns are no less salient here, and may be even more salient, given existing disparities in care.
Further, monitoring may be cost prohibitive, impacted by the digital divide, or reliant on an individual’s geographic location. For instance, older adults, especially adults from minoritized communities, may not have smart devices. According to a Pew report using data collected in 2021, only 61 percent of those aged 65 and older owned a smartphone and 44 percent owned a tablet.Footnote 56 As many of the digital biomarkers described in Section III require a smart device, uptake of monitoring methods may be unevenly distributed and exacerbate, rather than alleviate, disparities in AD care.
V Conclusion
The older adult we envisioned at the beginning of this chapter will not be the only face of AD for much longer. We may soon come to think, too, of adults with preclinical AD. These individuals may learn about their heightened risk of cognitive and functional impairment from a clinician. Or they may learn about it because the GPS device plugged into their car has detected slight alterations in their driving patterns, because their smart refrigerator has alerted them that its door is staying open a bit longer than usual, or because their phone has noted subtle changes in their speech.
AD is undergoing a biomarker transformation, of which digital biomarkers are a part. AD is a deeply feared condition, as it robs people of their ability to self-determine. Care must therefore be taken to address the multitude of challenges that arise when monitoring our minds.