
Part II - Digital Home Diagnostics for Specific Conditions

Published online by Cambridge University Press:  25 April 2024

I. Glenn Cohen, Harvard Law School, Massachusetts
Daniel B. Kramer, Harvard Medical School, Massachusetts
Julia Adler-Milstein, University of California, San Francisco
Carmel Shachar, Harvard Law School, Massachusetts

Digital Health Care outside of Traditional Clinical Settings: Ethical, Legal, and Regulatory Challenges and Opportunities, pp. 61–104
Publisher: Cambridge University Press
Print publication year: 2024
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

Introduction

Part I of this volume explored the novel concerns about privacy and data raised by home-based digital diagnostics. These arguments surrounding data access, rights, and regulation were framed primarily in abstract terms applicable to the very broad category of digital diagnostics. Part II carries these themes forward into three specific disease areas of profound public health, policy, and bioethical importance. The rise of new technology and telemedicine-based diagnostic pathways for these conditions – cardiovascular disease, reproductive health, and neurodegenerative disease – builds on accelerating advances in sensors, data transmission, artificial intelligence (AI), and data science. The COVID-19 pandemic amplified the opportunity and imperative to provide diagnostic and potentially therapeutic services outside of traditional clinical settings. New devices and systems may not only replace traditional care, but also expand the reach of critical screening and diagnosis to patients otherwise unable to access or navigate health systems. The three chapters in this part thus present real-world case studies of the hopes and hazards of applying digital diagnostics with a disease-specific focus at population-wide scale.

Patrik Bächtiger and colleagues introduce this part with their chapter, “Patient Self-Administered Screening for Cardiovascular Disease Using Artificial Intelligence in the Home.” The authors outline a novel attempt in the United Kingdom to address late or missed diagnoses of congestive heart failure, valvular heart disease, and atrial fibrillation – all conditions with high morbidity and mortality that can be substantially mitigated with early treatment. Using electronic health records from general practitioners, patients at high risk for these conditions are invited to use (in their own homes) an electronic stethoscope with the ability to record electrocardiograms (ECGs) as well as heart sounds, which then feed into AI algorithms for near-immediate diagnoses. While the theoretical clinical, public health, and economic benefits of this new pathway may be well-grounded, the authors consider several ethical features of the program to be in need of greater scrutiny. Equity may be either advanced or hindered by AI-enabled cardiovascular screening, which may reduce barriers to accessing traditional clinical evaluation and mitigate cognitive bias, at the cost of exposing patients to the biases of the algorithms themselves. Relatedly, decentralizing clinical screening into the home necessarily creates new roles and responsibilities for patients and families, and establishes new data structures with distinct potential risks and benefits. The authors propose programmatic metrics that might capture empirical evidence to adjudicate these ethical questions.

Equity, agency, and control of data extend into Donley and Rebouché’s contribution, “The Promise of Telehealth for Abortion,” which evaluates the growing but unsettled landscape for abortion services supported by telehealth and related advances. The authors trace the legal and regulatory arc of medical abortion services provided without direct in-person care, and the more recent conflicts raised by new state laws in the wake of the epochal Dobbs decision. In many states, the possibility of digital surveillance supporting abortion-related prosecutions raises the stakes for data rights and digital privacy just as new options expand for consumer- and clinician-driven diagnostic devices or wearables capturing physiologic signals consistent with pregnancy. In theory and practice, it may already be the case that a smartwatch might “know” someone is pregnant before its wearer does, and that knowledge necessarily lives in a digital health ecosystem potentially accessible to law enforcement and other parties. Donley and Rebouché nimbly forecast the challenges and future conflict in balancing access and safe provision of abortion services, while posing difficult questions about the legal risks borne by both patients and providers.

This part concludes by moving from the beginning of life toward its twilight, with Erickson and Largent’s exploration of the intersection between digital diagnostics and neurodegenerative diseases, “Monitoring (on) Your Mind: Digital Biomarkers for Alzheimer’s Disease.” Alzheimer’s disease and its related disorders retain their status as classical “clinical diagnoses” – those that cannot be made based on a physical exam, imaging, symptoms, or traditional blood tests alone, but only by an expert amalgamation of individual findings. While Alzheimer’s currently lacks the disease-modifying treatments available for many cardiovascular conditions, facilitating diagnoses through digital means may offer other benefits to patients and their families, and could potentially provide a bridgehead toward studying treatments in the future. The authors outline several novel avenues for leveraging digital diagnostics to identify cognitive impairment, many of which draw insights from everyday activities not usually considered inputs for health measurement. Increasingly, digitized and wirelessly connected features of daily life, including driving, appliances, phones, and smart speakers, will enable the algorithmic identification of early cognitive or functional limitations. Erickson and Largent ask how these advances complicate questions of consent and communication outside of traditional clinics, and revisit concerns about equity and either improved or exacerbated disparities in access to care.

Uniquely within this part, however, Erickson and Largent confront a more fundamental question posed by increasingly powerful digital diagnostics: How much do we really want to know about our own health? While fraught in other ways, diagnoses of heart failure or pregnancy generally cannot be ignored or dismissed, and (legal risks aside) patient care can generally be improved with earlier and more precise diagnosis. Identifying early (in particular, very early) cognitive impairment, however, offers more complex trade-offs among patients and their current or future caregivers, particularly in the absence of effective therapies. While genetics can offer similar pre-diagnosis or risk prediction, a critical distinction raised by digital diagnostics is their ubiquity: Anyone who drives, uses a smartphone, or types on a keyboard creates potential inputs to their eventual digital phenotyping, with all the attendant burdens. Digital diagnostics, used in our own homes, applied to more and more disease areas, will require a deeper reconciliation between relentless innovation and the boundaries of individuals’ desire to understand their own health.

5 Patient Self-Administered Screening for Cardiovascular Disease Using Artificial Intelligence in the Home

Patrik Bächtiger, Mihir A. Kelshiker, Marie E. G. Moe, Daniel B. Kramer, and Nicholas S. Peters
I Introduction

The United Kingdom (UK) National Health Service (NHS) is funding technologies for home-based diagnosis that draw on artificial intelligence (AI).Footnote 1 Broadly defined, AI is the ability of computer algorithms to interpret data at human or super-human levels of performance.Footnote 2 One compelling use case involves patient-recorded cardiac waveforms that are interpreted in real time by AI to predict the presence of common, clinically actionable cardiovascular diseases. In this case, both electrocardiograms (ECGs) and phonocardiograms (heart sounds) are recorded by a handheld device applied by the patient in a self-administered smart stethoscope examination, communicating waveforms to the cloud via smartphone for subsequent AI interpretation – principally known as AI-ECG. Validation studies suggest the accuracy of this technology approaches or exceeds that of many established national screening programs for other diseases.Footnote 3 More broadly, the combination of a new device (a modified handheld stethoscope), novel AI algorithms, and communication via smartphone coalesces into a distinct clinical care pathway that may become increasingly prevalent across multiple disease areas.

However, the deployment of a home-based screening program combining hardware, AI, and a cloud-based digital platform for administration – all anchored in patient self-administration – raises distinct ethical challenges for safe, effective, and trustworthy implementation. This chapter approaches these concerns in five parts. First, we briefly outline the organizational structure of the NHS and associated regulatory bodies responsible for evaluating the safety of medical technology. Second, we highlight NHS plans to prioritize digital health and the specific role of AI in advancing this goal with a focus on cardiovascular disease. Third, we review the clinical imperative for early diagnosis of heart failure in community settings, and the established clinical evidence supporting the use of a novel AI-ECG-based tool to do so. Fourth, we examine the ethical concerns with the AI-ECG diagnostic pathway according to considerations of equity, agency, and data rights across key stakeholders. Finally, we propose a multi-agency strategy anchored in a purposefully centralized view of this novel diagnostic pathway – with the goal of preserving and promoting trust, patient engagement, and public health.

II The UK National Health Service and Responsible Agencies

For the purposes of this chapter, we focus on England, where NHS England is the responsible central government entity for the delivery of health care (Scotland, Wales, and Northern Ireland run devolved versions of the NHS). The increasing societal and political pressure to modernize the NHS has led to the formation of agencies tasked with this specific mandate, each of which plays a key role in evaluating and deploying the technology at issue in this chapter. Within NHS England, NHSX was established with the aim of setting national NHS policy and developing best practices across technology, digital innovation, and data, including data sharing and transparency. Closely related, NHS Digital is the national provider of information, data, and IT systems for commissioners, analysts, and clinicians in health and social care in England. From a regulatory perspective, the Medicines and Healthcare products Regulatory Agency (MHRA) is responsible for ensuring that medicines and medical devices (including software) work and are acceptably safe for market entry within the scope of their labelled indications. Post-Brexit, the UK’s underlying risk-based classification system remains similar to that of its international counterparts, categorizing risk into three incremental classes determined by the intended use of the product. In practice, most diagnostic technology (including ECG machines, stethoscopes, and similar) would be considered relatively low-risk devices (class I/II) compared with invasive, implantable, or explicitly life-sustaining technologies (class III). One implication of this risk tiering is that, unlike a new implanted cardiac device, such as a novel pacemaker or coronary stent, the market entry of diagnostic technology (including AI-ECGs) would not be predicated on demonstrating safety and effectiveness through, for example, a large trial with hard clinical endpoints.

Once a medical device receives regulatory authorization from the MHRA, the UK takes additional steps to determine whether, and how much, the NHS should pay for it. The National Institute for Health and Care Excellence (NICE) evaluates the clinical efficacy and cost-effectiveness of drugs, health technologies, and clinical practices for the NHS. Rather than negotiating prices, NICE makes recommendations for system-wide funding and, therefore, deployment, principally using tools such as quality-adjusted life years. In response to the increasing number and complexity of digital health technologies, NICE partnered with NHS England to develop standards that aim to ensure that new digital health technologies are clinically effective and offer economic value. The subsequent evidence standards framework for digital health technologies aims to inform stakeholders by requiring appropriate evidence, and to be dynamic and value-driven, with a focus on offering maximal value to patients.Footnote 4
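As a purely illustrative aside, the cost-effectiveness reasoning behind such recommendations can be reduced to an incremental cost-effectiveness ratio (ICER): The extra cost of a new pathway divided by the quality-adjusted life years (QALYs) it gains. The sketch below uses invented per-patient figures and a commonly cited (but not statutory) willingness-to-pay range; it does not reflect any actual NICE appraisal of AI-ECG.

```python
# Hypothetical illustration of the cost-effectiveness arithmetic NICE relies on.
# All figures are invented for this sketch and carry no clinical or policy meaning.

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Assumed per-patient figures for an AI-ECG-style pathway versus usual care.
cost_screening, cost_usual = 1_450.0, 1_200.0   # lifetime cost in GBP (hypothetical)
qaly_screening, qaly_usual = 7.85, 7.80         # discounted QALYs (hypothetical)

ratio = icer(cost_screening, cost_usual, qaly_screening, qaly_usual)
print(f"ICER: £{ratio:,.0f} per QALY gained")   # £5,000 per QALY in this toy example

# NICE appraisals have historically referenced a threshold of roughly
# £20,000-£30,000 per QALY; a toy ICER below that range would look cost-effective.
print("Below £20,000 per QALY threshold?", ratio <= 20_000)
```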

Considering the role of the regulatory bodies above, as applied to a novel AI-ECG device, we observe the following: Manufacturers seeking marketing authority for new digital health tools primarily focused on the diagnosis rather than treatment of a specific condition (like heart failure) must meet the safety and effectiveness standards of the MHRA – but those standards do not necessarily (or even likely) require a dedicated clinical trial illustrating real-world clinical value. By contrast, convincing the NHS to pay for the new technology may require more comprehensive evidence sufficient to sway NICE, which is empowered to take a more holistic view of the costs and potential benefits of novel health tools. The advancement of this evidence generation for digital health tools is increasingly tasked to NHS sub-agencies. All of this aims to align with the NHS Long Term Plan, which defines the key challenges and sets an ambitious vision for the next ten years of health care in the UK.Footnote 5 AI is singled out as a key driver for digital transformation. Specifically, it calls for the “use of decision support and AI to help clinicians in applying best practice, eliminate unwarranted variation across the whole pathway of care, and support patients in managing their health and condition.” Here we already note implicit ethical principles: Reducing unjustified variability in care (as a consideration of justice) and promoting patient autonomy by disseminating diagnostic capabilities that otherwise may be accessible only behind layers of clinical or administrative gatekeeping. Focusing on the specific imperative of heart failure, this chapter discusses whether these or other ethical targets are, on balance, advanced by AI-ECG. To do this, we first outline the relevant clinical and technological background below.

III Screening for Heart Failure with AI-ECG

The symptomatic burden and mortality risks of heart failure – where the heart is no longer able to effectively pump blood to meet the body’s needs under normal pressures – remain worse than those of many common, serious cancers. Among all chronic conditions, heart failure has the greatest impact on quality of life and costs the NHS over £625 million per year – 4 percent of its annual budget.Footnote 6 The NHS Long Term Plan emphasizes that “80% of heart failure is currently diagnosed in hospital, despite 40% of patients having symptoms that should have triggered an earlier assessment.” Accordingly, the Plan advocates for “using a proactive population health approach focused on … earlier detection and intervention to treat undiagnosed disorders.”Footnote 7 While the exact combination of data will vary by context, a clinical diagnosis of heart failure may include the integration of patients’ symptoms, physical exams (including traditional stethoscope auscultation of the heart and lungs), and various cardiac investigations, including blood tests and imaging. Individually, compared with a clinical diagnosis gold standard, the test characteristics of each modality vary widely, with sensitivity generally higher than specificity.

As with most chronic diseases in high-income countries, the burden of heart failure is greatest in those who are most deprived and tends to have an earlier age of onset in minority ethnic groups, who experience worse outcomes.Footnote 8 Therefore, heart failure presents a particularly attractive target for disseminated technology with the potential to speed up diagnosis and direct patients toward proven therapies, particularly if this mitigates the social determinants of health driving observed disparities in care. Given the epidemiology of the problem and the imperative for practical screening, a tool supporting the community-based diagnosis of heart failure has the potential to be both clinically impactful and economically attractive. The myriad diagnostics for heart failure described above, however, variously require phlebotomy, specialty imaging, and clinical interpretation to tie together signs and symptoms into a clinical syndrome. AI-supported diagnosis may overcome these limitations.

The near ubiquity of ECGs in well-phenotyped cardiology cohorts supports the training and testing of AI algorithms among tens of thousands of patients. This has resulted in both clinical and, increasingly, consumer-facing applications where AI can interrogate ECGs and accurately identify the presence, for example, of heart rhythm disturbances. Building on an established background suggesting that the ECG can serve as an accurate digital biomarker for the stages of heart failure, a recent advance in AI has unlocked the super-human capability to detect heart failure from a single-lead ECG alone.Footnote 9
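The published AI-ECG models are not specified in this chapter, but the general shape of the task – a fixed-length single-lead waveform in, a probability of heart failure out – can be sketched as follows. This is a minimal, untrained PyTorch example with an invented architecture; it is not the algorithm deployed in the pathway described here.

```python
# Minimal sketch (not the deployed algorithm): a 1D convolutional network mapping a
# single-lead ECG to a heart failure probability. Architecture and shapes are invented.
import torch
import torch.nn as nn

class SingleLeadECGClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.BatchNorm1d(16), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, stride=2, padding=1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis to one summary per channel
        )
        self.head = nn.Linear(64, 1)          # single logit: heart failure vs. not

    def forward(self, ecg: torch.Tensor) -> torch.Tensor:
        # ecg: (batch, 1, n_samples), e.g. a 10-second recording at 500 Hz (assumed)
        z = self.features(ecg).squeeze(-1)    # (batch, 64)
        return torch.sigmoid(self.head(z))    # probability in [0, 1]

if __name__ == "__main__":
    model = SingleLeadECGClassifier()
    fake_ecg = torch.randn(2, 1, 5000)        # two synthetic recordings
    print(model(fake_ecg).shape)              # torch.Size([2, 1])
```

In practice, such a model would be trained and validated on the large, well-phenotyped cohorts described above, and its output calibrated to a screening threshold; none of that is shown in this sketch.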

The emergence of ECG-enabled stethoscopes, capable of recording single-lead ECGs while in contact with the chest for routine auscultation (listening), highlighted an opportunity to apply AI-ECG to point-of-care screening. The Eko DUO (Eko Health, Oakland, CA, US) is one example of such an ECG-enabled stethoscope (see Figure 5.1). Detaching the tubing leaves a small cell phone-sized device embedded with sensors (electrodes and microphone) for recording both ECGs and phonocardiograms (heart sounds). Connectivity via Bluetooth allows the subsequent live streaming of both ECG and phonocardiographic waveforms to a user’s smartphone and the corresponding Eko app. Waveforms can be recorded and transmitted to cloud-based infrastructure, where they can be analyzed by AI algorithms such as AI-ECG.

Figure 5.1 Left to right: Eko DUO smart stethoscope; patient-facing “bell” of stethoscope labelled with sensors; data flow between Eko DUO, user’s smartphone, and cloud for the application of AI
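The data flow in Figure 5.1 – waveforms captured on the device, relayed through the patient’s smartphone, analyzed in the cloud – can also be summarized in a short sketch. The endpoint, payload fields, and response format below are hypothetical placeholders for illustration only; they are not Eko’s actual interface.

```python
# Illustrative sketch of the Figure 5.1 data flow: a recorded examination is uploaded
# from the patient's phone and an algorithmic interpretation is returned.
# The URL, fields, and response shape are hypothetical, not a real API.
from dataclasses import dataclass, asdict

import requests  # third-party HTTP client

CLOUD_URL = "https://screening-platform.example/api/v1/analyze"  # placeholder endpoint

@dataclass
class Recording:
    participant_pseudonym: str       # pseudonymous study ID, not direct identifiers
    sample_rate_hz: int
    ecg_mv: list[float]              # single-lead ECG samples (millivolts)
    heart_sounds: list[float]        # phonocardiogram samples

def submit_recording(rec: Recording, timeout_s: float = 30.0) -> dict:
    """Upload one self-recorded examination and return the AI interpretation."""
    resp = requests.post(CLOUD_URL, json=asdict(rec), timeout=timeout_s)
    resp.raise_for_status()
    # Hypothetical response, e.g. {"heart_failure_flag": true, "afib_flag": false, ...}
    return resp.json()
```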

While the current programmatic focus is on identifying community heart failure diagnoses, AI can, in theory, also be applied to ECG and phonocardiographic waveforms to identify the presence of two additional public health priorities: Atrial fibrillation, a common irregular heart rhythm, and valvular heart disease, typified by the presence of heart murmurs. Therefore, taken in combination, a fifteen-second examination with an ECG-enabled smart stethoscope may offer a three-in-one screening test for substantial drivers of cardiovascular morbidity and mortality, and systemically important health care costs.

The authors are currently embarking on the first stage of deploying such a screening pathway, anchored in primary care, given the high rates of undiagnosed heart failure and further cardiovascular disease, including atrial fibrillation and valvular disease, in communities across England.Footnote 10 The early stages of this pathway involve using NHS general practitioner electronic health records and applying search logic to identify those at risk for heart failure (e.g., risk factors such as hypertension, diabetes, previous myocardial infarction). Patients who consent are mailed a small parcel containing an ECG-enabled stethoscope (Eko DUO) and a simple instruction leaflet on how to perform and transmit a self-recording. Patients are encouraged to download the corresponding Eko App to their own phones (those who are unable to are sent a phone with the app preinstalled as part of the package). Patients whose data, as interpreted by AI, suggests the presence of heart failure, atrial fibrillation, or valvular heart disease are invited for further investigation in line with established NICE clinical pathways.
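As an illustration of the cohort-selection step, the sketch below applies toy “search logic” to invented general practitioner records. Real NHS systems use standard clinical code sets (e.g., SNOMED CT) and far richer inclusion and exclusion criteria than shown here.

```python
# Toy sketch of cohort selection: flag patients whose GP record shows heart failure
# risk factors and no existing heart failure diagnosis. Field names and criteria are
# illustrative only, not the program's actual search logic.
from dataclasses import dataclass, field

@dataclass
class GPRecord:
    nhs_pseudonym: str
    age: int
    conditions: set[str] = field(default_factory=set)  # e.g. {"hypertension", "diabetes"}

RISK_FACTORS = {"hypertension", "diabetes", "previous_myocardial_infarction"}

def eligible_for_invitation(rec: GPRecord, min_age: int = 50) -> bool:
    already_diagnosed = "heart_failure" in rec.conditions
    at_risk = bool(rec.conditions & RISK_FACTORS)
    return rec.age >= min_age and at_risk and not already_diagnosed

records = [
    GPRecord("P001", 67, {"hypertension"}),
    GPRecord("P002", 72, {"diabetes", "heart_failure"}),        # excluded: already diagnosed
    GPRecord("P003", 45, {"previous_myocardial_infarction"}),   # excluded: below age cut-off
]
print([r.nhs_pseudonym for r in records if eligible_for_invitation(r)])  # ['P001']
```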

This sets the scene for a novel population health intervention that draws on a technology-driven screening test, initiated in the patient’s home, by the patient themselves. In contrast with the current, hospital-centric approach to these common and costly cardiovascular conditions, this pathway combines clinical expertise and the available technologies to screen at scale and unlock substantial clinical and health economic benefits through early diagnosis. Opportunities for more decentralized (outside of hospital), patient-activated screening with digital diagnostics will surely follow if AI-ECG proves tractable. Notably, here we have described what we believe to be among the earliest applications of “super-human” AI – accurately inferring the presence of heart failure from a single-lead ECG was previously thought impossible – with the potential to meet a major unmet need through a clinical pathway that scales access to this potentially transformative diagnostic.

IV Ethical Considerations for Self-Administered Cardiovascular Disease Screening at Home

Having outlined the health policy and stakeholder landscape and specified how this relates to heart failure and AI-ECG, we can progress to discussing the unique ethical challenges posed by patients’ self-administration of this test in their own homes. Enthusiasm for such an approach to community, patient-driven cardiovascular screening is founded not only in clinical expediency, but also in a recognition of the way in which this pathway may support normative public health goals, particularly around equity and patient empowerment. Despite these good-faith expectations, the deployment of such a home-based screening program combining hardware, AI, and a cloud-based digital platform for administration – all hinging on patient self-administration – raises distinct ethical challenges. In this section, we explore the ethical arguments in favor of the AI-ECG program, as well as its potential pitfalls.

A Equity

One durable and compelling argument supporting AI-ECG arises from well-known disparities in cardiovascular disease and treatment. Cardiovascular disease follows a social gradient; this is particularly pronounced for heart failure diagnoses, where under-diagnosis in England is most frequent in the lowest-income areas. This tracks with language skills, a key social determinant of health related to a lower uptake of preventative health care and consequently worse health outcomes. In England, nearly one million people (2 percent of the total population) lack basic English language skills. AI-ECG could attenuate these disparities in several ways.

First, targeted screening based on risk factors (such as high blood pressure and diabetes) will, given epidemiologic trends, necessarily and fruitfully support vulnerable patient groups among whom these conditions are more prevalent. These same patients will also be less able to access traditional facility-based cardiac testing. AI-ECG helps overcome these barriers for the patients most in need.

Second, AI-ECG explicitly transfers a key gatekeeping diagnostic step away from clinicians and, with it, away from the cognitive biases of traditional bedside medicine. Cross-cultural challenges in subjective diagnosis and treatment escalation are well documented, including in heart failure across a spectrum of disease severity, ranging from outpatient symptom ascertainment to referral for advanced cardiac therapies and even transplant.Footnote 11 AI-ECG overcomes the biases embedded in traditional heart failure screenings by simplifying a complex syndromic diagnosis into a positive or negative result that is programmatically entwined with subsequent specialist referral.

These supporting arguments grounded in reducing the disparities in access to cardiac care may be balanced by equally salient concerns. Even a charitable interpretation of the AI-ECG pathway assumes a relatively savvy, engaged, and motivated patient. The ability to mail the AI-ECG screening package widely to homes is just the first in a series of necessary steps: Opening and setting up the screening kit, including the phone and ECG-enabled stethoscope, successfully activating the device, and recording a high-quality tracing that is then processed centrally without data loss. While the authors’ early experience using this technology in various settings has been reassuring, it remains uncertain whether the established “digital divide” will complicate the equitable application of AI-ECG screening. Assuming equal (or even favorably targeted) access to the technology, are patients able to use it, and do they want to? The last point is critical: In the UK as well as the United States, trust in health care varies considerably and, broadly speaking, in cardiovascular disease tracks unfortunately and inversely with clinical need.

Indeed, one well-grounded reason for suspicion points to another problem for the equity-driven enthusiasm for AI-ECG: The training and validation of the AI algorithms themselves. The “black box” nature of some forms of AI, where the reasons for model prediction cannot easily be inferred, has appropriately led to concerns over insidious algorithmic bias and subsequent reservations around deploying these tools for patient care.Footnote 12 Even low-tech heart failure screening confronts this same problem, as (for example) the most widely used biomarker for heart failure diagnosis has well-known performance variability according to age, sex, ethnicity, patient weight, renal function, and clinical comorbidities.Footnote 13 Conversely, studies to date have suggested that AI-ECG for heart failure detection does not exhibit these biases. It may still be the case that biases exist but require larger-scale deployment to manifest.

To address these concerns, we propose several programmatic features as essential and intentional for reinforcing the potential of wide-scale screening to promote equity. First, it is imperative for program managers to systematically collect self-identified race, ethnicity, and other socioeconomic data (e.g., language, education) from all participants at each level of outreach – screened, invited, agreed, successfully tested, identified as “positive,” and referred for specialist evaluation – along with downstream clinical results. Disproportionate representation at each level, and differential drop-out at each step, must be explored, but that can only begin with high-quality patient-level data to inform analyses and program refinement. This is an aspiration dependent on first resolving the outlined issues with trust. Trust in AI-ECG may be further buttressed in several ways, recognizing the resource limitations available for screening programs generally. One option may be to accommodate skeptical patients with suitable alternative means of participation. This could simply involve having patients attend an in-person appointment during which the AI-ECG examination is performed on them by a health care professional.
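The stage-by-stage data collection proposed above amounts to a funnel analysis stratified by patient characteristics. A minimal sketch, with invented counts and group labels, is shown below; in practice each group would be a self-identified demographic or socioeconomic category.

```python
# Minimal sketch of the proposed equity monitoring: retention through the screening
# funnel, stratified by a self-identified characteristic. All counts are invented.
STAGES = ["invited", "agreed", "successfully_tested", "positive", "attended_referral"]

funnel = {
    "group_A": dict(invited=1000, agreed=620, successfully_tested=560, positive=45, attended_referral=41),
    "group_B": dict(invited=1000, agreed=410, successfully_tested=300, positive=24, attended_referral=15),
}

def stage_conversion(counts: dict) -> dict:
    """Proportion retained from each stage to the next."""
    return {f"{a}->{b}": round(counts[b] / counts[a], 3)
            for a, b in zip(STAGES, STAGES[1:]) if counts[a] > 0}

for group, counts in funnel.items():
    print(group, stage_conversion(counts))
# Large between-group gaps at any step (here, at 'agreed' and 'successfully_tested')
# would trigger investigation and program refinement.
```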

The patient end-user needs to feel trust and confidence in using the technology. This can be achieved through user-centric design that prioritizes a simple protocol to maximize uptake, while retaining the requisite level of technical detail to ensure adequate recording quality (e.g., positioning the device correctly). The accuracy of AI-ECG depends on these factors, in contrast with other point-of-care technologies where the acquisition of the “input” is less subject to variability (e.g., finger-prick blood drop tests).

The centralized administration of NHS screening programs by NHS England, paired with NHS Digital’s repository data on screening uptake, offers granular insights to anticipate and plan for regions and groups at risk of low uptake. We propose enshrining a dedicated data monitoring plan into the AI-ECG screening protocol, with prespecified targets for uptake and defined mitigation strategies – monitored in near real-time. This is made possible through the unique connectivity (for a screening technology) of the platform driving AI-ECG, with readily available up-to-date data flows for highlighting disparities in access. However, a more proactive approach to targeting individuals within a population with certain characteristics needs to be balanced against the risk of stigmatization, and, ultimately, potential loss of trust that may further worsen the very cardiovascular outcomes the program seeks to improve.

Lastly, equity concerns around algorithmic performance are necessarily empirical questions that will also benefit from patient-level data collection. We acknowledge that moving from research in the form of prospective validation studies to deployment for patient care requires judgment in the absence of consensus, within the NHS or more globally, around the minimum scrutiny for an acceptable level (if any) of differential performance across – for starters – age, sex, and ethnicity. To avoid these potentially impactful innovations remaining in the domain of research, and to anticipate the wide-reaching implications of a deployment found to exhibit bias retrospectively, one possible solution would be to build prospective monitoring for inconsistent test performance into the program’s design. Specifically, in the context of AI-ECG offering a binary yes or no screening test result for heart failure, it is important to measure the rate of false positive and false negative results. False positives can be measured through the AI-ECG technology platform linking directly into primary care EHR data. This allows positive AI-ECG results to be correlated with the outcomes of downstream gold-standard, definitive investigations for heart failure (e.g., echocardiography, an ultrasound scan of the heart). Such a prospective approach is less feasible for false negatives due to both the potentially longer time horizon for the disease to manifest and the uncertainty around whether AI-ECG truly missed the diagnosis. Instead, measuring the rate of false negatives may require a more expansive approach in the form of inviting a small sample of patients with negative AI screening tests for “quality control” next-step investigations. All of this risks adding complexity and, therefore, cost to a pathway seeking to simplify and save money. However, given this program’s position at the vanguard of AI deployments for health, a permissive approach balanced with checkpoints for sustained accuracy may help to blueprint best practices and build confidence for similar AI applications in additional disease areas.
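One way to make the proposed false-positive monitoring concrete is to link each positive AI-ECG result to its downstream echocardiography outcome and summarize the positive predictive value (whose complement is the false-positive share among positives), overall and by subgroup. The sketch below uses invented linked records; false negatives, as noted above, cannot be captured this way.

```python
# Sketch of prospective accuracy monitoring: among AI-ECG positives, what share is
# confirmed by downstream echocardiography? All linked records are invented.
from collections import defaultdict

# Each entry: (subgroup, ai_ecg_positive, echo_confirmed_heart_failure)
linked_results = [
    ("female_under_65", True, True), ("female_under_65", True, False), ("female_under_65", True, True),
    ("male_65_plus", True, True), ("male_65_plus", True, True),
    ("male_65_plus", True, False), ("male_65_plus", True, False),
]

def positive_predictive_value(rows):
    positives = [r for r in rows if r[1]]
    return sum(r[2] for r in positives) / len(positives) if positives else None

by_group = defaultdict(list)
for row in linked_results:
    by_group[row[0]].append(row)

print("overall PPV:", round(positive_predictive_value(linked_results), 2))
for group, rows in by_group.items():
    ppv = positive_predictive_value(rows)
    print(group, "PPV:", round(ppv, 2), "| false-positive share among positives:", round(1 - ppv, 2))
```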

B Agency

Another positive argument for AI-ECG screening aligns with trends in promoting agency, understood here as patient empowerment, particularly around the use of digital devices to measure, monitor, and manage one’s own health care, especially in cardiovascular disease. The enthusiastic commercial uptake of fitness wearables, for example, moved quickly past counting steps to incorporate heart rhythm monitoring.Footnote 14 Testing of these distributed technologies has shown mixed results, with the yield of positive cases necessarily depending on the population at issue.Footnote 15 Recalling the equity concerns above, the devices themselves may be more popular among younger and healthier patients, among whom true positive diagnoses may be uncommon. However, invitation-based, targeted screening with AI-ECG may address these concerns by enriching the screened population for those at genuine risk.

Realistic concerns about agency extend beyond the previous warnings about digital literacy, access to reliable internet, and language barriers to ask more fundamental questions about whether patients actually want to assume this central role in their own health care. A key parallel here is the advent of mandates for shared decision-making in cardiovascular disease, particularly in the United States where federal law now requires selected Medicare beneficiaries considering certain cardiovascular procedures to incorporate “evidence based shared decision-making tools” in their treatment choices.Footnote 16 However, patients may reasonably ask if screening with AI-ECG should necessarily shift the key role of test administration (literally) into their hands. Unlike the only other at-home national screening test in the UK – simply taking a stool sample for bowel cancer screening – self-application of AI-ECG requires the successful execution of several codependent steps. Here, even a relatively low failure rate may prove untenable for population-wide scaling, with the risk that this technology remains in the physician’s office.

Putting such responsibility on patients arguably not only shifts it directly away from clinicians, but also dilutes learning opportunities. Though subtle, shifting the cognitive work of integrating complex signs and symptoms into a syndromic diagnosis like heart failure away from clinicians may have unwelcome implications for their diagnostic skills. We emphasize that this is not just whimsical nostalgia for a more paternalistic time in medicine, but a genuine worry about reductionism in algorithmic diagnosis that oversimplifies complex constellations of findings into simple yes or no diagnoses (AI-ECG, strictly speaking, only flags a risk of heart failure, which is not clinically equivalent to a diagnosis). Resolving these tensions may be possible by recognizing the educational opportunities and wider clinical applications of the hardware enabling AI-ECG.

Careful metrics, as described previously, will allow concerns about agency to be considered empirically, at least within the categories of patient data collected. If, for example, the utilization of AI-ECG varies sharply according to age, race, ethnicity, or language fluency, this would merit investigation specifically interrogating whether this variability rests in part on patient preferences for taking on this task rather than an inability to do so. At the same time, early patient experiences with AI-ECG in real-world settings may provide opportunities for patient feedback regarding whether this specific device, or the larger role being asked of them in their own care, is perceived as an appropriate assignment of responsibility or an imposition. If, for example, patients experience this shifting of cardiovascular screening out of the office as an inappropriate displacement of care out of traditional settings, this may suggest the need for either refining the pathway (still using the device, but perhaps keeping it in a clinical setting) or more extensive community engagement and education to ensure stakeholder agreement on roles, rights, and responsibilities.

C Data Rights

A central government, NHS-funded public screening program making use of patients’ own smartphones necessarily raises important questions about data rights. Beyond the expected guardrails required by the General Data Protection Regulation (GDPR) and UK-specific health data legislation, AI-ECG introduces additional concerns. One is whether patient participants should be obligated to contribute their health data toward the continuous refinement of the AI-ECG algorithms themselves or instead be given opt-in or opt-out mechanisms of enrollment. We note that, while the technology is deployed in this context by a public agency, the intellectual property for AI-ECG is held by the device manufacturer. Thus, while patients may carry some expectation of potential future benefit from algorithmic refinement, the more obvious rewards accrue to private entities. Another potential opportunity, not lost on the authors as overseers for the nascent AI-ECG program, is the possibility that AI-ECG data linked to patients’ EHRs might support entirely new diagnostic discovery beyond the core cardiovascular conditions at issue. Other conditions may similarly have subtle manifestations in ECG waveforms, phonocardiography, or their combination – invisible to humans but not AI – that could plausibly emerge from widespread use. Beyond just opt-in or opt-out permissions – known to be problematic for meaningful engagement with patient consentFootnote 17 – what control should patients have around the use of their health data in this context? For example, the NHS now holds a rich variety of health data for each patient – including free text, imaging, and blood test results. Patients may be happy to offer some but not all of this data for application to their own health, and may make different decisions about what can be used for AI product development.

Lastly, AI-ECG will need to consider data security carefully, including the possibility, however remote, of malicious intent or motivated intruders entering the system. Health data can be monetized by cyber criminals. Cyber threat modelling should be performed by the device manufacturer early in the design phase to identify possible threats and their mitigations.Footnote 18 Documentation provided about embedded data security features adds valuable information for patients who may have concerns about the protection of their personal data, and can help them make informed decisions about using AI-ECG. Beyond privacy, threat modelling should also account for patient safety, such as from an intruder with access that allows the manipulation of code or data. For example, it could be possible to manipulate results to deprive selected populations of appropriate referrals for care. Sabotaging results or causing a denial-of-service situation by flooding the system with incorrect data might also cause damage to the reputation of the system in such a way that patients and clinicians become wary of using it. Overall, anticipating these security and other data rights considerations beyond the relatively superficial means of user agreements remains an unmet challenge for AI-ECG.

V Final Recommendations

This chapter has outlined a novel clinical pathway to screen for cardiovascular disease using an at-home, patient self-administered AI technology that can provide a screening capability beyond human expertise. We set this against a backdrop of: (1) A diverse ecosystem of stakeholders impacted by and responsible for AI-ECG, spanning patients, NHS clinicians, NHS agencies, and the responsible regulatory and health economic bodies and (2) a health-policy landscape eager to progress the “use of decision support and AI” as part of a wider push to decentralize (i.e., modernize) care. To address the outlined considerations of equity, agency, and data rights, we propose two principal recommendations, framed around the pathway example of AI-ECG but generalizable beyond it.

First, we advocate for a multi-agency approach that balances permissive regulation and deployment – to align with the speed of AI innovation – against ethical and statutory obligations to safeguard public health. Bodies such as NHS England, the MHRA, and NICE each have unique responsibilities, but with cross-cutting implications. The clinical and health economic case for urgent innovation for unmet needs, such as AI-ECG for heart failure, is obvious and compelling. Agencies working sequentially delay the translation of such innovations into clinical practice, missing opportunities to avert substantial cardiovascular morbidity and mortality. Instead, the identification of a potentially transformative technology should trigger the agencies to work together and in parallel to support timely deployment within clinical pathways to positively impact patient care. This approach holds not only during initial deployment, but also as technology progresses. Here, we could consider the challenge of AI algorithms continually iterating (i.e., improving): For a given version of AI-ECG, the MHRA grants regulatory approval, NICE endorses procurement, and NHS England guides implementation. After evaluating a medical AI technology and deeming it safe and effective, should these agencies limit its authorization to market only the version of the algorithm that was submitted, or permit the marketing of an algorithm that can learn and adapt to new conditions?Footnote 19 AI-ECG could continually iterate by learning from the ECG data accumulated during deployment, and also through continuing improvements in machine learning methodology and computational power. Cardiovascular data, including waveforms, imaging, blood, and physiological parameters, is generally high volume and repeatedly measured. This, therefore, offers a rich seam for taking advantage of AI’s defining strength to continually improve, unlike ordinary “medical devices.” Recalling the ship of Theseus, at what point is the algorithm substantially different to the original, and what prospective validation, if any, is needed if the claims remain the same? Multi-agency collaboration can reach a consensus on such questions, so that unfamiliarity with the AI lifecycle does not disrupt the delivery of care by forcing a reactive reset each time a new (i.e., improved) version arrives. For AI-ECG, this could involve the expensive and time-consuming repetition of high-volume patient recruitment to validation research studies. Encouragingly, in a potential move toward multi-agency collaboration, in 2022, NHS England commissioned NICE to lead a consultation for a digital health evidence standards framework that aims to better align with regulators.Footnote 20

Second, both to account for the ethical considerations outlined in this chapter and to balance any faster implementation of promising AI technologies, we recommend a centralized responsibility for NHS England to deploy and thoroughly evaluate programs such as AI-ECG. This chapter has covered some of the critical variables to measure that will be unique to using an AI technology for patient self-administered screening at home. Forming a comprehensive list would, again, be amenable to a multi-agency approach, where NHS England can draw on the playbook already used to monitor existing national screening programs. An evaluation framework addressing the outlined considerations around equity, agency, and data rights should be considered not only an intrinsic but a mandatory part of the design, deployment, and ongoing surveillance of AI-ECG. The inherent connectivity and instant data flow of such technology offers, unlike screening programs to date, the opportunity for real-time monitoring and, therefore, prompt intervention, not only for clinical indications, but also for any disparities in uptake, execution, algorithm performance, or cybersecurity. Ultimately, this will bolster the NHS’s position not only as a world leader in standards for patient safety, but also as an exemplar system for realizing effective AI-driven health care interventions.

Looking to the future for AI-ECG, translating the momentum for technological innovation in the NHS into patient benefit will require careful consideration of the outlined ethical pitfalls. This may, in the short term, establish best practices that build confidence for further applications. In the longer term, we see a convergence of commoditized AI algorithms for cardiovascular and wider disease, where increasingly sophisticated sensor technology may make future home-based screening a completely passive act. While moving toward such a reality could unlock major public health benefits, doing so will depend on bold early use cases, such as AI-ECG, that reveal unanticipated ethical challenges and allow them to be resolved. For now, the outlined policy recommendations can serve to underpin the stewardship of such novel diagnostic pathways in a way that preserves and promotes trust, patient engagement, and public health.

VI Conclusion

Patient self-administered screening for cardiovascular disease at home using an AI-powered technology offers substantial potential public health benefits, but also poses unique ethical challenges. We recommend a multi-agency approach to the lifecycle of implementing such AI technology, combined with a centrally overseen, mandatory prospective evaluation framework that monitors for equity, agency, and data rights. Assuming the responsibility to proactively address any observed neglect of these considerations instills trust as the foundation for the sustainable and impactful implementation of AI technologies for clinical application within patients’ own homes.

6 The Promise of Telehealth for Abortion

Greer Donley and Rachel Rebouché
I Introduction

The COVID-19 pandemic catalyzed a transformation of abortion care. For most of the last half century, abortion was provided in clinics outside of the traditional health care setting.Footnote 1 Though a medication regimen was approved in 2000 to terminate a pregnancy without a surgical procedure, the Food and Drug Administration (FDA) required, among other things, that the drug be dispensed in person at a health care facility (the “in-person dispensing requirement”).Footnote 2 This requirement dramatically limited the medication’s promise to revolutionize abortion because it subjected medication abortion to the same physical barriers as procedural care.Footnote 3

Over the course of the COVID-19 pandemic, however, that changed. The pandemic’s early days exposed how the FDA’s in-person dispensing requirement facilitated virus transmission and hampered access to abortion without any medical benefits.Footnote 4 This realization created a fresh urgency to lift the FDA’s unnecessary restrictions. Researchers and advocates worked in concert to highlight evidence undermining the need for the in-person dispensing requirement,Footnote 5 which culminated in the FDA permanently removing the requirement in December 2021.Footnote 6

The result is an emerging new normal for abortion through ten weeks of pregnancy – telehealth – at least in the states that allow it.Footnote 7 Abortion by telehealth (what an early study dubbed “TelAbortion”) generally involves a pregnant person meeting online with a health care professional, who evaluates whether the patient is a candidate for medication abortion, and, if so, whether the patient satisfies informed consent requirements.Footnote 8 Pills are then mailed directly to the patient, who can take them and complete an abortion at home. This innovation has made earlier-stage abortions cheaper, less burdensome, and more private, reducing some of the barriers that delay abortion and compromise access.Footnote 9

In this chapter, we start with a historical account of how telehealth for abortion emerged as a national phenomenon. We then offer our predictions for the future: A future in which the digital transformation of abortion care is threatened by the demise of constitutional abortion rights. We argue, however, that the de-linking of medication abortion from in-person care has triggered a zeitgeist that will create new avenues to access safe abortion, even in states that ban it. As a result, the same states that are banning almost all abortions after the Supreme Court overturned Roe v. Wade will find it difficult to stop their residents from accessing abortion online. Abortion that is decentralized and independent of in-state physicians will undermine traditional state efforts to police abortion as well as create new challenges of access and risks of criminalization.

II The Early Abortion Care Revolution

Although research on medication abortion facilitated by telehealth began nearly a decade ago, developments in legal doctrine, agency regulation, and online availability over the last few years have ushered in remote abortion care and cemented its impact. This part reviews this recent history and describes the current model for providing telehealth for abortion services.

A The Regulation of Medication Abortion

In 2020, medication abortions comprised 54 percent of the nation’s total abortions, a share that has steadily increased over the past two decades.Footnote 10 A medication abortion in the United States typically has involved taking two types of drugs, mifepristone and misoprostol, often 24 to 48 hours apart.Footnote 11 The first medication detaches the embryo from the uterus and the second induces uterine contractions to expel the tissue.Footnote 12 Medication abortion is approved by the FDA to end pregnancies through ten weeks of gestation, although some providers will prescribe its use off-label through twelve or thirteen weeks.Footnote 13

The FDA restricts mifepristone under a system intended to ensure the safety of particularly risky drugs – a Risk Evaluation and Mitigation Strategy (REMS).Footnote 14 The FDA can also issue a REMS with Elements to Assure Safe Use (ETASU), which can circumscribe distribution and limit who can prescribe a drug and under what conditions.Footnote 15 The FDA instituted a REMS with ETASU for mifepristone, the first drug in the medication abortion regimen, which historically mandated, among other requirements, that patients collect mifepristone in person at a health care facility, such as a clinic or physician’s office.Footnote 16 Thus, under the ETASU, certified providers could not dispense mifepristone through the mail or a pharmacy. Several states’ laws impose their own restrictions on abortion medication in addition to the FDA’s regulations, including mandating in-person pick-up, prohibiting telehealth for abortion, or banning the mailing of medication abortion; at the time of writing in 2023, most of those same states, save eight, ban almost all abortion, including medication abortion, from the earliest stages of pregnancy.Footnote 17

In July 2020, a federal district court in American College of Obstetricians & Gynecologists (ACOG) v. FDA temporarily suspended the in-person dispensing requirement and opened the door to the broader adoption of telehealth for abortion during the course of the pandemic.Footnote 18 Well before this case, in 2016, the non-profit organization, Gynuity, received an Investigational New Drug Approval to study the efficacy of providing medication abortion care by videoconference and mail.Footnote 19 In the study, “TelAbortion,” providers counselled patients online, and patients confirmed the gestational age with blood tests and ultrasounds at a location of their choosing.Footnote 20 As the pandemic took hold, patients who were not at risk for medical complications, were less than eight weeks pregnant, and had regular menstrual cycles could forgo ultrasounds and blood tests, and rely on home pregnancy tests and a self-report of the first day of their last menstrual period. The results of the study indicated that a “direct-to-patient telemedicine abortion service was safe, effective, efficient, and satisfactory.”Footnote 21 Since Gynuity’s study, additional research has demonstrated that abortion medication can be taken safely and effectively without in-person oversight.Footnote 22

The ACOG court’s temporary suspension of the in-person dispensing requirement in 2020 relied on this research. The district court held that the FDA’s requirement contradicted substantial evidence of the drug’s safety and singled out mifepristone without providing any corresponding health benefit.Footnote 23 The district court detailed how the in-person requirement exacerbated the burdens already shouldered by those disproportionately affected by the pandemic, emphasizing that low-income patients and people of color, who are the majority of abortion patients, are more likely to contract and suffer the effects of COVID-19.Footnote 24 While the district court’s injunction lasted, virtual clinics began operating, providing abortion care without satisfying any in-person requirements.Footnote 25

The FDA appealed the district court’s decision to the US Court of Appeals for the Fourth Circuit and petitioned the Supreme Court for a stay of the injunction in October and again in December 2020. The briefs filed by the Trump Administration’s solicitor general and ten states disputed that the in-person dispensing requirement presented heightened COVID-19 risks for patients.Footnote 26 Indeed, some of the same states that had suspended abortion as a purported means to protect people from COVID-19 now argued that the pandemic posed little threat for people seeking abortion care.Footnote 27 ACOG highlighted the absurdity of the government’s position. The FDA could not produce evidence that any patient had been harmed by the removal of the in-person dispensing requirement, whereas, in terms of COVID-19 risk, “the day Defendants filed their motion, approximately 100,000 people in the United States were diagnosed with COVID-19 – a new global record – and nearly 1,000 people died from it.”Footnote 28

The Supreme Court was not persuaded by ACOG’s arguments. In January 2021, the Court stayed the district court’s injunction pending appeal with scant analysis.Footnote 29 Chief Justice Roberts, in a concurrence, argued that the Court must defer to “politically accountable entities with the background, competence, and expertise to assess public health.”Footnote 30 Justice Sotomayor dissented, citing the district court’s findings and characterizing the reimposition of the in-person dispensing requirement as “unnecessary, unjustifiable, irrational” and “callous.”Footnote 31

The impact of the Supreme Court’s order, however, was short-lived. In April 2021, the FDA suspended the enforcement of the requirement throughout the course of the pandemic and announced that it would reconsider aspects of the REMS.Footnote 32 In December 2021, the FDA announced that it would permanently lift the in-person dispensing requirement.

Other aspects of the mifepristone REMS, however, have not changed. The FDA still mandates that only certified providers who have registered with the drug manufacturer may prescribe the drug (the “certified provider requirement”), which imposes an unnecessary administrative burden that reduces the number of abortion providers.Footnote 33 An additional informed consent requirement – the FDA-required Patient Agreement Form, which patients sign before beginning a medication abortion – also remains in place despite repeating what providers already communicate to patients.Footnote 34 The FDA also added a new ETASU requiring that only certified pharmacies can dispense mifepristone.Footnote 35 The details of pharmacy certification were announced in January 2023; among other requirements, a pharmacy must agree to particular record-keeping, reporting, and medication tracking efforts, as well as designate a representative to ensure compliance.Footnote 36 This requirement, as it is implemented, could mirror the burdens associated with the certified provider requirement, perpetuating the FDA’s unusual treatment of this safe and effective drug.Footnote 37

Despite these restrictions, permission for providers and, at present, two online pharmacies to mail medication abortion has allowed virtual abortion clinics to proliferate in states that permit telehealth for abortion.Footnote 38 As explored below, this change has the potential to dramatically increase access to early abortion care, but there are obstacles that can limit such growth.

B Telehealth for Abortion

A new model for distributing medication abortion is quickly gaining traction across the country: Certified providers partnering with online pharmacies to mail abortion medication to patients after online intake and counseling.Footnote 39 For example, the virtual clinic, Choix, prescribes medication abortion to patients up to ten weeks of pregnancy in Maine, New Mexico, Colorado, Illinois, and California.Footnote 40 The founders describe how Choix’s asynchronous telehealth platform works:

Patients first sign up on our website and fill out an initial questionnaire, then we review their history and follow up via text with any questions. Once patients are approved to proceed, they’re able to complete the consent online. We send our video and educational handouts electronically and make them available via our patient portal. We’re always accessible via phone for patients.Footnote 41

The entire process, from intake to receipt of pills, takes between two to five days and the cost is $289, which is significantly cheaper than medication abortions offered by brick-and-mortar clinics.Footnote 42 Advice on taking the medication abortion and possible complications is available through a provider-supported hotline.Footnote 43 Choix is just one of many virtual clinics. Another virtual clinic, Abortion on Demand, provides medication abortion services to twenty-two states.Footnote 44 Many virtual clinics translate their webpages into Spanish but do not offer services in Spanish or other languages, although a few are planning to incorporate non-English services.Footnote 45

As compared to brick-and-mortar clinics, virtual clinics and online pharmacies provide care that costs less, offers more privacy, increases convenience, and reduces delays without compromising the efficacy or quality of care.Footnote 46 Patients no longer need to drive long distances to pick up safe and effective medications before driving back home to take them. In short, mailed pills can untether early-stage abortion from a physical place.Footnote 47

Telehealth for abortion, however, has clear and significant limitations. As noted above, laws in about half of the country prohibit, explicitly or indirectly, telemedicine for abortion. And telemedicine depends on people having internet connections and computers or smartphones, requirements that can be a barrier for low-income communities.Footnote 48 Even with a telehealth-compliant device, “[patients] may live in communities that lack access to technological infrastructure, like high-speed internet, necessary to use many dominant tele-health services, such as virtual video visits.”Footnote 49 Finally, the FDA has approved medication abortion only through ten weeks of gestation.

These barriers, imposed by law and in practice, will test how far telehealth for abortion can reach. As discussed below, the portability of medication abortion opens avenues that strain the bounds of legality, facilitated in no small part by the networks of advocates that have mobilized to make pills available to people across the country.Footnote 50 But extralegal strategies could have serious costs, particularly for those already vulnerable to state surveillance and punishment.Footnote 51 And attempts to bypass state laws could have serious consequences for providers, who are subject to professional, civil, and criminal penalties, as well as those who assist providers and patients.Footnote 52

III The Future of Abortion Care

The COVID-19 pandemic transformed abortion care, but the benefits were limited to those living in states that did not have laws requiring in-person care or prohibiting the mailing of abortion medication.Footnote 53 This widened a disparity in abortion access that has been growing for years between red and blue states.Footnote 54

On June 24, 2022, the Supreme Court issued its decision in Dobbs v. Jackson Women’s Health Organization, upholding Mississippi’s fifteen-week abortion ban and overturning Roe v. Wade.Footnote 55 Twenty-four states have attempted to ban almost all abortions, although ten of those bans have been halted by courts.Footnote 56 At the time of writing, pregnant people in the remaining fourteen states face limited options: Continue a pregnancy against their will, travel out of state to obtain a legal abortion, or self-manage their abortion in their home state.Footnote 57 Data from Texas, where the SB8 legislationFootnote 58 effectively banned abortion after roughly six weeks of pregnancy months before Dobbs, suggests that only a small percentage of people will choose the first option: Because Texans traveled out of state or self-managed, the number of abortions they received dropped by only 10–15 percent.Footnote 59 Evidence from other countries and the United States’ own pre-Roe history also demonstrates that abortion bans do not stop abortions from happening.Footnote 60

Traveling to a state where abortion is legal, however, is not an option for many people.Footnote 61 Yet unlike the pre-Roe era, there is another means to safely end a pregnancy – one that threatens the antiabortion movement’s ultimate goal of ending abortion nationwide:Footnote 62 Self-managed abortion with medication. Self-managed abortion generally refers to abortion obtained outside of the formal health care system.Footnote 63 Thus, self-managed abortion can include a pregnant person buying medication abortion online directly from an international pharmacy (sometimes called self-sourced abortion) and a pregnant person interacting with an international or out-of-state provider via telemedicine, who ships them abortion medication or calls a prescription into an international pharmacy on their behalf.Footnote 64

Because many states have heavily restricted abortion for years, self-managed abortion is not new. The non-profit organization Aid Access started providing medication abortion to patients in the United States in 2017.Footnote 65 Each year, the number of US patients it has served has grown.Footnote 66 Once Texas’s SB8 became effective, Aid Access saw demand for its services increase 1,180 percent, leveling out at 245 percent of pre-SB8 demand a month later.Footnote 67 Similarly, after Dobbs, the demand for Aid Access doubled, tripled, or even quadrupled in states with abortion bans.Footnote 68 There are advantages to self-managed abortion: The price is affordable (roughly $105 when using foreign providers and pharmacies) and the pregnant person can have an abortion at home.Footnote 69 The disadvantages are that receiving the pills can take one to three weeks when shipped internationally and that self-management carries the legal risks explored below.

The portability of abortion medication, combined with the uptake of telehealth, poses an existential crisis for the antiabortion movement. Just as the movement achieved its decades-long goal of overturning Roe, the nature of abortion care shifted and decentralized, making it difficult to police and control.Footnote 70 Before the advent of abortion medication, pregnant people depended on the help of a provider to end their pregnancies.Footnote 71 They could not do it alone. As a result, states would threaten providers’ livelihood and freedom, driving providers out of business and leaving patients with few options.Footnote 72 Many turned to unqualified providers who offered unsafe abortions that led to illness, infertility, and death.Footnote 73 But abortion medication created safe alternatives for patients that their predecessors lacked. Because abortion medication makes the involvement of providers no longer necessary to terminate early pregnancies, the classic abortion ban, which targets providers, will not have the same effect.Footnote 74 And out-of-country providers who help patients self-manage abortions remain outside of a state’s reach.Footnote 75

The antiabortion movement is aware of this shifting reality. Indeed, antiabortion state legislators are introducing and enacting laws specifically targeting abortion medication – laws that would ban it entirely, ban its shipment through the mail, or otherwise burden its dispensation.Footnote 76 Nevertheless, it is unclear how states will enforce these laws. Most mail goes in and out of states without inspection.Footnote 77

This is not to suggest that self-management will solve the post-Roe abortion crisis. For one, medication abortion is generally not recommended for self-management beyond the first trimester, meaning later-stage abortion patients, who comprise less than 10 percent of the patient population, will either need to travel to obtain an abortion or face the higher medical risks associated with self-management.Footnote 78 Moreover, pregnant patients may face legal risks in self-managing an abortion in an antiabortion state.Footnote 79 Historically, legislators were unwilling to target abortion patients themselves, but patients and their in-state helpers may become more vulnerable as legislatures and prosecutors reckon with their inability to target in-state providers. These prosecutions may arise in a few ways.

First, even if shipments of abortion medication largely go undetected, a small percentage of patients will experience side effects or complications that lead them to seek treatment in a hospital.Footnote 80 Self-managed abortions mimic miscarriage, which will aid some people in evading abortion laws, although some patients may reveal to a health care professional that their miscarriage was induced with abortion medication.Footnote 81 And even with federal protection for patient health information,Footnote 82 hospital employees could report those they suspect of abortion-related crimes.Footnote 83 This will lead to an increase in the investigation and criminalization of both pregnancy loss and abortion.Footnote 84 This is how many people have become targets of criminal prosecution in other countries that ban abortion.Footnote 85

Second, the new terrain of digital surveillance will play an important role. Any time the state is notified of someone who could be charged with an abortion-related crime, the police will be able to obtain a warrant to search that person’s digital life if they have sufficient probable cause. Anya Prince has explained the breadth of the reproductive health data ecosystem, in which advertisers and period tracking apps can easily capture when a person is pregnant.Footnote 86 Proliferating “digital diagnostics” (for instance, wearables that track and assess health data) could become capable of detecting a possible pregnancy based on physiologic signals, such as temperature and heart rate, perhaps without the user’s knowledge. As Prince notes, this type of information is largely unprotected by privacy laws, and companies may sell it to state entities.Footnote 87 Technology that indicates that a person went from “possibly pregnant” to “not pregnant” without a documented birth could signal an abortion worthy of investigation. Alternatively, pregnancy data combined with search histories regarding abortion options, geofencing data of out-of-state trips, and text histories with friends could be used to support abortion prosecutions.Footnote 88 Antiabortion organizations could also set up fake virtual clinics – crisis pregnancy centers for the digital age – to identify potential abortion patients and leak their information to the police.Footnote 89

These technologies will test conceptions of privacy as people voluntarily offer health data that can be used against them.Footnote 90 Law enforcement will, as it has with search engine requests and electronic receipts, use this digital information against people self-managing abortions.Footnote 91 And, almost certainly, low-income people and women of color will be the primary targets of pregnancy surveillance and criminalization.Footnote 92 This is already true: Although rates of drug use during pregnancy are the same among white women and women of color, Black women are ten times more likely to be reported to authorities.Footnote 93 And because low-income women and women of color are more likely to seek abortion and less likely to have early prenatal care, any pregnancy complications they experience may be viewed suspiciously.Footnote 94

State legislatures and the federal government can help to protect providers and patients in the coming era of abortion care, although their actions may have a limited reach.Footnote 95 At the federal level, the FDA could assert that its regulation of medication abortion preempts contradictory state laws, potentially creating a nationwide, abortion-medication exception to state abortion bans.Footnote 96 The federal government could also use federal laws and regulations that govern emergency care, medical privacy, and Medicare and Medicaid reimbursement to preempt state abortion laws and reduce hospital-based investigations, though the impact of such laws and regulations would be more limited.Footnote 97 As this chapter goes to press in 2023, the Biden Administration is undertaking some of these actions.Footnote 98

State policies in jurisdictions supportive of abortion rights can also improve access for patients traveling to them. States can invest in telehealth generally and continue to loosen restrictions on telemedicine, as many states have done in response to the pandemic, which would reduce demand at brick-and-mortar abortion clinics and narrow disparities in technology access.Footnote 99 They can also join interstate licensure compacts, which could extend the reach of telehealth for abortion in the states that permit the practice and allow providers to pool resources and provide care across state lines.Footnote 100 States can also pass abortion shield laws to insulate their providers who care for out-of-state residents, refusing to cooperate in out-of-state investigations, lawsuits, prosecutions, or extradition requests related to abortion.Footnote 101 All of these efforts will help blunt, but by no means stop, the sea change in abortion law and access moving forward. And none of these efforts protect patients, or those who assist them, in states that ban abortion.

IV Conclusion

A post-Dobbs country will be messy. A right that generations took for granted – even though for some, abortion was inaccessible – disappeared in half of the country. The present landscape, however, is not like the pre-Roe era. Innovations in medical care and telehealth have changed abortion care, thwarting the antiabortion movement’s ability to control abortion just as it gained the ability to ban it. Unlike patients in past generations, patients now will be able to access safe abortions, even in states where abortion is illegal. But they will also face legal risks that were uncommon previously, given the new ways for the state to investigate and criminalize them.

As courts and lawmakers tackle the changing reality of abortion rights, we should expect the unexpected – unlikely allies and opponents may coalesce on both sides of the abortion debate. Laws that seek to punish abortion will become harder to enforce as mailed abortion pills proliferate. This will create urgency for some antiabortion states to find creative ways to chill abortion, while other states will be content to ban abortion in law, understanding that it continues in practice. Whom states seek to punish will shift, with authorities targeting not only providers, but also patients, and with the most marginalized patients being the most vulnerable.Footnote 102

7 Monitoring (on) Your Mind: Digital Biomarkers for Alzheimer’s Disease

Claire Erickson and Emily A. Largent
I Introduction

What first comes to mind when you hear the words “Alzheimer’s disease”?

For many, those words evoke images of an older adult who exhibits troubling changes in memory and thinking. Perhaps the older adult has gotten lost driving to church, although it’s a familiar route. Perhaps they have bounced a check, which is out of character for them. Perhaps they have repeatedly left the stove on while cooking, worrying their spouse or adult children. Perhaps they have trouble finding words or are confused by devices like iPhones and, as a result, have lost touch with longtime friends.

Alzheimer’s disease (AD) has traditionally been understood as a clinical diagnosis, requiring the presence of symptoms to be detected. The older adult we just envisioned might make an appointment with their physician. The physician would likely listen to the patient’s medical history – noting the characteristic onset and pattern of impairments – and determine that the patient has dementia, a loss of cognitive and functional abilities severe enough to interfere with daily life. Dementia can have numerous causes, and so the physician would also conduct a comprehensive physical and cognitive examination, perhaps ordering lab tests or brain imaging scans, as well as neuropsychological testing. After excluding other causes, the clinician would diagnose the patient with “probable” AD, a diagnosis that can only be confirmed postmortem via autopsy. This approach to diagnosis interweaves the patient’s experience of disabling cognitive and functional impairments (i.e., dementia) with the label of AD.Footnote 1

Yet, our ability to measure the neuropathology of AD is rapidly evolving, as is our understanding of the preclinical and prodromal stages of disease. Thus, it is now possible to identify individuals who are at risk for developing dementia caused by AD years or even decades before the onset of cognitive decline, through clinical as well as digital monitoring. A key premise of this chapter is that, in the future, the identification of at-risk individuals will continue to occur in clinical settings using traditional biomarker testing; but the identification of at-risk individuals will also increasingly occur closer to – or even in – one’s home, potentially using digital biomarkers.

When you hear “Alzheimer’s disease,” in-home monitoring should come to mind.

Here, we argue that because AD affects the mind, the challenges associated with monitoring aimed at understanding the risk for disabling cognitive impairments are heightened as compared to the challenges of monitoring for physical ailments. In Section II, we discuss the biomarker transformation of AD, which is allowing us to see AD neuropathology in living persons and to identify individuals at increased risk for developing dementia caused by AD. In Section III, we outline empirical evidence regarding five different digital biomarkers; these digital biomarkers offer further insights into an individual’s risk for cognitive impairment and could soon be used for in-home monitoring. Finally, in Section IV, we identify six challenges that are particularly pronounced when monitoring for AD.

II The Evolving Understanding of AD

The field of AD research is rapidly moving from a syndromal definition of AD (see, e.g., the diagnostic process described in Section I) to a biological one. This shift reflects a growing understanding of the mechanisms underlying the clinical expression of AD.Footnote 2

Biomarkers can now be used to identify AD neuropathology in vivo. A biomarker is a “defined characteristic that is measured as an indicator of normal biological processes, pathogenic processes or responses to an exposure or intervention.”Footnote 3 Individuals are understood to have a biomarker profile, which (as we’re using it here) describes the presence or absence in their brain of three AD biomarkers: Beta-amyloid, pathologic tau, and neurodegeneration. These biomarkers can be measured using various modalities, including positron emission tomography (PET), cerebrospinal fluid (CSF) sampling, and magnetic resonance imaging (MRI); moreover, blood-based biomarker tests are now available.Footnote 4

In addition to the biomarker profile, a second, independent source of information is the individual’s cognitive stage. An individual may be cognitively unimpaired (within the expected range of cognitive testing scores and functioning in daily life), have mild cognitive impairment (MCI) – a slight but noticeable decline in cognitive skills – or have dementia. The patient’s biomarker profile can then be used in combination with the patient’s cognitive stage to characterize the patient’s place – and likely progression – along the Alzheimer’s continuum. The continuum spans the preclinical (i.e., clinically asymptomatic with evidence of AD neuropathology) and clinical (i.e., symptomatic) phases of AD.Footnote 5

Individuals in the preclinical stage have AD biomarkers but do not have clinically measurable cognitive impairment. They may be truly cognitively unimpaired, or they may have subjective cognitive decline – a self-experienced decline in cognitive capacity as compared to baseline.Footnote 6 Those with preclinical AD are understood to be at an increased risk of short-term cognitive decline.Footnote 7 An estimated 46.7 million Americans have preclinical AD (defined by amyloidosis, neurodegeneration, or both), though it’s important to emphasize that not all of them will progress to dementia-level impairment.Footnote 8
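
To make the two-axis logic described above concrete, the following minimal sketch (in Python) labels a hypothetical individual using a simplified biomarker profile and cognitive stage. The class, function names, and decision rules are illustrative assumptions for exposition only; they are not the NIA-AA research framework’s formal category definitions.

```python
# A deliberately simplified sketch of combining a biomarker profile with a
# cognitive stage. The decision rules below are illustrative assumptions and
# are not the NIA-AA research framework's formal category definitions.
from dataclasses import dataclass

@dataclass
class BiomarkerProfile:
    amyloid: bool            # beta-amyloid detected?
    tau: bool                # pathologic tau detected?
    neurodegeneration: bool  # evidence of neurodegeneration?

def describe(profile: BiomarkerProfile, cognitive_stage: str) -> str:
    """cognitive_stage is one of 'unimpaired', 'MCI', or 'dementia'."""
    biomarker_positive = profile.amyloid or profile.tau or profile.neurodegeneration
    if not biomarker_positive:
        return "No AD biomarkers detected."
    if cognitive_stage == "unimpaired":
        return "Preclinical AD: biomarker-positive, cognitively unimpaired."
    return f"Clinical phase of the AD continuum: biomarker-positive with {cognitive_stage}."

print(describe(BiomarkerProfile(amyloid=True, tau=False, neurodegeneration=False),
               "unimpaired"))
```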

At present, preclinical AD remains a research construct. It is not yet diagnosed clinically. Researchers hope, however, that intervening earlier rather than later in the course of the disease will allow them to delay or prevent the onset of cognitive and functional impairment. Therefore, they are conducting secondary prevention trials, which recruit individuals who are asymptomatic but biomarker-positive for AD – that is, who have preclinical AD – to test new drugs or novel interventions. It is reasonable to assume that if the preclinical AD construct is validated and if a disease-modifying therapy for AD is found, preclinical AD will move from the research to the clinical context.

In the future, people who receive a preclinical AD diagnosis will have insight into their risk of developing MCI or dementia years or even decades before the onset of impairments.Footnote 9 Monitoring digital biomarkers in the home, the focus of Section III, will likely be complementary to clinical assessment. For instance, monitoring may be used to watch for incipient changes in cognition after a preclinical AD diagnosis. Or, conversely, data generated by in-home monitoring may suggest that it is time to see a clinician for an AD workup.

III Digital Biomarkers of AD

In parallel with our evolving understanding of beta-amyloid, pathologic tau, and neurodegeneration as “traditional” biomarkers of AD, there have been advances in our understanding of digital biomarkers for AD. Efforts to concretely and comprehensively define digital biomarkers are ongoing.Footnote 10 For the purposes of this chapter, we use the following definition: “Objective, quantifiable, physiological, and behavioral data that are collected and measured by means of digital devices, such as embedded environmental sensors, portables, wearables, implantables, or digestibles.”Footnote 11

Digital biomarkers have the potential to flag uncharacteristic behaviors or minor mistakes that offer insights into an older adult’s risk of cognitive and functional decline or to indicate early cognitive decline. As noted above, the preclinical stage of AD is characterized by the presence of biomarkers in the absence of measurable cognitive impairment. Subtle cognitive changes may nonetheless be present, even though they go undetected on standard cognitive tests. There is, in fact, a growing body of evidence that subjective cognitive decline in individuals with unimpaired performance on cognitive tests may represent the first symptomatic manifestation of AD.Footnote 12 These small changes from the individual’s baseline may have downstream effects on complex cognitive and functional behaviors. Digital biomarkers offer a means of capturing these effects.

Here, we discuss five digital biomarkers for AD, highlighting both promising opportunities for monitoring the minds of older adults and limitations in our current knowledge and monitoring abilities. Crucially, these opportunities primarily reside outside of routine clinical settings. These examples are not meant to be exhaustive, but rather have been selected to highlight a range of monitoring modalities involving diverse actors. Moreover, they reveal a variety of potential challenges, which are the focus of Section IV.

A Driving Patterns

Due to the complex processes involved in spatial navigation and vehicle operation, an assessment of driving patterns offers an avenue for detecting changes in thinking and behavior. Indeed, prior studies demonstrate that those with symptomatic AD drive shorter distances and visit fewer unique destinations.Footnote 13 Research also suggests that detectable spatial navigation deficits may precede AD symptom development in cognitively normal individuals with AD biomarkers.Footnote 14 A limitation of this work is that it was conducted using simulators, which only measure performance in very controlled settings and so are limited in their generalizability.Footnote 15 Studies have, therefore, shifted to a naturalistic approach to data collection to characterize daily driving patterns. Researchers can passively collect data using global positioning system (GPS) devices installed in participant vehicles. The resulting information includes average trip distance, number of unique destinations, number of trips with a speed of six miles per hour or more below the posted limit (i.e., underspeed), and a variety of other measures to quantify driving performance.Footnote 16 These studies have found differing behavior and driving patterns between cognitively unimpaired participants with and without AD biomarkers, including a greater decline in the number of days driving per month for those with AD biomarkers.Footnote 17

These findings suggest that assessing driving patterns – as some insurers already do through standalone devices or appsFootnote 18 – may help identify individuals at risk for cognitive decline due to AD.
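
To illustrate how such naturalistic driving data might be reduced to candidate metrics, the following sketch (in Python, using pandas) aggregates per-trip GPS records into the kinds of summary measures mentioned above. The column names, the underspeed threshold, and the example data are hypothetical assumptions for exposition; they do not reflect the instrumentation or variables used in the cited studies.

```python
# A minimal sketch of summarizing per-trip GPS records into candidate driving
# metrics. Column names and example values are hypothetical placeholders.
import pandas as pd

def summarize_driving(trips: pd.DataFrame) -> dict:
    """Aggregate per-trip GPS records into simple driving metrics."""
    trips = trips.copy()
    trips["date"] = pd.to_datetime(trips["date"])
    # Unique driving days per calendar month, averaged across months.
    days_per_month = (
        trips.groupby(trips["date"].dt.to_period("M"))["date"]
        .apply(lambda d: d.dt.date.nunique())
    )
    return {
        "avg_trip_distance_miles": float(trips["trip_miles"].mean()),
        "unique_destinations": int(trips["destination"].nunique()),
        # "Underspeed" trips: driving 6+ mph below the posted limit at some point.
        "underspeed_trips": int((trips["speed_vs_limit_mph"] <= -6).sum()),
        "avg_days_driving_per_month": float(days_per_month.mean()),
    }

example = pd.DataFrame({
    "date": ["2023-05-01", "2023-05-01", "2023-05-04"],
    "trip_miles": [3.2, 5.1, 12.4],
    "destination": ["grocery store", "home", "clinic"],
    "speed_vs_limit_mph": [-2, -8, -1],  # most extreme speed relative to limit
})
print(summarize_driving(example))
```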

B Banking and Finances

Instrumental activities of daily living (IADLs) are complex activities necessary for individuals to live independently, such as managing one’s finances. As AD progresses, IADLs become increasingly impaired. A 2021 study examined longitudinal credit report information for over 80,000 Medicare beneficiaries.Footnote 19 The researchers found that those with an AD or related dementia diagnosis were more likely to have missed bill payments over the six years prior to their dementia diagnosis. They also found that individuals with a dementia diagnosis developed subprime credit scores two-and-a-half years before their diagnosis. In a prospective study of cognitively unimpaired older adults, researchers found that a low awareness of financial and other types of scams was associated with an increased risk for MCI and dementia, though the measure was too weak for prediction at the individual level.Footnote 20

More work is needed to characterize the timeframe of changes in financial management, but detecting changes such as missed payments, bounced checks, or altered purchasing behavior (e.g., repeated purchases) presents another opportunity to identify individuals with preclinical AD. Banking and financial institutions already use algorithms, behavioral analytics, and artificial intelligence (AI)-powered technology to identify unusual transactions or spending behaviors that may be suggestive of fraud.Footnote 21 Similar techniques could be adapted to monitor older adults and notify them of behaviors indicative of dementia risk.
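
As a rough illustration of the kind of rule-based flagging such systems could perform, the following sketch compares recent account activity against a personal baseline. The data fields, thresholds, and function names are hypothetical assumptions; real banking analytics are considerably more sophisticated (and proprietary).

```python
# A hypothetical sketch of rule-based flagging of changes in financial
# management. Field names and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class MonthlyAccountSummary:
    month: str
    missed_payments: int      # bills past due this month
    bounced_checks: int
    duplicate_purchases: int  # identical purchases repeated within a few days

def flag_financial_changes(history: list[MonthlyAccountSummary],
                           lookback: int = 6) -> list[str]:
    """Return human-readable flags when recent months deviate from baseline."""
    if len(history) <= lookback:
        return []  # not enough history to establish a baseline
    baseline, recent = history[:-lookback], history[-lookback:]
    base_missed = sum(m.missed_payments for m in baseline) / len(baseline)
    flags = []
    for m in recent:
        if m.missed_payments > base_missed + 1:
            flags.append(f"{m.month}: unusual number of missed payments")
        if m.bounced_checks or m.duplicate_purchases:
            flags.append(f"{m.month}: bounced checks or repeated purchases")
    return flags
```

Any such flag would, of course, raise the consent and disclosure questions discussed in Section IV.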

C Smart Appliances

Sensors can be deployed in the home to detect cognitive changes in older adults.Footnote 22 In a task-based study, individuals with MCI were shown to spend more time in the kitchen when performing a set of home-based activities.Footnote 23 While in the kitchen, participants with MCI opened cabinets, drawers, and the refrigerator more often than cognitively unimpaired participants did.Footnote 24 Researchers are exploring whether it is possible to use similar techniques to differentiate between healthy controls and individuals with preclinical AD.Footnote 25 A challenge for such monitoring studies (and, by extension, for real-life uptake) is the need to deploy multiple sensors in the home. One study attempted to circumvent this issue by focusing on passive in-home power usage for large appliances; the team found, on average, lower daily and seasonal usage of appliances among people with cognitive impairment.Footnote 26

Smart appliances, like refrigerators and ovens, connect to the internet and can sync with smartphones or other devices. They are already in many homes and are another alternative to sensor-based systems for detecting early cognitive changes. Smart refrigerators could track the frequency with which they are opened and for how long. Similarly, smart ovens may track the time they are left on. Such usage information could then be shared with the consumer by the appliance itself, for example, via an app.
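
The following sketch suggests how smart-refrigerator event logs might be summarized into the simple usage measures described above. The event format and the two-minute threshold are illustrative assumptions, not any appliance vendor’s actual interface.

```python
# An illustrative sketch of turning hypothetical smart-fridge door events
# into daily usage summaries; the event format is an assumption, not an API.
from datetime import datetime

def summarize_fridge_events(events: list[tuple[str, str]]) -> dict:
    """events: list of (opened_at, closed_at) ISO timestamps for one day."""
    durations = []
    for opened, closed in events:
        t0 = datetime.fromisoformat(opened)
        t1 = datetime.fromisoformat(closed)
        durations.append((t1 - t0).total_seconds())
    return {
        "door_openings": len(durations),
        "mean_open_seconds": sum(durations) / len(durations) if durations else 0.0,
        "long_openings": sum(d > 120 for d in durations),  # open > 2 minutes
    }

print(summarize_fridge_events([
    ("2023-05-01T08:02:10", "2023-05-01T08:02:40"),
    ("2023-05-01T12:15:00", "2023-05-01T12:19:30"),
]))
```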

D Speech

Changes in speech have been used to characterize the progression of AD. Studies have often used active data collection in which individuals are recorded on a smartphone or similar device as they complete tasks associated with verbal fluency, picture description, and free speech. The voice recordings are then processed, sometimes using machine-learning techniques. Studies have found that short vocal tasks can be used to differentiate participants with MCI from those with dementia.Footnote 27 It remains an open question whether preclinical AD presents with detectable changes in speech. Yet, one study of speech changes found that cognitively unimpaired participants with AD biomarkers used fewer concrete nouns and content words during spontaneous speech.Footnote 28

The interest in modalities for passive speech data collection – for example, conversations over the phone, communication with digital assistants, and texting-related information – is mounting.Footnote 29 Improvements in machine learning that reduce the burden of speech analysis, coupled with broad access to devices with microphones, are increasing the potential of passive speech data collection. Automatic speech recognition used for digital assistants like Amazon Alexa and Apple Siri has made strides in accuracy. As technological advancements further streamline transcription and analysis, speech data may be used to characterize changes related to preclinical AD. Simply put, Alexa may soon diagnose progression along the Alzheimer’s continuum from preclinical AD to MCI to dementia.Footnote 30
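
As a toy illustration of lexical feature extraction from a speech transcript, the sketch below computes a content-word ratio and a type–token ratio. The stop-word list and the “content word” proxy are simplifying assumptions; the cited studies rely on far richer acoustic and linguistic pipelines.

```python
# A toy sketch of lexical feature extraction from a transcript. The stop-word
# list and "content word" proxy are simplifying assumptions for illustration.
import re

STOP_WORDS = {
    "the", "a", "an", "and", "or", "but", "it", "is", "was", "of", "to",
    "in", "that", "this", "there", "i", "you", "he", "she", "they", "we", "um",
}

def lexical_features(transcript: str) -> dict:
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return {"total_words": 0, "content_word_ratio": 0.0, "type_token_ratio": 0.0}
    content = [w for w in words if w not in STOP_WORDS]
    return {
        "total_words": len(words),
        # Proxy for reliance on content words during spontaneous speech.
        "content_word_ratio": len(content) / len(words),
        # Vocabulary diversity: unique words divided by total words.
        "type_token_ratio": len(set(words)) / len(words),
    }

print(lexical_features("Well, it is a thing there on the, um, the counter."))
```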

E Device Use

The ways people use their smartphones – including the amount of time spent on certain apps, login attempts, patterns of use, and disruptions in social interactions – may reveal signs of cognitive decline.Footnote 31 Studies examining patterns of smartphone use in older adults with and without cognitive impairment suggest that app usage is related to cognitive health.Footnote 32 There is much interest in leveraging device use as a potential marker of cognitive decline. The Intuition Study (NCT05058950), a collaboration between Biogen and Apple Inc., began in September 2021 with the aim of using multimodal passive sensor data from iPhone and Apple Watch usage to differentiate normal cognition from MCI; a secondary aim is to develop a model for predicting which individuals will and will not develop MCI. With 23,000 participants, this observational longitudinal study will be the largest study to date collecting passive device use data.

Devices, like smartphones, could soon flag usage patterns that are suggestive of an increased risk of decline. Further, specific apps may be developed to detect concerning behavior changes by accessing meta-data from other apps and devices; this may streamline access to information and improve consumer friendliness.
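
The following sketch illustrates how raw usage logs might be condensed into candidate features such as session counts, app diversity, and failed unlock attempts. The log schema is assumed for exposition and does not reflect any vendor’s actual data format.

```python
# A hypothetical sketch of summarizing one day of smartphone usage logs into
# candidate features; the session schema is an assumption for illustration.
from collections import Counter

def usage_features(sessions: list[dict]) -> dict:
    """sessions: [{"app": str, "minutes": float, "failed_unlocks": int}, ...]"""
    apps = Counter(s["app"] for s in sessions)
    total_minutes = sum(s["minutes"] for s in sessions)
    return {
        "sessions_per_day": len(sessions),
        "minutes_per_day": total_minutes,
        "distinct_apps": len(apps),
        "failed_unlock_attempts": sum(s["failed_unlocks"] for s in sessions),
        # Share of sessions spent in the single most-used app.
        "top_app_share": (apps.most_common(1)[0][1] / len(sessions)) if sessions else 0.0,
    }

print(usage_features([
    {"app": "messages", "minutes": 12.0, "failed_unlocks": 0},
    {"app": "banking", "minutes": 4.5, "failed_unlocks": 2},
    {"app": "messages", "minutes": 3.0, "failed_unlocks": 0},
]))
```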

IV Challenges Ahead

Here, we identify six ethical and legal challenges that will accompany the monitoring of digital biomarkers for AD. These challenges are not exhaustive, and many issues associated generally with measuring digital biomarkers will apply here as well. Moreover, the challenges outlined herein are not unique to digital biomarkers for AD. Rather, we would argue that they are heightened in this context because AD is a disease not just of the brain but of the mind.

A Consent to Collect and Consent to Disclose

Although we hypothesize that preclinical AD will not be diagnosed clinically (i.e., using traditional biomarkers) until there is a disease-modifying therapy that renders the diagnosis medically actionable, in-home monitoring of digital biomarkers is not subject to this constraint. In fact, potential means of collecting and analyzing digital biomarkers for AD are already in our homes.

Yet, it is unlikely that individuals are aware that the GPS devices in their cars, the smart appliances in their kitchens, and the online banking apps on their phones can provide insights into their risk of cognitive and functional decline. Plugging in the GPS device, using the smart oven, or paying bills online, therefore, does not imply consent to having one’s brain health measured. Nor can consent be presumed. Many individuals do not want to know about their risk of developing dementia caused by AD because there is little to be done about it.Footnote 33 Others eschew learning their dementia risk to avoid existential dread.Footnote 34 This all suggests that, if digital biomarkers for AD are to be collected, there must be explicit consent.

Even if individuals agree to having their digital biomarkers for AD monitored, they may ultimately choose against learning what is revealed therein. Some individuals who undergo testing to learn whether they are at risk for dementia caused by AD – whether due to genes or to biomarkers – subsequently decline to learn the results.Footnote 35

This contrasts with – drawing an analogy to emergency medicine – our ability to presume consent for an Apple Watch to monitor for and alert us to a possibly fatal arrhythmia. But even there, where there is greater reason to presume consent, the evidence suggests we ought to eschew a “more is more” approach to disclosure. Apple Watch monitoring for arrhythmia can unduly worry people who receive a notification and subsequently follow up with doctors, undergoing invasive and expensive tests only for the results to come back normal.Footnote 36 When the rate of false positives is unknown – or remains high – and when there are risks and burdens associated with disclosure, caution must accompany implementation.

B Communicating Digital Biomarker Information

To date, traditional AD biomarker information has been disclosed to cognitively unimpaired adults in highly controlled environments, mostly through research studies and with specialist clinical expertise.Footnote 37 Substantial work has gone into developing methods for disclosure, and the recommended steps include preparing people to learn about their biomarker information, maintaining sensitivity in returning the results, and following up to ensure people feel supported after learning the results.Footnote 38 Digital biomarkers create the possibility that individuals will learn that they are exhibiting subtle signs of cognitive decline or are at risk for dementia in the future from an app or from their banker or insurance agent – and without the option to speak directly and quickly about the results with a medical professional.

Although the disclosure of AD biomarkers has generally been found to be safe in pre-screened populations,Footnote 39 care should be taken when disclosing digital biomarker information more broadly. Here, the field may learn from discussions of direct-to-consumer genetic or biochemical testing.Footnote 40

Another concern is that the monitoring of digital biomarker data could lead to the inadvertent disclosure of dementia risk. Imagine, for instance, that your changing device usage is flagged and then used to generate targeted advertisements for supplements to boost brain health or for memory games. You could learn you are at risk simply by scrolling through your social media feed. And, in that case, any pretense of thoughtful disclosure is dropped.

C Conflicting Desires for Monitoring

Studies suggest that some cognitively unimpaired older adults share their AD biomarker results with others because they would like to be monitored for – and alerted to – changes in cognition and function that might negatively affect their wellbeing.Footnote 41 Often, these individuals feel it is ethically important to share this information so as to prepare family members who might, in the future, need to provide dementia care or serve as a surrogate decision maker.Footnote 42 Other older adults, however, perceive monitoring as intrusive and unwelcomed.Footnote 43

In an interview study of the family members of cognitively unimpaired older adults with AD biomarkers, some family members described watching the older adult more closely for symptoms of MCI or dementia after learning the biomarker results.Footnote 44 This may reflect family members’ evolving understanding of themselves as pre-caregivers – individuals at increased risk for informal dementia caregiving.Footnote 45 Technology can allow family members to remotely monitor an older adult’s location, movements, and activities, in order to detect functional decline and changes in cognition, as well as to intervene if needed. Despite these putative advantages, monitoring may be a source of friction if older adults and their families do not agree on its appropriateness or on who should have access to the resulting information.

D Stigma and Discrimination

Dementia caused by AD is highly stigmatized.Footnote 46 Research with cognitively unimpaired individuals who have the AD biomarker beta-amyloid suggests that many worry that this information would be stigmatizing if disclosed to others.Footnote 47 Unfortunately, this concern is likely justified; a survey experiment with a nationally representative sample of American adults found that, even in the absence of cognitive symptoms, a positive AD biomarker result evokes stronger stigmatizing reactions among members of the general public than a negative result.Footnote 48

Discrimination occurs when stigmatization is enacted via concrete behaviors. Cognitively unimpaired individuals who have beta-amyloid anticipate discrimination across a variety of contexts – from everyday social interactions to employment, housing, and insurance.Footnote 49 It is not yet known whether – and if so to what extent – digital biomarkers will lead to stigma and discrimination. However, we must be aware of this possibility, as well as the scant legal protection against discrimination on the basis of biomarkers.Footnote 50

E Information Privacy

Digital biomarker information is health information. But it is health information in the hands of bankers and insurance agents or technology companies – individuals and entities that are not health care providers and are therefore not subject to the privacy laws that govern health care data. The Health Insurance Portability and Accountability Act (HIPAA) focuses on data from medical records; it does not, for instance, cover data generated by smartphone apps.Footnote 51 The need for privacy is intensified by the potential for stigma and discrimination, discussed above.

Further, older adults with MCI and dementia are vulnerable – for example, to financial scammers. It is important to ensure that data generated by monitoring is not abused – by those who collect it or by those who subsequently access it – to identify potential targets for abuse and exploitation. Abuse and exploitation may occur at the hands of an unscrupulous app developer but also, or perhaps more likely, at the hands of an unscrupulous family member or friend.

F Disparities in Health and Technology

The older population is becoming significantly more racially and ethnically diverse.Footnote 52 Black and Hispanic older adults are at higher risk than White older adults for developing AD, and they encounter disproportionate barriers to accessing health care generally, and dementia care specifically.Footnote 53 Health disparities are increasingly understood to reflect a broad, complex, and interrelated array of factors, including racism.Footnote 54 There are well-reported concerns about racism in AI.Footnote 55 Those concerns are no less salient here, and may be even more salient, given disparities in care.

Further, monitoring may be cost prohibitive, impacted by the digital divide, or reliant on an individual’s geographic location. For instance, older adults, especially adults from minoritized communities, may not have smart devices. According to a Pew report using data collected in 2021, only 61 percent of those aged 65 and older owned a smartphone and 44 percent owned a tablet.Footnote 56 As many of the digital biomarkers described in Section III require a smart device, uptake of monitoring methods may be unevenly distributed and exacerbate, rather than alleviate, disparities in AD care.

V Conclusion

The older adult we envisioned at the beginning of this chapter will not be the only face of AD much longer. We may soon come to think, too, of adults with preclinical AD. These individuals may learn about their heightened risk of cognitive and functional impairment from a clinician. Or they may learn about it because the GPS device plugged into their car has detected slight alterations in their driving patterns, because their smart refrigerator has alerted them to the fridge door staying open a bit longer, or because their phone has noted slight changes in speech.

AD is undergoing a biomarker transformation, of which digital biomarkers are a part. AD is a deeply feared condition, as it robs people of their ability to self-determine. Care must therefore be taken to address the multitude of challenges that arise when monitoring our minds.

Footnotes

5 Patient Self-Administered Screening for Cardiovascular Disease Using Artificial Intelligence in the Home

1 United Kingdom Government Department of Health and Social Care, Health Secretary Announces £250 Million Investment in Artificial Intelligence, Gov.UK (August 8, 2019), www.gov.uk/government/news/health-secretary-announces-250-million-investment-in-artificial-intelligence.

2 Patrik Bächtiger, et al., Artificial Intelligence, Data Sensors and Interconnectivity: Future Opportunities for Heart Failure, Cardiac Failure Rev. 6 (2020).

3 Patrik Bächtiger, et al., Point-of-Care Screening for Heart Failure with Reduced Ejection Fraction Using Artificial Intelligence during ECG-Enabled Stethoscope Examination in London, UK: A Prospective, Observational, Multicentre Study, 4 Lancet Digit. Health 117, 117–25 (2022).

4 National Institute for Health and Care Excellence, Evidence Standards Framework for Digital Health Technologies (2018), www.nice.org.uk/corporate/ecd7.

5 NHS England, The NHS Long Term Plan (2019), www.longtermplan.nhs.uk/.

6 Nathalie Conrad, et al., Temporal Trends and Patterns in Heart Failure Incidence: A Population-Based Study of 4 Million Individuals, 391 The Lancet 572, 572–80 (2018).

7 NHS England, supra note 5.

8 Claire A Lawson, et al., Risk Factors for Heart Failure: 20-year Population-Based Trends by Sex, Socioeconomic Status, and Ethnicity, 13 Circulation: Heart Failure (2020).

9 Patrik Bächtiger, et al., supra note 3.

10 Michael Soljak, et al., Variations in Cardiovascular Disease Under-Diagnosis in England: National Cross-Sectional Spatial Analysis, 11 BMC Cardiovascular Disorders 1, 1–12 (2011).

11 Fouad Chouairi, et al., Evaluation of Racial and Ethnic Disparities in Cardiac Transplantation, 10 J. of the Am. Heart Ass’n (2021).

12 Matthew DeCamp & Jon C. Tilburt, Why We Cannot Trust Artificial Intelligence in Medicine, 1 Lancet Digit. Health 390 (2019).

13 Theresa A. McDonagh, et al., 2021 ESC Guidelines for the Diagnosis and Treatment of Acute and Chronic Heart Failure: Developed by the Task Force for the Diagnosis and Treatment of Acute and Chronic Heart Failure of the European Society of Cardiology (ESC) With the Special Contribution of the Heart Failure Association (HFA) of the ESC, 42 European Heart J. 3599, 3618 (2021).

14 David Duncker, et al., Smart Wearables for Cardiac Monitoring – Real-World Use beyond Atrial Fibrillation, 21 Sensors (2021).

15 Steven A Lubitz, et al., Screening for Atrial Fibrillation in Older Adults at Primary Care Visits: VITAL-AF Randomized Controlled Trial, 145 Circulation 946–54 (2022); Marco V Perez, et al., Large-Scale Assessment of a Smartwatch to Identify Atrial Fibrillation, 381 New England Journal of Medicine 1909–17 (2019).

16 Christopher E Knoepke, et al., Medicare Mandates for Shared Decision Making in Cardiovascular Device Placement, 12 Circulation: Cardiovascular Quality and Outcomes (2019).

17 Susan A Speer & Elizabeth Stokoe, Ethics in Action: Consent‐Gaining Interactions and Implications for Research Practice, 53 British J. of Soc. Psych. 54–73 (2014).

18 Medical Device Innovation Consortium, The MITRE Corporation, Playbook for Threat Modelling Med. Devices (2021), www.mitre.org/news-insights/publication/playbook-threat-modeling-medical-devices.

19 Boris Babic, et al., Algorithms on Regulatory Lockdown in Medicine, 366 Science 1202 (2019).

20 National Institute for Health and Care Excellence, Evidence Standards Framework (ESF) for Digital Health Technologies Update – Consultation (2022), www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies/esf-consultation.

6 The Promise of Telehealth for Abortion

1 Greer Donley, Medication Abortion Exceptionalism, 107 Cornell L. Rev. 627, 647 (2022).

2 Id. at 643–51.

3 Id.

4 Id. at 648–51; Rachel Rebouché, The Public Health Turn in Reproductive Rights, 78 Wash & Lee L. Rev. 1355, 1383–86 (2021).

5 Rebouché, supra note 4, at 1383–86.

6 Donley, supra note 1, at 648–51.

7 Id. at 689–73.

8 David Cohen, Greer Donley & Rachel Rebouché, The New Abortion Battleground, 123 Colum. L. Rev. 1, 9–13 (2023).

9 Id.

10 Rachel Jones et al., Abortion Incidence and Service Availability in the United States, Guttmacher Inst. (2022), www.guttmacher.org/article/2022/11/abortion-incidence-and-service-availability-united-states-2020.

11 Donley, supra note 1, at 633.

12 Id.

13 Id.

14 Id. at 637–43.

15 Id.

16 Id.

17 Nineteen states mandate that the prescribing physician be physically present during an abortion or require patient-physician contact, such as mandatory pre-termination ultrasounds and in-person counseling. Five of these states also explicitly prohibit the mailing of abortion-inducing drugs (Arizona, Arkansas, Montana, Oklahoma, and Texas). Nine states have banned telehealth for abortion. Medication Abortion, Abortion Law Project, Ctr. for Pub. Health L. Rsch. (December 2021), http://lawatlas.org/datasets/medication-abortion-requirements. Of these states, currently only Alabama, Arizona, Indiana, Kansas, Montana, Nebraska, North Carolina, and Wisconsin have laws that preclude telehealth for abortion, but otherwise have not banned abortion before ten weeks.

18 Order for Preliminary Injunction, ACOG v. FDA, No. 8:20-cv-01320-TDC 80 (D. Md. July 13, 2020).

19 See Elizabeth Raymond et al., TelAbortion: Evaluation of a Direct to Patient Telemedicine Abortion Service in the United States, 100 Contraception 173, 174 (2019).

20 Id.

21 Id.

22 Hillary Bracken, Alternatives to Routine Ultrasound for Eligibility Assessment Prior to Early Termination of Pregnancy with Mifepristone-Misoprostol, 118 BJOG 1723 (2011).

23 Am. Coll. of Obstetricians and Gynecologists v. US Food and Drug Admin., No. TDC-20-1320, 2020 WL 8167535 at 210–11 (D. Md. August 19, 2020).

24 Id.

25 Donley, supra note 1, at 631.

26 Solicitor General Brief to US District Court for the District of Maryland, Case 8:20-cv-01320-TDC, November 11, 2020.

27 Rebouché, supra note 4, at 1383–89; Greer Donley, Beatrice A. Chen & Sonya Borrero, The Legal and Medical Necessity of Abortion Care Amid the COVID-19 Pandemic, 7 J.L. & Biosciences 1, 13 (2020).

28 Plaintiff Brief in Opposition to Defendants’ Renewed Motion to Stay the Preliminary Injunction, at 1, No. 20-1320-Tdc, November 13, 2020.

29 Food and Drug Admin. v. Am. Coll. of Obstetricians and Gynecologists, 141 S.Ct. 578 (2021).

30 Id. (Roberts, J., concurring); Rebouché, supra note 4, at 1389.

31 FDA v. ACOG, 141 S.Ct. at 583 (Sotomayor, J., dissenting).

32 Joint Motion to Stay Case Pending Agency Review at 2, Chelius v. Wright, no. 17-cv-493 (D. Haw. May 7, 2021), ECF no. 148.

33 Donley, supra note 1, at 643–48.

34 Id.

35 Id.

37 Donley, supra note 1, at 643–48.

38 Rachel Rebouché, Remote Reproductive Rights, 48 Am. J. L. & Med. __(in press, 2023). The FDA granted permission to two online pharmacies to dispense abortion medication while it determined the process for certification. Abagail Abrams, Meet the Pharmacist Expanding Access to Abortion Pills Across the US, Time (June 13, 2022), https://time.com/6183395/abortion-pills-honeybee-health-online-pharmacy/.

39 Carrie N. Baker, How Telemedicine Startups Are Revolutionizing Abortion Health Care in the US, Ms. Mag., November 16, 2020.

40 Id.

41 Carrie Baker, Online Abortion Providers Cindy Adam and Lauren Dubey of Choix: “We’re Really Excited about the Future of Abortion Care,” Ms. Mag. (April 14, 2022).

42 Id. Choix also offers a sliding scale of cost, starting at $175, for patients with financial need. Choix, Learn, FAQ, https://choixhealth.com/faq/.

43 Choix, Learn, FAQ, https://choixhealth.com/faq/.

44 Carrie Baker, Abortion on Demand Offers Telemedicine Abortion in 20+ States and Counting: “I Didn’t Know I Could Do This!,” Ms. Mag. (June 7, 2021), https://msmagazine.com/2021/06/07/abortion-on-demand-telemedicine-abortion-fda-rems-abortion-at-home/.

45 Ushma Upadhyay, Provision of Medication Abortion via Telehealth after Dobbs (draft presentation on file with the authors).

46 Donley, supra note 1, at 690–92.

47 Id.

48 David Simon & Carmel Shachar, Telehealth to Address Health Disparities: Potential, Pitfalls, and Paths, 49 J. L. Med. & Ethics 415 (2022).

49 Id.

50 Jareb A. Gleckel & Sheryl L. Wulkan, Abortion and Telemedicine: Looking Beyond COVID-19 and the Shadow Docket, 54 U.C. Davis L. Rev. Online 105, 112, 119–20 (2021).

51 Carrie N. Baker, Texas Woman Lizelle Herrera’s Arrest Foreshadows Post-Roe Future, Ms. Mag (April 16, 2022), https://msmagazine.com/2022/04/16/texas-woman-lizelle-herrera-arrest-murder-roe-v-wade-abortion/.

52 Cohen, Donley & Rebouché, supra note 8, at 12.

53 See Section II.

54 Donley, supra note 1, at 694.

55 Dobbs v. Jackson Women’s Health Organization, 142 S. Ct. 2228, 2242 (2022).

56 Tracking States Where Abortion is Now Banned, The New York Times (November 8, 2022), www.nytimes.com/interactive/2022/us/abortion-laws-roe-v-wade.html.

57 Thirteen states ban abortion from the earliest stages of pregnancy and Georgia bans abortion after six weeks. In addition to those fourteen states, Arizona and Florida ban abortion after fifteen weeks, Utah after eighteen, and North Carolina after twenty. Id.

58 S.B. 8, 87th Gen. Assemb., Reg. Sess. (Tex. 2021) (codified as amended at Tex. Health & Safety Code Ann. §§ 171.201–.212 (West 2023)).

59 See Margot Sanger-Katz, Claire Cain Miller & Quoctrung Bui, Most Women Denied Abortions by Texas Law Got Them Another Way, The New York Times (March 6, 2022), www.nytimes.com/2022/03/06/upshot/texas-abortion-women-data.html.

60 Yvonne Lindgren, When Patients Are Their Own Doctors: Roe v. Wade in An Era of Self-Managed Care, 107 Cornell L. Rev. 151, 169 (2022).

61 Three quarters of abortion patients are low income. Abortion Patients are Disproportionately Poor and Low Income, Guttmacher Inst. (May 19, 2016), www.guttmacher.org/infographic/2016/abortion-patients-are-disproportionately-poor-and-low-income. Moreover, the cost and time associated with in-person abortion care delayed and thwarted abortion access even when a ban on pre-viability abortion was constitutionally prohibited under Roe v. Wade. Ushma D. Upadhyay et al., Denial of Abortion Because of Provider Gestational Age Limits in the United States, 104 Am. J. Public Health 1687, 1689–91 (2014).

62 Interview by Terry Gross with Mary Ziegler, Fresh Air, Nat’l Pub. Radio (June 23, 2022), www.npr.org/2022/06/23/1106922050/why-overturning-roe-isnt-the-final-goal-of-the-anti-abortion-movement.

63 Rachel K. Jones & Megan K. Donovan, Self-Managed Abortion May Be on the Rise, But Probably Not a Significant Driver of The Overall Decline in Abortion, Guttmacher Inst. (November 2019), www.guttmacher.org/article/2019/11/self-managed-abortion-may-be-rise-probably-not-significant-driver-overall-decline.

64 See Donley, supra note 1, at 697; Jennifer Conti, The Complicated Reality of Buying Abortion Pills Online, Self Mag. (April 9, 2019), www.self.com/story/buying-abortion-pills-online.

65 Jones & Donovan, supra note 62. When this chapter was drafted, Aid Access was serviced by international providers, but as this chapter was going to press, Aid Access began working with U.S.-based providers to prescribe and to mail medication abortion across the country. For additional information, see David S. Cohen, Greer Donley & Rachel Rebouché, Abortion Pills, 76 Stan. L. Rev. (forthcoming 2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4335735.

66 Donley, supra note 1, at 660.

67 Abigail R. A. Aiken et al, Association of Texas Senate Bill 8 With Requests for Self-managed Medication Abortion, 5 JAMA Netw. Open e221122 (2022).

68 Abigail R. A. Aiken et al, Requests for Self-managed Medication Abortion Provided Using Online Telemedicine in 30 US States Before and After the Dobbs v Jackson Women’s Health Organization Decision, 328 J. Am. Med. Assn. 1768 (2022).

69 Cohen, Donley & Rebouché, supra note 8, n.98.

70 See id.

71 Lindgren, supra note 60, at 5–6.

72 See Meghan K. Donovan, Self-Managed Medication Abortion: Expanding the Available Options for U.S. Abortion Care, Guttmacher Inst. (October 17, 2018), www.guttmacher.org/gpr/2018/10/self-managed-medication-abortion-expanding-available-options-us-abortion-care.

73 Rachel Benson Gold, Lessons from Before Roe: Will Past be Prologue?, Guttmacher Inst. (March 1, 2003), www.guttmacher.org/gpr/2003/03/lessons-roe-will-past-be-prologue.

74 Greer Donley & Jill Wieber Lens, Subjective Fetal Personhood, 75 Vand. L. Rev. 1649, 1705–06 (2022).

75 Id.

76 Caroline Kitchener, Kevin Schaul & Daniela Santamariña, Tracking New Action on Abortion Legislation Across the States, Washington Post (last updated April 14, 2022), www.washingtonpost.com/nation/interactive/2022/abortion-rights-protections-restrictions-tracker/.

77 The Justice Department issued an opinion in December 2022 reaffirming the mailability of abortion medication in accordance with a general prohibition on postal agency inspections of packages containing prescription drugs. Application of the Comstock Act to the Mailing of Prescription Drugs That Can Be Used for Abortions, 46 Op. O.L.C. 1, 2 (2022).

78 The FDA has approved abortion medication through the first ten weeks, but the protocol is the same through twelve weeks. Later Abortion Initiative, Can Misoprostol and Mifepristone be Used for Medical Management of Abortion after the First Trimester? (2019), www.ibisreproductivehealth.org/sites/default/files/files/publications/lai_medication_abortion_0.pdf. After that, patients typically need a higher dose for an effective abortion, which takes place in a clinical facility. In a post-Dobbs world, however, some patients will attempt to self-manage second trimester abortions. Id.

79 Donley & Lens, supra note 73, at 39–43.

80 Id. Or people might seek after-abortion care if they are unfamiliar with how misoprostol works and believe they are experiencing complications when they likely are not.

81 Id.

82 Cohen, Donley & Rebouché, supra note 8, at 77 (discussing how the Health Insurance Portability and Accountability Act (HIPAA) prohibits covered health care employees from reporting health information to law enforcement unless an exception is met). The HIPAA’s protections might not be a sufficient deterrent for motivated individuals who want to report suspected abortion crimes, especially if the Biden Administration is not aggressive in enforcing the statute.

83 Donley & Lens, supra note 73, at 39–43.

84 Id.

85 Id.; Michelle Oberman, Abortion Bans, Doctors, and the Criminalization of Patients, 48 Hastings Ctr. Rep. 5 (2018).

86 Anya E.R. Prince, Reproductive Health Surveillance, B.C. L. Rev. (in press, 2023), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4176557.

87 Id.

88 Id.

89 See Leslie Reagan, Abortion Access in Post-Roe America vs. Pre-Roe America, The New York Times (December 10, 2021), www.nytimes.com/2021/12/10/opinion/supreme-court-abortion-roe.html.

90 David Cohen, Greer Donley & Rachel Rebouché, Abortion Pills, 59–65 (on file with the authors), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4335735 (describing impending efforts to surveil pregnancies).

91 Data collected on people’s iPads and Google searches have been used in criminal prosecutions. See Laura Huss, Farah Diaz-Tello, & Goleen Samari, Self-Care, Criminalized: August 2022 Preliminary Findings, If/When/How (2022), www.ifwhenhow.org/resources/self-care-criminalized-preliminary-findings/.

92 In her book, Policing the Womb, Michele Goodwin explains in great detail how the state particularly targets Black women and women of color during pregnancy. Michele Goodwin, Policing the Womb: Invisible Women and the Criminalization of Motherhood 21 (2020).

93 Id.

94 Donley & Lens, supra note 74, at 41.

95 Id.

96 Cohen, Donley & Rebouché, supra note 8, at 52–79.

97 Greer Donley, Rachel Rebouché & David Cohen, Existing Federal Laws Could Protect Abortion Rights Even if Roe Is Overturned, Time (January 24, 2022), https://time.com/6141517/abortion-federal-law-preemption-roe-v-wade/.

98 Cohen, Donley & Rebouché, supra note 8, at 71–79.

99 Id. at 65–74.

100 Id.

101 Id. at 31.

102 See Goodwin, supra note 92, at 12–26.

7 Monitoring (on) Your Mind: Digital Biomarkers for Alzheimer’s Disease

1 The utility of an AD diagnosis has been debated. Presently, there is no cure for dementia caused by AD; however, clinicians may prescribe a disease-modifying therapy or medications to temporarily improve or delay dementia symptoms or address other symptoms or conditions, such as depression or agitation. A diagnosis of AD can also be useful for informing lifestyle changes, providing clarity about what is happening, facilitating future planning, and accessing systems and support for the patient and caregiver.

2 Clifford R. Jack et al., A/T/N: An Unbiased Descriptive Classification Scheme for Alzheimer Disease Biomarkers, 87 Neurology 539 (2016); Clifford R. Jack et al., NIA-AA Research Framework: Toward a Biological Definition of Alzheimer’s Disease, 14 Alzheimer’s & Dementia: J. Alzheimer’s Ass’n 535 (2018).

3 FDA-NIH Biomarker Working Group, BEST (Biomarkers, EndpointS, and other Tools) Resource (2016).

4 Suzanne E. Schindler & Randall J. Bateman, Combining Blood-based Biomarkers to Predict Risk for Alzheimer’s Disease Dementia, 1 Nat. Aging 26 (2021).

5 Paul S. Aisen et al., On the Path to 2025: Understanding the Alzheimer’s Disease Continuum, 9 Alzheimer’s Rsch. & Therapy 60 (2017).

6 Frank Jessen et al., The Characterisation of Subjective Cognitive Decline, 19 Lancet Neurol. 271 (2020).

7 Jack et al., supra note 2.

8 Ron Brookmeyer & Nada Abdalla, Estimation of Lifetime Risks of Alzheimer’s Disease Dementia using Biomarkers for Preclinical Disease, 14 Alzheimer’s & Dementia 981 (2018); Ron Brookmeyer et al., Forecasting the Prevalence of Preclinical and Clinical Alzheimer’s Disease in the United States, 14 Alzheimer’s & Dementia 121 (2018).

9 Jack et al., supra note 2.

10 Christian Montag, Jon D. Elhai & Paul Dagum, On Blurry Boundaries When Defining Digital Biomarkers: How Much Biology Needs to Be in a Digital Biomarker?, 12 Front. Psychiatry 740292 (2021).

11 Antoine Piau et al., Current State of Digital Biomarker Technologies for Real-Life, Home-Based Monitoring of Cognitive Function for Mild Cognitive Impairment to Mild Alzheimer Disease and Implications for Clinical Care: Systematic Review, 21 J. Med. Internet Rsch. e12785 (2019).

12 Frank Jessen et al., A Conceptual Framework for Research on Subjective Cognitive Decline in Preclinical Alzheimer’s Disease, 10 Alzheimer’s & Dementia 844 (2014).

13 Lidia P. Kostyniuk & Lisa J. Molnar, Self-regulatory Driving Practices among Older Adults: Health, Age and Sex Effects, 40 Accid. Anal. Prev. 1576 (2008); Jennifer D. Davis et al., Road Test and Naturalistic Driving Performance in Healthy and Cognitively Impaired Older Adults: Does Environment Matter?, 60 J. Am. Geriatrics Soc’y 2056 (2012).

14 Samantha L. Allison et al., Spatial Navigation in Preclinical Alzheimer’s Disease, 52 J. Alzheimers Dis. 77 (2016); Gillian Coughlan et al., Spatial Navigation Deficits – Overlooked Cognitive Marker for Preclinical Alzheimer Disease?, 14 Nat. Rev. Neurol. 496 (2018).

15 Megan A. Hird et al., A Systematic Review and Meta-Analysis of On-Road Simulator and Cognitive Driving Assessment in Alzheimer’s Disease and Mild Cognitive Impairment, 53 J. Alzheimers Dis. 713 (2016).

16 Catherine M. Roe et al., A 2.5-Year Longitudinal Assessment of Naturalistic Driving in Preclinical Alzheimer’s Disease, 68 J. Alzheimers Dis. 1625 (2019); Sayeh Bayat et al., GPS Driving: A Digital Biomarker for Preclinical Alzheimer Disease, 13 Alzheimer’s Rsch. & Therapy 115 (2021).

17 Roe et al., supra note 16; Bayat et al., supra note 16.

18 Kristen Hall-Geisler & Jennifer Lobb, How Do Those Car Insurance Tracking Devices Work?, US News & World Rep. (March 9, 2022), www.usnews.com/insurance/auto/how-do-those-car-insurance-tracking-devices-work.

19 Lauren Hersch Nicholas et al., Financial Presentation of Alzheimer Disease and Related Dementias, 181 JAMA Internal Med. 220 (2021).

20 Patricia A. Boyle et al., Scam Awareness Related to Incident Alzheimer Dementia and Mild Cognitive Impairment: A Prospective Cohort Study, 170 Annals Internal Med. 702 (2019).

21 Benjamin Pimentel, Banks Watch Your Every Move Online. Here’s How It Prevents Fraud, Protocol (June 1, 2021), www.protocol.com/fintech/behavioral-analytics-bank-fraud-detection.

22 Yuriko Nakaoku et al., AI-Assisted In-House Power Monitoring for the Detection of Cognitive Impairment in Older Adults, 21 Sensors (Basel) 6249 (2021).

23 Piau et al., supra note 11; Nakaoku et al., supra note 22; Maxime Lussier et al., Smart Home Technology: A New Approach for Performance Measurements of Activities of Daily Living and Prediction of Mild Cognitive Impairment in Older Adults, 68 J. Alzheimers Dis. 85 (2019).

24 Piau et al., supra note 11; Nakaoku et al., supra note 22; Lussier et al., supra note 23.

25 The RADAR-AD Consortium et al., Remote Monitoring Technologies in Alzheimer’s Disease: Design of the RADAR-AD Study, 13 Alzheimer’s Rsch. & Therapy 89 (2021).

26 Nakaoku et al., supra note 22.

27 Alexandra König et al., Automatic Speech Analysis for the Assessment of Patients with Predementia and Alzheimer’s Disease, 1 Alzheimer’s & Dementia (Amst) 112 (2015); Alexandra König et al., Use of Speech Analyses within a Mobile Application for the Assessment of Cognitive Impairment in Elderly People, 15 Current Alzheimer Rsch. 120 (2018); Fredrik Öhman et al., Current Advances in Digital Cognitive Assessment for Preclinical Alzheimer’s Disease, 13 Alzheimer’s & Dementia (2021).

28 Sander C.J. Verfaillie et al., High Amyloid Burden is Associated with Fewer Specific Words During Spontaneous Speech in Individuals with Subjective Cognitive Decline, 131 Neuropsychologia 184 (2019).

29 Jessica Robin et al., Evaluation of Speech-Based Digital Biomarkers: Review and Recommendations, 4 Digit. Biomark. 99 (2020); Lampros C. Kourtis et al., Digital Biomarkers for Alzheimer’s Disease: The Mobile/Wearable Devices Opportunity, 2 npj Digit. Med. 9 (2019).

30 David A. Simon et al., Should Alexa Diagnose Alzheimer’s?: Legal and Ethical Issues with At-home Consumer Devices, Cell Reps. Med. 100692 (2022).

31 Kourtis et al., supra note 29.

32 Jonas Rauber, Emily B. Fox & Leon A. Gatys, Modeling Patterns of Smartphone Usage and Their Relationship to Cognitive Health (2019), https://arxiv.org/abs/1911.05683; Mitchell L. Gordon et al., App Usage Predicts Cognitive Ability in Older Adults, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems 1 (2019), https://dl.acm.org/doi/10.1145/3290605.3300398.

33 Emily A. Largent et al., Disclosing Genetic Risk of Alzheimer’s Disease to Cognitively Unimpaired Older Adults: Findings from the Study of Knowledge and Reactions to APOE Testing (SOKRATES II), 84 J. Alzheimer’s Dis. 1015 (2021).

34 Steven Pinker, My Genome, My Self, The New York Times (January 7, 2009), www.nytimes.com/2009/01/11/magazine/11Genome-t.html.

35 Emily A. Largent et al., “That Would Be Dreadful”: The Ethical, Legal, and Social Challenges of Sharing Your Alzheimer’s Disease Biomarker and Genetic Testing Results with Others, J. Law & Biosciences lsab004 (2021).

36 Larry Husten, Beware the Hype over the Apple Watch Heart App. The Device Could Do More Harm Than Good, Stat (March 15, 2019), www.statnews.com/2019/03/15/apple-watch-atrial-fibrillation/.

37 Claire M. Erickson et al., Disclosure of Preclinical Alzheimer’s Disease Biomarker Results in Research and Clinical Settings: Why, How, and What We Still Need to Know, 13 Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring (2021).

38 Kristin Harkins et al., Development of a Process to Disclose Amyloid Imaging Results to Cognitively Normal Older Adult Research Participants, 7 Alzheimer’s Rsch. & Therapy 26 (2015).

39 Erickson et al., supra note 37.

40 Emily A. Largent, Anna Wexler & Jason Karlawish, The Future Is P-Tau – Anticipating Direct-to-Consumer Alzheimer Disease Blood Tests, 78 JAMA Neurol. 379 (2021).

41 Sato Ashida et al., The Role of Disease Perceptions and Results Sharing in Psychological Adaptation after Genetic Susceptibility Testing: The REVEAL Study, 18 Eur. J. Hum. Genetics 1296 (2010); Largent et al., supra note 35.

42 Largent et al., supra note 35.

43 Clara Berridge & Terrie Fox Wetle, Why Older Adults and Their Children Disagree About In-Home Surveillance Technology, Sensors, and Tracking, Gerontologist (2020), https://academic.oup.com/gerontologist/advance-article/doi/10.1093/geront/gnz068/5491612; Marcello Ienca et al., Intelligent Assistive Technology for Alzheimer’s Disease and Other Dementias: A Systematic Review, 56 J. Alzheimer’s Disease 1301 (2017); Largent et al., supra note 35.

44 Emily A. Largent et al., Study Partner Perspectives on Disclosure of Amyloid PET Scan Results: Psychosocial Factors and Environmental Design/Living with Dementia and Quality of Life, 16 Alzheimer’s & Dementia (2020).

45 Emily A. Largent & Jason Karlawish, Preclinical Alzheimer Disease and the Dawn of the Pre-Caregiver, 76 JAMA Neurol. 631 (2019).

46 Lynne Corner & John Bond, Being at Risk of Dementia: Fears and Anxieties of Older Adults, 18 J. Aging Studs. 143 (2004); Perla Werner & Shmuel M. Giveon, Discriminatory Behavior of Family Physicians Toward a Person with Alzheimer’s Disease, 20 Int. Psychogeriatr. 824 (2008); Alzheimer’s Association, 2019 Alzheimer’s Disease Facts and Figures, 15 Alzheimer’s & Dementia 321 (2019).

47 Largent et al., supra note 35.

48 Shana D. Stites et al., The Relative Contributions of Biomarkers, Disease Modifying Treatment, and Dementia Severity to Alzheimer’s Stigma: A Vignette-based Experiment, 292 Soc. Sci. & Med. 114620 (2022).

49 Largent et al., supra note 35.

50 Jalayne J. Arias et al., The Proactive Patient: Long-Term Care Insurance Discrimination Risks of Alzheimer’s Disease Biomarkers, 46 J. Law. Med. Ethics 485 (2018).

51 Nicole Martinez-Martin et al., Data Mining for Health: Staking Out the Ethical Territory of Digital Phenotyping, 1 npj Digit. Med. 68 (2018); Anna Wexler & Peter B. Reiner, Oversight of Direct-to-consumer Neurotechnologies, 363 Science 234 (2019).

52 Sandra Colby & Jennifer Ortman, Projections of the Size and Composition of the US Population: 2014 to 2060, 13 (2015).

53 María P. Aranda et al., Impact of Dementia: Health Disparities, Population Trends, Care Interventions, and Economic Costs, 69 J. Am. Geriatrics Soc’y 1774 (2021).

54 Carl V. Hill et al., The National Institute on Aging Health Disparities Research Framework, 25 Ethn. Dis. 245 (2015); Camara P. Jones, Levels of Racism: A Theoretic Framework and a Gardener’s Tale, 90 Am. J. Pub. Health 1212 (2000).

55 Effy Vayena, Alessandro Blasimme & I. Glenn Cohen, Machine Learning in Medicine: Addressing Ethical Challenges, 15 PLoS Med e1002689 (2018); Ravi B. Parikh, Stephanie Teeple & Amol S. Navathe, Addressing Bias in Artificial Intelligence in Health Care, 322 JAMA 2377 (2019).

56 Michelle Faverio, Share of Those 65 and Older Who Are Tech Users Has Grown in the Past Decade, Pew Rsch. Ctr. (January 13, 2022), www.pewresearch.org/fact-tank/2022/01/13/share-of-those-65-and-older-who-are-tech-users-has-grown-in-the-past-decade/.

Figure 5.1 Left to right: Eko DUO smart stethoscope; patient-facing “bell” of stethoscope labelled with sensors; data flow between Eko DUO, user’s smartphone, and cloud for the application of AI
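
The data flow pictured in Figure 5.1 can be read as a three-stage pipeline: the smart stethoscope captures a single-lead ECG together with heart sounds, the paired smartphone packages and uploads the recording, and a cloud service applies the AI algorithms and returns a screening result. The short Python sketch below illustrates that pipeline only in the abstract; every name, payload field, and decision rule in it is hypothetical and stands in for proprietary device and algorithm details that are not described in this chapter.

# Minimal, hypothetical sketch of the three-stage data flow in Figure 5.1.
# Stage 1: the stethoscope records a single-lead ECG and heart sounds.
# Stage 2: the paired smartphone serialises the recording and uploads it.
# Stage 3: a cloud service applies an AI model and returns a screening flag.
# All identifiers, payload fields, and the threshold rule are illustrative only.
from dataclasses import dataclass, asdict
import json
import statistics


@dataclass
class Recording:
    patient_id: str   # pseudonymous identifier assigned at the GP invitation
    ecg_mv: list      # single-lead ECG samples (millivolts)
    phono_db: list    # heart-sound intensity samples (decibels)


def capture_on_device() -> Recording:
    """Stage 1: the stethoscope 'bell' captures ECG and phonocardiogram data."""
    return Recording(
        patient_id="patient-0001",
        ecg_mv=[0.1, 0.9, 0.2, 0.8],
        phono_db=[40.0, 55.0, 42.0],
    )


def relay_via_smartphone(recording: Recording) -> str:
    """Stage 2: the companion app packages the recording for upload."""
    return json.dumps(asdict(recording))


def classify_in_cloud(payload: str) -> dict:
    """Stage 3: stand-in for the cloud AI; a trivial threshold, not a real model."""
    record = json.loads(payload)
    flag = statistics.mean(record["phono_db"]) > 45.0  # placeholder decision rule
    return {"patient_id": record["patient_id"], "refer_for_echocardiography": flag}


if __name__ == "__main__":
    result = classify_in_cloud(relay_via_smartphone(capture_on_device()))
    print(result)

The design point the sketch is meant to surface, under those stated assumptions, is that inference happens server-side on a pseudonymised payload: the home device and smartphone act only as capture and relay stages, which is where the data-governance questions discussed in this Part attach.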
