
How Can Law and Policy Advance Quality in Genomic Analysis and Interpretation for Clinical Care?

Published online by Cambridge University Press:  01 January 2021


Abstract

Delivering high quality genomics-informed care to patients requires accurate test results whose clinical implications are understood. While other actors, including state agencies, professional organizations, and clinicians, are involved, this article focuses on the extent to which the federal agencies that play the most prominent roles — the Centers for Medicare and Medicaid Services enforcing CLIA and the FDA — effectively ensure that these elements are met and concludes by suggesting possible ways to improve their oversight of genomic testing.

Type
Symposium Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © American Society of Law, Medicine and Ethics 2020

Introduction

Delivering high quality care to patients depends on having accurate test results whose clinical implications are understood. While these requirements apply throughout medicine, the question of how best to ensure the quality of genetic tests used in clinical care, in particular, has vexed scientists and regulators alike for roughly two decades. Numerous federal advisory committees, expert scientific bodies, and professional societies have weighed in on the issue, proposed a variety of approaches, and identified a number of governmental and non-governmental entities to regulate the quality of single-gene tests.Reference Holtzman, Watson, Pratt, Leonard and Kaul1 Over time, the understanding and clinical use of genetic tests have increased dramatically, but the challenge of ensuring that patients get accurate results whose clinical impact is understood has not yet been solved. One need only look at discrepant results from different laboratories and the number of variants of uncertain significance to see the magnitude of the current challenges.Reference Van Driest2

As difficult as the issues attending single-gene tests are, genomic tests — which make possible the examination of multiple variants across one genome that can be analyzed individually or in combination to inform patient care — present a whole new level of complexity.3 An ongoing challenge for genomic tests, including those using next-generation or genome sequencing technology (NGS), is determining when the results of such testing are of sufficient quality to inform clinical decision-making. Indeed, simply gaining consensus on the meaning and appropriate parameters of “quality” in this context — let alone on which entities should be responsible for serving as quality gatekeepers — is difficult.

This article seeks both to identify the key components of quality in the NGS context and to evaluate the extent to which existing federal regulatory “checkpoints” for NGS test quality are sufficient to ensure the quality of genomic data and interpretation intended for use in diagnosis and treatment of patients. As part of this analysis, we will also identify the role of professional organizations, which have an important part to play in driving quality, a role explored at greater length in a companion piece to this article.Reference Burke4 This article does not provide a general overview of state regulatory efforts to ensure quality or the potentially practice-informing impact of state liability rules, even though these sources of law are often quite important. It does, however, briefly note interactions between New York State's laboratory regulations and the federal Clinical Laboratory Improvement Amendments (CLIA) and Food and Drug Administration (FDA) regulatory frameworks. Our focus here is on clinical testing. Later work can address the questions of ensuring quality in direct-to-consumer genetic testing and return of research results, even as we acknowledge that clinicians increasingly face requests to interpret and act upon genetic information originating from both of these contexts.

Part I presents background, explaining that no single entity oversees the process of ensuring quality in genomic analysis and interpretation. Part II then defines the components of quality, including analytic validity, clinical validity, and clinical utility, as well as the attendant challenges. Part III analyzes current oversight, focusing on oversight by the Centers for Medicare & Medicaid Services (CMS) and FDA, current approaches to clinical utility, and the role of non-governmental entities. Part IV then presents our findings and recommendations for governing quality to advance integration of genomic testing into clinical care.

I. Background

Genomic data are generated for a range of clinical reasons and in different patient populations. For example, genomic tests may be performed as part of the evaluation of a child with unexplained developmental delay or other features that suggest a genetic disorder may be involved.5 Sequencing panels consisting of dozens to hundreds of genes are increasingly used to characterize cancers to refine prognosis and therapy.Reference Gornick, Hertz and McLeod6 Moreover, as the cost of sequencing decreases, the pressure to adopt genome-based approaches increases, even if an ordering physician requests, and a laboratory interprets and includes in the final report, only a limited subset of the data generated by sequencing. Genome-based approaches, by their nature, generate a tremendous amount of uninterpreted data, most of which are not pertinent to answering the particular clinical question for which testing was ordered. Nevertheless, clinical laboratories and providers are currently encouraged to report results beyond the original clinical indication. The American College of Medical Genetics and Genomics (ACMG) recommends that clinical laboratories performing genome or exome sequencing give patients the option to receive results when pathogenic variants that are considered medically actionable are discovered in any of 59 specified disease-associated genes, regardless of the clinical indication for testing.Reference Kalia7 Some prominent scientists envision a day when genome sequencing and analysis are included as part of routine healthcare screening.Reference Collins and Church8

Under the current U.S. regulatory scheme, oversight of genetic and genomic test quality is distributed among different governmental and non-governmental actors, with no single entity in charge of the entire process. Additionally, applicable legal requirements may differ depending on the methodology and setting of testing, as discussed below.

II. Defining Quality

A necessary prerequisite to assessing the adequacy of current regulations to ensure genomic testing quality is to define the parameters of such “quality.” We focus here on quality in the traditional clinical genetic testing context while highlighting the added quality challenges that arise when performing genomic sequencing. In the clinical context, discussions of genetic test quality generally focus on the domains of analytical validity (including analytical verification), clinical validity, and clinical utility.Reference Joseph, Micheel, Strande, Sung and Teutsch9

Analytical validity often refers to how well a test detects, identifies, calculates, or analyzes the presence or absence of a particular gene change.Reference Burke10 Analytical verification is seen by some to be part of analytical validity,Reference Rehm11 while others view verification and validation as separate processes.Reference Lathrop12

Clinical validity usually refers to how well a genetic variant is correlated with the presence, absence, or risk of disease.Reference Fabsitz13 In the diagnostic setting, clinical validity is often described in terms of “sensitivity” (the proportion of affected individuals who have an abnormal test result), “specificity” (the proportion of unaffected people who do not have the abnormal result), “positive predictive value” (the probability that individuals who test positive actually have the condition), and “negative predictive value” (the probability that individuals who test negative do not have the condition).14 In the context of predictive testing (i.e., testing of asymptomatic individuals to identify genetic susceptibility to future disease), clinical validity usually is viewed as the measure of the accuracy with which the test predicts future clinical conditions.15 Genetic tests for cancer syndromes, for example, have relatively low sensitivity (most women with breast cancer do not have germline mutations in BRCA1 or BRCA2) and somewhat higher specificity (unaffected women are unlikely to have such variants, although some do), and so have limited clinical validity for purposes of prediction, for reasons discussed in more depth below.
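These four measures can be computed from a simple two-by-two comparison of test results against true disease status. The brief Python sketch below illustrates the arithmetic behind the definitions above; the counts are purely hypothetical and are not drawn from any real test or study.

```python
# Minimal sketch: computing the standard clinical validity measures from a
# hypothetical 2x2 table of test results versus true disease status.

def clinical_validity_measures(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, and NPV from 2x2 counts."""
    sensitivity = tp / (tp + fn)   # affected individuals with an abnormal result
    specificity = tn / (tn + fp)   # unaffected individuals without the abnormal result
    ppv = tp / (tp + fp)           # probability a positive result reflects true disease
    npv = tn / (tn + fn)           # probability a negative result reflects true absence
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 80 true positives, 20 false positives,
# 40 false negatives, 860 true negatives.
sens, spec, ppv, npv = clinical_validity_measures(tp=80, fp=20, fn=40, tn=860)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```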

Clinical utility usually refers to the risks and benefits resulting from genetic test use and encompasses considerations of (a) whether a test (and subsequent interventions taken based on the result) leads to better clinical management or an improved health outcome among people with a positive test result, and (b) the potential risks posed by such testing.16 As such, clinical utility is a vital measure, but it is also more difficult to define precisely because its meaning can vary from one user to another. Some definitions of the term equate utility with “actionability,” and take the position that for a test to be clinically useful, there must be established therapeutic or preventive interventions, or other available actions, that may change the course of the disease.17 However, a test result may also provide diagnostic or prognostic information that improves clinical management, even if the disease course is not changed. In addition, some definitions also encompass the concept of personal utility, based on the view that information may have value to patients and families and may have benefits to society, even when no medical intervention is available.18 However, there is ongoing controversy about conflating the concepts of personal utility with clinical utility,Reference Terry and Wolf19 and not all commentators agree that personal meaning supplies a legitimate basis for returning results.20 An example of ongoing debate is whether supplying a definitive diagnosis through genomic testing provides clinical utility to patients and families even when the underlying disorder has no specific treatment.

In the twenty years since the Secretary's Advisory Committee on Genetic Testing (“SACGT”) introduced its initial framework discussing the concepts of analytical validity, clinical validity, and clinical utility as they related to a regulatory scheme for genetic tests,21 these concepts have maintained their vitality while undergoing various clarifications and refinements. For example, FDA often refers to a test's “analytical performance” and “clinical performance,” as opposed to its analytic validity and clinical validity.22 The concept of analytic performance arguably encompasses two concepts: analytic validity (whether the test is capable of accurately detecting the analyte it purports to analyze) and analytic verification (a more process-oriented assessment of whether the test was performed in accordance with applicable standards, instructions, and procedures, considering factors such as operator qualifications and equipment calibration).23 While recognizing the importance of these ongoing refinements to the terminology, we choose in this paper to use the traditional terms analytical validity, clinical validity, and clinical utility, which have served as the pillars of clinical genetic testing quality for years.

Ensuring that patients get accurate clinical genomic test results that are pertinent to their own care raises additional complexities and difficulties that fall outside the scope of federal regulation. The genomic testing process begins with preanalytical steps that occur before a laboratory even receives a patient specimen. The specimen must be obtained, labeled, preserved, and transported to the laboratory in a manner and time frame that preserves the specimen for testing. This paper does not address these important preanalytic components of quality.

This paper also largely avoids the question of how clinician decision-making affects quality and how that decision-making can be optimized to support quality in genomic testing. Healthcare providers make critical decisions about which patients to test and which tests to order. A genomic test with well-established analytical validity, clinical validity, and clinical utility for one intended use (e.g., diagnosis of symptomatic patients) may lack quality if inappropriately ordered for other uses (e.g., screening of healthy persons). But, as discussed below, even when diagnostic tests are validated, labeled, and promoted for a specific intended use consistent with regulatory requirements, healthcare providers have broad discretion to order genetic and other diagnostic tests as part of the practice of medicine. The independent role of the healthcare provider adds a further level of complexity to an already fragmented regulatory regime.

A. Defining the Process

Modern, high-throughput genomic testing comprises a series of steps performed within or at the direction of the laboratory. First, the biospecimen is collected, labeled, stabilized or preserved, and transported to the laboratory. DNA is then extracted from the patient's biospecimen, and the laboratory applies special-purpose chemical reagents to create a library, which is a collection of short DNA fragments that can be analyzed in parallel. Three analytical phases then follow, known as the “bioinformatics pipeline,” involving the use of instrumentation (sequencing analyzers and physical devices), consumables and supplies (e.g., chemical reagents), analytical software algorithms, and skilled human personnel. A simplified sketch of the data handoffs between these phases appears after the list below.

  • Primary analysis, which involves raw data generation and base calling, uses instruments and software to read the sequence of nucleotides in the various DNA fragments and to assess the quality of the readings.Reference Celesti, Goldfeder and Moorthie24 The current output of this process is generally a FASTQ data file that records the nucleotide sequence read from each fragment along with per-base quality scores.

  • Secondary analysis uses software to map these fragmentary readings onto a human reference genome, probabilistically aligning the fragments back into order, filtering and cleaning the data, and calling variants, that is, identifying specific locations where the tested individual's genome differs from the human reference genome.25 The current outputs of this process are generally a binary alignment map (BAM) file or compressed alignment (CRAM) file,Reference Hsi-Yang26 in which the fragmentary readings are reassembled into a model of the person's tested DNA, and a variant call format (VCF) file that summarizes the identified genetic variants.

  • Tertiary analysis involves annotating the variants with available information about the potential clinical significance of those variants, prioritizing the variants through filtration using various annotation fields, and interpreting a subset of the variants to prepare a test report.27 The interpretive process typically requires a mix of interpretive software and expert human judgment.
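To make these handoffs concrete, the following Python sketch shows, under simplifying assumptions, the kind of data each phase produces: a FASTQ read record from primary analysis, a VCF data line from secondary analysis, and an annotation applied during tertiary analysis. The read, variant, and annotation shown are hypothetical, and real pipelines rely on dedicated alignment and variant-calling tools rather than this simplified parsing.

```python
# Minimal sketch of the data handoffs in the bioinformatics pipeline described
# above. All values are hypothetical illustrations.

# Primary analysis output: a FASTQ record = read ID, nucleotide sequence,
# separator, and per-base quality scores.
fastq_record = ["@read_001", "GATTACAGATTACA", "+", "IIIIHHHHGGGGFF"]
read_id, sequence, _, qualities = fastq_record

# Secondary analysis output: one data line of a VCF file summarizing a called
# variant (chromosome, position, ID, reference allele, alternate allele, ...).
vcf_line = "chr7\t117559593\t.\tA\tG\t99\tPASS\t."
chrom, pos, _, ref, alt, qual, filt, info = vcf_line.split("\t")

# Tertiary analysis: annotate the called variant with whatever clinical
# knowledge is available (hypothetical annotation shown here).
annotation = {"gene": "CFTR", "classification": "uncertain significance"}
print(f"{chrom}:{pos} {ref}>{alt} -> {annotation['classification']}")
```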

In addition to the software required for these three analytical steps, genomic testing laboratories also depend on process-related software — for example, laboratory information management systems that help orchestrate workflows, guide laboratory technicians and automated systems in using the right reagents in the right way, track specimens and results, and generate quality control data.28

Some of the software in the genomic testing bioinformatics pipeline is a component of instrumentation that the laboratories purchase (“embedded software”). However, some laboratories rely, at least in part, on software sold separately as an accessory to their instrumentation, software developed in-house, or software supplied by external software vendors or cloud-based service providers (“stand-alone software”). The level of regulatory oversight software receives may vary, depending on its provenance, as discussed later in this article. Whatever its origin, all of the analytical and process-related software requires appropriate verification and validation,29 including extensive testing of the complex algorithms embodied therein. Data scientists, clinicians, commentators, and regulators are only beginning to identify the requirements necessary for responsible clinical implementation of complex algorithms, which may, but do not always, include the machine learning algorithms and deep learning neural networks now used in sequencing analysis.Reference Vollmer, Wiens and Schrag30 Definitions of quality for medical algorithms are not yet widely agreed upon.

B. Analytical Validity Challenges

In the context of genomic sequencing, analytical validity is particularly challenging because of the sheer scale of the genome (3 billion haploid base-pairs or 6 billion per individual). Current methodological limitations preclude analytic validation of every potentially meaningful variant, and certain regions of the genome are known to be extremely challenging to measure accurately with currently available laboratory methods.Reference Pant31 Most NGS produces short reads and then aligns them to a reference genome. Using ordinary methods and algorithms, however, NGS may not be able to detect differences in the copy number of a gene or gene segment because it might align all copies to the same place. This technology may also fail to detect major rearrangements because it simply aligns short segments to the reference and does not put them in the order in which they actually appear in the patient's genome. Sequencing algorithms may filter out short rearranged segments.Reference Wheeler32 Consequently, no extant analytical approach will reliably detect and map every variant in a single test. Moreover, whole genome or whole exome sequencing may be more accurate for some variants, while more focused sequencing (of a single gene or a few genes) will work better for others. Genomic sequencing may also result in the identification of many “novel” results, that is, variations that have not been previously observed. Novel findings need to be confirmed using a second (orthogonal) sequencing method to determine whether the variation is actually present or is instead an artifact resulting from technical error,33 whether inherent in the method or caused by the operator. These many limitations of the current sequencing technologies and processes make it critical to identify the strengths and weaknesses of different analytic approachesReference Scheuner34 and to specify areas of potential uncertainty for each, because clinicians and patients need to be able to assess the accuracy of the particular variants on which they are basing decisions about care.

At present, the best indicator of a laboratory's ability to analyze a DNA sequence accurately is comparison to a standardized reference sample developed based on sequencing and analysis of a significant number of other genomes. A number of organizations are working to develop suitable reference materials for clinical laboratories to use in method development, test validation, internal quality control, assay calibration, and proficiency testing,Reference Zook and Hardwick35 but many more reference materials are needed.

Bioinformatic algorithms that call bases, map reads to reference sequences, and interpret sequence data are highly technical. Although complex analytical problems can often be solved correctly using different mathematical and computational methods, bioinformatic systems nonetheless need to be validated and tested extensively so that their performance parameters are well understood and can be communicated to users.36

C. Clinical Validity Challenges

In the context of genomic sequencing, the number of variants that can be identified vastly exceeds the number for which clinical significance has been established,Reference Dewey37 a gap that will persist even as our understanding of the impact of variants individually and in combination continues to grow. Moreover, clinical significance resides along a continuum, so that geneticists classify variants into one of five categories: pathogenic, likely pathogenic, uncertain significance, likely benign, and benign.Reference Nykamp38 If a variant with strong, reliable evidence of pathogenicity (e.g., co-segregation with disease status within large families and/or functionally tested in a valid assay) is identified in a patient who already has cardinal signs and symptoms of the related disease, then the clinical significance of the variant is typically not in doubt.

Even where a variant has strong, reliable evidence of pathogenicity, the impact of such a variant in the context of a specific patient can be uncertain. Even single-gene diseases may manifest a variety of possible phenotypes; for example, patients with sickle cell disease, the prototypical disease caused by a single base-pair change, can have a range of possible symptoms, including painful crises, acute chest syndrome, overwhelming sepsis, and/or strokes.Reference Pecker, Little, Meier, Abraham, Fasano and Beutler39 Furthermore, single-gene disorders frequently vary in their “penetrance,” that is, the proportion of individuals with a pathogenic variant who exhibit clinical symptoms. For example, many genes contribute to the development of cardiac arrhythmias. However, some individuals who carry one of these dominant mutations do not exhibit any symptoms.Reference Miko40 Incomplete penetrance complicates the ability to predict disease in currently asymptomatic individuals, as some individuals who carry a pathogenic mutation may never become ill.41 Further complicating clinical interpretation of genetic information is the fact that our understanding of a variant's clinical significance can shift over time, as new scientific evidence accumulates. Consequently, a variant suspected to increase risk today may turn out to be benign, while a variant believed to be unrelated to disease risk may turn out to increase risk in certain contexts.Reference SoRelle, Taber and David42 At any one time, experts may have differing opinions about how best to analyze the genome or to interpret the probable phenotypic effect of particular variants.

Computational algorithms for interpreting genomic data are essential tools for establishing clinical validity, but as in the case of analytic validity, challenges abound. Each algorithm may have unique filters and cutoffs. For instance, many algorithms retain (“filter in”) variants that are rarely seen in a population, but different algorithms can use different cut-offs for what counts as rare. Each algorithm also embodies its developers' views on how best to predict the pathogenicity or non-pathogenicity of a variant. Many users may not be able to check the output of these algorithms by manually analyzing the clinical significance of variants. In any event, as sequencing becomes more common, the sheer volume of data may preclude most manual assessments.
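As a simple illustration of how the choice of cut-off shapes the output, the following Python sketch applies a population-frequency filter to a handful of variants. The variant identifiers and frequencies are hypothetical, and real interpretation pipelines combine many such filters with additional lines of evidence.

```python
# Minimal sketch of how a rarity filter with different cut-offs can change
# which variants survive tertiary analysis. Variants and frequencies are
# hypothetical, not real population data.

variants = [
    {"id": "var_A", "population_frequency": 0.00004},
    {"id": "var_B", "population_frequency": 0.0008},
    {"id": "var_C", "population_frequency": 0.02},
]

def filter_rare(variants, max_frequency):
    """Retain only variants rarer than the chosen population-frequency cut-off."""
    return [v for v in variants if v["population_frequency"] < max_frequency]

# Two pipelines using different definitions of "rare" keep different variants.
print([v["id"] for v in filter_rare(variants, max_frequency=0.0001)])  # ['var_A']
print([v["id"] for v in filter_rare(variants, max_frequency=0.001)])   # ['var_A', 'var_B']
```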

Analysis algorithms search databases for reports of disease associated with the variants of interest in a clinical sequence. Unfortunately, many genomic databases contain mistakes or misleadingly incomplete information.Reference Coovadia43 In addition, databases at present do not adequately represent population diversityReference Popejoy, Fullerton and Manrai44 and may or may not include unaffected as well as symptomatic individuals. These gaps can lead to interpretive error, and consequently to misjudgments regarding the clinical validity and significance of a variant.Reference Kohane45 As discussed in further detail below, FDA is working to address these technical gaps through guidance.

Finally, difficulties assessing clinical validity also arise at the level of the gene rather than the variant. Whether a given gene is in fact associated with a given disease can itself be uncertain. In a recent example, standard panels for Brugada syndrome testing were found to include a substantial number of genes for which there was insufficient evidence of a causal relationship to the condition.Reference Hosseini46

Thus, genomic approaches face all the challenges that can bedevil the interpretation of single genes. But genomics also permits the study of more complex diseases, which may be influenced by numerous genes, often acting in a particular pathway, and by numerous potential variants of variable penetrance or expression,Reference Vihinen47 leading to a variety of symptoms. Genomic risk scores — which combine the individual, often small, effects of tens or hundreds of different genes — are currently being developedReference Natarajan48 and marketed.Reference Ray49 At present, however, genomic risk scores are rife with interpretive uncertainty, raising questions about their value for diagnosis, much less for prediction of future disease state in an asymptomatic person.Reference Hunter, Drazen, Rosenberg and Curtis50 Creating a massive matrix of computer-searchable medical phenotypes associated with genotypes, as well as a wealth of environmental and other data from large numbers of individuals (as initiatives such as NIH's All of Us Research Program seek to do), may ultimately help to provide more insights into the clinical impact of most variants.51
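As a rough illustration of how such scores are constructed, the following Python sketch computes a risk score as a weighted sum of an individual's risk-allele counts. The variant identifiers, effect weights, and genotypes are hypothetical; real scores combine far more variants, with weights estimated from large association studies, and their clinical interpretation depends on how the resulting score is distributed in a comparable reference population.

```python
# Minimal sketch of a genomic (polygenic) risk score as a weighted sum of
# per-variant effects. All identifiers, weights, and genotypes are hypothetical.

# Per-variant effect weights (e.g., log odds ratios) for a handful of variants.
effect_weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

# Number of risk alleles (0, 1, or 2) this individual carries at each variant.
genotype_dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

# The score is the weighted sum of risk-allele counts across the variants.
risk_score = sum(effect_weights[v] * genotype_dosages[v] for v in effect_weights)
print(f"genomic risk score = {risk_score:.2f}")
```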

D. Clinical Utility Challenges

Finally, clinical utility is more difficult to define and assess in the context of genomic sequencing than in interpreting the significance of single-gene variants. Genomic approaches clearly can be useful in guiding clinical management for individuals with a family history suggesting a dominant disorder that could be attributable to any one of several genesReference Ma52 and for people who have an undiagnosed disease that appears to have a genetic contribution.Reference Splinter53 While these uses of sequencing fall easily within a narrow definition of clinical utility, they raise a number of questions that are beyond the scope of this paper, including deciding whether to pursue genome-based versus more focused approaches as well as whether to search for and return secondary findings. In the absence of a clinical indication for sequencing, however, the likelihood that a person will receive results that will improve long-term health depends in part on how much of the genome is being analyzed, the nature of the diseases or conditions being assessed, whether the diseases would otherwise be detected and adequately treated, competing morbidities, and the strength of the evidence regarding the efficacy of prevention or intervention.

There have been some efforts to assess the utility of genomic screening when the individual does not have a pertinent family history or current symptoms. For example, investigators at Geisinger Health System sought to understand whether genomic screening could identify previously undiagnosed patients with familial hypercholesterolemia (FH). They analyzed genomic sequencing and electronic health record data from 50,726 individuals from the Geisinger Health System to understand the prevalence and clinical impact of FH variants in this clinical cohort. They identified 229 individuals carrying one of the FH variants. They determined that only 24% of these carriers would have met the criteria for probable or definite FH diagnosis in the absence of variant identification, although most of the carriers, whether or not identified as having FH, were receiving lipid-lowering treatment. The study concluded that genomic data can augment the detection of individuals with FH and lead to identifying patients who would have been missed by using only clinical criteria in electronic health record (EHR) screening.Reference Abul54 In the eMERGE network, which examines variants in approximately 100 genes (including the ACMG list of 59 genes as well as a number of other genes) in approximately 25,000 individuals, preliminary data show that somewhere between 3-5 percent of the participants, some of whom had previously been diagnosed, had returnable results.Reference Gibbs, Rehm and Reuter55

Notwithstanding the previous examples, few studies to date have offered genome-based screening to adults in the absence of a clear clinical indication. This makes it difficult to determine whether and under what circumstances routine genome screening may improve patient outcomes, especially in light of the potential for overdiagnosisReference Elmore and Welch56 and iatrogenic costs and harms from non-indicated interventions.Reference Khoury, Meagher, Berg, Narod, Sopik, Cybulski and Vassy57 The likelihood of net benefit will depend not only on which genes are interrogated and the strength of the genotype-phenotype correlation58 but also on the ability of providers, many of whom are not genetic specialists, correctly to interpret and act on the results.Reference Arora59 Some commentators have raised questions about whether primary care providers are willing and prepared to handle information from genomic screening, particularly in the absence of clinical decision support (CDS) tools, availability of genetics specialists for referral, and insurance coverage for the initial consultation and follow-up.Reference Christensen and Pet60 A 2017 pilot study sought to describe the effect on clinical care and outcomes of adding whole genome sequencing to a standardized family history assessment in primary care. The study concluded that adding whole genome sequencing (WGS) to primary care reveals new molecular findings of uncertain clinical utility, and that while non-geneticist providers may manage some genetic results appropriately, in other cases the information may prompt additional clinical actions of unclear value.61

Notwithstanding uncertainty around the clinical utility of genomic information, consumer interest in obtaining genetic testing outside the clinical setting as well as enrollment of individuals in longitudinal WGS research suggests that some people value receiving the information and perceive it as meaningful for themselves or their children.Reference Robinson and Genetti62 Recent studies have sought to assess the views of those who have taken part in WGS research and to evaluate whether their a priori expectations were met. One randomized study surveyed 202 primary care and cardiology patients in the MedSeq study before and up to six months after receiving either WGS and family health history (FHH) or family history information alone. The study found that decisional regret overall was low in both groups, but that those who did not receive WGS information were more likely to report at least some regret about participating in the study. Participants who received both FHH and WGS information were much more likely to report that study results had provided new information with some level of personal or clinical utility. For example, they were over seven times more likely to report that study results had yielded accurate identification of disease risk and more than twice as likely to report that results had or would influence their medical treatment. Yet the majority of those who were found to have a pathogenic variant had no evidence of disease even after extensive clinical investigation. At the same time, these participants expected a higher level of benefit from the study than was actually achieved and were also more likely to report receiving too much information (particularly in the primary care group), which led the authors to recommend tempering patients' expectations.Reference Roberts63

Another study, involving 29 healthy participants in the HealthSeq study who completed six-month follow-up, sought to gauge the psychological and behavioral impact of receiving WGS results at various time points. The study found that most participants had positive emotional reactions to receiving their results, although a few expressed negative reactions.Reference Sanderson64 Of the seven participants who received pathogenic or likely pathogenic rare disease variant results, two were concerned about and acted on the information. Two participants who had APOE e4/e4 variants (indicating increased Alzheimer disease risk) reported being concerned, while one participant who had APOE e4/e3 variants (associated with more modestly increased risk) was confused about the result. The data also showed that among those who reported distress from the information, such distress had largely subsided by the six-month assessment. The study concluded that currently neither the benefits nor harms of personal genome sequencing are significant for most individuals, but that there may be important exceptions warranting further investigation and that the impact of returning WGS results on a larger scale remains to be seen.

III. Regulating Quality in the Clinical Context

As mentioned above, no single government agency regulates the entire spectrum of genetic and genomic test quality, and some aspects of quality are not subject to any direct federal governmental regulation, whether because of current statutory limitations, competing agency priorities, or inherent limitations of federal control over health care delivery. Both FDA and CMS currently regulate certain aspects of genomic test quality in certain situations. We begin by examining the role that federal regulatory agencies currently play in ensuring quality. CMS administers the CLIA program in partnership with CDC and FDA, which fulfill certain responsibilities in connection with CLIA.65 Other federal agencies, e.g., NIH, play supporting roles but do not exert direct oversight over the provision of clinical laboratory tests.

CMS is authorized by the CLIA statute to issue certificates to laboratories that meet standards established to ensure consistent performance of valid and reliable laboratory examinations and other procedures by clinical laboratories. “By controlling the quality of laboratory practices, CLIA standards are designed to ensure the analytical validity of genetic tests.”66 CLIA does not address the clinical validity or utility of tests.67 On the other hand, FDA regulates the safety and effectiveness of certain laboratory instruments, reagents, and test kits (collectively, in vitro diagnostic devices (IVDs)) sold to clinical laboratories. FDA's authority to regulate stems from the Federal Food, Drug, and Cosmetic Act (FDCA), which tasks the agency with ensuring the safety and effectiveness of such articles.68 The two agencies thus both regulate analytical validity, albeit with respect to different aspects of the testing process.

A. Regulation of Clinical Laboratories

(1) CLIA

The CLIA statute is a federal law that applies to all clinical laboratories operating in or testing specimens from patients in the United States.69 CLIA defines a clinical laboratory as a facility that examines materials collected from the human body for the purpose of providing information for the diagnosis, prevention, or treatment of disease or the assessment of health.70 CLIA requires clinical laboratories to hold one of five types of certificates, depending on the complexity of the tests the laboratory performs. Clinical laboratories that provide genomic testing can elect to obtain a certificate of compliance (in which case they undergo inspection by CMS or state health departments that act as CMS's agents71) or a certificate of accreditation (in which case they are inspected by one of several private accreditation bodies “deemed” (approved) by CMS, such as the Joint Commission or the College of American Pathologists (CAP)72). Finally, a laboratory is CLIA-exempt if it has been licensed by a state whose laboratory requirements CMS has determined are equal to or more stringent than CLIA's requirements, and the state licensure program has been approved by CMS.73 Two states — New York and Washington — currently meet these conditions.74 Consequently, CLIA regulations serve as a “baseline” and laboratories that are CAP-accredited or permitted by New York or Washington may be subject to slightly different or additional requirements depending on the type of testing. In particular, New York requires laboratories that perform tests using methods not cleared or approved by FDA (e.g., laboratory-developed tests (LDTs)) to obtain approval for such tests in addition to a laboratory permit. Furthermore, CAP requires laboratories performing molecular genetic testing to use the Molecular Pathology Checklist to prepare for inspection.

CLIA regulations address, among other things, personnel qualification and training, record keeping, quality control, and proficiency testing. CLIA also requires laboratories to “maintain a quality assurance and quality control program adequate and appropriate for the validity and reliability of the laboratory examinations.”75 In addition, CLIA requires that laboratories “qualify under a proficiency testing program” meeting the standards established by CMS.76

Compliance with CLIA is ascertained through periodic inspections (surveys) either by a state inspection agency or accreditation body. CMS's State Operations Manual (SOM) provides guidance to laboratories and inspectors alike on interpretation of CLIA requirements. The SOM describes the survey as an “outcome oriented” process that focuses on the effect of a laboratory's practices on patient test results and/or patient care and that focuses the surveyor/inspector on “those requirements that will most effectively and efficiently assess the laboratory's ability to provide accurate, reliable, and timely test results.”77 Emphasis is placed on the laboratory's quality system as well as the “structures and processes throughout the entire testing process that contribute to quality test results.”78

The CLIA regulatory framework places significant responsibility on the laboratory director to ensure test quality. The laboratory director is “responsible for the overall operation and administration of the laboratory, including the employment of personnel who are competent to perform test procedures, record and report test results promptly, accurately, and proficiently, and for assuring compliance with the applicable regulations.”79 Laboratory directors must, as part of their duties, ensure that:

  • testing systems developed and used for each of the tests performed in the laboratory provide quality laboratory services for all aspects of test performance, including the preanalytic, analytic, and postanalytic phases of testing;

  • the test methodologies selected have the capability of providing the quality of results required for the patient's care;

  • verification procedures used are adequate to determine the accuracy, precision, and other pertinent performance characteristics of the method;

  • laboratory personnel are performing test methods as required for accurate and reliable results;

  • quality control and quality assessment programs are established and maintained to ensure the quality of laboratory services provided and to identify failures in quality as they occur;

  • acceptable levels of analytical performance for each test system are established and maintained;

  • all necessary remedial actions are taken and documented whenever significant deviations from established performance characteristics are identified and that patient test results are reported only when the system is functioning properly;

  • reports of test results include pertinent information required for interpretation; and

  • consultation is available to the laboratory's clients on matters relating to the quality of the test results and their interpretation concerning specific patient conditions.80

Additionally, for high complexity testing the laboratory director must ensure that the laboratory is enrolled in a CMS-approved proficiency testing program for each specialty in which testing is performed, that samples are properly tested and reported, that proficiency testing reports are reviewed and, where necessary, corrective actions are taken. Where a test specialty has not been established or compatible proficiency testing samples are not offered by a CMS-approved proficiency testing program, the laboratory must, at least twice annually, verify the accuracy of the test, including the accuracy of calculated results, if applicable.81

CLIA requirements apply to laboratory tests performed using assays manufactured by third parties (IVD test systems) as well as assays developed in-house by the laboratory either from scratch or by modifying a manufacturer-developed test kit (so-called laboratory developed tests or LDTs). When a laboratory uses a proprietary test system, the laboratory may not release test results prior to establishing performance specifications relating to analytical validity for the use of the test system in the laboratory's environment.82 Performance specifications must be established for:

(1) accuracy; (2) precision; (3) analytical sensitivity; (4) analytical specificity (including interfering substances); (5) reportable range of test results for the test system; (6) reference intervals (normal values); and (7) any other performance characteristic required for test performance.Reference Aziz83 With respect to accuracy, the laboratory is responsible “for verifying that the method produces correct results,” using testing reference materials, comparing results of tests performed by the laboratory against results of a reference method, or comparing split sample results with results obtained from another method that has already been shown to provide accurate results. For qualitative methods, the laboratory must verify that a method will identify the presence or absence of the analyte.84

Where the laboratory uses a third-party IVD test system, the laboratory is “responsible for verifying the performance specifications” of the test system “prior to reporting patient test results.” The “verification of method performance should provide evidence that the accuracy, precision, and reportable range of the procedures are adequate to meet the clients' needs.” The laboratory may use the manufacturer's performance specifications as a guideline, but “is responsible for verifying the manufacturer's analytical claims before initiating patient testing.”85
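What such verification looks like varies by laboratory and method, but one common element is documenting agreement between the laboratory's results and the expected results for well-characterized materials. The following Python sketch illustrates that idea with hypothetical reference samples, genotype calls, and an illustrative acceptance threshold; it is not a statement of what CLIA or any accrediting body specifically requires.

```python
# Minimal sketch of one way a laboratory might document verification of a
# manufacturer's accuracy claim: testing characterized reference materials and
# checking agreement with expected genotypes. All values are hypothetical.

expected = {"ref_sample_1": "A/G", "ref_sample_2": "G/G", "ref_sample_3": "A/A"}
observed = {"ref_sample_1": "A/G", "ref_sample_2": "G/G", "ref_sample_3": "A/G"}

matches = sum(1 for s in expected if observed.get(s) == expected[s])
agreement = matches / len(expected)

ACCEPTANCE_THRESHOLD = 0.95  # hypothetical laboratory-defined criterion
print(f"agreement = {agreement:.2%}")
if agreement < ACCEPTANCE_THRESHOLD:
    print("Verification failed: investigate before initiating patient testing.")
```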

Failure to comply with CLIA certification and/or state clinical laboratory licensure requirements may result in a range of enforcement actions, including certificate or license suspension, limitation, or revocation; a directed plan of correction; onsite monitoring; civil monetary penalties; criminal sanctions; and revocation of the laboratory's approval to receive Medicare and Medicaid payment for its services. In practice, these penalties are infrequently applied, as the stated goal of regulators in the first instance is to educate laboratories and work collaboratively to correct non-compliance.86

There has been ongoing concern that CLIA's concept of “high-complexity” testing fails to capture the true level of complexity that genetic and genomic testing actually requires. CMS has established specific requirements for certain CLIA specialty areas, such as microbiology and cytogenetics, but has persistently declined to recognize genetic and genomic testing as a specialty area.87 In 1997 — back in the days of single-gene tests — a joint task force of NIH and the Department of Energy called on the Clinical Laboratory Improvement Advisory Committee (CLIAC), which advises CMS on CLIA matters, to consider creating a genetic testing specialty.88 Other concerned groups later filed citizens' petitions calling on CMS to create a genetic testing specialty.Reference Hudson89 CMS did not create such a specialty. The advent of genomic testing has only added to concerns that CMS is not updating CLIA regulations to address the added complexity of new genomic tests.

The goal of CLIA is to ensure the accuracy and reliability of test results (i.e., analytical verification and validity) as the test is performed in that specific laboratory. In the case of genomic testing, the current absence of a molecular genetics specialty or of a CMS-approved genetics-specific proficiency testing program poses a challenge to laboratories in demonstrating, and to surveyors in confirming, test accuracy. Although the CAP has made efforts to address the absence of standards and requires compliance as a condition of accreditation, not all genomic testing laboratories elect to be regulated under a CLIA certificate of accreditation, with CAP as their accrediting body. As previously noted, CLIA allows laboratories to pursue a CLIA certificate of compliance, in which case they would not be answerable to these standards.

CLIA does not directly address the clinical validity or clinical utility of laboratory tests. However, implicit in the responsibilities of the laboratory director is the requirement to determine whether there are sufficient data to support the inclusion of a test on the test menu, that is, to determine whether the test is clinically valid.90 A point of concern is that CLIA, even while specifying requirements for analytical validity, delegates responsibility for ensuring that a new test is clinically valid to the laboratory director; thus there is no external, data-driven regulatory review of clinical validity before a laboratory offers a new test.91 In the case of genomic sequencing, the laboratory director's responsibility necessarily includes determining whether there are sufficient data to report a specific variant as clinically significant. As noted above, however, because WGS necessarily will generate information about variants whose clinical significance is uncertain, genomic test results can include a significant amount of information for which the laboratory cannot provide “pertinent information required for interpretation”92 and for which the laboratory will not be able to assist clients in interpreting test results. Additionally, CLIA does not include an external review component (either before or after a laboratory begins to offer a test) to evaluate a laboratory's evidentiary basis for performing a test or for the interpretive conclusions included in the test report. Laboratories therefore have significant discretion as to what tests they include on their test menu and how they perform variant interpretation.

CLIA also does not specifically regulate the bioinformatics pipeline, that is, the software algorithms used to generate and interpret genomic sequence data. When the bioinformatics is performed in-house by the same clinical laboratory that performs the sequencing, there is an implicit obligation under CLIA for the bioinformatics to be validated, given its impact on test accuracy and reliability. However, CMS has not defined specific educational or training requirements for bioinformatics personnel even though that discipline requires different expertise than other aspects of laboratory testing, and CMS has not specified requirements for software validation. Furthermore, when the interpretive bioinformatics is performed by an entity separate from the laboratory that generated the sequencing data (i.e., the increasing use of a separate “dry lab” or “unbundled” interpretation services), CLIA arguably does not apply to that separate entity. Standing alone, bioinformatics does not involve direct examination of materials derived from the human body, but rather involves only the interpretation of digitally-stored data resulting from prior examination of a specimen by another entity.93

(2) FDA

The FDCA gives FDA authority to regulate medical devices, defined to include instruments, machines, reagents, in vitro diagnostic (IVD) devices, and similar or related articles or components, that are “intended for use in the diagnosis, prevention, cure, mitigation, or treatment of disease” or intended to affect the structure or function of the body.94 The FDCA prescribes a risk-based framework under which the regulatory requirements are stratified according to the device's risk. Low-risk devices generally may be marketed without prior FDA marketing authorization, as long as manufacturers comply with certain “general controls.”Reference see also95 High-risk devices are generally subject to pre-market approval, and applicants must submit, among other information, “[f]ull reports of all information, published or known to or which should be reasonably known to the applicant, concerning investigations which have been made to show whether or not the device is safe or effective.”96 Moderate-risk devices are subject to general controls and may be — but often are not — subject to “special” controls to ensure safe and effective use, and in many cases the manufacturer must submit a premarket notification application and receive clearance before the device may be marketed (often referred to as a “510(k) clearance”).97 To obtain 510(k) clearance, the manufacturer must demonstrate “substantial equivalence” to a previously marketed (“predicate”) device, meaning that the device has the same intended use as the predicate and has technological characteristics that are either the same as or at least as safe and effective as the predicate's. The manufacturer must also demonstrate compliance with general controls and special controls, which may include guidance documents and post-market surveillance.98 The 510(k) process generally does not require the manufacturer to submit clinical evidence directly demonstrating the safety and effectiveness of its device.99

Test instruments and systems manufactured by third parties and sold to clinical laboratories for use in collection, preparation, or examination of specimens from the human body are regulated by FDA as IVD devices. Thus, for example, FDA regulates as a Class II medical device a “high throughput genomic sequence analyzer for clinical use,” which is defined as an “analytical instrument system intended to generate, measure and sort signals in order to analyze nucleic acid sequences in a clinical sample.”100 These devices, which include Illumina's MiSeqDx Platform,101 are subject to special controls that specify information that must be included in device labeling, as well as to 510(k) pre-market notification submission requirements. FDA's authority to regulate medical devices includes power to regulate embedded software that affects the safety and effectiveness of the overall device. This was seen, for example, when software limitations in the MiSeqDx sequencing system prompted a 2014 recall.102

Clinical laboratories that perform genomic testing generally use one or more test instruments, systems, or reagents regulated by FDA as part of their testing process, but the laboratories may make modifications to these products or may use them for purposes not specified in labeling or in ways not addressed in manufacturer instructions. A test system that is developed and validated by a clinical laboratory, even when it incorporates FDA-approved or cleared components, has traditionally been regulated as an LDT.103 FDA has historically taken the position that clinical laboratories using LDTs are “manufacturers” of IVD devices and that it has jurisdiction to regulate LDTs as IVDs.104 At the same time, FDA historically has exercised “enforcement discretion,”105 and thus generally has not required clinical laboratories performing LDTs to comply with FDA's IVD device regulatory requirements. A few developers have sought FDA pre-market authorization for genomic LDTs, primarily in the context of cancer diagnosis or prognosis but also for the use of next generation sequencing platforms for the diagnosis of specific conditions such as cystic fibrosis.106 Additionally, in a few instances FDA has declined to exercise enforcement discretion for certain types of genetic LDTs, including those offered by DTC companies. FDA recently issued a safety communication warning clinical laboratories against offering pharmacogenetic testing for certain drugs whose FDA-approved label does not describe how pharmacogenetic information can be used in determining therapeutic treatment.107 As a general matter, however, genomic LDTs offered by many clinical laboratories currently benefit from FDA's enforcement discretion policy and are not subject to FDA regulation.

In April 2018, the FDA issued two guidance documents intended to inform the development of NGS-based testing. The first guidance addresses the design, development, and analytical validation of NGS-based IVDs intended to aid in the diagnosis of suspected germline diseases.108 Germline diseases encompass “those genetic diseases or other conditions arising from inherited or de novo germline variants,” and FDA makes clear that the guidance does not address “tests intended for use in the sequencing of healthy individuals.”109 Although nonbinding, the guidance provides some insight as to the agency's current thinking about quality and the clinical implementation of NGS-based tests. The guidance, which FDA stated was intended to “spur development of standards” for NGS testing,110 discusses “performance characteristics.” The guidance describes analytical validation as “measuring a test's analytical performance over a set of predefined metrics to demonstrate whether the performance is adequate for its indications for use and meets predefined performance specifications.”111 This typically involves evaluating whether the test successfully identifies or measures, within defined statistical bounds, the presence or absence of a variant that will provide information on a disease or other condition in a patient.112 Per the guidance, “[t]he complete NGS-based test should be analytically validated in its entirety (i.e., validation experiments should be conducted starting with specimen processing and ending with variant calls, including documentation that performance meets predefined thresholds) prior to initiating use of the test.”113 The guidance lays out specific performance metrics to be assessed when analytically validating NGS-based tests, including accuracy (positive percent agreement, negative percent agreement, technical positive predictive value), precision (reproducibility and repeatability), limit of detection (establishing a minimum and maximum amount of DNA enabling the test to provide expected results in 95% of runs), and analytical specificity (interference, cross-reactivity, and cross-contamination).114
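To illustrate the accuracy metrics the guidance names, the following Python sketch computes positive percent agreement, negative percent agreement, and technical positive predictive value from a hypothetical per-site comparison of a test's variant calls against a reference (“truth”) call set. The counts are invented for illustration and do not reflect any particular validation study or any FDA-specified threshold.

```python
# Minimal sketch of the accuracy metrics named in the FDA guidance, computed by
# comparing a test's variant calls against a reference ("truth") call set.
# All counts are hypothetical.

def accuracy_metrics(tp, fp, fn, tn):
    """Positive/negative percent agreement and technical positive predictive value."""
    ppa = tp / (tp + fn)    # positive percent agreement with the reference calls
    npa = tn / (tn + fp)    # negative percent agreement with the reference calls
    tppv = tp / (tp + fp)   # technical positive predictive value of the test's calls
    return ppa, npa, tppv

# Hypothetical tallies: 980 variants called by both test and reference (TP),
# 5 called only by the test (FP), 20 called only by the reference (FN),
# and 99,000 sites where both agree no variant is present (TN).
ppa, npa, tppv = accuracy_metrics(tp=980, fp=5, fn=20, tn=99_000)
print(f"PPA={ppa:.3f} NPA={npa:.3f} TPPV={tppv:.3f}")
```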

Concurrently, FDA issued a second guidance on the use of public human genetic variant databases to support the clinical validity of genetic and genomic-based IVDs.115 The guidance describes an approach in which test developers may rely on clinical evidence from FDA-recognized public databases to support clinical claims for their tests and help provide assurance of the accurate clinical evaluation of genomic test results. The guidance describes how product developers can use these databases to support the clinical validation of NGS tests they are developing and states that FDA-recognized databases will provide test developers with an efficient path for marketing clearance or approval of a new test. Subsequently, in December 2018, FDA recognized ClinGen's database of expert-panel-approved variant interpretations as the first to meet the standards for an FDA-recognized database. The existence of one quality-controlled database to aid in variant interpretation is a step forward, but much work remains to be done. Indeed, recognizing that theirs is a work in progress, the organizers of ClinGen are continuing to expand their efforts and encourage other organizations and databases to seek FDA recognition.116

In the past, FDA has signaled an intent to modify its enforcement discretion policy with regard to regulation of LDTs. In 2014, the agency proposed a draft regulatory framework for LDTs,117 but abandoned the effort in 2016.118 While FDA currently does not seem inclined to implement an LDT regulatory framework in a systematic way under existing statutory authorities, it is possible that Congress may enact legislation directing FDA to regulate LDTs. Congress has been considering diagnostic test legislation for several years, including through several discussion drafts setting forth possible legislative approaches. Most recently, a legislative discussion draft of the VALID Act (Verifying Accurate Leading-edge IVCT Development) was publicly released in December 2018. Although released too late for consideration in the 115th Congress, the draft legislation may be taken up at some point in the future.

FDA's role in the regulation of software also is potentially relevant to oversight of genomic testing. For many years, FDA has regulated “software in a medical device”119 — software embedded in traditional devices like pacemakers, drug infusion pumps, and in vitro diagnostic (IVD) test kits where the software affects the safety and effectiveness of the device as a whole.120 Thus, software incorporated as a component of an FDA-regulated genomic test — for example, software embedded in an FDA-regulated sequencing analyzer — is reviewed by FDA as part of its premarket review of the safety and effectiveness of the overall device. FDA has reviewed such software in the context of the small number of NGS-based tests that have undergone FDA review.

This, however, leaves a vast amount of stand-alone genomic testing software unregulated at the current time. Software used to interpret genomic data generated using LDTs may escape FDA scrutiny. The same is true of cloud-based software services that laboratories incorporate into their bioinformatics pipeline. In response to this problem, FDA has asserted its authority to regulate "software as a medical device" (SaMD), defined as "software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device."121 Such software "utilizes an algorithm (logic, set of rules, or model) that operates on data input (digitized content) to produce an output for a medical use specified by the manufacturer."122 Software used in the bioinformatics pipeline arguably could meet the definition of SaMD to the extent it is intended for use in clinical testing (i.e., as part of disease diagnosis or health assessment) and is not embedded in a device that is already subject to FDA regulation (such as a clinical sequencer). Unfortunately, FDA is still far from having a framework in place to support comprehensive regulation of SaMD.
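For readers unfamiliar with what such stand-alone interpretation software looks like in practice, the sketch below shows a toy variant-classification step of the kind that arguably fits the SaMD definition: an algorithm operating on data input to produce patient-specific output for a medical purpose, independent of any hardware device. The assertion table, frequency threshold, and classification labels are hypothetical illustrations, not any laboratory's scheme or an FDA-recognized source.

```python
# Purely illustrative sketch of stand-alone interpretation software of the kind
# that arguably fits the SaMD definition: an algorithm operating on data input
# (a variant call) to produce patient-specific output for a medical purpose,
# without being part of a hardware device. The assertion table, frequency
# threshold, and labels are hypothetical.

from dataclasses import dataclass

@dataclass
class VariantCall:
    gene: str
    hgvs: str                 # coding-level variant description, e.g., "c.1521_1523delCTT"
    allele_frequency: float   # population frequency from an annotation source

# Hypothetical table of previously curated assertions.
CURATED_ASSERTIONS = {
    ("CFTR", "c.1521_1523delCTT"): "pathogenic",
}

def interpret(call: VariantCall) -> str:
    """Return a clinical classification for a single variant call."""
    known = CURATED_ASSERTIONS.get((call.gene, call.hgvs))
    if known:
        return known
    # Variants common in the population are unlikely to cause rare Mendelian disease.
    return "likely benign" if call.allele_frequency > 0.05 else "uncertain significance"

print(interpret(VariantCall("CFTR", "c.1521_1523delCTT", 0.004)))
```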

FDA's regulation in this area is still very much a work in progress, and work to date has focused on medical software generally as opposed to genomic software more specifically. In a 2017 guidance document,123 FDA adopted the principles of the International Medical Device Regulators Forum (IMDRF),124 of which the agency is a member, for "clinical evaluation" of SaMD, that is, a "set of ongoing activities conducted in the assessment and analysis of a SaMD's clinical safety, effectiveness and performance as intended by the manufacturer in the SaMD's definition statement."125 The SaMD guidance builds on previous IMDRF documents that addressed SaMD terminology, risk categorization, and quality management system principles, respectively. The guidance identifies three pillars of clinical evaluation:

  • establishing a valid clinical association between the SaMD output and the targeted clinical condition;

  • demonstrating that the SaMD is analytically valid, meaning that it correctly processes input data to generate accurate, reliable, and precise output data; and

  • demonstrating that the SaMD is clinically valid, meaning that the output data achieves the intended purpose in the target population in the context of clinical care.

The SaMD guidance explains that clinical evaluation should be a systematic and planned process that continues through the device lifecycle as part of the quality management system. Further, the guidance states that the “level of evaluation and independent review” of a particular SaMD should be commensurate with its risk. The guidance encourages manufacturers to leverage the connectivity inherent in SaMD to modify software based on “real-world” performance.

In its 2017 Digital Innovation Action Plan126 and a pilot Digital Health Software Precertification (Pre-Cert) Program proposal,127 FDA acknowledged that its traditional premarket review process for devices is not well suited for software:

FDA's traditional approach to moderate and higher risk hardware-based medical devices is not well suited for the faster iterative design, development, and type of validation used for software-based medical technologies. Traditional implementation of the premarket requirements may impede or delay patient access to critical evolutions of software technology, particularly those presenting a lower risk to patients.128

FDA pledged to “reimagin[e] its approach to digital health medical devices.”129 The Pre-Cert Program, currently in its pilot phase, focuses FDA's scrutiny at the level of the firm that develops the software, rather than on the specific software product.130 FDA would verify that the firm “demonstrate[s] a culture of quality and organizational excellence based on objective criteria, for example, that they can and do excel in software design, development, and validation (testing).”131 If so, FDA may allow a pre-certified firm to move its lower-risk software to market without premarket review or may provide a more cursory or faster review of the firm's moderate- and higher-risk software,132 possibly relying on postmarket evidence to validate safety and effectiveness after software already is in use.133

FDA acknowledges that the Pre-Cert program is only in its developmental phase and will not provide a viable path to market in the near future.134 FDA also recognizes that unresolved questions remain about whether it has the statutory authority it needs to regulate software, which ultimately may require new legislation.135 FDA notes that embedded software is not currently eligible for pre-certification and would, at least in the near future, continue to go through FDA's traditional premarket review process. In short, there are many unresolved issues, and it is still not clear whether — and how — FDA will be able to regulate software in the genomic testing bioinformatics pipeline.

In tacit recognition of this reality, FDA's 2018 guidance on analytical validity of genomic tests for germ-line diseases136 simply called on laboratories to specify and document all the software they are using, “including the source (e.g., developed in-house, third party), and any modifications” and to “document whether the software will be run locally or remotely (e.g., cloud-based).”137 There was no assertion that this bioinformatics software would necessarily receive any regulatory oversight, but laboratories were exhorted to “document and validat[e] their bioinformatics software performance in the context of the end-to-end NGS-based test.”
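As a rough illustration of the kind of documentation this language calls for, the sketch below records each pipeline component's source, version, modification status, and execution environment. The structure, field names, and entries are hypothetical examples; FDA does not prescribe a particular format.

```python
# Illustrative sketch of the software inventory the guidance asks laboratories
# to document for an NGS test: each pipeline component, its source (in-house or
# third party), version, any modifications, and whether it runs locally or in
# the cloud. Field names and entries are hypothetical, not an FDA format.

pipeline_inventory = [
    {"step": "read alignment",     "source": "third party", "version": "1.9",
     "modified": False, "execution": "local"},
    {"step": "variant calling",    "source": "third party", "version": "4.1",
     "modified": True,  "execution": "local",
     "notes": "default quality filters adjusted during validation"},
    {"step": "variant annotation", "source": "in-house",    "version": "2020.1",
     "modified": False, "execution": "cloud (vendor-hosted)"},
]

for component in pipeline_inventory:
    print(f"{component['step']}: {component['source']} software, "
          f"version {component['version']}, run {component['execution']}")
```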

More recently, however, FDA signaled a more assertive posture toward regulating bioinformatics software used in genomic testing. The agency's September 2019 draft guidance on clinical decision support software138 states that FDA views bioinformatics software used to process high-volume "omics" data as being subject to FDA's device regulations if the software produces patient-specific information, whether or not the software is clinical decision support (CDS) software.139 FDA also stated that "bioinformatics software products that query multiple genetic variants against reference databases or other information sources to make patient-specific recommendations" are medical devices.140 This suggests that FDA views all phases of the genomic testing bioinformatics pipeline — including variant interpretation — as subject to FDA medical device regulation. The draft guidance's public comment period ended in December 2019, and it is under consideration by the agency. A companion article in this issue reflects on some of the potential impacts of FDA's plans to regulate software used in genomic testing.141

With software regulation unsettled, other entities are seeking to develop voluntary standards for both NGS testing and bioinformatics. In particular, the CDC-led workgroup Nex-StoCT142 has published three consensus recommendations since 2012 that address standards for NGS testing and bioinformatics. Collectively, these documents address sequence generation, analysis of raw sequence data, and standardization of variant files to facilitate meaningful inter-laboratory comparisons and provide a common format for data contained within the variant file.143 Although not legally binding, these recommendations may provide useful guidance to entities developing, implementing, or selecting among NGS-based test systems.
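To illustrate the kind of common variant-file format these recommendations address, the sketch below parses a single record in the widely used Variant Call Format (VCF), whose tab-delimited columns (CHROM, POS, ID, REF, ALT, QUAL, FILTER, INFO) give laboratories a shared structure for exchanging and comparing variant calls. The record shown is fabricated for illustration, not real patient data.

```python
# Minimal, illustrative parse of one record in the tab-delimited VCF layout
# (CHROM, POS, ID, REF, ALT, QUAL, FILTER, INFO). A shared format like this is
# what makes inter-laboratory comparison of variant calls tractable. The record
# below is a fabricated example.

VCF_COLUMNS = ["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO"]

record_line = "chr7\t117559590\t.\tATCT\tA\t50\tPASS\tDP=223"

def parse_vcf_record(line: str) -> dict:
    """Split one VCF data line into named fields (ignoring any sample columns)."""
    fields = line.rstrip("\n").split("\t")
    return dict(zip(VCF_COLUMNS, fields[:len(VCF_COLUMNS)]))

record = parse_vcf_record(record_line)
print(record["CHROM"], record["POS"], record["REF"], ">", record["ALT"], record["FILTER"])
```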

Finally, it is important to note that even when FDA regulates a genetic or other type of diagnostic test as a medical device, FDA evaluates the test's analytic and clinical performance for a specific intended use.144 FDA has no authority to interfere with clinicians' off-label use of lawfully marketed devices, so clinical uses of a test may stray beyond the use for which FDA has reviewed evidence.145 An oft-cited example of this problem, relating to a non-genetic test, is the fact that FDA has cleared prostate-specific antigen testing for monitoring men who already have been diagnosed with prostate cancer, yet the test is widely prescribed off-label for screening healthy individuals — a use for which the test may not be safe and effective.146 Further, modern sequencing "technology allows broad and indication-blind testing"147 — in other words, genomic testing lacks a clearly enunciated intended use, because the data it generates can be put to a vast multiplicity of uses.148 Even if regulators ensure that genomic testing has analytical and clinical validity for one intended use — for example, diagnosing the cause of previously undiagnosed developmental delay in a child — the test generates data about thousands of other genetic variants, some of which are rare or never seen before, that lend themselves to many other uses for which analytical and clinical performance have not been assessed, which some have characterized as opportunistic screening.149 Moreover, as mentioned previously in this section, FDA has followed an enforcement discretion policy for many — but not all — lab-developed tests,150 so there are many genomic tests for which FDA has never reviewed analytical and clinical performance for even one intended use.

B. Government Regulation of Clinical Utility

CMS does not regulate the clinical utility of tests under CLIA.151 The simplistic account of FDA regulation is that FDA also does not require evidence of clinical utility for genomic tests. The reality is more nuanced. First, FDA assesses the safety and effectiveness of tests relative to the manufacturer's intended use for the test. If the manufacturer states a clinical intended use (e.g., "This test is useful for diagnosing cystic fibrosis"), as opposed to stating a purely analytic use (e.g., "This test accurately detects the presence or absence of specific CFTR genetic variants"), then the test's intended use implicitly asserts clinical utility. When FDA confirms that the test is safe and effective for a clinical intended use, FDA in effect is confirming that the test has its stated clinical utility. Second, FDA has authority to oversee not just the analytical and clinical performance of tests — that is, their analytical and clinical validity — but also the labeling of tests.152 FDA "want[s] to make sure that what a manufacturer is saying about their test in their labeling, in their materials about the test is truthful and accurate."153 There is no requirement for test manufacturers to make any claims of clinical utility in their labeling, and a test can be brought to market with purely analytical claims. But if a manufacturer does elect to make any claims about clinical utility in a test's labeling, FDA will require evidence to support those claims.154 This is part of FDA's statutory mandate to ensure that drug and device labeling shall not be "false or misleading in any particular."155

FDA recognizes, however, that clinical utility is, to a large degree, a medical practice issue. Clinical utility addresses whether a test (and subsequent interventions taken based on the result) leads to better clinical management or an improved health outcome among people with a positive test result.156 No matter how accurately a test does its job (e.g., detecting cancer), the test will have no clinical utility if the clinician orders an inappropriate course of treatment after receiving the test result. For this reason, ensuring the clinical utility of tests is largely the province of state medical practice regulators and tort law. Professional societies also offer guidance.157 FDA's role is generally confined to ensuring that any assertions test manufacturers make about clinical utility are supported by sound evidence.158

For many people, genomic tests are clinically available as a practical matter only if they are covered by insurance. Importantly, both government and commercial payers evaluate clinical utility, usually in terms of the impact of test results on the patient's health outcome, as part of determining whether to cover clinical genomic testing.159 In light of the rapidly evolving but still incomplete understanding of the clinical utility of particular variants, payers are more likely to pay for focused genetic tests with clearly documented clinical impact than for broad-based genomic tests in which the clinical implications of individual variants vary widely.160 The local contractors that administer Medicare payments in specific regions of the country have significant discretion in determining whether to cover diagnostic testing, including genomic sequencing. The same is true of private payers, which often, but not always, look to Medicare coverage policies in deciding whether to cover a particular test.161 In 2018, CMS issued a national coverage decision stating that coverage of NGS for tumor profiling is required only where there is an FDA-approved companion diagnostic linked to the testing,162 but the agency is reconsidering its decision in light of significant stakeholder objections.163 Pressure is also growing for payers to provide coverage with evidence development, in which patients are enrolled in clinical trials or registries to inform future reimbursement policy.164 Private payers often follow the CMS coverage determinations but, because private insurance is a matter of contract between private parties, they are not bound to do so. As a result, private payer coverage can be inconsistent from one payer to the next and can change over time.

To the extent a patient is willing and able to pay for testing directly, however, the ability to obtain genomic testing is limited only by a physician's willingness to order it, which may reflect at least in part the clinician's assessment of its value, and the laboratory's willingness to provide it. Some states allow direct-access testing, in which patients can order their own laboratory tests without going through a physician. In those states, physicians are less able to serve as a check on the use of tests with uncertain clinical utility, and patients' access to tests is limited only by their pocketbooks.165

C. The Role of Non-Governmental Entities in Ensuring Genomic Testing Quality

Many non-governmental entities, such as the Association for Molecular Pathology (AMP),166 CAP, and the American College of Medical Genetics and Genomics (ACMG), have issued standards and guidance documents designed to promote the quality of clinical genomic tests. One major topic has been improving the interpretation of sequence variants,167 particularly in the area of oncology.168 The CAP issued broad-based laboratory standards for next-generation sequencing169 and has a number of standing committees on issues related to molecular pathology and genomics.170 The ACMG has issued a statement about the need for genomic data sharing to promote quality.171 As befits an organization focused on clinical care, a number of its statements are directed more toward clinical utility and practice.172 These guidelines, and the steps that lead to their development,173 play a crucial role in shaping practice. While these documents are not directly enforceable, their impact can be increased if they are adopted by payers as a condition of coverage and reimbursement.

IV. Recommendations for Governing Genomic Testing to Advance Quality

Because genomics (and clinical laboratory testing generally) is a field where the science is advancing rapidly, regulatory flexibility is essential to ensure safety and efficacy while supporting innovation. Fortunately, complexity and rapid change are not new problems for regulators of medical or other products. Over the past three decades, there have been many areas of administrative law where the regulated industries and technologies grew more complex and product life cycles grew faster, and where opposition to regulation was fierce. In response to this fluid regulatory landscape, some agencies have turned to "new governance" styles of oversight.174

In theory, new governance embraces "the challenge and the promise of destabilization and social plasticity" and takes account of the polycentric world in which knowledge relevant for oversight is dispersed among many entities and among people with different types of expertise.175 It aims to be more transparent, flexible, and democratic than command-and-control (top down) style regulation and to employ "centrally coordinated local problem solving" processes.176 New governance is characterized by collaborative interactions among regulators, the regulated industry, and other stakeholders; by regulatory flexibility and responsiveness; and by the use of "soft law" techniques for shaping behavior within the regulated industry. Soft law includes: benchmarking and information sharing to improve practices within an industry; incentives for voluntary adherence to industry standards (including naming and shaming of entities that do not abide by appropriate standards); incentives for developing an exemplary record of adherence to regulatory requirements; education of relevant individuals within regulated entities; and other creative methods for encouraging improvements and safety within industries.177 New governance does not replace regulation but expands the toolbox with which agencies seek to shape behavior. In some cases, new governance may deemphasize regulation in favor of other governance tools, perhaps because the processes for promulgating regulations are slow, regulations are difficult to revise once they are implemented, and in many industries regulatory compliance is disappointingly low.178 In the U.S., examples of new governance primarily come from environmental law and occupational health and safety law.179

Interestingly, several recent proposals by the FDA reflect new governance themes. For instance, the agency has proposed making governance of medical devices more transparent and inclusive through the use of collaborative communities — ongoing forums that bring together numerous stakeholders whose input is relevant for identifying or addressing a medical device governance issue.180 Members of a collaborative community for genomic tests might include patients, care-partners, health care providers, genome scientists, professional organizations, bioethicists, regulators from federal and state agencies, other legal experts, software designers, algorithm designers, clinical laboratory representatives, and device industry representatives. A collaborative community may help to define and specify governance challenges, and it may produce recommendations or other deliverables. The FDA has also proposed innovative, iterative, and adaptive approaches for governing FDA-regulated software.181 Congress has pushed the agency to adopt some aspects of new governance. For instance, the 21st Century Cures Act mandated that the agency obtain early patient input to inform some regulatory decisions.182

Many of this article's recommendations (below) have a distinctively new governance flavor. The authors are cognizant, however, that new governance has its critics whose work often responds to problems that became apparent as agencies implemented new governance models. New governance may fail because its implementation costs are high. Agencies and other stakeholders must be willing to invest significant sums of money, time, and expertise to support robust participation of relevant stakeholders, to gather appropriate data for learning by doing, to provide appropriate feedback to regulated entities, and for other new governance activities.183 And even with appropriate investments, critics argue that soft law favors people or entities with more power over people or entities with less power, and favors stakeholders with concentrated and easily articulated interests over stakeholders whose interests are more diffuse.184 Others note that new governance undermines traditional notions of government accountability and that its proceduralism may compromise substantive norms of justice.185

This article takes criticisms of new governance seriously, recognizing that some of its recommendations could be implemented in a suboptimal manner that likely would not improve the quality of genomic tests. For instance, members of a collaborative community or similar group contemplating quality standards for genomic tests would need to be transparent about their interests and biases, and the group would have to be composed in a manner that helped offset biases. The group's processes should be designed to diminish the effects of biases and power imbalances among the participants. This article does not purport to design an entirely new governance process for clinical genomic tests, but to make recommendations that are capable of implementation by agencies, professional organizations, or other governance nodes in the polycentric world of quality control for genomic testing. That our recommendations reflect, to a large extent, a new governance mindset means that past experience with the successes and failures of new governance could help guide their implementation. In any case, these agencies must act within the scope of their statutory authority and comply with the requirements of the Administrative Procedure Act.

A. Findings

  1. Several barriers at times challenge laboratories' ability to deliver high quality genomic tests, including:

    • a. The clinical evidence base, while developing rapidly, is still incomplete.

    • b. Current testing methods are less accurate in some parts of the genome and for certain types of variants.

    • c. Interpretation of variants can be challenging, especially for individuals who are not of northern European ancestry, due to a lack of diversity among individuals tested to date.

  2. Quality is a joint effort involving laboratories, expert input (including from physicians and genetic counselors, laboratorians, and bioinformaticists), industry, professional societies, patient advocacy groups, patients themselves, and regulators. Cooperative and consensus-based approaches should be developed to advance quality (see, for example, the “Collaborative Community” concept being discussed by FDA).186

    • a. Professional and technical standards can advance quality when developed in a transparent, evidence-based, and rigorous fashion that includes all stakeholders and that includes and addresses the interests of patients.

  3. Stakeholders, including patients, need certainty and clarity on:

    • a. The jurisdiction of each relevant oversight agency.

    • b. Requirements for demonstrating quality.

    • c. Regulatory processes.

  4. The same activity should be regulated (or not regulated) in the same way.

    • a. Genetic and genomic tests can be developed either by the medical device industry (i.e., test kits) or by laboratories (i.e., LDTs), but, in the current scheme, are often regulated differently depending on this distinction.

    • b. Patients, physicians, and other stakeholders do not care whether a test is a laboratory developed test or a test kit or what type of entity developed or performed the test. Patients, physicians, and others do and should care about whether the test is reasonably safe and effective for its claimed indications, and they should be aware when a test is being used outside its appropriate indications in ways that may lead the results and their interpretations to be misleading or even inaccurate.

  5. The Centers for Medicare & Medicaid Services, in its enforcement of CLIA, along with accreditation bodies such as the College of American Pathologists and state regulators in CLIA-exempt states, play a major role in overseeing laboratory operations to ensure analytic verification and validity but at times fail to update their regulations or guidance to take account of new laboratory methods and the complexity of modern testing technologies such as genomic testing.

  6. The Food and Drug Administration regulates both analytic and clinical validity of tests in their intended uses, although the scope of its jurisdiction has been challenged by some.

    • a. With respect to test kits manufactured by third parties and sold to clinical laboratories, FDA's medical device requirements generally address analytical validity and clinical validity as set forth in the intended use of the test and may sometimes touch on clinical utility depending again on the intended use claimed by the manufacturer. The “indication-blind” nature of genomic testing — the fact that it generates vast amounts of data that could be put to any number of uses beyond the problem that led to testing in the first instance — poses a major challenge for FDA's traditional oversight processes.

    • b. The FDA traditionally has exercised enforcement discretion over LDTs, and its recent attempts to regulate these tests have been spotty and inconsistent. Its actions have been met with opposition from clinical laboratories and other stakeholders and have led to some confusion and uncertainty on the part of stakeholders, including patients.

    • c. Regulation of software is essential to the quality of genome-scale tests and to interpreting the clinical significance of genomic test data and is a major challenge that FDA has only begun to deal with. It has been widely assumed — but is far from clear — that FDA is the appropriate regulator for all phases of the bioinformatics pipeline, some phases of which (e.g., variant interpretation) may be more in the nature of medical practice regulation than product regulation.

    • d. Clinical utility is generally addressed in the practice of medicine and in payer decision making. Although tests are not required to make claims about clinical utility as a condition for entering the market, FDA can regulate such claims if test developers choose to make them.

B. Recommendations

  • 1. Relevant regulatory agencies need to develop and maintain expertise in genetic and genomic test development and implementation.

  • 2. Cooperative and consensus-based standards should be developed to advance quality.

    • a. Regulators should have an efficient process for identifying, recognizing, and encouraging the adoption of such standards.

      • i. One of the FDA's current strategic priorities is to establish “collaborative communities” to work toward common objectives in device regulation.187 Such communities might serve as a nucleus or model for cooperative standards development as well as provide input to inform regulatory actions taken by the agencies.

  • 3. CLIA should be modernized and harmonized with broader quality systems and modern terminology.

    • a. Relevant agencies and scientific stakeholders should work to generate more samples for proficiency testing.

    • b. Laboratories need to have robust preanalytical processes and standards, including robust purchasing controls, processes for collecting and processing samples, and corrective and preventive action processes.

    • c. Bioinformatics pipelines need to be reviewed appropriately.

    • d. CMS needs to develop a genetic and genomics testing specialty.

      • i. In the interim, CMS should designate which professional organization standards and guidelines, such as the CAP Molecular Pathology Checklist, are recognized as the most authoritative sources with respect to particular aspects of genomic testing.

  • 4. FDA needs to continue to pursue improvements in several arenas.

    • a. Regulation needs to be risk-based, taking into account the characteristics of tests and the clinical context in which they will be used for their intended purpose.

    • b. FDA should work collaboratively with other oversight bodies, including state medical practice regulators and professional organizations, to discourage inappropriate uses of genomic tests for unintended purposes that may be unsubstantiated or unsafe.

    • c. Regulatory processes or standards need to be implemented to assess all phases of software development for use in genomic testing, and to ensure uniform, consistent oversight of this software regardless of whether it is embedded or stand-alone and whatever its provenance. These processes need to ensure that the strengths and weaknesses of software are transparent, both for regulators and user groups such as physicians and laboratorians who rely on software systems. This includes transparency about the limitations of software algorithms, transparent access to non-proprietary databases and useful descriptions of the limitations of proprietary databases on which the software relies, and transparent business practices that foster open discussion about software characteristics and performance.

      • i. Standards for developing and testing algorithms need to be developed.

      • ii. Several groups, including FDA, CDC, AMP, and ACMG have been working to develop standards or other regulatory and non-regulatory approaches to assess the validity of bioinformatics systems. Consensus approaches should be adopted, and software developers should be required to demonstrate their adherence.

      • iii. It would be helpful if scientists, regulatory professionals, and engineers involved in designing and marketing relevant algorithms and software encouraged their professional organizations to develop the expertise necessary to collaborate in proposing and implementing quality approaches.

    • d. To the extent that FDA chooses to rely on standards stated as special controls, the agency must develop strategies to ensure consideration of the interests of all stakeholders including patients in their development. FDA also needs to make sure that appropriate special controls are implemented in a timely fashion.

    • e. FDA needs to develop a pathway that relies more on post-market surveillance for rare diseases and breakthrough tests that meet an unmet clinical need or that represent a clinically significant advance over current technology. These tests should have analytical validity before being used clinically and while clinical validity is being confirmed in the post-market space. Using this approach requires application of sound methods for demonstrating clinical validity in the post-market context and a commitment by regulators to ensure that high quality post-market studies are conducted in a timely fashion.

  • 5. The genomics community and other stakeholders need to continue to improve genetic and genomic variant databases and their use in variant interpretation.

  • 6. Limitations of test accuracy and of interpretation of clinical validity need to be clearly communicated to clinicians and patients.

  • 7. Payment systems should recognize and reward quality. Payers should use coverage with evidence development in areas where evidence is not yet sufficiently strong for optimal decision making.

  • 8. CMS (enforcing CLIA) and FDA should work jointly to develop a public, searchable database for use by clinicians, patients, and other stakeholders that displays all information about the regulatory status of genomic tests and devices.

Conclusion

Ensuring that patients receive accurate results from genomic testing is challenging given the field's complexity and rapid evolution. Federal regulators are already actively working to make certain that laboratories take the steps necessary to deliver high quality results, but room for improvement remains at many points along the process. How best to oversee and measure the quality of laboratory diagnostic tests, and how to meet the challenges of rare disease diagnostics, are pressing questions. Another area of concern is the informatics pipeline, including the need for strategies to validate the algorithms critical to sequence assembly and analysis. Although more data are needed in many areas to inform decision making, regulators can require the collection of only some of these data and will have to rely on the actions of others.

Process will be critical. Regulators will need to be inclusive to ensure that the needs of all stakeholders, including patients, are addressed. Agencies will need to pursue an array of strategies beyond formal regulation, which will require a higher level of transparency. Delivering high quality test results is only the first step: clinicians and patients need to understand the limitations of current genomic knowledge and tests, and to know how to use them, if we are truly to reap the benefits of this knowledge.

Acknowledgements

Preparation of this article was supported by National Institutes of Health (NIH) grants R01 HG008605 as part of the project on “LawSeq: Building a Sound Legal Foundation for Translating Genomics into Clinical Application” and RM1 HG009034. The content is solely the responsibility of the authors and does not necessarily represent the views of the funders. We particularly thank our colleagues Kenny Beckman and Susan Berry for their helpful comments in discussion. Research assistance was provided by Emily Sachs, Hailey Verano, Jillian Heaviside, Margo Wilkinson, Ahsin Azim, and Kate Hanson. Coordination of the group's work was provided by the incomparable Audrey Boyle. All views expressed are those of the authors and not necessarily the funders or others who provided support and comments.

Footnotes

Authors report support from the NIH during the conduct of the study. Additionally, Ms. Javitt reports grants from the NIH and from Hyman, Phelps & McNamara during the conduct of the study. Mr. Hall reports being a principal with Leavitt Partners, a health care policy and strategy group. He is also part of MR 3, a startup medical device company working to advance new therapies for heart failure and cardiac arrhythmias. Dr. Morgan reports other support from Novartis Institutes for Biomedical Research outside of the submitted work. Dr. Ossario reports personal fees from Roche-Genentech and Eli Lilly outside of the submitted work.

References

See National Institutes of Health-Department of Energy Working Group on Ethical, Legal & Social Implications of Human Genome Research, Task Force on Genetic Testing, Holtzman, N. A. and Watson, M. S. eds., Promoting Safe and Effective Genetic Testing in the United States (1997): at ch. 2, available at <https://www.genome.gov/10001733/genetic-testing-report> (last visited June 6, 2019); National Institutes of Health (NIH), Secretary's Advisory Committee on Genetic Testing, Enhancing the Oversight of Genetic Tests: Recommendations of the SACGT (2000): at 15 n.10, available at <https://osp.od.nih.gov/wp-content/uploads/2013/11/oversight_report.pdf> (last visited June 4, 2019) [hereinafter SACGT Recommendations]; NIH, Secretary's Advisory Committee on Genetics, Health, and Society, A Roadmap for the Integration of Genetics and Genomics into Health and Society (2004): at 34, available at <https://osp.od.nih.gov/wp-content/uploads/2013/11/SACGHSPriorities.pdf> (last visited June 4, 2019); U.S. Department of Health and Human Services (DHHS), Secretary's Advisory Committee on Genetics, Health, & Society (SACGHS), U.S. System of Oversight of Genetic Testing (2008): at 65–67, available at <https://osp.od.nih.gov/wp-content/uploads/2013/11/SACGHS_oversight_report.pdf> (last visited June 6, 2019); DHHS, SACGHS, The Integration of Genetic Technologies Into Healthcare and Public Health: A Progress Report and Future Directions of the Secretary's Advisory Committee on Genetics, Health, and Society (2009), available at <https://osp.od.nih.gov/wp-content/uploads/2013/11/SACGHS%20Progress%20and%20Priorities%20Report%20to%20HHS%20Secretary%20Jan%202009.pdf> (last visited June 4, 2019); Pratt, V. and Leonard, D. G. B., Analytic Validity of Genomic Testing (Washington, DC: Institute of Medicine of the National Academies, 2015), available at <https://nam.edu/wp-content/uploads/2015/06/AnalyticValidityPersp.pdf> (last visited June 4, 2019); Kaul, K. L. et al., “Oversight of Genetic Testing: An Update,” The Journal of Molecular Diagnostics 3, no. 3 (2001): 85–91.Google Scholar
Van Driest, S. L. et al., “Association of Arrhythmia-Related Genetic Variants with Phenotypes Documented in Electronic Medical Records,” JAMA 315, no. 1 (2016): 4757.CrossRefGoogle Scholar
Burke, W. et al., “Improving Recommendations for Genomic Medicine: Building an Evolutionary Process from Clinical Practice Advisory Documents to Guidelines,” Genetics in Medicine 21, no. 11 (2019): 24312438.CrossRefGoogle Scholar
As of May, 2019, the Genetic Testing Registry (GTR) operated by the U.S. National Center for Biotechnology Information (NCBI) listed 59,576 genetic tests for 11,541 health conditions. The GTR relies on information that is voluntarily submitted by test providers; it does not purport to catalog all genetic tests currently used in healthcare and does not independently verify the accuracy of information submitted to it. Genetic Testing Registry, available at <https://www.ncbi.nlm.nih.gov/gtr> (last visited June 3, 2019).+(last+visited+June+3,+2019).>Google Scholar
Gornick, M. C. et al., “Oncologists’ Use of Genomic Sequencing Data to Inform Clinical Management,” JCO Precision Oncology, DOI: 10.1200/PO.17.00122 (published online February 21, 2018); Hertz, D. L. and McLeod, H. L., “Integrated Patient and Tumor Genetic Testing for Individualized Cancer Therapy,” Clinical Pharmacology and Therapeutics 99, no. 2 (2016): 143146.Google Scholar
See Kalia, S. S. et al., “Recommendations for Reporting of Secondary Findings in Clinical Exome and Genome Sequencing, 2016 Update (ACMG SF v2.0): A Policy Statement of the American College of Medical Genetics and Genomics,” Genetics in Medicine 19, no. 2 (2017): 249255; CORRIGENDUM: Recommendations for Reporting of Secondary Findings in Clinical Exome and Genome Sequencing, 2016 Update (ACMG SF v2.0): A Policy Statement of the American College of Medical Genetics and Genomics,” Genetics in Medicine 19, no. 4 (2017): 484.CrossRefGoogle Scholar
Collins, F. S., The Language of Life: DNA and the Revolution in Personalized Medicine (New York: Harper Collins, 2010): at 208; Church, G., “How We Benefit from Getting Our Genomes Sequenced,” Medium, August 29, 2018, available at <> (last visited June 4, 2019).Google Scholar
Various regulatory agencies, professional societies, and standard-setting organization such as the Food and Drug Administration, the Centers for Medicare & Medicaid Services, the Centers for Disease Control and Prevention, the American College of Medical Genetics and Genomics, the College of American Pathologists, and the Association of Molecular Pathologists use variations on the definitions of the following terms. See, e.g., ACMG Board of Directors, “Clinical Utility of Genetic and Genomic Services: A Position Statement of the American College of Medical Genetics and Genomics,” Genetics in Medicine 17, no. 6 (2015): 505507; FDA, Considerations for Design, Development, and Analytical Validation of Next Generation Sequencing (NGS) - Based In Vitro Diagnostics (IVDs) Intended to Aid in the Diagnosis of Suspected Germ-line Diseases: Guidance for Stakeholders and Food and Drug Administration Staff(2018): at 13–19, available at <https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-design-development-and-analytical-validation-next-generation-sequencing-ngs-based> [hereinafter FDA, Considerations for NGS] (last visited October 22, 2019); FDA, Use of Public Human Genetic Variant Databases to Support Clinical Validity for Genetic and Genomic-Based In Vitro Diagnostics, Guidance for Stakeholders and Food and Drug Administration Staff(2018), available at <https://www.fda.gov/downloads/MedicalDevices/DeviceRegulationand-Guidance/GuidanceDocuments/ucm509837.pdf> [hereinafter FDA, Use of Public Databases] (last visited October 22, 2019); Joseph, L. et al., “The Spectrum of Clinical Utilities in Molecular Pathology Testing Procedures for Inherited Conditions and Cancer: A Report of the Association for Molecular Pathology,” Journal of Molecular Diagnostics 18, no. 5 (2016): 605–619, at Table 1; Micheel, C. M. et al., Evolution of Translational Omics (Washington, DC: National Academies Press, 2012): at 327, Part 3, Part 4; Strande, N. T. et al., “Evaluating the Clinical Validity of Gene-Disease Associations: An Evidence-Based Framework Developed by the Clinical Genome Resource,” American Journal of Human Genetics 100, no. 6 (2017): 895–906; Sung, F. et al., Agency for Healthcare Research and Quality, Quality, Regulation and Clinical Utility of Laboratory-Developed Molecular Tests: Technology Assessment Report (2010): at 11, 16, available at <https://www.cms.gov/Medicare/Coverage/DeterminationProcess/downloads/id72TA.pdf> (last updated October 6, 2010) (last visited June 7, 2019); Teutsch, S. M. et al., “The Evaluation of Genomic Applications in Practice and Prevention (EGAPP) Initiative: Methods of the EGAPP Working Group,” Genetics in Medicine 11, no. 1 (2009): 3–14, available at <https://www.ncbi.nlm.nih.gov/pubmed/18813139> (last visited June 7, 2019). The article does not seek to resolve these differences, but rather to utilize general definitions in order to guide the discussion.Google Scholar
See SACGT Recommendations, supra note 1 (explaining that analytical validity is an indicator of how well a test measures the property or characteristic it is intended to measure and addresses such matters as the test's accuracy, rate of false positives and negatives, and reliability in the sense of repeatedly getting the same result). See, e.g., Burke, W., “Clinical Validity and Clinical Utility,” Current Protocols in Human Genetics 81 (2014): 9.15.1–9.15.8.CrossRefGoogle Scholar
In previous rubrics for analysis of quality assessment, analytic verification was considered within analytic validity. See, e.g., Rehm, H. L. et al., “ACMG Clinical Laboratory Standards for Next-Generation Sequencing,” Genetics in Medicine 15, no. 9 (2013): 733747.Google Scholar
It is important to note that the term “validity” is sometimes used in different ways by different entities or regulatory structures. For example, some in the genetics community consider analytic verification as part of analytic validity, but the two concepts are increasingly viewed separately because they are regulated differently. See Lathrop, J. T., FDA, “Analytical Validation and Points for Discussion,” available at <https://www.fda.gov/media/88823/download> (last visited June 3, 2019). Most commonly, analytical verification refers to the correct performance of the test while analytical validity refers to the ability of the test, if performed properly, to detect, identify, calculate, or analyze the presence or absent of a particular gene.+(last+visited+June+3,+2019).+Most+commonly,+analytical+verification+refers+to+the+correct+performance+of+the+test+while+analytical+validity+refers+to+the+ability+of+the+test,+if+performed+properly,+to+detect,+identify,+calculate,+or+analyze+the+presence+or+absent+of+a+particular+gene.>Google Scholar
See SACGT Recommendations, supra note 1, at 15 n.11 (explaining that clinical validity refers to the accuracy with which a test predicts the presence or absence of a clinical condition or predisposition, addressing whether there is a strong and well validated association between having a particular gene variant and having a particular health condition, and asking whether knowing that a person has the gene variant offers meaningful insight into the person's health or reproductive risks); see also Fabsitz, R. R. et al., “Ethical and Practical Guidelines for Reporting Genetic Research Results to Study Participants: Updated Guidelines from a National Heart, Lung and Blood Institute Working Group,” Circulation: Cardiovascular Genetics 3, no. 6 (2010): 574580, at 575 (expressing this concept by stating that a test result has an “established” meaning).Google Scholar
See Burke, supra note 10.Google Scholar
See Fabsitz et al., supra note 13.Google Scholar
See ACMG, supra note 9.Google Scholar
See Terry, S. F., “The Tension Between Policy and Practice in Returning Research Results and Incidental Findings in Genomic Biobank Research,” Minnesota Journal of Law, Science & Technology 13, no. 2 (2012): 693712, at 710-11 (discussing the emerging concept of “personal utility” and distinguishing it from clinical utility); see also Wolf, S. M. et al., “Managing Incidental Findings in Human Subjects Research: Analysis and Recommendations,” Journal of Law, Medicine & Ethics 36, no. 2 (2008): 219–248, at 219, 231 n.80 (noting debate about what constitutes “clinical utility,” with some definitions focusing narrowly on health outcomes while others note that a result may have utility if it is important to the individuals and families involved).Google Scholar
Fabsitz et al., supra note 13, at 578 (noting that some members of the NHLBI working group dissented from its recommendation that investigators “may choose” to disclose “results related to reproductive risks, personal meaning or utility, or health risks” subject to various conditions).Google Scholar
See SACGT Recommendations, supra note 1.Google Scholar
See, e.g., FDA, Discussion Paper from Public Workshop-Standards Based Approach to Analytical Performance Evaluation of Next Generation Sequencing In Vitro Diagnostic Tests, Developing Analytical Standards for NGS Testing (November 12, 2015), available at <http://wayback.archive-it.org/7993/20170111165836/ http://www.fda.gov/down-loads/MedicalDevices/NewsEvents/WorkshopsConferences/UCM468521.pdf> (last visited June 4, 2019).+(last+visited+June+4,+2019).>Google Scholar
Id. at 5, Fig. 1 (citing FDA, Design Control Guidance for Medical Devices, and distinguishing analytical validation from analytical verification).Google Scholar
Celesti, F. et al., “Why Deep Learning Is Changing the Way to Approach NGS Data Processing: A Review,” IEEE Review in Biomedical Engineering 11 (2018): 6876; Goldfeder, R. L. et al., “Human Genome Sequencing at the Population Scale: A Primer on High-Throughput DNA Sequencing and Analysis,” American Journal of Epidemiology 186, no. 8 (2017): 1000–1009; Moorthie, S. et al., “Informatics and Clinical Genome Sequencing: Opening the Black Box,” Genetics in Medicine 15, no. 3 (2013): 165–171; see also FDA, Transcript of Workshop, Standards Based Approach to Analytical Performance Evaluation of Next Generation Sequencing in Vitro Diagnostic Tests (2015): at 228, available at <http://wayback.archive-it.org/7993/20170113000252/ http://www.fda.gov/downloads/MedicalDevices/NewsEvents/WorkshopsConferences/UCM478417.pdf> (remarks of Kevin Jacobs) (last visited June 4, 2019).CrossRefGoogle Scholar
See Celesti et al., supra note 24; Goldfeder, supra note 24; Moorthie et al., supra note 24.Google Scholar
Hsi-Yang, F. M. et al., “Efficient Storage of High Throughput DNA Sequencing Data Using Reference-Based Compression,” Genome Research 21, no. 5 (2011): 734740.Google Scholar
See Celesti et al., supra note 24; Goldfeder, supra note 24; Moorthie et al., supra note 24.Google Scholar
See FDA, supra note 24.Google Scholar
FDA, Considerations for NGS, supra note 9; see also FDA, General Principles of Software Validation; Final Guidance for Industry and FDA Staff(2002), available at <https://www.fda.gov/regulatory-information/search-fda-guidance-documents/general-principles-software-validation> (last visited June 4, 2019); FDA, supra note 24, at p. 226-41.+(last+visited+June+4,+2019);+FDA,+supra+note+24,+at+p.+226-41.>Google Scholar
Vollmer, S. et al., “Machine Learning and AI Research for Patient Benefit: 20 Critical Questions on Transparency, Replicability, Ethics and Effectiveness” (2018), available at <https://arxiv.org/abs/1812.10404> (last visited June 4, 2019); Wiens, J. et al., “Do No Harm – A Roadmap for Responsible ML for Healthcare,” Nature Medicine 25, no. 9 (2019): 13371340; FDA, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Discussion Paper and Request for Feedback (2019), available at <https://www.fda.gov/downloads/MedicalDevices/DigitalHealth/Softwareasa-MedicalDevice/UCM635052.pdf> [hereinafter FDA Artificial Intelligence/Machine Learning] (last visited June 4, 2019). Beyond the healthcare context, several organizations have begun working on responsible implementation of machine learning or “AI” in society. See, e.g., Schrag, D., “Ensuring Responsible Use of Artificial Intelligence,” Belfer Center for Science and International Affairs, Harvard Kennedy School Belfer Center Newsletter (2018), available at <https://www.belfercenter.org/publication/ensuring-responsible-use-artificial-intelligence> (last visited June 4, 2019); The European Commission High Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (2019), available at <https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai> (last visited June 4, 2019).Google Scholar
Moorthie et al., supra note 24; Pant, S. et al., “Navigating the Rapids: The Development of Regulated Next-Generation Sequencing-Based Clinical Trial Assays and Companion Diagnostics,” Frontiers in Oncology 4 (2014): 120, available at <https://www.frontiersin.org/articles/10.3389/fonc.2014.00078/full> (last visited June 6, 2019).CrossRefGoogle Scholar
See Celesti et al., supra note 24; Goldfeder et al., supra note 24; Moorthie et al., supra note 24; Wheeler, M. M. et al., “Genomic Characterization of the RH Locus Detects Complex and Novel Structural Variation in Multiethnic Cohorts,” Genetics in Medicine 21, no. 2 (2018): 477486 (describing methodology to genetically characterize the RH locus through novel methods for doing alignment so that their algorithms could recognize and report structural variation).CrossRefGoogle Scholar
Pratt and Leonard, supra note 1.Google Scholar
See, e.g., The Jackson Laboratory, “Risks and Benefits of Expanded Genetic Testing,” available at <https://www.jax.org/education-and-learning/clinical-and-continuing-education/breast-cancer-awareness-genetic-testing-strategy#> (last visited June 4, 2019); Scheuner, M. T., “Webinars for Health Insurers and Payers: Understanding Genetic Testing, Selecting the Right Genetic Test” (December 8, 2015), available at <https://www.genome.gov/Multimedia/Slides/WGT/Scheuner.pdf> (last visited June 4, 2019).+(last+visited+June+4,+2019);+Scheuner,+M.+T.,+“Webinars+for+Health+Insurers+and+Payers:+Understanding+Genetic+Testing,+Selecting+the+Right+Genetic+Test”+(December+8,+2015),+available+at++(last+visited+June+4,+2019).>Google Scholar
See, e.g., Zook, J. M. et al., “An Open Resource for Accurately Benchmarking Small Variant and Reference Calls,” Nature Biotechnology (2019), available at <https://www.nature.com/articles/s41587-019-0074-6> (last visited June 4, 2019); Hardwick, S. A. et al., “Reference Standards for Next Generation Sequencing,” Nature Reviews Genetics 18, no. 8 (2017): 473484.Google Scholar
See Vollmer et al., supra note 30; Weins et al., supra note 30.Google Scholar
See, e.g., Dewey, F. E. et al., “Clinical Interpretation and Implications of Whole-Genome Sequencing,” JAMA 311, no. 10 (2014): 10351044.CrossRefGoogle Scholar
Nykamp, K. et al., “Sherloc: A Comprehensive Refinement of the ACMG-AMP Variant Classification Criteria,” Genetics in Medicine 19, no. 10 (2017): 11051117.CrossRefGoogle Scholar
Pecker, L. H. and Little, J., “Clinical Manifestations of Sickle Cell Disease Across the Lifespan,” in Sickle Cell Disease and Hematopoietic Stem Cell Transplantation, Meier, E. R., Abraham, A., and Fasano, R. M. eds. (New York: Springer, 2018): 339; Beutler, E., “Discrepancies Between Genotype and Phenotype in Hematology: An Important Frontier,” Blood 98, no. 9 (2001): 2597–2602.CrossRefGoogle Scholar
Miko, I., “Phenotype Variability: Penetrance and Expressivity,” Nature Education 1, no. 1 (2008): 137.Google Scholar
Van Driest et al., supra note 2.Google Scholar
SoRelle, J. A. et al., “Clinical Utility of Reinterpreting Previously Reported Genomic Epilepsy Test Results for Pediatric Patients,” JAMA Pediatrics 173, no. 1 (2018): e182302, available at <https://jamanetwork.com/journals/jamapediatrics/fullarticle/10.1001/jamapediatrics.2018.2302> (last visited June 4, 2019); Taber, J. M. et al., “Reactions to Clinical Reinterpretation of a Gene Variant by Participants in a Sequencing Study,” Genetics in Medicine 20, no. 3 (2018): 337–345; David, K. L. et al., “Patient Re-contact After Revision of Genomic Test Results: Points to Consider — A Statement of the American College of Medical Genetics and Genomics (ACMG),” Genetics in Medicine 21, no. 4 (2019): 769–771.CrossRefGoogle Scholar
Coovadia, A., “Lost in Interpretation: Evidence of Sequence Variant Database Errors,” Journal of the Association of Genetic Technologists 43, no. 1 (2017): 2328.Google Scholar
Popejoy, A. N. and Fullerton, S. M., “Genomics Is Failing on Diversity,” Nature 538, no. 7624 (2016): 161164; Manrai, A. K. et al., “Genetic Misdiagnoses and the Potential for Health Disparities,” New England Journal of Medicine 375 (2016): 655–665.CrossRefGoogle Scholar
Kohane, I. S. et al., “Taxonomizing, Sizing, and Overcoming the Incidentalome,” Genetics in Medicine 14, no. 4 (2012): 399404.CrossRefGoogle Scholar
Hosseini, S. M. et al., “Reappraisal of Reported Genes for Sudden Arrhythmic Death: Evidence-Based Evaluation of Gene Validity for Brugada Syndrome,” Circulation 138, no. 12 (2018): 11951205.CrossRefGoogle Scholar
Vihinen, M., “How to Define Pathogenicity, Health, and Disease?” Human Mutation: Variation, Informatics, and Disease 38, no. 2 (2017): 129136.CrossRefGoogle Scholar
Natarajan, P. et al., “Polygenic Risk Score Identifies Subgroup with Higher Burden of Atherosclerosis and Greater Relative Benefit from Statin Therapy in the Primary Prevention Setting,” Circulation 135, no. 22 (2017): 20912101.CrossRefGoogle Scholar
Ray, T., “NorthShore, Ambry Make Prostate Cancer Polygenic Risk Score Available Nationally,” GenomeWeb, September 28, 2018, available at <https://www.genomeweb.com/business-news/northshore-ambry-make-prostate-cancer-polygenic-risk-score-available-nationally#.XPa1o9NKhAY> (last visited June 4, 2019).+(last+visited+June+4,+2019).>Google Scholar
See Hunter, D. J. and Drazen, J. M., “Has the Genome Granted Our Wish Yet?” New England Journal of Medicine 380, no. 25 (2019): 23912393, available at <https://www.nejm.org/doi/full/10.1056/NEJMp1904511?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed> (last visited June 4, 2019); Rosenberg, N. et al., “Interpreting Polygenic Scores, Polygenic Adaptation, and Human Phenotypic Differences,” Evolution, Medicine, and Public Health 2010, no. 1 (2019): 26–34; Curtis, D., “Clinical Relevance of Genome-Wide Polygenic Risk Score May Be Less than Claimed,” Annals of Human Genetics 83, no. 4 (2019): 274–277.CrossRefGoogle Scholar
See NIH, All of Us, available at <https://allofus.nih.gov> (last visited June 4, 2019).
Ma, H. et al., “Pathology and Genetics of Hereditary Colorectal Cancer,” Pathology 50, no. 1 (2018): 49–59.
Splinter, K. et al., “Effect of Genetic Diagnosis on Patients with Previously Undiagnosed Disease,” New England Journal of Medicine 379 (2018): 2131–2139 (finding that 98 of 382 patients received a diagnosis, and that more than half of the diagnoses led to changes in care).
Abul-Husn, N. S. et al., “Genetic Identification of Familial Hypercholesterolemia within a Single U.S. Health Care System,” Science 354, no. 6319 (2016): aaf7000-1 to aaf7000-7.
The eMERGE Consortium, Gibbs, R. A., and Rehm, H. L., “Harmonizing Clinical Sequencing and Interpretation for the eMERGE III Network,” American Journal of Human Genetics 105, no. 3 (2019): 588–605; see also Reuter, M. S. et al., “The Personal Genome Project Canada: Findings from Whole Genome Sequences of the Inaugural 56 Participants,” CMAJ 190, no. 5 (2018): E126–E136 (finding that 25% had pathogenic variants in disease-causing genes, and that all had recessive or pharmacogenomic variants).
Elmore, J. G., “Solving the Problem of Overdiagnosis,” New England Journal of Medicine 375 (2016): 1483–1486; Welch, H. G., “Cancer Screening, Overdiagnosis, and Regulatory Capture,” JAMA Internal Medicine 177, no. 7 (2017): 915–916.
Khoury, M. J. et al., “The Scientific Foundation for Personal Genomics: Recommendations from a National Institutes of Health–Centers for Disease Control and Prevention Multidisciplinary Workshop,” Genetics in Medicine 11, no. 8 (2010): 559–567; Meagher, K. M. and Berg, J. S., “Too Much of a Good Thing? Overdiagnosis, or Overestimating Risk in Preventive Genomic Screening,” Personalized Medicine 15, no. 5 (2018): 343–346; Narod, S., Sopik, V., and Cybulski, C., “Testing Ashkenazi Jewish Women for Mutations Predisposing to Breast Cancer in Genes Other than BRCA1 and BRCA2,” JAMA Oncology 4, no. 7 (2018): 1012; Vassy, J. L. et al., “The Impact of Whole-Genome Sequencing on the Primary Care and Outcomes of Healthy Adult Patients: A Pilot Randomized Trial,” Annals of Internal Medicine 167, no. 3 (2017): 159–169.
See Burke, supra note 10.
Arora, N. S. et al., “Communication Challenges for Nongeneticist Physicians Relaying Clinical Genomic Results,” Personalized Medicine 14, no. 5 (2016): 423–431.
Christensen, K. D. et al., “Are Physicians Prepared for Whole Genome Sequencing? A Qualitative Analysis,” Clinical Genetics 89, no. 2 (2016): 228–234; Pet, D. B. et al., “Physicians’ Perspectives on Receiving Unsolicited Genomic Results,” Genetics in Medicine 21, no. 2 (2018): 311–318.
Vassy et al., supra note 57.
Robinson, J. O. et al., “Participants and Study Decliners’ Perspectives about the Risks of Participating in a Clinical Trial of Whole Genome Sequencing,” Journal of Empirical Research on Human Research Ethics 11, no. 1 (2016): 21–30 (finding that 173 of 514 declined to participate in MedSeq); Genetti, C. A. et al., “Parental Interest in Genomic Sequencing of Newborns: Enrollment Experience from the BabySeq Project,” Genetics in Medicine 21, no. 3 (2019): 622–630 (explaining that fewer than 7% of parents enrolled their healthy newborns in sequencing research; decliners cited a variety of reasons).
Roberts, J. S. et al., “Patient Understanding of, Satisfaction with, and Perceived Utility of Whole-Genome Sequencing: Findings from the MedSeq Project,” Genetics in Medicine 20, no. 9 (2018): 1069–1076.
Sanderson, S. C. et al., “Psychological and Behavioural Impact of Returning Personal Results from Whole-Genome Sequencing: The HealthSeq Project,” European Journal of Human Genetics 25, no. 3 (2017): 280–292.
See Centers for Disease Control and Prevention, Clinical Laboratory Improvement Amendments (CLIA), available at <https://www.cdc.gov/clia/> (last visited June 8, 2019) (noting that “CDC, in partnership with CMS and FDA, supports the CLIA program and clinical laboratory quality”).
NIH, “How Can Consumers Be Sure a Genetic Test Is Valid and Useful?” available at <https://ghr.nlm.nih.gov/primer/testing/validtest> (last visited March 31, 2019).
21 U.S.C. § 360c (2019).
42 U.S.C. § 263a (2019).
42 U.S.C. § 263a(a) (2019).
CMS, Clinical Laboratory Improvement Amendments State Survey Agency Contacts, available at <https://www.cms.gov/Regulations-and-Guidance/Legislation/CLIA/Downloads/CLIASA.pdf> (last visited June 4, 2019).
CMS, List of Approved Accreditation Organizations under the Clinical Laboratory Improvement Amendments (CLIA), available at <https://www.cms.gov/Regulations-and-Guidance/Legislation/CLIA/Downloads/AOList.pdf> (last visited June 4, 2019).
42 U.S.C. § 263a(p)(2).
CMS, List of Exempt States under the Clinical Laboratory Improvement Amendments (CLIA), available at <https://www.cms.gov/Regulations-and-Guidance/Legislation/CLIA/Downloads/ExemptStatesList.pdf> (last visited June 4, 2019).
42 U.S.C. § 263a(f)(1)(A) (2019).
42 U.S.C. § 263a(f)(1)(D) (2019).
42 C.F.R. § 493.1445 (2019).
42 C.F.R. § 493.1236 (2019).
42 C.F.R. § 493.1253(b)(2) (2019).
Id.; see also CMS, “CLIA Overview,” available at <https://www.cms.gov/regulations-and-guidance/legislation/clia/downloads/ldt-and-clia_faqs.pdf> (last visited June 4, 2019); Aziz, N. et al., “College of American Pathologists’ Laboratory Standards for Next-Generation Sequencing Clinical Tests,” Archives of Pathology & Laboratory Medicine 139, no. 4 (2015): 481–493.
CMS, supra note 77.
Id. at Appendix C – Survey Procedures and Interpretive Guidelines for Laboratories and Laboratory Services.
See Government Accountability Office (GAO), Clinical Lab Quality – CMS and Survey Organization Oversight Should Be Strengthened (June 2006), available at <https://www.gao.gov/assets/260/250504.pdf> (last visited June 4, 2019).
DHHS, supra note 1, at 30.
Hudson, K. L., “Petition Requesting a Genetic Testing Specialty and Standards for Proficiency Testing,” Public Citizen (September 26, 2006), available at <https://www.citizen.org/article/petition-requesting-a-genetic-testing-specialty-and-standards-for-proficiency-testing> (last visited June 4, 2019).
See, e.g., American Association for Clinical Chemistry (AACC), “Oversight of Laboratory Developed Tests: Position Statement,” April 20, 2017, available at <https://www.aacc.org/health-and-science-policy/advocacy/position-statements/2017/oversight-of-laboratory-developed-tests> (last visited June 4, 2019); Association for Molecular Pathology, “Association for Molecular Pathology Position Statement: Oversight of Laboratory Developed Tests,” January 2010, available at <https://www.amp.org/AMP/assets/File/resources/LDTOversightPositionStatement.pdf> (last visited June 4, 2019); CMS, Laboratory Director Responsibilities (August 2006), available at <https://www.cms.gov/Regulations-and-Guidance/Legislation/CLIA/Downloads/brochure7.pdf> (last visited June 4, 2019).
See CMS's Authority LDTs, supra note 67 (noting that “unlike the FDA regulatory scheme, CMS’ CLIA program does not address the clinical validity of any test”).
42 C.F.R. § 493.1445(e)(8) (2019).
At a November 2017 meeting of the Clinical Laboratory Improvement Advisory Committee (CLIAC), a CDC advisory body to the CMS CLIA program, a CMS official acknowledged the existence of “non-traditional multiple site testing” models, involving different entities separately performing the wet and dry functions of a test, and stated that the entity performing the dry functions of the process “may or may not be a CLIA laboratory.” See CDC, Clinical Laboratory Improvement Advisory Committee, “CLIAC Meetings,” available at <https://www.cdc.gov/cliac/meeting.html> (last updated March 21, 2019) (last visited June 4, 2019).
21 U.S.C. § 321(h) (2019).
21 U.S.C. § 360c (2019); see also FDA, General Controls for Medical Devices, available at <https://www.fda.gov/medical-devices/regulatory-controls/general-controls-medical-devices> (last updated March 22, 2018) (last visited June 4, 2019).
21 U.S.C. § 360e(c)(1)(A) (2019).
Institute of Medicine, Board on Population Health and Public Health Practice, Committee on the Public Health, Medical Devices and the Public's Health: The FDA 510(k) Clearance Process at 35 Years (Washington, DC: National Academies Press, 2011), available at <https://www.nap.edu/catalog/13150/medical-devices-and-the-publics-health-thefda-510k-clearance> (last visited June 4, 2019). See id. at 51 (noting that only 15% of Class II (moderate risk) devices actually have special controls in place).
21 U.S.C. § 360c(a)(1)(B) (2019).
21 U.S.C. § 360e (2019).
21 C.F.R. § 862.2265 (2019).
FDA, Decision Summary, Evaluation of Automatic Class III Designation for MiSeqDx Platform (February 24, 2017), available at <https://www.accessdata.fda.gov/cdrh_docs/reviews/DEN130011.pdf> (last visited June 4, 2019).
In November of 2013 the FDA cleared the first tests using NGS — the MiSeqDx Cystic Fibrosis Clinical Sequencing Assay and the MiSeqDx Cystic Fibrosis 139-Variant Assay. See FDA, 510(k) Premarket Notification Database, “Illumina, Inc., System, Cystic Fibrosis Transmembrane Conductance Regulator Gene, Variant Gene Sequence Detection” (November 19, 2013), available at <https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfPMN/pmn.cfm?ID=K132750> (decision on the MiSeqDx Cystic Fibrosis Clinical Sequencing Assay) (last visited June 4, 2019); FDA, 510(k) Premarket Notification Database, “Illumina, Inc., System, Cystic Fibrosis Transmembrane Conductance Regulator Gene, Mutations & Variants Panel Sequencing Detection” (November 19, 2013), available at <https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfPMN/pmn.cfm?ID=K124006> (decision on the MiSeqDx Cystic Fibrosis 139-Variant Assay) (last visited June 4, 2019). These tests comprise the Illumina MiSeqDx sequencing platform, accompanying reagent kits (MiSeqDx Universal Kit 1.0), and software components. On December 23, 2014, the FDA announced the recall of both the CF Clinical Sequencing Assay and the MiSeqDx Universal Kit 1.0 because of a “software limitation.” FDA, Class 2 Device Recall MiSeqDx Universal Kit 1.0 (November 13, 2014), available at <https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfRES/res.cfm?id=131490> (last visited June 4, 2019). Apparently, when running the sequencing assay with the reagent kit supplied by Illumina, the MiSeq Reporter software was unable to call certain deletions it was intended to detect.
See FDA, Framework for Regulatory Oversight of Laboratory Developed Tests (LDTs): Draft Guidance (2014): 5–6, available at <https://www.fda.gov/media/89841/download> (last visited June 9, 2019) (explaining that lab-developed tests that incorporate components from a test manufacturer do not, strictly speaking, meet FDA's definition of an LDT, but that many laboratories offer such tests as LDTs and FDA has traditionally treated them as though they were LDTs).
FDA reaffirmed its position on jurisdiction most recently in a Warning Letter to a laboratory performing pharmacogenetic testing. FDA, Center for Devices and Radiological Health, Warning Letter: Inova Genomics Laboratory (2019), available at <www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm634988.htm> (“FDA has not created a legal ‘carve-out’ for LDTs such that they are not required to comply with the requirements under the Act that otherwise would apply. FDA has never established such an exemption. As a matter of practice, FDA, however, has exercised enforcement discretion for LDTs, which means that FDA has generally not enforced the premarket review and other FDA legal requirements that do apply to LDTs. Although FDA has generally exercised enforcement discretion for LDTs, the Agency always retains discretion to take action when appropriate, such as when it is appropriate to address significant public health concerns.”) (last visited June 4, 2019).
“Foundation Medicine Gains FDA Approval, CMS Coverage Proposal for NGS Cancer Profiling Test,” GenomeWeb, November 30, 2017, available at <https://www.genomeweb.com/molecular-diagnostics/foundation-medicine-gains-fda-approval-cms-coverage-proposal-ngs-cancer> (last visited June 4, 2019); FDA, Decision Summary, Evaluation of Automatic Class III Designation for MSK-IMPACT (Integrated Mutation Profiling of Actionable Cancer Targets), available at <https://www.accessdata.fda.gov/cdrh_docs/reviews/DEN170058.pdf> (last visited June 4, 2019); see also 510(k) Premarket Notification, Illumina MiSeqDx Cystic Fibrosis Clinical Sequencing Assay, available at <https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfPMN/pmn.cfm?ID=K132750> (last visited June 9, 2019).
FDA, “The FDA Warns Against the Use of Many Genetic Tests with Unapproved Claims to Predict Patient Response to Specific Medications,” November 1, 2018, available at <https://www.fda.gov/MedicalDevices/Safety/AlertsandNotices/ucm624725.htm> (last visited June 4, 2019).
See FDA, Considerations for NGS, supra note 9.
Id. The guidance lists a number of excluded tests, including NGS-based tests intended to aid in the diagnosis of microbial infection, cell-free DNA testing, DTC uses, fetal testing, and others. Id.
See FDA, supra note 107.
FDA, Use of Public Databases, supra note 9.
Clinical Genome Resource, “FDA Recognizes ClinGen Assertions in ClinVar - Frequently Asked Questions,” February 1, 2019, available at <https://www.clinicalgenome.org/docs/fda-recognizes-clingen-assertions-in-clinvar-frequently-asked-questions> (last visited June 4, 2019).
FDA, “Framework for Regulatory Oversight of Laboratory Developed Tests; Draft Guidance for Industry, Food and Drug Administration Staff, and Clinical Laboratories; Availability,” Federal Register 79, no. 192 (2014): 59776–59779, available at <https://www.govinfo.gov/app/details/FR-2014-10-03/2014-23596> (last visited June 4, 2019).
See FDA, “Discussion Paper on Laboratory Developed Tests (LDTs),” January 13, 2017, available at <https://www.fda.gov/media/102367/download> (announcing that FDA did not intend to finalize its draft LDT guidances because public comments revealed more complexity and stakeholder resistance than FDA initially anticipated).
See International Medical Device Regulators Forum, IMDRF SaMD Working Group, Software as a Medical Device (SaMD): Key Definitions, December 9, 2013, available at <http://www.imdrf.org/docs/imdrf/final/technical/imdrf-tech-131209-samd-key-definitions-140901.pdf> (distinguishing software in a device from software as a device) (last visited June 4, 2019).
Id.; see also FDA, “What Are Examples of Software as a Medical Device?” available at <https://www.fda.gov/medicaldevices/digitalhealth/softwareasamedicaldevice/ucm587924.htm> (last updated December 6, 2017) (last visited June 4, 2019).
See FDA, Software as a Medical Device (SaMD): Clinical Evaluation (December 8, 2017): at 11, available at <https://www.fda.gov/media/100714/download> (last visited October 22, 2019).
International Medical Device Regulators Forum, available at <http://www.imdrf.org> (last visited June 4, 2019).
FDA, supra note 122, at 9.
See FDA, “Digital Health Software Precertification (Pre-Cert) Program,” available at <https://www.fda.gov/MedicalDevices/DigitalHealth/UCM567265> (last updated February 15, 2018) (last visited June 4, 2019); see also FDA, Developing a Software Precertification Program: A Working Model v.0.2, June 18, 2018, available at <https://www.fda.gov/media/113802/download> (last visited June 4, 2019); FDA, Developing a Software Precertification Program: A Working Model v. 1.0, January 2019, available at <https://www.fda.gov/media/119722/download> (last visited June 4, 2019).
FDA, Digital Health Innovation Action Plan, supra note 126, at 2.
See generally id.
See FDA, “Precertification (Pre-Cert) Pilot Program: Frequently Asked Questions,” available at <https://www.fda.gov/medical-devices/digital-health-software-precertificationpre-cert-program/precertification-pre-cert-pilot-program-frequently-asked-questions> (noting that the precertification pathway is not currently available) (last updated May 24, 2019) (last visited June 4, 2019).
See FDA, “Software Precertification Program: Regulatory Framework for Conducting the Pilot Program within Current Authorities,” January 2019, available at <https://www.fda.gov/media/119724/download> (last visited June 4, 2019).
FDA, Considerations for NGS, supra note 9.
Id. at 9-10.
See DHHS, FDA, Clinical Decision Support Software: Draft Guidance for Industry and Food and Drug Administration Staff, September 27, 2019, available at <https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software> (last visited October 20, 2019) (providing FDA's current draft guidance on clinical decision support software).
Id. at 27.
Evans, B. J., “The Streetlight Effect: Regulating Genomics Where the Light Is,” Journal of Law, Medicine & Ethics 48, no. 1 (2020): 105–118.
Nex-StoCT comprises CDC researchers, representatives from clinical laboratories that have already developed sequencing-based tests, industry representatives, and representatives from CMS and CAP. See Heger, M., “CDC Workgroup Publishes Guidelines for NGS-Based Clinical Tests,” GenomeWeb, November 28, 2012, available at <https://www.genomeweb.com/sequencing/cdc-workgroup-publishes-guidelines-ngs-based-clinical-tests> (last visited June 4, 2019).
See Lubin, I. M. et al., “Principles and Recommendations for Standardizing the Use of Next-Generation Sequencing Variant Files in Clinical Settings,” Journal of Molecular Diagnostics 19, no. 3 (2017): 417–426; Gargis, A. S. et al., “Good Laboratory Practice for Clinical Next-Generation Sequencing Informatics Pipelines,” Nature Biotechnology 33, no. 7 (2015): 689–693; Gargis, A. S. et al., “Assuring the Quality of Next-Generation Sequencing in Clinical Laboratory Practice,” Nature Biotechnology 30, no. 11 (2012): 1033–1036.
See FDA, supra note 3 (noting that “FDA typically requires the test developer to establish that the variant identified and reported is clinically meaningful to any disease or condition the test is intended for use in diagnosing.”).
Federal Food, Drug, and Cosmetic Act § 1006 (1938) (codified as amended at 21 U.S.C. § 396).
See FDA, “Prostate Cancer: Symptoms, Tests, and Treatment,” available at <https://www.fda.gov/consumers/consumer-updates/prostate-cancer-symptoms-tests-and-treatment> (noting that the U.S. Preventive Services Task Force recommends against screening use of PSA testing in men over 70 due to lack of evidence of benefit and risks of overtreatment) (last visited June 4, 2019).
See id. at 20-21 (statement of David Litwack) (characterizing the lack of a well-defined intended use as the “big” problem that stymies traditional FDA oversight of genomic testing).
Burke, W. et al., “Recommendations for Returning Genomic Incidental Findings? We Need to Talk!” Genetics in Medicine 15, no. 11 (2013): 854–859.
FDA, supra note 117.
See CMS's Authority LDTs, supra note 67.
See, e.g., FDA, supra note 147, at 18-19 (statement of David Litwack) (explaining that FDA's review of genomic tests encompasses three things: analytical performance, clinical performance, and labeling).
Id. at 18-19.
See, e.g., FDA, Center for Devices and Radiological Health, supra note 104 (warning an LDT test provider that it was making unsupported claims of clinical utility (pharmacogenetic claims) in its test labeling, and that FDA had received no evidence to support those claims).
21 U.S.C. § 352(a)(1) (2019).
See Burke, supra notes 14-16 and accompanying text.
ACMG Board of Directors, supra note 9.
In a related area, the Federal Trade Commission has recently indicated its willingness to prosecute those who make claims about genetic testing kits that are not supported by “competent and reliable scientific evidence.” Jillson, E., “Selling Genetic Testing Kits? Read on.” FTC Business Blog, March 21, 2019, available at <https://www.ftc.gov/news-events/blogs/business-blog/2019/03/selling-genetic-testing-kits-read> (last visited June 4, 2019).
See generally Deverka, P. A. and Dreyfus, J. C., “Clinical Integration of Next Generation Sequencing: Coverage and Reimbursement Challenges,” Journal of Law, Medicine & Ethics 42, no. s1 (2014): 22–41 (discussing the roles that payers and Medicare Administrative Contractors have in assessing whether genomic tests have sufficient clinical utility to warrant coverage).
Phillips, K. A. et al., “Insurance Coverage for Genomic Tests,” Science 360, no. 6386 (2018): 278–279.
CMS, Decision Memo for Next Generation Sequencing (NGS) for Medicare Beneficiaries with Advanced Cancer (CAG-00450N) (March 16, 2018), available at <https://www.cms.gov/medicare-coverage-database/details/nca-decision-memo.aspx?NCAId=290&bc=AAAAAAAAACAA> (last visited June 4, 2019).
See, e.g., Palmetto GBA MolDX, available at <https://www.palmettogba.com/moldx> (last visited June 4, 2019).
Eisenberg, R. and Varmus, H., “Insurance for Broad Genomic Tests in Oncology,” Science 358, no. 6367 (2017): 1133–1134.
CMS, “Direct Access Testing (DAT) and the Clinical Laboratory Improvement Amendments (CLIA) Regulations,” available at <https://www.cms.gov/Regulations-and-Guidance/Legislation/CLIA/Downloads/directaccesstesting.pdf> (last visited June 4, 2019).
Schrijver, I. et al., “Opportunities and Challenges Associated with Clinical Diagnostic Genome Sequencing: A Report of the Association for Molecular Pathology,” Journal of Molecular Diagnostics 14, no. 6 (2012): 525–540.
Roy, S. et al., “Standards and Guidelines for Validating Next-Generation Sequencing Bioinformatics Pipelines: A Joint Recommendation of the Association for Molecular Pathology and the College of American Pathologists,” Journal of Molecular Diagnostics 20, no. 1 (2018): 4–27; Richards, S. et al., “Standards and Guidelines for the Interpretation of Sequence Variants: A Joint Consensus Recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology,” Genetics in Medicine 17, no. 5 (2015): 405–424; Kearney, H. M. et al., “American College of Medical Genetics Standards and Guidelines for Interpretation and Reporting of Postnatal Constitutional Copy Number Variants,” Genetics in Medicine 13, no. 7 (2011): 680–685.
Jennings, L. J. et al., “Guidelines for Validation of Next-Generation Sequencing-Based Oncology Panels: A Joint Consensus Recommendation of the Association for Molecular Pathology and College of American Pathologists,” Journal of Molecular Diagnostics 19, no. 3 (2017): 341–365; Li, M. M. et al., “Standards and Guidelines for the Interpretation and Reporting of Sequence Variants in Cancer: A Joint Consensus Recommendation of the Association for Molecular Pathology, American Society of Clinical Oncology, and College of American Pathologists,” Journal of Molecular Diagnostics 19, no. 1 (2017): 4–23.
Aziz et al., supra note 83.
College of American Pathologists, “Councils and Committees,” available at <https://www.cap.org/member-resources/councils-committees> (last visited June 4, 2019).
ACMG Board of Directors, “Laboratory and Clinical Genomic Data Sharing Is Crucial to Improving Genetic Health Care: A Position Statement of the American College of Medical Genetics and Genomics,” Genetics in Medicine 19, no. 7 (2017): 721–722.
ACMG Board of Directors, “Direct-To-Consumer Genetic Testing: A Revised Position Statement of the American College of Medical Genetics and Genomics,” Genetics in Medicine 18, no. 2 (2016): 207–208; David, K. L. et al., “Patient Re-Contact After Revision of Genomic Test Results: Points to Consider — A Statement of the American College of Medical Genetics and Genomics (ACMG),” Genetics in Medicine 21, no. 4 (2018): 769–771; Kalia et al., supra note 7; CORRIGENDUM, supra note 7.
See generally Burke et al., supra note 4.
See Bamberger, K. A., “Regulation as Delegation: Private Firms, Decision-making and Accountability in the Administrative State,” Duke Law Journal 56, no. 2 (2006): 377–468; Burris, S., Kempa, M., and Shearing, C., “Changes in Governance: A Cross-Disciplinary Review of Current Scholarship,” Akron Law Review 41, no. 1 (2008): 1–66; Dorf, M. C., “After Bureaucracy,” University of Chicago Law Review 71 (2004): 1245–1273; Freeman, J., “Collaborative Governance in the Administrative State,” UCLA Law Review 45, no. 1 (1997): 1–98; Lobel, O., “The Renew Deal: The Fall of Regulation and the Rise of Governance in Contemporary Legal Thought,” Minnesota Law Review 89, no. 2 (2004): 342–470; Trubek, L. G., “New Governance and Soft Law in Health Care Reform,” Indiana Health Law Review 3, no. 1 (2006): 139–170; Sabel, C. F., “Beyond Principal-Agent Governance: Experimentalist Organizations, Learning and Accountability,” in De Staat Van De Democratie: Democratie Voorbij De Staat 173 (2004), available at <http://www2.law.columbia.edu/sabel/papers/Sabel.definitief.doc> (last visited June 4, 2019). Some commentators have argued that post-New Deal, “command-and-control” (top-down) regulation went out of fashion by the mid-1990s, when President Bill Clinton announced that the era of big government had come to an end. See, e.g., Solomon, J. M., “Law and Governance in the 21st Century Regulatory State,” Texas Law Review 86, no. 4 (2008): 819–855, at 827-828.
Burris, S., Drahos, P., and Shearing, C., “Nodal Governance,” Australian Journal of Legal Philosophy 30 (2005): 30–58, available at <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=760928> (last visited June 4, 2019); Dorf, supra note 174; Ford, C., “New Governance in the Teeth of Human Frailty: Lessons from Financial Regulation,” Wisconsin Law Review 2010, no. 2 (2010): 441–488.
Sturm, S., “Gender Equity Regimes and the Architecture of Learning,” in Law and New Governance in the EU and the US (de Búrca, G. and Scott, J. eds., Oxford and Portland, Oregon: Hart Publishing, 2006): at 323.
See Chang, E., “Institutional Reform Shaming,” Penn State Law Review 120, no. 1 (2015): 53–108; Trubek, supra note 174.
Dorf, supra note 174, at 3 (“[T]hat implementation [of regulation] is inconsistent and that enforcement is at best sporadic are by now uncontroversial claims.”); Solomon, supra note 174.
Solomon, supra note 174.
FDA, “Digital Health Software Precertification (Pre-cert) Program,” available at <https://www.fda.gov/medical-devices/digital-health/digital-health-software-precertification-precert-program> (last visited June 4, 2019); FDA Artificial Intelligence/Machine Learning, supra note 30.
See, e.g., 21st Century Cures Act §§ 2031, 2054, 3011, 3001-3004, 3055, 3076.
Lee, J. A., “Can You Hear Me Now? Making Participatory Governance Work for the Poor,” Harvard Law & Policy Review 7, no. 2 (2013): 405–422, at 412-13.
Alexander, L. T., “Stakeholder Participation in New Governance: Lessons from Chicago's Public Housing Reform Experiment,” Georgetown Journal on Poverty Law & Policy 16, no. 1 (2009): 117–186; Ford, supra note 175; Lee, supra note 183; NeJaime, D., “When New Governance Fails,” Ohio State Law Journal 70, no. 2 (2009): 323–402.
Sabel, C. F. and Simon, W. H., “Epilogue: Accountability Without Sovereignty,” in Law and New Governance in the EU and the US (de Búrca, G. and Scott, J. eds., Oxford and Portland, Oregon: Hart Publishing, 2006): at 400, 407; Alexander, supra note 184; Simon, W. H., “New Governance Anxieties: A Deweyan Response,” Wisconsin Law Review 2010, no. 2 (2010): 727–736, at 731–34; Solomon, supra note 174.
FDA, “Collaborative Communities: Addressing Healthcare Challenges Together,” available at <https://www.fda.gov/about-fda/cdrh-strategic-priorities-and-updates/collaborative-communities-addressing-healthcare-challenges-together> (last updated February 28, 2019) (last visited June 4, 2019).+(last+updated+February+28,+2019)+(last+visited+June+4,+2019).>Google Scholar