
HEALTH TECHNOLOGY ASSESSMENT OF MEDICAL DEVICES IN EUROPE: PROCESSES, PRACTICES, AND METHODS

Published online by Cambridge University Press:  27 September 2016

Sabine Fuchs
Affiliation:
Department of Health Care Management, Berlin University of Technology (sabine.fuchs@tu-berlin.de)
Britta Olberg
Affiliation:
Department of Health Care Management, Berlin University of Technology and Federal Joint Committee (G-BA)
Dimitra Panteli
Affiliation:
Department of Health Care Management, Berlin University of Technology
Reinhard Busse
Affiliation:
Department of Health Care Management, Berlin University of Technology

Abstract

Objectives: To review and compare current Health Technology Assessment (HTA) activities for medical devices (MDs) across European HTA institutions.

Methods: A comprehensive approach was adopted to identify institutions involved in HTA in European countries. We systematically searched institutional Web sites and other online sources by using a structured tool to extract information on the role and link to decision making, structure, scope, process, methodological approach, and available HTA reports for each included institution.

Results: Information was obtained from eighty-four institutions, forty-seven of which were analyzed. Fifty-four methodological documents from twenty-three agencies in eighteen countries were identified. Only five agencies had separate documents for the assessment of MDs. A few agencies made separate provisions for the assessment of MDs in their general methods. The number of publicly available HTA reports on MDs varied by device category and agency remit.

Conclusions: Despite growing consensus on their importance and international initiatives, such as the EUnetHTA Core Model®, specific tools for the assessment of MDs are rarely developed and implemented at the national level. Separate additional signposts incorporated in existing general methods guides may be sufficient for the evaluation of MDs.

Type: Methods
Copyright: © Cambridge University Press 2016

INTRODUCTION

Health Technology Assessment (HTA) as a decision support tool for coverage has most frequently been formally established for the evaluation of pharmaceuticals (1). The suitability of this methodology for medical devices (MDs) has been gaining interest as a topic of scientific discourse, especially in light of discussions on the introduction of new regulatory provisions for their market authorization (2). Europe is one of the biggest markets for MDs, which encompass a broad and heterogeneous range of technologies. According to the European Union, a medical device is defined as “any instrument, apparatus, appliance, software, material or other article, whether used alone or in combination, including the software intended by its manufacturer to be used specifically for diagnostic and/or therapeutic purposes and necessary for its proper application, intended by the manufacturer to be used for human beings for the purpose of diagnosis, prevention, monitoring, treatment or alleviation of disease” (3).

There are different classifications of MDs, ranging from risk-based classifications (such as the EU Directives 93/42/EEC and 90/385/EEC) to those incorporating financial elements (e.g., the OECD System of Health Accounts) or aiming to facilitate common device identification (e.g., the Global Medical Device Nomenclature). In a recent classification incorporating the HTA perspective (4), Henschke et al. argue that MDs can be divided into three main groups: (i) assistive technology devices (directly used by patients, e.g., a wheelchair), (ii) artificial body parts (implanted by a medical procedure, e.g., stents), and (iii) MDs for the assistance of medical professionals (e.g., a PET/CT scanner).

Currently, there is no overview of the existing structural, procedural, and/or methodological approaches of HTA institutions in European countries for assessing MDs. Previous comparative research has, among other things, examined HTA institutional practice in general (5), mostly with an international focus (6;7); concentrated on specific aspects of HTA, such as economic evaluation (8); or focused on pharmaceuticals (9) or on selected emerging settings (10). Ciani et al. recently published a relevant overview of practices among institutions outside the European Union (11).

The aim of this work was to (i) identify institutions involved in HTA of MDs in Europe and (ii) explore their structural, procedural, and methodological characteristics, particularly with regard to MD assessment.

METHODS

Selection of HTA Institutions

A comprehensive approach was adopted to identify institutions involved in HTA in European countries. The identification process was based on previous research (12), modified to fit the project objectives. The membership lists of INAHTA, EUnetHTA, HTAi, and the HTAi Vortal Europe were combined and supplemented with institutions identified in comparative articles published in this journal in 2011 and 2012. From the resulting pool, institutions from EU Member States and European Free Trade Association (EFTA) countries were included in the analysis.

Data Collection

For each included institution, the institutional Web site and other online sources (e.g., INAHTA Web site, where available) were searched with the aim of obtaining relevant information on structural, procedural, and methodological characteristics. Information directly displayed on the Web site as well as uploaded documents were considered (see “Data Extraction”).

To supplement these findings, a systematic literature search was performed in MEDLINE, EMBASE, and the Cochrane Library. The search strategy combined a list of the included institutions (both their original names and their preferred English translations), the type of technology (e.g., "medical devices"), and the type of publication (e.g., "methods," "evaluation"); an overview of the main search components is shown in Supplementary Table 1, and the full search strategy is available on request. The search was performed in September 2013. After removal of duplicates, the remaining citations were screened for relevance. Publications were included if they referred to the methodology or process of HTA for MDs by an institution from the predetermined pool. For this purpose, our underlying understanding of MDs encompassed all three types described by Henschke et al. (4) (see Introduction), for both diagnostic and therapeutic purposes. A 5-year publication window was applied to ensure that the information was current. Only full-text documents were included. The selection of publications was performed in two steps (title/abstract and full-text screening).

To gain more information on the health system context, Health Systems in Transition country reports (latest version per included country) available from the European Observatory on Health Systems and Policies were also consulted.

Data Extraction

To systematize information collection, an extraction tool with twenty items was developed based on Drummond's key principles for HTA programs (13). The tool captured the following domains: role and link to decision making (e.g., the institution's place in the country's HTA system), structure (e.g., annual funding), scope (e.g., types of technologies addressed), process (e.g., priority setting for topic selection), HTA report production (e.g., producing/commissioning reports), and methodology (e.g., available methodological documents).

Once the overview of information on these domains was completed, our research focused mainly on methodological elements. For this purpose, we screened and analyzed all methodological documents identified during the systematic information collection using a second extraction tool, which also incorporated elements from Drummond's framework.

This second tool captured the following domains: assessment elements (e.g., clinical effectiveness), evidence procurement and selection (e.g., manufacturer submissions), appraisal of evidence quality (e.g., tools for appraisal), review process and transparency (e.g., stakeholder involvement), re-assessment (e.g., specific intervals), knowledge exchange and transferability (e.g., reports from other HTA agencies), and cost and economic evaluation (e.g., type of analysis). Both tools are available on request.

Data Analysis

Every step of the process described above was performed independently by two reviewer pairs. Discrepancies were resolved by discussion and consensus. Based on the extracted information, institution-specific profiles were compiled and aggregated into two overview tables containing the most relevant information (see Supplementary Tables 2 and 3). The main results are presented below following further abstraction.

RESULTS

Selected HTA Institutions

The combined pool contained ninety-nine entries; after removal of duplicates, eighty-four institutions remained. In a first step, information on all institutions was obtained from their Web sites and other online sources. Institutions were excluded from further analysis if they (i) were not involved in HTA production at all (n = 33; neither producing nor commissioning HTAs, e.g., only funding/coordinating HTA activities) or (ii) focused only on pharmaceuticals (n = 4). Forty-seven institutions were thus included in the analysis (see Supplementary Figure 1).

Collected Institution-specific Data

Information on included institutions was supplemented by the systematic literature search (Figure 1). The search yielded 4,393 publications. After removal of duplicates and screening, thirty-seven publications remained for analysis.

Figure 1. Flow chart of the publication selection process during the systematic literature search.

Role and Scope of Included Institutions

Table 1 presents selected information on the role and scope of the included institutions. The largest group (36 percent; n = 17) are governmental institutions, followed by independent research entities that function as governmental institutions (23 percent; n = 11).

Table 1. Overview of Information about Role and Scope of Included Institutions

Note. Type of institution, evolutionary stage of technologies assessed, and definition of medical devices: own categorization based on available information; type of technologies addressed: based on the categorization by Banta and Luce (14); criteria for selection and prioritization of technologies for assessment: own compilation based on Perleth et al. (15); *Identified information clearly stated that no explicit process for priority setting/no specific definition of medical devices exists or is used; **Prioritization carried out by the commissioning institution. For more details on each institution, see Supplementary Table 2.

The structural elements explored (not shown in Table 1) comprised information on annual funding, the number and background of members/staff, and resources explicitly dedicated to MDs. None of the included institutions provides publicly available information on all four aspects. Excluding consultants and experts, staff numbers range from eight (UTA) to more than 500 (NICE) and vary according to institutional remit. The professional backgrounds of staff cover a broad range, encompassing nearly every scientific field related to health care.

Following the categorization of health technologies provided by Banta and Luce (14), 53 percent (n = 25) of the included institutions cover a broad range of technologies, including drugs, MDs, procedures, and systems (e.g., public health programs). Eighty-seven percent (n = 41) address drugs, 83 percent (n = 39) procedures, and 62 percent (n = 29) systems. Information about the evolutionary stage at which technologies are assessed was provided by forty-one of the forty-seven institutions; mostly, "new technologies" (80 percent; n = 36) are assessed.

Information about the definition of MDs used was obtainable in twenty-two cases (47 percent). Of these, most refer either to a general definition of (health) technologies that includes MDs, such as the INAHTA/HTAi definition from the HTA glossary (41 percent; n = 9; e.g., FinOHTA), or to the EU Directives on MDs (36 percent; n = 8; e.g., AAZ). A small proportion of institutions (18 percent; n = 4; e.g., OGYÉI TEI) have (and provide) their own definition of MDs, often based on national legal or regulatory provisions.

An explicit process for priority setting is used by 34 percent (n = 16) of all institutions; for 26 percent (n = 12), a defined prioritization process is not applicable because prioritization is carried out by the commissioning institution (e.g., the Ministry of Health). Where an explicit process exists, the following categories proposed by Perleth et al. (15) are considered most often: medical-scientific criteria (e.g., efficacy of the intervention; 75 percent; n = 12), criteria related to the epidemiological significance of the disease/burden of disease (e.g., importance of assessment; 69 percent; n = 11), and economic criteria (e.g., better allocation of resources; 63 percent; n = 10).

HTA Report Production

In total, forty of the forty-seven institutions (85 percent) produce reports (in-house or in collaboration with other institutions), two (4 percent) commission reports (e.g., NIHR_NETSCC), and four (9 percent) do both (e.g., G-BA). For forty institutions (85 percent), MD-specific reports are publicly available (see Table 2).

Table 2. Overview of Included Institutions and Information Identified Online about HTA Report Production and Methodology

Note. MDs = medical devices; HTA report production: producing reports, including in collaboration with other institutions/partnerships; reports were counted as available even if only an abstract/summary of the full report (but not merely the title) is accessible; Methodology: general information on the methodological approach available on an institution's Web site is not considered in this table; *For the definition of MDs used in this table, see Henschke et al. (4); **Due to language barriers, no estimation possible. For more details on each institution, see Supplementary Table 3.

Methodology

For 49 percent (n = 23) of the included institutions, at least one methodological guide or other official document detailing the applied methods was publicly available. In total, fifty-four methodological documents were identified. These mainly (n = 36) represent general provisions for the methodology/process underlying the institution's outputs, named, for example, "handbook" or "method manual", or papers concentrating on methods for specific HTA domains, such as economic evaluation and/or budget impact analysis (n = 6) and stakeholder engagement (n = 1). A further three are regulatory documents. Only nine of the fifty-four documents concentrate solely on the evaluation of a specific type of technology, namely MDs.

Identified Methodological Documents

Of the fifty-four identified methodological documents, forty-five were analyzed using the developed extraction tool (see Figure 2 and Supplementary Table 4).

Figure 2. Overview of identified and analyzed methodological documents. Note. MDs = medical devices; DACEHTA = Danish Centre for Health Technology Assessment; HAS = Haute Autorité de Santé; LBI = Ludwig Boltzmann Institute for Health Technology Assessment; NICE = National Institute for Health and Care Excellence; ZiN = Zorginstituut Nederland; *Nine of the 45 general methodological documents could not be extracted due to language barriers and are thus not considered in the analysis.

The following sections focus primarily on the methodological documents specifically addressing MDs (n = 9; see Figure 2), aiming to describe their characteristics and key content. A brief insight into the provisions for MD assessment included in general methodological documents (n = 36) is provided first.

Analysis of General Methodological Documents Regarding Provisions for MDs

A common characteristic of all general methodological guides included in the sample is that they are intended to be applicable to all health care technologies within the institution's remit. However, two institutions state that, within the assessment process, "[. . .] some aspects may be more relevant to particular technologies than others" (HIQA) (16) or may need to be adapted (AOTMiT) (17). Three further institutions give more specific information: recommending the use of MD registers (BIQG/GÖG) (18), providing for a differentiated approach to topic selection and prioritization (AAZ) (19), or highlighting organizational features that should be taken into account: "When relevant, the technology description can furthermore include: who is to operate the technology, technical and professional requirements of the operator. . ." (DACEHTA) (20).

Two general methodological documents explicitly include additional sections for the evaluation of MDs. IQWiG's general methods guide (21) includes separate sections on the assessment of nondrug interventions, diagnostic procedures, early diagnosis and screening, as well as on determining the potential benefit of newly developed technologies considered for coverage with evidence development (§137e, Social Code Book V). However, several cross-references to general sections of the document highlight that there are no fundamental differences in the evaluation of these technologies compared with pharmaceuticals. The CRD guidance for undertaking systematic reviews (22) contains a chapter on diagnostic and prognostic tests. According to the document, "[. . .] much of the research on diagnostic tests is in the form of test accuracy studies"; the chapter therefore discusses, among other things, methods developed specifically to deal with such studies.

Analysis of Methodological Documents Discussing MDs Only

The following synthesis has been structured to reflect differences in the objectives and target groups of documents detailing the assessment of MDs only.

The documents of HAS (23), LBI (24), NICE (25-27), and ZiN (28) describe the institutions' own methodological approaches to assessing MDs, concentrating on diagnostic technologies (25), medical/biomarker tests (24;28), and interventional/medical and surgical procedures (23;26;27). The majority of these documents concern full evaluations, with the exception of the HAS document (23), which describes rapid assessments. LBI's guidance (24) aims to complement its general internal manual for evidence synthesis on the specific issue of the effectiveness and safety of biomarker tests; it therefore covers only those steps in the assessment that deviate or need special attention.

Three further identified documents are primarily intended as tools for other stakeholders. DACEHTA (29) provides a specific support tool for health professionals and health care managers in the hospital setting. An HAS document (30) addresses manufacturers, research organizations, and project developers, and aims to present methods and conditions for high-quality clinical assessment of MDs. LBI's document (31) is intended for decision makers assessing diagnostic procedures; it provides recommendations and a list of guiding questions derived from methods used by other institutions (e.g., IQWiG, NICE) to appraise the evidence base for diagnostic technologies. In these documents intended for other parties, some of the extraction elements of interest (e.g., re-assessment) are not addressed. Nevertheless, a synthesis of the most relevant provisions from all separate documents per domain is presented below (see Supplementary Tables 5a and 5b for details).

(1) Assessment Elements

All institutional documents focus on clinical effectiveness (including test accuracy) and safety. LBI (31) and ZiN (28) also address the clinical utility of diagnostic tests; the former specifically refers to the six-level model for the evaluation of diagnostic technologies by Fryback and Thornbury. Moreover, NICE's Diagnostics Assessment Programme (DAP) (25) and Medical Technologies Evaluation Programme (MTEP) (26), DACEHTA (29), and ZiN (28) consider costs and/or economic evaluation. Social and organizational aspects are addressed by DACEHTA (29), LBI (24), and NICE (25-27). Finally, ethical and legal aspects are taken into account in LBI's document (24).

(2) Evidence Procurement and Selection

Assessment Base

HAS (23), LBI (24), and the NICE Interventional Procedures Programme (IPP) (27) base their assessments on internally conducted research. Within NICE's DAP, the assessment report is produced by an external assessment group following the DAP manual (25). NICE MTEP (26) and ZiN (28) expect applicants/sponsors to conduct a systematic review and submit their assessments and underlying data for evaluation; in MTEP's case, this evaluation is carried out by the external assessment center. DACEHTA (29) and LBI (31) suggest that the target stakeholders carry out a systematic review themselves and refer to the institutions' general documents for related methodological guidance.

Type of Evidence

All institutions state a clear preference for direct evidence based on randomized controlled trials (RCTs), but also accept or suggest other designs under certain circumstances. In this respect, NICE IPP (27) states that "[. . .] the highest value has traditionally been placed on evidence from meta-analysis of RCTs or one or more well-designed and executed RCTs [. . .] In some instances, non-randomized studies may be more informative about outcomes". LBI's document on biomarker tests (24) notes that for some specific research questions "[. . .] the only evidence feasible and/or ethical will be from observational studies and different evidence hierarchies may apply", such as those elaborated by the Australian National Health and Medical Research Council.

ZiN (28) proposes a search for indirect evidence by means of an "[. . .] analysis framework based on a comparison of the usual test-plus-treatment-strategy and the proposed strategy". LBI (31) also considers linked evidence an option for the evaluation of diagnostic technologies but states clearly "[. . .] that the use must be reasonably justified." HAS (30) describes alternative methods for conducting MD studies when conventional RCTs are difficult to implement (e.g., due to a lack of direct evidence or because randomization or blinding is not feasible), including experimental designs such as Zelen's design (the randomized consent design). However, it also clearly emphasizes that "[. . .] The choice of an observational study should remain the exception [. . .]".

Endpoints

All documents stress the importance of patient-relevant endpoints, both for the assessment itself (LBI (24;31); NICE IPP (27)) and for clinical trial development (HAS (30)): "Evidence of improved survival, reduced morbidity or improved quality of life carry more weight in decision making than surrogate outcomes" (27). If intermediate endpoints are used, they "[. . .] must have been justified and validated in previous studies" (30). LBI (24) additionally states that an "accurate diagnosis is a prerequisite for a successful therapy, but it should not be seen in isolation. Instead, the benefit to patients resulting from diagnosis should be measured in patient-relevant outcomes [. . .]".

Comparator

LBI (24;31), NICE DAP (25), and ZiN (28) refer to a so-called "standard" technology as the comparator to consider. In the NICE MTEP manual (26), the comparator is defined as "[. . .] a similar or equivalent technology used as part of current management, but it can be no intervention". In NICE IPP (27), the comparator also depends on the circumstances and is either an active treatment or placebo. The HAS guide (30) discusses the ethical acceptability of inactive controls in more detail.

(3) Appraisal of Evidence Quality

All documents include provisions on the critical appraisal of evidence before conclusions are drawn. For example, LBI (24), NICE DAP (25), and ZiN (28) recommend the use of the QUADAS instrument or its revised iteration (QUADAS-2) when assessing the accuracy of tests. In addition, ZiN (28) and LBI (24) use the GRADE instrument, including its adaptation for diagnostic accuracy and prognostic studies.

(4) Review Process and Transparency

All institutional documents endorse stakeholder involvement in the assessment production process as well as a subsequent external review/consultation. Depending on the institution, draft and/or final reports will be published online to ensure transparency.

(5) Re-assessment

Except for NICE MTEP (26), which "updates the literature search every 3 years to ensure that relevant new evidence is identified", the documents specify no regular re-assessment intervals for MDs (where applicable). NICE (25;27) indicates that a renewed evaluation is advisable if newer evidence becomes available, and HAS (30) suggests that an ideal assessment process includes surveillance and regular re-assessment of the technology's use in practice.

(6) Knowledge Exchange and Transferability

NICE MTEP (26) and IPP (27) explicitly state that they draw on other HTA reports for their assessments. To ensure the transferability of results, differences between the study and application contexts (e.g., patient population, intervention setting) should be documented (NICE DAP) (25).

(7) Cost and Economic Evaluation

Detailed information could be obtained from the documents on NICE's DAP (25) and MTEP (26). Within MTEP (26), cost-consequence analysis is used for most technologies (including cost-saving diagnostics), whereas DAP (25) undertakes complex assessments of diagnostic technologies using cost-effectiveness analysis.

(8) Other Device-specific Factors

The methodological documents of HAS (30), NICE MTEP (26), and IPP (27) point out other factors relevant to the assessment of MDs, such as the effect of operator or user experience on the results of a technique (the "learning curve") or dynamic pricing. NICE (26) underlines that "The technology of devices may advance rapidly. This means that both efficacy and safety outcomes reported in the published literature may not accord with ‘current practice’ using technologically more advanced devices." Thus, "[. . .] the guidance may refer to the potentially important influence of different devices on the safety and/or efficacy of the procedure, or to rapid technological developments described by the Specialist Advisers, manufacturers or other sources" (26).

In NICE's MTEP (26), "The Committee may make recommendations for use of the technology in specific circumstances only (e.g., by staff with certain training)". HAS (30) recommends that "During the development of a new medical device, provisions must be made for training and learning plans". The volume of activity also has to be taken into account, because there is a "[. . .] significant association between favorable clinical results and the doctor's volume of activity [. . .]" (30). In addition, LBI (31) emphasizes research gaps in the field of diagnostic technologies, for example, inaccurate reference standards.

DISCUSSION

Main Findings of the Study

Of the eighty-four identified institutions, forty-seven are actively involved in commissioning or producing HTA reports on MDs (assessment and/or appraisal). Sufficient information was not publicly available for all institutions. Variability still exists in the understanding of what the term MD entails, which is also reflected in differing structural, procedural, and methodological elements among institutions. Although a large number of general methodological documents were identified, only five institutions have developed specific documents for the assessment of MDs. Interestingly, five of the nine separate documents focus on diagnostic technologies (including tests). Similarities between the documents for internal use mainly relate to the type of preferred evidence and the outcome parameters to be considered, the appraisal of evidence quality, and stakeholder involvement. Differences mainly concern the assessment base and the comparator used, largely reflecting the different types of devices evaluated (diagnostic versus therapeutic).

Institutions such as NICE and HAS also mention additional parameters, such as learning curves or the usage setting, as crucial elements that should be considered. Only a few institutions made separate provisions for the assessment of MDs in their general methodological documents, reflecting that certain evaluation steps described in a general methodological paper apply to all types of technologies, including MDs.

Comparison to Previous Literature and Current Activities

Several comparative studies have investigated HTA practices in Europe (5-10); however, none had a specific focus on MDs. The most recent example is the WHO Global Survey on HTA from 2015, which includes fifty-three European countries (7).

In parallel to the present study, Ciani et al. (11) conducted a survey of HTA activities for MDs among non-EU agencies using a similar approach (adapted for non-EU countries). They identified thirty-six institutions whose remit included the evaluation of MDs, to which we briefly compare our findings below. Note, however, that Ciani et al. consider only twenty-seven of the identified institutions to be MD-specific (i.e., with MD-specific elements of organizational structure, process, or methods).

In the study by Ciani et al., the institutions identified as actively involved in MD assessment were mostly governmental (50 percent), which is broadly comparable to our findings (36 percent). Fifty percent of the thirty-six institutions use an MD classification system/definition, which is also similar to our findings (47 percent). Of interest, 70 percent of all institutions in Ciani's survey have a process for priority setting, compared with 34 percent in our sample. This difference may be attributable to the additional participatory elements in Ciani et al.'s methodological approach, which may have yielded more information on this issue; publicly available information on priority setting in our sample was often lacking.

Ciani et al. reported nearly the same percentage of institutions with publicly available methodological documents as our study (50 percent versus 51 percent). However, only one institution in their sample has an MD-specific guide, compared with five in ours. This could reflect the fact that more countries in Ciani's sample are considered emerging settings with regard to HTA and are, therefore, less likely to have developed differentiated practices yet.

Recent activities by HTA networks, HTA institutions, health services research institutions, and regulators show that the methodology of MD evaluation is being discussed and taken forward: EUnetHTA has developed a methodological guideline for the HTA of therapeutic MDs (32); the Belgian Health Care Knowledge Centre has demanded that efficacy requirements for obtaining a CE label for high-risk medical devices be raised and that transparency of the clinical data underlying decision making be granted (33); and the Royal Netherlands Academy of Arts and Sciences provides guidance for research suitable for assessing and evaluating benefits and performance tailored to various types of devices (34). In Germany, the Act to Strengthen Health Care Provision in Statutory Health Insurance (GKV-VSG), in force since 2015, has been considered a door-opener for the benefit assessment of MDs in conjunction with reimbursement (35). It introduces a systematic approach to the evaluation of new methods incorporating the application of high-risk MDs.

Strengths and Limitations

The strengths of the presented research lie in the broad systematic approach adopted to identify institutions involved in HTA in Europe combined with a focus on MDs, the collection of a comprehensive range of information, and the quality assurance of all steps of the systematic approach by reviewer pairs.

However, reliance on published literature and online sources alone meant that the study did not identify sufficient information for all included institutions, owing both to a lack of publicly available information and to language barriers. Nevertheless, existing papers that used surveys to gather data directly from representatives of HTA institutions faced the problem of low response rates, leading to similarly partial overviews (6;7). Despite having been conceived to be sufficiently broad, the systematic approach used to identify institutions involved in HTA production in European countries may not have captured every such institution. Overlooked institutions might include those that are not part of international networks, are not discussed in comparative publications, or lack publicly available information about their MD-specific focus. Despite our best efforts, we cannot rule out the possibility that available information (including documents) was overlooked. As most institutions seem to use their general methodological documents to assess MDs, a more in-depth analysis of these would be necessary to obtain an overall picture. In addition, owing to the varying objectives and target groups of the documents, the presented overview of results from MD-specific documents does not necessarily capture all relevant details.

Implications for Policy and Research

In Europe, there is growing recognition of the importance of methodological guidelines for HTA production, reflected also in collaborative initiatives toward methodological standardization (e.g., EUnetHTA) (32). However, the development and implementation of specific methodological tools for the assessment of MDs remains limited at the national level. Although some HTA institutions already consider different approaches for therapeutic and diagnostic technologies, other elements related to the use of MDs, such as device-operator interaction and the volume of device activity, require further methodological discussion. In conjunction with the efficient use of resources, our results raise the question of whether fully separate methodological guides are needed for the evaluation of MDs or whether it is sufficient to include supplementary specifications in each institution's general manuals. We aim to explore this issue further: an interview survey among selected HTA institutions from this overview, with varying experience in the assessment of MDs, will contextualize and expand the information obtained so far and explore potential ideas for the future.

CONCLUSIONS

This work aimed to identify and compare current methods, processes, and institutional practices for the evaluation of MDs in European countries, in order to advance the debate on whether existing assessment tools need to be modified or adapted or whether a wholly new approach is needed.

Despite growing consensus on the importance of assessing especially high-risk devices, existing initiatives for differentiated assessment practices, and relevant international activities, specific methodological tools for the assessment of MDs are rarely developed and implemented at the European level. Separate additional signposts incorporated into existing general methods guides may be sufficient for the evaluation of MDs.

CONFLICTS OF INTEREST

B.O. works for the Federal Joint Committee (G-BA), the highest decision-making body of the joint self-government of physicians, dentists, hospitals, and health insurance funds in Germany. One of its tasks is issuing directives determining the benefit basket of the statutory health insurance funds (GKV). B.O. is also a PhD candidate at Berlin University of Technology. S.F., D.P., and R.B. report no conflicts of interest.

REFERENCES

1. Hutton J, McGrath C, Frybourg J, Tremblay M, Bramley-Harker E, Henshall C. Framework for describing and classifying decision-making systems using technology assessment to determine the reimbursement of health technologies (fourth hurdle systems). Int J Technol Assess Health Care. 2006;22:10-18.
2. Campillo-Artero C. A full-fledged overhaul is needed for a risk and value-based regulation of medical devices in Europe. Health Policy. 2013;113:38-44.
3. European Parliament and Council of the European Union. Directive 2007/47/EC of the European Parliament and of the Council of 5 September 2007 amending Council Directive 90/385/EEC on the approximation of the laws of the Member States relating to active implantable medical devices, Council Directive 93/42/EEC concerning medical devices and Directive 98/8/EC concerning the placing of biocidal products on the market. http://ec.europa.eu/consumers/sectors/medical-devices/files/revision_docs/2007-47-en_en.pdf (accessed October 29, 2015).
4. Henschke C, Panteli D, Perleth M, Busse R. A taxonomy of medical devices in the logic of HTA. Int J Technol Assess Health Care. 2015;31:1-7.
5. Allen N, Pichler F, Wang T, Patel S, Salek S. Development of archetypes for non-ranking classification and comparison of European National Health Technology Assessment systems. Health Policy. 2013;113:305-312.
6. Stephens JM, Handke B, Doshi JA, et al. International survey of methods used in health technology assessment (HTA): Does practice meet the principles proposed for good research? J Comp Eff Res. 2012;2:29-44.
7. World Health Organization. Global survey on health technology assessment by national authorities. Main findings. Geneva: WHO; 2015. http://www.who.int/health-technology-assessment/MD_HTA_oct2015_final_web2.pdf?ua=1 (accessed May 16, 2016).
8. Mathes T, Jacobs E, Morfeld JC, Pieper D. Methods of international health technology assessment agencies for economic evaluations - A comparative analysis. BMC Health Serv Res. 2013;13:371.
9. Kleijnen S, George E, Goulden S, et al. Relative effectiveness assessment of pharmaceuticals: Similarities and differences in 29 jurisdictions. Value Health. 2012;15:954-960.
10. Gulacsi L, Rotar A, Niewada M, et al. Health technology assessment in Poland, the Czech Republic, Hungary, Romania and Bulgaria. Eur J Health Econ. 2014;15(Suppl 1):S13-S25.
11. Ciani O, Wilcher B, Blankart C, et al. Health technology assessment of medical devices: A survey of non-European Union agencies. Int J Technol Assess Health Care. 2015;31:154-165.
12. Panteli D, Kreis J, Busse R. A systematic approach for identifying current practices of doing HTAs across international HTA agencies [poster]. Berlin; 2012. http://www.mig.tu-berlin.de/fileadmin/a38331600/2012.publication/Panteli__Kreis___Busse_2012.pdf (accessed October 29, 2015).
13. Drummond MF, Schwartz JS, Jönsson B, Luce BR, Neumann PJ. Key principles for the improved conduct of health technology assessments for resource allocation decisions. Int J Technol Assess Health Care. 2008;24:244-258.
14. Banta HD, Luce B. Health care technology and its assessment: An international perspective. Oxford: Oxford University Press; 1993.
15. Perleth M, Zentner A, Hoffmann C, Gibis B. Priorisierung von HTA-Themen [Prioritization of HTA topics]. In: Perleth M, Busse R, Gehardus A, Gibis B, Lühmann D, Zentner A, eds. Health technology assessment. Konzepte, Methoden, Praxis für Wissenschaft und Entscheidungsfindung [Concepts, methods, practice for science and decision making]. 2nd updated and expanded ed. MWV Medizinisch Wissenschaftliche Verlagsgesellschaft; 2014. p. 114.
16. Health Information and Quality Authority. Guidelines for evaluating the clinical effectiveness of health technologies in Ireland; 2014. http://www.hiqa.ie/system/files/HTA-Clinical-Effectiveness-Guidelines.pdf (accessed October 29, 2015).
17. Agency for Health Technology Assessment in Poland. Guidelines for conducting Health Technology Assessment (HTA); 2009. http://www.aotm.gov.pl/www/assets/files/wytyczne_hta/2009/Guidelines_HTA_eng_MS_29062009.pdf (accessed October 29, 2015).
18. Fröschl B, Bornschein B, Brunner-Ziegler S, et al. Methodenhandbuch für Health Technology Assessment, Version 1.2012 [Methods handbook for health technology assessment, version 1.2012]. Vienna: Gesundheit Österreich GmbH; 2012. http://hta.lbg.ac.at/uploads/tableTool/UllCmsPage/gallery/Methodenhandbuch.pdf (accessed November 30, 2015).
19. Agency for Quality and Accreditation in Health Care, Department for Development, Research and Health Technology Assessment. The Croatian guideline for health technology assessment process and reporting. 1st ed. Zagreb; 2011. http://aaz.hr/sites/default/files/hrvatske_smjernice_za_procjenu_zdravstvenih_tehnologija.pdf (accessed November 30, 2015).
20. Kristensen FB, Sigmund H, eds. Health technology assessment handbook. Copenhagen: Danish Centre for Health Technology Assessment, National Board of Health; 2007. http://sundhedsstyrelsen.dk/~/media/ECAAC5AA1D6943BEAC96907E03023E22.ashx (accessed November 30, 2015).
21. Institute for Quality and Efficiency in Health Care. General methods, version 4.2 of 22 April 2015. https://www.iqwig.de/download/IQWiG_General_Methods_Version_%204-2.pdf (accessed November 30, 2015).
22. Centre for Reviews and Dissemination. Systematic reviews: CRD's guidance for undertaking reviews in health care. York: University of York; 2009. http://www.york.ac.uk/media/crd/Systematic_Reviews.pdf (accessed November 30, 2015).
23. Haute Autorité de Santé. Rapid assessment method for assessing medical and surgical procedures. HAS; 2007. http://www.has-sante.fr/portail/upload/docs/application/pdf/rapid_assessment_method_eval_actes.pdf (accessed October 29, 2015).
24. Kisser A, Zechmeister-Koss I. Procedural guidance for the systematic evaluation of biomarker tests [Decision Support Document 77]. Vienna: Ludwig Boltzmann Institute for Health Technology Assessment; 2014. http://eprints.hta.lbg.ac.at/1041/1/DSD_77.pdf (accessed November 30, 2015).
25. National Institute for Health and Care Excellence. Diagnostics assessment programme manual. NICE; 2011. http://www.nice.org.uk/Media/Default/About/what-we-do/NICE-guidance/NICE-diagnostics-guidance/Diagnostics-assessment-programme-manual.pdf (accessed October 29, 2015).
26. National Institute for Health and Care Excellence. Medical technologies evaluation programme methods guide. NICE; 2011. http://www.nice.org.uk/Media/Default/About/what-we-do/NICE-guidance/NICE-medical-technologies/Medical-technologies-evaluation-programme-methods-guide.pdf (accessed October 29, 2015).
27. National Institute for Health and Care Excellence. Interventional procedures programme methods guide. NICE; 2007. http://www.nice.org.uk/Media/Default/About/what-we-do/NICE-guidance/NICE-interventional-procedures/The-interventional-procedures-programme-methods-guide.pdf (accessed October 29, 2015).
29. Danish Centre for Evaluation and Health Technology Assessment. Introduction to mini-HTA - A management and decision support tool for the hospital service. DACEHTA; 2005. http://sundhedsstyrelsen.dk/~/media/AF4E8E32B4E34A5BA70206556DDFF757.ashx (accessed October 29, 2015).
30. Haute Autorité de Santé. Methodological choices for the clinical development of medical devices. HAS; 2013. http://www.has-sante.fr/portail/upload/docs/application/pdf/2014-03/methodological_choices_for_the_clinical_development_of_medical_devices.pdf (accessed October 29, 2015).
31. Nachtnebel A. Evaluation diagnostischer Technologien - Hintergrund, Probleme, Methoden [Evaluation of diagnostic technologies - background, problems, methods]. HTA-Projektbericht Nr. 36. Vienna: Ludwig Boltzmann Institute for Health Technology Assessment; 2010. http://eprints.hta.lbg.ac.at/898/1/HTA-Projektbericht_Nr36.pdf (accessed November 30, 2015).
32. EUnetHTA. Therapeutic medical devices [second draft guideline]. http://www.eunethta.eu/sites/5026.fedimbo.belgium.be/files/news-attachments/2015-10-02_md_gl_sag_publiccons.pdf (accessed November 27, 2015).
33. Baeyens H, Pouppez C, Slegers P, et al. Towards a guided and phased introduction of high-risk medical devices in Belgium. KCE Reports 249. Brussels: Belgian Health Care Knowledge Centre (KCE); 2015. D/2015/10.273/63.
34. Koninklijke Nederlandse Akademie van Wetenschappen (KNAW). Evaluation of new technology in health care: In need of guidance for relevant evidence. Amsterdam: KNAW; 2014.
35. Bundesministerium für Gesundheit. Gesetz zur Stärkung der Versorgung in der gesetzlichen Krankenversicherung (GKV-Versorgungsstärkungsgesetz - GKV-VSG) vom 16. Juli 2015 [Federal Ministry of Health. Act to Strengthen Health Care Provision in Statutory Health Insurance (GKV-VSG) of 16 July 2015]. http://www.bgbl.de/xaver/bgbl/start.xav?startbk=Bundesanzeiger_BGBl&jumpTo=bgbl115s1211.pdf (accessed November 30, 2015).

Supplementary material: Fuchs supplementary material S1 (Supplementary Figure) and S2-S8 (Supplementary Tables) are available with the online version of this article.