INTRODUCTION
Health Technology Assessment (HTA) as a decision support tool for coverage decisions has most frequently been formally established to evaluate pharmaceuticals (1). The suitability of this methodology for medical devices (MDs) has been gaining interest as a topic of scientific discourse, especially in light of discussions on introducing new regulatory provisions for their market authorization (2). Europe is one of the biggest markets for MDs, which encompass a broad and heterogeneous range of technologies. The European Union defines a medical device as “any instrument, apparatus, appliance, software, material or other article, whether used alone or in combination, including the software intended by its manufacturer to be used specifically for diagnostic and/or therapeutic purposes and necessary for its proper application, intended by the manufacturer to be used for human beings for the purpose of diagnosis, prevention, monitoring, treatment or alleviation of disease” (3).
There are different classifications of MDs, ranging from risk-based systems (such as the EU Directives 93/42/EEC and 90/385/EEC) to those incorporating financial elements (e.g., the OECD System of Health Accounts) or aiming to facilitate common device identification (e.g., the Global Medical Device Nomenclature). In a recent classification incorporating the HTA perspective (4), Henschke et al. argue that MDs can be divided into three main groups: (i) assistive technology devices (directly used by patients, e.g., a wheelchair), (ii) artificial body parts (implanted by a medical procedure, e.g., stents), and (iii) MDs for the assistance of medical professionals (e.g., a PET/CT scanner).
Currently, there is no overview of the existing structural, procedural, and/or methodological approaches of HTA institutions for assessing MDs in European countries. Previous comparative research has examined HTA institutional practice in general (5), mostly with an international focus (6;7); concentrated on specific aspects of HTA, such as economic evaluation (8); or focused on pharmaceuticals (9) or on selected emerging settings (10). Ciani et al. recently published a relevant overview of practices among institutions outside the European Union (11).
The aim of this work was (i) to identify institutions involved in HTA of MDs in Europe and (ii) to explore their structural, procedural, and methodological characteristics, particularly with regard to MD assessment.
METHODS
Selection of HTA Institutions
A comprehensive approach was adopted to identify institutions involved in HTA in European countries. The identification process was based on previous research (12), which was modified to fit the project objectives. The membership lists of INAHTA, EUnetHTA, HTAi, and HTAi Vortal Europe were combined and supplemented by institutions identified in comparative articles published in this journal in 2011 and 2012. From the resulting pool, institutions from EU Member States and the European Free Trade Association (EFTA) countries were included in the analysis.
Data Collection
For each included institution, the institutional Web site and other online sources (e.g., the INAHTA Web site, where available) were searched with the aim of obtaining relevant information on structural, procedural, and methodological characteristics. Both information directly displayed on the Web site and uploaded documents were considered (see “Data Extraction”).
To supplement these findings, a systematic literature search was performed in MEDLINE, EMBASE, and the Cochrane Library. The search strategy combined a list of included institutions (with both their original names and their preferred English translations), the type of technology (e.g., “medical devices”), and the type of publication (e.g., “methods,” “evaluation”); an overview of the main search components is shown in Supplementary Table 1, and the full search strategy is available on request. The search was performed in September 2013. After removal of duplicates, the remaining citations were screened for relevance. Publications were included if they referred to the methodology or process of HTA for MDs by an institution from the predetermined pool. For this purpose, our underlying understanding of MDs encompassed all three types described by Henschke et al. (4) (see Introduction), covering both diagnostic and therapeutic purposes. A 5-year publication window was applied to ensure that the information was up to date. Only full-text documents were included. The selection of publications was performed in two steps (title/abstract and full-text screening).
To gain more information on the health system context, Health Systems in Transition country reports (latest version per included country) available from the European Observatory on Health Systems and Policies were also consulted.
Data Extraction
To systematize information collection, an extraction tool with twenty items was developed based on Drummond's key principles for HTA programs (13). The tool captured the following domains: role and link to decision making (e.g., the institution's place in the country's HTA system); structure (e.g., annual funding); scope (e.g., types of technologies addressed); process (e.g., priority setting for topic selection); HTA report production (e.g., producing/commissioning reports); and methodology (e.g., available methodological documents).
Once the overview of information on these domains was completed, our research focused mainly on methodological elements. For this purpose, we screened and analyzed all methodological documents identified during the systematic information collection using a second extraction tool, which also incorporated elements from Drummond's framework.
This second tool captured the following domains: assessment elements (e.g., clinical effectiveness); evidence procurement and selection (e.g., manufacturer submissions); appraisal of evidence quality (e.g., tools for appraisal); review process and transparency (e.g., stakeholder involvement); re-assessment (e.g., specific intervals); knowledge exchange and transferability (e.g., reports from other HTA agencies); and cost and economic evaluation (e.g., type of analysis). Both tools are available on request.
Data Analysis
Every step of the process described above was performed independently by two reviewer pairs. Discrepancies were resolved by discussion and consensus. Based on the extracted information, institution-specific profiles were compiled and aggregated into two overview tables containing the most relevant information (see Supplementary Tables 2 and 3). The main results are presented below following further abstraction.
RESULTS
Selected HTA Institutions
The combined pool comprised ninety-nine entries; after removal of duplicates, eighty-four institutions remained. In a first step, information on all institutions was obtained from Web sites and other online sources. Institutions were excluded from further analysis if they (i) were not involved in HTA production at all (n = 33; neither producing nor commissioning HTAs, but, e.g., funding/coordinating HTA activities) or (ii) focused only on pharmaceuticals (n = 4). Forty-seven institutions were thus included in the analysis (see Supplementary Figure 1).
Collected Institution-specific Data
Information on included institutions was supplemented by the systematic literature search (Figure 1). The search yielded 4,393 publications. After removal of duplicates and screening, thirty-seven publications remained for analysis.
Role and Scope of Included Institutions
Table 1 presents selected information on the role and scope of the included institutions. The largest group of institutions (36 percent; n = 17) are governmental institutions, followed by independent research entities that function as governmental institutions (23 percent; n = 11).
Note. Type of institution, evolutionary stage of technologies assessed, and definition of medical devices: own categorization based on available information; Type of technologies addressed: based on the categorization by Banta and Luce (14); Criteria for selection and prioritization of technologies for assessment: own compilation based on Perleth et al. (15); *Identified information clearly stated that no explicit process for priority setting/no specific definition of medical devices exists/is used; **Prioritization carried out by the commissioning institution; For more details on each institution, see Supplementary Table 2.
The structural elements explored (not shown in Table 1) comprised information on annual funding, the number and professional background of members/staff, and resources explicitly dedicated to MDs. None of the included institutions provides publicly available information on all of these aspects. Excluding consultants and experts, staff numbers range from 8 (UTA) to more than 500 (NICE) and vary according to institutional remit. The professional background of staff is broad, encompassing nearly every scientific field related to health care.
Following the categorization of health technologies by Banta and Luce (14), 53 percent (n = 25) of the included institutions cover a broad range of technologies comprising drugs, MDs, procedures, and systems (e.g., public health programs). Eighty-seven percent (n = 41) address drugs, 83 percent (n = 39) procedures, and 62 percent (n = 29) systems. Information about the evolutionary stage at which technologies are assessed was provided by forty-one of the forty-seven institutions; “new technologies” are assessed most often (80 percent; n = 36).
Information about the definition of MDs used was obtainable in twenty-two cases (47 percent). Of these institutions, most refer either to a general definition of (health) technologies that includes MDs (41 percent; n = 9; e.g., FinOHTA), such as the INAHTA/HTAi definition from the HTA glossary, or to the EU Directives on MDs (36 percent; n = 8; e.g., AAZ). A small proportion (18 percent; n = 4; e.g., OGYÉI TEI) have (and provide) their own definition of MDs, often based on national legal or regulatory provisions.
An explicit process for priority setting is used by 34 percent (n = 16) of all institutions; for 26 percent (n = 12), a defined prioritization process is not applicable because prioritization is carried out by the commissioning institution (e.g., the Ministry of Health). Where an explicit process exists, the following categories proposed by Perleth et al. (15) are considered most often: medical-scientific criteria (e.g., efficacy of the intervention; 75 percent; n = 12), criteria related to the epidemiological significance of the disease/burden of disease (e.g., importance of assessment; 69 percent; n = 11), and economic criteria (e.g., better allocation of resources; 63 percent; n = 10).
HTA Report Production
In total, forty of the forty-seven institutions (85 percent) produce reports (in-house or in collaboration with other institutions), two (4 percent) commission reports (e.g., NIHR_NETSCC), and four (9 percent) do both (e.g., G-BA). For forty institutions (85 percent), MD-specific reports are publicly available (see Table 2).
Note. MDs = medical devices; HTA report production: Producing reports, including in collaboration with other institutions/partnerships; Reports were defined as available even if only an abstract/summary of the full report is available (but not the title alone); Methodology: General information on the methodological approach available on institutions' Web sites is not considered in this table; *For the definition of MDs used in this table, see Henschke et al. (4); **Due to language barriers, no estimation was possible; For more details on each institution, see Supplementary Table 3.
Methodology
For 49 percent (n = 23) of the included institutions, at least one methodological guide or other official document detailing the applied methods was publicly available. In total, fifty-four methodological documents were identified. Most of these (n = 36) set out general provisions for the methodology/process underlying the institution's outputs, named, for example, “handbook” or “method manual”; others concentrate on methods for specific HTA domains, such as economic evaluation and/or budget impact analysis (n = 6) and stakeholder engagement (n = 1). A further three are regulatory documents. Only nine of the fifty-four documents concentrate solely on the evaluation of a specific type of technology, namely MDs.
Identified Methodological Documents
Of the fifty-four identified methodological documents, forty-five were analyzed using the developed extraction tool (see Figure 2 and Supplementary Table 4).
The following sections focus primarily on the methodological documents specifically addressing MDs (n = 9; see Figure 2), aiming to describe their characteristics and key content. A brief insight into provisions for MD assessment included in the general methodological documents (n = 36) is provided first.
Analysis of General Methodological Documents Regarding Provisions for MDs
A common characteristic of all general methodological guides in the sample is that they are intended to be applicable to all health care technologies within the institution's remit. However, two institutions state that, within the assessment process, “[. . .] some aspects may be more relevant to particular technologies than others” (HIQA) (16) or may need to be adapted (AOTMiT) (17). Three further institutions give more specific information, recommending the use of MD registers (BIQG/GÖG) (18), providing for a differentiated approach to topic selection and prioritization (AAZ) (19), or highlighting organizational features that should be taken into account: “When relevant, the technology description can furthermore include: who is to operate the technology, technical and professional requirements of the operator. . .” (DACEHTA) (20).
Two general methodological documents explicitly include additional sections for the evaluation of MDs. IQWiG's general methods guide (21) includes separate sections on the assessment of nondrug interventions, diagnostic procedures, early diagnosis and screening, and the determination of the potential for benefit of newly developed technologies considered for coverage with evidence development (§137e, Social Code Book V). However, several cross-references to general sections of the document highlight that there are no fundamental differences in the evaluation of these technologies compared with pharmaceuticals. The CRD guidance for undertaking systematic reviews (22) contains a chapter on diagnostic and prognostic tests. According to the document, “[. . .] much of the research on diagnostic tests is in the form of test accuracy studies”; the chapter therefore discusses, among other things, methods developed specifically to deal with such studies.
Analysis of Methodological Documents Discussing MDs Only
The following synthesis is structured to reflect differences in the objectives and target groups of the documents detailing the assessment of MDs only.
The documents of HAS (23), LBI (24), NICE (25–27), and ZiN (28) describe the institutions' own methodological approaches to assessing MDs, concentrating on diagnostic technologies (25), medical/biomarker tests (24;28), and interventional/medical and surgical procedures (23;26;27). The majority of these documents concern full evaluations, with the exception of the HAS document (23), which describes rapid assessments. LBI's guidance (24) aims to complement the institute's general internal manual for evidence synthesis on the specific issue of the effectiveness and safety of biomarker tests; it therefore covers only those assessment steps that deviate from the general approach or require special attention.
Three further identified documents are primarily intended as tools for other stakeholders. DACEHTA (29) provides a specific support tool for health professionals and health care managers in the hospital setting. An HAS document (30) addresses manufacturers, research organizations, and project developers and aims to present methods and conditions for the high-quality clinical assessment of MDs. LBI's document (31) is intended for decision makers assessing diagnostic procedures; it provides recommendations and a list of guiding questions derived from methods used by other institutions (e.g., IQWiG, NICE) to appraise the evidence base for diagnostic technologies. In these documents intended for other parties, some of the extraction elements of interest (e.g., re-assessment) are not addressed. Nevertheless, a synthesis of the most relevant provisions from all separate documents per domain is presented below (see Supplementary Tables 5a and 5b for details).
(1) Assessment Elements
All institutional documents focus on clinical effectiveness (including test accuracy) and safety. LBI (31) and ZiN (28) also address the clinical utility of diagnostic tests; the former specifically refers to the six-level model for the evaluation of diagnostic technologies by Fryback and Thornbury. Moreover, NICE's Diagnostics Assessment and Medical Technologies Evaluation Programmes (DAP and MTEP, respectively) (25;26), DACEHTA (29), and ZiN (28) consider costs and/or economic evaluation. Social and organizational aspects are addressed by DACEHTA (29), LBI (24), and NICE (25–27). Finally, ethical and legal aspects are taken into account in LBI's document (24).
(2) Evidence Procurement and Selection
Assessment Base
HAS (23), LBI (24), and the NICE Interventional Procedures Programme (IPP) (27) base their assessments on internally conducted research. Within NICE's DAP, the assessment report is produced by an external assessment group following the DAP manual (25). NICE MTEP (26) and ZiN (28) expect applicants/sponsors to conduct a systematic review and to submit their assessments and underlying data for evaluation; in MTEP's case, this evaluation is carried out by the external assessment center. DACEHTA (29) and LBI (31) suggest that target stakeholders carry out a systematic review and refer to their general documents for related methodological guidance.
Type of Evidence
All institutions state a clear preference for direct evidence from randomized controlled trials (RCTs) but also accept or suggest other designs under certain circumstances. In this respect, NICE IPP (27) states that “[. . .] the highest value has traditionally been placed on evidence from meta-analysis of RCTs or one or more well-designed and executed RCTs [. . .] In some instances, non-randomized studies may be more informative about outcomes”. LBI's document on biomarker tests (24) notes that for some specific research questions “[. . .] the only evidence feasible and/or ethical will be from observational studies and different evidence hierarchies may apply”, such as those elaborated by the Australian National Health and Medical Research Council.
ZiN (28) proposes a search for indirect evidence by means of an “[. . .] analysis framework based on a comparison of the usual test-plus-treatment-strategy and the proposed strategy”. LBI (31) also considers linked evidence an option for the evaluation of diagnostic technologies but states clearly “[. . .] that the use must be reasonably justified.” HAS (30) describes alternative methods for conducting MD studies, including experimental designs such as Zelen's design (the randomized consent design), for situations in which conventional RCTs may be difficult to implement (e.g., due to a lack of direct evidence or difficulties with randomization or blinding). However, the document also clearly emphasizes that “[. . .] The choice of an observational study should remain the exception [. . .]”.
Endpoints
All documents stress the importance of patient-relevant endpoints, with regard to both the assessment itself (LBI (24;31); NICE IPP (27)) and clinical trial development (HAS (30)): “Evidence of improved survival, reduced morbidity or improved quality of life carry more weight in decision making than surrogate outcomes” (27). If intermediate endpoints are used, they “[. . .] must have been justified and validated in previous studies” (30). LBI (24) additionally states that an “accurate diagnosis is a prerequisite for a successful therapy, but it should not be seen in isolation. Instead, the benefit to patients resulting from diagnosis should be measured in patient-relevant outcomes [. . .]”.
Comparator
LBI (24;31), NICE DAP (25), and ZiN (28) refer to a so-called “standard” technology as the comparator to consider. In the NICE MTEP manual (26), this is defined as “[. . .] a similar or equivalent technology used as part of current management, but it can be no intervention”. In NICE IPP (27), the comparator likewise depends on the circumstances and is either an active treatment or placebo. The HAS guide (30) discusses the ethical acceptability of inactive controls in more detail.
(3) Appraisal of Evidence Quality
All documents include provisions on the critical appraisal of evidence before conclusions are drawn. For example, LBI (24), NICE DAP (25), and ZiN (28) recommend the use of the QUADAS instrument, or its revised iteration, when assessing the accuracy of tests. In addition, ZiN (28) and LBI (24) use the GRADE instrument, including its adaptation for diagnostic accuracy and prognostic studies.
(4) Review Process and Transparency
All institutional documents endorse stakeholder involvement in the assessment production process as well as a subsequent external review/consultation. Depending on the institution, draft and/or final reports will be published online to ensure transparency.
(5) Re-assessment
Except for NICE MTEP (26), which “updates the literature search every 3 years to ensure that relevant new evidence is identified”, no specific intervals for the regular re-assessment of MDs were given in the documents (where applicable). NICE (25;27) indicates that a renewed evaluation is advisable if newer evidence becomes available, and HAS (30) suggests that an ideal assessment process includes surveillance and regular re-assessment of the use of a technology in practice.
(6) Knowledge Exchange and Transferability
NICE MTEP (26) and IPP (27) explicitly state that they draw on other HTA reports for their assessments. To ensure the transferability of results, differences between the study context and the application context (e.g., patient population, intervention setting) should be documented (NICE DAP) (25).
(7) Cost and Economic Evaluation
Detailed information could be obtained from the documents on NICE's DAP (25) and MTEP (26). Within MTEP (26), cost-consequence analysis is used for most technologies (including cost-saving diagnostics), whereas DAP (25) undertakes complex assessments of diagnostic technologies using cost-effectiveness analysis.
(8) Other Device-specific Factors
The methodological documents of HAS (30), NICE MTEP (26), and IPP (27) point out other relevant factors in the assessment of MDs, such as the effect of operator or user experience on the results of a technique (the “learning curve”) or dynamic pricing. NICE (26) underlines that “The technology of devices may advance rapidly. This means that both efficacy and safety outcomes reported in the published literature may not accord with ‘current practice’ using technologically more advanced devices.” Thus, “[. . .] the guidance may refer to the potentially important influence of different devices on the safety and/or efficacy of the procedure, or to rapid technological developments described by the Specialist Advisers, manufacturers or other sources” (26).
In NICE's MTEP (26), “The Committee may make recommendations for use of the technology in specific circumstances only (e.g., by staff with certain training)”. HAS (30) recommends that “During the development of a new medical device, provisions must be made for training and learning plans”. The volume of activity also has to be taken into account, because there is a “[. . .] significant association between favorable clinical results and the doctor's volume of activity [. . .]”. In addition, LBI (31) emphasizes research gaps in the field of diagnostic technologies, for example, inaccurate reference standards.
DISCUSSION
Main Findings of the Study
Of the eighty-four identified institutions, forty-seven are actively involved in commissioning or producing HTA reports on MDs (assessment and/or appraisal). Sufficient information was not publicly available for all of these institutions. Variability still exists in the understanding of what the term “medical device” entails, which is also reflected in differing structural, procedural, and methodological elements among institutions. Although a large number of general methodological documents were identified, only five institutions have developed specific documents for the assessment of MDs. Interestingly, five of the nine separate documents focus on diagnostic technologies (including tests). Similarities between the documents intended for internal use mainly relate to the type of preferred evidence, the outcome parameters to be considered, the appraisal of evidence quality, and stakeholder involvement. Differences mainly concern the assessment base and the comparator used, largely reflecting the different types of devices evaluated (diagnostic versus therapeutic).
Institutions such as NICE and HAS also mention additional parameters, such as learning curves or the usage setting, as crucial elements to be considered. Only a few institutions made separate provisions for the assessment of MDs in their general methodological documents. This reflects the fact that certain evaluation steps described in a general methodological document apply to all types of technologies, including MDs.
Comparison to Previous Literature and Current Activities
Several comparative studies have investigated HTA practices in Europe (5–10); however, none had a specific focus on MDs. The most recent example is the WHO Global Survey on HTA from 2015, which includes fifty-three European countries (7).
In parallel to the present study, Ciani et al. (11) conducted a survey on HTA activities for MDs in non-EU countries using a similar approach (adapted for non-EU countries). They identified thirty-six institutions whose remit included the evaluation of MDs, to which we briefly compare our findings below. Notably, Ciani et al. consider only twenty-seven of the identified institutions to be MD-specific (i.e., with MD-specific elements of organizational structure, process, or methods).
In the study by Ciani et al., the institutions actively involved in MD assessment were mostly governmental (50 percent), which is broadly comparable to our findings (36 percent). Fifty percent of the thirty-six institutions use an MD classification system/definition, which is also similar to our findings (47 percent). Of interest, 70 percent of all institutions in Ciani's survey have a process for priority setting, compared with 34 percent in our sample. This difference may be attributable to the fact that Ciani et al. included additional participatory elements in their methodological approach, which may have led to higher information availability on this issue; publicly available information on priority setting in our sample was often lacking.
Ciani et al. reported nearly the same percentage of institutions with publicly available methodological documents as our study (50 percent versus 51 percent). However, only one institution in their sample has an MD-specific guide, compared with five in ours. This could reflect the fact that more countries in Ciani's sample are considered emerging settings with regard to HTA and are, therefore, less likely to have developed differentiated practices yet.
Recent activities by HTA networks, HTA institutions, health services research institutions, and at the regulatory level show that the methodology of MD evaluation is being discussed and taken forward: EUnetHTA has developed a methodological guideline for the HTA of therapeutic MDs (32); the Belgian Health Care Knowledge Centre has demanded that efficacy requirements for obtaining a CE label for high-risk MDs be raised and that the transparency of the clinical data underlying decision making be ensured (33); and the Royal Netherlands Academy of Arts and Sciences provides guidance on research suitable for assessing and evaluating benefits and performance, tailored to various types of devices (34). In Germany, the “Act to Further Develop the Financial Structures and Quality in SHI”, in force since 2015, has been considered a door opener for the benefit assessment of MDs in conjunction with reimbursement (16). It introduces a systematic approach to the evaluation of new methods incorporating the application of high-risk MDs.
Strengths and Limitations
The strengths of the presented research lie in the broad systematic approach adopted to identify institutions involved in HTA in Europe combined with a focus on MDs, the collection of a comprehensive range of information, and the quality assurance of all steps of the systematic approach by reviewer pairs.
However, the reliance on published literature and online sources alone meant that the study did not identify sufficient information for all included institutions, owing to both a lack of publicly available information and language barriers. Nevertheless, existing papers that used surveys to gather data directly from representatives of HTA institutions faced the problem of low response rates, leading to similarly partial overviews (6;7). Despite having been conceived to be sufficiently broad, the systematic approach used to identify institutions involved in HTA production in European countries may not have captured every such institution; overlooked institutions might include those that are not part of international networks, are not discussed in comparative publications, or lack publicly available information about their MD-specific focus. Despite our best efforts, we also cannot rule out the possibility that available information (including documents) was overlooked. As most institutions seem to use their general methodological documents to assess MDs, a more in-depth analysis of these would be necessary to obtain an overall picture. In addition, owing to their varying objectives and target groups, the presented overview of results from MD-specific documents does not necessarily depict all relevant details.
Implications for Policy and Research
In Europe, there is growing recognition of the importance of methodological guidelines for HTA production, reflected also in collaborative initiatives toward methodological standardization (e.g., EUnetHTA) (22). However, the development and implementation of specific methodological tools for the assessment of MDs is still limited to the national level. Although some HTA institutions already apply different approaches to therapeutic and diagnostic technologies, other elements related to the use of MDs, such as device-operator interaction and the volume of device activity, require further methodological discussion. In conjunction with the efficient use of resources, our results raise the question of whether fully separate methodological guides are needed for the evaluation of MDs or whether supplementary specifications in each institution's general manuals would suffice. We intend to explore this issue further through an interview survey among selected HTA institutions from this overview with varying experience in the assessment of MDs, in order to contextualize and expand the information obtained so far and to explore potential ideas for the future.
CONCLUSIONS
The work carried out aimed to identify and compare current methods, processes, and institutional practices for the evaluation of MDs in European countries in order to advance the debate on whether existing assessment tools need to be modified or adapted or whether a wholly new approach is needed.
Despite a growing consensus on the importance of assessing especially high-risk devices, existing initiatives for differentiated assessment practices, and relevant international activities, specific methodological tools for the assessment of MDs are rarely developed and implemented at the European level. Additional MD-specific signposts incorporated into existing general methods guides may be sufficient for the evaluation of MDs.
SUPPLEMENTARY MATERIAL
Supplementary Figure 1: http://dx.doi.org/10.1017/S0266462316000349
Supplementary Table 1: http://dx.doi.org/10.1017/S0266462316000349
Supplementary Table 2: http://dx.doi.org/10.1017/S0266462316000349
Supplementary Table 3: http://dx.doi.org/10.1017/S0266462316000349
Supplementary Table 4: http://dx.doi.org/10.1017/S0266462316000349
Supplementary Table 5a: http://dx.doi.org/10.1017/S0266462316000349
Supplementary Table 5b: http://dx.doi.org/10.1017/S0266462316000349
Supplementary Table 6: http://dx.doi.org/10.1017/S0266462316000349
CONFLICTS OF INTEREST
B.O. works for the Federal Joint Committee (G-BA), which is the highest decision-making body of the joint self-government of physicians, dentists, hospitals and health insurance funds in Germany. One of its tasks is issuing directives determining the benefit basket of the statutory health insurance funds (GKV). B.O. is also a PhD candidate at Berlin University of Technology. S.F., D.P. and R.B. report no conflict of interest.