
7 - Listening to people: measuring views, experiences and perceptions

Published online by Cambridge University Press:  10 July 2020

Ellen Nolte
Affiliation:
London School of Hygiene and Tropical Medicine
Sherry Merkur
Affiliation:
European Observatory on Health Systems and Policies
Anders Anell
Affiliation:
Lunds Universitet, Sweden
Jonathan North
Affiliation:
European Observatory on Health Systems and Policies


Chapter in: Achieving Person-Centred Health Systems: Evidence, Strategies and Challenges, pp. 173–200
Publisher: Cambridge University Press
Print publication year: 2020
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

Introduction

The universal challenge facing health policy-makers is how to ensure the delivery of high-quality health care to a given population with a defined level of resource. While there is often disagreement on how to achieve this goal, most agree that health care should aim to be clinically effective, safe, equitable, efficient and responsive to those it aims to serve. The concept of responsiveness is often equated with the notion of person-centredness. Person-centred care means ensuring that care delivery responds to people’s physical, emotional, social and cultural needs, that interactions with staff are informative, empathetic and empowering, and that patients’ values and preferences are taken into account. This is important, not just because people want it, but also because their health care experiences can influence the effectiveness of their treatment and ultimately their state of health.

The best way to check whether services are meeting these person-centred standards is to ask the users themselves. This chapter looks at why patients’ perceptions of the quality of care are viewed as a key indicator and how people’s views and experiences can be measured. In thinking about the scope of measurement we include all aspects of care that are important to patients and observable by them, either as a result of their direct experience or through their perceptions and beliefs about health systems.

Why patients’ perspectives matter

Patients’ experiences of health care are important for both intrinsic and extrinsic reasons. Numerous studies have looked at what people value when using health services and what they prefer to avoid. While there are demographic, cultural, socioeconomic and health status variations in people’s values and priorities, there is a great deal of agreement on what matters to us when we are patients. We all hope to be treated with dignity, kindness, compassion, courtesy, respect, understanding and honesty when using public services and we expect our rights to information and privacy to be respected. We want the security of knowing that appropriate health services will be readily accessible when we need them, that our physical and emotional needs will be carefully assessed by competent staff, that our rights to information and involvement will be acknowledged and acted upon, that we will be listened to and treated with empathy and understanding, and that our treatment and care will be well coordinated and speedily delivered.

Various conceptual frameworks have been developed to categorize these issues into distinct domains to enhance understanding and facilitate measurement. Starting with the Picker/Commonwealth Dimensions of Patient-Centred Care, the first of the international efforts to categorize and measure issues of importance to patients (Gerteis et al., 1993) and their incorporation into the Institute of Medicine’s Six Aims for Improvement (Institute of Medicine, 2001), the field was later strengthened by the World Health Organization’s work on system responsiveness (Murray, Kawabata & Valentine, 2001) and many subsequent academic and policy initiatives (see Chapter 2). These frameworks view patient experience, or responsiveness, as a unique dimension of health care quality, to be used alongside more traditional indicators of clinical effectiveness, equity and efficiency. Many countries have introduced systems for monitoring performance against these or similar frameworks. For example, in England the focus on measuring and improving patients’ experience of care is underpinned by the publication of quality statements outlining specific criteria against which performance can be monitored and evaluated (National Institute for Health and Care Excellence, 2012).

Person-centred care incorporates functional aspects – access arrangements, organizational issues, physical environment and amenities – and interpersonal or relational aspects, especially communications between patients and professional staff. Both are very important but relational aspects, while more complex and difficult to change, probably have the greatest influence on the way patients evaluate the care they receive (Entwistle et al., 2012). The subjective features of care are important at a basic human level but also because there is evidence that those who report better experiences tend to have better health outcomes. Clinical care that is technically correct is crucial, but if this is delivered in a brusque manner without demonstrating empathy or respect for individual autonomy, then the results are likely to be less than optimal. For example, studies have found that more positive experiences are associated with better clinical indicators such as blood glucose levels, fewer complications or side-effects, better functional ability and quality of life, greater adherence to treatment recommendations, lower resource use, and less likelihood of premature death (Doyle, Lennox & Bell, 2013; Price et al., 2014).

The precise mechanisms underlying the positive associations between experience and health outcomes are not well understood. While it may appear intuitively obvious that patients who trust and respect their physicians are more likely to follow their advice, leading to improved adherence and better self-care, the connections are not always straightforward and may not be directly causal (Price et al., 2014). There is some evidence that hospitals with better work environments (as reported by nurses) and lower patient-to-nurse ratios are more highly rated by patients (Aiken et al., 2012). Perhaps those hospitals or departments that attract high ratings from patients are better resourced or better managed, or maybe clinicians in these facilities are more likely to follow evidence-based guidelines, leading to safer or higher quality care.

Critics have sometimes argued that patients’ judgements are too subjective to be useful, but these objections miss the point. Measurement of patients’ experience is not intended as a substitute for more objective clinical measures (Manary et al., 2013; Anhang Price et al., 2015). Instead it taps into an important dimension of health care not represented by more traditional indicators.

Purpose, methods and scope of measurement

The main reasons for adopting a systematic approach to the elicitation of patients’ views are to inform quality improvement policy and practice and to hold providers to account for maintaining quality standards. Specific goals may include the following:

  • to track public attitudes to the health system;

  • to identify and monitor problems in care delivery;

  • to facilitate performance assessment and benchmarking between services or organizations;

  • to help professionals reflect on their own, their team’s or their organization’s performance;

  • to inform service redesign and monitor the impact of any changes;

  • to promote informed choice of provider by patients and/or clinicians; and

  • to enable public accountability and transparency (Coulter, Fitzpatrick & Cornwell, 2009).

There are many ways to collect data for these purposes, including qualitative methods (such as focus groups or in-depth interviews) and analysis of administrative data or written complaints. Quantitative surveys using structured self-completion questionnaires are the most commonly used type of patient-based measure. Structured surveys are popular because they can be analysed statistically and used to compare results for whole populations or sub-groups. Data can be collected by mail, telephone or electronic means, as well as by more expensive face-to-face surveys. Questionnaires that are well designed, tested with patients to ensure salience and comprehensibility, checked for validity and reliability using appropriate psychometric methods, and rigorously implemented to achieve adequate response rates and minimize bias can yield useful information (Beattie et al., 2015).

Surveys do have important limitations, however. Questions are usually ‘closed’, offering a specific set of pre-coded response options. This necessarily imposes restrictions, so awareness of the design process is crucial for interpreting the findings and identifying potential sources of bias. Response rates are sometimes low, raising the risk of systematic error if certain groups in the population are less likely to respond. For example, it is known that responses to postal surveys tend to be lower among males, younger adults, the very old, people in poorer health, and those in socioeconomically deprived groups (Zaslavsky, Zaborski & Cleary, 2002). Systematic biases can also affect surveys with high response rates; for example, ‘acquiescence bias’ when respondents tend to answer ‘yes’ to questions instead of other response options, or ‘social desirability bias’ when respondents choose options that they think are socially desirable but may not be accurate indications of their behaviour or beliefs. The likelihood of responding positively to questions about health experiences tends to vary by health status too (Hewitson et al., 2014; Paddison et al., 2015). These issues can be handled statistically through adjustment if enough is known about the factors influencing specific responses, but users of survey data should always interpret the results cautiously (Raleigh et al., 2015).
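The statistical adjustment mentioned above can be illustrated with a small post-stratification sketch. Everything in it is hypothetical (the age bands, population shares and response data are invented for illustration, not taken from the surveys discussed here); it simply shows how known population shares can re-weight per-stratum results when response rates differ across groups:

```python
# Hypothetical post-stratification example: re-weight per-stratum survey
# means by known population shares to correct for differential non-response.

def poststratify(responses, population_shares):
    """Combine per-stratum means using known population shares.

    responses: dict mapping stratum -> list of 0/1 answers
               (1 = reported a positive experience)
    population_shares: dict mapping stratum -> share of the target
               population (shares sum to 1)
    """
    weighted = 0.0
    for stratum, answers in responses.items():
        stratum_mean = sum(answers) / len(answers)
        weighted += population_shares[stratum] * stratum_mean
    return weighted

# Younger adults respond less often here, so the raw mean over-represents
# older respondents; weighting restores the population balance.
responses = {
    "18-44": [1, 0, 1, 0],        # 4 respondents, 50% positive
    "45+": [1, 1, 1, 1, 0, 1],    # 6 respondents, ~83% positive
}
shares = {"18-44": 0.5, "45+": 0.5}

raw_mean = sum(sum(a) for a in responses.values()) / 10  # 0.7
adjusted = poststratify(responses, shares)               # ~0.667
```

In practice survey teams use far richer weighting models (age, sex, deprivation, health status and more), but the principle is the same: results can only be adjusted when the factors driving response propensity are known.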

Qualitative, unstructured feedback methods also have an important part to play. These may include in-depth face-to-face interviews, focus groups, patient stories, web-based free-text comments, suggestion boxes, video boxes, direct observations and shadowing, or mystery shopping (Ziebland et al., 2013). These methods usually yield a deeper understanding of the meanings that people attribute to their health care experiences, but they are generally unsuitable for use as performance indicators.

Routine health data can be used to assess certain elements of patients’ experience, for example waiting times, lengths of stay, place of death, etc. There is also scope for more systematic use of complaints, looking for patterns and trends instead of just treating each complaint as an isolated event. No single method is ideal for every purpose and each of the approaches has strengths and weaknesses (Coulter, Fitzpatrick & Cornwell, 2009). It is often a good idea to use multiple sources to gain the fullest picture. For example, qualitative data from interviews, focus groups or observations can be used to inform the scope and wording of structured surveys; conversely, survey results may be used to identify issues requiring more in-depth investigation using qualitative methods.

Patient experience and satisfaction

Quantitative patient or public surveys are used to inform health policy in three main ways: monitoring patients’, service users’ or carers’ experience of, and satisfaction with, care; measuring health outcomes from the patients’ point of view; and assessing public attitudes, health beliefs and behaviours. Selected examples of how this is being tackled in various European countries are illustrated below.

Most people working in health care are familiar with the notion of patient satisfaction, but there is often confusion between this and the related notion of patient experience. Satisfaction is a broad and often ill-defined concept that has been measured in many different ways. Derived from marketing theory, it looks at the extent to which health care fulfils people’s expectations (Batbaatar et al., 2015). Satisfaction surveys ask patients to evaluate the quality of care they have received, often using pre-defined response categories. A typical question might be: ‘Please rate the care you received at this hospital’ – ‘excellent/very good/good/fair/poor.’

Patient satisfaction is sometimes treated as an outcome measure (satisfaction with health status following treatment) and sometimes as a process measure (satisfaction with the way in which care is delivered). It is generally recognized as multi-dimensional in nature, and there is no consensus on what domains should be included, nor which are most important. The best questionnaires are developed with patient involvement, covering topics that are known to be important to patients. Very general questions are often less informative than more specific, detailed ones. While it can be useful to know how satisfied people are with the process and outcomes of care, satisfaction ratings can be hard to interpret because they may be influenced by individual expectations and preferences, health status, personal characteristics or cultural norms, as well as the actual quality and outcomes of care received. Nevertheless, there is continuing interest in gauging people’s satisfaction with services, and newer methods include inviting unstructured feedback via websites and brief ‘exit’ surveys designed to produce rapid results.

Many researchers now believe that the complexities of modern health care and the diversity of patients’ expectations and experiences cannot be reliably evaluated by asking general satisfaction-style rating questions such as ‘How satisfied were you with your care in hospital x?’ or, as is sometimes the case, by focusing solely on food and amenities while ignoring people’s concerns about their clinical care or the way staff dealt with them. More recently, therefore, the focus has shifted to asking people to give factual reports on what actually happened to them during an episode of care, instead of evaluative ratings. This style of questionnaire is known as a patient experience survey, sometimes referred to as Patient Reported Experience Measures (PREMs).

PREMs questionnaires are designed to elicit responses to specific questions about people’s experiences of a particular service, hospital episode, general practice or clinician, or in other words to provide factual information instead of evaluations. Experience surveys are often quite lengthy, posing detailed questions about a specific episode or time period. For example, a typical experience question about a recent hospital episode might be: ‘When you had important questions to ask a doctor, did you get answers you could understand?’ – ‘Yes always/yes sometimes/no.’ This type of question tends to be easier to interpret and respond to than more general questions about levels of satisfaction, and hence it is more actionable. Patient experience questionnaires form the basis of the national patient survey programme in England and several other countries (Box 7.1).
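Reports of this kind are typically summarized before they are used as indicators. The ‘top-box’ percentage below (the share of respondents choosing the most positive option) is one common analytic convention, assumed here for illustration rather than prescribed by the surveys discussed in this chapter:

```python
# Hypothetical scoring of a PREM item with the response options quoted
# above; the "top-box" score is the share choosing the most positive option.

RESPONSE_OPTIONS = ("yes always", "yes sometimes", "no")

def top_box_percentage(answers, top="yes always"):
    """Return the percentage of valid answers equal to the top option."""
    valid = [a for a in answers if a in RESPONSE_OPTIONS]
    if not valid:
        return None  # no valid responses to summarize
    return 100.0 * sum(a == top for a in valid) / len(valid)

answers = ["yes always", "yes sometimes", "yes always", "no", "yes always"]
score = top_box_percentage(answers)  # 60.0
```

Because the question asks about a concrete event rather than a feeling, a low top-box score points directly at something a provider can act on.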

Box 7.1  Measurement of patients’ experience in England

The promotion of person-centred care has been a key policy goal for the British NHS for more than twenty years since the publication of the original Patients’ Charter in 1991, later reconstituted as the NHS Constitution (Department of Health, 2015). To monitor this policy, national patient experience surveys have been conducted in England since 2002 under the auspices of the Care Quality Commission.

The national patient experience surveys, which are mostly carried out on an annual basis, are specially designed for each of the following services: inpatient, outpatient, emergency hospital care, maternity, and community mental health services (http://www.cqc.org.uk/content/surveys). Questionnaires and sampling strategies are designed and tested by a national survey coordination centre run by Picker Institute Europe, but individual health care facilities are responsible for implementation, with most employing external survey companies to carry them out. Results are fed back to staff and made available to the public via a national website (www.nhs.uk).

A large survey of general practice patients is also carried out on behalf of NHS England (https://gp-patient.co.uk/), together with a cancer patient experience survey (https://www.quality-health.co.uk/surveys/national-cancer-patient-experience-survey) and a national staff survey (http://www.nhsstaffsurveys.com/).

Other countries collecting patient experience data at a national level include France, the Netherlands, Norway, Sweden, Switzerland and the United States (Rechel et al., 2016). Elsewhere a combination of approaches is used, including satisfaction surveys and household health surveys. For example, in Poland a set of standardized questionnaires on patient satisfaction, called PASAT, was developed by the Centre for Quality Monitoring in Health Care, a World Health Organization Collaborating Centre. Separate questionnaires were distributed to hospital patients (PASAT HOSPIT), parents of treated children (PASAT PEDIATRIA) and users of primary health care (PASAT PHC). In addition, a household health survey conducted by the Chief Statistical Office was recently expanded to include a set of questions on health care quality (Boulhol et al., 2012).

In countries with devolved responsibility for health care, such as Germany (Box 7.2), Italy (Box 7.3) and Spain, data collection tends to be coordinated at regional level.

Box 7.2 Measurement of patients’ experience in Germany

Responsibility for the health care system in Germany is shared between the 16 Länder (states), the federal government and civil society organizations, while health insurance is provided by competing, not-for-profit, non-governmental ‘sickness’ funds and private agencies. This devolved system of care has led to multiple initiatives for gathering data on patients’ experiences rather than a single national approach, and quality assurance is the responsibility of a number of agencies.

Germany does not currently run a systematic national survey of patients’ experience, but this situation is set to change following the establishment of a new federal institute. The Institute for Quality Assurance and Transparency in Health Care (IQTIG) was established in 2016 to monitor quality across the German health care system, with a particular focus on hospitals (IHS Markit, 2015). It is charged with developing quality indicators and measurement instruments, including patient surveys.

Patient satisfaction surveys in ambulatory and hospital care are also undertaken by sickness funds in cooperation with the Bertelsmann Foundation twice a year, and by professional stakeholders, such as the National Association of Statutory Health Insurance Physicians.

Box 7.3 Measurement of patients’ experience in Italy

Since 1980 ISTAT (the Italian National Institute of Statistics) has conducted a five-yearly Health Conditions and Use of Medical Services Survey which includes some aspects of patients’ experience, in particular questions about access to care. The questionnaire comprises more than 70 pages, with the most recent publication in 2013 (ILO, 2013).

National legislation requires the Ministry of Health, in collaboration with patients’ and citizens’ associations, to establish a set of indicators to systematically measure the quality of health services from the patient’s point of view. The indicators cover four areas: personalizing and humanizing care, citizens’ information rights, quality of hospital accommodation services, and disease prevention policies. A Ministerial Decree published on 15 October 1996 identified 79 patient satisfaction indicators in these areas. Topics under ‘personalized and humanized care’ include the ability to book appointments by telephone and the percentage of general practitioners who set up out-of-hours services. The implementation of this national framework on patients’ rights and empowerment has not been uniform: regions such as Emilia-Romagna, Tuscany and Veneto have given systematic attention to this issue, while others have not. Since each region has adopted distinctive and different solutions to seeking patients’ views, there is a lack of comparability across the country in this aspect of person-centred care (Paparella, 2016).

At regional level data on Italian citizens’ satisfaction compiled by ISTAT show that satisfaction varies across the north–south divide, with the northern and central regions consistently obtaining above average results, whereas all southern regions score below average.

The turnaround time between data collection and publication of results of patient experience surveys can sometimes be lengthy due to the number of health care organizations involved, the need to check sampling and mailing procedures (an initial mailing plus up to two reminders is usually required to secure a good response), the quantity of data obtained, analytical complexities and other administrative delays. Frustration with this lack of timeliness has led to a search for briefer, easy-to-implement measures.

The introduction of a new type of national patient survey for England, the Friends and Family Test (FFT), was an attempt to deal with this problem. Based on the Net Promoter Score (www.netpromotersystem.com), a marketing concept used extensively by retail companies, the FFT is an on-site or exit survey that asks people to evaluate their experiences using a single question: ‘How likely are you to recommend our [hospital/ward/maternity service/GP practice] to friends and family if they needed similar care or treatment?’, followed by an invitation to provide free text comments. It was hoped that use of this simple form of ‘real-time’ feedback would provide relevant and timely data to inform quality improvement efforts. This approach has certainly succeeded in amassing a great deal of data, some of which has been used to stimulate quality improvements, but problems of interpretation due to unsystematic approaches to data collection hamper its reliability as a performance measure (Coulter, 2016; Sizmur, Graham & Walsh, 2015).
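For context, the Net Promoter Score on which the FFT drew can be sketched as follows. This uses the standard marketing convention (ratings of 9–10 count as promoters, 0–6 as detractors, and the score is the percentage-point difference); the FFT itself asks a categorical likelihood question, so this shows the underlying concept rather than the FFT’s exact scoring:

```python
# Standard Net Promoter Score calculation (marketing convention, 0-10 scale).

def net_promoter_score(ratings):
    """ratings: iterable of integer ratings from 0 to 10."""
    ratings = list(ratings)
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(r >= 9 for r in ratings)   # ratings of 9-10
    detractors = sum(r <= 6 for r in ratings)  # ratings of 0-6; 7-8 are passive
    return 100.0 * (promoters - detractors) / len(ratings)

# Two promoters (10, 9), two passives (8, 7), two detractors (6, 3):
nps = net_promoter_score([10, 9, 8, 7, 6, 3])  # 0.0
```

A single aggregate like this is cheap to collect in real time, which explains its appeal, but it discards exactly the detail – which aspect of care was good or bad, and for whom – that quality improvement work needs, echoing the interpretation problems noted above.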

Social media is an important source of data on people’s experiences of health care and independent websites gathering unstructured feedback are becoming popular. Examples include Patient Opinion in the UK (https://www.careopinion.org.uk/), iWantGreatCare (https://www.iwantgreatcare.org/), and PatientsLikeMe (https://www.patientslikeme.com/). Their primary purpose is to collect feedback on people’s experiences to improve quality, but people’s accounts of their care are also used to inform other patients and to stimulate research.

There is considerable interest in using patient experience indicators to compare performance between countries. This is the purpose of the US-based Commonwealth Fund’s international health policy surveys (www.commonwealthfund.org) (Box 7.4).

Box 7.4 Commonwealth Fund international surveys

The Commonwealth Fund’s international surveys have been conducted on a regular basis since 2000. Now (2018) covering eleven countries (Australia, Canada, France, Germany, the Netherlands, New Zealand, Norway, Sweden, Switzerland, the UK and the USA), these telephone surveys include questions both about recent experience of using health care and about people’s opinions of their local systems. The surveys target nationally representative random samples using common questionnaires that are translated and adjusted for country-specific wording. The published comparisons, which are weighted to reflect the demographic characteristics of the adult population in each country, tend to attract considerable interest among governments and the media in the various countries.

Inter-country comparisons face considerable methodological problems, especially if there is a lack of consistency in what is being measured and how the measurement is carried out. The Organisation for Economic Cooperation and Development’s (OECD) Health Care Quality Indicators project is an attempt to encourage greater international coordination of patient experience surveys to facilitate inter-country comparisons (Fujisawa & Klazinga, 2016) (Box 7.5).

Box 7.5 OECD Health Care Quality Indicators

The OECD Health Care Quality Indicators project (HCQI), working in conjunction with the Directorate General for Health and Consumers of the European Commission, has been striving since 2006 to develop better methods for comparing the quality of health service provision in different countries on a routine basis, including measurement of patients’ experiences (Fujisawa & Klazinga, 2016).

As part of this programme a number of countries have agreed to field OECD-proposed questions in their national surveys to stimulate cross-national learning, including Belgium, Estonia, Italy, Poland and Spain. The agreed list of indicator questions about patients’ experience of ambulatory care covers the following topics:

  • costs to the patient of medical consultations;

  • costs to the patient of medical tests, treatment or follow-up;

  • costs to the patient of prescribed medicines;

  • waiting times to get an appointment with a specialist;

  • doctor spending enough time with patient during the consultation;

  • doctor providing easy-to-understand explanations;

  • doctor giving opportunity to ask questions or raise concerns; and

  • doctor involving patient in decisions about care and treatment.

Patient-reported outcomes

While patient experience surveys ask respondents to report on the process of care, patient-reported outcome measures (referred to as PROs or PROMs) ask them to report on their health state following a clinical intervention, the outcome of care. PROMs are standardized questionnaires that are designed to elicit subjective reports of the personal impact (outcomes) of illness and treatment, focusing on physical functioning (ability to maintain daily activities) and emotional well-being, often referred to more generally as health-related quality of life. The aim is to obtain important information that is not reflected in traditional clinical measures. This is done by asking respondents to describe their current state, by means of a structured interview or self-completion questionnaire, either paper-based or electronic. The resulting reports can then be compared to previous measurements from the same individual or group (to measure change over time) or to those from a reference group or sub-groups (to compare against an external norm or standard).

PROMs must be carefully developed and tested to conform to accepted statistical and psychometric standards, including evidence of validity, reliability and sensitivity to change. The best PROMs are developed with extensive input from patients, ensuring that they cover topics that are salient and meaningful from the patient’s perspective. While they are intended primarily for use at two or more time points to measure health gain (or loss), for example before and after treatment, or at various time points during a period of illness or recovery, they can also be used to obtain a single snapshot of the prevalence of quality of life problems.

PROMs fall into three distinct types (Table 7.1). Some measure general health status regardless of the clinical diagnosis (generic PROMs), while others ask about health perceptions in relation to specific conditions (condition-specific PROMs). A third type is patient-generated measures, where respondents are asked to define their own outcome goals and achievement of these is then assessed after a period of time.

Table 7.1 Types of PROM

Type                Examples
Generic             Medical Outcomes Study: SF-36 (Ware & Sherbourne, 1992); EuroQol: EQ-5D (EuroQol Group, 1990)
Condition-specific  Osteoarthritis of the hip: Oxford Hip Score (Dawson et al., 1996); Depression: PHQ-9 (Kroenke, Spitzer & Williams, 2001)
Patient-generated   Measure Yourself Medical Outcome Profile: MYMOP (Paterson, 1996); Schedule for Evaluation of Individual Quality of Life: SEIQoL (Joyce et al., 2003)

Most PROMs cover a number of quality of life dimensions or domains. For example, the widely used EuroQol measure (EQ-5D) (EuroQol Group, 1990) includes five domains (mobility, self-care, usual activities, pain/discomfort, anxiety/depression) and an overall rating of the respondent’s health state. The results can be scored separately for each domain or summarized in a single index score to monitor variations and time trends or for ranking providers. The EQ-5D has been adopted by the Department of Health in England for inclusion in its national PROMs programme (Box 7.6) and is also included in NHS England’s large general practice patient survey mentioned earlier, giving population data that can be used as a ‘normal population’ reference for comparison purposes.
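The mechanics of combining the five domain responses into a single index can be sketched as below. Real EQ-5D index scores are derived from country-specific value sets estimated in population valuation studies; the decrement values here are invented purely to show the calculation, and are not taken from any published value set:

```python
# Illustrative EQ-5D-style index calculation with invented decrements.

DOMAINS = ("mobility", "self_care", "usual_activities",
           "pain_discomfort", "anxiety_depression")

# Hypothetical decrement for reporting "some problems" (level 2) or
# "extreme problems" (level 3) on a domain; level 1 means no problems.
DECREMENTS = {2: 0.10, 3: 0.30}

def index_score(profile):
    """profile: dict mapping each domain to a response level 1-3."""
    score = 1.0  # 1.0 represents full health
    for domain in DOMAINS:
        level = profile[domain]
        if level > 1:
            score -= DECREMENTS[level]
    return round(score, 3)

full_health = index_score({d: 1 for d in DOMAINS})  # 1.0
some_pain = index_score(
    {**{d: 1 for d in DOMAINS}, "pain_discomfort": 2})  # 0.9
```

The appeal of the index form is that a whole profile collapses to one number that can be tracked over time or compared across providers, at the cost of hiding which domain drove any change.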

Condition-specific PROMs ask about issues relating to the specific problem (for example, ‘Have you had trouble washing or drying yourself because of your knee?’). Like EQ-5D, the results can be used descriptively or summed to produce a single score. Condition-specific measures tend to be more responsive to change than generic measures, but both types can be used to describe pre- and post-treatment health status and health gain.

Patient-generated measures are intuitively appealing, but they are not often used for large-scale data collection because it is harder to score and summarize the results. Their primary use is for facilitating the exchange of information in clinical consultations and in care planning for long-term conditions.

PROMs are currently being collected on a routine basis in Denmark, England, Estonia, the Netherlands and Sweden, as well as in numerous individual studies of treatment effects.

Box 7.6 National PROMs programme in England

Since 2009 the Department of Health in England has funded a programme of work to measure patient-reported outcomes of elective surgery. The aim of the national PROMs programme is to measure and describe outcomes in a meaningful way in the hope that patients, GPs and health care commissioners would use the data to seek out those hospitals and clinicians that achieve the best outcomes, thereby driving up standards. Alongside this experiment in routine data collection from elective surgery patients, a research programme was launched to pilot the use of PROMs for patients with other conditions, including long-term conditions, mental health, coronary revascularization and cancer care.

The national surgical PROMs programme, also launched in 2009, monitors outcomes of care for patients undergoing hip replacement, knee replacement, hernia repair and varicose veins surgery. Everyone undergoing these procedures (i.e. a census, not a sample) is invited to complete a questionnaire before their operation. Those who complete the pre-surgery questionnaire are then sent a postal questionnaire three to six months after their operation to measure changes in their health status. A single generic standardized PROM (EQ-5D) is included in all questionnaires, together with condition-specific PROMs for three of the four diagnostic groups. Results from this continuous survey are collated, linked with data from the Hospital Episode Statistics, case-mix adjusted, and published at regular intervals by the NHS.

Response rates to individual surveys have been good to date – during 2016–2017, response rates were 86% for the pre-operative hip questionnaires and 76% for the post-operative ones – but the final response rate is subject to some attrition when before-surgery and after-surgery surveys are matched up. Not surprisingly, the results show that most patients experience improvements in their health-related quality of life, especially those undergoing hip or knee surgery, where more than 80% of patients report better health six months after undergoing the procedure. Factors external to the health system, such as socioeconomic and environmental influences, do seem to make a difference to how quickly people are treated and how well they recover (Soljak et al., 2009; Neuburger et al., 2013), but somewhat surprisingly these and other studies (Varagunam, Hutchings & Black, 2014; Varagunam et al., 2014; Black, Varagunam & Hutchings, 2014; Appleby et al., 2013) suggest that there is in fact a great deal of uniformity across England in quality of life outcomes after these surgical procedures. Instead of the expected variations in quality of care, all providers appear to be performing at similar levels of competence, producing similar results.
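The matching and attrition step described above can be illustrated with a small sketch. The patient identifiers and index scores below are invented; the real programme links records through the Hospital Episode Statistics and applies case-mix adjustment before publication.

```python
# Sketch of the matching step: pre- and post-operative questionnaires are
# linked by a patient identifier, and health gain is computed only for
# matched pairs. All identifiers and scores are INVENTED for illustration.
pre_op = {"p01": 0.52, "p02": 0.61, "p03": 0.45, "p04": 0.70}   # pre-surgery index
post_op = {"p01": 0.85, "p02": 0.88, "p04": 0.74}               # p03 lost to follow-up

matched = sorted(set(pre_op) & set(post_op))
attrition = 1 - len(matched) / len(pre_op)
mean_gain = sum(post_op[p] - pre_op[p] for p in matched) / len(matched)

print(f"matched pairs: {len(matched)}, attrition: {attrition:.0%}")
print(f"mean health gain: {mean_gain:.3f}")
```

Even with high response rates to each survey, the matched sample is smaller than either survey alone, which is the attrition referred to above.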

It is important to consider carefully how the outputs will be used before planning PROMs data collection. Decisions about where and in what way the information will be gathered, when and from whom, are likely to vary according to its intended purpose. In theory, PROMs can be used for a variety of purposes, the most common of which is as an outcome measure in clinical trials to evaluate medical interventions and technologies. They can also be used to monitor performance across specialties, organizations, departments or whole systems, and in clinical care to inform diagnosis, treatment and provider choice (Devlin & Appleby, 2010) (Table 7.2).

Table 7.2 Potential uses of PROMs

Level of aggregation | Purpose | Relevance
Health system | System-wide performance assessment | To monitor variations in health outcomes between population sub-groups and provider organizations
Health system | Determining value for money | To determine the extent to which the current pattern of service provision is delivering good value for money
Commissioners/payers | Procurement/contracting | To encourage providers to monitor health outcomes and to incentivize better care
Commissioners/payers | Monitoring quality | To use as a key performance indicator to monitor health outcomes and value for money
Provider organization/specialty | Clinical audit | To better understand patients’ needs and assess how well these are being met by the organization
Provider organization/specialty | Quality improvement | To help plan innovations, monitor progress and incentivize staff
Clinical trials | Trial recruitment | To screen for eligibility for participation in trials and for use as a baseline measure
Clinical trials | Trial outcomes | To measure outcomes in intervention and control groups
Clinical care | Screening and diagnosis | To help make a diagnosis, including co-morbidities and impact on quality of life
Clinical care | Health needs assessment and monitoring | To improve communication, identify needs for self-management support and monitor how the patient is getting on
Clinical care | Choosing providers | To select ‘the best’ provider for an individual patient
Clinical care | Choosing treatments and self-management support | To inform patients to facilitate shared decision-making and personalized care planning

Population surveys

People’s views on the general functioning of a health system and their opinions on the efficiency of its administration are of considerable interest to policy-makers, so a number of countries have invested in regular opinion surveys. These usually target general population samples instead of just health service users. For example, the British Social Attitudes survey regularly includes the following question: ‘All in all, how satisfied or dissatisfied would you say you are with the way in which the National Health Service runs nowadays?’ (Appleby & Robertson, 2016). Such questions enable comparison of attitudes and perceptions between different population groups and, if repeated at regular intervals, can throw light on trends over time.

Estonia has been monitoring public views on its health care system since the late 1990s (Lai et al., 2013). Annual population surveys measure public perceptions of health care quality, access to, and satisfaction with, family doctors, specialists, dentists and hospitals. Since 2003 there has been a steady improvement in perceptions of care quality and in 2012, 79% of the Estonian population expressed satisfaction with the quality of care.

In Poland research on public attitudes to the health care system is regularly carried out by the Public Opinion Research Centre (Centrum Badania Opinii Społecznej – CBOS). In the most recent survey, conducted in July 2014, access to primary care doctors and availability of health information received positive ratings, but two-thirds (68%) of respondents were dissatisfied with waiting times and the overall efficiency of the system (CBOS Public Opinion Research Centre, 2014).

Comparison across countries can also be informative, so national efforts have been complemented by the European Commission’s Eurobarometer surveys, some of which have focused on attitudes to health care across countries (Box 7.7).

Box 7.7 Eurobarometer surveys

The European Commission’s series of Eurobarometer surveys has provided a useful source of comparison of public attitudes to health systems in all 28 Member States. The surveys elicit responses from people aged 15 years and over, resident in each of the countries, and the results are weighted to be representative of the local populations. Interviews are conducted face-to-face in people’s homes in the appropriate local language.

The most recent health survey, which focused on people’s perceptions of the safety and quality of health care in their country, was carried out in 2013 and repeated questions first used in 2009 (European Commission, 2014).

A majority of EU citizens (71%) felt that the overall quality of health care in their country was good, but there were considerable differences between countries. Respondents in western and northern areas tended to give more positive responses than those in the south and east of Europe. People’s main priorities were well-trained staff and effective treatment. Awareness of patient safety issues was fairly high, at over 50%, but this varied between countries from 82% of respondents in Cyprus to 21% of those in Austria.
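The weighting step mentioned in the box can be illustrated with a minimal post-stratification sketch. All age bands, population shares and response rates below are invented for illustration.

```python
# Minimal sketch of weighting survey responses to be representative of the
# population (post-stratification). All shares and response rates are
# INVENTED for illustration.
population_share = {"15-39": 0.40, "40-64": 0.40, "65+": 0.20}
sample_share = {"15-39": 0.30, "40-64": 0.45, "65+": 0.25}

# Each respondent in a group is weighted by population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Weighted estimate of, say, the share rating health care quality as 'good':
good_by_group = {"15-39": 0.80, "40-64": 0.70, "65+": 0.60}
unweighted = sum(sample_share[g] * good_by_group[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * good_by_group[g] for g in sample_share)
print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```

Because the younger (and, in this invented example, more positive) group is under-represented in the sample, the weighted estimate (0.720) differs from the raw one (0.705); weighting corrects for such sampling imbalances.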

Interpretation of the results of opinion surveys must take account of the local context. People’s attitudes are influenced by many factors, including personal experiences and those of family and friends, media reports, commercial and lobby groups, and political affiliations. For example, the question about satisfaction with the NHS tends to produce more positive responses from people with recent experience of health care than from those who have not used it recently, and from supporters of the government in power than from supporters of opposition parties (Appleby & Robertson, 2016). People’s expectations are also likely to influence their experiences. For example, those who believe their local health system is deteriorating may be pleasantly surprised if they receive acceptable care, whereas those who expect a prompt service at all times may be disappointed if they have to wait. For these reasons, such surveys should not be seen as definitive measures of performance.

Population surveys are also used to gather epidemiological data and explore health needs. For example, the Spanish Institute of Statistics organizes a national health survey at periodic intervals to gather data on the population’s state of health and social determinants, broken down by autonomous region (Instituto Nacional de Estadistica, 2013). In Germany the Robert Koch Institute, an agency subordinate to the Federal Ministry of Health and responsible for the control of infectious diseases and health reporting, conducts the German Health Interview and Examination Survey for Adults every two or three years (Robert Koch Institute, 2013). While primarily a face-to-face epidemiological survey, this also includes some questions about patients’ experience and subjective health status. Similarly, the Health Survey for England, a regularly conducted face-to-face household survey, includes questions about people’s needs for health and social care and the support needs of family carers. Health needs surveys may also include questions about health behaviours and attitudes to lifestyle change, for example smoking cessation or dietary modification. These are used to monitor the impact of public health programmes and to identify sub-groups where more precise targeting of health education may be required.

Using the data

Capturing public and patients’ perspectives on health care is becoming increasingly important as systems strive to be more responsive to the needs of those using their services. As outlined above, many European countries now have programmes of work that include national or regional surveys undertaken at regular intervals, either as a component of household-based epidemiological health surveys or as patient experience or outcome surveys focused on health care facilities. Countries where these are in place include Austria, Belgium, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Iceland, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Spain, Sweden and the UK (Fujisawa & Klazinga, 2016).

Publication of PREMs and PROMs is seen as an important mechanism for holding providers to account for the quality of care (‘voice’) and for empowering patients to act as discerning consumers (‘choice’) (Coulter, 2010). Policy-makers hope that giving people (providers, patients and public) access to comparative information on performance will stimulate improvements. There is evidence that some providers do take note of published survey results, leading to improvements (Hafner et al., 2011), but change is often hard to discern at a national level and public disclosure does not seem to have made much impact on the behaviour of patients as yet (Ketelaar et al., 2014). While better performing organizations will often act on the results of patient surveys, those at the lower end of the rankings probably need stronger stimuli to provoke action, for example financial incentives (Raleigh et al., 2012). Patients do make choices about their care, but their choice of provider is often based on informal information from family and friends rather than statistical information (Coulter, 2010).

It is self-defeating, and arguably unethical, to ask patients to take time to report on their health care experiences if the results are not used to make improvements. It is therefore discouraging to note that after more than ten years of gathering patient experience data in England, only a minority of hospital providers had taken effective action leading to demonstrable change (DeCourcy, West & Barron, 2012). Similar disappointing results have been reported in other countries, despite the presence of incentives such as public disclosure, pay-for-performance and encouragement to patients to exercise choice of provider. While many policy-makers are convinced of the usefulness of patient and public survey data, clinicians are often more sceptical. Lack of clinical engagement is a significant barrier to improvement (Asprey et al., 2013; Rozenblum et al., 2013).

Identifying the reasons for the failure to act is not straightforward. Lack of understanding of the issues is unlikely to be a sufficient explanation – a considerable amount is known about which aspects of care matter to patients (Robert & Cornwell, 2011). A recent systematic review identified several barriers to improvement, including lack of time or resources to collect, analyse and act on the data; competing priorities, including workload and financial pressures; survey results that were insufficiently timely or too general to be relevant at the provider level; and a lack of effective leadership (Gleeson et al., 2016).

The good news is that change is possible, as evidenced by many successful local initiatives (Haugum et al., 2014). The challenge is to mainstream this learning across whole health systems. This requires committed leadership, clear goals, active engagement of patients and families, human resources policies that embed quality improvement skills in training and staff development, adequate resourcing and effective institutional support, and effective dissemination of the results (Coulter et al., 2014). There is considerable scope for cross-country learning in this field. The OECD initiative to develop a common set of principles and indicators is of major importance, offering a real opportunity to raise standards and ensure that patients’ views really count in health policy development (Fujisawa & Klazinga, 2016). A similar effort may be needed to share best practice in using the data to improve the quality of patient care.

Future developments

This field is likely to develop in various directions over the next few years, leading to greater diversity of methods for obtaining feedback from patients and the public. Electronic data collection is becoming more prevalent and the days of paper-based surveys must be numbered. Online surveys offering automated data collection, instant analysis and feedback in a well-presented, comprehensible format could be a much more efficient and effective means of generating valuable information.

As the focus of policy interest turns increasingly towards integrated care, methods that focus on single episodes, institutions, services or conditions may become less relevant. The development of survey tools or sets of indicators that adequately reflect people’s experiences across clinical pathways and service boundaries will be challenging, but work is currently under way to develop such measures (Strandberg-Larsen & Krasnik, 2009; Graham et al., 2013). There is great interest in the development of more broad-based indicators that better reflect service users’ goals and outcome preferences (Hunter et al., 2015; Peters et al., 2016). Better measures are needed of concepts such as empowerment, autonomy, care coordination and self-management capabilities.

Public use of web-based ratings and social media to share information about health experiences is growing apace (Rozenblum & Bates, 2013). Historically, use of unstructured comments has been restricted to local quality initiatives, but new methods of data analysis offer the potential to gain generalizable insights from these types of data. There may be scope to collate this information electronically to supplement or even replace traditional surveys (Greaves et al., 2012; Bardach et al., 2013). Techniques such as ‘sentiment analysis’ could be used to produce overviews of patients’ experiences as expressed on social media, for example (Greaves et al., 2013). Patient narratives are often more interesting to staff than statistics, and the availability of collections of video interviews of patients talking about their experiences may prove to be a useful adjunct to statistical reports of survey data. These videos are already being used in quality improvement initiatives such as experience-based co-design (Locock et al., 2014).
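To give a flavour of what ‘sentiment analysis’ involves, the toy sketch below scores free-text comments against small word lists. Real systems use far larger lexicons or trained statistical models; the word lists and example comments here are invented.

```python
# Toy lexicon-based sentiment scoring of free-text patient comments.
# The word lists and comments are INVENTED; production systems use large
# lexicons or trained models rather than this simple word-counting rule.
POSITIVE = {"kind", "caring", "excellent", "clean", "helpful", "quick"}
NEGATIVE = {"rude", "dirty", "delayed", "ignored", "painful", "slow"}

def sentiment(comment):
    """Return a score in [-1, 1]: +1 if all matched words are positive,
    -1 if all are negative, 0 if no sentiment words are found."""
    words = comment.lower().replace(".", "").replace(",", "").split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment("The nurses were kind and helpful."))         # 1.0
print(sentiment("Appointment delayed, then I was ignored."))  # -1.0
```

Aggregating such scores across thousands of online comments is what would allow an overview of patients’ experiences to be produced from social media posts.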

More work is needed to develop efficient means of combining patient narratives and PROMs data into user-friendly decision aids to support shared decision-making (Coulter et al., 2013). These tools can help patients understand treatment choices and participate in decisions about their care, but they are time-consuming to produce, and dissemination and uptake have proved challenging (Stiggelbout et al., 2012).

Use of electronic data collection to incorporate patient feedback directly into clinical record systems is attracting considerable interest at present (Kotronoulas et al., 2014; Etkind et al., 2014). Many experts are convinced that incorporation of PROMs into regular patient care is the way forward, recognizing that a number of challenges will need to be overcome (Gilbert et al., 2015). For example, the use of e-PROMs in routine primary care to monitor the impact of long-term conditions and multi-morbidities on people’s physical and emotional health over time could transform the management of these conditions. Personalized care planning, in which patients and clinicians work together to agree goals and develop proactive action plans, reviewing these at regular intervals, could be facilitated by the use of these instruments, enabling clinicians and patients to better understand symptom fluctuations and identify effective self-management strategies.

The establishment of ‘virtual’ clinics using e-PROMs has also been mooted, enabling remote monitoring to avoid unnecessary post-treatment appointments (Gilbert et al., 2015). Patients could be called up only when their PROMs scores indicate unmet needs for specialist help, potentially leading to more efficient use of resources and a reduction in the ‘treatment burden’ for patients. Trials are under way to evaluate this type of system for use by cancer patients, but it may have the potential for wider application (Absolom et al., 2017).
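The recall logic of such a ‘virtual’ clinic can be sketched very simply: patients submit scores remotely and only those whose score crosses a threshold are invited in. The threshold, identifiers and scores below are invented for illustration.

```python
# Hedged sketch of remote e-PROM monitoring: submissions are screened and
# only patients whose score suggests unmet need are recalled for review.
# The threshold, identifiers and scores are INVENTED for illustration.
RECALL_THRESHOLD = 0.5  # index score below which specialist review is triggered

submissions = {"p01": 0.82, "p02": 0.44, "p03": 0.67, "p04": 0.39}

recall = [p for p, score in sorted(submissions.items()) if score < RECALL_THRESHOLD]
print("invite for review:", recall)
```

In a real deployment the threshold would need clinical validation, and trends over successive submissions would matter as much as any single score.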

Maintaining clarity of purpose will be essential if patients’ perspectives are to be usefully incorporated into efforts to improve equitable and responsive delivery of health care. Governments, health authorities or health care organizations may be primarily concerned to gauge public views on the adequacy of arrangements made for health care delivery, the quality of care processes or the effectiveness of treatments. Each of these goals requires a thoughtful and well-designed approach to measurement. The seven principles for establishing national systems of patient experience measurement proposed by an OECD expert group should be given serious consideration:

  1. Patient measurement should be patient-based, using survey instruments formulated with patient input.

  2. The goal of patient measurement should be clear, whether for external reasons such as provision of information for consumer choice, public accountability or pay-for-performance, or for internal use by providers as part of quality improvement schemes.

  3. Patient measurement tools should undergo cognitive testing by patients and their psychometric properties should be known.

  4. The actual measurement and analyses of patient experiences should be standardized and reproducible.

  5. Reporting methods should be carefully designed and tested.

  6. International comparability should be enhanced with the development of agreed indicator questions.

  7. National systems for the measurement of patient experiences should be sustainable, supported by appropriate infrastructure (OECD, 2010).

We would propose the addition of one further principle – that countries should work together to develop and test methods for ensuring that the results of these types of surveys are taken seriously and incorporated into quality improvement initiatives, leading to real, measurable improvements in the quality of health care. This will require the establishment of appropriate structures and mechanisms to facilitate sharing and learning.

Ultimately, the success or failure of the initiatives described in this chapter will rest on the extent to which the information generated stimulates real improvements in our health systems.

Note

We are most grateful to Niek Klazinga for allowing us to see a pre-publication copy of the recent OECD report on measuring patient experiences (Fujisawa & Klazinga, 2016).

References

Absolom, K, et al.(2017). Electronic patient self-Reporting of Adverse-events: Patient Information and aDvice (eRAPID): a randomised controlled trial in systemic cancer treatment. BMC Cancer, 17:318.CrossRefGoogle ScholarPubMed
Aiken, LH, et al.(2012). Patient safety, satisfaction, and quality of hospital care: cross sectional surveys of nurses and patients in 12 countries in Europe and the United States. BMJ, 344:e1717.Google Scholar
Anhang Price, R, et al.(2015). Should health care providers be accountable for patients’ care experiences? Journal of General Internal Medicine, 30:2536.CrossRefGoogle ScholarPubMed
Appleby, J, Robertson, R (2016). Public satisfaction with the NHS in 2015: Results and trends from the British Social Attitudes survey. Available at: http://www.kingsfund.org.uk/sites/files/kf/BSA-public-satisfaction-NHS-Kings-Fund-2015.pdf (accessed 19 August 2016).Google Scholar
Appleby, J, et al.(2013). Using patient-reported outcome measures to estimate cost-effectiveness of hip replacements in English hospitals. Journal of the Royal Society of Medicine, 106:32331.Google Scholar
Asprey, A, et al.(2013). Challenges to the credibility of patient feedback in primary healthcare settings: a qualitative study. British Journal of General Practice, 63:e2008.CrossRefGoogle Scholar
Bardach, NS, et al.(2013). The relationship between commercial website ratings and traditional hospital performance measures in the USA. BMJ Quality & Safety, 22:194202.Google Scholar
Batbaatar, E, et al.(2015). Conceptualisation of patient satisfaction: a systematic narrative literature review. Perspectives in Public Health, 135:24350.Google Scholar
Beattie, M, et al.(2015). Instruments to measure patient experience of healthcare quality in hospitals: a systematic review. Systematic Reviews, 4:97.Google Scholar
Black, N, Varagunam, M, Hutchings, A (2014). Influence of surgical rate on patients’ reported clinical need and outcomes in English NHS. Journal of Public Health (Oxford), 36:497503.CrossRefGoogle ScholarPubMed
Boulhol, H, et al. (2012). Improving the health-care system in Poland. Available at: http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=ECO/WKP%282012%2934&docLanguage=En (accessed 24 August 2016).Google Scholar
Public Opinion, CBOS Research Centre (2014). Polish Public Opinion: Functioning of the Healthcare System Available at: http://www.cbos.pl/PL/publikacje/public_opinion/2014/07_2014.pdf (accessed 24 August 2016).Google Scholar
Coulter, A (2010). Do patients want a choice and does it work? BMJ, 341:c4989.Google Scholar
Coulter, A (2016). Patient feedback for quality improvement in general practice. BMJ, 352, i913.CrossRefGoogle ScholarPubMed
Coulter, A, Fitzpatrick, R, Cornwell, J (2009). Measures of patients’ experience in hospital: purpose, methods and uses. Available at: https://www.kingsfund.org.uk/sites/default/files/kf/Point-of-Care-Measures-of-patients-experience-in-hospital-Kings-Fund-July-2009_0.pdf (accessed 11 December 2019).Google Scholar
Coulter, A, et al.(2013). A systematic development process for patient decision aids. BMC Medical Informatics and Decision Making, 13 Suppl 2:S2.Google Scholar
Coulter, A, et al.(2014). Collecting data on patient experience is not enough: they must be used to improve care. BMJ, 348:g2225.Google Scholar
Dawson, J, et al.(1996). Questionnaire on the perceptions of patients about total hip replacement. Journal of Bone and Joint Surgery (British), 78:18590.Google Scholar
DeCourcy, A, West, E, Barron, D (2012). The National Adult Inpatient Survey conducted in the English National Health Service from 2002 to 2009: how have the data been used and what do we know as a result? BMC Health Services Research, 12:71.Google Scholar
Department of Health (2015). The NHS Constitution. Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/480482/NHS_Constitution_WEB.pdf (accessed 24 August 2016).Google Scholar
Devlin, NJ, Appleby, J (2010). Getting the most out of PROMs. London: King’s Fund.Google Scholar
Doyle, C, Lennox, L, Bell, D (2013). A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open, 3.Google Scholar
Entwistle, V, et al.(2012). Which experiences of health care delivery matter to service users and why? A critical interpretive synthesis and conceptual map. Journal of Health Services Research & Policy, 17:708.Google Scholar
Etkind, SN, et al.(2014). Capture, Transfer, and Feedback of Patient-Centered Outcomes Data in Palliative Care Populations: Does It Make a Difference? A Systematic Review. Journal of Pain and Symptom Management, 49(3):61124.Google Scholar
Commission, European (2014). Patient Safety and Quality of Care. Available at: http://ec.europa.eu/public_opinion/archives/ebs/ebs_411_sum_en.pdf (accessed 24 August 2016).Google Scholar
Group, EuroQol (1990). EuroQol – a new facility for the measurement of health related quality of life. Health Policy, 16:199208.Google Scholar
Fujisawa, R, Klazinga, N (2016). Measuring patient experiences (PREMS): progress made by the OECD and its member countries between 2006 and 2015. Paris: OECD.Google Scholar
Gerteis, M, et al.(1993). Through the patient’s eyes: understanding and promoting patient-centred care. San Francisco: Jossey Bass.Google Scholar
Gilbert, A, et al.(2015). Use of patient-reported outcomes to measure symptoms and health related quality of life in the clinic. Gynecologic Oncology, 136, 42939.Google Scholar
Gleeson, H, et al.(2016). Systematic review of approaches to using patient experience data for quality improvement in healthcare settings. BMJ Open, 6:e011907.Google Scholar
Graham, C, et al. (2013). Options appraisal on the measurement of people’s experience of integrated care. Available at: http://www.pickereurope.org/wp-content/uploads/2014/10/Options-appraisal-on-integrated-care.pdf (accessed 24 August 2016).Google Scholar
Greaves, F, et al.(2012). Associations between Internet-based patient ratings and conventional surveys of patient experience in the English NHS: an observational study. BMJ Quality & Safety, 21:6005.Google Scholar
Greaves, F, et al.(2013). Harnessing the cloud of patient experience: using social media to detect poor quality healthcare. BMJ Quality & Safety, 22:2515.Google Scholar
Hafner, JM, et al.(2011). The perceived impact of public reporting hospital performance data: interviews with hospital staff. International Journal for Quality in Health Care, 23:697704.CrossRefGoogle ScholarPubMed
Haugum, M, et al.(2014). The use of data from national and other large-scale user experience surveys in local quality work: a systematic review. International Journal for Quality in Health Care, 26:592605.Google Scholar
Hewitson, P, et al.(2014). People with limiting long-term conditions report poorer experiences and more problems with hospital care. BMC Health Services Research, 14:33.Google Scholar
Hunter, C, et al.(2015). Perspectives from health, social care and policy stakeholders on the value of a single self-report outcome measure across long-term conditions: a qualitative study. BMJ Open, 5:e006986.Google Scholar
Markit, IHS (2015). IQTiG established in Germany to evaluate quality in medical care. Available at: https://www.ihs.com/country-industry-forecasting.html?ID=1065997751 (accessed 24 August 2016).Google Scholar
ILO (2013). Health interview survey. Available at: http://www.ilo.org/surveydata/index.php/catalog/923/related_materials (accessed 24 August 2016).Google Scholar
Institute of Medicine (2001). Crossing the quality chasm: a new health system for the 21st century. Washington: National Academy Press.Google Scholar
Instituto Nacional de Estadistica (2013). National Health Survey (ENSE). Available at: http://www.ine.es/dyngs/INEbase/en/operacion.htm?c=Estadistica_C&cid=1254736176783&menu=resultados&idp=1254735573175 (accessed 24 August 2016).Google Scholar
Joyce, CR, et al.(2003). A theory-based method for the evaluation of individual quality of life: the SEIQoL. Quality of Life Research, 12:27580.Google Scholar
Ketelaar, NA, et al.(2014). Exploring consumer values of comparative performance information for hospital choice. Quality in Primary Care, 22:819.Google Scholar
Kotronoulas, G, et al.(2014). What is the value of the routine use of patient-reported outcome measures toward improvement of patient outcomes, processes of care, and health service outcomes in cancer care? A systematic review of controlled trials. Journal of Clinical Oncology, 32:1480501.Google Scholar
Kroenke, K, Spitzer, RL, Williams, JB (2001). The PHQ-9: validity of a brief depression severity measure. Journal of General Internal Medicine, 16:60613.Google Scholar
Lai, T, et al. (2013). Estonia: health system review. Health Systems in Transition [Online], 6. Available at: https://www.sm.ee/sites/default/files/content-editors/Ministeerium_kontaktid/Uuringu_ja_analuusid/Tervisevaldkond/hit-estonia.pdf (accessed 24 August 2016).Google Scholar
Locock, L, et al.(2014). Testing accelerated experience-based co-design: a qualitative study of using a national archive of patient experience narrative interviews to promote rapid patient-centred service improvement. Health Services and Delivery Research, 2.CrossRefGoogle Scholar
Manary, MP, et al.(2013). The patient experience and health outcomes. New England Journal of Medicine, 368:2013.Google Scholar
Murray, CJ, Kawabata, K, Valentine, N (2001). People’s experience versus people’s expectations. Health Affairs (Millwood), 20:214.CrossRefGoogle ScholarPubMed
National Institute for Health and Care Excellence (NICE) (2012). Patient experience in adult NHS services. Available at: https://www.nice.org.uk/guidance/QS15/chapter/quality-statement-1-respect-for-the-patient#quality-statement-1-respect-for-the-patient (accessed 24 August 2016).
Neuburger, J, et al. (2013). Socioeconomic differences in patient-reported outcomes after a hip or knee replacement in the English National Health Service. Journal of Public Health (Oxford), 35:115–24.
OECD (2010). Improving value in health care: measuring quality. OECD Health Policy Studies.
Paddison, CA, et al. (2015). Why do patients with multimorbidity in England report worse experiences in primary care? Evidence from the General Practice Patient Survey. BMJ Open, 5:e006172.
Paparella, G (2016). Person-centred care in Europe: a cross-country comparison of health system performance, strategies and structures. Available at: http://www.pickereurope.org/wp-content/uploads/2016/02/12–02-16-Policy-briefing-on-patient-centred-care-in-Europe.pdf (accessed 24 August 2016).
Paterson, C (1996). Measuring outcomes in primary care: a patient generated measure, MYMOP, compared with the SF-36 health survey. BMJ, 312:1016–20.
Peters, M, et al. (2016). The Long-Term Conditions Questionnaire: conceptual framework and item development. Patient Related Outcome Measures, 7:109–25.
Price, RA, et al. (2014). Examining the role of patient experience surveys in measuring health care quality. Medical Care Research and Review, 71:522–54.
Raleigh, V, et al. (2012). Do some trusts deliver a consistently better experience for patients? An analysis of patient experience across acute care surveys in English NHS trusts. BMJ Quality & Safety, 21:381–90.
Raleigh, V, et al. (2015). Impact of case-mix on comparisons of patient-reported experience in NHS acute hospital trusts in England. Journal of Health Services Research & Policy, 20:92–9.
Rechel, B, et al. (2016). Public reporting on quality, waiting times and patient experience in 11 high-income countries. Health Policy, 120:377–83.
Robert, G, Cornwell, J (2011). What matters to patients? Developing the evidence base for measuring and improving patient experience. Warwick: NHS Institute for Innovation and Improvement.
Robert Koch Institute (2013). Positive signals and warning signs – 34 articles with results of the “German Health Interview and Examination Survey for Adults” published (press release). Available at: http://www.rki.de/EN/Content/Service/Press/PressReleases/2013/05_2013.html (accessed 24 August 2016).
Rozenblum, R, Bates, DW (2013). Patient-centred healthcare, social media and the internet: the perfect storm? BMJ Quality & Safety, 22:183–6.
Rozenblum, R, et al. (2013). The patient satisfaction chasm: the gap between hospital management and frontline clinicians. BMJ Quality & Safety, 22:242–50.
Sizmur, S, Graham, C, Walsh, J (2015). Influence of patients’ age and sex and the mode of administration on results from the NHS Friends and Family Test of patient experience. Journal of Health Services Research & Policy, 20:5–10.
Soljak, M, et al. (2009). Is there an association between deprivation and pre-operative disease severity? A cross-sectional study of patient-reported health status. International Journal for Quality in Health Care, 21:311–15.
Stiggelbout, AM, et al. (2012). Shared decision making: really putting patients at the centre of healthcare. BMJ, 344:e256.
Strandberg-Larsen, M, Krasnik, A (2009). Measurement of integrated healthcare delivery: a systematic review of methods and future research directions. International Journal of Integrated Care, 9:e01.
Varagunam, M, Hutchings, A, Black, N (2014). Do patient-reported outcomes offer a more sensitive method for comparing the outcomes of consultants than mortality? A multilevel analysis of routine data. BMJ Quality & Safety, 24(3):195–202.
Varagunam, M, et al. (2014). Impact on hospital performance of introducing routine patient reported outcome measures in surgery. Journal of Health Services Research & Policy, 19:77–84.
Ware, JE, Sherbourne, CD (1992). The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Medical Care, 30:473–83.
Zaslavsky, AM, Zaborski, LB, Cleary, PD (2002). Factors affecting response rates to the Consumer Assessment of Health Plans Study survey. Medical Care, 40:485–99.
Ziebland, S, et al. (eds.) (2013). Understanding and Using Health Experiences. Oxford: Oxford University Press.
Table 7.1 Types of PROM

Table 7.2 Potential uses of PROMs
