
CJEM Debate Series: #PhysicianProductivity – Measuring and understanding causes of variability in emergency physician performance are essential to improve emergency department efficiency

Published online by Cambridge University Press:  28 November 2018

W. Richard Bukata
Affiliation:
Department of Emergency Medicine, Keck School of Medicine, USC Medical Center, University of Southern California, Los Angeles, CA
Heather Murray*
Affiliation:
Department of Emergency Medicine, Queen's University, Kingston, ON; Department of Public Health Sciences, Queen's University, Kingston, ON
Paul Atkinson
Affiliation:
Department of Emergency Medicine, Dalhousie University (Halifax, NS), Saint John Regional Hospital, Saint John, NB.
Correspondence to: Dr. Heather Murray, Department of Emergency Medicine, Queen's University, Kingston, ON K7L 2V7; Email: Heather.Murray@kingstonhsc.ca

Type: Commentary
Copyright: © Canadian Association of Emergency Physicians 2018

INTRODUCTION

Paul Atkinson (@eccucourse)

This series of editorials will provide CJEM readers with an opportunity to hear differing perspectives on topics pertinent to the practice of emergency medicine. The debaters have been allocated opposing arguments on topics where there is some controversy or perhaps scientific equipoise.

We continue with the topic of physician performance measurement, a source of debate and discussion among health administrators and clinicians. Should the work environment be structured to monitor and reward highly productive physicians? Or should concerns about quality outweigh metrics based on throughput? Is it the role of the emergency physician to provide the greatest good possible for the most patients, or does every patient deserve all of the resources required for their condition? Emergency departments (EDs) operate as "businesses," even inside a public system, in that they provide services in return for money; is the measurement of productivity therefore essential, as it would be in any other business? What possible excuse would there be to say that the ED is any different? Or is measuring throughput a disincentive for all of the things that make us better than robots: personalized care, a human connection, and giving patients the sense that they have been heard? Further, although productivity in the ED is affected by many factors, is optimizing the behaviour of individual physicians key to optimizing the system? Or do the behaviours of individuals simply follow incentives, which may end up skewing practice in unforeseen ways? Finally, does measuring and highlighting variability lead to change anyway? What is a physician who finds out that he or she orders more scans than others supposed to do with that information?

Dr. W. Richard Bukata, Former ED Director, Founder of Emergency Medical Abstracts, Medical Director of the Center for Medical Education, and Adjunct Clinical Professor of Emergency Medicine at the Keck School of Medicine at the University of Southern California, proposes that the measurement of physician productivity and clinical performance is an essential part of ensuring efficiency in the ED. Dr. Heather Murray, Associate Professor, Departments of Emergency Medicine and Public Health Sciences, Queen's University, responds that measuring efficiency gives little, if any, information about quality of care and that well-trained physicians will work efficiently if provided with an appropriately supportive environment.

[Readers can follow the debate on Twitter and vote for either perspective by going to @CJEMonline or by searching #CJEMdebate.]

For: W. Richard Bukata (@ccmecourses)

The case for productivity and clinical care measurement

What physician would ever support the idea that he or she should be subjected to the humiliation of being measured and compared? After all, physicians are professionals and not widget makers in some factory. Oh no. It’s almost embarrassing – the mere thought that somebody would be looking at my performance.

If time and money were not issues in ED care (picture a scenario in which clinicians had all the time in the world to sit down and treat patients), there would still be a need to measure and compare physician performance. Why? Well, what if physician A routinely uses the HEART score to assess clinical risk and disposition in chest pain patients, while physician B has no idea what the HEART score is? What if physician A routinely uses the pulmonary embolism rule-out criteria (PERC) rule to assess patients with suspected pulmonary embolism, while physician B has no idea what the PERC rule is? (A sketch of the PERC logic follows the list below.) And what if physician A routinely does not measure electrolytes in mildly dehydrated children (the American Academy of Pediatrics advises against it), while physician B does? Measuring physician performance, then, involves a lot more than throughput and relative value units (the unit of billing in the U.S. system). Independent of any funding differences between the United States and Canadian systems, there are compelling reasons to measure physician performance, as follows:

  • Substantial variations in throughput skills exist.

    1. You are scheduled for the evening shift tonight, and you check who the other physician working with you will be. Your jaw drops – it's Frank, the slowest physician in the group, who never met a test he didn't love (the nurses call him "thorough"). You know you are in for a rough night, given that you work in a busy ED. Does Frank know he is the slowest physician in the group? Does Frank know that he orders far too many tests compared with his peers? And even if these issues have been hinted at, just how far off the bell-shaped curve is Frank? If you don't measure performance (throughput and elements of clinical practice), how can these issues be thoughtfully addressed? As management guru Peter Drucker put it, "You can't manage what you can't measure" (or, in our case, won't measure).

  • Inappropriate testing needlessly raises patient charges (and system costs).

    1. At least in the United States, the more tests you order, the greater the patient's bill (and I'm certainly not defending the practice). In Canada, where there is no direct charge, over-testing may not be felt as a cost by the patient, but it ultimately adds to what the province pays for health care. The more money spent needlessly on patients, the less money there is to go around.

  • Patients can be harmed when physician behaviour is not evidence-based.

    1. So a young child with a bump on her head gets a computed tomography (CT) scan, even though, according to the Pediatric Emergency Care Applied Research Network (PECARN) guidelines, she doesn't need one. There is the needless radiation with its associated risks, the needless cost, and the needless waste of time. Why? Because the physician either was not aware of the PECARN guidelines or chose not to follow them. Just how frequently do children get head CTs they don't need? In a study of 42,412 children treated at 25 pediatric EDs in the PECARN network, the head CT rate at one ED was 19%, whereas, at the other extreme, another performed CTs in 69% of blunt head trauma cases.[1]

    2. Here's another egregious example: 49 emergency physicians at a Level 1 trauma centre were compared regarding CT usage for 44,724 patients.[2] The rate of CT scanning varied almost threefold between the lowest and highest users (11.5% v. 32.7%). The variation by specific complaint was even more remarkable: abdominal pain (12% v. 52%); chest pain (4% v. 32%); shortness of breath (4% v. 29%); and headache (17% v. 76%). Even the study's authors concluded that the variation was "dramatic" and that "large deviation from the mean by a group of providers may suggest inappropriate use." A very charitable conclusion, at best.

    3. One of the most important decisions an emergency physician makes is who needs to be hospitalized and who doesn't. A study from the University of Utah reported admission rates for 2,069 pneumonia patients.[3] The admission rate for one faculty member was 38% and for another it was 79%, with the average being 58%. And, as anticipated, the variation in admission rates was unrelated to severity or mortality.
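Returning to the PERC rule mentioned earlier: for readers who do not use it, the rule is an eight-item checklist, and a low-risk patient who is negative on all eight items needs no further testing for pulmonary embolism. The sketch below is a minimal, illustrative rendering of that logic; the function and parameter names are hypothetical and not drawn from any clinical system.

```python
# Illustrative sketch of the PERC (pulmonary embolism rule-out criteria) rule.
# In a low-risk patient, a negative answer on all eight items is considered
# to rule out pulmonary embolism without further testing.
# Function and parameter names are hypothetical.

def perc_negative(age: int,
                  heart_rate: int,
                  room_air_o2_sat: float,
                  hemoptysis: bool,
                  exogenous_estrogen: bool,
                  prior_dvt_or_pe: bool,
                  unilateral_leg_swelling: bool,
                  recent_surgery_or_trauma: bool) -> bool:
    """Return True when all eight PERC criteria are negative."""
    return (age < 50
            and heart_rate < 100
            and room_air_o2_sat >= 95           # oxygen saturation on room air, %
            and not hemoptysis
            and not exogenous_estrogen          # e.g., oral contraceptive use
            and not prior_dvt_or_pe
            and not unilateral_leg_swelling
            and not recent_surgery_or_trauma)   # within the prior 4 weeks
```

The point is not the code but that the rule is explicit and checkable: whether a physician applies it is exactly the kind of practice element that can be measured and compared.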

Articles that focus on variation in clinical care are extraordinarily plentiful. Since 2012, there have been 170 citations in the Emergency Medical Abstracts database with the term "variation" in the title alone.[4] There are articles on variation in the prescribing of antibiotics and opioids, in the costs of care, in the differences in care between men and women, in guidelines between organizations and countries, and in workups for all manner of complaints – you name it. And do these articles say there is no or little variation of concern? Of course not. The magnitude of variation is often striking – just as in the previous examples.

Now to tackle the tough issue – variation in throughput. Sure, over-ordering of tests and inappropriate admissions are clearly factors in throughput, but there are other provider-specific elements of variation that relate to throughput as well. Let's face it, some of our colleagues have one speed – the waiting room is full, but there is no second gear. Others practise very inefficiently; orders for a single patient placed in serial aliquots – a few now, a few later – can drive the nurses crazy. There is a whole litany of skills that separate the super-efficient physicians from the slow-pokes, and many can be taught if sufficient motivation is provided through the use of data.

However, no one is going to claim that there are not substantial external factors affecting throughput. Yet all of the physicians in the group have to deal with them, to varying degrees: inadequate staffing of nurses, clerks, and techs; an electronic medical record system that is a millstone around physicians' necks; a computerized physician order entry system that is primitive, with lots of unnecessary steps; and an administration that won't work to move admitted patients out of the ED – all major impediments to departmental throughput.

These external factors do not invalidate the idea of measuring physician performance – both clinical performance and throughput. Certainly, the data need to be as clean and unassailable as possible (the first thing physicians will do is attack the data). Consideration has to be given to the differences between night shifts and single-coverage shifts, to which shifts also have advanced practice clinicians and scribes, and to any other variables that may influence the data; however, none of this precludes the value of obtaining data and using them effectively to manage ED performance.

Having been the director of a community hospital ED in Los Angeles for 25 years, I have had the opportunity to experience first-hand the value of data in assessing the performance of our ED. From 1985 onward, we had a system that told us everything we could possibly want to know about each patient seen – throughput, tests, drugs, supplies, procedures, you name it.

Sure, there were lots of frustrations that impeded us from reaching the service goals we wanted, and, sure, many were largely outside of our control; but without the data, we would not have been nearly as successful as we were. Like clinical studies, data on ED and clinician performance will always have limitations, but those limitations shouldn't invalidate the entire endeavour. Of course, some (most) emergency physicians will not like having their work scrutinized and will come up with all manner of excuses to undermine the initiative (pointing fingers at administration is a reflex action). However, when they learn that the data can be used 1) to make the case to administration for improved staffing, 2) to create increases in their hourly pay, 3) to compensate physicians equitably, 4) to decrease bottlenecks, and 5) as a carrot rather than a stick to improve throughput and clinical care, the case for the value of physician performance measurement becomes hard to resist.

Against: Heather Murray (@HeatherM211)

“The beatings will continue until morale improves.”

I worked a crazy-busy shift recently. How did I manage, other than ignoring my own health? (I see you, unused bathroom and uneaten dinner). I cut corners. I batched charts, seeing people in groups and quickly ordering tests I thought they needed after a truncated interaction and cursory exam. I made decisions more rapidly than usual. At the end of the day, I had achieved a personal best “throughput” – an administrator’s dream. But not everyone had a great experience. One of my patients had sustained a hand injury playing sports. Glancing briefly at the bruised, swollen hand, I arranged for an X-ray. Reviewing the X-ray, I correctly diagnosed and casted a metacarpal fracture. All was well until the nurse asked me how the reduction had gone. A second, more careful look at the X-ray showed me a missed second injury, completely obvious once I removed the cast and looked again. This near miss was the cost of my increased efficiency.

Some definitions: Productivity can be defined as the average "output" divided by the "resources consumed" by the process. In emergency medicine, this is simplistically measured as the number of patients per hour (PPH) seen by each physician, also known as throughput. Efficiency compares what is achieved with a given collection of resources against what could be achieved using those same resources. Here, the most common approach is to compare the PPH of different emergency physicians in order to find the most efficient provider. Using this framework, faster patient care = more efficient = better.
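As a worked illustration of these definitions (the numbers here are purely hypothetical), a physician who sees 18 patients over an 8-hour shift has a throughput of

\[ \text{PPH} = \frac{\text{patients seen}}{\text{hours on shift}} = \frac{18}{8} = 2.25 \]

and, under the framework above, a colleague who sees 12 patients in the same 8 hours (PPH = 1.5) using the same nursing, bed, and imaging resources is simply labelled "less efficient," regardless of the quality of the care either of us delivered.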

Electronic records and ED patient tracking systems allow us to enthusiastically measure whatever can be measured. To this end, I receive quarterly printouts of my "performance," assigning me a rank relative to my colleagues. The performance targets include average patients seen per hour, average patients seen per shift, the average length of stay of those patients, how many CT scans and venous blood gases I order, how many of my patients are consulted to a specialist service, and what proportion of those consulted patients is admitted. These are common productivity metrics in EDs across the country, given out to physicians as "scorecards." The idea that measuring and comparing emergency physicians with one another will improve efficiency is fundamentally flawed on many levels, and it is the wrong way to approach the essential task of improving ED care.

Throughput performance measurements are not a valid marker of quality of care. These scorecards do not help me provide better care. They create a vague sense of either superiority (if I happen to fall close to the top of a category) or inadequacy (if I fall near the bottom). There are no targeted actions that might help me improve my practice. If I'm ordering more CT scans than my colleagues, is that too many? Which ones are unnecessary? Are my shifts exactly the same as my colleagues', with the same mix of patients? Even if they are, should we all have a standardized pattern of practice and patient interaction? None of these questions is addressed by analysing the ranked performance of the ED physicians in my group. Additionally, some interventions that improve care paradoxically slow down ED throughput metrics. My average length of stay for mental health patients got longer when we initiated a structured social work support service; unsurprisingly, the social workers spent more time talking to patients and their families than the emergency physicians did. This intervention improved care for our mental health patients, but by the throughput yardstick it would be deemed a failure.

Throughput performance measurements have no gold standard. They do not measure quality, only speed. No one would choose to fly on an airline whose planes always leave and land on time, but lose luggage or crash during the process. We all know emergency physicians who can see more patients than anyone else and yet seem to find the sick ones. But we also know the fast doctors whose patients regularly return a few days later, bouncing back with an undiagnosed illness or missed problem. And what about the slow doctors? During my residency, there was one attending I worked with who was known for being slower (patients refer to this trait less disparagingly as “being thorough”). Interestingly, he was the doctor whom all of the nurses requested when they were sick – the one whose opinion they trusted the most. Was he in need of an efficiency intervention? We are missing the point by comparing emergency physicians with each other. Our approaches are not the same and our patients and their individual needs are certainly not the same. We should not be afraid of individual variability. If we want standardized throughput rates, we should staff our EDs with artificial intelligence rather than with human beings.

Throughput performance measurements do not increase physician job satisfaction or patient satisfaction. No one ever said during their medical school interview that they wanted to see as many patients as possible, as fast as possible. Most of us enjoy seeing a little slice of humanity in our daily encounters. Finding out that someone is a World War 2 veteran, has an alpaca farm, or collects rare stamps comes from small non-medical conversations and human connection. These are the first things to go when we start cracking the productivity whip and time pressure reigns supreme. One recent study showed that doctors interrupt patients on average after about 11 seconds – eleven seconds![5] This is how time-pressured humans behave. No one wants to see the unhappy doctor running through the day with no time for anyone or anything, no matter how efficient he or she might be. Forcing people to do their jobs faster leads to unsatisfied patients and physician burnout, not to better performance or outcomes.

Here is a radical idea: Let's assume that our highly trained and motivated emergency physician workforce is eager to provide excellent, timely care, and ask them how that can be achieved. There have been many examples where focusing on improving working conditions has increased productivity, even though increasing productivity was not the primary goal. Explicitly prioritizing the health, working environment, and safety of employees as an institutional target at the aluminium company Alcoa paradoxically resulted in dramatic improvements in productivity and company net worth.[6] If we want to make our emergency physicians more efficient, we should start by assuming that everyone is doing the best job they can and work on helping them do it better. Ask your physicians what would help their day-to-day lives, and implement the things they suggest: fix the electronic record, hire scribes, or employ physician assistants. Clear out the boarded patients. Support the team in identifying and intervening to prevent medical errors. Ensure that they are protected from dangerous patients and/or families. Invest in their continuing education. Develop a mentoring and/or physician coaching program. Any one of these interventions will have a far greater effect on productivity than measuring and analysing individual physician performance against that of colleagues.

So, to ED administrators everywhere: Stop comparing us, and start supporting us. Maybe then we can talk about efficiency. Then again, maybe we won’t have to.

Competing interests: None declared.

REFERENCES

1. Stanley RM, Hoyle JD Jr, Dayan PS, et al. Emergency department practice variation in computed tomography use for children with minor blunt head trauma. J Pediatr 2014;165(6):1201-1206.
2. Levine MB, Moore AB, Franck C, et al. Variation in use of all types of computed tomography by emergency physicians. Am J Emerg Med 2013;31(10):1437-1442. doi:10.1016/j.ajem.2013.07.003.
3. Dean NC, Jones JP, Aronsky D, et al. Hospital admission decision for patients with community-acquired pneumonia: variability among physicians in an emergency department. Ann Emerg Med 2012;59(1):35-41.
4. Emergency Medicine: Reviews and Perspectives. Emergency medical abstracts/EM:RAP; 2018. Available at: https://www.emrap.org/ema (accessed 18 September 2018).
5. Ospina NS, Phillips KA, Rodriguez-Gutierrez R, et al. Eliciting the patient's agenda: secondary analysis of recorded clinical encounters. J Gen Intern Med 2018; epub ahead of print. doi:10.1007/s11606-018-4540-5.
6. Duhigg C. How "keystone habits" transformed a corporation; 2012. Available at: https://www.huffingtonpost.com/charles-duhigg/the-power-of-habit_b_1304550.html (accessed 18 September 2018).