
Daily encounter cards facilitate competency-based feedback while leniency bias persists

Published online by Cambridge University Press:  21 May 2015

Glen Bandiera*
Affiliation:
Department of Medicine and the Wilson Centre for Research in Education, University of Toronto, Toronto, Ont. Department of Emergency Medicine, St. Michael's Hospital, Toronto, Ont.
David Lendrum
Affiliation:
University of Toronto FRCP(EM) Residency Program, Toronto, Ont.
Correspondence to: St. Michael's Hospital, 1-056 Bond Wing, 30 Bond St., Toronto ON M5B 1W8; bandierag@smh.toronto.on.ca

Abstract

Objective:

We sought to determine if a novel competency-based daily encounter card (DEC) that was designed to minimize leniency bias and maximize independent competency assessments could address the limitations of existing feedback mechanisms when applied to an emergency medicine rotation.

Methods:

Learners in 2 tertiary academic emergency departments (EDs) presented a DEC to their teachers after each shift. DECs included dichotomous categorical rating scales (i.e., “needs attention” or “area of strength”) for each of the 7 CanMEDS roles or competencies and an overall global rating scale. Teachers were instructed to choose which of the 7 competencies they wished to evaluate on each shift. Results were analyzed using both staff members and residents as the units of analysis.

Results:

Fifty-four learners submitted a total of 801 DECs, which were completed by 43 different teachers over 28 months. Teachers' patterns of selecting CanMEDS competencies to assess did not differ between the 2 sites. Teachers selected an average of 3 roles per DEC (range 0–7). Only 1.3% of ratings were “needs attention.” The frequency with which each competency was selected ranged from 25% (Health Advocate) to 85% (Medical Expert).

Conclusion:

Teachers chose to direct feedback toward a breadth of competencies. They provided feedback on all 7 CanMEDS roles in the ED, yet demonstrated a marked leniency bias.

Résumé

Objective:

We sought to determine whether a novel competency-based daily encounter card (DEC), designed to minimize leniency bias and maximize independent competency assessment, could address the limitations of existing feedback mechanisms when applied to an emergency medicine rotation.

Methods:

Learners in two tertiary academic emergency departments presented a DEC to their teachers after each shift. The cards included dichotomous categorical rating scales (i.e., “needs attention” or “area of strength”) for each of the seven CanMEDS competencies, together with an overall global rating scale. Teachers were asked to choose which of the seven competencies they wished to evaluate on each shift. Results were analyzed using both teachers and residents as the units of analysis.

Results:

Over a period of 28 months, 54 learners submitted 801 DECs, which were completed by 43 teachers. Teachers' patterns of selecting CanMEDS competencies to assess were similar at the two sites. Teachers chose an average of three roles per DEC (range 0–7). Only 1.3% of ratings were “needs attention.” The frequency with which each competency was selected ranged from 25% (Health Advocate) to 85% (Medical Expert).

Conclusion:

Teachers chose to direct feedback toward a variety of competencies. They provided feedback on all seven CanMEDS competencies in the emergency department, yet showed a marked leniency bias in their ratings.

Type
Education
Copyright
Copyright © Canadian Association of Emergency Physicians 2008
