Background: Evaluation of physician practice is necessary, both to provide feedback for self-improvement and to guide department heads during annual performance reviews.
Objective: To develop and implement a peer-based performance evaluation tool and to measure its reliability and physician satisfaction with the process.
Methods: Each emergency physician in an urban emergency department evaluated their peers by completing a survey of 21 questions on effectiveness in 4 categories: clinical practice, interaction with coworkers and the public, nonclinical departmental responsibilities, and academic activities. A sample of emergency nurses evaluated each emergency physician on a subset of 5 of these questions. Factor analysis was used to assess whether the questions clustered into the hypothesized categories, and intra-class correlation coefficients were calculated to determine inter-rater reliability. After receiving their peer evaluations, each physician rated the usefulness of the process to the individual and to the department.
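To illustrate the inter-rater reliability analysis, the minimal sketch below computes intra-class correlation coefficients from long-format rating data. This is a reconstruction under stated assumptions, not the authors' analysis code: the column names, example scores, and the choice of the pingouin library are all hypothetical.

    # Minimal sketch of an inter-rater reliability computation like the one
    # described above. Column names and scores are hypothetical; the study's
    # actual analysis code is not published. Requires pandas and pingouin.
    import pandas as pd
    import pingouin as pg

    # Long-format ratings: one row per (rated physician, rater, score).
    ratings = pd.DataFrame({
        "physician": ["A", "A", "B", "B", "C", "C"],
        "rater":     ["r1", "r2", "r1", "r2", "r1", "r2"],
        "score":     [4.2, 3.8, 3.1, 3.5, 4.8, 4.6],
    })

    # pingouin reports the standard ICC forms; ICC2 (two-way random effects,
    # single rater) is a common choice when every physician is rated by the
    # same set of peer evaluators.
    icc = pg.intraclass_corr(data=ratings, targets="physician",
                             raters="rater", ratings="score")
    print(icc[["Type", "ICC", "CI95%"]])

A low single-rater ICC in such output, as in the results below, would indicate that any one evaluator's ratings diverge substantially from the group consensus.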
Results: 225 surveys were completed on 16 physicians. Factor analysis did not separate the nonclinical and academic categories; the survey questions therefore fell into 3 domains rather than the 4 hypothesized. The overall intra-class correlation coefficient for emergency physicians was 0.43, indicating moderate, but far from perfect, agreement. This suggests that individual physician evaluators vary substantially and that multiple reviewers are probably required to provide a balanced physician evaluation. The intra-class correlation coefficient for emergency nurses was 0.11, suggesting poor reliability. Overall, 11 of 15 physicians reported the process valuable or mostly valuable, 3 of 15 were unsure, and 1 of 15 reported that the process was definitely not valuable.
Conclusion: A useful physician peer evaluation tool can be developed, but evaluation by a single individual is probably unreliable. Most physicians view a personalized, broad-based, confidential peer review as valuable.