Although this proposal raises an intriguing question about the present utility of external reviews in promotion and tenure decisions, its conjectures regarding the cause—and therefore the appropriate solutions—seem problematic. Kurt Weyland assumes a universal institutional context while reflecting the perspective of only elite universities, and he presumes that less-stringent evaluations have resulted in uniformly positive assessments of candidate portfolios. The claims made in “Promotion Letters: Current Problems and a Reform Proposal” are empirical—specifically, that external reviews hold less value in decision making because they are now of lower quality and almost uniformly positive. However, the only evidence provided for this claim is discussions with colleagues, personal observations, and references to that “mystical past” when universities were uniquely about quality and the life of the mind.
I am concerned about these references to a time when higher education was so much better because (1) this critique of deterioration and frivolousness is made about every new generation by every aging one; and (2) people like me (based, in my case, on gender and class) typically were not included in higher education. I do not accuse Weyland of this rationale; I simply note that the existence of this more robust, romanticized past as compared to our more contested and messy current reality can rarely be documented. Instead, I suggest that there may be reasons beyond a decline in quality why external reviews tend to skew positive. One change I have observed in more than 25 years as a full-time academic and a department chair at three different types of institutions (i.e., private Midwestern, public Southern historically Black, and Mid-South regional comprehensive) is that we do a better job anticipating who will not proceed successfully to tenure. Most institutions now expect a more rigorous third-year review, which gives faculty members who may not be successful at tenure and promotion the time to migrate to institutions that better fit their academic priorities. At my current institution, the promotion and tenure committee and the department head both provide annual feedback to all tenure-track faculty members. Universities have become more precise at measuring and stating tenure and promotion expectations, and the committees have more precise guidelines as well as training about what they can and cannot consider in their decision making. We also allow for a wider range of institutional types in higher education and accept a broader definition of a successful and productive academic; this means that the template of a promotable or tenurable faculty member allows for more variance. These factors could result in greater self-selection or midcourse corrections prior to tenure decisions, or they could mean that different types of academics (those who wish to focus on teaching over research, for instance) can now be tenured.
Another factor that has influenced this landscape is that a wider variety of universities now requires external reviewers as part of the tenure and promotion decision. As more institutions demand these reviews, and because recent waves of retirements have thinned the ranks of full professors who can meet this need, the pool of faculty capable of providing detailed, thorough reviews may have become shallower. The proposal to pay more for the external review of a faculty member’s scholarship than we usually pay for an external program review may only change the nature of the problem, if one exists, rather than resolve it.
I am not sure why the inability to secure reviews of a faculty member’s scholarly record does not itself serve as a form of peer review. If a faculty member comes up for tenure and the department cannot find an adequate number of reviewers willing to evaluate their colleague’s research output, then the professor’s network and the significance of their contribution may already have been evaluated.
I have one other concern regarding the presentation of this proposal. Kurt Weyland assumes a perspective on academia in which “top” universities house “lead scholars with higher academic standards” and all the remaining academics are merely an “unimpressive list of evaluators.” What a narrow and depressing way to view the diverse realm of higher education! Different institutions have diverging missions, and excellent—as well as mediocre—scholars can be found in all types of programs. In seeking reviews for my tenure and promotion candidates, I look for scholars who are familiar with the research questions on which my faculty publish and who know that literature well. The specific institution where the scholars are housed is less significant than their CV. Because this discussion relies so heavily on personal experience, I find it intriguing that many external reviewers—especially those from more elite institutions—want to determine whether my candidate for promotion could receive tenure at their institution—an unasked-for and, frankly, irrelevant conclusion. We want to know the impact and potential of the candidate’s scholarship, and we will decide if that evaluation meets our standards and expectations. As Weyland notes, these standards can hardly be universal.
The proposal for payment that he devises also raises concerns. For a department (like mine) seeking three external reviews for each candidate, the cost is $6,000. If three of my colleagues go up for tenure and promotion in 2021, I will face an $18,000 hit to my departmental budget. If this recommendation is only for well-endowed institutions, Weyland should be clear about that instead of assuming a universal scenario. More to the point, in the current system he describes, the strongest (or best-connected) candidates are able to garner reviews regardless of their institution. In his “pay-to-play” proposal, merit plays no role; only candidates at the best-endowed institutions are reviewed. To me, this is an even less reliable system for the discipline than the one we currently embrace. If we collectively agree that we have a problem with the external-review process, then before we endorse a specific solution, we should better understand the problem. An empirical question deserves to be measured and defined more rigorously than through conversations and reminiscences with friends who most likely work in similar environments. The discipline is broader than the relatively few more-elite institutions, and the question of how best to determine the next generation of tenured political scientists is worthy of a discipline-wide answer.