
Powerless Conservatives or Powerless Findings?

Published online by Cambridge University Press: 25 June 2020

Stephen M. Utych*
Affiliation:
Boise State University

Abstract

Noting the lack of “anti-man” bias research in the 2016 election, Zigerell (2019) argued that a relative lack of conservatives in political science can lead to publication bias against research supporting conservative viewpoints. This article offers an alternative explanation for this lack of research: such research tends to produce null findings and therefore is subject to the “file-drawer problem,” in which null effects are less likely to be published than statistically significant effects. Using data from the 2016 American National Election Studies, I provide an illustrative example to support this claim and suggest some solutions.

Type
Article
Copyright
© The Author(s), 2020. Published by Cambridge University Press on behalf of the American Political Science Association

In his recent PS article, Zigerell (2019) argued that the political ideology and partisanship of (generally liberal) academics can lead to publication bias, especially in research related to gender. He argued that research on bias toward men is not examined or is not published because of the liberal ideology of journal reviewers in political science—and, likely, the social sciences generally (Zigerell 2019). This article considers an alternative explanation for this perceived bias in publication: the “file-drawer problem,” in which results that reject a null hypothesis are more likely to be published than results that do not (Rosenthal 1979). Indeed, the publication of null results is decreasing over time, suggesting that this problem may be even more important today (Fanelli 2012).

Zigerell (2019) argued that although there is substantial work demonstrating how bias against women influenced the 2016 US presidential election between Hillary Clinton and Donald Trump, there is no work that accounts for an alternative explanation (i.e., that bias against men also influenced the election). Using data from the 2016 American National Election Studies (ANES), I find support for the “file-drawer problem” explanation: after accounting for standard demographic and personality characteristics as well as perceptions of discrimination toward a variety of groups in society, perceptions of discrimination toward men do not significantly predict vote choice for Trump.


It is accurate to state that considerable research has found that sexism negatively influenced Clinton’s 2016 presidential campaign (Cassese and Holman 2019; Knuckey 2019; Monteith and Hildebrand 2019; Schaffner, MacWilliams, and Nteta 2018; Valentino, Wayne, and Oceno 2018). However, these findings are not without critique; Zigerell (2019) claimed that current measures of sexism capture bias only toward women and do not account for anti-man bias.

To address this, Zigerell (2019) presented an alternative measure of attitudes toward men that is comparable to a question asked about attitudes toward women. The 2016 ANES asked how much discrimination occurs in the United States today against both men and women. Zigerell (2019) presented mean-frequency comparisons showing that Clinton voters were more likely than Trump voters to perceive no discrimination against men. Drawing on arguments in Cassese and Barnes (2019), he asserted that this pattern suggests that Clinton may have been advantaged by gender discrimination because “Clinton voters denied that there is discrimination among men in the United States today, suggesting that many of these voters harbored anti-men attitudes” (Zigerell 2019). Of course, this result alone does not suggest publication bias against measures of anti-man attitudes: using the same dataset, Monteith and Hildebrand (2019) published similar results, using the full five-point scale, showing that Trump supporters perceive greater discrimination against men than Clinton supporters and that this difference is mediated by both hostile and modern sexism among men.

Published work does examine the effects of perceptions of anti-man bias. Studies show that status-legitimizing beliefs can lead to more positive evaluations of a man making gender-bias claims (Wilkins, Wellman, and Schad 2017) and that perceptions of discrimination have different consequences for the psychological well-being of men and women (Schmitt et al. 2002). Indeed, Schmitt et al. (2002) provided a good example of the potential for the file-drawer problem to occur: they found that perceptions of discrimination have a negative effect on psychological well-being among women but no effect among men. Manzi (2019) provided an in-depth review of numerous studies from various social science disciplines that examined gender discrimination against men. Manzi (2019) noted that the results of these existing studies are mixed, again suggesting that there may be a simpler, less nefarious explanation for the relative dearth of studies about gender bias against men compared to gender bias against women (see footnote 1). That is, studies examining gender bias against men do not produce statistically significant results as often as studies of gender bias against women, making them less likely to be published.

To determine the role of gender bias against men in the 2016 election, I used a measure of perceived discrimination against various groups in the United States from the 2016 ANES to predict two-party vote choice and feeling-thermometer ratings of Trump. Zigerell (2019) argued that this measure can be seen as a proxy for “anti-men attitudes” (see footnote 2). These analyses provide an illustrative example of how the file-drawer problem, rather than anti-conservative bias, could produce the same pattern of publications that Zigerell (2019) described. The results in table 1 include controls for ideology, partisanship, authoritarianism, egalitarianism, gender, age, race, political interest, the importance of religion, income, marital status, children under the age of 18, and education. Survey weights were used in all analyses.

Table 1 Gender Discrimination

Note: Table entries are logit (column 1) or OLS (column 2) coefficients with standard errors in parentheses. *p<0.05, **p<0.01.
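For readers who want to see the modeling setup concretely, the following is a minimal sketch (not the author’s replication code) of how survey-weighted logit and OLS models like those in table 1 could be estimated in Python with pandas and statsmodels. All variable names (e.g., vote_trump, discrim_men, weight) are hypothetical placeholders rather than actual 2016 ANES variable codes, and the simple weighting shown here only approximates a full survey-design correction.

```python
# Minimal sketch of the table 1 models; variable names are hypothetical placeholders,
# not actual ANES codes, and this is not the author's replication code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

anes = pd.read_csv("anes_2016.csv")  # assumed pre-cleaned extract of the 2016 ANES

controls = ("ideology + party_id + authoritarianism + egalitarianism + female + age "
            "+ C(race) + interest + religion_importance + income + married "
            "+ kids_under18 + education")

# Column 1: logit of two-party vote choice (1 = Trump) on perceived discrimination
# against men and women, with survey weights treated as frequency weights.
logit_mod = smf.glm(
    f"vote_trump ~ discrim_men + discrim_women + {controls}",
    data=anes,
    family=sm.families.Binomial(),
    freq_weights=anes["weight"],  # crude weighting; a survey package would adjust SEs properly
).fit()

# Column 2: weighted least squares for the 0-100 Trump feeling thermometer.
ols_mod = smf.wls(
    f"trump_therm ~ discrim_men + discrim_women + {controls}",
    data=anes,
    weights=anes["weight"],
).fit()

print(logit_mod.summary())
print(ols_mod.summary())
```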

At first glance, these results suggest that perceptions of discrimination against both men and women influence support for Trump in directions that we may expect: perceptions of discrimination against women predict a decrease in Trump support, whereas perceptions of discrimination against men predict an increase. However, these perceptions do not necessarily happen in a vacuum and are imperfect proxies for actual gender-biased attitudes. For example, we could imagine two types of people who perceive high levels of discrimination against women: (1) those who hold no gender bias but have observed considerable gender bias in the world; and (2) those who hold considerable gender bias and take their own biased beliefs to mean that discrimination against women is prevalent. Zigerell (2019) argued this point well, and I agree that these are imperfect measures of bias against men.

Thankfully, the ANES also asks about perceptions of discrimination toward other groups. Some, like men, are majority groups (whites and Christians), whereas others, like women, are minority groups (African Americans, Hispanics, and Muslims). Examining correlations among these variables reveals that perceptions of bias are related in interesting ways. Perceptions of discrimination against men are correlated relatively strongly with perceptions of discrimination against whites (r=0.57) and Christians (r=0.41) but not against African Americans (r=0.02), Hispanics (r=0.10), or Muslims (r=-0.06). Conversely, perceptions of discrimination against women are correlated strongly with perceptions of discrimination against African Americans (r=0.57), Hispanics (r=0.55), and Muslims (r=0.42) but not against whites (r=0.09) and Christians (r=0.13). Perhaps these measures of perceived discrimination against men and women serve as proxies for how people feel about majority and minority groups. That is, those who perceive higher discrimination against men may be expressing a “majority grievance,” whereas those who perceive higher discrimination against women may be expressing a “minority grievance,” for lack of more succinct terms. Accounting for these perceptions in the analyses should isolate how specifically gender-based perceptions of discrimination influenced decision making in the 2016 election. Perceived discrimination against majority groups was measured as an additive index of perceptions of discrimination against whites and Christians, whereas perceived discrimination against minority groups was an additive index of perceptions of discrimination against African Americans, Hispanics, and Muslims (table 2; see footnote 3).

Table 2 Gender Discrimination and Status Discrimination

Note: Table entries are logit (column 1) or OLS (column 2) coefficients with standard errors in parentheses. *p<0.05, **p<0.01.
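The correlation checks and additive indices described above, along with the Cronbach’s α values reported in footnote 3, can be reproduced along the same lines. Again, this is only a sketch under the same assumptions as before: the column names are hypothetical placeholders, not actual ANES codes.

```python
# Sketch of the correlations among perceived-discrimination items and the additive
# majority/minority indices; column names are hypothetical placeholders.
import pandas as pd

anes = pd.read_csv("anes_2016.csv")  # same assumed extract as in the earlier sketch

items = ["discrim_men", "discrim_women", "discrim_whites", "discrim_christians",
         "discrim_blacks", "discrim_hispanics", "discrim_muslims"]

# Pairwise correlations among perceptions of discrimination against each group.
print(anes[items].corr().round(2))

# Additive indices: majority groups (whites, Christians) and minority groups
# (African Americans, Hispanics, Muslims).
majority_items = ["discrim_whites", "discrim_christians"]
minority_items = ["discrim_blacks", "discrim_hispanics", "discrim_muslims"]
anes["majority_discrim"] = anes[majority_items].sum(axis=1)
anes["minority_discrim"] = anes[minority_items].sum(axis=1)

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of the total)."""
    k = df.shape[1]
    return (k / (k - 1)) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

print(cronbach_alpha(anes[majority_items]))  # reported as 0.654 in footnote 3
print(cronbach_alpha(anes[minority_items]))  # reported as 0.818 in footnote 3
```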

These results tell a different story about how perceived gender discrimination influenced the 2016 election. Accounting for perceptions of discrimination against minority and majority groups in society may help us understand the underlying correlation between perceptions of gender discrimination and other types of discrimination. Perceived discrimination against women retains a significant effect on attitudes toward Trump; that is, those who perceive more discrimination against women were less likely to vote for Trump and rated him lower. However, there was no significant effect of perceptions of discrimination against men on support for Trump—although signed in the expected direction, these estimates do not approach even generous levels of statistical significance. It appears that perceptions of discrimination against majority groups more broadly (i.e., majority grievance) are doing the real predictive work, not perceptions of bias against men alone.

This example illustrates the exact type of scenario in which the file-drawer problem may be most prevalent. Reading too much into an imperfect measure may lead to a conclusion that anti-man attitudes did influence the 2016 election. A researcher could see those measurement imperfections and attempt to correct them by controlling, as much as possible, for an alternative explanation, thereby producing null results. The results become inconclusive: we see support for rejecting a null hypothesis in one model, but accounting for measurement imperfections leads to not rejecting the null hypothesis, even though the effects remain signed in the expected direction. Given the potential for type II errors when presenting results that “support” a null hypothesis, these results are likely to be less well received than a precisely estimated effect of essentially zero. Indeed, null and mixed results are both less likely to be written up and less likely to be published than strong results (Franco, Malhotra, and Simonovits 2014). Therefore, results like these are less likely to be published, leaving a confound: Are results supporting a “conservative” argument not published because studies are not being conducted due to bias against conservatives, or because the findings are not as compelling? This question is difficult to answer with available data.
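To see why imprecise nulls like these tend to stay in the file drawer, consider a small simulation (purely illustrative and not calibrated to the ANES data) of what a significance filter does to a literature: studies of a true-zero effect rarely clear p < 0.05, so they accumulate unpublished, while even a modest real effect is published routinely.

```python
# Illustrative simulation of the file-drawer problem (not calibrated to any real data):
# journals that select on p < 0.05 publish most studies of a real effect but almost
# none of the studies of a true-zero effect, which end up in the file drawer.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_STUDIES, N_OBS = 1_000, 500

def simulate_pvalues(true_effect: float) -> np.ndarray:
    """Run many two-group studies and return each study's two-sided p-value."""
    pvals = []
    for _ in range(N_STUDIES):
        control = rng.normal(0.0, 1.0, N_OBS)
        treated = rng.normal(true_effect, 1.0, N_OBS)
        pvals.append(stats.ttest_ind(treated, control).pvalue)
    return np.array(pvals)

null_p = simulate_pvalues(0.0)   # e.g., a hypothesized effect that does not actually exist
real_p = simulate_pvalues(0.25)  # a genuine but modest effect

print(f"True-zero studies with p < 0.05:   {np.mean(null_p < 0.05):.1%}")  # ~5%, mostly filed away
print(f"Real-effect studies with p < 0.05: {np.mean(real_p < 0.05):.1%}")  # the large majority
```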


There are many reasons why studies about bias against men in the 2016 presidential election are not published at nearly the same rate (if at all) as studies about bias against women. Zigerell (2019) suggested that the primary explanation is the ideology of the reviewer pool for these manuscripts. Another explanation is the lack of available measures of bias against men—although this may be related to the ideology of political science researchers. However, I offer another explanation: perhaps studies examining the effect of bias against men in the 2016 election do not find an independent effect of bias against men on vote choice. These null results are less likely to be published, suggesting that differential rates of publication are due not to nefarious ideological biases of gatekeepers but rather to a different (and likely more pervasive) type of bias in political science: bias against publishing null results, especially imprecisely estimated nulls. Of course, this is only one example of the file-drawer problem; previous studies of bias against men have found similarly mixed results (Manzi 2019).

Measurement concerns about using the ANES question to determine the extent of anti-man bias are legitimate: perceptions of bias against groups in society do not wholly map onto a respondent’s actual bias toward those groups. A better test of the effects of anti-man bias on political attitudes would involve better data, including a scale that captures respondents’ actual attitudes toward men rather than their perceptions of bias. Of course, it is possible that, due to their ideology, individuals simply are not attempting to study anti-man bias in the 2016 election or in general. This is a reasonable claim but one that might be remedied by changing incentives so that uncovering null results is rewarded. If null results are more publishable, scholars who do not believe anti-man bias is occurring may be encouraged to develop scales and attempt to demonstrate this empirically, which would allow for a rigorous test of the effects of this bias.

Why do we not see measures of individual bias against majority groups on large social science surveys (e.g., the ANES) like those we see for minority groups? Part of the explanation certainly may be ideological. However, a more compelling explanation may be that we choose topics to research—regardless of our ideology—based on what we observe about the world. Many scholars simply do not observe considerable bias against majority groups in society. Indeed, this idea is supported by the 2016 ANES data; even Trump supporters view discrimination against women as occurring more often than discrimination against men (Monteith and Hildebrand 2019; Zigerell 2019). Even among those most predisposed to believe that discrimination against men is occurring, discrimination against women still is perceived as more prevalent in society. This is not an unreasonable finding given that women (as well as other traditionally marginalized groups) have faced considerably more historical discrimination than men (or those in majority groups). Therefore, it is likely that discrimination against these groups is correctly perceived as more prevalent, and, given the history of discrimination against minority groups in society, it is likely that attitudes about that discrimination have more significant effects on political attitudes and behaviors. This is a compelling rationale for why the file-drawer problem may be more likely to arise in studies that examine discrimination against majority groups. If such discrimination does exist, its effects likely are smaller in magnitude and therefore less likely to produce the statistically significant relationships that are more likely to be published.

For this reason, the lack of measures of individual bias against men and other majority groups on large national surveys also connects to the file-drawer problem. Given limited time and resources, a project in which a researcher expects null effects is likely not fruitful to pursue because null results are considerably more difficult to publish than results that differ from zero. Therefore, it may not make sense to include such measures on large surveys, and researchers who expect null results are unlikely to embark on their own data collection—which often is expensive and time consuming—to study them.
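The magnitude argument above (that effects of discrimination against majority groups, if they exist, are smaller and thus harder to detect) can be made concrete with a quick power calculation. This is a generic illustration; the effect sizes and sample size are arbitrary rather than estimates from the ANES.

```python
# Generic power calculation: smaller true effects are far less likely to reach
# statistical significance at a fixed sample size. Effect sizes and n are arbitrary.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.10, 0.25, 0.50):  # hypothetical Cohen's d values for a group comparison
    power = analysis.power(effect_size=d, nobs1=400, alpha=0.05)
    print(f"Cohen's d = {d:.2f}: power = {power:.2f}")
```

With 400 respondents per group, a small effect (d = 0.10) is detected less than a third of the time, whereas a moderate effect (d = 0.50) is detected essentially always; under a significance filter, the small-effect literature largely disappears.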

Indeed, groups other than conservatives (e.g., women and racial or ethnic minorities) traditionally have been and remain underrepresented in the social sciences. This underrepresentation, however, has not led to a lack of research concerning gender, race, and ethnicity in political science scholarship. Although we easily can argue that these topics traditionally have been underrepresented in political science research, it is difficult to state that research on them is being actively suppressed, given the wealth of existing scholarship. We could argue that this research is published because it fits with a broader liberal worldview. However, this does not explain the extensive debate about measurement issues on important topics including race, such as the validity of the Implicit Association Test for racial attitudes (Blanton et al. 2009) and the racial resentment scale (Cramer 2020). Important and well-cited work in social science has, to varying degrees, pushed back against a general liberal worldview on controversial topics such as racial bias (Sniderman and Piazza 1993) and gun control (Lott and Mustard 1997). It is important to note that these debates occurred and that work considered to contrast with a generalized liberal worldview has been published.

The file-drawer problem leaves us not knowing how many scholars have attempted to examine how bias against men (or other majority groups) works in the political realm. One explanation is that the liberal gatekeepers of academic journals are suppressing quality research that opposes their worldview. Another explanation is that these effects simply do not exist and that efforts to examine them have produced broad null results that are not entirely conclusive one way or the other.

Of course, the answer probably lies somewhere between these two extremes. Given that these cognitive biases often are implicit (Gampa et al. 2019), they could create structural biases that are further exacerbated by results that simply are not as compelling. That is, we easily could imagine that cognitive biases cause generally conservative findings to be held to a higher standard than work that aligns with a (generally liberal) set of beliefs. However, this is not limited to political ideology—reviewers can be more critical of methods they dislike or findings that oppose conventional wisdom in political science (and perhaps, especially, their own research). Although these biases clearly can be frustrating for researchers, there is little empirical evidence to suggest that publication bias against work that challenges a liberal worldview is occurring systematically. Cognitive biases can be a serious problem in the publication of a host of findings, but they also seem to be a structural problem of the current, increasingly competitive review process.

Zigerell’s (2019) solution to this problem is a more rigorous peer-review process, including a post-acceptance public-comment period. I do not view this as a solution, especially given that the real issue may be the file-drawer problem rather than ideological bias. First, it is unclear how a post-acceptance public-comment process would lead to fewer issues with ideological bias if that bias is as pervasive in the discipline as Zigerell (2019) claimed. If ideological biases cause individuals to be more critical of a piece’s methodology (Gampa et al. 2019) and political scientists skew liberal, then reviewers should be more critical of studies that conflict with their worldview—particularly because more reviewers (who will be self-selected) will provide (presumably critical) comments on accepted work. Of course, this may lead to more balance, with conservatives taking on more of these reviewing roles and thereby producing more rigorous and transparent review of articles that progressives may be predisposed to approve. However, given that the most common reason for declining to engage in peer review is, in some form, being too busy to do more reviews (Breuning et al. 2015), we can imagine reviewers being too overwhelmed with existing peer-review work to volunteer for more, unless they had their own predisposition either strongly in favor of or strongly against the work.

Perhaps the real solution to potential bias is not a less rigorous review process but a more open view of how we think about research. This includes an increased willingness to publish not only null findings that precisely and convincingly estimate null effects but also research that is inconclusive or has null effects that do not provide strong evidence of an effect of zero. Although answers to a research question of “probably, but I am not sure” may not be wholly satisfying to scholars, these results are common in social science research. A demonstrated commitment to publishing inconclusive results could open up the types of research questions that scholars ask, perhaps by encouraging them to pursue fewer “low-hanging-fruit” research questions and take more risks in their scholarship, understanding that a well-executed study is still publishable regardless of its results. Current, small efforts to focus on results-blind review are working to accomplish this; however, as a discipline, we would benefit strongly from an increase in this type of review.


Whereas conservatives admittedly may find the ideological landscape in political science more daunting than liberals do, I argue that there are multiple explanations for why studies of bias against men may not be published as frequently as findings of bias against women, most of which are not ideological. Although conservatives in political science remain a minority, many scholars in the field identify as conservative (Ceaser and Maranto 2009). In fact, many conservative scholars have found political science to be a discipline in which they can thrive, especially post-tenure (Maranto and Woessner 2012).

Zigerell (2019) presented arguments that research supporting a conservative ideology is less likely to be published than research supporting a liberal ideology, focusing on the most serious accusations of ideological bias and research malfeasance. This article considers another, less sinister explanation—that research about issues such as anti-man bias may not be published because it is difficult to show conclusive evidence that such bias exists or has an effect on the political world. A more open review process, focused on research design rather than research results, could help address this problem.

Footnotes

1. This ignores, for the sake of argument, the fact that the existence of gender bias against women in society is a considerably more noticeable and pervasive issue than gender bias against men.

2. Of course, this also could be (and, in my estimation, is) a measure of not observing any gender bias against men.

3. Cronbach’s α is 0.654 for majority discrimination and 0.818 for minority discrimination.

References


Blanton, Hart, Jaccard, James, Klick, Jonathan, Mellers, Barbara, Mitchell, Gregory, and Tetlock, Philip E. 2009. “Strong Claims and Weak Evidence: Reassessing the Predictive Validity of the IAT.” Journal of Applied Psychology 94 (3): 567–82.
Breuning, Marijke, Backstrom, Jeremy, Brannon, Jeremy, Gross, Benjamin Isaak, and Widmeier, Michael. 2015. “Reviewer Fatigue? Why Scholars Decline to Review Their Peers’ Work.” PS: Political Science & Politics 48 (4): 595–600.
Cassese, Erin C., and Barnes, Tiffany D. 2019. “Reconciling Sexism and Women’s Support for Republican Candidates: A Look at Gender, Class, and Whiteness in the 2012 and 2016 Presidential Races.” Political Behavior 41:677–700.
Cassese, Erin C., and Holman, Mirya R. 2019. “Playing the Woman Card: Ambivalent Sexism in the 2016 US Presidential Race.” Political Psychology 40 (1): 55–74.
Ceaser, James W., and Maranto, Robert. 2009. “Why Political Science Is Left but Not Quite PC.” In The Politically Correct University, ed. Maranto, Robert, Redding, Richard, and Hess, Frederick, 39–55. Washington, DC: American Enterprise Institute Press.
Cramer, Katherine. 2020. “Understanding the Role of Racism in Contemporary US Public Opinion.” Annual Review of Political Science 23:9.1–9.17.
Fanelli, Daniele. 2012. “Negative Results Are Disappearing from Most Disciplines and Countries.” Scientometrics 90 (3): 891–904.
Franco, Annie, Malhotra, Neil, and Simonovits, Gabor. 2014. “Publication Bias in the Social Sciences: Unlocking the File Drawer.” Science 345 (6203): 1502–505.
Gampa, Anup, Wojcik, Sean, Motyl, Matt, Nosek, Brian A., and Ditto, Peter. 2019. “(Ideo)logical Reasoning: Ideology Impairs Sound Reasoning.” Social Psychological and Personality Science. Available at https://doi.org/10.1177/1948550619829059.
Knuckey, Jonathan. 2019. “‘I Just Don’t Think She Has a Presidential Look’: Sexism and Vote Choice in the 2016 Election.” Social Science Quarterly 100 (1): 342–58.
Lott, John R., and Mustard, David B. 1997. “Crime, Deterrence, and Right-to-Carry Concealed Handguns.” Journal of Legal Studies 26 (1): 1–68.
Manzi, Francesca. 2019. “Are the Processes Underlying Discrimination the Same for Women and Men? A Critical Review of Congruity Models of Gender Discrimination.” Frontiers in Psychology 10:469.
Maranto, Robert, and Woessner, Matthew. 2012. “Diversifying the Academy: How Conservative Academics Can Thrive in Liberal Academia.” PS: Political Science & Politics 45 (3): 469–74.
Monteith, Margo J., and Hildebrand, Laura K. 2019. “Sexism, Perceived Discrimination, and System Justification in the 2016 US Presidential Election Context.” Group Processes & Intergroup Relations. Available at https://doi.org/10.1177/1368430219826683.
Rosenthal, Robert. 1979. “The ‘File-Drawer Problem’ and Tolerance for Null Results.” Psychological Bulletin 86 (3): 638–41.
Schaffner, Brian F., MacWilliams, Matthew, and Nteta, Tatishe. 2018. “Understanding White Polarization in the 2016 Vote for President: The Sobering Role of Racism and Sexism.” Political Science Quarterly 133 (1): 9–33.
Schmitt, Michael T., Branscombe, Nyla R., Kobrynowicz, Diane, and Owen, Susan. 2002. “Perceiving Discrimination Against One’s Gender Group Has Different Implications for Well-Being in Women and Men.” Personality and Social Psychology Bulletin 28 (2): 197–210.
Sniderman, Paul M., and Piazza, Thomas. 1993. The Scar of Race. Cambridge, MA: Harvard University Press.
Valentino, Nicholas A., Wayne, Carly, and Oceno, Marzia. 2018. “Mobilizing Sexism: The Interaction of Emotion and Gender Attitudes in the 2016 US Presidential Election.” Public Opinion Quarterly 82 (S1): 799–821.
Wilkins, Clara L., Wellman, Joseph D., and Schad, Katherine D. 2017. “Reactions to Anti-Male Sexism Claims: The Moderating Roles of Status-Legitimizing Belief Endorsement and Group Identification.” Group Processes & Intergroup Relations 20 (2): 173–85.
Zigerell, Lawrence J. 2019. “Left Unchecked: Political Hegemony in Political Science and the Flaws It Can Cause.” PS: Political Science & Politics. Available at https://doi.org/10.1017/S1049096519000854.