Responding to Zigerell (2019), Utych (2020a, 5) suggested that “research about issues such as anti-man bias may not be published because it is difficult to show conclusive evidence that it exists or has an effect on the political world.” However, evidence of anti-man bias is available from publishable measures of bias against a group, such as negative stereotypes and experimental discrimination, as in the following examples:
• In a 2014 survey (N=1,835), 9% of US adults indicated that “intelligent” is more true of women than men (Pew Research Center 2015, 17).
• In a 2018 survey (N=2,301), 31% of US adults indicated that women in high political offices are, in general, better than men in high political offices at being honest and ethical (Pew Research Center 2018, 36).
• The Schwarz and Coppock (2020) meta-analysis of candidate-choice survey experiments reported that, on average, women candidate targets were favored over men candidate targets.
Utych (2020a) reported an illustrative example to suggest that research on anti-man bias suffers from the file-drawer problem. In table 1, an individual-level five-point “perceptions of discrimination against men” measure of anti-man bias was associated at p<0.05 with two Trump-related outcome variables, net of controls such as ideology, partisanship, authoritarianism, and egalitarianism. This measure of anti-man bias lost statistical significance in table 2 after the addition of controls for perceived discrimination against majority groups (Whites and Christians) and perceived discrimination against minority groups (Blacks, Hispanics, and Muslims).
My analyses (Zigerell 2020) indicated that the measure of anti-man bias retained statistical significance in the table 1 analyses when the table 1 “perceptions of discrimination” measures were coded 1 for indicating that the amount of discrimination in the United States today is “none at all” and 0 for other substantive responses. This binary coding might be a better measure of bias than the five-point coding because “none at all” is the only response that is negative and clearly untrue (see Edelman, Luca, and Svirsky 2017; Starr 2015; Yavorsky 2019).
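To make the recoding and the two control sets concrete, the sketch below shows one way such a reanalysis could be set up. It is illustrative only: the data file, variable names, and model form are assumptions introduced here for exposition, not the actual replication code (the real materials are available on Dataverse, as noted in the data availability statement).

```python
# Illustrative sketch only: the data file and variable names are
# hypothetical, not the actual replication materials.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("discrimination_survey.csv")  # hypothetical data file

# Five-point item assumed coded 1 = "a great deal" ... 5 = "none at all".
# Alternative binary coding: 1 only for the "none at all" response,
# 0 for the other substantive responses; nonresponses remain missing.
df["disc_men_none"] = df["disc_men_5pt"].map({1: 0, 2: 0, 3: 0, 4: 0, 5: 1})

# Table 1-style specification: the anti-man bias measure plus controls
# such as ideology, partisanship, authoritarianism, and egalitarianism.
table1_model = smf.ols(
    "trump_eval ~ disc_men_none + ideology + partisanship"
    " + authoritarianism + egalitarianism",
    data=df,
).fit()

# Table 2-style specification: add perceived discrimination against
# other groups (Whites, Christians, Blacks, Hispanics, Muslims).
table2_model = smf.ols(
    "trump_eval ~ disc_men_none + ideology + partisanship"
    " + authoritarianism + egalitarianism + disc_whites + disc_christians"
    " + disc_blacks + disc_hispanics + disc_muslims",
    data=df,
).fit()

print(table1_model.summary())
print(table2_model.summary())
```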
Properly concluding that research on a predictor suffers from the file-drawer problem requires applying no more rigor than is needed to publish. Thus, for this purpose, the table 1 results are preferable because the table 1 statistical control is more rigorous than the statistical control in some recent publications (e.g., Utych 2020b) that have predicted candidate evaluations using a measure of anti-woman bias. Utych (2020b) did not control for attitudes about racial or religious groups, so, for assessing whether research on anti-man bias suffers from the file-drawer problem, the table 2 results would be informative only if authors or journal gatekeepers required more rigorous statistical control for the anti-man analyses in Utych (2020a) than for the anti-woman analyses in Utych (2020b).
Regardless, p-values are irrelevant to the Zigerell (2019, 720) complaint about “the dearth of gender-attitudes items about men.” Instead, the complaint is valid because measurement of attitudes about men is needed to produce a proper inference about the net effect of sexist attitudes. Research on sexist attitudes should incorporate measures of attitudes about men because of considerations of research design, not p-values.
ACKNOWLEDGMENT
I thank the PS editors for this opportunity to respond. Pew Research Center bears no responsibility for the analyses or interpretations of the data presented here. The opinions expressed, including any implications for policy, are those of the author and not of Pew Research Center.
DATA AVAILABILITY STATEMENT
Replication materials are available on Dataverse at https://doi.org/10.7910/DVN/CJQROH.