
Analytic Transparency, Radical Honesty, and Strategic Incentives

Published online by Cambridge University Press:  04 April 2018

Sean Yom
Temple University

Abstract

As a pillar of Data Access and Research Transparency (DA-RT), analytic transparency calls for radical honesty about how political scientists infer conclusions from their data. However, honesty about one’s research practices often means discarding the linguistic template of deductive proceduralism that structures most writing, which in turn diminishes the prospects for successful publication. This dissonance reflects a unique dilemma: transparency initiatives reflect a vision of research drawn from the biomedical and natural sciences, and struggle with the messier, iterative, and open-ended nature of political science scholarship. Analytic transparency requires not only better individual practices, such as active citations, but also institutional strategies that reward radical honesty. Journals can provide authors with protected space to reveal research practices, further blind the review process, and experiment with special issues. More broadly, analytic openness can be mandated through procedural monitoring, such as real-time recording of research activities and keystroke logging for statistical programs.

Copyright © American Political Science Association 2018 

The Data Access and Research Transparency (DA-RT) initiative, as outlined in the revised APSA Guide to Professional Ethics in Political Science and other documents, invites constructive scrutiny regarding its first two pillars of data access and production transparency (see footnote 1). These two pillars call for scholars to make publicly available their data (e.g., statistics, interviews, and archives) and clarify how they collected and curated those data in the first place. Institutions and journals responded to this challenge with new policies, such as creating quantitative and qualitative data repositories, requiring replication for publishable studies, preregistering experimental hypotheses, and sharing metadata including coding rules (Lupia and Elman 2014, 23). These strategies maximize data-related openness, ensure greater replicability, and contribute to DA-RT’s stated goal of making political science a more credible enterprise.

By contrast, far less attention focuses on the third pillar of DA-RT: analytic transparency. Analytic transparency calls for political scientists to “clearly explicate the links connecting data to conclusions” (APSA Committee on Professional Ethics, Rights, and Freedoms 2012, 10). They should publicize the “precise interpretive process” by which they infer that any gathered evidence “supports a specific descriptive, interpretive, or causal claim” (Moravcsik 2014, 48). As King (1995, 444) noted, “The only way to understand and evaluate an empirical analysis fully is to know the exact process by which the data were generated and the analysis produced.”

Like DA-RT’s other pillars of data access and production transparency, analytic transparency is a clarion call for radical honesty. It means that scholars must disclose all of the steps needed to reach their conclusions, regardless of methodology. However, it is unclear what this would entail in practice due to an unspoken tension that has long dogged political science: what we proclaim to “do” in any given study often is not what we did in real life when undertaking that study. For decades, journal articles and book monographs have read like classic expositions of deductive proceduralism: first, the author deduces axiomatic hypotheses from existing theories, then gathers and presents all relevant data, and finally tests the hypothetical predictions against the data. Almost always, the study is a declarable success, insofar as the hypotheses are confirmed and the findings are presented as significant.

Yet, for many, this type of science can seem more like science fiction. Behind the scenes, political scientists make innumerable missteps and revisions that bring them closer to meaningful results but which hardly read like elegant protocol if revealed. For instance, we may begin a new project with rough hunches that only morph into falsifiable hypotheses when grant applications are due. Often, we react to contradictory results by inductively revising the original research design by reconfiguring data and recalibrating concepts. Frequently, we treat a hypothesis test not as a singular decisive “aha! moment” but instead as the metaphorical summary of our own reflective thinking as we iteratively cycle back and forth between theory and evidence to determine which argument makes sense. However, the discipline’s professional incentive structure frowns upon admitting this subjectivity. Advancement requires publishing through peer-reviewed journals and presses, which induces authors to hide these recalibrations and instead adopt the linguistic cloak of deductive proceduralism and the legitimacy that it anoints.


This article examines how analytic transparency, understood as radical honesty, translates into practice. First, it traces how transparency initiatives originated in the biomedical and natural sciences, which illuminates the assumptions and biases behind analytic transparency. Second, it links analytic transparency to deductive proceduralism. The laboratory-based ideal of deductivist research resonates in DA-RT’s language but does not capture how political scientists often conduct their work. Third, the article considers ways to advance analytic transparency. Whereas individualistic solutions such as active citations hold promise, radical honesty may require institutional strategies. For instance, journals can incentivize authors to make voluntary concessions and freely discuss their research practices. Consortia also can create procedural mandates, such as daily activity logging, statistical keystroke monitoring, and other timestamped recordings, which would fully show readers what occurred behind closed doors.

These strategies promote analytic transparency with the same intensive effort that DA-RT’s other components attract. After all, data access does not mean uploading only half a dataset to the repository; production transparency does not mean divulging only some coding rules or interview methods. As a principled stance, analytic transparency calls for scholars to show all of their cards and hide nothing. If it is not intended to be taken seriously, why make it part of DA-RT at all?

TRANSPARENCY, NOW

The implications of DA-RT and analytic transparency are best understood in comparison with the origins of modern transparency initiatives in other scientific disciplines. Almost 60 years have passed since Theodore Sterling (1959, 34) famously raised suspicions about publication bias, in which pressures to publish were pushing researchers to commit malpractices. Afterwards, concerns about scientists “cheating” through shortcuts such as hypothesis tampering, cherry-picking data, and statistical p-fishing to reach artificially positive results provoked periodic warnings but seldom public discussion (Rosenthal 1979, 640). Only in the past decade have medicine, psychology, and other fields been forced to confront this dilemma, as the “replication crisis” unfolded and meta-studies including the Reproducibility Project began to reveal how shockingly few results from experimental and observational studies could be replicated (Open Science Collaboration 2015, 943). Indeed, one prominent medical review argued that systematic bias and research fraudulence were likely so widespread that most published causal associations were false (Ioannidis 2005, 696).

Today’s suspicions about the credibility of peer-reviewed research extend beyond the occasional alarm accompanying high-profile retractions of studies proven to have outright fabricated data or results, such as The Lancet’s renunciation of the 1998 study linking measles, mumps, and rubella (MMR) vaccination to autism (Deer 2011, 81) and, more recently, Science’s retraction of a political science study about the impact of canvassing on opinions about gay marriage (McNutt 2015). Rather, the concerns are more about mundane practices of everyday research. Because peer reviewers and readers see only the final results of a study, it is difficult to verify that hypotheses, data, and analyses were not manipulated in even the smallest way. Many fear that what happens is the “Chrysalis effect”: that is, ugly initial results that disappoint investigators magically become beautiful publishable discoveries after behind-the-scenes chicanery (O’Boyle et al. 2017, 378).

In response, many scientific disciplines revitalized their commitment to transparency with new mandates such as enhancing data access, requiring preregistration, and encouraging collaborative openness. Data repositories have appeared in many fields, such as the National Science Foundation–funded Paleobiology Database, which serves scholars of fossil animals, plants, and microorganisms by holding all collection-based occurrence and taxonomic data from any epoch. Funding entities have imposed binding transparency and compliance rules. For instance, the National Institutes of Health requires researchers to register not only their results but also all metadata on ClinicalTrials.gov, and it penalizes those who refuse to divulge the raw information needed for replication. Journals, including the widely cited Science and Nature, also have championed this effort. Since 2014, Science has abided by far stricter submission standards such that authors now must disclose pre-experimental plans for analysis, sample-size estimation, signal-to-noise ratio assessments, and other metadata. In 2013, Nature abolished space restrictions on the methodology sections of accepted articles and began requiring authors to complete an exhaustive checklist of disclosures (e.g., cell-line identities, sampling and blinding techniques, and proof of preregistered hypotheses). Hundreds of other journals have since followed suit with similar requirements.

DA-RT is part of this wider trend emanating from the biomedical and natural sciences. To be sure, political scientists had periodically raised transparency concerns, including problems of selecting historical sources (Lustick 1996, 606) and reproducing quantitative results (Gleditsch and Metelits 2003, 72). Only with the replication crisis, however, did ethical anxiety compel the proposed new framework of standards guiding data access, production transparency, and analytic transparency. DA-RT entered the disciplinary lexicon after the 2012 changes made to APSA’s Guide to Professional Ethics in Political Science and the 2014 Journal Editors Transparency Statement. Since then, animated debates in print journals, APSA newsletters, and online forums such as Dialogue on DA-RT showed that whereas most political scientists accept the goal of enhancing their discipline’s credibility and integrity, many also remain suspicious of DA-RT’s language and feasibility (Schwartz-Shea and Yanow 2016, 2–8).

However, those discussions also showed how quickly institutions were moving to mirror the biomedical and natural sciences in the area of data access and production transparency. Among other initiatives, consortia and journals spearheaded new data repositories, required replication before publication, encouraged study preregistration, circulated research metadata, and used persistent digital identifiers for online links. Scholars continue to propose additional strategies in these two regards, such as revamping graduate training to include coursework on research and data ethics (Laitin and Reich 2017, 173). In addition, DA-RT is not the only platform pursuing new standards; it swims in an “alphabet soup” of similar undertakings across the behavioral sciences, among them COS, BITSS, EGAP, GESIS, OSF, OpenAIRE, and Project TIER. All reflect the same existential fear: that academia is bleeding credibility because too much work is riddled with questionable research practices that must be stopped. As several past APSA presidents declared, “Our legitimacy as scholars and political scientists in speaking to the public on issues of public concern rests in part on whether we adopt and maintain common standards for evaluating evidence-based knowledge claims” (Hochschild, Lake, and Hero 2015).


A DOSE OF DEDUCTIVISM

These initiatives, like most DA-RT discussions, focus primarily on how political scientists collect and generate data (i.e., data access and production transparency) rather than on the third pillar of analytic transparency. Consider, for instance, the semi-official DA-RT guidelines outlined in a recent issue of APSA’s Comparative Politics section newsletter. In the qualitative guidelines, only 2 of 26 clauses address analytic transparency, whereas in the quantitative guidelines, only 3 of 19 relevant clauses do so. Both list only generic prescriptions: authors must “be clear about the analytic processes they followed” (“Guidelines for Data Access and Research Transparency for Qualitative Research” 2016, 19) and should be “clearly mapping the path from the data to the claims” (“Guidelines for Data Access and Research Transparency for Quantitative Research” 2016, 24). Such vague directives about analytic transparency run throughout other DA-RT writings, but they provide little direction in terms of concrete standards.

Why is the simple exhortation to show the exact process by which results are generated so difficult to translate into political science? One answer is that, as conceptualized within DA-RT, analytic transparency reflects an understanding drawn from the biomedical and natural sciences that all research should abide by deductivist procedures. Within political science, this model of deductive proceduralism has dominated for decades (Yom 2015, 620–21). Outside of some traditions, such as constructivist and interpretivist work, deductive proceduralism dictates that good research should be engaged in a Popperian quest to advance theory by testing hypotheses. The result is a standardized four-step model for how political scientists should operate: (1) deduce falsifiable hypotheses from existing theories; (2) predict all observable implications of the hypotheses without cheating (e.g., by peeking at the data); (3) dispassionately collect all relevant data and cases; and (4) test the hypotheses by comparing the prior predictions with those data. The study succeeds if the hypotheses are corroborated, which means significant and positive findings.

This template is the “gold standard” of political science, evidenced not only in the language of journal articles and books but also in graduate training and conference norms (Clarke and Primo 2012, 26). However, such hegemony is troubling. First, it overlooks that some natural sciences (e.g., geology and astronomy) frequently reject deductive proceduralism in favor of inductive techniques to achieve critical breakthroughs (Waldner 2007, 147–9). Second, the disconnect between the image connoted by deductive proceduralism—that is, political scientists toiling away in laboratory-like conditions—and the eclectic nature of real-world research can be enormous. As Laitin (2013, 44) averred, “Nearly all research in political science…goes back and forth between a set of theories and an organically growing data archive with befuddled researchers trying to make sense of the variance on their dependent variables.” Indeed, substantial types of qualitative, game-theoretic, and quantitative research combine deductive reasoning with unspoken inductive techniques (Yom 2015, 626). For instance, we often begin a project not with axiomatic hypotheses but rather with cruder hunches; we peruse data and cases well before we should; we revise assumptions and retool variables as rival explanations crystallize; and we react to “wrong” results not by ending the study but instead by pressing onward to piece together a viable explanation linking theory with data. Munck and Snyder’s interviews with the doyens of comparative politics verified this. In their volume, scholars including Moore, Przeworski, and Collier all recounted how their best works required inductive, iterative revisions—that is, not fully hypothesizing at the start, throwing in new cases and data when appropriate, repeatedly modifying explanations, and other retroactive “sins” against the deductive paradigm (Munck and Snyder 2007, 95, 475, 574).

To be sure, deductive proceduralism remains a crucial guide for some political science research. Within experiments, it is imperative when conducting controlled interventions that hypotheses, data, and variables not be changed ex post facto to make it appear that the obtained results were those predicted all along (Dunning 2016, 547–9). Likewise, much econometric analysis relies on the Neyman–Rubin model of causal inference, in which hypothetical models are tested by comparing theorized potential outcomes against observed outcomes. Without careful deductive adherence, unscrupulous users can mine datasets and curve-fit statistical models through “garbage-can” regressions (Kennedy 2002, 577–9). Nevertheless, the more important point is that given political science’s methodological pluralism, many scholars do not follow deductive proceduralism in lockstep fashion but still write as if they did when presenting or publishing.

By contrast, explicitly showing how conclusions were inferred from evidence is less complicated in biomedical and some natural sciences. For example, a University of Virginia team recently discovered a neurological link between the human brain and the immune system, which generations of medical textbooks had treated as unconnected entities (Louveau et al. 2015, 340). The discovery of meningeal tissue carrying immune cells between the brain and lymph nodes was startling because scientists had assumed that all of the body’s tissue structures already had been mapped. It catalyzed the rethinking of neurological diseases such as Alzheimer’s: now, the brain can be treated mechanistically, with its meningeal tissues serving as a potential causal link between immunological dysfunction and mental impairment.

How did this significant result emerge? When mounting the brain tissue of mice on a slide, the medical team noticed unexpected vessel-like patterns. Excitedly hypothesizing that these could be lymphatic cells, the researchers decided to test this proposition with new mice and human brain tissue. An exhaustive array of tests followed, including incubations and buffering, antibody injections, image analysis, statistical diagnostics, and multiphoton microscopy. Each hypothesis test existed in the literal sense: it was timestamped, recorded, and observed, both linearly and sequentially. In this context, analytic transparency is easy to evince. The scientists could prove when vessel-like patterns first appeared on early screenings, spurring the hypothesis. They could provide encoded logs of each test that followed, along with computer-generated readouts, images, and results in chronological order. They could show the evidentiary thresholds indicating that the vessel-like patterns were lymphatic tissue. All of this occurred within a physical laboratory where testing could be done only with expensive technical equipment and where multiple monitoring mechanisms could validate each step.


Political scientists do not operate with such undeviating linearity, which raises vexing questions. For instance, when is the exact moment that a qualitative scholar tests a hypothesis about foreign aid causing African civil conflict: after the tenth interview in rural Guinea, when perusing notes on the return flight after fieldwork, or when drafting page 35 of the Senegalese case study months later at the office? Or how does a quantitative analyst justify the multivariate framework behind an econometric analysis linking US oil and gas production to higher labor demand? Does this mean that there were no exploratory statistics, no “eyeballing” of panels, and no model respecification such that only a single regression was ever run? Likewise, consider an applied game theorist hoping to explain the end of the Vietnam War under the rules of deductive proceduralism. When proposing an extensive-form game in a cocoon of mathematical theory, how can all learned knowledge about foreign policy and Southeast Asian history—which could possibly bias the hypothesis and distort predictions—be forgotten?

PROMOTING RADICAL HONESTY

The radical honesty connoted by analytic transparency suggests that political scientists can reconcile their research practices with deductive proceduralism simply by admitting what they did in real life. However, suppose a scholar did so and wrote a paper recounting the full iterative process of research. The resulting text could read in a messy, even discomfiting way by exposing every revision and mistake necessary to have reached the conclusions. The introduction may list not sophisticated hypotheses but instead all of the half-formed claims and wrong estimates that had to be evaluated before better causal propositions became possible. Substantive sections may unveil how the author collected the wrong data or cases and ran repeated analyses until new insights struck. The conclusion may present the first disappointing findings that made the scholar backtrack, oscillate between theory and data, and only then generate something convincing.

Unsurprisingly, political scientists are not rushing to write this way. Of course, critics may suggest that analytic transparency need not require disclosure of everything—only the most important steps. However, this reproduces the problematic status quo that DA-RT is supposed to address. Political scientists already use their judgment in deciding which research practices should be disclosed or hidden. Clearly, this state of affairs is unacceptable; otherwise, why have DA-RT in the first place?

Given this reluctance, more modest proposals are needed to normalize analytic transparency. One proposal involves the tightening of citation standards. As Kreuzer (2017, 3, 13) argued, footnotes and references can offer readers “backstage access” to an author’s most intimate analytic thinking by illuminating the units of analysis, theoretical biases, validity standards, and temporal horizons. However, too many scholars treat these desiderata as afterthoughts—for instance, by failing to mark sources by page numbers. One solution is to use “active citations”: scholars must hyperlink any empirical citation presented as support for a contestable claim to an online appendix containing larger excerpts (including scans or copies) from the original source, alongside an annotation explaining how it supports the claim (Moravcsik 2014, 50). This not only compels scholars to locate data more precisely within sources but also provides space to clarify why the quoted data justify the argument.
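To make the mechanics concrete, the following minimal sketch shows what one active-citation appendix entry could look like if stored in a structured format. The field names, the ActiveCitation class, the render_entry() helper, and the example values are illustrative assumptions for this sketch only; they are not part of Moravcsik's proposal or of any existing tool.

from dataclasses import dataclass

@dataclass
class ActiveCitation:
    claim: str          # the contestable claim made in the main text
    source: str         # full bibliographic reference, including page numbers
    excerpt: str        # longer excerpt (or link to a scan) from the original source
    annotation: str     # explanation of how the excerpt supports the claim
    appendix_url: str   # stable hyperlink target in the online appendix

def render_entry(entry: ActiveCitation) -> str:
    """Format one appendix entry that a hyperlinked citation could point to."""
    return (
        f"CLAIM: {entry.claim}\n"
        f"SOURCE: {entry.source}\n"
        f"EXCERPT: {entry.excerpt}\n"
        f"ANNOTATION: {entry.annotation}\n"
        f"LINK: {entry.appendix_url}\n"
    )

# Hypothetical example entry.
print(render_entry(ActiveCitation(
    claim="Cabinet reshuffles followed aid shortfalls in 1989.",
    source="Foreign Ministry Archive, Box 12, Folder 3, p. 47.",
    excerpt="'The minister was dismissed within a week of the donor meeting...'",
    annotation="The reported timing supports the claimed causal sequence.",
    appendix_url="https://example.org/appendix#entry-12",
)))

Each entry pairs the excerpt with the author's reasoning, which is what distinguishes an active citation from an ordinary footnote.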

However, active citations apply mostly to qualitative researchers using written documents; they have little applicability to, say, quantitative scholars analyzing rectangular datasets. Furthermore, no citation standard can completely divulge the personal judgment behind the interpretive process of inferring conclusions. For instance, active citations do not eliminate the destructive temptation to cherry-pick sources, especially historical ones (Lustick 1996, 608). They allow readers to visualize how causal inferences flow from presented data but not what contrary evidence reveals or how competing sources were adjudicated.

Beyond citation standards, analytic transparency requires the same institutional efforts that now promote data access and production transparency. After all, the strategic incentives behind publication are not disappearing. For many, publishing in peer-reviewed journals and book presses remains a critical antecedent to professional success. There is little individual motive for all but senior (and, therefore, secure) scholars to act as first movers and embrace uncompromising openness.

Given this, political science can systematically reward radical honesty by encouraging voluntary concessions or installing procedural mandates, which would remove penalties for digressions from deductive proceduralism. Regarding the former, book presses and journals can allow authors to make truthful concessions about what happened behind the scenes and then present those narratives as an integral section or appendix rather than a few throwaway lines in the introduction. This would capture the research process as it unfolded in real time, including the convoluted path taken to reach the conclusion. It also would articulate the author’s priors, including the rationale for wanting to reach certain results. Such openness requires protecting authors from referees who expect an orthodox narrative of deductive proceduralism and who might recommend excising these “honesty tracts” from manuscripts assigned for peer review.

Another idea is for journals to publish two versions of accepted papers: (1) an honest version in which authors disclose all of the immaculate and shambolic practices that produced the conclusion, and (2) a standardized disciplinary version written in deductivist language. Conversely, journals can reduce procedural malpractice—and make actual procedures more closely resemble deductive proceduralism—by delinking publication decisions from positive results. In a recent special issue, for instance, Comparative Political Studies called for all prospective authors to preregister hypotheses and specify data up-front, while also barring reviewers from accessing results when evaluating manuscripts (see footnote 2). Yet, such “results-free” reviewing enhances analytic transparency at the cost of methodological pluralism: all submissions turned out to be experimental or quantitative, likely because case-oriented qualitative work requires inductive revisions that do not comport with preregistration schemes (Findley et al. 2016, 1689).

More strongly, procedural mandates can induce radical honesty by allowing readers and reviewers to access unmediated records of research activity. One example is deploying software platforms for daily research logging, similar to what networks such as the Open Science Framework offer (see footnote 3). Like a laboratory notebook for biologists, researchers would use this log to document and timestamp every procedure undertaken while writing their papers and books, with the final updated log made available alongside the original study. Although average readers would not be expected to pore over this protected document, it nonetheless would facilitate replication by enabling reviewers to retrace the author’s precise journey from evidence to conclusion. This type of logging applies across methodological fields. For instance, keystroke-monitoring modules within Stata or R-based software could register every command inputted by quantitative analysts, with the raw procedural log allowing readers to verify whether a hypothesized model was manipulated, whether statistical tests were executed multiple times, and other activities preceding the final results.
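As a rough illustration of what such a timestamped activity log might involve, the following minimal Python sketch appends one record per research step to an append-only file. The file name, the log_step() helper, and the example entries are hypothetical assumptions for this sketch rather than features of any existing DA-RT or Open Science Framework tool; statistical packages already offer related facilities for command history (e.g., Stata's command log and R's saved history), which capture some of the same information.

import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("research_log.jsonl")  # hypothetical append-only activity log

def log_step(activity: str, detail: str = "") -> None:
    """Append one timestamped research activity to the project's running log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "activity": activity,
        "detail": detail,
    }
    with LOG_FILE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Hypothetical entries mirroring the iterative practices described above.
log_step("hypothesis_revised", "Dropped H2 after contradictory interview evidence.")
log_step("model_respecified", "Added regional fixed effects and re-ran the regression.")
log_step("case_added", "Included Senegal as an additional comparison case.")

Because each record is timestamped and appended rather than overwritten, reviewers could in principle reconstruct the order in which hypotheses, cases, and models actually changed.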

CONCLUSION

Analytic transparency is easier to implement in the biomedical and natural sciences than in political research, in which open-ended and inductive practices contravene deductive proceduralism. That deductive template is vital to the quest for scientific knowledge, but many political scientists do not follow it—and yet, the discipline still thrives. The different strategies suggested herein have varying costs, but they acknowledge that overcoming dissonance between principle and practice requires rewarding rather than penalizing radical honesty. However, these strategies will not crystallize until political scientists more seriously reflect on what analytic transparency entails in practice—and with the same efforts that have boosted data access and production transparency within the DA-RT framework.

Footnotes

1. Since 2014, special issues about DA-RT have appeared in journals including PS: Political Science & Politics, Security Studies, Qualitative and Multi-Method Research: Newsletter of the APSA Section for Qualitative and Multi-Method Research, and CP: Newsletter of the Comparative Politics Organized Section of the American Political Science Association. Online forums devoted to DA-RT debates include Dialogue on DA-RT (available at https://dialogueondart.org) and Qualitative Transparency Deliberations (available at www.qualtd.net).

2. I participated in this special issue’s blinded-evaluation process.

3. I am grateful to one anonymous reviewer for invoking this idea.


REFERENCES

APSA Committee on Professional Ethics, Rights, and Freedoms. 2012. A Guide to Professional Ethics in Political Science, revised edition. Washington, DC: American Political Science Association.
Clarke, Kevin, and Primo, David. 2012. A Model Discipline: Political Science and the Logic of Representation. New York: Oxford University Press.
Deer, John. 2011. “How the Case Against the MMR Vaccine Was Fixed.” British Medical Journal 342: 77–82.
Dunning, Thad. 2016. “Transparency, Replication, and Cumulative Learning: What Experiments Alone Cannot Achieve.” Annual Review of Political Science 19: 541–63.
Findley, Michael, Jensen, Nathan, Malesky, Edmund, and Pepinsky, Thomas. 2016. Comparative Political Studies 49: 1667–703.
Gleditsch, Nils Petter, and Metelits, Claire. 2003. “The Replication Debate.” International Studies Perspectives 4: 72–9.
“Guidelines for Data Access and Research Transparency for Qualitative Research.” 2016. CP: Newsletter of the Comparative Politics Organized Section of the American Political Science Association 26: 13–21.
“Guidelines for Data Access and Research Transparency for Quantitative Research.” 2016. CP: Newsletter of the Comparative Politics Organized Section of the American Political Science Association 26: 21–24.
Hochschild, Jennifer, Lake, David, and Hero, Rodney. 2015. “Data Access and Research Transparency Initiative (DA-RT).” PSNow Blog, November 24. Available at www.politicalsciencenow.com/data-access-and-research-transparency-initiative-da-rt. Accessed June 3, 2016.
Ioannidis, John. 2005. “Why Most Published Research Findings Are False.” PLoS Medicine 2: 696–701.
Kennedy, Peter. 2002. “Sinning in the Basement: What Are the Rules? The Ten Commandments of Applied Econometrics.” Journal of Economic Surveys 16 (4): 569–89.
King, Gary. 1995. “Replication, Replication.” PS: Political Science & Politics 28 (3): 444–52.
Kreuzer, Marcus. 2017. “Data Files, Footnotes, and Editors: Bridging Quantitative, Qualitative, and Editorial Transparency Practices.” Unpublished manuscript, last modified February 1.
Laitin, David. 2013. “Fisheries Management.” Political Analysis 21 (1): 42–7.
Laitin, David, and Reich, Rob. 2017. “Trust, Transparency, and Replication in Political Science.” PS: Political Science & Politics 50 (1): 172–5.
Louveau, Antoine, Smirnov, Igor, Keyes, Timothy, Eccles, Jacob, Rouhani, Sherin, et al. 2015. “Structural and Functional Features of Central Nervous System Lymphatic Vessels.” Nature 523: 337–41.
Lupia, Arthur, and Elman, Colin. 2014. “Openness in Political Science: Data Access and Research Transparency.” PS: Political Science & Politics 47 (1): 19–24.
Lustick, Ian. 1996. “History, Historiography, and Political Science: Multiple Historical Records and the Problem of Selection Bias.” American Political Science Review 90 (3): 605–18.
McNutt, Marcia. 2015. “Editorial Retraction.” Science. Published online May 28. doi:10.1126/science.aac6638.
Moravcsik, Andrew. 2014. “Transparency: The Revolution in Qualitative Research.” PS: Political Science & Politics 47 (1): 48–53.
Munck, Gerardo, and Snyder, Richard. 2007. Passion, Craft, and Method in Comparative Politics. Baltimore, MD: The Johns Hopkins University Press.
O’Boyle, Ernest Hugh Jr., Banks, George Christopher, and Gonzalez-Mulé, Erik. 2017. “The Chrysalis Effect: How Ugly Initial Results Metamorphosize into Beautiful Articles.” Journal of Management 43 (2): 376–99.
Open Science Collaboration. 2015. “Estimating the Reproducibility of Psychological Science.” Science 349: 943.
Rosenthal, Robert. 1979. “The ‘File-Drawer Problem’ and Tolerance for Null Results.” Psychological Bulletin 86 (3): 638–41.
Schwartz-Shea, Peregrine, and Yanow, Dvora. 2016. “Legitimizing Political Science or Splitting the Discipline? Reflections on DA-RT and the Policy-Making Role of a Professional Association.” Politics & Gender 12 (3): 1–19.
Sterling, Theodore. 1959. “Publication Decisions and Their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa.” Journal of the American Statistical Association 54 (285): 30–34.
Waldner, David. 2007. “Transforming Inferences into Explanations: Lessons from the Study of Mass Extinctions.” In Theory and Evidence in Comparative Politics and International Relations, ed. Richard Ned Lebow and Mark Irving Lichbach, 145–76. New York: Palgrave Macmillan.
Yom, Sean. 2015. “From Methodology to Practice: Inductive Iteration in Comparative Research.” Comparative Political Studies 48 (5): 616–44.