
On Separation of Powers and Obfuscation in US Supreme Court Opinions

Published online by Cambridge University Press:  01 December 2022

Daniel Lempert*
Affiliation:
Department of Politics, State University of New York, Potsdam, New York, USA

Abstract

A longstanding debate in American judicial politics concerns whether the US Supreme Court anticipates or responds to the possibility that Congress will override its decisions. A recent theory proposes that opinions that are relatively hard to read are more costly for Congress to review, and that as a result, the Court can decrease the likelihood of override from a hostile Congress by obfuscating its opinions (i.e., writing opinions that are less readable when congressional review is a threat). I derive a straightforward but novel empirical implication of this theory; I then show that the implication does not in fact hold. This casts serious doubt on the claim that justices strategically obfuscate opinion language to avoid congressional override. I also discuss sentence tokenization as a source of measurement error in readability statistics for judicial opinions.

Type
Research Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the Law and Courts Organized Section of the American Political Science Association

A longstanding debate in American judicial politics concerns whether the US Supreme Court anticipates or responds to the possibility that Congress will override its decisions. There is reason to believe that the Court takes congressional preferences into account. Strategic justices, “who care about the impact of the Court’s policies,” have reason to “avoid congressional actions that undercut those policies” (Baum 2016, 141). However, justices may be limited in their desire or ability to act strategically for any number of reasons; for example, they may view overrides as too uncommon to worry about, see the likelihood of an override as too difficult to predict, or expect to shape policy even subsequent to a potential override (Baum 2006, 120). Here, I focus on judicial writing style as a means to affect the probability of an override.

A widely cited paper, Owens, Wedeking, and Wohlfarth (2013, 38–40), theorizes that opinions that are relatively hard to read are more costly for Congress to review.Footnote 1 Thus, the paper argues, the Court can decrease the likelihood of override from a hostile Congress by obfuscating its opinions: writing opinions that are less readable when congressional review is a threat. Owens, Wedeking, and Wohlfarth (2013, 48) provides evidence that the Court’s majority opinions are relatively less readable when the Court is constrained by Congress (in a sense to be made precise below).

On the one hand, as Owens, Wedeking, and Wohlfarth (2013, 52) acknowledges, this result is, broadly speaking, inconsistent with a fair bit of empirical research about congressional influence on the Court (see, e.g., Owens 2010; Owens 2011; Sala and Spriggs 2004; Segal 1997; Segal, Westerland, and Lindquist 2011).Footnote 2 On the other hand, the theory in Owens, Wedeking, and Wohlfarth (2013) – which I will refer to as strategic obfuscation theory – has an important strength: it is elaborate, in the sense used by, for example, Rosenbaum (2010, Ch. 19). This is to say, in short, that strategic obfuscation theory’s proposed causal mechanism has several different testable implications.Footnote 3 In this research note, I propose and test one straightforward implication of the theory’s causal mechanism.

Theory

Strategic obfuscation theory draws loosely on the literature formally modeling how courts or other agencies can raise the costs of review for supervisory bodies (e.g., Staton and Vanberg 2008). The initial premise of strategic obfuscation theory is intuitive and well supported: Congress has limited resources and time (e.g., Cox and McCubbins 2005; Lee 2010). As such, the costs of taking a given action are always relevant for Congress. Owens, Wedeking, and Wohlfarth (2013, 39) cites, for example, collective action problems, the need to regularly claim credit and produce benefits for constituents, and the shrinking size of staffs as factors limiting congressional capacity to act on issues, particularly those that are complex.

The novel proposal in Owens, Wedeking, and Wohlfarth (2013) is that the Court can raise the costs for Congress to review and potentially override its opinions by obfuscating the language therein. The argument is as follows:

Obfuscated Court opinions can generate heightened review costs and thereby deter congressional responses. To understand complex and obscure Court decisions, Congress must expend additional – and scarce – resources. A member who wishes to alter the Court’s policies or otherwise punish the Court must examine the central logic and tenets of the Court’s opinions and may even need to examine how the opinion compares to others written in the past by the Court. In some cases, the Court’s opinion may be clear. In those cases, members may easily internalize the degree to which they favor the political content of the majority opinion. Yet the Court also has the ability to obfuscate opinions by making them less readable. In those instances, the heightened legislative costs required to address the opinion may increase. By writing a less readable opinion, justices might craft a desired judicial policy while simultaneously deterring a legislative response by making it more difficult for Congress to address it (Owens, Wedeking, and Wohlfarth 2013, 39).

Owens, Wedeking, and Wohlfarth (2013, 39–40) recognizes that making opinions less readable will not absolutely bar review, and that obfuscation has costs; for example, it may cause lower courts or relevant agencies to implement Court policies inaccurately (see also Black et al. 2016). Nonetheless, strategic obfuscation theory proposes that when the threat of congressional override is great – in particular, when the Court is constrained – the Court can reduce the chances of review by obfuscating the language in the majority opinion. To this end, under the proposed causal mechanism, the majority opinion author intentionally obfuscates when facing a hostile Congress. Owens, Wedeking, and Wohlfarth (2013) presents results indicating that majority opinions are written less readably when the Court is constrained; the magnitude of the effect is as much as one full grade level.Footnote 4

Owens, Wedeking, and Wohlfarth (2013) does not discuss dissenting opinions or the theoretically predicted behavior of dissenters. Nonetheless, as I argue here, strategic obfuscation theory has clear implications for how dissenting opinions should be written. Consider the incentives of the dissenters. When should dissenters seek to increase the probability of review? Assuming policy-motivated actors (as does strategic obfuscation theory), they should do so when they prefer the policy that would result from an override to the policy announced in the majority opinion. Let C be the policy announced in the Court’s majority opinion, D the dissenters’ most preferred policy, and R the policy that results from congressional review and override. Given a very basic spatial model, the dissenters prefer an override whenever $|D-R| < |D-C|$ – that is, whenever the dissenters are closer to Congress than to the Court majority.Footnote 5 Since, under strategic obfuscation theory, the probability of review increases as obfuscation decreases, dissenters should write particularly readably when the majority is constrained but the dissenters prefer an override on policy grounds. The benefit to dissenters of writing more clearly when the Court is constrained is in making an override – and thus a policy outcome they prefer to that resulting from the majority opinion – more likely.Footnote 6

To summarize, under strategic obfuscation theory, while the Court majority has incentive to obfuscate when the Court is constrained, the dissent does not. Rather, the dissenters have the opposite incentive insofar as they prefer the policy that would result from an override: they should write particularly readably.Footnote 7 By making it easier for Congress to understand what the majority opinion implies, and where the majority opinion has erred, the dissenters can reduce the costs of review for Congress, and make it more likely that an opinion they disagree with is overridden. I test this implication of strategic obfuscation theory below.

Measurement, sample, hypothesis

Measurement of variables

Constructing a dependent variable requires a measure of obfuscation, that is, (lack of) readability.Footnote 8 Following Owens, Wedeking, and Wohlfarth (2013, 42–44), I use the Coleman-Liau Readability Index (CLI). This index is a function of word and sentence length. A key advantage of the measure is ease of interpretation, since it is scaled to approximate the (US) grade level of education needed to understand a text. CLI is defined as:

(1) $$ \mathrm{CLI}\equiv 5.88\left(\frac{\mathrm{Number}\ \mathrm{of}\ \mathrm{Letters}}{\mathrm{Number}\ \mathrm{of}\ \mathrm{Words}}\right)-29.6\left(\frac{\mathrm{Number}\ \mathrm{of}\ \mathrm{Sentences}}{\mathrm{Number}\ \mathrm{of}\ \mathrm{Words}}\right)-15.8. $$

Thus, intuitively, a text becomes more readable (or less obfuscated) as average word length and average sentence length decrease. Later, I discuss implementation specifics, including the non-trivial challenges associated with measuring CLI’s constituent terms.
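Computationally, Eq. 1 is a one-liner once the three counts are in hand; a minimal sketch (the function name and counts below are illustrative, not from the replication code):

```python
def coleman_liau(n_letters, n_words, n_sentences):
    """Coleman-Liau Index (Eq. 1): higher values mean less readable text."""
    return (5.88 * n_letters / n_words
            - 29.6 * n_sentences / n_words
            - 15.8)

# Illustrative counts: 500 letters, 100 words, 5 sentences.
grade = coleman_liau(500, 100, 5)  # approx. 12.1, i.e., roughly a 12th-grade text
```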

The most straightforward way to construct an appropriate dependent variable is to use the average CLI for dissents in a given case; I call this variable Dissent CLI. Footnote 9 Under strategic obfuscation theory, the expectation is that Dissent CLI decreases when the Court (i.e., the majority) becomes constrained, since the dissenters then have incentive to make the opinions more readable.

The key independent variables are measures of congressional constraint. In the results I present in the main text, I follow Owens, Wedeking, and Wohlfarth’s (2013) operationalization in all respects.Footnote 10 Conceptually, the measures are set to 0 when the Court is unconstrained – that is, when it is located ideologically between the most extreme congressional pivots (very generally, see Krehbiel 1998). When the Court is constrained – that is, when it is to the right of the rightmost pivot or to the left of the leftmost pivot – it takes on the value of the ideological distance between the Court and the pivot closest to it. Figure 1 illustrates. If the scenario shown on the top axis obtains, the measure of constraint equals 0. If the scenario on the bottom holds, the measure equals the Euclidean distance between C and PL.

Figure 1. The Court (C) is unconstrained on the top axis, since it is between the leftmost congressional pivot (PL) and the rightmost pivot (PR). It is constrained on the bottom axis, because it is to the left of the leftmost pivot. Were the Court to the right of the rightmost pivot, it would also be constrained.

It remains to locate the Court and relevant pivots in ideological space. My approach is exactly that of Owens, Wedeking, and Wohlfarth (2013). I locate actors using Judicial Common Space (JCS) scores (Epstein et al. 2007); that is, 1st Dimension DW-NOMINATE scores for legislators (Lewis et al. 2021) and Martin-Quinn (2002) scores transformed into DW-NOMINATE space for justices. JCS scores thus range from −1 (most liberal) to 1 (most conservative). The Court’s ideal point in a given case is identified as that of the median justice in the majority coalition. There are four different ways of locating the relevant pivots, each motivated by a different model of congressional policymaking (Owens, Wedeking, and Wohlfarth 2013, 45). For each of the four models, the relevant pivots are the leftmost and the rightmost of the following actors:

  1. Filibuster Pivot Model. For the 94th Congress and after: the House median, the 40th most conservative Senator (i.e., the Senator with the 40th greatest JCS score), and the 60th most conservative Senator. Before the 94th Congress: the House median, the Senator at the 33rd percentile of conservatism, and the Senator at the 67th percentile of conservatism.Footnote 11

  2. Chamber Median Model. The median member of the Senate and the median member of the House.

  3. Committee Median Model. The median member of the Senate Judiciary Committee and the median member of the House Judiciary Committee.Footnote 12

  4. Majority Party Median Model. The median member of the majority party in the Senate, and the median member of the majority party in the House.

There are thus four variants of the key independent variable measuring the degree to which the Court majority is constrained: Distance to Filibuster Pivot, Distance to Chamber Median, Distance to Committee Median, and Distance to Majority Party Median. Each is defined as the absolute difference between the ideal point of the Court (i.e., the Judicial Common Space score for the median of the majority coalition in a case) and the ideal point of the closest pivot, as defined just above, if the Court is constrained, and 0 otherwise. Thus, as the Court becomes “more constrained,” these variables increase. Below, I refer to these variables collectively as the constraint variables.
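The construction of the constraint variables can be sketched as follows (the function name and the JCS scores below are hypothetical, not taken from the original replication code):

```python
def constraint(court, pivot_left, pivot_right):
    """Figure 1's measure: 0 when the Court lies between the pivots,
    otherwise the distance from the Court to the nearer pivot.
    All inputs are Judicial Common Space scores in [-1, 1]."""
    if pivot_left <= court <= pivot_right:
        return 0.0
    return min(abs(court - pivot_left), abs(court - pivot_right))

# Hypothetical scores: pivots at -0.3 and 0.4.
unconstrained = constraint(0.1, -0.3, 0.4)   # 0.0: Court between the pivots
constrained = constraint(-0.5, -0.3, 0.4)    # approx. 0.2: left of the left pivot
```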

I include the same control variables as Owens, Wedeking, and Wohlfarth (2013): Lower Court Conflict, Case Complexity, Precedent Alteration, Judicial Review, and Coalition Heterogeneity. The definitions follow those given in Owens, Wedeking, and Wohlfarth (2013, 46–47). Unless noted, the variables are based on information in Spaeth et al. (2017).

Lower Court Conflict equals 1 if the Court notes that the sole reason it granted a case is to resolve a conflict in the lower federal or state courts, and 0 otherwise. Case Complexity is the number of amicus briefs in a case. These data are from Collins (2008) and Box-Steffensmeier and Christenson (2012). Precedent Alteration equals 1 if the majority opinion alters existing Court precedent, and 0 otherwise. Judicial Review equals 1 if the majority struck down a federal law, and 0 otherwise. Coalition Heterogeneity is the standard deviation of Martin-Quinn (2002) scores for justices voting in a given majority or minority coalition.Footnote 13

Sample

Building on the Supreme Court Database (Spaeth et al. 2017), I construct a dataset of all signed Supreme Court majority opinions from October Terms 1947–2012. This yields 6,699 majority opinions altogether, or 6,690 once observations with missing covariate data are dropped. I am chiefly concerned with those cases where at least one dissent was authored.Footnote 14 There are 3,374 such cases, or 3,372 once observations with missing covariate data are dropped.

I exclude one (small) subset of cases for reasons discussed above: those where the majority is constrained, but the dissenters prefer the policy in the majority opinion to that which would result from a congressional override (i.e., where $|D-R| > |D-C|$). To locate the policy resulting from an override (R), I take the midpoint between the House and Senate chamber medians for the Filibuster Pivot, Chamber Median, and Committee Median models, and the midpoint between the House and Senate majority party medians for the Majority Party Median Model (for detailed discussion, see, e.g., Harvey and Friedman 2006, 540–542). I locate the dissent (D) at the median of the dissenting justices’ ideal points. (And as stated above, I follow Owens, Wedeking, and Wohlfarth (2013) in locating the Court majority opinion (C) at the median of the majority coalition.) This leaves between 2,755 and 2,964 cases in the estimation sample, depending on the theoretical model used to locate congressional pivots. Notably, this sample is significantly larger (and slightly broader in temporal scope) than the sample in Owens, Wedeking, and Wohlfarth (2013), which is a random sample of 529 majority opinions from 1953 to 2008; thus, a relative lack of power is not a concern.
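The sample restriction reduces to the spatial comparison just described; a sketch under the same operationalization (all ideal points below are hypothetical, chosen only for illustration):

```python
def prefers_override(d, c, r):
    """True when the dissenters prefer an override: |D - R| < |D - C|."""
    return abs(d - r) < abs(d - c)

# Hypothetical ideal points. The override outcome R is located at the
# midpoint of the House and Senate chamber medians (Chamber Median model).
r = (-0.25 + -0.15) / 2          # R = -0.2
keep_case = prefers_override(-0.4, 0.3, r)  # True: dissent closer to Congress
```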

Hypothesis

As discussed above, strategic obfuscation theory implies that dissent authors will write more readably when the Court majority is constrained. Thus, the hypothesis is that if the Court majority’s distance to a relevant pivot increases, readability of dissenting opinions increases. Precisely, as Distance to Filibuster Pivot, Distance to Chamber Median, Distance to Committee Median, and Distance to Majority Party Median, respectively, increase, Dissent CLI decreases.

Analysis

For each of the four constraint variables (Distance to Filibuster Pivot, Distance to Chamber Median, Distance to Committee Median, and Distance to Majority Party Median), I estimate three models predicting a dissenting opinion’s CLI. The Baseline model includes no controls, the Add Controls model includes the controls mentioned above (Lower Court Conflict, Case Complexity, Precedent Alteration, Judicial Review, Coalition Heterogeneity), while the Add Fixed Effects (FEs) model includes these controls and also fixed effects for dissenting opinion author and primary issue area. The models are Ordinary Least Squares (OLS) regressions with standard errors clustered by term.Footnote 15
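For readers who want the mechanics of the clustered standard errors, the estimator can be sketched in a few lines of NumPy. This is the basic CR0 cluster-robust sandwich, without the finite-sample corrections that statistical packages typically apply, and not necessarily the exact implementation used here:

```python
import numpy as np

def ols_clustered(y, X, groups):
    """OLS point estimates with cluster-robust (CR0) standard errors.
    y: (n,) outcome; X: (n, k) design matrix including a constant column;
    groups: (n,) cluster labels (here, the Court term of each case)."""
    y = np.asarray(y, float)
    X = np.asarray(X, float)
    groups = np.asarray(groups)
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta
    # "Meat": sum over clusters of outer products of per-cluster scores.
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(groups):
        score = X[groups == g].T @ resid[groups == g]
        meat += np.outer(score, score)
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))
```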

Table 1 presents regression coefficients and standard errors for the twelve models. For none of the specifications is the coefficient on the constraint variable statistically significant and negative; this is clearly contrary to theoretical predictions. In fact, all of the coefficients are positive, and statistically significant in seven specifications, including all three specifications where Distance to Filibuster Pivot is the variable measuring constraint. Thus, there is no indication that dissenters write more readably when the majority is constrained, contrary to the expectation derived from strategic obfuscation theory.

Table 1. Dissenting Opinion Readability as a Function of Court Majority Constraint

Note: DV: Dissenting opinion CLI. OLS coefficients and standard errors (clustered by term), for twelve models: four variants of the distance to the relevant pivot, and three model specifications. See text for details.

* p < 0.05.

Discussion

Can these results be made consistent with strategic obfuscation theory? One argument, based on a certain form of unobserved confounding, is as follows. Suppose that there is a case-level confounder that happens to be positively associated with an opinion’s CLI, and also (by unfortunate chance) with the constraint variables. On the face of it, at least the first association is not unreasonable: the fixed effects for primary issue are relatively crude, consisting of 13 issue categories (Spaeth et al. 2017). If such a confounder exists, the results in Table 1 could still hold if this confounding overwhelms dissenters’ efforts to write readably; in other words, one might propose that the coefficients in Table 1 would be even greater (due to the confounding) if dissenters did not make a particular effort to write readably when strategically warranted.Footnote 16

One way to rule this out is to examine how the putative effects of the constraint variables vary between majority opinions and dissents. Even under the proposed confounding, the positive association between the constraint variables and opinion CLI should be greater for majority opinions, whose authors are trying to obfuscate, than for dissenting opinions, whose authors are trying to write readably.

I test for this possibility by analyzing both majority opinions and dissents in a single model; specifically, I add the majority opinion associated with each case whose dissent(s) are analyzed in Table 1. I estimate the same models shown in Table 1, except I include a binary variable indicating whether a given opinion is (= 1) or is not (= 0) a majority opinion, which I interact with the constraint variable in the model. If there is a stronger positive association between a constraint variable and CLI for majority opinions, the coefficient on the interaction term should be positive.

The results, given in Table 2, are contrary to the prediction derived from strategic obfuscation theory. For none of the 12 specifications is the coefficient attending the interaction term positive and statistically significant. In 11 of 12 specifications, the coefficient is negative; in nine of those 11 – in all but those where Distance to Majority Party Median is the measure of constraint – the coefficient is statistically significant. Thus, the weight of the evidence indicates that the effect of majority constraint is greater for dissents than for majority opinions. That is, dissenters apparently increase their level of obfuscation more than the majority, as the threat of congressional override increases.Footnote 17

Table 2. Opinion Readability — Conditional on Opinion Status (Majority vs. Dissent) — as a Function of Court Majority Constraint

Note: DV: Opinion CLI. OLS coefficients and standard errors (clustered by term), for twelve models: four variants of the distance to the relevant pivot, and three model specifications. The table presents the coefficient and standard error for the key interaction term, which should be positive if, as strategic obfuscation theory predicts, constraint increases obfuscation more for majority opinions than for dissenting opinions. See text for details.

* p < 0.05.

These results are robust. A similar pattern of results obtains if I modify the sample used in Table 2 by adding unanimous majority opinions. The same is true for a slightly different analytical approach: setting the case as the unit of analysis and the difference between the majority and dissent CLIs as the dependent variable. These results are in the Appendix, along with other specifications involving an alternative measure of ideology, definition of constraint, and location of Court majority opinion. In each of those specifications, as in all but one specification above, the dissenters’ response to constraint is closer to the response theoretically predicted for the majority, than the response of the majority itself.

In short, not only is there no evidence that dissenters write more readably when the majority is constrained, but there is not even evidence that majority opinion authors obfuscate as a function of Court constraint to a greater degree than dissenters. This is incompatible with strategic obfuscation theory, since majority obfuscation is strategically warranted when the Court is constrained, and dissenting opinion obfuscation is strategically unwarranted.

In sum, I have shown that a straightforward empirical implication derived from the theory of strategic obfuscation receives no support. This casts serious doubt on the theory – specifically, its proposed causal mechanism that justices strategically manipulate writing style to avoid review from Congress. To be explicit, strategic obfuscation theory has no straightforward theoretical explanation for why justices would seek to affect the probability of override when in the majority, but those same justices would not seek to do so when in dissent. (Even worse for the theory, justices in dissent tend to write in ways that are not just nonstrategic but strategically counterproductive.) Turning now from the theoretical to the practical, I consider the potential role of measurement error in the original result supporting strategic obfuscation theory.

On calculating accurate sentence counts

In this section, I sketch my approach to calculating the number of sentences in a legal text and show that it is a major improvement on the approach used by Owens, Wedeking, and Wohlfarth (2013). Then I replicate the original analysis in Owens, Wedeking, and Wohlfarth (2013) on my larger sample with an improved sentence counter, showing that the coefficients on the constraint variables are several times smaller than those originally reported; the discrepancy is likely due in large part to measurement error in the original study.

Recall that the definition of CLI (given in Eq. 1) requires counting the number of letters, words, and sentences in a text. Counting the number of letters and words in a legal opinion is relatively straightforward. The challenge is accurately counting the number of sentences in a text.

A naive approach is to count as a sentence any segment of text that ends in an end-of-sentence punctuation mark like “.”, “?”, or “!”. This is not satisfactory, however, since abbreviations within sentences can also contain periods (see also initials and ellipses). And legal opinions are full of abbreviations – most notably, but not only, as part of citations: U.S., L.Ed, F.2d, and so on.
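A short illustration of why the naive rule fails on legal text (the citation string below is a constructed example):

```python
import re

citation = "See Washington v. Recuenco, 548 U.S. 212, 217, 126 S.Ct. 2546."

# The naive rule counts every ".", "?", or "!" as a sentence boundary.
naive_count = len(re.findall(r"[.?!]", citation))
# naive_count is 6 -- six "sentences" in what is really a single citation.
```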

I implement my approach in the Python programming language, relying on tools in Bird, Loper, and Klein (2009). The first step is to use the unsupervised sentence boundary detection method (or sentence tokenizer) in Kiss and Strunk (2006). Specifically, I train the tokenizer on a corpus of appellate opinions. Essentially, the tokenizer looks for collocations: pairs (more precisely, n-tuples) of text strings with a period between them that are likely to be abbreviations. I also add a set of abbreviations from the pre-trained English-language sentence tokenizer from Kiss and Strunk (2006). In my application, this only gives a slight improvement over the naive approach.

Thus, I manually augment the abbreviations and collocations detected by the Kiss and Strunk (2006) tokenizer with various “legal” abbreviations and collocations (e.g., civ. p., id. at, u.s.c., and many more). This step gives the greatest improvement over the naive method. I then take sentences tokenized by the augmented tokenizer and disallow certain “sentences” that are unlikely to actually be sentences; for example, those that start with a lower-case letter, those that contain fewer than three words, and those that end with certain abbreviations. This gives a further slight improvement.
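The actual pipeline uses NLTK’s trained Punkt model; as a rough pure-Python approximation of the augmentation and filtering steps, one might write the following (the abbreviation list and heuristics here are a small illustrative subset, not the real implementation):

```python
import re

# Illustrative subset of "legal" abbreviations; the actual list is far larger.
LEGAL_ABBREVS = {"v.", "u.s.", "f.2d", "l.ed.", "id.", "cf.", "u.s.c."}

def tokenize_sentences(text):
    # Crude boundary rule standing in for Punkt's statistical model:
    # split after end-of-sentence punctuation followed by whitespace.
    candidates = re.split(r"(?<=[.?!])\s+", text)
    sentences, buf = [], ""
    for cand in candidates:
        if not cand:
            continue
        buf = f"{buf} {cand}".strip()
        # If the segment ends in a known abbreviation, the split was
        # spurious: keep accumulating into the same sentence.
        if buf.split()[-1].lower() in LEGAL_ABBREVS:
            continue
        sentences.append(buf)
        buf = ""
    if buf:
        sentences.append(buf)
    # Post-filter implausible "sentences": too short, or lower-case start.
    return [s for s in sentences
            if len(s.split()) >= 3 and not s[0].islower()]
```

On a citation-heavy string such as “The case is governed by Washington v. Recuenco, 548 U.S. 212. We affirm the judgment below. id. at 215.”, this sketch returns two sentences rather than the six a period count would suggest.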

Owens, Wedeking, and Wohlfarth (2013, 43) gives a sentence count for one of the opinions in its sample: Washington v. Recuenco (548 U.S. 212), which is classified as having 400 sentences (indeed, the opinion includes 400 periods). My method counts 99 sentences. The opinion is reproduced in the Appendix. An exact manual count of sentences is not entirely straightforward, because readers might occasionally disagree on what constitutes a sentence, but in any case, it is clear that 99 is at worst a slight overestimate, and 400 overstates the number of sentences by at least a factor of 4.Footnote 18 This overcount of course distorts the CLI: while Owens, Wedeking, and Wohlfarth (2013) gives a CLI score of 6.1 (implying that the opinion is comprehensible to a sixth-grader), my estimate of the CLI is 13.3.
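To see how sentence overcounting depresses CLI, consider Eq. 1 with hypothetical counts (these are NOT the actual counts for this opinion):

```python
def cli(n_letters, n_words, n_sentences):
    # Eq. 1
    return 5.88 * n_letters / n_words - 29.6 * n_sentences / n_words - 15.8

# Hypothetical opinion: 24,000 letters, 5,000 words.
letters, words = 24_000, 5_000
accurate = cli(letters, words, 200)  # plausible true sentence count
inflated = cli(letters, words, 800)  # every period treated as a boundary
# Quadrupling the sentence count shaves roughly 3.5 grade levels off CLI.
```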

The face validity of some other scores cited in Owens, Wedeking, and Wohlfarth (2013, 43, fn. 8) is also open to doubt; for example, Crane v. Cedar Rapids and Iowa City Railway Co. (395 U.S. 164) starts with these two (not atypical) sentences:

The Federal Safety Appliance Act of 1893 requires interstate railroads to equip freight cars “with couplers coupling automatically by impact,” but does not create a federal cause of action for employees or nonemployees seeking damages for injuries resulting from a railroad’s violation of the Act. The Federal Employers’ Liability Act of 1908 provides a cause of action for a railroad employee based on a violation of the Safety Appliance Act, in which he is required to prove only the statutory violation and the carrier is deprived of the defenses of contributory negligence and assumption of risk.

Owens, Wedeking, and Wohlfarth (2013, 43) scores this opinion’s CLI as 4.3, while my estimate is 12.6. In short, it is very likely that measurement error due to a naive sentence count affects the dependent variable in the original analysis.

Ex ante, it is not obvious how much this biases the central results in Owens, Wedeking, and Wohlfarth (2013): the rank ordering of opinions’ CLIs could be more or less preserved if the number of sentences is overestimated globally. I thus replicate the main analyses, using the larger sample described above (and the improved sentence counter/CLI score). Table 3, analogous to Table 1 in Owens, Wedeking, and Wohlfarth (2013), presents these results.

Table 3. Majority Opinion Readability as a Function of Court Majority Constraint

Note: N = 6,690. DV: Majority opinion CLI. OLS coefficients; standard errors clustered by term. Constant and fixed effects for primary issue area and opinion author not shown. See text for details.

* p < 0.05.

The coefficients on the key independent variables (Distance to Filibuster Pivot, Distance to Chamber Median, Distance to Committee Median, and Distance to Majority Party Median) remain positive, as strategic obfuscation theory predicts. But whereas in the original analysis all except one of the key independent variables (Distance to Majority Party Median) had a statistically significant attending coefficient, now none remain significant. Even more to the point, the coefficients are much smaller than those reported in the original analysis. A one-unit increase in Distance to Filibuster Pivot increases CLI by less than a quarter of a grade level; in fact, Distance to Filibuster Pivot ranges only from 0 to about 0.55, so the maximum in-sample effect is approximately an eighth of a grade level.Footnote 19

This, then, explains how the results in Tables 1 and 2 can hold despite the results in Owens, Wedeking, and Wohlfarth (2013). The original article, in all likelihood, vastly overestimates the coefficients on the key independent variables. Once those results are corrected, by using a more accurate sentence tokenizer to count sentences, it is clear that the coefficients in Table 1 are larger than those in Table 3, which effectively implies the results in Table 2.Footnote 20

Conclusion

This research note has presented two central results. First, I have shown that a straightforward empirical implication derived from the theory of strategic obfuscation receives no support. Specifically, dissenting justices who prefer a congressional override on policy grounds do not write more readably, so as to increase the probability of an override, when the majority is constrained (and thus subject to potential override). This casts serious doubt on the theory – specifically, on its proposed causal mechanism: that justices strategically manipulate writing style to affect the probability of congressional review.

Still, there are positive lessons to be learned from this result. Because strategic obfuscation theory is elaborate (in the sense discussed above), it allowed for testing of multiple implications. This must be acknowledged as a strength of the theory, even though it did not ultimately find empirical support. As Rosenbaum (2015) eloquently points out, scholars should hesitate to accept theories that make only a single prediction or a few related predictions; an elaborate theory, which makes several independent predictions, is to be preferred.Footnote 21 True, the evidence from testing several implications of an elaborate theory may be ambiguous or even disappointing, but “inconsistency and uncertainty are necessary stepping stones on a path to greater consistency and greater certainty” (Rosenbaum 2015, 209).

The second central result of the note is that the initial analysis supporting strategic obfuscation theory, in Owens, Wedeking, and Wohlfarth (2013), was likely affected by measurement error in the outcome variable, CLI. Specifically, overestimation of the number of sentences in the opinions led to underestimation of CLI scores, which in turn appears to have inflated the effect estimates by a factor of at least eight.
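The direction of this bias follows mechanically from the Coleman-Liau formula, CLI = 0.0588L − 0.296S − 15.8, where L is the average number of letters and S the average number of sentences per 100 words. A minimal sketch, with purely hypothetical counts chosen for illustration:

```python
def coleman_liau(letters: int, words: int, sentences: int) -> float:
    """Coleman-Liau Index computed from raw counts."""
    L = 100.0 * letters / words      # letters per 100 words
    S = 100.0 * sentences / words    # sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8

# Hypothetical opinion: 10,000 letters, 2,000 words, 100 actual sentences.
correct = coleman_liau(10_000, 2_000, 100)    # ~12.1, roughly a 12th-grade level
# A tokenizer that splits at legal citations might count twice as many
# sentences, deflating the score:
deflated = coleman_liau(10_000, 2_000, 200)   # ~10.6
```

Because sentence count enters the index linearly and negatively, a roughly uniform overcount compresses all scores downward; whether it also reorders opinions depends on how the overcount varies with citation density across opinions.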

This has implications not just for strategic obfuscation theory but more generally for researchers who seek to calculate readability metrics for legal texts. In particular, researchers should ensure that the sentence tokenizer (segmenter) used to calculate the number of sentences is adapted to the peculiarities of judicial opinions. Most importantly, the tokenizer should account for the abbreviations that are common in legal texts but not in other English-language texts. The customized tokenizer I use here, which outperforms several off-the-shelf solutions, is included with the replication materials.Footnote 22
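To make the point concrete, here is a deliberately minimal sketch of an abbreviation-aware sentence counter; the abbreviation list and function are illustrative only, and far simpler than the tokenizer in the replication materials:

```python
# A few abbreviations common in US federal opinions (illustrative, not exhaustive).
LEGAL_ABBREVS = {"v.", "U.S.", "S.Ct.", "F.2d", "F.3d", "Cir.", "Id.", "Stat.", "Cong."}

def count_sentences(text: str) -> int:
    """Count sentence-ending tokens, treating legal abbreviations as non-terminal."""
    count = 0
    for token in text.split():
        if token in LEGAL_ABBREVS:
            continue  # e.g., the "v." in a case name does not end a sentence
        if token.endswith((".", "?", "!")):
            count += 1
    return max(count, 1)

text = "See Smith v. Jones, 410 U.S. 113 (1973). The Court reversed."
count_sentences(text)                            # 2 sentences
sum(1 for t in text.split() if t.endswith("."))  # a naive period count gives 4
```

A production tokenizer must also handle reporter citations, section symbols, and numbered lists; general-purpose tools such as NLTK’s punkt can alternatively be retrained or supplied with a custom abbreviation list to the same end.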

Acknowledgments

I thank Jeff Budziak for helpful discussion and William Minozzi for suggesting a data source. I appreciate the particularly constructive comments from the Editor and three reviewers.

Competing Interest

The author declares no competing interests exist.

Data Availability Statement

Replication materials for this article are available at the Journal of Law and Courts’ Dataverse archive.

Supplementary Materials

To view supplementary material for this article, please visit http://doi.org/10.1017/jlc.2022.7.

Footnotes

1 As of 7/5/21, the paper has been cited 71 times according to Google Scholar, including by at least three textbooks.

2 But see King (2007) and, somewhat more generally, Clark (2009).

3 The terminology dates back to R.A. Fisher and William G. Cochran (Rosenbaum 2015). The perspective is of course consistent with mainstream philosophy of science, which prefers theories that make relatively more falsifiable predictions; see Rosenbaum (2017, Ch. 7) discussing, among others, Popper (2002 [1959]).

4 This is as measured by the Coleman-Liau Index of readability. I describe the measure and detail other empirical specifics below.

5 I discuss operationalization of these policy locations below.

6 As stated just above, this is conditional on |D − R| < |D − C|; my empirical tests below account for this caveat.

7 One might object that dissenters may wish to avoid overrides from Congress even if they favor Congress’ policy preferences over those of the majority, perhaps for institutional reasons. There are two answers to this point. First, we know that justices regularly and explicitly invite overrides from Congress (e.g., Hausegger and Baum 1999; Rice 2019); thus, if explicit invitations are normatively acceptable, surely so are implicit actions that increase the chances of review. Second, even granting for the moment that dissenting justices do not intentionally seek to increase the readability of their opinions when Congress is hostile to the majority, they at the very least have no reason to affirmatively obfuscate under strategic obfuscation theory. This is a slightly different, in a sense weaker, implication; but given the results I present below, the distinction turns out to be irrelevant.

8 In the relevant literature, readability is also referred to as (rhetorical) clarity.

9 In practice, this involves combining all dissents for a given case into a single text file and calculating a CLI for the combined text. The resulting score is thus a weighted average over all dissents in a case, where the weights are a function of individual opinion length.
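Because CLI is linear in letters-per-word and sentences-per-word, the CLI of concatenated texts is exactly the word-count-weighted average of the individual CLIs. A quick check with made-up counts:

```python
def cli(letters: int, words: int, sentences: int) -> float:
    # Coleman-Liau Index from raw counts
    return 0.0588 * (100 * letters / words) - 0.296 * (100 * sentences / words) - 15.8

# Two hypothetical dissents: (letters, words, sentences)
a = (5_000, 1_000, 40)
b = (2_400, 500, 30)

combined = cli(a[0] + b[0], a[1] + b[1], a[2] + b[2])
weighted = (a[1] * cli(*a) + b[1] * cli(*b)) / (a[1] + b[1])
# combined and weighted agree up to floating-point error
```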

10 I do this to limit researcher degrees of freedom (Simmons, Nelson, and Simonsohn 2011). But there are any number of reasonable alternative variable specifications; I discuss some of these in the Supplementary Appendix.

11 This is of course because before the 94th Congress, two-thirds of senators were required to vote for cloture to end a filibuster, while starting with the 94th Congress, only 60 senators were required. I refer to percentiles since in the earliest years of my sample there were only 96 senators.

12 I identified committee members using two datasets: Swift et al. (2009) and Stewart and Woon (2017).

13 In other words, if a given observation is a majority opinion, Coalition Heterogeneity is the standard deviation of the Martin-Quinn scores for the majority justices; if the observation is a dissenting opinion (or opinions), Coalition Heterogeneity is the standard deviation of the Martin-Quinn scores for the dissenting justices. I define dissenting opinion Coalition Heterogeneity for a single dissenting justice as 0.

14 I exclude from the definition of dissent those dissents in part where the “dissenters” were coded as agreeing with the majority disposition by Spaeth et al. (2017).

15 Again, this follows the procedure in Owens, Wedeking, and Wohlfarth (2013), though there, of course, the fixed effects are for the majority opinion author, since the outcome is majority opinion CLI. I combine all justices writing fewer than 30 opinions into a single “Other Justice” category, to allow for valid estimation of the clustered standard errors.

16 But note that such confounding would imply that the results in Owens, Wedeking, and Wohlfarth (2013) are also overestimates.

17 The alert reader may note that the coefficients from the analysis in Owens, Wedeking, and Wohlfarth (2013) are generally greater than those in Table 1 above. Below, I explain why, despite this apparent discrepancy, the results in Table 2 and the associated discussion should be credited.

18 Other off-the-shelf methods are only somewhat better. The quanteda package (Benoit et al. 2021) in R counts 268 sentences. NLTK’s (Bird, Loper, and Klein 2009) sent_tokenize – which is in fact an unmodified version of punkt trained on English-language texts – gives 160 sentences: better than quanteda, but still a significant overestimate.

19 This difference from the original result cannot be attributed to the different samples. I do not have access to the particular sample of 529 cases used in Owens, Wedeking, and Wohlfarth (2013), but I investigate the role of sampling by re-running the analysis in Column 1 of Table 3 on 500 random samples of 529 cases. The median coefficient attending Distance to Filibuster Pivot across the 500 sample draws is 0.19 (standard deviation: 0.32); the largest across the 500 draws is 1.20 – less than the 1.81 in Owens, Wedeking, and Wohlfarth (2013, 48). It is of course possible that some of the difference in our results is due to an (un)lucky sample draw in Owens, Wedeking, and Wohlfarth (2013), but it is very unlikely that this accounts for any meaningful part of the difference. In any case, since my sample – effectively the population of relevant cases from 1947–2012 – subsumes the sample in Owens, Wedeking, and Wohlfarth (2013), it is the results here that should be credited, at least on sampling grounds.
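The resampling exercise described in this footnote can be sketched as follows, with synthetic data standing in for the actual design matrix (the effect size, noise level, and univariate regression are all simplifying assumptions for illustration):

```python
import random
import statistics

def ols_slope(xs, ys):
    """Univariate OLS slope estimate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(42)
N = 6_690                                             # full sample of cases
xs = [random.uniform(0.0, 0.55) for _ in range(N)]    # e.g., Distance to Filibuster Pivot
ys = [0.2 * x + random.gauss(0.0, 2.0) for x in xs]   # weak effect, noisy outcome

# Re-estimate the slope on 500 random subsamples of 529 cases each.
coefs = []
for _ in range(500):
    idx = random.sample(range(N), 529)
    coefs.append(ols_slope([xs[i] for i in idx], [ys[i] for i in idx]))

median_coef = statistics.median(coefs)   # typical subsample estimate
spread = statistics.pstdev(coefs)        # sampling variability across draws
```

The point of the exercise is the comparison between the median (and maximum) subsample coefficient and the originally reported estimate.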

20 I use the qualifier “effectively” because the results in Table 2 exclude unanimous opinions and (as discussed) a small set of non-unanimous opinions, whereas the results in Table 3 above include both unanimous and non-unanimous majority opinions (as do the results in Owens, Wedeking, and Wohlfarth (2013)).

21 There is a statistical literature that makes these points more precise. For citations, see Rosenbaum (2015, 209) and Cook (2015, 145).

22 The tokenizer is most appropriate for federal appellate opinions (Courts of Appeals and Supreme Court). Some caution should be used if applied to legal texts other than opinions, if applied to state court or federal trial court opinions, or opinions distant from the time period considered here. It should not be used for opinions from non-US courts.

References

Baum, Lawrence. 2006. Judges and Their Audiences: A Perspective on Judicial Behavior. Princeton, NJ: Princeton University Press.
Baum, Lawrence. 2016. The Supreme Court. 12th ed. Washington, DC: Sage CQ Press.
Benoit, Kenneth, Watanabe, Kohei, Wang, Haiyan, Nulty, Paul, Obeng, Adam, Muller, Stefan, Matsuo, Akitaka, and Lowe, William. 2021. “quanteda: Quantitative analysis of textual data.” Version 3.0.0. https://cran.r-project.org/web/packages/quanteda/index.html
Bird, Steven, Loper, Edward, and Klein, Ewan. 2009. Natural Language Processing with Python. Sebastopol, CA: O’Reilly Media Inc.
Black, Ryan C., Owens, Ryan J., Wedeking, Justin, and Wohlfarth, Patrick C. 2016. Supreme Court Opinions and Their Audiences. Cambridge, UK: Cambridge University Press.
Box-Steffensmeier, Janet, and Christenson, Dino P. 2012. “Database on Supreme Court amicus curiae briefs.” Version 1.0 [Computer File]. https://www.amicinetworks.com
Clark, Tom S. 2009. “The separation of powers, court curbing, and judicial legitimacy.” American Journal of Political Science 53 (4): 971–989.
Collins, Paul M. 2008. Friends of the Supreme Court: Interest Groups and Judicial Decision Making. Oxford, UK: Oxford University Press.
Cook, William D. 2015. “The inheritance bequeathed to William G. Cochran that he willed forward and left for others to will forward again: The limits of observational studies that seek to mimic randomized experiments.” Observational Studies 1 (1): 141–164.
Cox, Gary W., and McCubbins, Matthew D. 2005. Setting the Agenda: Responsible Party Government in the US House of Representatives. Cambridge, UK: Cambridge University Press.
Epstein, Lee, Segal, Jeffrey A., Spaeth, Harold J., and Walker, Thomas G. 2007. The Supreme Court Compendium. 4th ed. Washington, DC: CQ Press.
Harvey, Anna, and Friedman, Barry. 2006. “Pulling punches: Congressional constraints on the Supreme Court’s constitutional rulings, 1987–2000.” Legislative Studies Quarterly 31 (4): 533–562.
Hausegger, Lori, and Baum, Lawrence. 1999. “Inviting congressional action: A study of Supreme Court motivations in statutory interpretation.” American Journal of Political Science 43 (1): 162–185.
King, Chad M. 2007. “Strategic selection of legal instruments on the U.S. Supreme Court.” American Politics Research 35 (5): 621–642.
Kiss, Tibor, and Strunk, Jan. 2006. “Unsupervised multilingual sentence boundary detection.” Computational Linguistics 32 (4): 485–525.
Krehbiel, Keith. 1998. Pivotal Politics: A Theory of US Lawmaking. Chicago, IL: University of Chicago Press.
Lee, Frances E. 2010. “Senate deliberation and the future of congressional power.” PS: Political Science and Politics 43 (2): 227–229.
Lewis, Jeffrey B., Poole, Keith, Rosenthal, Howard, Boche, Adam, Rudkin, Aaron, and Sonnet, Luke. 2021. “Voteview: Congressional roll call votes database.” https://www.voteview.com
Martin, Andrew D., and Quinn, Kevin M. 2002. “Dynamic ideal point estimation via Markov chain Monte Carlo for the US Supreme Court, 1953–1999.” Political Analysis 10 (2): 134–153.
Owens, Ryan J. 2010. “The separation of powers and Supreme Court agenda setting.” American Journal of Political Science 54 (2): 412–427.
Owens, Ryan J. 2011. “An alternative perspective on Supreme Court agenda setting in a system of shared powers.” Justice System Journal 32 (2): 183–205.
Owens, Ryan, Wedeking, Justin, and Wohlfarth, Patrick. 2013. “How the Supreme Court alters opinion language to evade congressional review.” Journal of Law and Courts 1 (1): 35–59.
Popper, Karl R. 2002 [1959]. The Logic of Scientific Discovery. New York, NY: Routledge.
Rice, Douglas. 2019. “Placing the ball in Congress’ court.” American Politics Research 47 (4): 803–831.
Rosenbaum, Paul R. 2010. Design of Observational Studies. New York, NY: Springer.
Rosenbaum, Paul R. 2015. “Cochran’s causal crossword.” Observational Studies 1 (1): 205–211.
Rosenbaum, Paul R. 2017. Observation and Experiment. Cambridge, MA: Harvard University Press.
Sala, Brian R., and Spriggs, James F. 2004. “Designing tests of the Supreme Court and the separation of powers.” Political Research Quarterly 57 (2): 197–208.
Segal, Jeffrey A. 1997. “Separation-of-powers games in the positive theory of congress and courts.” American Political Science Review 91 (1): 28–44.
Segal, Jeffrey A., Westerland, Chad, and Lindquist, Stephane A. 2011. “Congress, the Supreme Court, and judicial review: Testing a constitutional separation of powers model.” American Journal of Political Science 55 (1): 89–104.
Simmons, Joseph P., Nelson, Leif D., and Simonsohn, Uri. 2011. “False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant.” Psychological Science 22 (11): 1359–1366.
Spaeth, Harold J., Epstein, Lee, Segal, Jeffrey A., Ruger, Ted, Martin, Andrew D., and Benesh, Sarah. 2017. “The Supreme Court database.” Washington University Law. scdb.wustl.edu
Staton, Jeffrey K., and Vanberg, Georg. 2008. “The value of vagueness: Delegation, defiance, and judicial opinions.” American Journal of Political Science 52 (3): 504–519.
Stewart, Charles III, and Woon, Jonathan. 2017. “Congressional committee assignments, 103rd to 114th Congresses, 1993–2017: House and Senate.” Massachusetts Institute of Technology. Updated November 17, 2017. http://web.mit.edu/17.251/www/data_page.html#2
Swift, Elaine K., Brookshire, Robert G., Canon, David T., Fink, Evelyn C., Hibbing, John R., Humes, Brian D., Malbin, Michael J., and Martis, Kenneth C. 2009. “Database of Congressional Historical Statistics, 1789–1989.” Inter-university Consortium for Political and Social Research. September 3, 2009.
Figure 1. The Court (C) is unconstrained on the top axis, since it is between the leftmost congressional pivot (PL) and the rightmost pivot (PR). It is constrained on the bottom axis, because it is to the left of the leftmost pivot. Were the Court to the right of the rightmost pivot, it would also be constrained.

Table 1. Dissenting Opinion Readability as a Function of Court Majority Constraint

Table 2. Opinion Readability — Conditional on Opinion Status (Majority vs. Dissent) — as a Function of Court Majority Constraint
