Measuring the Consequences of Rules
Published online by Cambridge University Press: 02 November 2010
Abstract
Recently, two distinct forms of rule-utilitarianism have been introduced that differ on how to measure the consequences of rules. Brad Hooker advocates fixed-rate rule-utilitarianism (which measures the expected value of the rule's consequences at a 90 percent acceptance rate), while Michael Ridge advocates variable-rate rule-utilitarianism (which measures the average expected value of the rule's consequences across all levels of social acceptance). I argue that both of these are inferior to a new proposal, optimum-rate rule-utilitarianism. According to optimum-rate rule-utilitarianism, an ideal code is the code whose optimum acceptance level is no lower than that of any alternative code. I then argue that all three forms of rule-utilitarianism fall prey to two fatal problems that leave us without any viable form of rule-utilitarianism.
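The contrast between the three measurement rules can be made vivid with a minimal numerical sketch. This is not from the article: `toy_ev` is a hypothetical mapping from a code's social-acceptance rate (0 to 100 percent) to the expected value of that code's consequences at that rate, and the function names are illustrative labels only; real rule-utilitarian evaluation is of course not numerically specified in this way.

```python
def fixed_rate_value(ev):
    """FRRU (Hooker): evaluate the code at a stipulated 90 percent acceptance rate."""
    return ev(90)

def variable_rate_value(ev):
    """VRRU (Ridge): average the code's expected value over all acceptance levels."""
    return sum(ev(rate) for rate in range(101)) / 101

def optimum_rate_value(ev):
    """ORRU (the author's proposal): expected value at the code's optimum acceptance level."""
    return max(ev(rate) for rate in range(101))

# Illustrative toy profile: a code whose value peaks at 70 percent acceptance
# and declines thereafter (e.g. because maintenance/enforcement costs rise).
def toy_ev(rate):
    return rate * (140 - rate)

print(fixed_rate_value(toy_ev))    # value at the stipulated 90 percent rate
print(variable_rate_value(toy_ev)) # average value across all acceptance levels
print(optimum_rate_value(toy_ev))  # value at the best achievable acceptance level
```

On a profile like this one, the three rules rank the same code differently, which is the structural point at issue: FRRU can miss a code's peak (here at 70 percent), VRRU dilutes the peak by averaging over poor acceptance levels, and ORRU scores the code by the best it can achieve.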
- Type: Research Article
- Copyright © Cambridge University Press 2010
References
1 See discussion in Mulgan, Tim, Future People (Oxford, 2006), pp. 130–3.
2 Hooker, Brad, Ideal Code, Real World (Oxford, 2000).
3 Ridge, Michael, ‘How to Be a Rule-Utilitarian: Introducing Variable-Rate Rule-Utilitarianism’, The Philosophical Quarterly 56.223 (2006), pp. 242–53. For Hooker's response, see Hooker, Brad and Fletcher, Guy, ‘Variable versus Fixed-Rate Rule-Utilitarianism’, The Philosophical Quarterly 58 (2008), pp. 344–52. See also Arneson, Richard, ‘Sophisticated Rule Consequentialism: Some Simple Objections’, Philosophical Issues 15: Normativity (2005), pp. 235–51, and Hooker, Brad, ‘Reply to Arneson and McIntyre’, Philosophical Issues 15: Normativity (2005), pp. 264–81.
4 Brad Hooker prefers the label ‘rule-consequentialism’, while Michael Ridge prefers ‘rule-utilitarianism’. I shall follow Ridge's usage.
5 Hooker, Ideal Code, p. 76. Here Hooker follows a number of earlier theorists.
6 See n. 4.
7 Hooker, Ideal Code, p. 32. Hooker later points out that the costs of maintaining and reinforcing the code must also be included (Hooker, Ideal Code, p. 79).
8 Hooker, Ideal Code, pp. 83–4.
9 See Hooker, Ideal Code, pp. 80–5.
10 Ridge, ‘How to Be’, pp. 244–5.
11 Ridge, ‘How to Be’, p. 248; see also the statement of the theory in Hooker and Fletcher, ‘Variable versus Fixed-Rate’, p. 348. I have slightly reformulated Ridge's statement to allow for acts that are permitted (but not required) by the code.
12 Ridge, ‘How to Be’, p. 253.
13 Ridge, ‘How to Be’, pp. 248–9.
14 Hooker, Ideal Code, p. 32; Hooker, ‘Reply’, pp. 267–9.
15 Of course, the circumstances in which each action would be performed play a major role as well.
16 It is possible that some codes have ‘feasibility’ gaps in their possible acceptance levels that are not represented in this graph.
17 This is not necessarily an argument for reducing the demands of even the best code, since the existence of these demands, even if unmet in practice, may inspire agents to try harder and comply more often than they would if the code made no such demands.
18 Because of its focus on the highest achievable social value, ORRU appears to best represent the consequentialist spirit of rule-utilitarianism even if we don't adopt the perspective of the ‘teaching generation’.
19 Hooker, Ideal Code, pp. 124–5.
20 Hooker, Ideal Code, pp. 124–5; Lyons, David, Forms and Limits of Utilitarianism (Oxford, 1965), pp. 128–31.
21 Hooker, Ideal Code, p. 123.
22 Ridge, ‘How to Be’, pp. 249–50.
23 Hooker, Ideal Code, p. 124.
24 Hooker, Ideal Code, p. 125.
25 Some of Hooker's remarks suggest that he might invoke his ‘Prevent disasters’ rule as a way of dealing with partial compliance cases (see Hooker, Ideal Code, p. 98). However, this is not a satisfactory general solution. Hooker is vague on exactly what counts as a ‘disaster’, but he stipulates that a disaster must involve ‘large losses in aggregate well-being’, although ‘there are limits to how much self-sacrifice can be demanded in the name of this rule’ (Hooker, Ideal Code, p. 121). If the public ‘bads’ in my cases are deemed to be ‘disasters’, we simply need to substitute cases in which the bad effects are smaller in scale. As Hooker himself sees, it is not open to him to define a ‘disaster’ as any consequence that has net negative utility (Hooker, Ideal Code, p. 98).
26 Hooker's FRRU evaluates all rules by the 90 percent acceptance test. There will be some minimizing-condition situations in which a public good will only be produced if more than 90 percent of the population – say, 95 percent – contribute to its production, although 100 percent contribution is not necessary. Hooker needs a solution to such situations, and conditionalized rules seem to be his best option. Hooker seems to have no problem with the general idea of codes incorporating conditions when the conditions refer to such facts as the degree of intelligence of the agent, or whether or not the agent is a parent. His discussion suggests openness to the conditionalized rules solution in partial compliance cases.
Of course many situations involving minimizing conditions are ones in which a threshold contribution level somewhere below 90 percent – say, 80 percent contribution – is required to produce the public good. Perhaps Hooker's best strategy for dealing with these cases is to argue that his theory endorses a rule such as R* for situations involving a greater than 90 percent threshold, and that this rule can be stated with enough generality that it covers all the cases at the lower thresholds as well. Even though his 90 percent acceptance level doesn't permit him to argue directly for the necessity of including rules that cover the lower threshold cases, there is no obvious reason why the ideal theory cannot include rules that are necessary at the 90 percent level and also work well at lower acceptance levels. The only kinds of cases for which this strategy would not succeed would be cases in which the nature of the situation requires that the rule's condition must overtly specify that contribution levels are less than 90 percent. Such cases would probably be few in number, so it appears that Hooker may avoid most of the difficulties arising in partial compliance cases by using the strategy I have just described.
27 Ridge, ‘How to Be’, p. 250. Ridge states these conditionalized rules in terms of people's acceptance of rules rather than acceptance of codes. I have simplified the discussion by restating the suggestion in terms of codes.
28 I originally argued for this point in ‘David Lyons on Utilitarian Generalization’, Philosophical Studies 26 (1974), pp. 77–94. The point was later stated in a broader theoretical context by Regan, Donald, in Utilitarianism and Co-operation (Oxford, 1980), p. 87.
29 The situation is best understood as one in which you cannot form an agreement with the other industrialists regarding your conduct.
An alternate, and perhaps deontologically more attractive, version of rules R1 and R3 would permit you either to burn or to discharge your waste. But in the cases covered by these rules, it would reduce net social welfare for you to burn the waste, since doing so costs you more money, and doesn't affect whether or not the river would be polluted. A version of this case incorporating such permissive rules would, in any event, have the same problem that I describe in the text for rules R1 and R3.
30 There are two distinct interpretations of this charge. One is that such conditionals as ‘If all three industrialists accepted code C, the river would not be polluted’ have no determinate truth-value. The other interpretation is that such conditionals may have a truth-value, but that (at least in most cases) the different patterns of action that count as ‘accepting the code’ mean that we are unable to ascertain what their truth-value is, so we cannot ascertain which moral code would be best. Certainly the latter is true for complex codes purporting to govern the conduct of an indefinite number of agents facing partial compliance situations (even though we might feel we could ascertain what some particular trio of industrialists would actually do if they accepted Code C). I shall not attempt to mediate between these two interpretations.
31 See n. 29 on versions of these rules including permissions to contribute even though doing so is costly. Hooker (Hooker, Ideal Code, pp. 124–5) approvingly cites Brandt's dismissal of any consequentialist case for permitting an agent to fail to contribute to some public good when enough others are already contributing (i.e. in a maximizing case). Brandt dismisses such rules on grounds that it would be all too easy for most people to believe that a sufficient number were already contributing (Richard Brandt, ‘Some Merits of One Form of Rule-Utilitarianism’, University of Colorado Studies in Philosophy (1967), n. 15, as cited in Hooker, Ideal Code, pp. 124–5). Of course sometimes this is so, but on other occasions it may be crystal clear that one's own contribution is not needed. A genuine consequentialist solution would prescribe not contributing when doing so would maximize social welfare.
32 The elaborate theory labeled ‘co-operative utilitarianism’ advanced by Donald Regan in Utilitarianism and Co-operation may successfully avoid this problem.
33 The complexity of this determination is compounded by the fact that the different choices selected by each type of code-rejecter at some base time (say, now) will quickly ramify into a wholly different set of opportunities and choices faced by them in the future that are not faced by the code-accepters or by agents who reject the code in favor of different options. The complexity is even further compounded by the fact that we must consider what the code-rejecters would do in a context in which a significantly different code is imagined as being in force, as compared with the actual world.
See n. 30 on the question of whether the indeterminacy in question is an indeterminacy with regard to fact, or with regard to epistemic ascertainability. Here I shall use terminology more appropriate to the latter.
34 Ridge notes the possibility of different patterns by which a given level of acceptance might be realized, but confines his attention to cases in which the only question is which agents accept and which reject (but the number of each is held constant). He claims (implausibly, in my judgment) that in most of these cases the upshot is likely to remain the same (Ridge, ‘How to Be’, p. 252).
35 Of course this procedure would still be subject to the complaint that it is arbitrary to assume that all code-rejecters do the same thing, rather than allowing that they would do diverse things, as would be more natural.
36 Of course accepting and following a code will also generate positive psychological upshots, such as social approval and pride in oneself, but for simplicity of exposition I do not include such effects in this example.
37 For helpful discussion and comments, I am very grateful to Douglas Blair, Pavel Davydov, Nancy Gamburd, Meghan Sullivan, and Evan Williams.