
Constraint Games and the Orthodox Theory of Rationality*

Published online by Cambridge University Press: 26 January 2009

Abstract

Moral theorists and game theorists are both interested in situations where rational agents are ‘called upon’ to constrain their future actions and co-operate with others instead of being free riders. These theorists have constructed a variety of hypothetical games which illuminate this problem of constraint. In this paper, I draw a distinction between ‘behaviour games’ like the Newcomb paradox and ‘disposition games’ like Kavka's toxin puzzle, a prisoner's dilemma and Parfit's hitchhiker example. I then employ this distinction to argue that agents who subscribe to the orthodox theory of rationality do significantly better in disposition games than those who subscribe to revisionist theories like David Gauthier's, while revisionist agents do marginally better in behaviour games. I argue that because of agents' ability to manipulate their own weakness of will, orthodox agents do better at all of these games than has previously been thought. And, by elucidating the distinction between behaviour games and disposition games, I uncover the virtues that underlie the success of each theory of rationality.

Copyright © Cambridge University Press 1997

Footnotes

* I am indebted to Michael Resnik, Geoffrey Sayre-McCord, Simon Blackburn, Christopher Morris, members of the Propositional Attitudes Task Force and members of the North Carolina Philosophical Society for very helpful comments on earlier drafts of this paper. Thanks are also due to David Gauthier, whose remarks on a separate paper of mine prompted me to write this paper.

References

1 See Gauthier, David, Morals by Agreement, Oxford, 1986. Gauthier's position has subsequently been revised. See Gauthier, ‘In the Neighbourhood of the Newcomb Predictor’, Proceedings of the Aristotelian Society, lxxxix (1989); and ‘Assure and Threaten’, Ethics, civ (1994). However, the basic position has remained the same. The received (i.e. orthodox) view of rationality is what Hobbes employs in Leviathan and what Hume employs in both the Treatise of Human Nature and the Enquiry Concerning the Principles of Morals. See also Von Neumann, John and Morgenstern, Oskar, Theory of Games and Economic Behavior, Princeton, 1944; Nash, John F., ‘Non-Cooperative Games’, Annals of Mathematics, liv (1951); Luce, R. D. and Raiffa, H., Games and Decisions, New York, 1957.

2 See McClennen, Edward, Rationality and Dynamic Choice, Cambridge, 1990; ‘Constrained Maximization and Resolute Choice’, Social Philosophy and Policy, v (1988), 95–118; Danielson, Peter, Artificial Morality, New York, 1992.

3 This is unclear even if we set aside the possibility of deception. See Sayre-McCord, Geoffrey, ‘Deception and Reasons to be Moral’, American Philosophical Quarterly, xxvi (1989), 113–22.

4 As has been pointed out before, the temporal order is not essential to the problem; it is merely a convenient means of indicating that taking the small payoff can have no effect on whether the big payoff is awarded.

5 The distinction between these two types of games concerns what is actually going on in the game, not what the player believes is going on. Even if the game is actually a disposition game, it may work just like a behaviour game if the player does not know that it is a disposition game. Even if the player knows that the game is some sort of disposition game, it may be rational for him to treat it as a behaviour game if he does not know what the ‘appropriate disposition’ in that particular game is. However, I shall ignore this complication for the remainder of this paper by assuming that the players know the nature of the game they are playing.

6 To simplify the discussion, I will take Gauthier's theory as representative of revisionist theories in general.

7 For example, Gauthier makes the surprising claim that in a Newcomb game where both boxes are transparent, it is rational to choose $1,000,000 over $1,001,000. See Gauthier, ‘In the Neighbourhood …’; Barnes, R. Eric, ‘Rationality, Dispositions, and the Newcomb Paradox’, Philosophical Studies, lxxxviii (1997). More generally, see also Contractarianism and Rational Choice, ed. Vallentyne, P., Cambridge, 1991, pt. iii; McClennen, ‘Constrained Maximization …’; Kavka, Gregory, ‘Rationality Triumphant: Gauthier's Moral Theory’, Dialogue, xxxii (1993), 347–58.

8 Gauthier, ‘In the Neighbourhood …’, 184.

9 Ibid., 184.

10 Specifics of this rule are unimportant. See Gauthier, ‘In the Neighbourhood …’; ‘Assure and Threaten’.

11 ‘Ultimate aim’ refers to what some other writers (e.g. Gauthier) call an agent's ‘aim’. I do this so that I can use different terms for the goal which an agent ultimately strives to achieve and for the activity of an agent aiming at an action.

12 The precise distinction between following a theory and subscribing to a theory is given later in this section.

13 I will use the concept of subscribing to a theory of rationality, as opposed to the idea of following a theory. In this paper, I am concerned with agents who are not necessarily rigid rule followers, and the concept of subscribing to a theory allows for more flexibility and imperfection. In particular, it is important to the arguments that I will offer that it is possible for someone to subscribe to a theory of rationality, and yet not always follow its recommendations.

14 A theory of rationality recommending that an agent make himself do something irrational may seem peculiar, but it is not contradictory. See Schelling, Thomas, The Strategy of Conflict, Cambridge, MA, 1960Google Scholar; Parfit, Derek, Reasons and Persons, Oxford, 1984Google Scholar. Even Gauthier accepts that it can be rational in some cases to cause oneself to act in a generally irrational manner – though not in a particular irrational manner. It is unclear why Gauthier believes that there is an important distinction between these types of cases.

15 Interestingly, O and G will sometimes recommend the adoption of the exact same plans (i.e. dispositions). The two theories differ with regard to some of the individual actions that they recommend. This will be clarified in what follows.

16 By ‘a disposition to x’ I mean some property of the agent that will tend to cause the agent to do x. So, if I am someone who is generally disposed to keep promises, my promising to do x will give me a specific disposition to x, since it tends to make me more likely to x. The dispositions I am discussing are thus distinguished from predictions, and one can reasonably say that an agent possesses one or more dispositions to do x, even though they also possess dispositions to not do x and one predicts that they will not do x. Of course, this may depend upon rejecting a behaviourist philosophy of mind, as was pointed out to me by members of the Propositional Attitudes Task Force.

17 More subtle issues related to this are discussed extensively in section V.

18 It is not important to my account whether the concept of following a theory entails that the agent does so because of the theory or merely acts in accordance with it.

19 For my present purposes, I take weakness of will to be an agent choosing to perform an action other than the one she believes she has best reason to perform. Of course, weakness of will is much more complex than this suggests. But I am not attempting to explicate the concept of weakness of will in all of its complexity.

20 See McClennen, ‘Constrained Maximization …’; and Elster, Jon, Ulysses and the Sirens, Cambridge, 1979Google Scholar. McClennen recognizes that some strategies involve weakness of will, though he mistakenly charges Gauthier with requiring that agents be weak-willed. Elster also acknowledges the importance of taking imperfect rationality into account in the theory of rational choice. However, neither author seems to appreciate the unique significance that weakness of will has to the agent who subscribes to O.

21 This assumes that we can set aside problems of wayward causal chains, a side issue that should not detain us here.

22 It should be understood that I mean to include only ‘internal’ dispositions in this definition. A person might have a physical (i.e. non-mental) disposition to choke, but this would not be a weakness in the sense currently intended.

23 A disposition may count as a controlled weakness in one situation but not in another. In a situation for which the disposition was not intentionally induced as a part of a strategy, that disposition will count as an uncontrolled weakness, even if it was intentionally induced as a part of a strategy to deal with a different type of situation. Furthermore, I am assuming that agents' beliefs about what is rational to do in a situation do not change over time. This avoids certain issues that may be of independent interest, but that would be merely cumbersome to include within the current project.

24 There is much to be said about using these three forces to understand agents' behaviour, but saying too much about it here would distract from the main argument. See Barnes, R. Eric, ‘Rational Choice and Two Kinds of Weakness’, in Cooperation and Trust: Puzzles in Utilitarian and Contractarian Moral Theory (Doctoral Dissertation, University of North Carolina), Chapel Hill, 1997. The most relevant conclusions from this paper are: 1) the introduction of uncontrolled weakness harms both types of agents to the same degree in behaviour games, and it helps both types of agents in disposition games; and, 2) in disposition games, the increased force of uncontrolled weakness helps revisionist agents to a greater degree than orthodox agents, but both are always helped and orthodox agents always do better at these games than revisionist agents. It is worth repeating that I am not trying to give a subtle account of weakness of will; the term ‘weakness’ is simply a convenient way to refer to the idealized concept I am working with.

25 The agent who subscribes to G may take one box slightly more often than the agent who subscribes to O, and so might do marginally better. See my ‘Rational Choice and Two Kinds of Weakness’ and the end of this section.

26 Developing habits may not be the only method for adopting these dispositions, but it seems like the most obvious.

27 Readers may note that, practically speaking, this recommendation seems to be the same as McClennen's suggestion that agents make resolute choices (McClennen, Rationality and Dynamic Choice). As with G, what distinguishes O from McClennen's revisionist theory is that O condemns the actions which result from these resolute choices as irrational.

28 In fact, matters are somewhat more complicated than this suggests. For orthodox agents to gain the benefits in disposition games that I will discuss below, they will need to be able to control the strength of their resolve in varying situations. The acquisition of this skill will invariably add some cost, though this cost can also be distributed.

29 See Gauthier, Morals by Agreement; Gauthier, ‘In the Neighbourhood …’; McClennen, Rationality and Dynamic Choice; ‘Prisoner's Dilemma and Resolute Choice’, in Paradoxes of Rationality and Cooperation, ed. Campbell, Richmond and Sowden, Lanning, Vancouver, 1985; Danielson, Artificial Morality.

30 It is worth noting that it is easy to change one type of game into the other. For instance, in Kavka's game, if the host were to base his decision to give the player the money on an unmediated prediction of whether the player would actually drink the toxin (as opposed to a judgement of the player's intention), then it would be a behaviour game.

31 A prisoner's dilemma has the following structure (1 = best and 4 = worst, and the left number gives Row's payoff):

                 Co-operate    Defect
   Co-operate       2, 2        4, 1
   Defect           1, 4        3, 3
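
To spell out why this structure is a dilemma, here is a minimal sketch in notation introduced purely for illustration (it is not the author's): write u_R(x, y) for Row's ordinal rank when Row plays x and Column plays y, with C for co-operate, D for defect, and \succ for ‘is ranked better than’ (so rank 1 beats rank 2).

% Defecting strictly dominates co-operating for Row:
%   if Column co-operates, rank 1 (Defect) beats rank 2 (Co-operate);
%   if Column defects, rank 3 (Defect) beats rank 4 (Co-operate).
\[
  u_R(D, C) \succ u_R(C, C), \qquad u_R(D, D) \succ u_R(C, D).
\]
% Yet both players rank mutual defection (3 each) below mutual
% co-operation (2 each):
\[
  u_R(C, C) \succ u_R(D, D), \qquad u_C(C, C) \succ u_C(D, D).
\]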

32 The revisionist agent might be helped by uncontrolled weakness, but it will not help her any more than it will help the orthodox agent, so for simplicity I have excluded this from the present discussion. See Barnes, ‘Rational Choice …’; and the end of section II. Controlled weakness cannot help the revisionist agent either, because that would make the revisionist's theory indirectly self-defeating. Given this, the revisionist agent will never gain the benefits of defecting.

33 Parfit, Reasons and Persons, p. 7.

34 See Kavka, Gregory, ‘The Toxin Puzzle’, Analysis, xliii (1983); and Kavka, Moral Paradoxes of Nuclear Deterrence, Cambridge, 1987, ch. 2.

35 In constraint games ‘co-operate’ means forgoing the small payoff and ‘defect’ means taking it.

36 After hearing an abbreviated version of this paper, David Gauthier suggested that instead of using a principle like P1 (discussed below) to argue for the impossibility of orthodox agents forming the relevant intentions, one could use a different principle. The principle he suggested was something like: an agent can rationally intend to do x only if that agent would intend to do x if she were perfectly rational. Such a principle would support the objection, but like Kavka's analysis it employs perfectly rational agents as a central part of the argument. This should not be allowed; it is a Trojan horse.

In general, when opponents of the orthodox theory argue that the orthodox agent's decision rule prevents them from best achieving the goal of maximizing utility, they use the following strategy. They say, ‘I'll give you the strongest assumptions you could ask for. I'll grant you that agents are fully informed and perfectly rational, and you still won't be able to show that an orthodox agent can maximize utility in a constraint game.’ However, what they seem to be graciously granting me as ‘perfect rationality’ is actually a crippling assumption that agents lack an important set of skills – the ability to constrain themselves through controlled weakness. The defender of the orthodox theory has no reason to accept the assumption of perfect rationality because a perfectly rational agent is not the ideal (i.e., perfect) agent.

37 This principle is inspired by work in simulation theory in the philosophy of mind.

38 For a defence of the claim that it is possible to adopt beliefs, see Lycan, William and Schlesinger, George, ‘You Bet Your Life: Pascal's Wager Defended’, Reason and Responsibility, ed. Feinberg, Joel, Belmont, Calif., 7th edn.

39 There is a disposition game in which this may not be true, but nothing of philosophical interest follows from it. Imagine a game like the Newcomb game, except the host's prediction is based solely on whether the player subscribed to O or to G (the player knows this). The host awards the big payoff if and only if he judges that the player subscribes to G. A player subscribing to G would do better. But nothing of interest follows from this. It is no surprise that there are games of this sort in which G does better. There is a much simpler game where the host gives $1 to G agents and punches O agents. G agents fare better in such games, but this can hardly be taken as a relevant or interesting criticism of O.

Interesting, but not of direct concern here, is the paradoxical game where the host pays the player $1 if he does not follow O and nothing if he does follow O. Assuming that merely acting in accordance with O counts as following O, O would then tell agents not to follow O. But then what would count as following O or not following O?

40 Revisionist agents will never get the small payoff because uncontrolled weakness has been ruled out.

41 No one has explicitly endorsed this claim, but some have said things suggesting it. See Gauthier, Morals by Agreement; Dean, Richard, ‘A Defense of Constrained Maximization’, Dialogue, xxxvi (1997).

42 How accurate the host must be for any constraint game to be interesting is a function of the payoffs. This has, I suspect, been implicitly understood by others, though it has seemingly resisted articulation. The probability that the host's judgement is correct must be greater than (b + s)/2b, where b is the big payoff and s is the small payoff.
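
This threshold can be derived from the structure of constraint games as defined in n. 35; the following is a minimal sketch, writing p for the probability that the host's judgement is correct, and assuming the host awards the big payoff b exactly when he judges that the player will co-operate (the notation is introduced here for illustration).

% Expected value of each choice: co-operating forgoes the small payoff s
% and wins b just in case the host judges correctly; defecting secures s
% and wins b just in case the host judges incorrectly.
\[
  EV(\text{co-operate}) = p\,b, \qquad EV(\text{defect}) = s + (1 - p)\,b.
\]
% Co-operating pays, and the game is interesting, exactly when
\[
  p\,b > s + (1 - p)\,b \quad\Longleftrightarrow\quad p > \frac{b + s}{2b}.
\]

With the Newcomb payoffs of b = $1,000,000 and s = $1,000, for example, the host need only be correct with probability greater than 0.5005.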

43 The idea of being ‘more directly related’ is central here. In the Newcomb game, the player's disposition to take one box is only indirectly related to the host's criterion for paying the big payoff (which is a prediction of the actual behaviour that is likely to be brought about by the disposition). In the toxin game, the player's disposition to drink the toxin (i.e. her intention to do so) is directly related to the host's criterion (which is a judgement about that exact disposition). So, unless the player knows of a disposition that is more directly related to the criterion for the big payoff than is the eventual behaviour being predicted, the orthodox player will do no better than the revisionist player.

44 For attempted reductions see Lewis, David, ‘Prisoners' Dilemma is a Newcomb Problem’, Philosophy and Public Affairs, viii (1979); and Leslie, John, ‘Ensuring Two Bird Deaths With One Stone’, Mind, c (1991).

45 See Axelrod, Robert M., The Evolution of Cooperation, New York, 1984. See also Elster, Ulysses, for more references.