
Preference's Progress: Rational Self-Alteration and the Rationality of Morality*

Published online by Cambridge University Press:  13 April 2010

Duncan Macintosh
Affiliation:
Dalhousie University

Extract

On the received theory of rational choice, (a) a choice is rational if it maximizes one's individual expected utility. In the Prisoner's Dilemma (PD), however, by this standard each agent should Defect, since Defecting maximizes his expected utility no matter what the other agent does. But then both will Defect, and both will do poorly; each would have done better had both Co-operated.
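The dominance reasoning in the extract can be sketched computationally. The payoff numbers below are illustrative assumptions (any values with the standard PD ordering temptation > reward > punishment > sucker would do), not figures from the article:

```python
# Illustrative Prisoner's Dilemma payoffs: (row player, column player).
# "C" = Co-operate, "D" = Defect. Numbers are assumed, not from the article.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual Co-operation: both do fairly well
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation payoff
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual Defection: both do poorly
}

def best_response(other_move):
    """Return the move maximizing the row player's payoff,
    holding the other agent's move fixed."""
    return max(["C", "D"], key=lambda m: PAYOFFS[(m, other_move)][0])

# Defection dominates: it is the best response whatever the other does...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...yet mutual Defection pays each less than mutual Co-operation would.
assert PAYOFFS[("D", "D")][0] < PAYOFFS[("C", "C")][0]
```

This is just the standard dominance argument the extract summarizes: each agent's maximizing choice is Defect, yet the resulting outcome is worse for both than mutual Co-operation.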

Type
Articles
Copyright
Copyright © Canadian Philosophical Association 1991

References

Notes

1 I identify the problem to which this is a solution in my “Libertarian Agency and Rational Morality: Action-Theoretic Objections to Gauthier's Dispositional Solution of the Compliance Problem,” Southern Journal of Philosophy, 26 (1988): 399–425; briefly sketch it as a way of saving Gauthier in my “Two Gauthiers?”; and defend the intelligibility of revising one's preferences on pragmatic pretexts in my “Preference Revision and the Paradoxes of Instrumental Rationality” (conditionally forthcoming, Canadian Journal of Philosophy). The latter, tracing from my “Retaliation Rationalized” (paper presented to the Canadian Philosophical Association, Learned Societies Meetings, May 1988, Windsor, Ontario, now expanded and forthcoming as “Retaliation Rationalized: Gauthier's Solution to the Deterrence Dilemma,” in Pacific Philosophical Quarterly), and my “Kavka Revisited: Some Paradoxes of Deterrence Dissolved” (unpublished manuscript, Dalhousie University, 1990), asks whether a harm-hater can rationally threaten nuclear retaliation to deter attack, and whether he can act on that threat if deterrence fails. This is congruent with the problems of the PD, since both involve a maximizing commitment to a non-maximizing action. E.g., see David Gauthier, “Deterrence, Maximization, and Rationality,” Ethics, 94 (1984): 474–495; and “Afterthoughts,” in The Security Gamble: Deterrence Dilemmas in the Nuclear Age, edited by Douglas MacLean (Totowa, NJ: Rowman and Allanheld, 1984), p. 159–61; Gregory Kavka, “Some Paradoxes of Deterrence,” Journal of Philosophy, 75 (1978): 285–302; “The Toxin Puzzle,” Analysis, 43 (1983): 33–36; “Responses to the Paradox of Deterrence,” in MacLean, ed., The Security Gamble, p. 155–59; David Lewis, “Devil's Bargains and the Real World,” ibid., p. 141–54; and Mark Vorobej, “Gauthier on Deterrence,” Dialogue, 25 (1986): 471–76.

2 For a defence of this view, see Frederic Schick, Having Reasons: An Essay on Rationality and Sociality (Princeton, NJ: Princeton University Press, 1984).

3 This construction can be placed on David Gauthier, Morals By Agreement (Oxford: Clarendon Press, 1986); and he explicitly defends it in his “In the Neighborhood of the Newcomb-Predictor (Reflections on Rationality)” (unpublished manuscript, University of Pittsburgh, 1985), and “Economic Man and the Rational Reasoner” (unpublished manuscript, University of Pittsburgh, 1987). He has also recommended this interpretation in correspondence. Peter Danielson has argued for a similar position in correspondence, and also, I think, in his “Artificial Morality: How Morality is Rational” (unpublished manuscript, draft 0.4, York University, 1988).

4 This exposition of Gauthier is based on his “Morality and Advantage,” Philosophical Review, 76 (1967): 460–75; “Deterrence,” “Afterthoughts,” and “In the Neighborhood”; Morals by Agreement, especially chaps. 1, 5, 6, and 11; and “Economic Man.”

5 This repair is suggested in Richmond Campbell, “Moral Justification and Freedom,” Journal of Philosophy, 85 (1988): 192–213, and criticized in detail in my “Libertarian Agency.”

6 I advocate this in my “Two Gauthiers?”, though promissorily and in much less detail than I will here. The idea that it is rational for agents to revise unilaterally their preferences should not be confused with Amartya Sen's proposals in his “Choice, Orderings and Morality,” in Practical Reasoning, edited by Stephan Körner (Oxford: Basil Blackwell, 1974), p. 54–67, nor with those in Sen's “Reply to Comments,” p. 78–82, and “Rationality and Morality: A Reply,” Erkenntnis, 11 (1977): 225–32; nor with those of Edward F. McClennen in his “Prisoner's Dilemma and Resolute Choice,” in Paradoxes of Rationality and Cooperation: Prisoner's Dilemma and Newcomb's Problem, edited by Richmond Campbell and Lanning Sowden (Vancouver: University of British Columbia Press, 1985), p. 94–104. For a summary review of the differences between their proposals and mine, see my “Two Gauthiers?” p. 54–55. For a more detailed critique of these early attempts to rationalize Co-operation in a PD, see my “Cooperative ‘Solutions’ to the Prisoner's Dilemma” (forthcoming, Philosophical Studies). My proposal is similar in certain ways to one in Edward F. McClennen, “Constrained Maximization and Resolute Choice,” Social Philosophy & Policy, 5 (1988): 95–118; and what I am saying here can be read as an elaboration and defence of his view as well.

7 My thanks to an anonymous referee for these objections.

8 I repeat: whether one initially wants to or not. Gauthier himself sees agents coming to prefer moral conduct, ceasing to experience it as constrained. But for him, although not for me, one does not prefer it as a condition of its rationality, but merely as a means of reinforcing moral conduct already made rational by one's having rationally adopted a disposition to be moral. For details on the differences between our proposals, and on whether he himself can be interpreted as offering the preference-revision solution I favour, see my “Two Gauthiers?”

9 These preferences define Amartya Sen's Assurance Game. See Sen, “Choice.”

10 Richmond Campbell thinks some of my earlier work implies this proposal and has his own objections and alternatives to it. I am not sure my earlier work is clear enough to imply it; but in any case, I think it is probably wrong; see below, in the main text.

11 For a detailed critique of these and various other preference-functions that some philosophers have thought would rationalize PD Co-operation, see my “Co-operative ‘Solutions’.”

12 For more on this, see ibid.

13 We need this clause to cover the case where one discovers just before one chooses actions, after having chosen one's preferences, that the other agent did not, will not, or will likely not, act on his preferences; this clause gives one a basis for protecting oneself by Defection, should one learn that the other was not perfectly rational, or was prevented from acting rationally, and so failed to act as he should have, given his preferences.

14 Some other aspects and defences of this solution are given in my “Co-operative ‘Solutions’.”

15 My thanks to Terry Tomkow for help in formulating the proposal in these terms.

16 Thanks to Christopher Morris for asking.

17 See my “Co-operative ‘Solutions’.”

18 Again, thanks to Christopher Morris for asking.

19 My thanks to Peter Danielson, who worried in his referee's report for “Two Gauthiers?” that my CM preferences “violate conditions of independence required of standard preferences, since a CM preference for Co-operation is dependent on the other's similar preference.”

20 For a defence of other, Gauthier-type solutions from the charge of circularity (but ones which leave preference-functions unchanged), see Richmond Campbell, “Critical Study: Gauthier's Theory of Morals by Agreement,” Philosophical Quarterly, 38 (1988): 343–64, and Peter Danielson, “The Visible Hand of Morality” (review of Gauthier, Morals By Agreement), Canadian Journal of Philosophy, 18 (1988): 378–79.

21 My thanks to Peter Danielson, Robert Bright, Terry Tomkow, and Sheldon Wein for this worry, which plagued an earlier version of this proposal. I hope this also begins to meet concerns tentatively expressed in Danielson, Artificial Morality. Danielson worries that permitting the revision of preferences makes the conditions of choice unstable; also that players will find themselves in a vicious regress of meta-games, each inflating his preference to drive up the price of his concession on something also valued by the other agent. I do not think our PD creates this problem, for, as I argue below, the agents have an interest in co-ordinating their preferences, not in trumping those of the other agent.

22 Thanks to Peter Danielson and Richmond Campbell for these worries, also found in the notes to Lewis, “Devil's Bargains.”

23 There is something odd about this conclusion, for if the second so-called preference is really a principle, it is not sensitive to variations in the content of the first preference in what actions it recommends. Rather, it kicks in whenever the first cannot be satisfied, whatever it is. Compare with the maximizing principle, which gives different advice for action depending on the preference's content. But never mind.

24 From his referee's report.

25 The question is, again, from Morris.

26 My thanks to Terry Tomkow for discussion on this point.

27 Compare also, Baier, Kurt, “Rationality and Morality,” Erkenntnis, 11 (1977): 213.CrossRefGoogle Scholar My thanks to Terry Tomkow for help with this section.

28 Thanks again to Peter Danielson for the following problems.

29 I have here tried to elaborate on the proposal I made in “Two Gauthiers?” and “Libertarian Agency and Rational Morality,” and to meet some of the objections I have received to it in correspondence and conversation. I have not had room to address objections to the very idea of a practically motivated revision in preferences. Does it make sense? Can agents revise their preferences? Is doing so consistent with all standards of rationality? with a conception of value as objective? I think the answer to these questions (except maybe the last) is “Yes”; but here I can only refer the reader to my “Preference Revision,” where I defend this position in detail.