Summary
The following is adapted from an example of Haim Gaifman's:
Rowena makes the following offer to Columna: Columna may have either box A (which is empty) or box B (which contains $100), but not both. Rowena also makes the following promise to Columna: if Columna makes an irrational choice in response to the first offer, Rowena will give her a bonus of $1,000. Let us assume that both are ideal reasoners and that Rowena always keeps her promises, and that both of these facts are common knowledge between Rowena and Columna.
How should Columna respond to this situation? If we suppose that taking box A would be irrational, then doing so would trigger the bonus and yield Columna $1,000, which is $900 more than the $100 she would get by taking box B; that makes taking A the rational thing to do. If, alternatively, we suppose that taking box A would not be irrational, then taking box A would yield at least $100 less than taking box B, so taking box A would be irrational after all. Taking box A is irrational for Columna if and only if it is not irrational.
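To make the structure of the argument explicit, here is a minimal formalization (the notation is ours, not Gaifman's). Let $I$ be the proposition that taking box A is irrational, and let $u(A)$ and $u(B)$ be Columna's payoffs from the two choices. Since an ideal reasoner takes the option with the higher payoff, each supposition about $I$ fixes the payoffs and thereby refutes itself:
\[
I \;\Rightarrow\; u(A) = \$1{,}000 > \$100 = u(B) \;\Rightarrow\; \neg I,
\qquad
\neg I \;\Rightarrow\; u(A) = \$0 < \$100 \leq u(B) \;\Rightarrow\; I.
\]
Together these give $I \leftrightarrow \neg I$, which is classically inconsistent.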
There is an obvious analogy between this situation and that of the liar paradox. In the liar paradox, we have a sentence that says of itself, ‘I am not true.’ Such a sentence is true if it is not true (since that is what it says), and it is false, and therefore not true, if it is true (since that is what it denies). Tarski (1956) demonstrated that this ancient puzzle constitutes a genuine antinomy by showing that any theory implying every instance of an intuitively very plausible schema, Convention T, is logically inconsistent.
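For reference, Convention T is the schema (in a standard rendering, not a quotation from the text): for every sentence $S$ of the object language, the theory implies
\[
\mathrm{True}(\ulcorner S \urcorner) \;\leftrightarrow\; S.
\]
Instantiating $S$ with a liar sentence $\lambda$ that is provably equivalent to $\neg\mathrm{True}(\ulcorner \lambda \urcorner)$ yields $\mathrm{True}(\ulcorner \lambda \urcorner) \leftrightarrow \neg\mathrm{True}(\ulcorner \lambda \urcorner)$, from which a contradiction follows in classical logic. The paradox of the boxes has exactly this shape, with ‘irrational for Columna’ playing the role of ‘not true’.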
Paradoxes of Belief and Strategic Rationality, pp. 1–10. Cambridge University Press, 1992.