
Testing transitivity of preferences using linked designs

Published online by Cambridge University Press:  01 January 2023

Michael H. Birnbaum*
Affiliation:
Dept. of Psychology, CSUF H-830M, Box 6846, Fullerton, CA 92834–6846, USA
Jeffrey P. Bahra
Affiliation:
California State University, Fullerton

Abstract

Three experiments tested whether individuals show violations of transitivity in choices between risky gambles in linked designs. The binary gambles varied in the probability to win the higher (better) prize, the value of the higher prize, and the value of the lower prize. Each design varied two factors, with the third fixed. Designs are linked by using the same values in different designs. Linked designs allow one to determine whether a lexicographic semiorder model can describe violations of transitivity in more than one design using the same parameters. In addition, two experiments tested interactive independence, a critical property implied by all lexicographic semiorder models. Very few people showed systematic violations of transitivity; only one person out of 136 showed violations of transitivity in two designs that could be linked by a lexicographic semiorder. However, that person violated interactive independence, as did the majority of other participants. Most individuals showed systematic violations of the assumptions of stochastic independence and stationarity of choice responses. This means that investigators should evaluate models with respect to response patterns (response combinations) rather than focusing entirely on choice proportions.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2012] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Descriptive theories of risky decision making can be divided into two groups: those that satisfy transitivity of preference and those that do not. Transitivity of preference is the assumption that, if a person prefers A to B and prefers B to C, then that person should prefer A to C, apart from random error. We use the symbol, ≻, to denote preference, so the property can be denoted as follows: A ≻ B and B ≻ C ⇒ A ≻ C.

Theories that represent each gamble by a single number automatically imply transitivity. These theories assume that A ≻ B ⇔ U(A) > U(B), where U(A) and U(B) are the numerical values or utilities of the two gambles. Expected utility theory (EU), cumulative prospect theory (CPT), and the transfer of attention exchange model (TAX), as well as many other theories, fall in this class of theories that satisfy transitivity (Birnbaum, 2008b; Tversky & Kahneman, 1992; Luce, 2000; Wakker, 2011).

Theories that represent choice in terms of contrasts between the components of the alternatives, however, need not satisfy transitivity of preference. Theories that violate transitivity include the family of lexicographic semiorder (LS) models, the priority heuristic, regret theory (RT), the stochastic difference model (SDM), and others (Birnbaum, 2010; Birnbaum & Gutierrez, 2007; Birnbaum & Schmidt, 2008; Brandstätter, Gigerenzer, & Hertwig, 2006; González-Vallejo, 2002; Loomes, Starmer, & Sugden, 1991; Luce, 1956, 2000; Myung, Karabatsos, & Iverson, 2005; Regenwetter, Dana, & Davis-Stober, 2010, 2011; Rieskamp, Busemeyer, & Mellers, 2006; Tversky, 1969).

An example of a lexicographic semiorder (LS) is presented next to illustrate how such a model can account for intransitive preferences.

1.1 Lexicographic semiorders

Let G = (x, p; y) represent a two-branch gamble in which prize x is received with probability p and otherwise y is received, where x > y ≥ 0. In such two-branch gambles, there are three variables that can be manipulated experimentally: y = Lowest (L) consequence; x = Highest (H) consequence; and p = Probability (P) to win the higher prize.

We use the notation LPH LS to refer to the lexicographic semiorder (LS) model in which the person is assumed to compare the attributes in the order L, then P, then H. The three attributes might be examined by a participant in any of five other possible orders: LHP, HPL, HLP, PLH, and PHL.

In the LPH LS model, a person is assumed to compare two such gambles, G = (x, p; y) and F = (x′, q; y′), by contrasting attributes against thresholds (ΔL and ΔP) as follows (a code sketch follows the list):

  1. First compare L: if |y − y′| ≥ ΔL, choose the gamble with the higher lowest consequence;

  2. Else, compare P: if |p − q| ≥ ΔP, choose the gamble with the higher probability to win the better prize;

  3. Else, check H: if x ≠ x′, choose the gamble with the higher best prize;

  4. Else, choose randomly.
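To make the decision rule concrete, here is a minimal Python sketch of the LPH LS rule as stated above. The function name, the tuple representation of gambles, and the convention that a difference exactly equal to a threshold counts as decisive (which matches the usage of ΔL and ΔP in the text below) are our own illustrative choices; the defaults are the priority-heuristic thresholds for these stimuli.

```python
def lph_ls_choice(g, f, delta_l=10.0, delta_p=0.10):
    """Choose between two-branch gambles g = (x, p, y) and f = (x2, q, y2)
    by the LPH lexicographic semiorder: lowest consequences (L) first,
    then probabilities (P), then highest consequences (H).
    Returns the preferred gamble, or None for indifference (choose randomly)."""
    (x, p, y), (x2, q, y2) = g, f
    if abs(y - y2) >= delta_l:      # Step 1: lowest consequences differ decisively
        return g if y > y2 else f
    if abs(p - q) >= delta_p:       # Step 2: probabilities differ decisively
        return g if p > q else f
    if x != x2:                     # Step 3: highest consequences break the tie
        return g if x > x2 else f
    return None                     # Step 4: indifferent; choose randomly
```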

The priority heuristic of Brandstätter et al. (2006) is a variant of this LPH LS in which it is assumed that ΔL equals one tenth of the highest consequence in either gamble, rounded to the nearest prominent number, where prominent numbers are the integer powers of 10, together with one half and twice those values; i.e., 1, 2, 5, 10, 20, 50, 100, etc. If the highest prize always rounds to $100 (as in the experiments of this article), then ΔL = $10. Further, the priority heuristic assumes that ΔP = 0.10, presumably due to the base 10 number system. Therefore, in these studies the priority heuristic is a special case of the LPH LS model. Brandstätter et al. (2006) showed that with these selected parameters, this model approximates the results of several previously published papers; in addition, they claimed that the priority heuristic is more accurate than other models for these selected studies.

To illustrate how this LPH LS model can violate transitivity, consider the following five gambles: K = ($100, .50; $0), L = ($96, .54; $0), M = ($92, .58; $0), N = ($88, .62; $0), and O = ($84, .66; $0). According to the priority heuristic, ΔL = $10 and ΔP = 0.10, so people should prefer K ≻ L, L ≻ M, M ≻ N, and N ≻ O, because the differences in probability are only 0.04; these are too small to be decisive (less than ΔP = 0.10), so preferences are determined by the highest consequences. However, O ≻ K, because the difference in probability is 0.16, which exceeds the threshold of ΔP = 0.10. As long as 0.16 ≥ ΔP > 0.04, the LPH LS implies: K ≻ L, L ≻ M, M ≻ N, and N ≻ O, but O ≻ K, violating transitivity. When ΔP = 0.10, as in the priority heuristic, two other violations are also predicted: O ≻ L and N ≻ K. If ΔP ≤ 0.04, the LPH LS model implies the transitive order ONMLK, and if ΔP > 0.16, it predicts the transitive order KLMNO.

Now consider a second design with choices among the following gambles: A = ($84, 0.5; $24), B = ($88, 0.5; $20), C = ($92, 0.5; $16), D = ($96, 0.5; $12), and E = ($100, 0.5; $8). According to the priority heuristic, E ≻ D, D ≻ C, C ≻ B, and B ≻ A, because in each of these choices, the lowest consequences differ by less than $10, and probabilities are equal, so these choices are determined by the highest consequences. However, in the choice between A and E, the lowest consequences differ by $16, which exceeds ΔL = $10, so A ≻ E, violating transitivity. As long as $16 ≥ ΔL > $4, the LPH LS implies there should be at least one intransitivity in this design, with A ≻ E. If ΔL = $10, as in the priority heuristic, then two other violations are also predicted, A ≻ D and B ≻ E. If ΔL ≤ $4, the LPH LS predicts the transitive order, ABCDE, and if ΔL > $16, it predicts the transitive order, EDCBA.
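Under the priority-heuristic parameters (ΔL = $10 and ΔP = 0.10), the sketch above reproduces both predicted cycles; the following check, using lph_ls_choice from the previous sketch, is purely illustrative:

```python
K, L, M, N, O = (100, .50, 0), (96, .54, 0), (92, .58, 0), (88, .62, 0), (84, .66, 0)
A, B, C, D, E = (84, .5, 24), (88, .5, 20), (92, .5, 16), (96, .5, 12), (100, .5, 8)

# PH design: adjacent choices favor the higher prize, but O beats K on probability.
assert all(lph_ls_choice(u, v) is u for u, v in [(K, L), (L, M), (M, N), (N, O)])
assert lph_ls_choice(O, K) is O    # 0.66 - 0.50 = 0.16 >= delta_p: intransitive cycle

# LH design: adjacent choices favor the higher prize, but A beats E on lowest consequence.
assert all(lph_ls_choice(u, v) is u for u, v in [(E, D), (D, C), (C, B), (B, A)])
assert lph_ls_choice(A, E) is A    # $24 - $8 = $16 >= delta_l: intransitive cycle
```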

1.2 Intransitive preferences in linked designs

The LS models, including the priority heuristic, imply that choices in linked designs will be related. This study uses the two designs described above, and also a third design with choices among the following: F = ($100, 0.5; $24), G = ($100, 0.54; $20), H = ($100, 0.58; $16), I = ($100, 0.62; $12), and J = ($100, 0.66; $8). Note that the levels of lowest consequence match those in the design with A, B, C, D, and E and that probability values match those in the first design, with K, L, M, N, and O. Those two designs are linked in turn by the levels of the highest consequence. These designs with linked levels should show predictable patterns of transitivity or intransitivity, if a person used the same LS model in all three designs.

The stimuli used in these studies with linked levels are listed in Table 1. The designs are named after the variables manipulated: the LH design varies the lowest consequence (L) and the highest consequence (H), the LP design varies the lowest consequence and probability (P), and the PH design varies probability and highest consequence.

Table 1: Gambles used in linked tests of transitivity.

For example, suppose that a person conformed to the LPH LS model. If that person showed data consistent with the transitive order ONMLK in the PH design, it means that ΔP ≤ 0.04; and suppose that the same person showed intransitive choices in the LH design consistent with ΔL = $10. In that case, the model implies intransitive data in the LP design such that J ≻ I, I ≻ H, H ≻ G, G ≻ F, and yet F ≻ J, F ≻ I, and G ≻ J. So, if results conformed to this prediction, they would represent successful confirmations of new predictions, and if not, the model(s) that predicted them would be disconfirmed. See Appendix A for all possible linked patterns in the LPH LS model.
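Continuing the illustrative sketch above, setting ΔL = $10 and ΔP = 0.04 reproduces this linked prediction for the LP design:

```python
F, G, H, I, J = (100, .50, 24), (100, .54, 20), (100, .58, 16), (100, .62, 12), (100, .66, 8)
ls = lambda u, v: lph_ls_choice(u, v, delta_l=10.0, delta_p=0.04)

assert all(ls(u, v) is u for u, v in [(J, I), (I, H), (H, G), (G, F)])   # J>I, I>H, H>G, G>F
assert all(ls(u, v) is u for u, v in [(F, J), (F, I), (G, J)])           # yet F>J, F>I, G>J
```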

1.3 TAX Model

A transitive model that has been fairly successful in describing violations of EU and CPT is Birnbaum’s (1999, 2008b) special Transfer of Attention Exchange (TAX) model. This model represents the utility of a gamble as a weighted average of the utilities of the consequences, but weight in this model depends on the probabilities of the branch consequences and the ranks of the consequences in the gamble. This model can be written for gambles of the form G = (x, p; y), where x > y ≥ 0, as follows:

(1)   U(G) = [a·u(x) + b·u(y)] / (a + b),

where a = t(p) – ωt(p) and b = t(q) + ωt(p), with q = 1 – p, when ω > 0. In this case (ω > 0), there is a transfer of attention from the branch leading to the best consequence to the branch leading to the worst consequence. In the case where ω < 0, weight is transferred from lower-valued branches to higher ones; in that case, a = t(p) – ωt(q) and b = t(q) + ωt(q). The configural parameter, ω, can produce risk aversion (ω > 0) or risk-seeking (ω < 0), even when u(x) = x. When ω = 0 and t(p) = p, this model reduces to expected utility (EU). Expected Value (EV) is a special case of EU in which u(x) = x.

When fitting individual data in a suitable experiment, parameters can be estimated from the data. However, for the purpose of making predictions before conducting new studies, a simple version of the special TAX model has been used (e.g., Birnbaum, 2008b): u(x) = x for 0 < x < $150; t(p) = p^0.7; and ω = 1/3, where ω = δ/(n + 1), δ = 1, and n = 2 is the number of branches. These have been called “prior” parameters, because they have been used in previous studies to design new experiments and to predict modal results with similar participants, contexts, and procedures. Although these are not optimal, they have had reasonable success predicting aggregate results of new studies with American undergraduates who choose among gambles with small gains (e.g., Birnbaum, 2004, 2005, 2008b, 2010).
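The following sketch implements Equation 1 with these prior parameters; the function name is our own, and the final assertion checks the transitive order ABCDE in the LH design, stated in the next paragraph.

```python
def tax_u(gamble, t=lambda p: p ** 0.7, u=lambda x: x, omega=1/3):
    """Special TAX utility of a two-branch gamble (x, p, y), x > y >= 0 (Equation 1).
    With omega > 0, weight omega*t(p) is transferred from the branch with the
    better consequence to the branch with the worse consequence."""
    x, p, y = gamble
    a = t(p) - omega * t(p)        # weight of the branch leading to x
    b = t(1 - p) + omega * t(p)    # weight of the branch leading to y
    return (a * u(x) + b * u(y)) / (a + b)

# The prior parameters reproduce the transitive order ABCDE in the LH design:
A, B, C, D, E = (84, .5, 24), (88, .5, 20), (92, .5, 16), (96, .5, 12), (100, .5, 8)
assert tax_u(A) > tax_u(B) > tax_u(C) > tax_u(D) > tax_u(E)
```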

The TAX model with these prior parameters implies the transitive orders, ABCDE, FGHIJ, and ONMLK. Although the TAX model successfully predicted new results that violated CPT, in the tests of transitivity of Table 1, TAX with these parameters makes virtually the same predictions as CPT with the parameters of Tversky and Kahneman (1992). With other parameters, TAX, CPT, and EU could account for other transitive orders, but these models always imply transitivity. Experiments 2 and 3 also include tests between TAX and CPT.

Transitivity can therefore be considered a critical property of TAX, CPT, and EU because these models (with any parameters) cannot account for systematic violations of transitivity. The family of LS models could handle either transitive or intransitive data, so finding transitive preferences would not refute LS models. For example, the LPH LS model with ΔP ≤ 0.04 and ΔL ≤ $4 makes the same transitive predictions for this study as the TAX model with its prior parameters. There are critical properties of LS models, however, that can lead to refutation of those models (Birnbaum, 2008a, 2010; Birnbaum & LaCroix, 2008), described next.

1.4 Critical properties of LS models

Birnbaum (2010) considered a general family of LS models in which each person might have a different priority order in which to compare the features; each person might have a different monotonic utility function for monetary prizes and a different subjective function for probability; and each person might have different thresholds for determining if a given subjective difference is decisive. Birnbaum (2010) showed that this general family of LS models implies properties of priority dominance, integrative independence, and interactive independence. In Experiments 2 and 3, we test interactive independence, which can be written:

F = (x, p; y) ≻ G = (x′, p; y′) ⇔ F′ = (x, q; y) ≻ G′ = (x′, q; y′).

In these two choice problems, note that F and G share a common probability to win (p), and F′ and G′ also share a common probability (q). According to this most general family of LS models, a person should either prefer F to G and F′ to G′, or prefer G to F and G′ to F′, or be indifferent in both cases; but a person should not shift from F to G′ or from G to F′ as the common probability is changed, except by random error. To test such a property with real data requires a theory to separate random error from systematic violation.

1.5 Testing algebraic properties with probabilistic data

Testing properties such as transitivity or interactive independence is complicated by the fact that people are not completely consistent in their responses. Different people can make different responses when asked the same question, so we must allow for individual differences. Furthermore, the same person might make different responses when the same choice problem is presented on a later trial, following other intervening trials. It is possible that the person has changed her or his “true” preferences, that responses contain “error”, or both. Exactly how to analyze data containing variability has been the topic of debate (Loomes & Sugden, 1995; Birnbaum & Bahra, 2012; Regenwetter et al., 2011).

Morrison (1963) reviewed two properties that are implied by certain stochastic choice models such as Luce’s (1959) choice model: Weak Stochastic Transitivity (WST) and the Triangle Inequality (TI). These properties were also viewed as methods for analyzing transitive models with variable data. The TI can be written:

p(A, B) + p(B, C) – p(A, C) ≤ 1,

where p(A, B) is the probability to choose A over B. WST can be written:

p(A, B) ≥ ½ and p(B, C) ≥ ½ ⇒ p(A, C) ≥ ½.
Morrison (1963) advised that both of these properties should be tested. Tversky (1969) cited Morrison but reported only tests of WST. Tversky’s statistical tests were challenged by Iverson and Falmagne (1985), who noted that Tversky’s tests did not properly allow for individual differences in preference orders (cf. Myung et al., 2005).
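Both properties can be checked mechanically from a matrix of binary choice proportions; the following is a minimal sketch, with function name and data format of our own devising:

```python
from itertools import permutations

def check_wst_ti(p):
    """p[a][b] = proportion choosing a over b, with p[a][b] + p[b][a] = 1.
    Returns the ordered triples (a, b, c) that violate WST and the TI."""
    wst, ti = [], []
    for a, b, c in permutations(p, 3):
        if p[a][b] >= .5 and p[b][c] >= .5 and p[a][c] < .5:
            wst.append((a, b, c))              # a >= b and b >= c, but not a >= c
        if p[a][b] + p[b][c] - p[a][c] > 1:
            ti.append((a, b, c))               # outside the linear order polytope
    return wst, ti
```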

Regenwetter et al. (2010, 2011) also criticized Tversky’s failure to test the TI, and proposed statistical tests of these properties based on the assumptions that repeated responses to the same choice problem are independent and identically distributed (iid). They argued that, if each person’s responses can be modeled as an iid sample from a mixture of different transitive preferences (so that choice proportions satisfy the linear order polytope, which includes the TI), there would be no reason to argue for LS models.

Birnbaum (2011, 2012) questioned Regenwetter et al. (2011) for not testing the crucial iid assumptions; when iid assumptions are false, neither WST nor the TI (nor any analysis of the linear order polytope defined on choice proportions for individual choice items) can be regarded as an unambiguous test of transitivity.

There are two problems: WST can be violated even when a person has a mixture of strictly transitive orders, and the TI (and the linear order polytope) can be satisfied even when a person has a mixture that includes intransitive patterns (Birnbaum, 2011, 2012). These two properties can be more informative when they agree, but Birnbaum (2011) argued that we should also examine response patterns in order to ensure that choice proportions reflect individual behavior, which might in fact change systematically during a study.

A debate between Birnbaum (2011, 2012) and Regenwetter, Dana, Davis-Stober, and Guo (2011) has arisen concerning methods for analyzing variability of choice responses. The approach of Regenwetter et al. (2010, 2011) analyzes only binary choice proportions, based on the assumption of iid, whereas the “true and error” (TE) models analyze relative frequencies of response patterns, based on the assumption that errors are independent.

The true and error (TE) model, as applied by Birnbaum and Gutierrez (2007) and Birnbaum and Schmidt (2008), assumes that different people may have different patterns of true preferences and that different choice problems may have different error rates. Different individuals might also have different levels of “noise” in their data. This type of model has not been the subject of much debate, because these models assume that behaviors of people tested separately are independent (i.e., that people do not influence each other via ESP).

More controversial is the proposal that the TE model be applied to individual data with the assumption that a person might have different “true” preferences in different blocks of trials during the course of a long study (Birnbaum, 2011; Birnbaum & Bahra, 2012). This approach uses the variability of response by the same person to the same item within the same block of trials in order to separate variability due to “error” from “true” intention. This model contradicts the iid assumptions of Regenwetter et al. (2011), which until recently have been assumed but not tested empirically.

This second type of TE model, called individual true and error theory (iTET), allows that, in a long experiment, a person might have different “true” preferences at the end of the study from those at the beginning. If a person has only one “true” pattern of preferences in all blocks, then the iTET model implies that responses will satisfy the assumptions of iid. However, when a person has more than one true pattern (changing systematically during the study), the assumptions of iid will not in general be satisfied in this model (Birnbaum, 2011). The present studies tested these iid properties.

1.6 Overview of tests, results and implications

Experiment 1 found overwhelming evidence against iid. The violations of iid suggest that many if not most participants systematically changed their “true” preferences during the course of the study. Violations of iid mean that we cannot properly restrict our analysis to choice proportions, but we should examine response patterns in order to test properties such as transitivity.

Nevertheless, choice proportions are analyzed for comparison with related theories, such as the priority heuristic, that make predictions at the level of average choice proportions (summarized in appendices). The averaged choice proportions did not agree with this heuristic and not one person had data consistent with it.

If a LS model holds, there can be linked patterns of intransitivity in linked designs. This was the key idea that led to Experiment 1, but as shown below, very few participants showed evidence of intransitive preferences in any of the three experiments and only one person showed intransitivity in two linked designs that might be compatible with a LS model. A further analysis of response patterns in every individual block of data found little evidence that many, if any, people held intransitive patterns as portions of a mixture of strategies.

The property of interactive independence should be satisfied according to all LS models; however, most participants in Experiments 2 and 3 violated this property systematically (as predicted by interactive models such as EU, CPT, and TAX) including even the one participant who showed linked violations of transitivity. The results led to the surprisingly strong conclusion that LS models can be rejected for nearly every participant.

2 Method

Each participant made choices between gambles, knowing that 10 participants would play one of their chosen gambles for real cash prizes. Each gamble was described as an urn containing 100 otherwise identical tickets, which differed only in the prize values printed on them. A ticket would be drawn randomly from the chosen urn to determine the cash prize. Participants were told that any of the choice problems might be selected for play, so they should choose carefully. At the conclusion of the study, randomly selected participants were awarded prizes, as promised.

2.1 Stimuli and designs

Each choice was displayed as in the following example:

  • First Gamble:

  • 50 tickets to win $100

  • 50 tickets to win $0

OR

  • Second Gamble:

  • 50 tickets to win $35

  • 50 tickets to win $25

Participants viewed the choices via computer and indicated their decisions by clicking one of two buttons to identify the gamble they would rather play in each choice.

Three linked sub-designs were used to test transitivity (Table 1). The LH design used 5 binary gambles in which probability was 0.5 and in which the Lowest (L) and Highest (H) consequences were varied. In the LP design, the highest prize was fixed to $100 and both probability (P) and lowest consequence (L) were varied. In the PH design, the lowest consequence was fixed to $0 and both probability (P) and highest consequence (H) were varied.

The five gambles within each of the LH, LP, or PH designs could appear as either First or Second gamble, making 5 × 5 = 25 possible choice trials; however, a gamble was not presented with itself, leaving 20 trials in each of these three sub-designs. Note that each of 10 distinct choice problems was presented in each of two counterbalanced arrangements in each block.

There were 5 other “filler” sub-designs containing 6 to 48 choices each. These other sub-designs included trials in which a person was asked to choose between gambles with up to five branches (including choices listed in Table 11 of Birnbaum, 2008b), or to choose between gambles and cash prizes to be received for certain. For the purpose of this article, trials in these other designs served as “fillers” that separated blocks of trials. Complete instructions and materials, including the filler tasks, can be viewed at the following URL: http://psych.fullerton.edu/mbirnbaum/Birnbaum_Bahra_archive.htm.

2.2 Procedures and participants of Experiment 1

Trials in the three main subdesigns (LH, LP, and PH) were blocked in sets of 25 to 26 choices each. Each block included all 20 trials from one sub-design, intermixed with 5 or 6 fillers, and put in restricted random order. This means that each of the 10 choices was presented twice within each block of trials, with position (first or second gamble) counterbalanced. A block of trials including any of the LH, LP, or PH designs was not presented again until at least 98 intervening trials and at most 175 intervening trials with choices from other designs had been presented.

Participants of Experiment 1 were 51 undergraduates enrolled in lower division psychology at California State University, Fullerton. Participants were tested in a lab via computers. Each participant served in two sessions of 1.5 hours each, separated by one week.

Each person worked alone, viewing instructions and materials via computer, and worked at his or her own pace for the time allotted. Therefore, some participants completed more repetitions than others. In Experiment 1, the limit was 20 blocks of trials, meaning each of the choice problems testing transitivity was judged up to 40 times by each person. For additional detail, see Bahra (2012).

2.3 LS design in Experiments 2 and 3

Experiments 2 and 3 included LH, LP, and PH designs plus additional trials that tested interactive independence, the priority heuristic, and CPT. The LS design consisted of 16 choices. Five choices testing interactive independence were of the form, R = ($95, p; $5, 1 – p) versus S = ($55, p; $20, 1 – p), where p = 0.95, 0.9, 0.5, 0.1, or 0.05. Six others were formed by presenting each of three choices: S = ($99, p; $1, 1 – p) versus R = ($40, p; $35, 1 – p), where p = 0.9, 0.5, or 0.1, with either S or R presented first. There were five additional trials, as follows: R = ($90, 0.05; $88, 0.05; $2, 0.9) versus S = ($45, 0.2; $4, 0.2; $2, 0.6), R+ = ($90, 0.1; $3, 0.7; $2, 0.2) versus S– = ($45, 0.1; $44, 0.1; $2, 0.8), S2 = ($40, 0.4; $5, 0.1; $4, 0.5) versus R2 = ($80, 0.1; $78, 0.1; $3, 0.8), S3– = ($40, 0.2; $39, 0.2; $3, 0.5) versus R3+ = ($80, 0.2; $4, 0.7; $3, 0.1), and G4 = ($99, 0.30; $15, 0.65; $14, 0.05) versus F4 = ($88, 0.12; $86, 0.70; $3, 0.18). These five trials test implications of the priority heuristic and CPT (see Birnbaum, 2008c).

2.4 Procedures and participants in Experiment 2

Experiment 2 was conducted as a replication of Experiment 1 with new participants, except with different filler designs between blocks and with the addition of the LS design, which allowed us to test whether those people showing signs of intransitive preferences also satisfied a critical property of LS models. In Experiment 2, blocks containing LH, LP, and PH designs (each with 5 or 6 intermixed trials from the LS design) were separated by at least 76 intervening trials, which included different intervening choices from those used in Experiment 1. There were 43 different undergraduates from the same “participant pool” tested with this procedure. The “filler” tasks, which tested restricted branch independence and stochastic dominance, are described further in Birnbaum and Bahra (2012, Study 1).

2.5 Procedures and participants in Experiment 3

Experiment 3 was conducted to investigate two conjectures. First, it was conjectured that if all three transitivity designs were intermixed to make larger and more heterogeneous blocks, it might be more “confusing” to subjects, which might induce more intransitivity. In the first two experiments, where a block contained only 25 or 26 trials, it was argued, people might remember preferring A to B and B to C when they were comparing A versus C, so they might obey transitivity because of an experimental demand for consistency. The idea was that by intermixing trials and spreading them out over larger blocks, memory would be overburdened, so intransitive data might be observed. Therefore, trials of all three transitivity designs were intermixed in Experiment 3.

Second, it was argued (Regenwetter et al., 2011) that if trials from one design are separated by 3 intervening “filler” trials, responses might satisfy iid, presumably also due to the burdens of memory. Therefore, order was constrained in Experiment 3 such that any two trials from the LH, LP, or PH designs were separated by at least 3 trials from other designs. Perhaps these procedures would “help” iid to be satisfied.

Each block of Experiment 3 consisted of 107 trials (including 20 trials of the LH design, 20 trials of the PH design, 20 trials of the LP design, 16 trials of the LS design, and 31 other trials consisting of choice problems like those in Birnbaum, 2008b). Following a warmup of four “filler” trials, each pair of trials from any of the LH, LP, or PH designs was separated by at least 3 intervening trials from other designs. Each block of 107 mixed trials was separated by a separate “filler” task with 57 trials involving choices between three-branch gambles with equally likely consequences. Experiment 3 used 42 undergraduates from the same pool, who served in two 1.5-hr sessions, one week apart, and who completed at least 10 blocks. Materials from Experiment 3, including the filler task, can be viewed at the following URL: http://ati-birnbaum-2009.netfirms.com/Spr_2010/thanks3.htm.

3 Results

3.1 Data reliability and consistency

Each of the ten basic choices in each of the LH, LP, and PH designs was presented twice in each block of trials, with positions counterbalanced. We define within-block consistency as the number of consistent choices out of 10 in each block, which required the participant to push opposite buttons for the two versions of each choice. If a person “went to sleep” and clicked the same button throughout a block, within-block consistency would be 0; if a person clicked buttons randomly, expected consistency would be 5, and if a person made perfectly consistent preferences (appropriately clicking opposite buttons), this index would be 10 (100%).

Within-block consistency was apparently high in the first two experiments; mean consistencies were 86% and 91% in Exps 1 and 2, respectively. The least consistent individuals in Exps 1 and 2 had self-agreements of 67% and 73%, respectively. Similar figures were found within each sub-design; mean within-block consistency was 86%, 85%, and 87% in the LH, LP, and PH designs of Exp 1, and 91%, 84%, and 86% in Exp 2, respectively. In Exp 3, where each block had 107 mixed choices, within-block consistency was 77%.

We defined between-block consistency as the mean number of consistent responses out of 20 choices between every pair of trial blocks. If a participant completed 20 blocks of trials, for example, this person judged each of the 10 choices 40 times, and there are 190 pairs of blocks (20 × 19/2) for which the number of agreements can be counted. Comparing two blocks of trials, if a person made the same decisions in all 20 choice problems, the score would be 20; if a person randomly pushed buttons, the expected score would be 10 (50%); and the lowest score possible is 0, if a person made exactly opposite choices on all 20 trials. Note that if a person used the same button on all 20 responses in two blocks (which would produce within-block consistency of 0), between-block consistency would be 20 (100%). If a person had a response bias, for example, clicking button 2 when unsure or indifferent, such a bias would increase between-block consistency and decrease within-block consistency.
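Both indices can be computed directly from the raw responses; the following sketch assumes a simple data layout of our own devising:

```python
from itertools import combinations

def within_block_consistency(block):
    """block: ten (r1, r2) response pairs, one per choice problem, where r1 and
    r2 are the responses (1 or 2) to the two counterbalanced presentations.
    Because positions are reversed, opposite buttons indicate consistency."""
    return sum(r1 != r2 for r1, r2 in block)          # 0..10

def between_block_consistency(blocks):
    """blocks: list of blocks, each a list of the 20 responses (1 or 2) to the
    same 20 trials in a fixed order. Returns mean agreements (0..20) over all
    pairs of blocks."""
    pairs = list(combinations(blocks, 2))
    return sum(sum(a == b for a, b in zip(u, v)) for u, v in pairs) / len(pairs)
```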

We found that mean between-block consistency was significantly lower than within-block consistency: 80%, 84%, and 75% in Exps 1, 2, and 3 respectively, t(50) = 6.19, t(42) = 7.13, and t(41) = 4.26.

3.2 Violations of stochastic independence and stationarity

Examining individual data of Experiment 1, we found a surprising result: Some individuals made exactly opposite responses on 20 out of 20 choices between two trial blocks and had perfect within-block consistency in both blocks. Under iid, the probability that a response to a given item reverses between two blocks is at most ½, so the probability of this result is at most (½)^20, the same as the chance of predicting the exact sequence of 20 tosses of a fair coin: less than 1 in a million! Yet 10 people out of 51 (#101, 106, 109, 113, 124, 130, 134, 141, 145, 149) showed such patterns in Exp 1, and most of these showed multiple instances of perfect reversals in their data. In addition, three other people showed reversals of 19 out of 20 choices, which has a probability of less than 1 in 50,000. Such results mean that the assumptions of iid are seriously and systematically violated.

Table 2 shows raw data for Participant #134, who showed complete reversals in all three designs. Responses to the 20 choice problems in the LH design are listed in the order: AB, AC, AD, AE, BA, BC, BD, BE, CA, CB, CD, CE, DA, DB, DC, DE, EA, EB, EC, ED. The integers 1 and 2 indicate preference for the first or second gamble, respectively. Entries under the columns labeled “order” indicate cases where all 20 responses in a block are perfectly consistent with a transitive order. This person started the experiment with three trial blocks showing inconsistency, but by the seventh block, which finished the first day, all 60 responses were perfectly consistent with the transitive orders, ABCDE, JIHGF, and ONMLK.

Table 2: Raw data from Case #134 in the LH, LP, and PH Designs. Day indicates the day on which the participant completed each block, denoted “blk”. “Order” indicates where all 20 responses in a block were perfectly consistent with a transitive order. Note that all 60 responses are opposite between Block 7 and Block 15.

Participant #134 began the second session (day 8) with the same behavior in the LH and PH designs, but the LP data were different; by the 10th block of trials, all 60 responses were perfectly consistent with the orders, ABCDE, FGHIJ, and ONMLK. During the 11th block, data were not perfectly consistent in any of the designs, but on the 12th and 15th blocks, data were perfectly consistent with EDCBA, FGHIJ, and KLMNO, exactly the opposite of the pattern shown at the end of the first day. The probability of a single such perfect reversal of 60 responses, assuming iid, is at most (½)^60, less than one in a million cubed!

Such dramatic and surprising results from Exp 1 led us to conduct Exp 2 as a replication with different “filler” tasks between blocks and with new participants. In Exp 2, 7 of 43 participants showed at least one such complete reversal of 20 responses (#201, 221, 235, 212, 222, 230, and 232); an additional 4 showed reversals of 19 out of 20 responses between at least two blocks.

In Experiment 3, where all three designs were intermixed in blocks of 107 trials, one person (#334) showed a complete reversal in the LP design. Summed over the three studies, we observed 410 instances of perfect reversals of 20 responses between blocks. So many “1 chance in a million” outcomes cannot be reconciled with the assumptions of iid.

These cases with perfect reversals involve data that are so clean and response patterns that are so different that it is easy to detect obvious and systematic changes in preferences between blocks. Such findings suggest that there might be subtler cases where people change between similar patterns of preference or where responses contain enough variability that one could not spot violations of iid without a statistical tool to detect them.

Birnbaum (2012) devised two statistical tests of iid based on the Monte Carlo procedure suggested by Smith and Batchelder (2008): One test uses the variance of preference reversals between pairs of trial blocks, and the other uses the correlation between preference reversals and the separation between blocks (which is correlated with the intervening time between blocks). These tests, summarized in Appendix B, show that most participants in all three studies had significant violations of iid. In Exps 1 and 2 there were six tests of iid for each person (variance and correlation methods in the LH, LP, and PH designs). We found that only 3 out of 51 and only 4 of 43 participants did not have at least one violation of iid significant at the .01 level in Exps 1 and 2, respectively.

Only 4 of 42 in Exp 3 did not have at least one of two tests significant at the .01 level. These results show that the assumptions of iid must be rejected as empirical descriptions. Violations of iid suggest that people are changing their “true” preferences during a study, in which case it could easily be misleading to analyze only marginal choice proportions. Instead, we should examine response patterns.
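The logic of these Monte Carlo tests can be sketched as follows: permuting each item’s responses across blocks makes iid true by construction while preserving each item’s choice proportion, so the observed variance of between-block reversal counts, and the observed correlation of reversals with block separation, can be referred to their permutation distributions. The code below is our reconstruction of that logic, not the published program:

```python
import random
from itertools import combinations

def iid_tests(data, n_sims=10000, seed=0):
    """data[i][b] = response (1 or 2) of one person to choice item i in block b.
    Returns Monte Carlo p-values for the variance test and the correlation test."""
    rng = random.Random(seed)
    n_blocks = len(data[0])
    pairs = list(combinations(range(n_blocks), 2))

    def stats(d):
        # Reversals between each pair of blocks, and how far apart the blocks are.
        rev = [sum(item[s] != item[t] for item in d) for s, t in pairs]
        gap = [t - s for s, t in pairs]
        m_r = sum(rev) / len(rev)
        var = sum((r - m_r) ** 2 for r in rev) / len(rev)
        m_g = sum(gap) / len(gap)
        s_r = sum((r - m_r) ** 2 for r in rev) ** 0.5
        s_g = sum((g - m_g) ** 2 for g in gap) ** 0.5
        cov = sum((r - m_r) * (g - m_g) for r, g in zip(rev, gap))
        corr = cov / (s_r * s_g) if s_r > 0 and s_g > 0 else 0.0
        return var, corr

    v_obs, c_obs = stats(data)
    v_ge = c_ge = 0
    for _ in range(n_sims):
        shuffled = [rng.sample(item, n_blocks) for item in data]  # iid by construction
        v, c = stats(shuffled)
        v_ge += v >= v_obs
        c_ge += c >= c_obs
    return v_ge / n_sims, c_ge / n_sims   # small p-values signal iid violations
```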

3.3 Analysis of modal response patterns

According to the family of LS models, it is possible to prefer A ≻ B, B ≻ C, C ≻ D, and D ≻ E, and yet prefer E ≻ A. This pattern is denoted 11112, where 1 (or 2) represents a preference response for the alphabetically higher (or lower) labeled gamble, in Choices AB, BC, CD, DE, and AE, respectively. The opposite pattern, 22221, is also intransitive. All other patterns for these five choice problems are compatible with transitivity.
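Coding each block’s five responses as a string makes the intransitive patterns easy to tabulate; a trivial illustrative sketch:

```python
INTRANSITIVE = {"11112", "22221"}

def pattern(responses):
    """responses: the five responses (1 or 2) to Choices AB, BC, CD, DE, and AE."""
    return "".join(str(r) for r in responses)

assert pattern([1, 1, 1, 1, 2]) in INTRANSITIVE   # A>B, B>C, C>D, D>E, yet E>A
```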

In each block of trials testing transitivity, there are two such tests, each based on 5 choice problems, where the positions of the gambles are reversed in the two tests. For each person, we determined the most frequent response patterns for these five choice problems separately for each presentation order. Out of 408 possible cases (136 participants by 3 designs), 333 cases were consistent; that is, the same modal pattern was observed in both presentation arrangements.

The numbers of participants who showed each of the modal response patterns are shown in Table 3, for consistent cases in the LH, LP, and PH designs, respectively. There were only 7 matrices (involving just 6 participants) with intransitive, consistent modal patterns; 7 cases out of 333 represent only about 2%. Cases were numbered starting with #101, 201, and 301 in the three experiments, respectively, and case numbers for those people showing consistent modal violations of transitivity are listed in Table 3. Only one person (#214) showed intransitivity in two linked designs: 22221 and 22221 in the LH and LP designs. This case will be reexamined later.

Table 3: The frequency of consistent, modal response patterns in LH, LP, and PH designs. To be consistent, the participant had to have the same modal response pattern, over repetition blocks, in both ways of presenting the choices. Patterns 11112 and 22221 are intransitive. There were 51, 43, and 42 participants in Experiments 1, 2, and 3 with three designs each; only 7 cases out of 333 consistent modal patterns were intransitive.

3.4 Analysis of the Priority Heuristic

The priority heuristic implies the patterns, 22221 and 11112 in Table 3 for LH and PH designs, respectively. No consistent case showed these modal patterns, so no one obeyed the predictions of the priority heuristic.

The priority heuristic was proposed as a theory to describe the process that (most) people use when making choices (Brandstätter et al., 2006). It is supposed to fit modal preferences, averaged over participants. Averaging our data (see Appendix C), the median response proportions are perfectly consistent with both WST and the TI. The averaged data agree with the transitive orders implied by the TAX model with its prior parameters: ABCDE, FGHIJ, and ONMLK; i.e., 11111, 11111, and 22222, in designs LH, LP, and PH, respectively.

As shown in Appendix C, the priority heuristic correctly predicted the most often chosen gamble in the averaged data in only three out of ten choice problems in each sub-design. By predicting only 9 of 30 modal choice proportions correctly, this model performed significantly worse than a random coin toss, which would have a binomial probability of .98 of scoring 10 out of 30 or higher. Therefore, the priority heuristic not only failed to fit the data of any individual, it also failed to describe the averaged data in this study.

3.5 Individual response patterns

The finding in Table 3 that 98% of individual modal response patterns are transitive does not rule out the possibility that some individuals might have intransitive patterns of preferences as “true” patterns in a mixture of response patterns. In order to explore this possibility, we tabulated all response patterns (see Appendix D). The proportions are shown in Table 4, which shows that intransitive response patterns (11112 and 22221) amounted to 5% or less of all individual response patterns in all three designs of all three experiments. Because some of these responses could occur by random error, this low rate of intransitivity of individual response patterns provides little support for the notion that more than a small number have “true” intransitive patterns as part of a mixture of response patterns. A more detailed analysis is presented in Appendix D, which describes the search for individuals who might have intransitive patterns as secondary or tertiary patterns in a mixture. Appendix H further analyzes these response patterns with respect to iid and TE models.

Table 4: Percentages of all response patterns in LH, LP, and PH Designs. Column sums may differ from 100, due to rounding.

3.6 WST and TI

Whereas Tables 3 and 4 analyze response patterns for five of the choice problems, WST and TI can be examined for all 10 choice problems, which might detect other violations of transitivity besides those implied by LS models.

Violations of iid (Appendix B) cast doubt on any analysis that focuses strictly on marginal choice proportions, including tests of WST and TI. Nevertheless, we examined WST and TI as a third tactic to search for evidence of intransitivity. There were 136 participants with 3 matrices per person, making 408 data matrices; of these, only 18 matrices (4.4%) violated (were not perfectly consistent with) both WST and TI (In LH, #120, 126, 137, 140, 151, 214, and 311; in LP, #102, 122, 125, 214, and 239; in PH, #137, 147, 202, 239, 309, and 338). If one were to apply statistical tests (a dubious procedure given the violations of iid), some of these 18 cases (out of 408) might be declared “nonsignificant;” but in the big picture, it matters little whether the rate of violation of WST and TI is 4.4% or, say, 2%. Further details are in Appendix E.

3.7 Intransitive individuals

Because violations of transitivity (by any of the definitions) are so rare in these data, it might be tempting to conclude that no one is ever intransitive. However, we think that such a conclusion is too strong for two reasons: First, it argues from failure to reject the null hypothesis to its “truth”. Second, some intransitive cases appear to be systematic. Table 5 shows response patterns (for the five choice problems of Table 3) for Participants #125, 214, and 309 in each block. Responses are shown for each block in each of two sessions (days), with responses reflected so that identical numbers represent consistent choices. Participant #125 shows evidence of intransitivity in the LP design, showing the exact, intransitive response pattern 22221 in 20 of 30 presentations, and showing the same intransitive pattern 7 times in both versions within a block.

Table 5: Analysis of Participants #125, #214, and #309. LH and LH2 show the response patterns for choice problems AB, BC, CD, DE, and AE when the alphabetically higher gamble was presented first or second. The patterns, 22221 and 11112 are intransitive (bold font).

One might argue that #125 was transitive in the LP design with a “true” pattern of 22222, but responded randomly on the last choice, which produced the modal pattern 22221. But we need to explain why, 27 out of 30 times, this person chose F over J (a response of 1 for the last choice listed); the binomial probability of 27 or more out of 30 with p = ½ is less than .00001. This binomial calculation assumes a single “true” pattern (22222) and independence of responses both within and between blocks.
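The binomial tail is easy to verify directly (an illustrative check):

```python
from math import comb

# P(X >= 27 | n = 30, p = 1/2) = (4060 + 435 + 30 + 1) / 2**30, about 4.2e-6.
p_ge_27 = sum(comb(30, k) for k in range(27, 31)) / 2 ** 30
assert p_ge_27 < 1e-5
```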

A more complex analysis by means of the TE model (see Appendix F) allows all 30 transitive “true” patterns (but no intransitive ones). Fitting this more general model, the probability to observe the pattern 22221 both times in a block is estimated to be only 0.15, which under the weaker independence assumptions of the TE model yields a probability of finding 7 or more repeated patterns of 22221 (out of 15 blocks) of only p = .003. So even with this mixture model, these data are unlikely to arise from such a transitive model.

This response pattern for #125 in the LP design, 22221, would be consistent with LPH, LHP, or HLP LS models, if $16 ≥ ΔL > $4. According to these respective LS models, however, #125 should have shown a pattern of 22221, 22221, or 22222 in the LH design, respectively. Instead, #125 had the modal response pattern of 11111 in the LH design (also 20 of 30 times, with 8 repeats), which requires ΔL ≤ $4 under any of these three models, contradicting the behavior in the LP design. Therefore, we cannot use any of the LS models to connect the modal response patterns of Case #125 in these two linked designs. So even if we conclude that Case #125 was truly intransitive in the LP design, the results in the LH design contradict the compatible linkages of the LS models.

Participant #214 is the only case in which a person had intransitive modal data in two linked designs that might be consistent with a LS model. This person showed the modal pattern 22221 in both the LH and LP designs and the transitive pattern, 22222, in the PH design. These modal patterns of behavior (22221, 22221, and 22222 in LH, LP, and PH Designs) are consistent with the LPH LS model with $16 ≥ ΔL > $4 and ΔP ≤ 0.04.

If we argue that #214 was truly transitive with “true” patterns of 22222 in both LH and LP conditions, we need to explain why this person repeated the exact 22221 pattern on eleven blocks out of 22 opportunities and why the last choice is “1” 19 times out of 22 in the LH condition, and 19 of 22 in the LP task. Suppose the probability of choosing “1” in the last choice is 0.5; if so, the binomial probability to show 19 or more out of 22 is 0.0004, so it is unlikely that two sets of such data arose from true patterns of 22222, combined with independent random responding on the last choice. Case #214 is the only case of intransitive behavior in more than one design consistent with a single LS model. However, this same person violated a critical property of LS models, as shown in the next section, as did most of the other participants.

3.8 Testing interactive independence

The LS design of Experiments 2 and 3 provides critical tests of the family of LS models. Four tests of interactive independence (each consisting of two choices) per block of trials were constructed from the LS design. For example, consider these two choice problems: R = ($95, .95; $5) versus S = ($55, .95; $20), and R′ = ($95, .10; $5) versus S′ = ($55, .10; $20).

For each of four tests, there are four possible response patterns, SS′, SR′, RS′, and RR′, in each block. According to any LS model, a person should prefer either S and S′ or R and R′; that is, SS′ or RR′. With any mixture of LS models, a person might show a mixture of these two response patterns, but should not switch systematically from R in the first choice to S′ in the second choice, denoted the RS′ pattern. TAX with its prior parameters implies this RS′ reversal.
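Using the two-branch TAX sketch from Section 1.3 with its prior parameters, this predicted RS′ reversal can be verified numerically (illustrative; Rp and Sp stand for R′ and S′):

```python
R,  S  = (95, .95, 5), (55, .95, 20)    # common probability p = .95
Rp, Sp = (95, .10, 5), (55, .10, 20)    # common probability q = .10 (R' and S')

assert tax_u(R) > tax_u(S)     # R preferred when the common probability is high
assert tax_u(Rp) < tax_u(Sp)   # S' preferred when it is low: the RS' pattern
```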

Consider again Case #214, whose modal data conformed to the LPH LS model in two linked designs. This person completed 11 blocks of trials with 4 tests of interactive independence each, making 44 possible tests. Out of 44 tests (two choice problems per test), this person had the exact response pattern RS′ in 43 of 44 tests. Therefore, the data of Case #214 cannot be represented by any LS model or mixture of LS models.

In Experiments 2 and 3, there were just four cases that showed consistent evidence of intransitivity: #214, 239, 309, and 311. For these cases, the scores were 43 to 0, 27 to 0, 11 to 0, and 48 to 0, comparing RS′ reversals (predicted by interactive models) against the opposite reversals, SR′, respectively. Three other cases (from Exps 2 and 3) were identified with partial indicators of intransitivity: #202, 218, and 338. The scores for these cases on the tests of interactive independence are 37 to 0, 33 to 0, and 8 to 0, respectively. So even for those cases that seem most promising as evidence of intransitivity, the data refute interactive independence, which is implied by any LS model or mixture of LS models.

Most individuals, including those whose data appear compatible with transitivity, showed evidence of interaction: Out of the 85 participants in Experiments 2 and 3, there were 79 (93%) who had more response patterns of RS′, against only 2 who had more of the opposite reversal and only 4 who had equal numbers or no reversals. Summed over participants and blocks, there were 1807 blocks with the RS′ pattern compared to only 98 with the SR′ pattern. Interaction rules out all LS models.

Another test from the LS design rules out a sub-class of LS models including the priority heuristic. Any person who uses a LS starting with the four variables of the priority heuristic (lowest consequence, probability of the lowest consequence, highest consequence, probability of the highest consequence), considered in any order, should prefer G4 = ($99, 0.3; $15, 0.65; $14, 0.05) over F4 = ($88, 0.12; $86, 0.7; $3, 0.18), if ΔL ≤ $11, ΔH ≤ $11, and ΔP ≤ 0.13. Instead, 84% of participants chose F4 over G4 more than half the time. If we retain a LS starting with any of these four attributes, we must conclude that ΔL > $11, ΔH > $11, and ΔP > 0.13, contrary to the published parameters needed to account for previous data.

In order for the LPH, LHP, and HLP LS models to mimic the transitive predictions of the prior TAX model (which are the most commonly observed patterns in the data), these LS models all require ΔL ≤ $4. These are the only LS models that mimic the TAX model this way, but we must reject them for those people who systematically prefer F4 over G4, since that requires ΔL > $11, which contradicts the assumption (ΔL ≤ $4) needed to mimic that transitive model’s predictions.

3.9 Tests of Cumulative Prospect Theory and the Priority Heuristic

Also included in the LS design were direct tests of CPT that also test the priority heuristic. For any monotonic utility function and any probability weighting function, CPT implies that R = ($90, 0.05; $88, 0.05; $2, 0.9) ≻ S = ($45, 0.2; $4, 0.2; $2, 0.6) ⇒ R+ = ($90, 0.1; $3, 0.7; $2, 0.2) ≻ S– = ($45, 0.1; $44, 0.1; $2, 0.8). Note that R+ stochastically dominates R, and that S stochastically dominates S–. CPT therefore allows the response pattern SR+ but not the opposite, RS–, which is implied by TAX with its prior parameters (proofs in Birnbaum, 2008c).

The LPH LS and the priority heuristic imply that a person should choose S over R and R+ over S–, as long as ΔP ≤ 0.30, so the priority heuristic implies the SR+ pattern that is also compatible with CPT. The PHL and PLH LS models with ΔP ≤ 0.30 also imply the same pattern. The HLP, HPL, and LHP LS models imply the pattern RR+ when ΔH ≤ $45. These parameter ranges are extremely large and include plausible values by a wide margin.

There were two tests of this type per block in Experiments 2 and 3. In the two tests, 38 of 43 and 35 of 42 participants in Experiments 2 and 3, respectively, showed more response patterns of RS– than of the opposite, against only 4 and 4 who showed more of the SR+ pattern compatible with CPT and the priority heuristic. These findings rule out CPT and the priority heuristic, as well as the other LS models (with wide parameter ranges), for those participants who systematically show the RS– pattern. Additional results in the LS design, including individual results, are presented in Appendix G.

4 Discussion

Our first experiment was initially designed to test whether those participants who showed evidence of intransitive behavior consistent with use of a LS model in one design would show evidence of linked intransitivity between designs. However, we were surprised by two results from that first study: First, few participants showed plausible evidence of intransitivity in even one design, and no one in that study had consistent evidence of linked intransitivity in two designs.

Second, several individuals completely reversed their preferences between blocks of trials, which refutes the assumption of iid that is required for meaningful analysis of marginal choice proportions, averaged over response patterns. These findings led to a second experiment with new participants and new “filler” tasks between blocks, which also included tests of critical properties of LS models.

The second experiment confirmed that some people completely and perfectly reversed preferences between blocks. Because this behavior has less than one chance in a million under the assumptions of iid, we must reject that assumption. Evidence of intransitive behavior was again quite minimal.

The third experiment was an attempt to alter our procedures more drastically in an attempt to “confuse” participants by intermixing many different types of trials within blocks and by including multiple “fillers” between related items in order to put a greater burden on memory, which was conjectured as the reason that people behaved transitively. It was suggested that these changes in procedure might also produce better satisfaction of iid. Although these procedures increased “error” and reduced the incidence of perfect reversals, they did not prevent them, nor did these changes in procedure increase the incidence of violations of transitivity. Targeted statistical tests indicated that iid was violated strongly in all three studies by all but a very small number of participants.

When iid can be assumed, it means that an investigator can simplify data analysis by examining only choice proportions. But when iid is dubious, it means that we need to also examine response patterns because choice proportions could easily misrepresent individual data. When testing transitivity, it means that choice proportions can appear transitive when the person’s data are perfectly intransitive and it means that choice proportions can appear intransitive when every single response pattern by the person was transitive.

In a search for intransitive patterns of the type consistent with LS models, only a few cases gave credible evidence of intransitivity. However, these cases also showed evidence of violation of critical properties of LS models, including systematic violation of interactive independence. Other tests led to contradictions in the value of difference thresholds required by LS models to handle the data.

Only one person showed intransitive behavior in two designs that could be linked by a LS model. Case #214 showed data consistent with the LPH LS model with $16 ≥ ΔL > $4 and ΔP ≤ 0.04. However, this same person chose F4 over G4 100% of the time, which means that ΔP > 0.13, contradicting the LS model that links these two designs. And this person also systematically violated the critical property of interactive independence 43 times in 44 tests, which means that no LS model can account for this person’s data.

Had we tested only transitivity in separate designs, we would have concluded that cases of intransitive preference are rare. Such findings might modify our assessment of the incidence of this behavior. Some studies claimed evidence of systematic violations (e.g., Tversky, 1969; Myung et al., 2005) and others claimed that “significant” violations of transitivity might be due to chance (Regenwetter et al., 2010). Based on these new data, the estimated incidence of violation of transitivity in these designs is below 5%, which is compatible with recent studies with PH designs.

Because LS models can handle transitive response patterns as well as intransitive ones, and because studies done to date have examined only a tiny region of the space of all possible sets of choice problems, evidence concerning the incidence of violations of transitivity says very little about the empirical standing of the class of LS models, and it says little about the general validity of transitivity in the infinite space of all choice problems. The failure to find predicted intransitivity might mean only that the researchers have not yet done the right study.

4.1 Refutation of LS models

By using linked designs and by including critical tests of the LS models, however, we can reach much stronger conclusions regarding the LS family: these models can be rejected as descriptive for most people, including those who appeared to show indications of intransitivity as well as those whose data appear transitive. These findings agree with other tests of critical properties of LS models (Birnbaum, 2010; Birnbaum & Gutierrez, 2007; Birnbaum & LaCroix, 2008).

If those few cases of systematic intransitivity we observed are “real” (and not due to statistical coincidence), then some origin other than the family of LS models must be sought to account for them. One suggestion is that people use an interactive, integrative model but tend to “round off” via editing (i.e., they assimilate subjective values of attributes that are similar) in a choice problem before applying that model (Kahneman & Tversky, 1979). Such a model could produce intransitive choices and also violate the critical property of interactive independence (Birnbaum & Gutierrez, 2007).

This “rounding” or “editing” model should also produce linked violations as long as the rules for rounding stay the same in all designs; if so, it might describe the data of Case #214, the only case showing evidence of linked intransitivity in two designs, but not Case #125, whose intransitive data in one design contradicted the modal pattern in another design under any such interpretation. Although the “editing” operations were originally proposed as general descriptions, only one case (out of 136) appears to call for this “rounding” rule. Systematic violations of the editing rules of cancellation and combination have been observed in other studies (Birnbaum, 2008b), so the empirical status of the editing rules remains doubtful.

4.2 Refutation of the Priority Heuristic

These experiments were designed to produce robust, linked violations of transitivity if a person used the priority heuristic. The priority heuristic is a variant of the LPH LS, with specified parameters that are part of the theory. As Birnbaum (2008a) noted, the optimal parameters for this model to describe certain published data are indeed close to the values postulated by Brandstätter et al. (2006). However, neither averaged choice proportions nor the individual data of any person agreed with the predictions of the priority heuristic. Furthermore, violations of interactive independence refute the priority heuristic along with the other LS models.

These failures of the priority heuristic to predict new data are consistent with findings of other recent studies that tested other implications of this heuristic (Birnbaum, 2008a, 2010; Birnbaum & Bahra, 2012; Birnbaum & LaCroix, 2008; Fiedler, 2010; Gloeckner & Betsch, 2008; Gloeckner & Herbold, 2011; Hilbig, 2008; Rieskamp, 2008).

In response to previous failures of the priority heuristic, Brandstätter, Gigerenzer, and Hertwig (2008) proposed that the priority heuristic is preceded by an “adaptive toolbox” of other procedures for comparing risky gambles. These other processes were proposed to handle cases where the priority heuristic was refuted by experiments designed to test its implications. For example, they suggested that the model does not apply when there is a large discrepancy in expected value (a ratio exceeding 2), when a choice has a “no conflict” resolution, when there are branches that might be cancelled, when two or more branches lead to the same consequence, etc. However, this study contains none of the “triggering conditions” yet postulated to invoke the other heuristics: the gambles compared in each set (Table 1) are very close in expected value, choices between them do not have “no conflict” resolutions, and all gambles have exactly two branches. None of the excuses yet published provides a reason for the model to fail in this study.

4.3 Refutation of CPT

The LS design of Experiments 2 and 3 also included individual tests of CPT and the priority heuristic. These tests extend previous findings (Birnbaum, 2008c, 2010) and show that, analyzed at the level of individuals, the majority show systematic violations of CPT as well as of the priority heuristic. Because the property tested does not assume any specific functional form for the value function on money, nor any particular probability weighting function, the refutation of CPT holds for all parametric versions of that model. This evidence against CPT is consistent with other studies that found systematic violations of that model (Birnbaum, 1999, 2004, 2008b, 2008c; Birnbaum & Bahra, 2012).

4.4 Refutation of iid assumptions

The present data show extremely strong evidence against iid. These violations were found even in Experiment 3, where multiple fillers separated related trials within blocks and more than 50 intervening trials separated blocks. Therefore, the assumption of iid should be considered a dubious basis for determining whether or not choice data satisfy structural properties such as transitivity.

The refutation of iid creates difficulties for methods that analyze individual choice proportions rather than response patterns. Regenwetter et al. (2010, 2011) proposed that marginal choice proportions (averaged over response patterns) could be used in a general method for analyzing algebraic models with probabilistic data. However, the overwhelming and extreme violations of iid indicate that this method might, in principle, lead to wrong theoretical conclusions as well as erroneous statistical results.

There are two general forms of the TE model that require weaker assumptions than iid; these models typically violate iid, except in special cases. In the individual true-and-error model (iTET), a person is assumed to have the same “true” preferences within a block of trials, but is allowed to change “true” preferences from block to block. This model implies that iid can be violated within a person, if that person changes systematically during the study.

In a sense, the debate between the methods of Regenwetter et al. (2011) and of Birnbaum (2011) is a debate about how often a person might change “true” preferences. The method of Regenwetter et al. (2011) assumes that responses are resampled independently (“true” preferences can change) between every pair of trials, as long as there are several intervening filler trials, whereas the TE approach of Birnbaum (2011) assumes that a person’s true preferences last longer: they are theorized to be constant within a block of trials and allowed to change between blocks.

In the TE models, iid holds only in special cases, such as when a person maintains a single “true” preference pattern throughout a study. Because the TE model allows iid to be satisfied or violated, we think the Regenwetter et al. iid assumptions are stronger than those required or implied by the TE model: the iid approach assumes that trials within the same block and between blocks are independent, whereas the TE model assumes only that errors within and between blocks are independent, allowing choices within blocks to be dependent.

When the TE model holds, it can provide more information than is available when iid holds: one can estimate the distribution of “true” preferences in a mixture, whereas in the method of Regenwetter et al. (2011) one can test mixture models but cannot discover the distribution of preferences in a person’s mixture. A practical difficulty of the TE model is that more data are required to fit it.

The correlations between behavior in different blocks represent a problem both for iid models and for a sub-class of TE models in which “true” choice patterns are re-sampled independently between blocks. Instead, the present data appear more consistent with the idea that a person follows a model in which parameters change gradually but systematically throughout a study. For example, parameters of the TAX model might change gradually, via a random walk, from trial to trial and block to block.

It would be useful to develop a statistical method that detects when a person changes “true” preferences; such a test might provide a more accurate model and also allow one to compare the assumptions of iid and TE as special cases of a more general model. According to the iid models, such a test should indicate that people change “true” preferences between every trial (or that they never change); according to the TE models, it should indicate that people change true preferences only between blocks.

The finding that iid assumptions are violated agrees with Birnbaum’s (2012) reanalysis of data from Regenwetter et al. (2011), which concluded that there were significant deviations from iid in those data as well. By using two repetitions of each choice problem within each block, and by separating blocks by more than 50 trials, the present studies provide stronger tests of iid than were possible in that study, where each choice problem was presented only once per block and only three trials separated blocks.

4.5 Does transitivity hold everywhere?

Because so few “significant” violations of transitivity have been found in recent studies, including this one, some might argue that they should be dismissed as due to chance. If one uses a 5% level of significance, one expects 5 cases per hundred to be “significant” by chance, so finding a small number of “significant” violations of transitivity does not refute the null hypothesis that transitivity holds for everyone.

But even if one can retain the null hypothesis, that does not mean the null hypothesis has been proved true, and there may indeed be some people who truly violate transitivity. Those particular individuals who significantly violated transitivity might continue to do so if tested again. A new study might discover stimuli for which violations are more apparent, or a procedure in which violations of transitivity are easier to detect. Therefore, despite the weakness of evidence against transitivity in the literature, we think the case for transitivity is still open.

4.6 Conclusions

In contrast, the cases against the priority heuristic, the family of LS models, CPT, and the assumptions of iid are quite strong: this study found significant and systematic violations of all four of these theoretical ideas. The data systematically violated the predictions of the priority heuristic, and no one satisfied them. Only one person showed intransitivity in two designs that could be linked via a LS model, and that person (along with the majority of others) also showed systematic violations of interactive independence, a critical property of LS models. Tests of CPT with any monotonic value and probability weighting functions led to systematic violations by most individuals. Finally, tests of iid showed extremely strong violations, indicating that people are likely changing their true preferences during a long study.

Appendix A: Analysis of the LPH Lexicographic Semiorder

In the LPH LS model, the person is assumed to compare first the lowest consequences (L), then the probabilities (P), then the highest consequences (H). Table A.1 shows the predicted patterns of behavior for the LPH LS in the LH design for choices AB, BC, CD, DE, and AE; in the LP design for FG, GH, HI, IJ, and FJ; and in the PH design for KL, LM, MN, NO, and KO, respectively. The numbers 1 and 2 refer to preference for the alphabetically higher or lower alternative, respectively; and “?” designates that the model is undecided. Each row shows the results under different ranges of threshold parameters. According to the priority heuristic, $4 < ΔL ≤ $16 and 0.04 < ΔP ≤ 0.16, so the predicted patterns are 22221, ????1, and 11112 in the three designs, respectively. Other transitive patterns (e.g., 22111) are possible if people compare subjective values of the stimulus attributes, as in Footnote 1.
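To make the decision rule concrete, here is a minimal sketch of a generic LPH LS chooser (ours, not from the original study); the thresholds ΔL = $10 and ΔP = 0.10 are illustrative values within the priority-heuristic ranges just quoted, and the names are our own.

```python
from typing import NamedTuple, Optional

class Gamble(NamedTuple):
    high: float  # higher prize H
    p: float     # probability of winning H
    low: float   # lower prize L

def lph_ls(a: Gamble, b: Gamble, dL: float = 10.0, dP: float = 0.10) -> Optional[Gamble]:
    """LPH lexicographic semiorder: compare lowest consequences (L),
    then probabilities (P), then highest consequences (H).  Returns the
    preferred gamble, or None when every comparison is below threshold
    (the model is undecided, shown as '?' in Table A.1)."""
    # Step L: lowest consequences are decisive only if |difference| > dL.
    if abs(a.low - b.low) > dL:
        return a if a.low > b.low else b
    # Step P: probability of winning the higher prize.  With two-branch
    # gambles, higher p also means a lower probability of the worst outcome.
    if abs(a.p - b.p) > dP:
        return a if a.p > b.p else b
    # Step H: highest consequences decide, with no threshold.
    if a.high != b.high:
        return a if a.high > b.high else b
    return None

# Example from Appendix C: A = ($84, 0.5; $24) vs. C = ($92, 0.5; $16).
A = Gamble(84, 0.5, 24)
C = Gamble(92, 0.5, 16)
print(lph_ls(A, C))  # |24 - 16| = 8 <= 10 and equal p, so H decides: C
```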

Table A.1. Predicted preference patterns in LPH LS model.

Appendix B: Testing assumptions of independence and identical distribution (iid)

Birnbaum (2011, 2012) noted that the TE model implies iid only in the special case where a person has only a single “true” preference pattern; if a person changes from one “true” pattern to another from block to block, iid can be violated.

Birnbaum (2012) devised two tests that use Monte Carlo simulations suggested by Smith and Batchelder (2008). Both tests begin by computing the average number of preference reversals between each pair of repetition blocks and the variance of the number of preference reversals between blocks. Suppose a person completed 20 blocks: one counts the number of preference reversals between each of the 190 = (20)(19)/2 pairs of blocks, summed over the 20 choice problems in each design. If iid holds, the variance of these counts should not be large, and the number of preference reversals between two blocks should not be systematically smaller for blocks that are closer together in time than for blocks that are farther apart.
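A minimal sketch of this counting step (our reconstruction; function and variable names are ours, not the authors’ code):

```python
import numpy as np
from itertools import combinations

def reversal_counts(responses: np.ndarray):
    """responses: array of shape (n_blocks, n_choices) with entries 1 or 2,
    one row per repetition block.  Returns the number of preference
    reversals for every pair of blocks and the separation of each pair."""
    pairs = list(combinations(range(responses.shape[0]), 2))
    counts = np.array([np.sum(responses[i] != responses[j]) for i, j in pairs])
    gaps = np.array([j - i for i, j in pairs])
    return counts, gaps

# With 20 blocks there are 190 pairs; counts.mean() and counts.var()
# are the "m" and "var" statistics reported in Table B.1.
```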

Table B.1 shows the results of tests of iid in Experiments 1 and 2. The mean number of preference reversals between blocks is shown in columns labeled “m” for each person in each design. For example, the 3.77 for Case # 101 (first row) means that the average number of preference reversals (out of 20 choice problems) between two repetition blocks was 3.77; in other words, the mean number of agreements between two blocks was 20 – 3.77 = 16.23 out of 20 (81%) for this person in the LH design.

Table B.1. Analysis of iid assumptions in Experiments 1 and 2 in LH, LP, and PH designs (m = mean number of preference reversals between blocks, var = variance, p_v = simulated p-level of variance test, r = correlation, p_r = simulated p-level of correlation test).

The median numbers of preference reversals between blocks were 2.77, 2.68, and 1.91 in the LH, LP, and PH designs, respectively, corresponding to 86%, 87%, and 90% agreement. In Experiment 1, the medians were 2.96, 3.25, and 2.21, all higher than corresponding values in Experiment 2, which were 2.48, 1.99, and 1.78, respectively. Perhaps agreement between blocks was higher in Experiment 2 because there were fewer filler trials between blocks in Experiment 2 than in Experiment 1.

Next, we computed the mean number of preference reversals between successive blocks, between blocks separated by two blocks, by three, and so on. For each person, these means were then correlated with the separation between blocks. This correlation would be positive if a person’s behavior changed gradually and systematically from block to block; if iid holds, it should be zero, apart from random fluctuation. These correlation coefficients were computed for each individual in each design of Experiments 1 and 2, and the results are shown in the columns labeled “r” in Table B.1.

Table B.1 shows that most of the correlations in Experiments 1 and 2 are positive. The median correlations in the LH, LP, and PH designs were 0.71, 0.70, and 0.51, respectively. For individuals, 83%, 76%, and 66% were positive in the LH, LP, and PH designs, respectively, all significantly more than half of the sample (z = 6.39, 4.95, and 2.34, respectively). Correlation coefficients of these magnitudes represent serious violations of iid.

For each person, the correlation was tested for significance using the Monte Carlo procedure of Smith and Batchelder (2008): responses to each choice problem are randomly permuted between blocks, and the correlation coefficient is recalculated for each random permutation. If iid holds, it should not matter how responses to a given choice problem are permuted among blocks. The estimated p_r value is the proportion of simulations in which the absolute value of the simulated correlation is greater than or equal to the absolute value of the original correlation in the data; the use of absolute values makes this a two-tailed test.
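Under our reading of this procedure, the sketch below computes both the correlation statistic and its permutation p-level, p_r, as well as the variance-test p-level, p_v, described later in this appendix (structure and names are ours, not the authors’ code):

```python
import numpy as np
from itertools import combinations

def iid_perm_test(responses: np.ndarray, n_sims: int = 10000, seed: int = 0):
    """Permutation tests of iid (after Smith & Batchelder, 2008).
    Returns (p_r, p_v) for one person in one design."""
    rng = np.random.default_rng(seed)

    def stats(resp):
        pairs = list(combinations(range(resp.shape[0]), 2))
        counts = np.array([np.sum(resp[i] != resp[j]) for i, j in pairs], dtype=float)
        gaps = np.array([j - i for i, j in pairs])
        lags = np.unique(gaps)
        mean_by_lag = np.array([counts[gaps == g].mean() for g in lags])
        r = np.corrcoef(lags, mean_by_lag)[0, 1]  # correlation with block separation
        return r, counts.var()

    r_obs, v_obs = stats(responses)
    hits_r = hits_v = 0
    for _ in range(n_sims):
        # Permute responses to each choice problem (column) across blocks;
        # under iid, this should not change the statistics systematically.
        perm = np.column_stack([rng.permutation(col) for col in responses.T])
        r_sim, v_sim = stats(perm)
        hits_r += abs(r_sim) >= abs(r_obs)  # two-tailed for the correlation
        hits_v += v_sim >= v_obs            # one-tailed for the variance
    return hits_r / n_sims, hits_v / n_sims
```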

Based on 10,000 simulations per person per task, these p-levels for the correlations are shown in the columns labeled “p_r” in Table B.1. If iid holds, about 5% of them should be “significant” at the .05 level, that is, about 5 people out of the 94 in Experiments 1 and 2. Instead, 32, 23, and 19 had p_r < 0.05 in the LH, LP, and PH designs, respectively.

Analysis of iid in Experiment 3 is presented in Table B.2. Recall that in Experiment 3, all three transitivity designs (60 trials) were intermixed with each other, with the 16 trials of the LS design, and with 31 additional choice problems, making blocks of 107 trials. Each block was separated by at least 57 unrelated trials; in this procedure, two repetitions of the same exact choice problem were separated on average by 164 intervening trials. The median number of preference reversals in this study between blocks was 24.7 out of 107, corresponding to a median agreement rate of 77%. This figure is lower than agreement rates in Experiments 1 and 2 where the LH, LP, and PH designs were in separate blocks, rather than intermixed.

Table B.2. Analysis of iid assumptions in Experiment 3, as in Table B.1. Each block contains 107 choice problems, including LH, LP, and PH designs. Blocks were separated by a filler task with 57 choices.

The median correlation between the mean number of preference reversals (over 107 choice problems) and distance in blocks in Experiment 3 was 0.88; only 5 of 42 participants had negative correlations, significantly fewer than half (z = –4.94). We would expect only about 2 of 42 to be significant (p < .05), but as shown in Table B.2, 27 of 42 individuals had p_r < .05, which is highly unlikely under the null hypothesis of iid (z = 17.63).

Birnbaum’s (2012) second test of iid compares the variance of the number of preference reversals between blocks against variances simulated via computer-generated permutations of the data. If people have different “true” preferences in different trial blocks, they could show a greater variance of preference reversals than would be found when data are randomly permuted between replication blocks. Even if a person randomly and independently sampled a new “true” pattern before each block of trials, the variance method could potentially detect such violations of iid, which the correlation method would not detect.

By means of the same type of permutations, the p_v-level was estimated as the proportion of 10,000 permutations of the data in which the variance of preference reversals was greater than or equal to the variance in the original data. Tables B.1 and B.2 show the variances in the columns labeled “var”, along with the estimated p_v values. In Experiments 1 and 2, p_v was “significant” (i.e., p < 0.05) for 67, 68, and 58 of the 94 participants in the LH, LP, and PH designs, respectively. In Experiment 3, all but two of the 42 participants (#313 and #322) had p_v < .05.

Even at the .01 level of significance, only 11 cases failed to show a significant violation of iid in at least one of the tests.

The failure of iid means not only that statistical tests based on this assumption are inappropriate, but also that marginal choice proportions may not be representative of the actual patterns of behavior exhibited by participants, so one might reach wrong conclusions. It also means that we cannot assume that participants display a single, static behavior; rather, they are likely learning, changing, or shifting their behavior throughout the course of a long experiment.

Appendix C: Analysis of overall choice proportions and the priority heuristic

Median choice proportions (averaged over all three experiments) are shown in Table C.1 for the LH, LP, and PH designs in the upper, middle, and lower portions of the table, respectively. The numbers above the diagonal in each part of the table show the median proportion of responses preferring the column stimulus to the row stimulus. For example, the entry of .33 in Row A, Column C shows that, on average, C was chosen over A 33% of the time (so in 67% of choices, A was chosen over C). Because all choice proportions above the diagonal are less than 50%, the proportions in this table satisfy WST with the order A≻B≻C≻D≻E, which agrees with the prediction of the TAX model (and CPT) with their prior parameters. These proportions are also perfectly consistent with the TI.

Table C.1. Binary choice proportions (above diagonal) for each design, medians over all three experiments. Predictions of the priority heuristic are shown below diagonal; “?” indicates that the model is undecided.

Both WST and TI are also perfectly satisfied by the median choice proportions in the other two designs, shown in the middle and lower sections of Table C.1. The majority choice proportions in these designs also agree with the predictions of the TAX model with prior parameters: F≻G≻H≻I≻J and O≻N≻M≻L≻K.

The predicted majority choices of the priority heuristic are shown below the diagonal in Table C.1 for each design. For example, the priority heuristic predicts that the majority should choose C in the choice between A = ($84, 0.5; $24) and C = ($92, 0.5; $16), because the difference between the lowest outcomes is less than $10, so the choice should be determined by the highest consequences, which favor C. However, the median proportion for this choice was 0.33, which shows that more than half the participants chose A over C more than half the time. The priority heuristic correctly predicted only three of the ten proportions in each part of the table.

This type of analysis can be (justly) criticized because it is based on averaged choice proportions, which may or may not represent patterns of behavior by individuals. Nevertheless, it is worthwhile to show that the priority heuristic fails to describe averaged choice proportions. The priority heuristic was previously claimed to be an accurate model for predicting modal choice proportions (Brandstätter et al., 2006, 2008). If this analysis were not presented, the idea might persist that the heuristic might provide a good description of averaged data, even if it fits no single person.

Appendix D: Analysis of all response patterns

Table D.1 shows an analysis of five choices from the LH design: AB, BC, CD, DE, and AE. Responses are coded such that 1 indicates choice of the gamble represented by the alphabetically higher letter in the choice and 2 indicates choice of the other gamble; therefore, 11111 represents the transitive pattern A≻B≻C≻D≻E, and 22222 matches the transitive pattern E≻D≻C≻B≻A. The priority heuristic implies the intransitive pattern 22221; i.e., E≻D, D≻C, C≻B, B≻A, but A≻E. This pattern is consistent with either LPH LS or PLH LS with $16 ≥ ΔL > $4, or with LHP LS with $16 ≥ ΔL > $4 and ΔH ≤ $4.

Table D.1 shows the number of individual trial blocks on which each response pattern on these five choices was observed. The last row in Table D.1 shows the totals. In Experiment 1, for example, participants completed a total of 801 trial blocks in the LH design, with two versions of each choice problem per block (there are 1602 responses per item).

Each choice problem was presented twice in each block (with positions counterbalanced); therefore, we can tabulate the frequencies of each possible response pattern when the gambles were presented in one arrangement (e.g., AB), in the other arrangement (e.g., BA), or in both. For example, the 315 in the first row of the table (11111) under “ROW” shows that, of the 801 blocks in Experiment 1, 315 times a person chose A≻B, B≻C, C≻D, D≻E, and A≻E when the gambles were presented with the alphabetically higher-labeled gamble first (e.g., AB). The 311 under “COL” shows that 311 times people expressed these same preferences (by clicking opposite buttons) when positions were reversed (BA).

The column labeled “BOTH” shows the number of blocks in which individuals showed exactly the same preference pattern on both versions of the same choices within a block. That is, the person exactly matched 10 responses to show the same decisions on five choice problems presented twice. For example, the 235 under “BOTH” in the first row for Experiment 1 indicates that 235 times (out of 801 blocks), a person had all ten choices matching choice pattern 11111.

The most common response patterns in all three experiments are the transitive patterns, 11111 and 22222, which correspond to the orders, ABCDE and EDCBA, respectively. These were also the most frequently repeated patterns (BOTH positions), accounting for 88%, 88%, and 81% of the repeated (i.e., consistent) patterns.

We can define within-block pattern self-consistency as the percentage of times that a person had the same response pattern on both presentations of each choice problem in the same block. Note that pattern self-consistency requires that responses to ten items agree across two presentations of five choice problems. Self-consistency was higher in Experiments 1 and 2 (405/801 = 51% and 438/645 = 68%, respectively), where each trial block had 25 or 26 trials, than in Experiment 3, where each trial block had 107 trials (197/591 = 33%).

This finding of lower self-consistency in Experiment 3 would be consistent with the idea that people had more “error” (more “confusion”) in Experiment 3, when these different types of trials were intermixed than in the first two studies. It would also be consistent with the idea that people are less likely to maintain the same “true” preferences for 107 trials than for 25 trials.

Table D.1. Frequency of response patterns in tests of transitivity in LH Design. The pattern of intransitivity predicted by the priority heuristic is 22221.

The intransitive response pattern predicted by the priority heuristic for the LH design, 22221, was observed only 51, 29, and 52 times in Experiments 1, 2, and 3 (3%, 2%, and 4%), and it was repeated by a person (BOTH) only 8, 6, and 8 times within a block (2%, 1%, and 4% of repeated behavior) in the three studies, respectively. These figures represent very small percentages of the overall data, and one should keep in mind that some of this intransitive behavior (though less likely in the BOTH data) might be the result of “error”; for example, cases where the “true” pattern was 22222 and an “error” occurred on the last listed choice.

Table D.2. Frequency of response patterns in tests of transitivity in LP Design.

Although the vast majority of individual response patterns are transitive, there might be a few individuals whose behavior, at least during part of the study, was truly intransitive. These cases of “temporary intransitivity” are more likely to be “real” when the same person repeated the same intransitive pattern in both versions within a block. Four of the 8 cases in Experiment 1 (i.e., BOTH 22221) were produced by #120, who showed this intransitive pattern only in the first two blocks of each day; the last three blocks each day (out of 12 total) were perfectly consistent with the transitive order EDCBA. Participant #140 contributed only 1 repeated instance of this pattern, but had 6 other blocks in which this pattern appeared once (out of 14 blocks completed). Three others produced one repeated pattern each.

Table D.3. Frequency of response patterns in tests of transitivity in PH Design. The predicted pattern of intransitivity from the priority heuristic is 11112.

In Experiment 2, #214 repeated the 22221 pattern in the LH design four times and had 5 other blocks with one instance of this pattern out of 11 blocks; #218 repeated this pattern twice out of 11 blocks completed, but the last 7 blocks were almost perfectly consistent with the transitive order, EDCBA.

In Experiment 3, one person (#311) accounted for 4 of the 8 repeated patterns of 22221 in the LH design; this person also showed two other blocks with a single instance of this pattern and violated both WST and TI. Four others contributed one repeated pattern each.

Table D.2 shows an analysis of response patterns in the LP design. Patterns 11111 and 22222 in this design represent the transitive choice patterns F≻G≻H≻I≻J and J≻I≻H≻G≻F, respectively. The 22221 pattern represents the intransitive preferences J≻I, I≻H, H≻G, G≻F, but F≻J, which are implied by the LPH LS model when $16 ≥ ΔL > $4 and ΔP ≥ 0.04; the LHP LS and HLP LS models can also imply this intransitive pattern if $16 ≥ ΔL > $4, with any ΔP.

Only 13, 12, and 0 blocks with a repeated pattern of 22221 were observed in LP Design in Experiments 1, 2, and 3 (3%, 3%, and 0%), respectively. Of the 13 in Experiment 1, 7 were contributed by #125, who also had 6 other blocks with one instance out of 15 blocks; #122 contributed 2 repeats with 5 other instances in 10 blocks; #102 had two blocks repeating the opposite intransitive pattern, 11112, and three other blocks with one instance of that pattern. In Experiment 2, 7 of the 12 repeated patterns of 22221 were from #214, who also had 3 other blocks showing one instance of this pattern; five others contributed one repeated pattern each.

Table D.3 analyzes the PH design, where the priority heuristic predicts the intransitive pattern 11112; i.e., K≻L, L≻M, M≻N, and N≻O, but O≻K. This pattern would also be consistent with LPH LS or PLH LS with 0.16 ≥ ΔP > 0.04, or with PHL LS with 0.16 ≥ ΔP > 0.04 and ΔH ≤ $4. This pattern was repeated once, three times, and four times in Experiments 1, 2, and 3, respectively. The one repeated pattern in Experiment 1 was by #137, who had 3 other instances of this pattern. Two of the three in Experiment 2 came from #239. All four in Experiment 3 came from #309.

In summary, the analysis of Tables D.1, D.2, and D.3 has added very little, if any, evidence that there are individuals (besides those already identified) who displayed intransitive patterns systematically for large sub-portions of the study. Even if we assume that all observed intransitive response patterns are “real,” Tables D.1, D.2, and D.3 indicate that only 5% or fewer of all response patterns for these five choices in three studies could be described as intransitive.

Appendix E: Individual Choice Proportions, WST and TI

Tables E.1, E.2, and E.3 show marginal choice proportions for each person in the LH, LP, and PH designs, respectively. Participants in Experiments 1, 2, and 3 were assigned three digit identifiers starting with 101, 201, and 301, respectively.

Table E.1. Binary choice proportions for each individual in the LH design. WST = weak stochastic transitivity; TI = triangle inequality; “yes” means that the property is perfectly satisfied by the proportions. The order compatible with WST is listed. Blks is the number of blocks, each of which has two presentations of each choice.

Table E.2. Binary choice proportions in the LP design, as in Table E.1.

Table E.3. Individual binary choice proportions in the PH Design, as in Table E.1.

The choice proportions for Case #101 are shown in the first row. The entry in the last column of Table E.1 indicates that #101 completed 20 blocks. Because each block included two presentations of each choice, proportions are based on 40 responses to each choice problem by Participant #101. All ten choice proportions in the first row are greater than ½; therefore, this person’s data are perfectly consistent with WST (indicated by the “yes”) and the transitive order E≻D≻C≻B≻A. The choice proportions of #101 are also perfectly consistent with the TI, indicated by the “yes” under TI.

A “NO” displayed for WST or TI in Tables E.1, E.2, or E.3 indicates that the choice proportions for a given person are not perfectly compatible with the respective property; these are not tests of significance. For example, the proportions for Case #104 (fourth row of Table E.1) are all greater than ½, so this case is perfectly compatible with WST and the order E≻D≻C≻B≻A. However, these choice proportions are not perfectly compatible with the TI, indicated by the “NO” in column TI, because, for example, P(AB) + P(BE) – P(AE) = 1 + 1 – .97 = 1.03, which is not between 0 and 1. The data for #104 are based on 30 responses per choice (15 blocks), so this violation would not have appeared if the one response out of 30 on which this person chose E over A had been different (1/30 = .03).
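For readers who wish to check these bookkeeping properties themselves, here is a minimal sketch (ours, not part of the original analyses); P is assumed to be a square array with P[a, b] the proportion of choices of alternative a over alternative b, and P[b, a] = 1 − P[a, b]:

```python
import numpy as np
from itertools import permutations

def satisfies_wst(P: np.ndarray) -> bool:
    """Weak stochastic transitivity: whenever P[a,b] >= .5 and
    P[b,c] >= .5, require P[a,c] >= .5, for every ordered triple."""
    return all(not (P[a, b] >= .5 and P[b, c] >= .5 and P[a, c] < .5)
               for a, b, c in permutations(range(P.shape[0]), 3))

def satisfies_ti(P: np.ndarray) -> bool:
    """Triangle inequality: 0 <= P[a,b] + P[b,c] - P[a,c] <= 1
    for every ordered triple of distinct alternatives."""
    return all(0 <= P[a, b] + P[b, c] - P[a, c] <= 1
               for a, b, c in permutations(range(P.shape[0]), 3))

# Case #104's TI violation: P(AB) + P(BE) - P(AE) = 1 + 1 - .97 = 1.03 > 1.
```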

There were 107 people out of 136 (79%) whose choice proportions were perfectly consistent with WST in all three designs. There were only 13, 10, and 10 cases in which WST was not perfectly satisfied in the LH, LP, and PH designs, respectively (10%, 7%, and 7%). There was no one whose proportions violated (were not perfectly consistent with) WST in all three designs; only 4 were not perfectly compatible with WST in two designs (#137, 214, 239, and 328).

Violations of WST can easily occur when a person has a mixture of transitive patterns (Birnbaum & Gutierrez, 2007; Regenwetter et al., 2010, 2011). Although the TI has the advantage over WST of being consistent with a mixture of transitive orders, the TI can be violated by tiny deviations when a person is otherwise highly consistent with transitivity, and it can also be satisfied when a person has a mixture that includes systematic violations of transitivity (Birnbaum, 2011). Both TI and WST therefore might be misleading when a person has a mixture of preference patterns. When iid is violated, both of these properties can be misleading, and one should examine response patterns.

Appendix F: Analysis of transitivity in iTET model

This section presents an individual TE model for the five choice problems that test the intransitive prediction of the LS models. In the LP design, for example, these are the FG, GH, HI, IJ, and FJ choices. Under an LS model, it is possible to show the intransitive response patterns 22221 (or 11112), which represent the observed preferences G≻F, H≻G, I≻H, J≻I, but F≻J (or their reverses); all other response patterns are compatible with transitivity. Suppose that within each block of trials, a person has the same “true” preferences but may make random “errors” in discovering or reporting them.

The probability of observing the response pattern 22221 on both tests within a block when the “true” pattern is 22222 is given by the following:

P_22222(22221, 22221) = p_22222 (1 − e_1)² (1 − e_2)² (1 − e_3)² (1 − e_4)² e_5²

where P_22222(22221, 22221) is the probability of responding 22221 on both tests while having the true pattern 22222; p_22222 is the probability that the “true” pattern is the transitive order E≻D≻C≻B≻A; and e_1, e_2, e_3, e_4, and e_5 are the probabilities of “errors” on the five respective choice problems, which are assumed to be mutually independent and less than ½. The error terms are squared because this expression gives the probability of observing the same response pattern on both items within a block (two responses to each of five choice problems).

The overall predicted probability in the TE model of the observed pattern 22221 on both tests, P(22221, 22221), is the sum of 32 terms like the one above, one for each of the 32 possible “true” patterns, each with the error terms appropriate to produce the observed pattern from that true pattern. Transitivity is the special case of this TE model in which the two intransitive patterns have probability zero; that is, p_22221 = p_11112 = 0.
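As a concrete illustration, the sketch below (our reconstruction, not the authors’ code) computes this 32-term sum for any observed pattern, given hypothesized true-pattern probabilities and error rates; the example reproduces the P(22221, 22221) ≈ 0.15 of the transitive solution reported next.

```python
from itertools import product

def p_response_given_true(resp: str, true: str, e: list) -> float:
    """Probability of one observed 5-choice response pattern given a
    'true' pattern: independent errors flip individual responses."""
    prob = 1.0
    for k in range(5):
        prob *= e[k] if resp[k] != true[k] else (1.0 - e[k])
    return prob

def p_repeated(resp: str, p_true: dict, e: list) -> float:
    """TE probability of showing pattern `resp` on BOTH presentations
    within a block: sum over all 32 possible true patterns, with error
    terms squared because the two presentations are independent given
    the true pattern."""
    total = 0.0
    for true in map(''.join, product('12', repeat=5)):
        total += p_true.get(true, 0.0) * p_response_given_true(resp, true, e) ** 2
    return total

# Transitive solution for Case #125 (see below): P(22221, 22221) ~ 0.15.
p_true = {'22211': 0.038, '22222': 0.962}
e = [0.0, 0.0, 0.0, 0.21, 0.50]
print(p_repeated('22221', p_true, e))
```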

A maximum likelihood solution of the transitive TE model to the 15 blocks of response patterns for Case #125 (Table 5) yielded e_1 = e_2 = e_3 = 0, e_4 = 0.21, e_5 = 0.50, p_22211 = 0.038, and p_22222 = 0.962; all other parameters were zero. According to this solution, P(22221, 22221) = 0.15. However, the data showed 7 blocks with repeats of 22221 out of 15 blocks. From the binomial, the probability of observing 7 or more repeated response patterns of 22221 out of 15 is .003. Therefore, one can reject the assumption that the “true” probability of 22221 is zero. The binomial in this case assumes only that blocks and errors are independent; it does not assume or imply the stronger iid assumption that responses within a block are independent (see Appendix H for more on this distinction).
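This binomial computation can be checked in one line (ours; it assumes SciPy is available, and sf(6, ...) gives the upper-tail probability P(X ≥ 7)). The small difference from the reported .003 presumably reflects rounding of the 0.15 repeat probability.

```python
from scipy.stats import binom

# P(7 or more repeated 22221 patterns in 15 blocks | repeat prob. 0.15)
print(binom.sf(6, 15, 0.15))  # ~.004 with the rounded 0.15; the text reports .003
```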

When all parameters are free, the maximum likelihood solution yields e_1 = e_2 = e_3 = 0, e_4 = 0.21, e_5 = 0.10, p_22211 = 0.038, and p_22221 = 0.962; all other parameters were zero. In this solution, P(22221, 22221) = 0.49, which is compatible with the data showing 7 of 15 repeated intransitive patterns of this type. In sum, the TE model indicates that we should reject the assumption of transitivity for #125 in this design. As a further note, Case #125 also chose G over J in 26 of 30 choices, creating another intransitive cycle of a type consistent with an LS model.

Appendix G: Tests of interactive independence

Individual choice proportions for the LS design of Experiments 2 and 3 are shown in Table G.1. X1, X2, X3, X4, and X5 refer to the choices between R = ($95, p; $5, 1 – p) and S = ($55, p; $20, 1 – p), where p = 0.95, 0.9, 0.5, 0.1, or 0.05, respectively; these test interactive independence. Similarly, Y1, Y2, and Y3 in Table G.1 represent the choices between R = ($99, p; $1, 1 – p) and S = ($40, p; $35, 1 – p), where p = 0.9, 0.5, or 0.1, averaged over two presentations with either R or S presented first. These also test interactive independence.

According to any LS model, the value of p should have no effect, because it is the same in both R and S. Even if subjective probability is a function of objective probability (as in Footnote 1), the common probability term drops out. Under any order of examining the attributes and with any difference thresholds, a person should choose either R in all of these choices or S in all of them, whatever the common p. Therefore, any mixture of LS models should also show no effect of p.

According to the priority heuristic, a person should always choose S, because it has the higher lowest outcome (by $15 in X1 to X5 and by $34 in Y1 to Y3). According to interactive models such as TAX, CPT, and EU, however, as p decreases, the tendency to choose S should increase, because it provides the better lowest consequence, whose value is multiplied by a function of 1 – p. In agreement with the predictions of TAX, CPT, and EU, and contrary to all LS models (including the priority heuristic) and mixtures thereof, the median choice proportions for S increase from 0.15 to 0.91 from X1 to X5 and from 0.15 to 0.93 from Y1 to Y3. Out of 85 participants, 70 and 72 had X5 > X1 and Y3 > Y1, respectively.
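To see why interactive models imply this trend, consider the X choices under EU with, purely for illustration, linear utility:

EU(R) − EU(S) = p(95 − 55) − (1 − p)(20 − 5) = 55p − 15,

so R is preferred when p > 3/11 ≈ 0.27 and S when p is below that value. The common p interacts with the outcome differences instead of dropping out, as it must under any LS model; this simple calculation matches the observed crossover between X1 (p = 0.95) and X5 (p = 0.05).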

Table G.1. Individual choice proportions in the LS design (Experiments 2 and 3).

In Table G.1, Z1 and Z2 represent choices between R = ($90, 0.05; $88, 0.05; $2, 0.9) and S = ($45, 0.2; $4, 0.2; $2, 0.6), and between R+ = ($90, 0.1; $3, 0.7; $2, 0.2) and S– = ($45, 0.1; $44, 0.1; $2, 0.8), respectively. Note that R+ stochastically dominates R, and S stochastically dominates S–. According to CPT, R ≻ S ⇒ R+ ≻ S–. According to the priority heuristic, most people should choose S ≻ R (because of the smaller probability of receiving the lowest consequence) and R+ ≻ S– (again because of the probabilities of receiving the lowest outcome).

According to TAX with its prior parameters, however, a person would have the opposite preferences: R ≻ S and S– ≻ R+. Consistent with TAX and contrary to CPT, the priority heuristic, EU, and EV, the median choice proportions show 69% preference for R ≻ S and 73% preference for S– ≻ R+.

Z3 and Z4 present a similar test with positions counterbalanced: R2 = ($80, 0.1; $78, 0.1; $3, 0.8) versus S2 = ($40, 0.4; $5, 0.1; $4, 0.5), and R3+ = ($80, 0.2; $4, 0.7; $3, 0.1) versus S3– = ($40, 0.2; $39, 0.2; $3, 0.5). In this test, R3+ dominates R2 and S2 dominates S3–; CPT implies that R2 ≻ S2 ⇒ R3+ ≻ S3–. The priority heuristic implies that the majority should choose S2 ≻ R2 and R3+ ≻ S3–. However, the median choice percentages again contradict CPT, the priority heuristic, EU, and EV: 70% chose R2 ≻ S2 and 82% chose S3– ≻ R3+, showing the reversal predicted by TAX with prior parameters.

These results are also representative of the majority of individuals: Of the 85 participants in Experiments 2 and 3, 62 (73%) showed both Z1 < Z2 and Z3 < Z4, which is significantly more than half of the sample (z = 4.23).

The last column, W, in Table G.1 shows the proportion of responses favoring F4 = ($88, 0.12; $86, 0.7; $3, 0.18) over G4 = ($99, 0.3; $15, 0.65; $14, 0.05). According to the priority heuristic, people should choose G4 because its lowest outcome is better by $11. In addition, the probability of receiving the lowest consequence of G4 is also better (by 0.13), as is the highest consequence of G4 (by $11), as is the probability of winning the highest consequence (by 0.18). Thus, all four of the features compared by the priority heuristic favor G4. However, the median choice proportion was 0.82 for F4, and only 14 of 85 individuals (16%) chose G4 over F4 half or more of the time. The 71 people (84%) who chose F4 over G4 more than half the time represent significantly more than half of all participants (z = 6.18), contradicting the priority heuristic and any other LS model using these four variables with the same thresholds. The TAX model with its prior parameters correctly predicted this result.

The most promising cases in Experiments 2 and 3 for evidence of a LS model are #202, 214, 218, 239, 309, 311, and 338, who either showed response patterns consistent with intransitive LS models, violated WST and TI, or both. Those cases (marked in bold font in Table G.1 and described in the main text) show that even these people appear to violate implications of the LS models systematically. Consequently, most people violate the implications of the family of LS models, and no one who showed evidence of intransitivity also appeared to satisfy the LS models in these tests of interactive independence.

Appendix H: Fit of True and Error and IID models to response patterns

The frequencies of response patterns from Tables D.1, D.2, and D.3 were fit to two models. The iid model assumes that the probability of showing a response pattern is the product of the marginal probabilities of showing each response. The TE model assumes that each block may have a different “true” pattern and independent “errors”. Both of these models allow transitive and intransitive patterns, and both models are oversimplified, but their application is instructive.

Table H.1 shows the fit of these two models to the LH design, aggregated over three experiments. “Rows” indicates the number of times that a person showed each response pattern when the gambles were presented with the alphabetically higher gamble first, and “Cols” shows the frequency when the same choice problem was presented with the gambles reversed. “Both” indicates cases where a person made the same choice responses on both presentations within a block. The U–C values represent the average frequency of showing the response pattern in either one arrangement or the other but not both. The models were fit to minimize the Chi-Square between the obtained and predicted frequencies in these 2 × 32 = 64 cells.

The iid model assumes that the probability of showing a response pattern is simply the product of the individual choice probabilities. Consequently, the predicted frequency of response pattern 11111 is proportional to the product of the marginal choice proportions of choosing Response 1 in the five choice problems making up this pattern: P(AB)P(BC)P(CD)P(DE)P(AE), where P(AB) is the proportion choosing A in the choice problem between A and B.

The iid model fails badly because people are more consistent than this model allows them to be. When a response pattern of 11111, for example, is observed within a block in one arrangement, it is highly likely that the same response pattern is observed when the stimuli are presented in the other arrangement within the same block, even though this requires pressing opposite buttons on randomly ordered trials. Consequently, responses agree within blocks to a much greater degree than predicted by the iid model.

For example, the response pattern 11111 occurred in the LH design 687 and 694 times in the two arrangements, and this same response pattern was shown with both arrangements 537 times within blocks. According to iid, however, the predicted frequency of showing this pattern in both versions of a block is only 7.4. Summed over response patterns, the iid model implies that out of the 2037 blocks, there should have been agreement in only 74 cases where a response pattern was repeated twice within a block. Instead, the actual number of cases where response patterns were consistent within blocks was 1038.

To understand why independence implies so little self-agreement, keep in mind that each response pattern is built of five responses, so ten responses have to fall in place to produce a match. The marginal probabilities of responding “1” on the five choices are 0.54, 0.54, 0.51, 0.61, and 0.66, respectively, so the probability of showing all ten responses, assuming iid, is the squared product of these values, which is only .0036. This figure is much smaller than the observed proportion of .26 (537/2037), so the assumption of iid is not an accurate description of this behavior.

The TE model does a better job since it predicts agreement in 994 cases (where the actual was 1038). The TE model violates independence because it assumes that the true preferences are the same within blocks and only the errors are independent. Note that the data reveal even greater dependence than predicted by this TE model. The estimated error rates in the TE model in the LH design were 0.10, 0.06, 0.09, 0.06, and 0.04 for the five choice problems, respectively. The estimated “true” rate for the 11111 pattern was 0.452.

The assumption that errors are independent means that the probability of showing the “true” pattern on both repetitions is the product of the probabilities of not making an error on any of the ten choice problems: subtract each error term from one, square it, and take the product, which for the pattern 11111 is 0.472. If the “true” probability of the pattern 11111 is 0.452, the probability of repeating this true pattern is 0.213, much greater than that predicted by iid and closer to the observed proportion. (The TE model also allows a person to show this pattern by having other “true” patterns and making the errors required to match it, but these terms contribute very little to the prediction in this case.)
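Both back-of-the-envelope computations are easy to verify; the sketch below (ours) uses the rounded estimates quoted above, so its results differ slightly from the 0.472 and 0.213 computed from the unrounded solution.

```python
import numpy as np

marg = np.array([0.54, 0.54, 0.51, 0.61, 0.66])  # marginal P("1"), LH design
print(np.prod(marg) ** 2)   # ~.0036 under iid, vs. observed 537/2037 = .26

e = np.array([0.10, 0.06, 0.09, 0.06, 0.04])     # TE error estimates, LH design
repeat_true = np.prod((1 - e) ** 2)              # no error on any of ten responses
print(0.452 * repeat_true)  # ~.22 (text: .213, from unrounded estimates)
```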

Tables H.2 and H.3 show the predicted frequencies of these two models for the LP and PH designs, respectively. These cases again show that the iid model is not at all accurate in predicting self-agreement within blocks, while the TE model does better but is not completely accurate.

The Chi-Squares of fit for the iid model are (obviously) off the charts, ranging from over 350,000 to more than 1.5 million. The Chi-Squares for the TE model, with 27 df, are 245.2, 253.2, and 299.7, all significant. The Chi-Squares testing the special case of TE that assumes transitivity, with 2 df, are 116.2, 251.7, and 37.2. The estimated rates of intransitivity are as follows: in the LH design, p_22221 = .03; in the LP design, p_22221 = .03; and in the PH design, p_11112 = .01. Assuming this TE model (which is dubious, given its lack of fit), one would conclude that these tiny rates of intransitivity are significant.

This TE analysis likely understates the dependence in the data because it does not properly handle individual differences, which contribute to its lack of fit. Despite that caveat, however, the estimated rates of intransitivity are similar to the rates based on separate analyses of individual data. In other words, besides the individuals identified as intransitive in Table 3 and Appendix D, there is little additional evidence, if any, for mixtures containing partial or temporary intransitivity of these types in the individual block data of others.

Table H.1. Fit of Two Models to Frequency Data of LH Design. TE = True and Error model; IID = Independent and Identically Distributed model. Both models allow intransitivity. U–C = average frequency of a response pattern in either position arrangement but not both. Both = frequency of showing the same response pattern in both arrangements.

Table H.2. Fit of two models to frequency data of the LP design.

Table H.3. Fit of two models to frequency data of the PH design.

Footnotes

We thank William Batchelder, Daniel Cavagnaro, Geoffrey Iverson, R. Duncan Luce, Michel Regenwetter, and Clintin Davis-Stober, for helpful discussions of issues in this project. This work was supported in part by a grant from the National Science Foundation, SES DRMS-0721126. Experiment 1 is based on a Master’s thesis by the second author under supervision of the first.

1 A more general family of LS models can be defined by allowing subjective transformations of the prizes and probabilities, u(x), u(y), and t(p), and assuming that people compare subjective differences against the thresholds, for example as follows: if |u(x_A) − u(x_B)| > Δ_L for the lowest consequences, choose the gamble with the better lowest consequence; otherwise, if |t(p_A) − t(p_B)| > Δ_P, choose the gamble with the better probability of winning; otherwise, choose the gamble with the higher highest consequence. Because the functions u(x) and t(p) are assumed to be strictly monotonic, this version of the LS models also makes the same qualitative predictions concerning the AE choice compared to the set of adjacent choices, but it does not require that E≻D, D≻C, C≻B, and B≻A, aside from error. The analyses in Tables 3, 4, and 5 and Appendices D, F, and G allow for this more general family of LS models.

2 The four tests of interactive independence described here are as follows. Test 1: R = ($95, .95; $5) vs. S = ($55, .95; $20), paired with R = ($95, .1; $5) vs. S = ($55, .1; $20); Test 2: R = ($95, .90; $5) vs. S = ($55, .90; $20), paired with R = ($95, .05; $5) vs. S = ($55, .05; $20); Test 3: R = ($99, .90; $1) vs. S = ($40, .90; $35), paired with R = ($99, .10; $1) vs. S = ($40, .10; $35); Test 4: same as Test 3 with positions reversed.

References

Bahra, J. P. (2012). Violations of transitivity in decision making. (Unpublished master’s thesis). California State University, Fullerton, CA, USA.Google Scholar
Birnbaum, M. H. (1999). Paradoxes of Allais, stochastic dominance, and decision weights. In Shanteau, J., Mellers, B. A., & Schum, D. A. (Eds.), Decision science and technology: Reflections on the contributions of Ward Edwards (pp. 2752). Norwell, MA: Kluwer Academic Publishers.CrossRefGoogle Scholar
Birnbaum, M. H. (2004). Causes of Allais common consequence paradoxes: An experimental dissection. Journal of Mathematical Psychology, 48, 87106.CrossRefGoogle Scholar
Birnbaum, M. H. (2005). A comparison of five models that predict violations of first-order stochastic dominance in risky decision making. Journal of Risk and Uncertainty, 31, 263287.CrossRefGoogle Scholar
Birnbaum, M. H. (2008a). Evaluation of the priority heuristic as a descriptive model of risky decision making: Comment on Brandstätter, Gigerenzer, and Hertwig (2006). Psychological Review, 115, 253260.CrossRefGoogle Scholar
Birnbaum, M. H. (2008b). New paradoxes of risky decision making. Psychological Review, 115, 253262.CrossRefGoogle ScholarPubMed
Birnbaum, M. H. (2008c). New tests of cumulative prospect theory and the priority heuristic: Probability-Outcome tradeoff with branch splitting. Judgment and Decision Making, 3, 304316.CrossRefGoogle Scholar
Birnbaum, M. H. (2010). Testing lexicographic semiorders as models of decision making: Priority dominance, integration, interaction, and transitivity. Journal of Mathematical Psychology, 54, 363386.CrossRefGoogle Scholar
Birnbaum, M. H. (2011). Testing mixture models of transitive preference: Comments on Regenwetter, Dana, and Davis-Stober (2011). Psychological Review, 118, 675683.CrossRefGoogle Scholar
Birnbaum, M. H. (2012). A statistical test in choice data of the assumption that repeated choices are independently and identically distributed. Judgment and Decision Making, 7, 97109.CrossRefGoogle Scholar
Birnbaum, M. H., & Bahra, J. P. (2012). Separating response variability from structural inconsistency to test models of risky decision making. Judgment and Decision Making, 7, 402426.CrossRefGoogle Scholar
Birnbaum, M. H., & Gutierrez, R. J. (2007). Testing for intransitivity of preferences predicted by a lexicographic semiorder. Organizational Behavior and Human Decision Processes, 104, 97112.CrossRefGoogle Scholar
Birnbaum, M. H., & LaCroix, A. R. (2008). Dimension integration: Testing models without trade-offs. Organizational Behavior and Human Decision Processes, 105, 122133.CrossRefGoogle Scholar
Birnbaum, M. H., & Schmidt, U. (2008). An experimental investigation of violations of transitivity in choice under uncertainty. Journal of Risk and Uncertainty, 37, 7791.CrossRefGoogle Scholar
Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Choices without tradeoffs. Psychological Review, 113, 409432.CrossRefGoogle ScholarPubMed
Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2008). Risky choice with heuristics: Reply to Birnbaum (2008), Johnson, Schulte-Mecklenbeck, and Willemsen (2008), and Rieger and Wang (2008). Psychological Review, 115, 281–289.
Fiedler, K. (2010). How to study cognitive decision algorithms: The case of the priority heuristic. Judgment and Decision Making, 5, 21–32.
Gloeckner, A., & Betsch, T. (2008). Do people make decisions under risk based on ignorance? An empirical test of the priority heuristic versus cumulative prospect theory. Organizational Behavior and Human Decision Processes, 107, 75–95.
Gloeckner, A., & Herbold, A.-K. (2011). An eye-tracking study on information processing in risky decisions: Evidence for compensatory strategies based on automatic processes. Journal of Behavioral Decision Making, 24, 71–98.
González-Vallejo, C. (2002). Making trade-offs: A probabilistic and context-sensitive model of choice behavior. Psychological Review, 109, 137–155.
Hilbig, B. (2008). One-reason decision making in risky choice? A closer look at the priority heuristic. Judgment and Decision Making, 3, 457–462.
Iverson, G., & Falmagne, J.-C. (1985). Statistical issues in measurement. Mathematical Social Sciences, 10, 131–153.
Loomes, G., Starmer, C., & Sugden, R. (1991). Observing violations of transitivity by experimental methods. Econometrica, 59, 425–440.
Loomes, G., & Sugden, R. (1995). Incorporating a stochastic element into decision theories. European Economic Review, 39, 641–648.
Luce, R. D. (1956). Semiorders and a theory of utility discrimination. Econometrica, 24, 178–191.
Luce, R. D. (1959). Individual choice behavior. New York: Wiley.
Luce, R. D. (2000). Utility of gains and losses: Measurement-theoretical and experimental approaches. Mahwah, NJ: Lawrence Erlbaum Associates.
Morrison, H. W. (1963). Testable conditions for triads of paired comparison choices. Psychometrika, 28, 369–390.
Myung, J., Karabatsos, G., & Iverson, G. (2005). A Bayesian approach to testing decision making axioms. Journal of Mathematical Psychology, 49, 205–225.
Regenwetter, M., Dana, J., & Davis-Stober, C. P. (2010). Testing transitivity of preferences on two-alternative forced choice data. Frontiers in Psychology, 1, 148. http://dx.doi.org/10.3389/fpsyg.2010.00148
Regenwetter, M., Dana, J., & Davis-Stober, C. P. (2011). Transitivity of preferences. Psychological Review, 118, 42–56.
Regenwetter, M., Dana, J., Davis-Stober, C. P., & Guo, Y. (2011). Parsimonious testing of transitive or intransitive preferences: Reply to Birnbaum (2011). Psychological Review, 118, 684–688.
Rieskamp, J. (2008). The probabilistic nature of preferential choice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 1446–1465.
Rieskamp, J., Busemeyer, J. R., & Mellers, B. A. (2006). Extending the bounds of rationality: Evidence and theories of preferential choice. Journal of Economic Literature, 44, 631–661.
Smith, J. B., & Batchelder, W. H. (2008). Assessing individual differences in categorical data. Psychonomic Bulletin & Review, 15, 713–731. http://dx.doi.org/10.3758/PBR.15.4.713
Tversky, A. (1969). Intransitivity of preferences. Psychological Review, 76, 31–48.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297–323.
Wakker, P. (2011). Prospect theory: For risk and ambiguity. Cambridge, UK: Cambridge University Press.
Table 1: Gambles used in linked tests of transitivity.

Table 2: Raw data from Case #134 in the LH, LP, and PH designs. “Day” indicates the day on which the participant completed each block (“blk”). “Order” indicates whether all 20 responses in a block were perfectly consistent with a transitive order. Note that all 60 responses are opposite between Block 7 and Block 15.

Table 3: The frequency of consistent, modal response patterns in LH, LP, and PH designs. To be consistent, the participant had to have the same modal response pattern, over repetition blocks, in both ways of presenting the choices. Patterns 11112 and 22221 are intransitive. There were 51, 43, and 42 participants in Experiments 1, 2, and 3 with three designs each; only 7 cases out of 333 consistent modal patterns were intransitive.

Table 4: Percentages of all response patterns in the LH, LP, and PH designs. Column sums may differ from 100 due to rounding.

Table 5: Analysis of Participants #125, #214, and #309. LH and LH2 show the response patterns for choice problems AB, BC, CD, DE, and AE when the alphabetically higher gamble was presented first or second, respectively. The patterns 22221 and 11112 are intransitive (bold font).
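
As an illustration of this classification, the following minimal Python sketch (ours, not part of the original analysis) checks which of the 32 possible response patterns to these five choice problems are compatible with some transitive order; the pair ordering and the 1/2 response coding follow the caption.

```python
from itertools import permutations, product

# The five choice problems of the caption; response 1 = first listed
# gamble chosen, response 2 = second listed gamble chosen.
PAIRS = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "E")]

def is_transitive(pattern):
    """True if some linear order over A-E reproduces all five responses."""
    for order in permutations("ABCDE"):
        rank = {g: i for i, g in enumerate(order)}  # lower rank = preferred
        if all((rank[x] < rank[y]) == (resp == 1)
               for (x, y), resp in zip(PAIRS, pattern)):
            return True
    return False

patterns = list(product((1, 2), repeat=5))
print(sum(map(is_transitive, patterns)), "of", len(patterns), "transitive")
print(is_transitive((2, 2, 2, 2, 1)))  # False: 22221 is intransitive
print(is_transitive((1, 1, 1, 1, 2)))  # False: 11112 is intransitive
```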

Table A.1: Predicted preference patterns in the LPH LS model.
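
For concreteness, here is a minimal sketch of an LPH lexicographic semiorder of the kind used to generate such predictions: the lowest consequence (L) is compared first, then the probability (P), then the highest consequence (H), and the first difference exceeding its threshold decides. The threshold values and the example gambles below are illustrative assumptions, not parameters estimated in this article.

```python
def lph_choice(f, g, delta_L=8.0, delta_P=0.10):
    """LPH lexicographic semiorder for binary gambles (low, p_high, high):
    win `high` with probability p_high, otherwise win `low`.
    Returns 1 if f is chosen, 2 if g is chosen, 0 if undecided.
    """
    (fl, fp, fh), (gl, gp, gh) = f, g
    if abs(fl - gl) > delta_L:     # L: lowest consequences
        return 1 if fl > gl else 2
    if abs(fp - gp) > delta_P:     # P: probability to win the higher prize
        return 1 if fp > gp else 2
    if fh != gh:                   # H: highest consequences (no threshold)
        return 1 if fh > gh else 2
    return 0

# The L difference (4) is within delta_L and P ties, so H decides:
print(lph_choice((4, 0.5, 96), (8, 0.5, 92)))  # -> 1
```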

Table B.1: Analysis of iid assumptions in Experiments 1 and 2 in the LH, LP, and PH designs (m = mean number of preference reversals between blocks; var = variance of the reversals; pv = simulated p-level of the variance test; r = correlation; pr = simulated p-level of the correlation test).
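
The simulated p-levels rest on Monte Carlo permutation logic: under iid, independently shuffling each choice problem's responses across blocks leaves the sampling distribution of the test statistics unchanged. The sketch below is our reconstruction of that logic, not the authors' code; the choice of statistics (variance of between-block reversal counts, and the correlation of those counts with block separation) is our assumption based on the caption.

```python
import numpy as np

rng = np.random.default_rng(0)

def iid_tests(data, n_sim=1000):
    """data: (n_blocks, n_choices) array of responses coded 1 or 2.
    Returns simulated p-levels (pv, pr) for the variance test and the
    correlation test."""
    def stats(d):
        n = d.shape[0]
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        # Number of preference reversals between each pair of blocks:
        reversals = np.array([(d[i] != d[j]).sum() for i, j in pairs])
        gaps = np.array([j - i for i, j in pairs])
        return reversals.var(), np.corrcoef(gaps, reversals)[0, 1]

    v_obs, r_obs = stats(data)
    v_hits = r_hits = 0
    for _ in range(n_sim):
        # Permute each choice problem's responses across blocks,
        # independently of other problems, as iid implies is harmless.
        shuffled = np.column_stack([rng.permutation(col) for col in data.T])
        v, r = stats(shuffled)
        v_hits += v >= v_obs
        r_hits += r >= r_obs
    return v_hits / n_sim, r_hits / n_sim
```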

Table B.2: Analysis of iid assumptions in Experiment 3, as in Table B.1. Each block contained 107 choice problems, including the LH, LP, and PH designs. Blocks were separated by a filler task of 57 choices.

Table C.1: Binary choice proportions (above the diagonal) for each design, medians over all three experiments. Predictions of the priority heuristic are shown below the diagonal; “?” indicates that the model is undecided.
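
The below-diagonal predictions can be generated mechanically from the heuristic's three reasons. The sketch below follows Brandstätter et al. (2006) for two-outcome gain gambles, but simplifies the aspiration level to 10% of the larger maximum gain, omitting the rounding to prominent numbers used in the original formulation.

```python
def priority_heuristic(f, g):
    """Priority heuristic for two-outcome gain gambles (low, p_high, high).
    Returns 1 if f is chosen, 2 if g is chosen, 0 if no reason decides
    (the "?" cases in the table)."""
    (fl, fp, fh), (gl, gp, gh) = f, g
    aspiration = 0.10 * max(fh, gh)     # simplified aspiration level
    if abs(fl - gl) >= aspiration:      # 1st reason: minimum gains
        return 1 if fl > gl else 2
    if abs(fp - gp) >= 0.10:            # 2nd reason: probability of the
        return 1 if fp > gp else 2      # minimum gain (= 1 - p_high)
    if fh != gh:                        # 3rd reason: maximum gains
        return 1 if fh > gh else 2
    return 0
```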

Table D.1: Frequency of response patterns in tests of transitivity in the LH design. The pattern of intransitivity predicted by the priority heuristic is 22221.

Table D.2: Frequency of response patterns in tests of transitivity in the LP design.

Table D.3: Frequency of response patterns in tests of transitivity in the PH design. The pattern of intransitivity predicted by the priority heuristic is 11112.

Table E.1: Binary choice proportions for each individual in the LH design. WST = weak stochastic transitivity; TI = triangle inequality; “yes” means that the property is perfectly satisfied by the proportions. The order compatible with WST is listed. Blks = number of blocks, each of which included two presentations of each choice.
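
The “yes” entries can be verified directly from a matrix of binary choice proportions. The following sketch is ours, and the example matrix is illustrative, not data from the article; it checks both properties over all ordered triples of gambles.

```python
from itertools import permutations

def check_wst_ti(P):
    """P[i][j] = proportion of trials on which gamble i was chosen over
    gamble j (i != j).  Returns (wst_ok, ti_ok)."""
    wst = ti = True
    for i, j, k in permutations(range(len(P)), 3):
        # WST: majority preferences must be transitive.
        if P[i][j] >= 0.5 and P[j][k] >= 0.5 and P[i][k] < 0.5:
            wst = False
        # TI: P(i,j) + P(j,k) - P(i,k) <= 1 for all distinct i, j, k.
        if P[i][j] + P[j][k] - P[i][k] > 1:
            ti = False
    return wst, ti

# Illustrative 3 x 3 proportions (diagonal unused):
P = [[None, 0.9, 0.8],
     [0.1, None, 0.7],
     [0.2, 0.3, None]]
print(check_wst_ti(P))  # -> (True, True)
```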

Table E.2: Binary choice proportions in the LP design, as in Table E.1.

Table E.3: Individual binary choice proportions in the PH design, as in Table E.1.

Table G.1: Individual choice proportions in the LS design (Experiments 2 and 3).

Table H.1: Fit of two models to frequency data of the LH design. TE = true-and-error model; IID = independent and identically distributed model. Both models allow intransitivity. U–C = average frequency of a response pattern in either position arrangement but not both; Both = frequency of showing the same response pattern in both arrangements.
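
To convey the logic of the TE model in its simplest form, consider one choice problem presented twice per block, once in each position arrangement. If p is the probability that the true preference is response 1 and e < .5 is the error rate on each presentation, the probabilities of the four within-block response patterns follow directly. The grid-search fit below is an illustrative sketch, not the authors' fitting procedure, which applies the analogous logic to five-choice response patterns.

```python
import math
from itertools import product

def te_probs(p, e):
    """TE probabilities of the within-block patterns for one choice
    presented twice; '12' means response 1 first, response 2 second."""
    return {
        "11": p * (1 - e) ** 2 + (1 - p) * e ** 2,
        "12": e * (1 - e),   # exactly one error, whatever the true state
        "21": e * (1 - e),
        "22": p * e ** 2 + (1 - p) * (1 - e) ** 2,
    }

def fit_te(counts, grid=101):
    """Grid-search maximum likelihood for (p, e) given observed counts
    of the patterns '11', '12', '21', '22'."""
    best = (float("-inf"), None, None)
    for i, j in product(range(grid), range(grid // 2)):
        p, e = i / (grid - 1), j / (grid - 1)
        probs = te_probs(p, e)
        if any(probs[k] == 0 and counts[k] > 0 for k in counts):
            continue  # zero-probability pattern was observed
        ll = sum(counts[k] * math.log(probs[k])
                 for k in counts if counts[k] > 0)
        if ll > best[0]:
            best = (ll, p, e)
    return best  # (log-likelihood, p-hat, e-hat)

print(fit_te({"11": 15, "12": 3, "21": 2, "22": 20}))
```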

Table H.2: Fit of two models to frequency data of the LP design, as in Table H.1.

Table H.3: Fit of two models to frequency data of the PH design, as in Table H.1.

Supplementary material:
Birnbaum and Bahra supplementary material 1 (File, 653.3 KB)
Birnbaum and Bahra supplementary material 2 (File, 608.3 KB)
Birnbaum and Bahra supplementary material 3 (File, 599 KB)