The objectives of this study were to obtain a deeper understanding of the donor behavior characteristics of young affluent individuals, and to ascertain whether young affluent women differed significantly from young affluent men in their approaches to philanthropy. Two hundred and seventeen investment bankers, accountants, and corporate lawyers, aged under 40 years, earning more than £50,000 annually, and working in the City of London were questioned about their attitudes and behavior in relation to charitable giving. Significant differences emerged between the donor behavior characteristics of men and women. A conjoint analysis revealed that whereas men were more interested in donating to the arts sector in return for “social” rewards (invitations to gala events and black-tie dinners, for example), women had strong predilections to give to “people” charities and sought personal recognition from the charity to which they donated.
In the face of the discourse about the democratic deficit and declining public support for the European Union (EU), institutionalist scholars have examined the roles of institutions in EU decision making and, in particular, the implications of the empowered European Parliament. Almost in isolation from this literature, prior research on public attitudes toward the EU has largely adopted utilitarian, identity and informational accounts that focus on individual‐level attributes. Combining insights from the institutional and behavioural literatures, this article reports on a novel cross‐national conjoint experiment designed to investigate the multidimensionality of public attitudes by taking into account the specific roles of institutions and distinct stages in EU decision making. Analysing data from a large‐scale experimental survey in 13 EU member states, the findings demonstrate how and to what extent the institutional design of EU decision making shapes public support. In particular, the study finds a general pattern of public consensus about preferred institutional reform regarding powers of proposal, adoption and voting among European citizens in different countries, but notable dissent about sanctioning powers. The results show that utilitarian and partisan considerations matter primarily for the sanctioning dimension, in which many respondents in Austria, the Czech Republic, Denmark and Sweden prefer national courts to the Court of Justice of the EU.
Most charity organizations depend on contributions from the general public, but little research has been conducted on donor preferences. Do donors have geographical, recipient, or thematic preferences? We designed a conjoint analysis experiment in which people rated development aid projects by donating money in dictator games. We find that our sample shows strong age, gender, regional, and thematic preferences. Furthermore, we find significant differences between segments. The differences in donations are consistent with differences in donors’ attitudes toward development aid and their beliefs about differences in the poverty and vulnerability of the recipients. The method used here for development projects can easily be adapted to elicit preferences for other kinds of projects that rely on gifts from private donors.
This paper analyzes how social venture capitalists evaluate the integrity of social entrepreneurs. Based on an experiment with 40 social venture capitalists and 40 students, we investigate how five attributes of the entrepreneur contribute to the assessment of integrity: the entrepreneur’s personal experience, professional background, voluntary accountability efforts, reputation, and awards or fellowships granted to the entrepreneur. Results indicate that social venture capitalists focus largely on the entrepreneur’s voluntary accountability efforts and reputation when judging integrity. For an overall positive judgment of integrity, it seems to be sufficient if either the voluntary accountability efforts or the reputation of the entrepreneur is high. By comparing social venture capitalists with students, we show that experience leads to a simpler decision model focused on key attributes.
The study of subjective democratic legitimacy from a citizens’ perspective has become an important strand of research in political science. Echoing the well‐known distinction between ‘input‐oriented’ and ‘output‐oriented’ legitimacy, the scientific debate on this topic has produced two opposing views. Some scholars find that citizens have a strong and intrinsic preference for meaningful participation in collective decision making. Others argue, to the contrary, that citizens prefer ‘stealth democracy’ because they care mainly about the substance of decisions and much less about the procedures leading to them. In this article, citizens’ preferences regarding democratic governance are explored, focusing on their evaluations of a public policy according to criteria related to various legitimacy dimensions, as well as on the (tense) relationship among those dimensions. Data from a population‐based conjoint experiment conducted in eight metropolitan areas in France, Germany, Switzerland and the United Kingdom are used. By analysing 5,000 respondents’ preferences for different governance arrangements, which were randomly varied with respect to their input, throughput and output quality as well as their scope of authority, light is shed on the relative importance of different aspects of democratic governance. It is found, first, that output evaluations are the most important driver of citizens’ choice of a governance arrangement; second, that criteria of input and throughput legitimacy have consistent positive effects that operate largely independently of output evaluations; and third, that democratic input, but not democratic throughput, is considered somewhat more important when a governance body holds a high level of formal authority. These findings run counter to a central tenet of the ‘stealth democracy’ argument. While they indeed suggest that political actors and institutions can gain legitimacy primarily through the provision of ‘good output’, citizens’ demands for input and throughput do not seem to be conditioned by the quality of output, as advocates of stealth democracy theory suggest. Democratic input and throughput remain important secondary features of democratic governance.
The boom in survey experiments in international relations has allowed researchers to make causal inferences on longstanding foreign policy debates such as the democratic peace and audience costs. However, most of these experiments rely on mass samples, whereas foreign policy is arguably more technocratically driven. We probe the validity of generalizing from mass to elite preferences by exploring the preferences of ordinary U.S. citizens and foreign policy experts (employees of the U.S. Department of State) in two identical conjoint experiments on democratic peace. We find not only that experts are more opposed to military actions against other democracies than members of the public, but also that overall preferences about matters of war and peace are stronger among foreign policy professionals.
The literature in political science considers (sometimes inaccurate) perceptions of immigrants as a factor in anti-immigration attitudes among natives, but much less is known about perceptions regarding immigrants from specific regions. In this paper, I explore Americans’ perceptions about immigrants from Africa, Asia, Europe, Latin America, and the Middle East. To measure these perceptions, I apply a conjoint experiment with a multinomial outcome, in which respondents are asked to categorize hypothetical immigrants as coming from one of the five regions. Results from a nationally diverse sample demonstrate that immigrants from all regions other than Europe are associated with speaking poor English. Immigrants from Latin America are also associated with welfare dependency and rule-breaking behavior, while the opposite is true for immigrants from Asia. These negative perceptions may at least partly explain opposition to non-European, and specifically Latin American, immigration in the United States.
Despite voters' distaste for corruption, corrupt politicians frequently get reelected. This Element provides a framework for understanding when corrupt politicians are reelected. One unexplored source of electoral accountability is court rulings on candidate malfeasance, which are increasingly determining politicians' electoral prospects. The findings suggest that (1) low-income voters – in contrast to higher-income voters – are responsive to such rulings. Unlike earlier studies, we explore the multiple trade-offs voters weigh when confronting corrupt candidates, including the candidate's party, policy positions, and personal attributes. The results also show, surprisingly, that (2) low-income voters, like higher-income voters, weigh corruption allegations and policy positions similarly, and are slightly more responsive to candidate attributes. Moreover, irrespective of voter income, (3) party labels insulate candidates from corruption, and (4) candidate attributes like gender have little effect. The results have implications for when voters punish corrupt politicians, the success of anti-corruption campaigns, and the design and legitimacy of electoral institutions.
Researchers in the field of conjoint analysis know that index-of-fit values worsen as the judgmental error of evaluation increases. This simulation study provides guidelines on goodness of fit based on the distribution of the index of fit for different conjoint analysis designs. The study design included the following factors: number of profiles, number of attributes, algorithm used, and judgmental model used. Critical values are provided for deciding the statistical significance of conjoint analysis results. Using these cumulative distributions, the power of the test used to reject the null hypothesis of random ranking is calculated. The test is found to be quite powerful except in the case of very small residual degrees of freedom.
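Purely as an illustration of the mechanics described here (the design below is a small full factorial; it is not the study's actual combinations of profiles, attributes, algorithms, and judgmental models, and it uses plain OLS on ranks rather than the ranking algorithms the study compares), the null distribution of a fit index under random ranking can be simulated along these lines:

```python
# A minimal sketch: simulate the null distribution of R^2 when respondents
# rank conjoint profiles completely at random, and read off a critical value.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Full-factorial design: 3 attributes with 2 levels each -> 8 profiles.
X = np.array(list(product([0, 1], repeat=3)), dtype=float)
X = np.column_stack([np.ones(len(X)), X])          # add an intercept column
n_profiles = X.shape[0]

def r_squared(y, X):
    """R^2 of an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Null distribution: each simulated respondent ranks the profiles at random.
null_fits = np.array([
    r_squared(rng.permutation(n_profiles).astype(float), X)
    for _ in range(10_000)
])

critical_value = np.quantile(null_fits, 0.95)
print(f"95% critical value of R^2 under random ranking: {critical_value:.3f}")
```

An observed index of fit above the simulated critical value would justify rejecting the null hypothesis of random ranking at the 5% level, which is the logic behind the tabulated critical values the study provides.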
Recent research reflects a growing awareness of the value of using structural equation models to analyze repeated measures data. However, such data, particularly in the presence of covariates, often lead to models that either fit the data poorly, are exceedingly general and hard to interpret, or are specified in a manner that is highly data dependent. This article introduces methods for developing parsimonious models for such data. The underlying technology uses reduced-rank representations of the variances, covariances and means of observed and latent variables. The value of this approach, which may be implemented using standard structural equation modeling software, is illustrated in an application study aimed at understanding heterogeneous consumer preferences. In this application, the parsimonious representations characterize systematic relationships among consumer demographics, attitudes and preferences that would otherwise be undetected. The result is a model that is parsimonious, illuminating, and fits the data well, while keeping data dependence to a minimum.
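The parsimony mechanism can be stated compactly. As an illustrative sketch (the symbols below are generic, not the article's notation), suppose the means of $p$ preference variables for consumer $i$ depend linearly on $q$ demographic covariates $x_i$; a reduced-rank representation constrains the coefficient matrix to a low-rank product:

$$\mu_i = \mu_0 + B x_i, \qquad B = \Lambda \Gamma, \quad \Lambda \in \mathbb{R}^{p \times r},\ \Gamma \in \mathbb{R}^{r \times q},\ r < \min(p, q),$$

which replaces the $pq$ free coefficients of the unconstrained model with $r(p+q-r)$ identified parameters. Analogous factorizations can be imposed on variance and covariance structures, which is how such models stay parsimonious while still linking demographics, attitudes, and preferences.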
Societies are experiencing deep and intertwined structural changes that may unsettle the perceptions European citizens have of their economic and employment security. In turn, such perceptions likely alter people’s political positions. For instance, those worried by labour market competition may prefer greater social protection to compensate for the accrued risk, or prefer more closed economies in which external borders provide protection (real or perceived). We develop expectations about how such distinct reactions can emerge from the distinct labour-market risks posed by globalization, automation, and migration. We test these expectations using a conjoint experiment on European-level social policy in 13 European countries. Results broadly corroborate our expectations about how different concerns about sources of labour market competition yield support for different features of European-level social policy.
From Part II of The Practice of Experimentation in Sociology, by Davide Barrera (Università degli Studi di Torino, Italy), Klarita Gërxhani (Vrije Universiteit, Amsterdam), Bernhard Kittel (Universität Wien, Austria), Luis Miller (Institute of Public Goods and Policies, Spanish National Research Council), and Tobias Wolbring (School of Business, Economics and Society at the Friedrich-Alexander-University Erlangen-Nürnberg).
Vignette experiments are a tool for presenting systematically varied descriptions of traits and conditions to survey respondents and eliciting their beliefs and normative judgments about different combinations of these traits and conditions. Using a study on the gender pay gap and an analysis of trust problems in the purchase of used cars as examples, we discuss the design characteristics of vignettes. Core issues are the selection of the vignettes to include from the universe of possible combinations; the type of dependent variable, such as rating scales or ranking tasks; the presentation style, differentiating text vignettes from a tabular format; and issues related to sampling strategies.
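The combinatorial core of the selection problem can be sketched in a few lines (the dimensions and levels below are invented for illustration, loosely echoing the gender-pay-gap example; this shows simple random selection from the universe, not the chapter's full treatment of selection strategies):

```python
# A minimal sketch: enumerate the full universe of vignette combinations,
# then draw a small random deck for each respondent.
import random
from itertools import product

dimensions = {
    "gender":      ["female", "male"],
    "occupation":  ["nurse", "engineer", "teacher"],
    "tenure":      ["2 years", "10 years"],
    "performance": ["average", "excellent"],
}

# Universe of possible combinations: 2 * 3 * 2 * 2 = 24 vignettes.
universe = [dict(zip(dimensions, combo)) for combo in product(*dimensions.values())]

def sample_deck(universe, n_vignettes, seed):
    """Draw a random deck of vignettes for one respondent."""
    rng = random.Random(seed)
    return rng.sample(universe, n_vignettes)

# Each respondent evaluates 6 vignettes, e.g. by proposing a fair salary.
for vignette in sample_deck(universe, n_vignettes=6, seed=42):
    print(vignette)
```

In practice, designs often replace simple random draws with fractional-factorial or D-efficient subsets so that the dimensions remain orthogonal across the decks respondents see.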
Survey experiments are an important tool for measuring policy preferences. Researchers often rely on the random assignment of policy attribute levels to estimate different types of average marginal effects. Yet researchers are often interested in how respondents trade off different policy dimensions. We use a conjoint experiment administered to more than 10,000 respondents in Germany to study preferences over personal freedoms and public welfare during the COVID-19 crisis. Using a pre-registered structural model, we estimate policy ideal points and indifference curves to assess the conditions under which citizens are willing to sacrifice freedoms in the interest of public well-being. We document broad willingness to accept restrictions on rights, alongside sharp heterogeneity with respect to vaccination status. The majority of citizens are vaccinated and strongly support limitations on freedoms in response to extreme pandemic conditions, especially when they themselves, as vaccinated individuals, are exempted from these limitations. The unvaccinated minority prefers no restrictions on freedoms regardless of the severity of the pandemic. These policy packages also matter for reported trust in government, in opposite ways for vaccinated and unvaccinated citizens.
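The structural model itself is not reproduced in the abstract, but the ideal-point logic it invokes can be sketched. In a minimal illustration (the notation here is assumed, not the authors'):

$$U_i(x) = -\sum_{k} w_{ik}\,\left(x_k - \theta_{ik}\right)^2,$$

where $x_k$ is the level of policy dimension $k$ (say, the severity of restrictions on freedoms or the degree of protection of public welfare), $\theta_{ik}$ is respondent $i$'s ideal point on that dimension, and $w_{ik} \ge 0$ is its weight. An indifference curve is a level set $\{x : U_i(x) = c\}$; its slope gives the rate at which a respondent will trade freedoms against public well-being, which is what the estimated curves are used to assess.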
Class action damages used to be boring. Essentially an accounting exercise, they came at the end of the case, after resolution of the more interesting issues of what the defendant did and whether it was liable for doing it. And because trial rarely happens, especially in consumer class actions where jury awards can be untethered to damages estimates and potentially astronomical, the damages reports quietly served by the dueling expert witnesses near the close of discovery functioned mainly as a benchmark for pretrial settlement discussions.
The problems referred to in the title of this chapter concern evaluating a given variable when it is one of several that have combined to bring about a result. In some cases, there is an easy market solution. Imagine that you contract to buy a house and then the beautiful kitchen stove, one of many things that attracted you to the property, is destroyed before you close the transaction or occupy the property. How much should the price now be reduced? Here there is an upper limit based on the cost of a comparable replacement appliance. A more precise valuation would also be easy if identical houses, lacking this one feature, had recently been sold. The stove is just a piece of the larger transaction and, with these convenient facts, there is not much of a “component valuation problem.” Additionally, the stove is unlikely to have been of greater value because of its interaction with other items in the house; colors and sizes are fairly standardized. “Conjoint analysis” – a term that usually refers to survey evidence that tries to elicit the value of a component – is therefore unnecessary, or at least uncomplicated, because value does not depend on an interaction among variables in a way that is not directly observed. The example is also interesting because it does not present a difficult game theory problem, or a result that might be described in common parlance as something that depends on the relative bargaining skill of the parties.
Conjoint analysis is a commonly used methodology in marketing – it can provide crucial information for new product development, product line extensions, design of product packaging, pricing, and various other applications for which it is important to understand consumer preferences. Because conjoint analysis can help market researchers, managers, and ultimately anyone else answer the question of which attributes of a product impact consumer purchase decisions, and to what extent, the method has been applied more and more frequently in litigation. For example, in the legal domain, conjoint surveys can contribute to understanding and determining purchase reasons, consumer valuations, and potentially associated damages in matters with claims regarding product liability, false advertising, lack of disclosures, data/privacy breaches, infringement of intellectual property, and antitrust issues. Even though conjoint analysis seems to be a useful instrument for tackling certain legal challenges involving consumer purchase decision-making, courts have frequently excluded conjoint analyses from allowable evidence due to concerns regarding the validity or applicability of their results. The reasons for factfinders’ skepticism are manifold, ranging from lack of specific expertise to misapplications of the technique. While lack of expertise can be preempted through careful selection of a proficient expert, the process of conducting a reliable conjoint analysis presents hurdles and challenges to anyone: sometimes conjoint analysis is simply an unsuitable methodology for the question at hand, and at other times intricate aspects of the survey design or sample selection are disregarded. In the same vein, experts have noted on various occasions that the conjoint methodology may run into conceptual problems, such as ignoring supply-side factors when determining consumers’ loss for a specific product characteristic that may have been promised but was not provided. This chapter outlines common applications of conjoint analysis in litigation, describes the basic concepts and approaches for properly applying conjoint analysis, and points to misapplications of conjoint analysis in litigation matters. It also makes evident how conjoint survey design, data analysis, and the use of results in litigation matters depend on the complexities of each case.
Conjoint analysis is widely used for estimating the effects of a large number of treatments on multidimensional decision-making. However, this substantive advantage brings a statistically undesirable property: multiple hypothesis testing. With few exceptions, existing applications of conjoint analysis do not correct for the number of hypotheses to be tested, and empirical guidance on the choice of multiple testing correction methods has not been provided. This paper first shows that even when none of the treatments has any effect, the standard analysis pipeline produces at least one statistically significant estimate of average marginal component effects in more than 90% of experimental trials. We then conduct a simulation study to compare three well-known methods for multiple testing correction: the Bonferroni correction, the Benjamini–Hochberg procedure, and adaptive shrinkage (Ash). All three methods are more accurate in recovering the truth than the conventional analysis without correction. Moreover, the Ash method performs best at avoiding false negatives while reducing false positives as effectively as the other methods. Finally, we show how conclusions drawn from empirical analysis may differ with and without correction by reanalyzing applications on public attitudes toward immigration and partner countries of trade agreements.
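To make the two classical corrections concrete, here is a minimal sketch (with invented p-values, not the paper's code or data) of the Bonferroni and Benjamini–Hochberg procedures applied to a vector of p-values such as those produced by testing many AMCEs:

```python
# A minimal sketch: Bonferroni (family-wise error control) versus the
# Benjamini-Hochberg step-up procedure (false discovery rate control).
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 where p < alpha / m."""
    return pvals < alpha / len(pvals)

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: find the largest k with p_(k) <= (k/m) * alpha and
    reject the k hypotheses with the smallest p-values."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # index of the largest passing p-value
        reject[order[: k + 1]] = True
    return reject

# Illustrative p-values for, say, 8 attribute-level AMCE estimates.
p = np.array([0.001, 0.004, 0.019, 0.030, 0.041, 0.20, 0.47, 0.86])
print("Bonferroni rejects:", bonferroni(p))
print("BH rejects:        ", benjamini_hochberg(p))
```

The Ash method is not sketched here, as it additionally requires estimating a prior distribution over effect sizes rather than operating on p-values alone.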
For the purposes of farm animal welfare assessment, Farm Assurance Schemes and enforcement of animal welfare legislation, a requirement arises for a unitary welfare score which may be the amalgamation of several animal welfare measures. In amalgamating measures, weighting to reflect the importance of the individual measures for animal welfare is desirable. A study is described in which conjoint analysis was used to collect and evaluate expert opinion to weight a number of welfare assessment measures by the importance of each to broiler welfare in UK husbandry systems. The statistically combined opinion of the experts consulted revealed the weighting factors of the selected welfare assessment measures, with respect to importance for bird welfare, to be: 0.26 for mortality levels on the growing unit; 0.24 for the level of leg weakness; 0.16 for the level of hock burn; 0.14 for stocking density; 0.10 for enrichment provision; and 0.10 for the level of emergency provision. Criteria for the selection of welfare assessment measures for use in the field, and the level of agreement between the experts consulted for the study, are discussed. It is concluded that weightings of welfare assessment measures by expert opinion, using conjoint analysis, might be used in the construction of a welfare index for assessment of broiler welfare on-farm. Such an index should not be considered a ‘gold standard’ for welfare measurement but an evolving standard for welfare assessment, based on current knowledge.
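Taken at face value, the amalgamation described is a weighted sum. As a worked illustration (assuming, which the abstract does not specify, that each measure $x$ is first normalized to a common scale such as 0–100, with higher values indicating better welfare):

$$W = 0.26\,x_{\text{mortality}} + 0.24\,x_{\text{leg}} + 0.16\,x_{\text{hock}} + 0.14\,x_{\text{stocking}} + 0.10\,x_{\text{enrichment}} + 0.10\,x_{\text{emergency}},$$

where the weights are those elicited from the experts and sum to 1, so the unitary score $W$ remains on the same scale as the normalized measures.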
A large range of variables can affect the welfare of the dairy cow, making it difficult to assess the overall ‘level of welfare’ of the individual animal. Two groups of individuals completed a questionnaire based upon the ‘five freedoms’: 26 respondents had expertise either in the field of dairy cow welfare or as practicing veterinary surgeons, and 30 were veterinary students in their penultimate year of study. Conjoint analysis was used to calculate the average importance scores (AIS) for 34 variables presented to the respondents as 52 ‘model cows’ in the form of grouped questions, phrases and pictures. Conjoint analysis identified the most important factors for each ‘freedom’: access to forage, body condition score, foot conformation, hock lesions, and the encouragement required for a dairy cow to walk into the parlour. There was a significant difference between the expert and student groups for seven out of 34 factors, which may be attributed to individual variation of opinion, knowledge, experience and expectation. The factors were ranked within each ‘freedom’ using the experts' AIS but it was not assumed that each freedom had equal ‘weight’; therefore, the factors within each freedom were compared only with factors within the same freedom. These scores produced a weighting scale, which was applied on-farm, in a preliminary exercise comparing ‘model’ and ‘perceived’ welfare scores.
Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm was developed (Yee, Dahan, Hauser & Orlin, 2007; Kohli & Jedidi, 2007) to model lexicographic heuristics from preference data. We compare the predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking than rating data, but overall it achieved lower predictive accuracy for hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but it can boost the study of situational and individual differences in preferential choice processes.
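The contrast between the two model families can be illustrated with a small sketch (the attributes, weights, and aspect order below are invented; this shows the decision rules being compared, not the greedoid algorithm itself, which estimates such an aspect order from observed preferences):

```python
# A minimal sketch: a compensatory (weighted-additive) rule versus a
# lexicographic rule that orders options by the most important attribute
# and breaks ties with the next one.
from typing import Dict, List

Option = Dict[str, int]   # attribute -> level, higher = better

def additive_rank(options: List[Option], weights: Dict[str, float]) -> List[Option]:
    """Compensatory model: rank by the weighted sum of attribute levels."""
    return sorted(options,
                  key=lambda o: sum(weights[a] * o[a] for a in weights),
                  reverse=True)

def lexicographic_rank(options: List[Option], aspect_order: List[str]) -> List[Option]:
    """Lexicographic heuristic: compare attributes in a fixed priority order."""
    return sorted(options,
                  key=lambda o: tuple(o[a] for a in aspect_order),
                  reverse=True)

options = [
    {"price": 3, "quality": 1, "brand": 2},
    {"price": 1, "quality": 3, "brand": 3},
    {"price": 2, "quality": 2, "brand": 1},
]

print(additive_rank(options, weights={"price": 0.5, "quality": 0.3, "brand": 0.2}))
print(lexicographic_rank(options, aspect_order=["quality", "price", "brand"]))
```

The two rules can produce different rankings of the same options, which is exactly the behavioral difference the hold-out prediction comparison in the study exploits.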