This chapter shows how microeconomics can provide insights into the key challenge that artificial intelligence (AI) scientists face: creating intelligent, autonomous agents that can make rational decisions. In meeting this challenge, they confront two questions: what decision theory to follow and how to implement it in AI systems. This chapter answers these questions and makes three contributions. The first is to discuss how economic decision theory – expected utility theory (EUT) – can help AI systems with utility functions to deal with the problem of instrumental goals, the possibility of utility function instability, and coordination challenges in multi-actor and human–agent collective settings. The second is to show that using EUT restricts AI systems to narrow applications, which are “small worlds” where concerns about AI alignment may lose urgency and be better labeled as safety issues. The third points to several areas where economists may learn from AI scientists as they implement EUT.
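For readers unfamiliar with EUT's decision rule, the following minimal sketch (ours, with hypothetical actions, states, and numbers, not examples from the chapter) shows what it means for an AI system with a utility function to choose rationally: score each action by its probability-weighted utility and pick the maximizer.

```python
# Illustrative sketch only: a minimal expected-utility maximizer of the kind
# the chapter discusses. Actions, states, probabilities, and utilities below
# are hypothetical placeholders.

ACTIONS = ["deploy", "wait"]
STATES = ["demand_high", "demand_low"]

# Agent's subjective probabilities over states (assumed for illustration).
P = {"demand_high": 0.6, "demand_low": 0.4}

# Utility of each (action, state) outcome (assumed for illustration).
U = {
    ("deploy", "demand_high"): 10.0,
    ("deploy", "demand_low"): -5.0,
    ("wait", "demand_high"): 2.0,
    ("wait", "demand_low"): 1.0,
}

def expected_utility(action):
    """Sum utility over states, weighted by the agent's credences."""
    return sum(P[s] * U[(action, s)] for s in STATES)

# EUT recommends the action with the highest expected utility.
best = max(ACTIONS, key=expected_utility)
print(best, expected_utility(best))  # deploy 4.0
```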
Is Artificial Intelligence a more significant invention than electricity? Will it result in explosive economic growth and unimaginable wealth for all, or will it cause the extinction of all humans? Artificial Intelligence: Economic Perspectives and Models provides a sober analysis of these questions from an economics perspective. It argues that to better understand the impact of AI on economic outcomes, we must fundamentally change the way we think about AI in relation to models of economic growth. It describes the progress that has been made so far and offers two ways in which current modelling can be improved: first, by incorporating the nature of AI as providing abilities that complement and/or substitute for labour, and second, by considering demand-side constraints. Outlining the decision-theoretic basis of both AI and economics, the book shows how this basis, together with the incorporation of AI into economic models, can provide useful tools for safe, human-centered AI.
In this paper, we show how to represent a non-Archimedean preference over a set of random quantities by a nonstandard utility function. Non-Archimedean preferences arise when some random quantities have no fair price. Two common situations give rise to non-Archimedean preferences: random quantities whose values must be greater than every real number, and strict preferences between random quantities that are deemed closer in value than every positive real number. We also show how to extend a non-Archimedean preference to a larger set of random quantities. The random quantities that we consider include real-valued random variables, horse lotteries, and acts in the theory of Savage. In addition, we weaken the state-independent utility assumptions made by the existing theories and give conditions under which the utility that represents preference is the expected value of a state-dependent utility with respect to a probability over states.
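An illustrative example, not the paper's own construction, of how a nonstandard utility can represent a non-Archimedean preference: the lexicographic order on pairs of reals famously has no real-valued utility representation, but admits one once an infinitesimal is available.

```latex
% Illustrative example: the lexicographic preference on pairs of reals,
\[
  (x_1, y_1) \succ (x_2, y_2)
  \iff
  x_1 > x_2 \ \text{ or }\ (x_1 = x_2 \ \text{and}\ y_1 > y_2),
\]
% is represented by the nonstandard utility
\[
  U(x, y) = x + \varepsilon\, y,
  \qquad 0 < \varepsilon < r \ \text{for every real } r > 0,
\]
% where the infinitesimal weight on $y$ can never outweigh a real
% difference in $x$, exactly as the lexicographic order requires.
```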
In Chapter 10 we discuss feedback and control as an advanced topic. We introduce how measurement results can be used to control a quantum system by applying conditional unitary operators. A number of experimental systems are discussed, including active qubit phase stabilization, adaptive phase measurements, and continuous quantum error correction.
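As a sketch of the conditioning step described (standard notation, assumed rather than quoted from the chapter): a measurement outcome m with measurement operator M_m triggers a feedback unitary U_m, so the state ρ, averaged over outcomes, evolves as

```latex
\[
  \rho \;\longmapsto\; \sum_{m} U_m\, M_m\, \rho\, M_m^{\dagger}\, U_m^{\dagger},
  \qquad \sum_{m} M_m^{\dagger} M_m = I .
\]
```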
In this article, we re-examine Pascal's Mugging, and argue that it is a deeper problem than the St. Petersburg paradox. We offer a way out that is consistent with classical decision theory. Specifically, we propose a “many muggers” response analogous to the “many gods” objection to Pascal's Wager. When a very tiny probability of a great reward becomes a salient outcome of a choice, such as in the offer of the mugger, it can be discounted on the condition that there are many other symmetric, non-salient rewards that one may receive if one chooses otherwise.
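A toy calculation, with numbers of our own choosing, illustrates the symmetry at the heart of the “many muggers” response: if refusing the mugger carries its own tiny-probability prospect of a comparably great reward, the salient term cancels and the mugger's offer no longer dominates.

```latex
% Toy numbers for illustration only: paying costs c and allegedly yields a
% huge reward R with tiny probability p; a symmetric non-salient prospect
% of R attaches to refusing.
\[
  \mathrm{EU}(\text{pay}) = pR - c,
  \qquad
  \mathrm{EU}(\text{refuse}) = pR,
  \qquad
  \mathrm{EU}(\text{refuse}) - \mathrm{EU}(\text{pay}) = c > 0 .
\]
```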
Risk is inherent to many, if not all, transformative decisions. The risks of regret, of turning into a person you presently consider morally objectionable, and of value change are all risks of choosing to transform. This aspect of transformative decision-making has thus far been ignored, but it carries important consequences for those wishing to defend decision theory from the challenge posed by transformative decision-making. I contend that a problem lies in a common method used to cardinalise utilities – the von Neumann and Morgenstern (vNM) method – which measures an agent's utility function over sure outcomes. I argue that the risks involved in transformative experiences are constitutively valuable, and hence their value cannot be accurately measured by the vNM method. In Section 1, I outline what transformative experiences are and the problem they pose to decision theory. In Section 2, I outline Pettigrew's (2019, Choosing for Changing Selves) decision-theoretic response, and in Section 3, I present the case for thinking that risks can carry value. In Section 4, I argue that at least some transformative experiences involve constitutive risk. In Section 5, I argue that this causes a problem for decision-theoretic responses within the vNM framework.
This paper presents some impossibility results for certain views about what you should do when you are uncertain about which moral theory is true. I show that under reasonable and extremely minimal ways of defining what a moral theory is, it follows that the concept of expected moral choiceworthiness is undefined, and more generally that any theory of decision-making under moral uncertainty must generate pathological results.
Edited by Jonathan Fuqua, Conception Seminary College, Missouri; John Greco, Georgetown University, Washington DC; and Tyler McNabb, Saint Francis University, Pennsylvania
Traditional theistic arguments conclude that God exists. Pragmatic theistic arguments, by contrast, conclude that you ought to believe in God. The two most famous pragmatic theistic arguments are put forth by Blaise Pascal (1662) and William James (1896). Pragmatic arguments for theism can be summarized as follows: believing in God has significant benefits, and these benefits are not available for the unbeliever. Thus, you should believe in, or “wager on,” God. This chapter distinguishes between various kinds of theistic wagers, including finite vs. infinite wagers, premortem vs. postmortem wagers, and doxastic vs. acceptance wagers. Then, it turns to the epistemic–pragmatic distinction and discusses the nuances of James’ argument, and how views like epistemic permissivism and epistemic consequentialism provide unique “hybrid” wagers. Finally, it covers outstanding objections and responses.
The nature of evidence is a problem for epistemology, but I argue that this problem intersects with normative decision theory in a way that I think is underappreciated. Among some decision theorists, there is a presumption that one can always ignore the nature of evidence while theorizing about principles of rational choice. In slogan form: decision theory only cares about the credences agents actually have, not the credences they should have. I argue against this presumption. In particular, I argue that if evidence can be unspecific, then an alleged counterexample to causal decision theory fails. This implies that what theory of decision we think is true may depend on our opinions regarding the nature of evidence. Even when we are theorizing about subjective theories of rationality, we cannot set aside questions about the objective nature of evidence.
A wise decider D uses the contents of his mind fully, accurately, and efficiently. D's ideal decisions, i.e., those that best serve his interests, would be embedded in a comprehensive set of totally coherent judgments lodged in his mind. They would conform to the norms of statistical decision theory, which extracts quantitative judgments of fact and value from D's mind contents and checks them for coherence. However, the most practical way for D to approximate his ideal may not be with models that embody those norms, i.e., with applied decision theory (ADT). In practice, ADT can represent only some of D's judgments, and those imperfectly. Quite different decision aids, including intuition, pattern recognition, and cognitive vigilance (especially in combination), typically outperform feasible ADT models, with some notable exceptions. However, decision theory training benefits D's informal decisions. ADT, both formal and informal, should become increasingly useful and widespread as technical, cultural, and institutional impediments are overcome.
Encounters with art can change us in ways both big and small. This paper focuses on one of the more dramatic cases. I argue that works of art can inspire what L. A. Paul calls transformations, classic examples of which include getting married, having a child, and undergoing a religious conversion. Two features distinguish transformations from other changes we undergo. First, they involve the discovery of something new. Second, they result in a change in our core preferences. These two features make transformations hard to motivate. I argue, however, that art can help on both fronts. First, works of art can guide our attempt to imagine unfamiliar ways of living. Second, they can attract us to values we currently reject. I conclude by observing that what makes art powerful also makes it dangerous. Transformations are not always for the good, and art's ability to inspire them can be put to immoral ends.
Frames and framing make one dimension of a decision problem particularly salient. In the simplest case, frames prime responses (as in, e.g., the Asian disease paradigm, where the gain frame primes risk-aversion and the loss frame primes risk-seeking). But in more complicated situations frames can function reflectively, by making salient particular reason-giving aspects of a thing, outcome, or action. For Shakespeare's Macbeth, for example, his feudal commitments are salient in one frame, while downplayed in another in favor of his personal ambition. The role of frames in reasoning can give rise to rational framing effects. Macbeth can prefer fulfilling his feudal duty to murdering the king, while also preferring bravely taking the throne to fulfilling his feudal duty, knowing full well that bravely taking the throne just is murdering the king. Such patterns of quasi-cyclical preferences can be correct and appropriate from the normative perspective of how one ought to reason. The paper explores three less dramatic types of rational framing effects: (1) Consciously framing and reframing long-term goals and short-term temptations can be important tools for self-control. (2) In the prototypical social interactions modeled by game theory, allowing for rational framing effects solves longstanding problems, such as the equilibrium selection problem and explaining the appeal of non-equilibrium solutions (e.g., the cooperative solution in the Prisoner's Dilemma). (3) Processes for resolving interpersonal conflicts and breaking discursive deadlock, because they involve internalizing multiple and incompatible ways of framing actions and outcomes, in effect create rational framing effects.
Two of the most poignant decisions in pediatrics concern disagreements between physicians and families over imperiled newborns. When can the family demand more life-sustaining treatment (LST) than physicians want to provide? When can it properly ask for less? The author looks at these questions from the point of view of decision theory, and first argues that insofar as the family acts in the child’s best interest, its choices cannot be constrained, and that the maximax and minimax strategies are equally in the child’s best interest. He then proposes a guideline according to which the family can demand LST if it is physiologically possible to preserve a life the child can be expected to welcome, and refuse such treatment if it causes suffering that is “more than can be borne” even if an uncompromised life is expected to emerge.
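A small sketch may help fix ideas about the two strategies the author compares; the acts, states, and utilities below are hypothetical placeholders, not drawn from the paper. Maximax chooses the act with the best best-case outcome; minimax over losses, equivalently maximin over utilities, chooses the act with the best worst-case outcome.

```python
# Illustrative sketch with made-up utilities: utilities[act][state] for two
# treatment choices under outcome uncertainty. All numbers are hypothetical.
utilities = {
    "continue_LST": {"recovers": 9, "suffers": -8},
    "withdraw_LST": {"recovers": 0, "suffers": 0},
}

def maximax(table):
    """Choose the act whose best-case outcome is highest."""
    return max(table, key=lambda act: max(table[act].values()))

def maximin(table):
    """Choose the act whose worst-case outcome is highest (minimax over losses)."""
    return max(table, key=lambda act: min(table[act].values()))

print(maximax(utilities))  # continue_LST (best case: recovery, 9)
print(maximin(utilities))  # withdraw_LST (worst case: 0 vs -8)
```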
Evidential Decision Theory is a radical theory of rational decision-making. It recommends that instead of thinking about what your decisions *cause*, you should think about what they *reveal*. This Element explains in simple terms why thinking in this way makes a big difference, and argues that doing so makes for *better* decisions. An appendix gives an intuitive explanation of the measure-theoretic foundations of Evidential Decision Theory.
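In the standard textbook formulation, which this contrast presupposes, the two theories differ only in which probabilities weight the outcomes: Evidential Decision Theory uses the probability of a state conditional on the act (what the act reveals), while Causal Decision Theory, in the simple case where states are causally independent of acts, uses unconditional probabilities.

```latex
\[
  V_{\mathrm{EDT}}(A) = \sum_{s} P(s \mid A)\, u(A, s)
  \qquad\text{vs.}\qquad
  V_{\mathrm{CDT}}(A) = \sum_{s} P(s)\, u(A, s).
\]
```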
An agent often does not have precise probabilities or utilities to guide resolution of a decision problem. I advance a principle of rationality for making decisions in such cases. To begin, I represent the doxastic and conative state of an agent with a set of pairs of a probability assignment and a utility assignment. Then I support a decision principle that allows any act that maximizes expected utility according to some pair of assignments in the set. Assuming that computation of an option's expected utility uses comprehensive possible outcomes that include the option's risk, no consideration supports a stricter requirement.
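A minimal sketch of the decision principle described, with hypothetical options, states, and numbers of our own: an option is admissible just in case it maximizes expected utility according to at least one probability–utility pair in the set representing the agent.

```python
# Minimal sketch of the permissive rule: an option is admissible iff it
# maximizes expected utility on SOME (probability, utility) pair in the set
# representing the agent. All inputs are hypothetical.

OPTIONS = ["a", "b"]
STATES = ["s1", "s2"]

# The agent's imprecise state: a set of (probability, utility) pairs.
pairs = [
    ({"s1": 0.7, "s2": 0.3},
     {("a", "s1"): 1, ("a", "s2"): 0, ("b", "s1"): 0, ("b", "s2"): 2}),
    ({"s1": 0.2, "s2": 0.8},
     {("a", "s1"): 1, ("a", "s2"): 0, ("b", "s1"): 0, ("b", "s2"): 2}),
]

def eu(option, p, u):
    """Expected utility of an option under one (p, u) pair."""
    return sum(p[s] * u[(option, s)] for s in STATES)

def admissible(options, prob_util_pairs):
    """Options that maximize EU under at least one (p, u) pair."""
    winners = set()
    for p, u in prob_util_pairs:
        best = max(eu(o, p, u) for o in options)
        winners |= {o for o in options if eu(o, p, u) == best}
    return winners

print(admissible(OPTIONS, pairs))  # both options are admissible here
```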
Risk-weighted expected utility theory (REU theory for short) permits preferences which violate the Sure-Thing Principle (STP for short). But preferences that violate the STP can lead to bad decisions in sequential choice problems. In particular, they can lead decision-makers to adopt a strategy that is dominated – i.e. a strategy such that some available alternative leads to a better outcome in every possible state of the world.
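For reference, risk-weighted expected utility in its standard form (our statement of the familiar formula, for a gamble yielding outcome x_i with probability p_i, outcomes ordered from worst to best; r is the agent's risk function):

```latex
% Outcomes ordered u(x_1) \le \dots \le u(x_n); r : [0,1] \to [0,1]
% increasing, with r(0) = 0 and r(1) = 1. Setting r(q) = q recovers
% ordinary expected utility.
\[
  \mathrm{REU} = u(x_1) + \sum_{i=2}^{n}
  r\!\Big(\sum_{j=i}^{n} p_j\Big)\,\big(u(x_i) - u(x_{i-1})\big).
\]
```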
There are decision problems where the preferences that seem rational to many people cannot be accommodated within orthodox decision theory in the natural way. In response, a number of alternatives to the orthodoxy have been proposed. In this paper, I offer an argument against those alternatives and in favour of the orthodoxy. I focus on preferences that seem to encode sensitivity to risk, and on one alternative to the orthodoxy in particular: Lara Buchak's risk-weighted expected utility theory. I show that the orthodoxy can be made to accommodate all of the preferences that Buchak's theory can accommodate.
Many moral theories are committed to the idea that some kinds of moral considerations should be respected, whatever the cost to ‘lesser’ types of considerations. A person's life, for instance, should not be sacrificed for the trivial pleasures of others, no matter how many would benefit. However, according to the decision-theoretic critique of lexical priority theories, accepting lexical priorities inevitably leads us to make unacceptable decisions in risky situations. It seems that to operate in a risky world, we must reject lexical priorities altogether. This paper argues that lexical priority theories can, in fact, offer satisfactory guidance in risky situations. It does so by equipping lexical priority theories with overlooked resources from decision theory.
There is a rich tradition within game theory, decision theory, economics, and philosophy correlating practical rationality with impartiality, and spatial and temporal neutrality. I argue that in some cases we should give priority to people over both times and places, and to times over places. I also show how three plausible dominance principles regarding people, places, and times conflict, so that we cannot accept all three. However, I argue that there are some cases where we should give priority to times over people, suggesting that there is impersonal value to the distribution of high quality life over different times.
I have claimed that risk-weighted expected utility (REU) maximizers are rational, and that their preferences cannot be captured by expected utility (EU) theory. Richard Pettigrew and Rachael Briggs have recently challenged these claims. Both authors argue that only EU-maximizers are rational. In addition, Pettigrew argues that the preferences of REU-maximizers can indeed be captured by EU theory, and Briggs argues that REU-maximizers lose a valuable tool for simplifying their decision problems. I hold that their arguments do not succeed and that my original claims still stand. However, their arguments do highlight some costs of REU theory.