This Element covers the interaction of two research areas: linguistic semantics and deep learning. It focuses on three phenomena central to natural language interpretation: reasoning and inference; compositionality; extralinguistic grounding. Representation of these phenomena in recent neural models is discussed, along with the quality of these representations and ways to evaluate them (datasets, tests, measures). The Element closes with suggestions on possible deeper interactions between theoretical semantics and language technology based on deep learning models.
The hospital industry in many countries is characterized by right-skewed distributions of hospital sizes and varied ownership types, raising numerous questions about the performance of hospitals of different sizes and ownership types. In an era of aging populations and increasing healthcare costs, evaluating and understanding the consumption of resources to produce healthcare outcomes is increasingly important for policy discussions. This chapter discusses recent developments in the statistical and econometric literature on data envelopment analysis (DEA) and free disposal hull (FDH) estimators that can be used to examine hospitals’ technical efficiency and productivity. Use of these new results and methods is illustrated by revisiting the Burgess and Wilson hospital studies of the 1990s to estimate and make inferences about the technical efficiency of US hospitals, draw inferences about returns to scale and other model features, and test for differences among US hospitals across ownership types and size groups within a rigorous statistical paradigm that was unavailable to researchers until recently.
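To make the estimator concrete, here is a minimal Python sketch of the input-oriented FDH efficiency score; the function name and toy data are illustrative assumptions, and the chapter’s statistical results (e.g., inference and testing procedures) are not reproduced.

```python
import numpy as np

def fdh_input_efficiency(X, Y):
    """Input-oriented FDH efficiency scores.

    X: (n, p) inputs (e.g., beds, staff); Y: (n, q) outputs
    (e.g., inpatient days). Scores lie in (0, 1]; a score of 1
    means the unit sits on the free disposal hull frontier.
    """
    n = X.shape[0]
    scores = np.empty(n)
    for i in range(n):
        # Units whose outputs weakly dominate unit i's outputs.
        dom = np.all(Y >= Y[i], axis=1)
        # Smallest proportional input contraction that leaves unit i
        # dominated by some observed unit.
        scores[i] = np.max(X[dom] / X[i], axis=1).min()
    return scores

# Toy data: five "hospitals", two inputs, one output.
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(5, 2))
Y = rng.uniform(1, 10, size=(5, 1))
print(fdh_input_efficiency(X, Y))
```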
We study the problem of identifying a small number $k \sim n^\theta$, $0 < \theta < 1$, of infected individuals within a large population of size $n$ by testing groups of individuals simultaneously. All tests are conducted concurrently. The goal is to minimise the total number of tests required. In this paper, we make the (realistic) assumption that tests are noisy: a group that contains an infected individual may return a negative test result, and a group that contains no infected individual may return a positive test result, each with a certain probability. The noise need not be symmetric. We develop an algorithm called SPARC that correctly identifies the set of infected individuals up to $o(k)$ errors with high probability with the asymptotically minimum number of tests. Additionally, we develop an algorithm called SPEX that exactly identifies the set of infected individuals w.h.p. with a number of tests that matches the information-theoretic lower bound for the constant column design, a powerful and well-studied test design.
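The following Python sketch illustrates the noisy, non-adaptive testing model described above under a Bernoulli test design; the parameter values, the participation probability, and the naive decoder at the end are illustrative assumptions, and SPARC and SPEX themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

n, theta = 2_000, 0.5
k = int(n ** theta)        # number of infected individuals, k ~ n^theta
m = 600                    # number of tests, all conducted concurrently

# Asymmetric noise: p01 = P(positive | no infected member),
#                   p10 = P(negative | some infected member).
p01, p10 = 0.05, 0.10

infected = np.zeros(n, dtype=bool)
infected[rng.choice(n, size=k, replace=False)] = True

# Bernoulli design: each individual joins each test independently
# (illustrative participation probability; not the constant column design).
design = rng.random((m, n)) < np.log(2) / k
truth = (design & infected).any(axis=1)   # noiseless outcome of each test

# Flip each outcome with the appropriate noise probability.
flip = np.where(truth, rng.random(m) < p10, rng.random(m) < p01)
results = truth ^ flip

# A naive decoder (not SPARC): rank individuals by their positive-test rate.
appear = design.sum(axis=0)
positives = (design & results[:, None]).sum(axis=0)
guess = np.argsort(positives / np.maximum(appear, 1))[-k:]
print("fraction of infected recovered:",
      np.isin(guess, np.flatnonzero(infected)).mean())
```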
Students are introduced to the logic, foundation, and basics of statistical inference. The need for samples is discussed first, and then how samples can be used to make inferences about the larger population. The normal distribution is then introduced, along with Z-scores, to illustrate basic probability and the logic of statistical significance.
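As a toy illustration of the chapter’s Z-score logic, the short Python snippet below standardizes a sample mean and converts it to a two-sided p-value; the numbers are invented for the example.

```python
from math import erf, sqrt

def z_score(x, mu, sigma):
    """Standardize an observation against a mean and standard deviation."""
    return (x - mu) / sigma

def normal_cdf(z):
    """P(Z <= z) for a standard normal variable."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical example: a sample mean of 105 from n = 25 observations,
# drawn from a population with mu = 100 and sigma = 15. The sampling
# distribution of the mean has standard error sigma / sqrt(n) = 3.
z = z_score(105, 100, 15 / sqrt(25))
p_two_sided = 2 * (1 - normal_cdf(abs(z)))
print(round(z, 2), round(p_two_sided, 3))   # 1.67 0.096
```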
Experience is the cornerstone of Epicurean philosophy and nowhere is this more apparent than in the Epicurean views about the nature, formation, and application of concepts. ‘The Epicureans on Preconceptions and Other Concepts’ by Gábor Betegh and Voula Tsouna aims to piece together the approach to concepts suggested by Epicurus and his early associates, trace its historical development over a period of approximately five centuries, compare it with competing views, and highlight the philosophical value of the Epicurean account on that subject. It is not clear whether, properly speaking, the Epicureans can be claimed to have a theory about concepts. However, an in-depth discussion of the relevant questions will show that the Epicureans advance a coherent if elliptical explanation of the nature and formation of concepts and of their epistemological and ethical role. Also, the chapter establishes that, although the core of the Epicurean account remains fundamentally unaffected, there are shifts of emphasis and new developments marking the passage from one generation of Epicureans to another and from one era to the next.
Concepts are basic features of rationality. Debates surrounding them have been central to the study of philosophy in the medieval and modern periods, as well as in the analytical and Continental traditions. This book studies ancient Greek approaches to the various notions of concept, exploring the early history of conceptual theory and its associated philosophical debates from the end of the archaic age to the end of antiquity. When and how did the notion of concept emerge and evolve, what questions were raised by ancient philosophers in the Greco-Roman tradition about concepts, and what were the theoretical presuppositions that made the emergence of a notion of concept possible? The volume furthers our own contemporary understanding of the nature of concepts, concept formation, and concept use.
Collecting network data directly from network members can be challenging. One alternative involves inferring a network from observed groups, for example, inferring a network of scientific collaboration from researchers’ observed paper authorships. In this paper, I explore when an unobserved undirected network of interest can accurately be inferred from observed groups. The analysis uses simulations to experimentally manipulate the structure of the unobserved network to be inferred, the number of groups observed, the extent to which the observed groups correspond to cliques in the unobserved network, and the method used to draw inferences. I find that when a small number of groups are observed, an unobserved network can be accurately inferred using a simple unweighted two-mode projection, provided that each group’s membership closely corresponds to a clique in the unobserved network. In contrast, when a large number of groups are observed, an unobserved network can be accurately inferred using a statistical backbone extraction model, even if the groups’ memberships are mostly random. These findings offer guidance for researchers seeking to indirectly measure a network of interest using observations of groups.
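For concreteness, here is a minimal Python sketch of the simple unweighted two-mode projection the paper evaluates; the membership matrix is toy data, and the statistical backbone extraction models compared in the paper are not implemented here.

```python
import numpy as np

# Two-mode (actor x group) membership matrix: rows are 5 researchers,
# columns are 3 observed papers (toy data).
B = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
])

# Weighted one-mode projection: entry (i, j) counts shared groups.
W = B @ B.T
np.fill_diagonal(W, 0)

# Unweighted projection: infer a tie iff at least one group is shared.
A = (W > 0).astype(int)
print(A)
```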
The Victorian era is often seen as solidifying modern law’s idealization of number, rule, and definition. Yet Wilkie Collins thwarts the trend toward “trial by mathematics” and “actuarial justice” by adopting an antinumerical example as the basis for a literary experiment. The bizarre third verdict (“not proven”) of Scots law, which falls between “guilty” and “not guilty” and acts as an acquittal that nonetheless imputes a lack of evidence for conviction, structures his detective novel The Law and the Lady (1875). Revealing Collins’s sources in trial reports and legal treatises, this chapter shows how uncertainty inflects judicial reasoning and models of reading. The verdict of “not proven” undercuts the truth claims of binary judgment at law, subverts normative categories, and allows for more flexible visions of social judgment. Collins makes visible a counter-trend to certainty and closure in legal institutions and Victorian novels about the law. The chapter briefly treats Anthony Trollope’s Orley Farm (1862) and Mary Braddon’s An Open Verdict (1878), which also promote types of inference and models of critical judgment that value the tentative, hesitant, and processual, evading the calculative pressures of nineteenth-century law and life.
In his treatment of the Wittgensteinian paradox about rule-following, Saul Kripke represents the non-reductionist approach, according to which meaning something by an expression is a sui generis state that cannot be elucidated in more basic terms, as brushing philosophical questions under the rug. This representation of non-reductionism aligns with the conception of some of its proponents. Meaning is viewed by these philosophers as an explanatory primitive that provides the basic materials for philosophical inquiry, and whose nature cannot serve as an object for that inquiry. There is, however, an alternative way of conceiving of non-reductionism, which makes it possible to tackle philosophical questions about the nature of meaning head-on, and thus to respond to Kripke’s challenge in an illuminating manner.
Humans produce utterances intentionally. Visible bodily action, or gesture, has long been acknowledged as part of the broader activity of speaking, but it is only recently that the role of gesture during utterance production and comprehension has been the focus of investigation. If we are to understand the role of gesture in communication, we must answer the following questions: Do gestures communicate? Do people produce gestures with an intention to communicate? This Element argues that the answer to both these questions is yes. Gestures are (or can be) communicative in all the ways language is. This Element arrives at this conclusion on the basis that communication involves prediction. Communicators predict their own behaviours and those of others, and such predictions guide the production and comprehension of utterances. This Element uses evidence from experimental and neuroscientific studies to argue that people produce gestures because doing so improves such predictions.
If some of our knowledge cannot be articulated, how does it make itself manifest? It will not surprise anyone who has followed the argument of this book up to now that there are things that we can do with knowledge besides talking about it. Millikan, as we saw, used his knowledge of experimentation and of professional discourse to guide his exemplary investigations of the charge of the electron. Neither was something he made explicit; I doubt that he (or anyone) could have. No practitioner who looked at Millikan’s work found any basis for these accusations, because their training endowed them with a knowledge only available to practitioners. They all made effective use of this knowledge, despite not being able to articulate its content. That kind of knowledge manifests itself not in the form of beliefs, but rather in the scholar’s sense of how things seem.
When using dyadic data (i.e., data indexed by pairs of units), researchers typically assume a linear model, estimate it using Ordinary Least Squares, and conduct inference using “dyadic-robust” variance estimators. These estimators assume that dyads are uncorrelated if they do not share a common unit (e.g., if the same individual is not present in both pairs of data). We show that this assumption does not hold in many empirical applications because indirect links may exist due to network connections, generating correlated outcomes. Hence, “dyadic-robust” estimators can be biased in such situations. We develop a consistent variance estimator for such contexts by leveraging results in network statistics. Our estimator has good finite-sample properties in simulations, while allowing for decay in spillover effects. We illustrate our message with an application to politicians’ voting behavior when they are seated next to each other in the European Parliament.
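For reference, below is a minimal Python sketch of the conventional “dyadic-robust” variance estimator described above; the function name and toy data are illustrative, and the paper’s network-based correction is not reproduced.

```python
import numpy as np

def dyadic_robust_vcov(X, resid, pairs):
    """Conventional "dyadic-robust" OLS variance estimator.

    Sums score cross-products over all pairs of dyads that share a
    unit, i.e. it assumes dyads with no common member are uncorrelated
    -- the assumption the paper shows can fail under network spillovers.
    X: (D, p) regressors; resid: (D,) OLS residuals; pairs: (D, 2) unit ids.
    """
    D, p = X.shape
    scores = X * resid[:, None]
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((p, p))
    for d in range(D):
        share = ((pairs == pairs[d, 0]) | (pairs == pairs[d, 1])).any(axis=1)
        meat += np.outer(scores[d], scores[share].sum(axis=0))
    return bread @ meat @ bread

# Toy usage: all dyads among 10 units, intercept plus one regressor.
rng = np.random.default_rng(0)
pairs = np.array([(i, j) for i in range(10) for j in range(i + 1, 10)])
D = len(pairs)
X = np.column_stack([np.ones(D), rng.normal(size=D)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=D)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
V = dyadic_robust_vcov(X, y - X @ beta, pairs)
print(np.sqrt(np.diag(V)))   # "dyadic-robust" standard errors
```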
Chapter 3 focuses on lexical semantics–pragmatics. Drawing on the views adopted in Construction Grammar and Relevance Theory, it provides an in-depth analysis aimed at exploring the nature of conceptual content and its use in context. It is argued that lexical concepts are best characterized by means of rich networks of encyclopedic knowledge, an approach that enables Relevance Theory to resolve a number of conflicting assumptions (including the presumed paradox discussed in Leclercq, 2022). At the same time, the case is made that this knowledge constitutes an intrinsically context-sensitive semantic potential that serves as the foundation of an inferential process guided by strong pragmatic principles. This process is addressed in terms of lexically regulated saturation, which forms the cornerstone of the integrated model outlined in this book.
We argue that stereotypes associated with concepts like he-said–she-said, conspiracy theory, sexual harassment, and those expressed by paradigmatic slurs provide “normative inference tickets”: conceptual permissions to automatic, largely unreflective normative conclusions. These “mental shortcuts” are underwritten by associated stereotypes. Because stereotypes admit of exceptions, normative inference tickets are highly flexible and productive, but also liable to create serious epistemic and moral harms. Epistemically, many are unreliable, yielding false beliefs which resist counterexample; morally, many perpetuate bigotry and oppression. Still, some normative inference tickets, like some activated by sexual harassment, constitute genuine moral and hermeneutical advances. For example, our framework helps explain Miranda Fricker's notion of “hermeneutical lacunae”: what early victims of “sexual harassment” – as well as their harassers – lacked before the term was coined was a communal normative inference ticket – one that could take us, collectively, from “this is happening” to “this is wrong.”
Evidentialism as an account of theoretical rationality is a popular and well-defended position. However, recently, it's been argued that misleading higher-order evidence (HOE) – that is, evidence about one's evidence or about one's cognitive functioning – poses a problem for evidentialism. Roughly, the problem is that, in certain cases of misleading HOE, it appears evidentialism entails that it is rational to adopt a belief in an akratic conjunction – a proposition of the form “p, but my evidence doesn't support p” – despite it being the case that believing an akratic conjunction appears to be clearly irrational. In this paper, I defuse the problem for evidentialism using the distinction between propositional and doxastic rationality. I argue that, although it can be propositionally rational to believe an akratic conjunction (according to evidentialism), one cannot inferentially base an akratic belief in one's evidence, and, thus, one cannot doxastically rationally possess an akratic belief. In addition, I address the worry that my solution to the puzzle commits evidentialists to the possibility of epistemic circumstances in which a proposition, p, is propositionally rational to believe (namely, an akratic conjunction), yet one cannot, in principle, (doxastically) rationally believe p. As I demonstrate, cases of misleading HOE are not the only types of cases that force evidentialists to accept that propositional rationality does not entail the possibility of doxastic rationality. There are no new problems raised by misleading HOE that weren't already present in cases involving purely first-order evidence.
This chapter details the practical, theoretical, and philosophical aspects of experimental science. It discusses how one chooses a project, performs experiments, interprets the resulting data, makes inferences, and develops and tests theories. It then asks the question, "are our theories accurate representations of the natural world, that is, do they reflect reality?" Surprisingly, this is not an easy question to answer. Scientists assume so, but are they warranted in this assumption? Realists say "yes," but anti-realists argue that realism is simply a mental representation of the world as we perceive it, that is, metaphysical in nature. Regardless of one's sense of reality, the fact remains that science has been and continues to be of tremendous practical value. It would have to be a miracle if our knowledge and manipulation of nature were not real. Even if they are, how do we know they are true in an absolute sense, not just relative to our own experience? This is a thorny philosophical question, the answer to which depends on the context in which it is asked. The take-home message for the practicing scientist is "never assume your results are true."
This chapter outlines a theory of moral perception, describes a structural analogy between perception and action, and indicates how perception can provide an objective basis for moral knowledge. It is shown to have a basis in the kinds of grounds that underlie the moral properties to which moral perception responds, such as the violence of a face-slapping. With this outline of a theory of moral perception in view, the chapter describes the presentational phenomenal character of moral perception. Prominent in this presentationality is the phenomenological integration between our moral sensibility and our non-moral perception of the various kinds of natural properties that ground moral properties. Moral perception is possible without moral judgment but commonly yields it. It is also possible without moral emotion but may arise from it in some cases and evoke it in others. Many perceptually grounded judgments are justified; many also express empirical moral knowledge.
Inductive reasoning involves generalizing from samples of evidence to novel cases. Previous work in this field has focused on how sample contents guide the inductive process. This chapter reviews a more recent and complementary line of research that emphasizes the role of the sampling process in induction. In line with a Bayesian model of induction, beliefs about how a sample was generated are shown to have a profound effect on the inferences that people draw. This is first illustrated in research on beliefs about sampling intentions: was the sample generated to illustrate a concept or was it generated randomly? A related body of work examines the effects of sampling frames: beliefs about selection mechanisms that cause some instances to appear in a sample and others to be excluded. The chapter describes key empirical findings from these research programs and highlights emerging issues such as the effect of timing of information about sample generation (i.e., whether it comes before or after the observed sample) and individual differences in inductive reasoning. The concluding section examines how this work can be extended to more complex reasoning problems where observed data are subject to selection biases.
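The contrast between sampling assumptions can be made concrete with a small Bayesian calculation. The Python sketch below is a toy example in the spirit of this literature (the hypotheses and numbers are invented): “strong sampling” (examples drawn from the concept itself, each with likelihood 1/|h|) favours a smaller hypothesis, while “weak sampling” (examples generated independently of the concept) leaves the prior unchanged.

```python
# Toy induction problem: which numbers fall under a concept,
# given the positive examples {2, 4, 8}?
hypotheses = {
    "powers of two": {1, 2, 4, 8, 16, 32, 64},
    "even numbers": set(range(2, 101, 2)),
}
sample = [2, 4, 8]
prior = {h: 0.5 for h in hypotheses}

def posterior(strong):
    post = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in sample):
            # Strong sampling: each example drawn from the concept,
            # so likelihood 1/|h| per example (the "size principle").
            # Weak sampling: examples arise independently of the concept,
            # so the likelihood is constant across consistent hypotheses.
            like = (1 / len(extension)) ** len(sample) if strong else 1.0
        else:
            like = 0.0
        post[h] = prior[h] * like
    Z = sum(post.values())
    return {h: p / Z for h, p in post.items()}

print(posterior(strong=True))    # strongly favours "powers of two"
print(posterior(strong=False))   # remains at the 50/50 prior
```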
Before venturing into the study of choreographies, we introduce the formalism of inference systems. Inference systems are widely used in the fields of formal logic and programming languages, and they were later applied to the theory of choreographies as well.
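As a standard toy example (not drawn from this chapter), an inference system for the judgement that a natural number is even consists of an axiom and one rule; derivations are finite trees built by stacking rule applications:

```latex
% An axiom (no premises) and a single inference rule:
\[
  \frac{}{\mathsf{even}(0)}\;(\textsc{Zero})
  \qquad
  \frac{\mathsf{even}(n)}{\mathsf{even}(n+2)}\;(\textsc{Step})
\]
% A derivation of even(4): two applications of Step on top of Zero.
\[
  \dfrac{\dfrac{\dfrac{}{\mathsf{even}(0)}}{\mathsf{even}(2)}}{\mathsf{even}(4)}
\]
```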
Many of the problems that human minds need to solve – including learning concepts, causal relationships, and languages – require making informed inferences from limited data. Bayesian models of cognition consider how an ideal agent should solve these problems, drawing on ideas from probability theory, statistics, machine learning, and artificial intelligence. The resulting models can then be used to understand human behavior, identifying in formal terms the knowledge that human minds draw on when solving these problems and identifying potential mechanisms by which their solutions might be implemented. This chapter provides an introduction to Bayesian models of cognition, starting with the basic principles of probability theory and then considering more advanced topics such as graphical models, causal learning, hierarchical Bayesian models, and Markov chain Monte Carlo. The chapter ends with a brief review of recent theoretical developments.
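As a minimal illustration of the basic principles, the Python sketch below applies Bayes' rule on a discrete grid of hypotheses about a coin's bias; the data and grid are invented for the example, and the chapter's more advanced topics (graphical models, hierarchical models, MCMC) are not shown.

```python
import numpy as np

# Grid approximation: candidate values for P(heads) under a uniform prior.
grid = np.linspace(0, 1, 101)
prior = np.ones_like(grid) / grid.size
heads, tails = 7, 3

# Likelihood of the observed sequence under each hypothesis.
likelihood = grid**heads * (1 - grid)**tails

# Bayes' rule: posterior proportional to prior times likelihood.
unnorm = prior * likelihood
posterior = unnorm / unnorm.sum()

print(grid[posterior.argmax()])    # MAP estimate: 0.7
print((grid * posterior).sum())    # posterior mean, approx. 0.667
```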