This chapter argues that Collingwood’s “logic of question and answer” (LQA) is best understood in the light of contemporary argumentation theory. Even though Collingwood quite often describes LQA in terms of inner thinking and reasoning, as was still usual in his time, his insistence on the normative (“criteriological”) character of LQA, paired with his attack on the pretensions of psychologists to describe logic (as well as other normative endeavours) in a purely empirical manner, makes clear that LQA has the same aspirations as the rising discipline of formal (mathematical) logic. The concise exposition of the form, content, and application of LQA is supported by references to all the relevant passages in Collingwood’s oeuvre and illustrated by means of a concrete example of his way of doing history. Although a recent and still developing discipline, contemporary argumentation theory was born as an attempt to describe and analyze argumentative texts as guided by norms constitutive of our argumentative practices, in a way that lies entirely outside the reach of formal logic. It thus provides a place for LQA that has so far been lacking.
The process of identifying and interpreting norms of customary international law, while appearing to be primarily based on an inductive analysis of state practice and opinio juris, is sometimes a deductive exercise based on logic and reason. Logic permeates every decision in international law: it manifests itself throughout the process and can be identified at every step of identifying, interpreting and applying customary international law. Logical reasoning, however, may take the form of either inductive or deductive inference. This chapter focuses on situations in which the International Court of Justice (ICJ) and the Permanent Court of International Justice (PCIJ) applied a deductive approach, identifying or interpreting norms of customary international law without seeming to consult state practice and opinio juris. Specifically, it considers whether norms that can reasonably be inferred or deduced from existing rules, or that are simply logical for the operation of the international legal system, can be identified as norms of customary international law under a complementary, supplementary or distinctive interpretive approach.
Design rationale is the justification behind a product component, often captured via written reports and oral presentations. Research shows that the structure and information used to communicate and document rationale significantly influence human behavior. To better understand the influence of design rationale on engineering design, we investigate the information engineers and designers include in design rationales in written reports. Eight hundred and forty-six pages of student engineering design reports from 28 teams representing 116 individuals were analyzed using a mixed-methods approach and compared across project types. The rationales from the reports were coded inductively into concepts and later applied to five industry reports consisting of 218 pages. The findings reveal a spectrum of rationales underpinning design decisions. Grounded in the data, the feature, specification and evidence (FSE) framework emerged as a feature-based and low-effort capture approach. We discuss the need to improve design communication in engineering design, through structuring rationales (i.e., using the proposed FSE framework or other representations) and improving technical writing skills. Lastly, by enhancing design rationale communication and documentation practices, significant benefits can be realized for computational support tools such as automatic rationale extraction or generative approaches.
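The abstract does not reproduce the FSE schema itself; as a purely hypothetical illustration, a low-effort capture format along feature/specification/evidence lines might record each design decision as a small structured entry. The field names and example values below are invented, not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class DesignRationale:
    """Hypothetical record for one design decision, loosely following
    the feature/specification/evidence structure described above."""
    feature: str        # the product component or feature being justified
    specification: str  # the requirement or target the feature addresses
    evidence: str       # the data, test, or reference supporting the choice

# Example entry a design team might log alongside a report section (invented values).
rationale = DesignRationale(
    feature="aluminium housing",
    specification="total mass below 250 g",
    evidence="CAD mass estimate: 212 g (see Appendix B)",
)
print(rationale)
```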
When we're inquiring to find out whether p is true, knowing that we'll get better evidence in the future seems like a good reason to suspend judgment about p now. But, as Matt McGrath has recently argued, this natural thought is in deep tension with traditional accounts of justification. On traditional views of justification, which doxastic attitude you are justified in having now depends on your current evidence, not on what you might learn later. McGrath proposes to resolve this tension by distinguishing between different ways of having a neutral attitude. I argue that McGrath's proposal cannot accommodate the full range of cases in which an agnostic attitude is warranted. We can remedy this by pairing his account with my theory of transitional and terminal attitudes, which claims that attitudes are justified in different ways depending on whether they are formed in intermediate stages of deliberation or as conclusions of deliberation. I compare my view with an alternative, more parsimonious one, according to which deliberation itself is a source of new evidence. I argue that this alternative proposal faces a dilemma: either it generates a vicious regress, or it fails to capture the relevant cases.
Our experience of reasoning is replete with conflict: people phenomenologically vacillate between options when confronted with challenging decisions. Existing experimental designs typically measure only a summary of the conflict experienced over the course of a choice, whether for a single choice or aggregated across multiple observers. We propose a new method for measuring vacillations in reasoning during the time-course of individual choices, using them as a fine-grained indicator of cognitive conflict. Our experimental paradigm allows participants to report the alternative they were considering while deliberating. Across three experiments, we demonstrate that our measure correlates with existing summary judgments of conflict and confidence in moral and logical reasoning problems. The pattern of deliberation revealed by these vacillations places new constraints on theoretical models of moral and syllogistic reasoning.
According to Action-First theorists, like Jonathan Dancy, reasons for action explain reasons for intentions. According to Intention-First theorists, like Conor McHugh and Jonathan Way, reasons for intentions explain reasons for action. In this paper, I introduce and defend a version of the Action-First theory called “Instrumentalism.” According to Instrumentalism, just as we can derive, using principles of instrumental transmission, reasons to ψ from reasons to ϕ (provided there’s some relevant instrumental relation between ψ-ing and ϕ-ing), we can derive reasons to intend to ϕ from reasons to ϕ (provided there’s some relevant instrumental relation between intending to ϕ and ϕ-ing). After providing some defense of Instrumentalism, I turn to two recent, important arguments for the Intention-First theory advanced by McHugh and Way, and I argue that neither of them succeeds. I conclude that we should reject the Intention-First theory and that we have grounds for optimism about the Action-First theory.
A peculiar feature of our species is that we settle what to believe, value, and do by reasoning through narratives. A narrative is a diachronic, information-rich story that contains persons, objects, and at least one event. When we reason through narrative, we use narrative to settle what to do, to make predictions, to guide normative expectations, and to ground which reactive attitudes we think are appropriate in a situation. Narratives explain, justify, and provide understanding. Narratives play a ubiquitous role in human reasoning. And yet, narratives do not seem up to the task. Narratives are often unmoored representations (either because they do not purport to refer to the actual world, or because they are grossly oversimplified, or because they are known to be literally false). Against this, I argue that narratives guide our reasoning by shaping our grasp of modal structure: what is possible, probable, plausible, permissible, required, relevant, desirable and good. Narratives are good guides to reasoning when they guide us to accurate judgments about modal space. I call this the modal model of narrative. In this paper, I develop an account of how narratives function in reasoning, as well as an account of when reasoning through narrative counts as good reasoning.
Thinking encompasses a very wide range of phenomena. Chapter 6 first returns to a study focused on the pleasure of thinking itself. Pleasure is then examined in three modes of thinking: sense-making, reasoning, and daydreaming. Second, as acts of thinking are always situated in specific activities and anchored in various domains of experience, the chapter distinguishes various domains of knowledge: all are complex semiotic systems, culturally mediated, which can be more or less culturally shared and formalised. Third, the chapter examines trajectories of thinking across many systems of knowledge, formal or informal; starting with daily modes of thinking and their pleasures, it turns to the pleasures of thinking among professional thinkers before exploring a specific form of sense-making connected to personal experiences. Altogether, this chapter shows that trajectories of thinking are dynamic and that they intermesh elements from a diversity of knowledge systems, moving along various modalities of pleasure.
Temporal logics are a rich variety of logical systems designed for formalising reasoning about time, and about events and changes in the world over time. These systems differ in the ontological assumptions made about the nature of time in the associated models, in the logical languages involving various operators for composing temporalized expressions, and in the formal logical semantics adopted for capturing the precise intended meaning of these temporal operators. Temporal logics have found a wide range of applications as formal frameworks for temporal knowledge representation and reasoning in artificial intelligence, and as tools for the formal specification, analysis, and verification of properties of computer programs and systems. This Element aims to provide both a panoramic view of the landscape of temporal logics and closer looks at some of its most interesting and important landmarks.
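For orientation, the glosses below give the standard informal reading of the core operators of linear temporal logic (LTL), one landmark system in this landscape; this is the common textbook notation and is not excerpted from the Element itself.

```latex
\[
\begin{aligned}
  \mathsf{X}\,\varphi &: \varphi \text{ holds at the next moment},\\
  \mathsf{F}\,\varphi &: \varphi \text{ holds at some future moment},\\
  \mathsf{G}\,\varphi &: \varphi \text{ holds at all future moments},\\
  \varphi\,\mathsf{U}\,\psi &: \varphi \text{ holds until } \psi \text{ holds},\\
  \mathsf{G}(p \rightarrow \mathsf{F}\,q) &: \text{every occurrence of } p \text{ is eventually followed by } q.
\end{aligned}
\]
```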
Influential ‘fast-and-slow’ dual process models suggest that sound reasoning requires the correction of fast, intuitive thought processes by slower, controlled deliberation. Recent findings with high-level reasoning tasks have begun to question this characterization. Here we tested the generalizability of these findings to low-level cognitive control tasks. More specifically, we examined whether people who responded accurately to the classic Stroop and Flanker tasks could also do so when their deliberate control was minimized. A two-response paradigm, in which people were required to give an initial ‘fast’ response under time pressure and cognitive load, allowed us to identify the presumed intuitive answer that preceded the final ‘slow’ response given after deliberation. Across our studies, we consistently find that correct final Stroop and Flanker responses are often non-corrective in nature. Good performance in cognitive control tasks seems to be driven by accurate ‘fast’ intuitive processing, rather than by ‘slow’ controlled correction of these intuitions. We also explore the association between Stroop and reasoning performance and discuss implications for the dual process view of human cognition.
'Encouraging Innovation: Cognition, Education, and Implementation' is of interest to people who desire to become more innovative in their daily lives and careers. Part I discusses the cognitive and social skills required for innovation – reasoning, problem solving, creativity, group decision making, and collaborative problem solving. The second part discusses education – the development of cognitive skills and talent, application of relevant learning theories, methods and curricula for enhancing creativity, creativity across disciplines, and design education. Part III discusses the implementation of these skills in society – the transition from theory to practice, business innovation, social innovation, and organizational support. Whereas business innovation is related to commercialization, market demands, and profitability, social innovation addresses fulfilling social needs and public demands. Organizational support for innovation occurs at international, national, agency, and regional levels.
This chapter takes a fresh look at the marionette image introduced by Plato in a famous passage of Book 1 of the Laws, as he undertakes to explain the bearing of self-rule upon virtue (644b–645e). I argue that the reader of the passage is first offered a cognitive model of a unitary self, presided over by reasoning – which prompts bafflement in the Athenian Visitor’s interlocutors. The marionette image then in effect undermines that model, by portraying humans as passive subjects of contrary controlling impulses determining their behaviour. Finally the image is complicated and in the end transcended by reintroduction of reasoning as a special kind of divinely inspired impulse, with which one must actively cooperate if animal impulses are to be mastered. I examine the way Plato’s reference at this point to law (where there is a key translation problem) should be understood to bear upon the nature of the reasoning in question. In conclusion, I comment on what light is thrown by the marionette passage on self-rule, as we have been promised.
Inductive reasoning involves generalizing from samples of evidence to novel cases. Previous work in this field has focused on how sample contents guide the inductive process. This chapter reviews a more recent and complementary line of research that emphasizes the role of the sampling process in induction. In line with a Bayesian model of induction, beliefs about how a sample was generated are shown to have a profound effect on the inferences that people draw. This is first illustrated in research on beliefs about sampling intentions: was the sample generated to illustrate a concept or was it generated randomly? A related body of work examines the effects of sampling frames: beliefs about selection mechanisms that cause some instances to appear in a sample and others to be excluded. The chapter describes key empirical findings from these research programs and highlights emerging issues such as the effect of timing of information about sample generation (i.e., whether it comes before or after the observed sample) and individual differences in inductive reasoning. The concluding section examines how this work can be extended to more complex reasoning problems where observed data are subject to selection biases.
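To make concrete how beliefs about sample generation can reshape an inference, the sketch below uses the well-known weak-versus-strong sampling contrast from the Bayesian literature (an illustrative device, not the chapter's own model): under strong sampling, observations are assumed to be drawn from the true category, so the likelihood penalizes larger hypotheses (the "size principle"); under weak sampling the likelihood is flat over consistent hypotheses. The categories, priors and data are hypothetical.

```python
# Illustrative sketch: Bayesian generalization under two assumptions about
# how the observed sample was generated (not the chapter's own model).

def posterior(hypotheses, prior, data, sampling="strong"):
    unnorm = {}
    for name, h in hypotheses.items():
        if not all(x in h for x in data):        # hypothesis rules out an observation
            unnorm[name] = 0.0
        elif sampling == "strong":               # size principle applies
            unnorm[name] = prior[name] * (1.0 / len(h)) ** len(data)
        else:                                    # weak sampling: flat likelihood
            unnorm[name] = prior[name]
    z = sum(unnorm.values())
    return {name: round(v / z, 3) for name, v in unnorm.items()}

# Hypothetical categories: a narrow and a broad numerical concept.
hypotheses = {"narrow (1-10)": set(range(1, 11)),
              "broad (1-100)": set(range(1, 101))}
prior = {"narrow (1-10)": 0.5, "broad (1-100)": 0.5}
data = [3, 7, 2]

print(posterior(hypotheses, prior, data, sampling="strong"))  # strongly favours the narrow set
print(posterior(hypotheses, prior, data, sampling="weak"))    # stays at the prior
```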
We present an abstract model of rationality that focuses on structural properties of attitudes. Rationality requires coherence between your attitudes, such as your beliefs, values and intentions. We define three ‘logical’ conditions on attitudes: consistency, completeness and closedness. They parallel the familiar logical conditions on beliefs, but contrast with standard rationality conditions such as preference transitivity. We establish a formal correspondence between our logical conditions and standard rationality conditions. Addressing John Broome’s programme ‘rationality through reasoning’, we formally characterize how you can (not) become more logical by reasoning. Our analysis connects rationality with logic, and enables logical talk about multi-attitude psychology.
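For orientation, the familiar logical conditions on a belief set that the three attitude conditions are said to parallel can be stated as follows; this is the standard textbook formulation for a belief set B and consequence operator Cn, not the authors' own generalized definitions for arbitrary attitudes.

```latex
\begin{align*}
  \text{Consistency:}  &\quad \bot \notin \mathrm{Cn}(B),\\
  \text{Completeness:} &\quad \text{for every proposition } p,\ p \in B \text{ or } \neg p \in B,\\
  \text{Closedness:}   &\quad \mathrm{Cn}(B) \subseteq B \quad (B \text{ contains all of its logical consequences}).
\end{align*}
```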
Inductive reasoning involves using existing knowledge to make predictions about novel cases. This chapter reviews and evaluates computational models of this fundamental aspect of cognition, with a focus on work involving property induction. The review includes early induction models, such as the similarity-coverage model and the feature-based induction model, as well as detailed coverage of more recent Bayesian and connectionist approaches. Each model is examined against benchmark empirical phenomena, and model limitations are also identified. The chapter highlights the major advances that have been made in our understanding of the mechanisms that drive induction and identifies challenges for future modeling. These include accounting for individual and developmental differences and applying induction models to explain other forms of reasoning.
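As a rough, purely illustrative sketch of the kind of model reviewed here, a simplified similarity-coverage computation blends premise-conclusion similarity with how well the premises "cover" a superordinate category. The category names, similarity values and mixing weight below are hypothetical, not drawn from the chapter.

```python
# Simplified similarity-coverage sketch (illustrative values only).
# sim[a][b] is a hypothetical similarity in [0, 1] between premise category a and category b.
sim = {
    "sparrow": {"sparrow": 1.0, "robin": 0.8, "penguin": 0.3, "hawk": 0.6},
    "robin":   {"sparrow": 0.8, "robin": 1.0, "penguin": 0.3, "hawk": 0.6},
}
superordinate = ["sparrow", "robin", "penguin", "hawk"]   # e.g. BIRD

def max_sim(target, premises):
    # Similarity of the target category to its closest premise category.
    return max(sim[p][target] for p in premises)

def argument_strength(premises, conclusion, alpha=0.5):
    similarity = max_sim(conclusion, premises)            # premise-conclusion similarity
    coverage = sum(max_sim(m, premises) for m in superordinate) / len(superordinate)
    return alpha * similarity + (1 - alpha) * coverage    # weighted blend of the two terms

# Strength of "sparrows and robins have property X, therefore hawks have property X".
print(round(argument_strength(["sparrow", "robin"], "hawk"), 3))
```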
Ch 3: The third chapter concerns pleasure and happiness conveyed by a brief moment in time. Ronsard’s hedonistic poem uses dialectical reasoning and rhetorical deliberation to perfect the pleasure that is ephemeral. It is because the pleasure does not last that it becomes an absolute imperative; the pleasure of the moment is not viewed as lesser than the happiness conferred in duration. In the Princesse de Clèves, a scene of mutual pleasure taken by the protagonist and her beloved is reduced to a series of statements of causation, submitted to intense pressure of time, and turns out to be the only “pure” joy the protagonist feels throughout the novel. Baudelaire’s famous “À une passante,” an example of the new urban lyric, ironizes the lyric tradition while similarly proposing the ephemeral as the source of perfect pleasure.
Conclusion: The conclusion summarizes the human abilities addressed throughout the book and adumbrates the notion of "person" that underlies these abilities.
This chapter establishes the strong link between coherence and legal reasoning. In so doing, it draws three main conclusions. A first conclusion is that legal reasoning is an instance of practical reasoning and practical deliberation. What this means, ultimately, is that when one reasons and argues about the content of the law one does not seek to discover truth in the same sense as when forming an opinion about the way things are in nature. Rather, the aim is to formulate a reasoned opinion and commit oneself to a specific course of action given the presence of a legal problem. A second conclusion is that, when understood as practical reasoning, legal reasoning exhibits certain coherence-related features. These are: (i) a web-like structure; (ii) the fact that rationality in legal reasoning does not depend only on logic but also on plausibility (or fit); and (iii) a purposive nature. A third conclusion is that coherence acts as a substantive and a methodological principle during legal reasoning, thus further confirming the dual dimension of coherence identified in Chapter 1.
We claim that understanding human decisions requires that both automatic and deliberate processes be considered. First, we sketch the qualitative differences between two hypothetical processing systems, an automatic and a deliberate system. Second, we show the potential that connectionism offers for modeling processes of decision making and discuss some empirical evidence. Specifically, we posit that the integration of information and the application of a selection rule are governed by the automatic system. The deliberate system is assumed to be responsible for information search, inferences and the modification of the network that the automatic processes act on. Third, we critically evaluate the multiple-strategy approach to decision making. We introduce the basic assumption of an integrative approach stating that individuals apply an all-purpose rule for decisions but use different strategies for information search. Fourth, we develop a connectionist framework that explains the interaction between automatic and deliberate processes and is able to account for choices both at the option and at the strategy level.
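To make the proposed division of labour concrete, the sketch below shows the kind of spreading-activation (parallel constraint satisfaction) network with which an automatic integration system of this sort is often modelled; the weights, update rule and option/cue labels are hypothetical illustrations, not the authors' model.

```python
import numpy as np

# Minimal parallel-constraint-satisfaction sketch: nodes are two options and two cues,
# links encode (hypothetical) support and conflict; activation spreads until it settles,
# and the option with the higher final activation is the one the automatic system favours.

# Node order: [option A, option B, cue 1, cue 2]
W = np.array([
    [ 0.0, -1.0,  0.6,  0.2],   # option A: competes with B, supported by both cues
    [-1.0,  0.0,  0.1,  0.5],   # option B: supported mainly by cue 2
    [ 0.6,  0.1,  0.0,  0.0],   # cue 1
    [ 0.2,  0.5,  0.0,  0.0],   # cue 2
])

a = np.zeros(4)
a[2:] = 1.0                     # cue nodes are clamped as external input
decay = 0.1

for _ in range(200):            # iterate until activations (approximately) settle
    net = W @ a
    a = (1 - decay) * a + 0.1 * np.tanh(net)
    a[2:] = 1.0                 # keep cue nodes clamped

print("Option A:", round(a[0], 3), "Option B:", round(a[1], 3))
```

In this toy run the mutual inhibition between the option nodes lets the better-supported option dominate without any explicit deliberate comparison, which is the intuition behind treating information integration as automatic.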