Daniel Kahneman's legacy is best understood in light of developments in economic theory in the early and mid-20th century, when economists were eager to put utility functions on a firm mathematical foundation. The axiomatic system that provided this foundation was not originally intended to be normative in a prescriptive sense but later came to be seen that way. Kahneman took the axioms seriously, tested them for descriptive accuracy, and found them wanting. He did not view the axioms as necessarily prescriptive. Nevertheless, in the research program he conceived, factual discoveries about real decision-making were stated as deviations from the axioms and thus deemed ‘errors’. This was an unfortunate turn that needs to be corrected for the psychological enrichment of economics to proceed in a productive direction.
This chapter deals with quality in the virology laboratory, covering both quality control and quality assurance. It stresses the need to conduct regular audits of the service to maintain quality standards and the need for accreditation schemes (e.g. UKAS). Sources of error in the laboratory and factors associated with technical quality are also discussed.
The gradual digitization of EU migration policies is turning external borders into AI-driven filters that limit access to fundamental rights for people from third countries according to risk indicators. An unshakeable confidence in the reliability of technological devices and their ability to predict the future behaviour of incoming foreigners is leading towards the datafication of EU external frontiers. What happens if the supposedly infallible algorithms are wrong? The article aims to understand the consequences of algorithmic errors on the lives of migrants, refugees and asylum seekers arriving in the European Union. This contribution investigates the socio-political implications of deploying data-driven solutions at the borders in an attempt to problematize the techno-solutionist approach of EU migratory policies and its fundamental rights impact on affected individuals.
Chapter 6 centres on Galen’s longest moral work, the Affections and Errors of the Soul, and explores the features of Galenic practical philosophy from a number of angles. The first section provides an analysis of the work’s programmatic preface and shows that Galen exploits the dynamics of polemic, self-promotion and self-effacement to cast himself as a prominent contributor in this intellectual area. The next section discusses Galen’s emphasis on self-knowledge, which is often blocked by self-love. It claims that in order to generate feelings of revulsion with regard to the latter, Galen works on ‘class fraction’ as a tactic with moralising intent. Another strand of special importance in the essay is the figure of the moral adviser, which Galen elaborates on so as to highlight the need for welcoming and indeed enduring moral criticism. Even though the moral adviser features in other authors of the Second Sophistic, in Galen it points to the applicability of ethics to a broad range of social contexts, thus credentialing his situational ethics. A separate section of Chapter 6 focuses on the concept of free speech (parrhēsia). While Galen debates the challenges of social and political interaction, he advises frankness at all costs. A genuine friend should never be reluctant to express the truth of someone’s moral situation and this makes him strikingly different from the flatterer, a disgusting stock figure in Imperial works on moralia, particularly in Plutarch, whom Galen seems to follow here. Another shrewd device that Galen uses to good effect to achieve the moral rectification of readers is the description of the pathology of anger (its origins and results), particularly in the episode featuring Galen’s Cretan friend, which is framed, I suggest, as an ‘ethical case history’, sharing characteristics with Galen’s medical case histories.
User models that can directly use and learn how to perform tasks with unmodified interfaces would be helpful in system design for comparing task knowledge and times between interfaces. Including user errors is also useful, because users will always make mistakes and generate errors. We compare three user models: an existing validated model that simulates users’ behavior in the Dismal spreadsheet in Emacs, a newly developed model that interacts with an Excel spreadsheet, and a new model that generates and fixes user errors. These models are implemented using a set of simulated eyes and hands extensions. All the models completed a 14-step task without modifying the system that participants used. The models predict that the task is approximately 20% faster in Excel than in Dismal, and suggest why, where, and by how much Excel is the better design. The Excel model’s predictions were compared to newly collected human data (N = 23); its predictions of subtask times correlate well with the human data (r² = .71). We also present a preliminary model of human error and correction based on user keypress errors, including 25 slips. The predictions-to-data comparison suggests that this interactive model that includes errors moves us closer to a complete user model that can directly test interface designs by predicting human behavior and performing the task on the same interface as users. The errors from the model’s hands also allow further exploration of error detection, error correction, and different knowledge types in user models.
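As a minimal illustration of the model-to-data comparison summarized above, the sketch below correlates model-predicted subtask times with observed mean subtask times and reports r². The numbers are invented for illustration and are not the study's data.

```python
# Minimal sketch of a model-to-data comparison: correlate predicted and
# observed subtask times and report r^2. All numbers are invented.
import numpy as np
from scipy import stats

# Hypothetical per-subtask times in seconds for a 14-step task.
predicted = np.array([4.2, 3.1, 5.0, 2.8, 6.3, 3.9, 4.7,
                      5.5, 2.5, 3.3, 4.0, 6.1, 3.6, 4.9])
observed = np.array([4.8, 2.9, 5.6, 3.1, 5.9, 4.4, 4.1,
                     6.2, 2.2, 3.7, 4.5, 5.8, 3.2, 5.3])

r, p_value = stats.pearsonr(predicted, observed)
print(f"r^2 = {r**2:.2f} (p = {p_value:.4f})")
```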
Chapter 3 presents a variety of feedback methods that practitioners and researchers alike use to support learners’ linguistic development. Building on the belief that errors are a normal and even beneficial part of language learning, this chapter shares how and why feedback is a critical component of language teaching. Examples of oral corrective feedback are provided, along with a description of when teachers may choose to use what type of feedback over another.
Biological and chemical weapons are banned by treaty and attract less interest from the military than nuclear weapons and other modern means such as cyber, space action and artificial intelligence. The number of nuclear weapons has gone down, but there is no sign of their elimination through acceptance of the Treaty on the Prohibition of Nuclear Weapons (TPNW) or otherwise, nor is there any prospect of an early common commitment to ‘non-first use’. Rather, at least the major nuclear-armed states regard their ability to inflict devastating second nuclear strikes as indispensable to deter any first strike, and the ‘nuclear posture reviews’ of the US and Russia retain a good deal of freedom of action. We cannot at present see signs of a move to zero nuclear weapons and must agree with the conclusion that, so long as nuclear weapons exist, there remains a risk of use, through misunderstandings or technical errors. We may also conclude that, as cyber, space and other new means of struggle have become available and capable of escalation, conflicts have become increasingly unpredictable. It therefore becomes implausible that any civilian or military leadership would allow itself to initiate or slide into conflict. It seems likely that they would instead choose intense competition by means other than force, notably economic and financial.
Birnbaum (2020) reanalyses the data from Butler and Pogrebna (2018) using his ‘true and error’ test of choice patterns. His results generally support the evidence we presented in that paper. Here we reiterate the reasons for our agnosticism as to the direction any cycles might take, even though the paradox that motivated our study takes a ‘probable winner’ direction. We conclude by returning to the potential significance of predictably intransitive preferences for decision theory generally.
Over three decades, the IPCC has been no stranger to controversies. Given its institutional character as a boundary organisation working between science and policy, it is no surprise that IPCC reports often reflect wider controversies in the scientific and political life of climate change, especially those concerning its consequences and potential solutions. In this chapter, we explain why controversies about the IPCC’s knowledge assessment are inevitable and point out how the IPCC could use controversies for adapting and developing its assessment processes in constructive ways. That is, we show how controversies serve as ‘generative political events’ for the IPCC’s own learning process. To do so, we classify IPCC knowledge controversies into four types (factual, procedural, epistemic and ontological) and, using two illustrative cases, distinguish between controversies which the IPCC triggers and those which the IPCC absorbs into its knowledge assessment.
Chapter 5 analyses the question of accountability, namely whether or not AWS will give rise to an ‘accountability gap’. Its first part aims to explain that, from a legal point of view, what is relevant is who (human operators) or what (AWS) guarantees better compliance with IHL. The second part looks at possible violations of IHL caused by AWS. Three types of situation are distinguished: hardware malfunctions, accidents (violations caused by human fault) and errors (violations caused by the systems’ software). In this regard, special attention is given to the category of ‘dolus eventualis’ for ‘accidents’ as a level of guilt that should be included in the ICC Statute for some specific situations of individual accountability. In the case of ‘errors’, that is, IHL violations caused by the software system alone, responsibility cannot be attributed to any human operator but only to the deploying state. Lastly, the third part explains why machine learning algorithms introduce specific challenges in terms of evidence due to their black-box nature. In this regard, it is argued that such algorithms should be able to provide ‘factual algorithms’, that is, information about the fundamental facts that the algorithm considered in its selection-making process.
While written language development involves reducing erroneous expressions, traditional error-based measures are problematic for several reasons, including low inter-coder reliability for lexical errors, limited sensitivity for capturing development within a short time period, and the questionable separation of lexical and grammatical errors. Given these problems, we explore automated accuracy measures rooted in a usage-based theory of Second Language Acquisition, which views language as a set of constructions or chunks. For this study, we examined 139 essays using traditional measures of complexity, accuracy, lexical sophistication, and fluency, as well as novel corpus-based n-gram measures. A factor analysis was conducted to explore how the traditional measures grouped with the corpus-based measures, and regression analyses were used to examine how the corpus-based measures predicted error counts and holistic accuracy scores. On the basis of these analyses, we suggest that automated n-gram-based measures are a viable alternative to traditional accuracy measures.
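As a rough illustration of the kind of corpus-based n-gram measure explored in such work (not the exact metrics used in the study), the sketch below computes a simple bigram-coverage score: the proportion of an essay's bigrams that are attested in a reference-corpus bigram list. The tokenizer and the toy reference list are placeholders; in practice the reference n-grams would come from a large native-speaker corpus, and the resulting scores would be entered into factor and regression analyses alongside the traditional measures.

```python
# Minimal sketch (not the study's exact metrics): a corpus-based n-gram
# coverage score for a learner essay, i.e. the proportion of the essay's
# bigrams that also occur in a reference-corpus bigram list.
from typing import Iterable, List, Set, Tuple

def ngrams(tokens: Iterable[str], n: int) -> List[Tuple[str, ...]]:
    """Return the list of n-grams (as tuples) in a token sequence."""
    toks = list(tokens)
    return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def ngram_coverage(essay_tokens: List[str],
                   reference_ngrams: Set[Tuple[str, ...]],
                   n: int = 2) -> float:
    """Proportion of the essay's n-grams attested in the reference corpus."""
    essay_ngrams = ngrams(essay_tokens, n)
    if not essay_ngrams:
        return 0.0
    attested = sum(1 for g in essay_ngrams if g in reference_ngrams)
    return attested / len(essay_ngrams)

# Toy reference list; a real one would be built from a large corpus.
reference = {("on", "the"), ("the", "other"), ("other", "hand")}
essay = "on the other hand this are a problem".split()
print(round(ngram_coverage(essay, reference, n=2), 3))  # 3 of 7 bigrams attested
```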
Human factors can be defined as the science of understanding interactions among humans and the other elements of a system, and of how these interactions can be adapted to improve performance and safety. Human factors issues were present in 40% of the cases of major complications in airway management in NAP4. Human factors issues can be considered in terms of ‘threats’ and ‘safeguards’. Threats increase the likelihood of an error that results in patient harm, while safeguards help prevent this. Threats and safeguards in relation to human factors in airway management refer not only to ‘non-technical skills’ (e.g. situation awareness, teamwork) but also to many other factors such as procedures, staffing and the physical environment in which airway management is conducted. Proper attention to human factors related issues contributes to both the prevention and the effective management of airway emergencies, and requires that these issues are considered as part of an integrated approach at the level of the individual, team, environment and organisation as part of routine airway care, not only when an emergency arises.
Economic models play a central role in the decision-making process of the National Institute for Health and Care Excellence (NICE). Inadequate validation methods allow for errors to be included in economic models. These errors may alter the final recommendations and have a significant impact on outcomes for stakeholders.
Objective
To describe the patterns of technical errors found in NICE submissions and to provide an insight into the validation exercises carried out by the companies prior to submission.
Methods
All forty-one single technology appraisals (STAs) completed by NICE in 2017 were reviewed; all concerned medicines. The frequency of errors and information on their type, magnitude, and impact were extracted from publicly available NICE documentation, along with details of the model validation methods used.
Results
Two STAs (5 percent) had no reported errors, nineteen (46 percent) had between one and four errors, sixteen (39 percent) had between five and nine errors, and four (10 percent) had more than ten errors. The most common errors were transcription errors (29 percent), logic errors (29 percent), and computational errors (25 percent). All STAs went through at least one type of validation. Errors considered sufficiently notable were reported in the final appraisal document (FAD) for eight (20 percent) of the STAs assessed, yet each of these eight STAs received a positive recommendation.
Conclusions
Technical errors are common in the economic models submitted to NICE. Some errors were considered important enough to be reported in the FAD. Improvements are needed in the model development process to ensure technical errors are kept to a minimum.
Introduction: Trauma resuscitations are plagued by high stress and require time-sensitive, intensive interventions, a landscape that is a perfect hotbed for clinical errors and adverse events. We sought to describe the adverse events and errors that occur during trauma resuscitation and any associated outcomes. Methods: Medline was searched using a combination of key terms involving trauma resuscitation, adverse events and errors from January 2000 to May 2019. Studies that described adverse events or errors in initial adult trauma resuscitations were included. Two reviewers assessed papers against the inclusion and exclusion criteria, with a third reviewer resolving any discrepancies. Descriptions of errors, adverse events and associated outcomes were collated and presented. Results: A total of 3,462 papers were identified by our search strategy. Eighteen papers met our inclusion and exclusion criteria and were selected for full review. Adverse events and errors reported in trauma resuscitation included missed injuries, aspiration, failed airway, and deviation from protocol. Rates of adverse events and errors were reported where available. Mortality outcomes and length of stay were not directly correlated with the adverse events or errors experienced during trauma resuscitation. Conclusion: Our study highlights the prevalence of adverse events and errors experienced during initial trauma resuscitation. We described a multitude of adverse events and errors and their rates, but further study is needed to determine outcome differences for patients and the potential for quality improvement.
Introduction: Trauma care is highly complex and prone to medical errors. Accordingly, several studies have identified adverse events and conditions leading to potentially preventable or preventable deaths. Depending on the availability of specialized trauma care and the organization of the trauma system, between 10 and 30% of trauma-related deaths worldwide could be prevented if optimal care were promptly delivered. This narrative review aims to identify the main determinants of, and areas for improvement associated with, potentially preventable trauma mortality. Methods: A literature review was performed using Medline, Embase and the Cochrane Central Register of Controlled Trials from 1990 to a maximum of 6 months before submission for publication. Experimental or observational studies that assessed determinants and areas for improvement associated with trauma death preventability were considered for inclusion. Two researchers independently selected eligible studies and extracted the relevant data. The main areas for improvement were classified using the Joint Commission on Accreditation of Healthcare Organizations patient event taxonomy. No statistical analyses were performed given the heterogeneity of the data. Results: From the 3,647 individual titles obtained by the search strategy, a total of 37 studies were included. Each study included between 72 and 35,311 trauma patients who had sustained mostly blunt trauma, frequently following a fall or a motor vehicle accident. Preventability assessment was performed for 17 to 2,081 patients using either a single expert assessment (n = 2, 5.4%) or an expert panel review (n = 35, 94.6%). The definition of preventability and the taxonomy used varied greatly between studies. The rate of potentially preventable or preventable death ranged from 2.4% to 76.5%. The most frequently reported areas for improvement were treatment delay, diagnostic accuracy (avoiding missed or incorrect diagnoses) and adverse events associated with the initial procedures performed. The risk of bias was high for 32 of the included studies because of their retrospective design and the panel-review preventability assessment. Conclusion: Deaths occurring after trauma often remain preventable. The included studies used unstandardized definitions of a preventable death and various methodologies to perform the preventability assessment. The proportion of preventable or potentially preventable deaths reported in each study ranged from 2.4% to 76.5%. Delayed treatment, missed or incorrect initial diagnosis and adverse events following a procedure were commonly associated with preventable trauma deaths and could be targeted in quality improvement and monitoring projects.
Reasons-responsiveness theories of moral responsibility are currently among the most popular. Here, I present the fallibility paradox, a novel challenge to these views. The paradox involves an agent who is performing a somewhat demanding psychological task across an extended sequence of trials and who is deeply committed to doing her very best at this task. Her action-issuing psychological processes are outstandingly reliable, so she meets the criterion of being reasons-responsive on every single trial. But she is human after all, so it is inevitable that she will make rare errors. The reasons-responsiveness view, it is claimed, is forced to reach a highly counterintuitive conclusion: she is morally responsible for these rare errors, even though making rare errors is something she is powerless to prevent. I review various replies that a reasons-responsiveness theorist might offer, arguing that none of these replies adequately addresses the challenge.
In the usage-based approach to children’s language learning, language is seen as emerging from children’s preverbal communicative and cognitive skills. Children construct more abstract linguistic representations only gradually, and show uneven development across the different aspects of their language learning. I will present results that show the relationship between children’s emerging linguistic structures and patterns in the speech addressed to them, and demonstrate the effects of the consistency of markers, the complexity of the construction in question, and relative type and token frequencies within and across constructions. I highlight the contribution made by research that employs naturalistic, experimental, and modelling methodologies, and that is applied to a range of languages and to variability in the errors that children make. Finally, I will outline the outstanding issues for this approach, and how we might address them.
In interpreting radiocarbon dating results, it is important that archaeologists distinguish uncertainties derived from random errors and those from systematic errors, because the two must be dealt with in different ways. One of the problems that archaeologists face in practice, however, is that when receiving dating results from laboratories, they are rarely able to critically assess whether differences between multiple 14C dates of materials are caused by random or systematic errors. In this study, blind tests were carried out to check four possible sources of errors in dating results: repeatability of results generated under identical field and laboratory conditions, differences in results generated from the same sample given to the same laboratory submitted at different times, interlaboratory differences of results generated from the same sample, and differences in the results generated between inner and outer rings of wood. Five charred wood samples, collected from the Namgye settlement and Hongreyonbong fortress, South Korea, were divided into 80 subsamples and submitted to five internationally recognized 14C laboratories on a blind basis twice within a 2-month interval. The results are generally in good statistical accordance and present acceptable errors at an archaeological scale. However, one laboratory showed a statistically significant variance in ages between batches for all samples and sites. Calculation of the Bayesian partial posterior predictive p value and chi-squared tests rejected the null hypothesis that the errors randomly occurred, although the source of the error is not specifically known. Our experiment suggests that it is necessary for users of 14C dating to establish an organized strategy for dating sites before submitting samples to laboratories in order to avoid possible systematic errors.
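For context on how such consistency checks work, the sketch below shows the standard inverse-variance weighted mean and chi-squared homogeneity test commonly applied to replicate 14C determinations on the same sample (in the spirit of the classic Ward and Wilson approach). The dates are invented, and the code is a generic illustration rather than the authors' exact Bayesian procedure.

```python
# Illustrative chi-squared homogeneity test for replicate 14C ages from one
# sample: are the differences between determinations consistent with the
# quoted random errors? The dates below are invented.
import numpy as np
from scipy import stats

def combine_and_test(ages_bp, errors):
    """Weighted mean of 14C ages and a chi-squared test of their consistency."""
    ages = np.asarray(ages_bp, dtype=float)
    sig = np.asarray(errors, dtype=float)
    w = 1.0 / sig**2                       # inverse-variance weights
    pooled = np.sum(w * ages) / np.sum(w)  # weighted mean age
    pooled_err = np.sqrt(1.0 / np.sum(w))
    t_stat = np.sum((ages - pooled)**2 / sig**2)
    df = len(ages) - 1
    p = stats.chi2.sf(t_stat, df)          # small p suggests non-random scatter
    return pooled, pooled_err, t_stat, p

# Hypothetical replicate dates (BP) on one charred-wood subsample set.
pooled, err, t, p = combine_and_test([2950, 2985, 2910, 3060], [30, 35, 30, 40])
print(f"pooled age {pooled:.0f} ± {err:.0f} BP, T = {t:.2f}, p = {p:.3f}")
```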
The current study examined the use of biographical data to predict errors, tardiness, policy violations, overall job performance, and turnover among nurses. The results indicate that biodata measures are valid selection devices for nurses and are effective at predicting nurse errors, tardiness, policy violations, and overall job performance, but the instrument was not an effective predictor of turnover, whether voluntary or involuntary. Additionally, examination of group differences revealed that White subjects scored significantly higher on the biodata instrument than Black subjects, although the group differences were considerably smaller than those typically found with measures of cognitive ability. Future research directions and implications for practice are discussed.
Numerous studies have demonstrated that prospective memory (PM) abilities are impaired following traumatic brain injury (TBI). PM refers to the ability to remember to complete a planned action following a delay. PM post-TBI has been shown to be related to performance on neuropsychological tests of executive functioning and retrospective episodic memory (RM). However, the relative influence of impairments in RM versus executive functioning on PM performance post-TBI remains uninvestigated. In the current study, PM and neuropsychological test performance were examined in 45 persons with a history of moderate to severe TBI at least 1 year before enrollment. Regression analyses examined the relative contributions of RM and executive functioning in the prediction of PM performance on the Rivermead Behavioral Memory Test (RBMT). Results indicated that scores on tests of delayed RM and rule monitoring (i.e., ability to avoid making errors on executive measures) were the strongest predictors of PM. When the interaction between RM impairment and rule monitoring was examined, a positive relationship between PM and rule monitoring was found only in TBI participants with impaired RM. Results suggest that PM performance is dependent upon rule monitoring abilities only when RM is impaired following TBI. (JINS, 2014, 20, 1–11)