This chapter describes three main numerical methods for modelling hazards that cannot be simplified into analytical expressions (as covered in Chapter 2): cellular automata, agent-based models (ABMs), and system dynamics. Both cellular automata and ABMs are algorithmic approaches, while system dynamics is a case of numerical integration. Energy dissipation during the hazard process is a dynamic process, that is, one that evolves over time. Reanalysing all perils from a dynamic perspective is not always justified, since a static footprint (as defined in Chapter 2) often offers a reasonable approximation for the purpose of damage assessment. For some specific perils, however, the dynamics of the process must be considered for their proper characterization. A variety of dynamic models is presented here, for armed conflicts, blackouts, epidemics, floods, landslides, pest infestations, social unrest, stampedes, and wildfires. Their implementation in the standard catastrophe (CAT) model pipeline is also discussed.
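To make the cellular-automaton idea concrete, here is a minimal toy wildfire simulation in Python. The three cell states, the 4-neighbour spread rule, and the value of `p_spread` are illustrative assumptions for this sketch, not parameters from the chapter's models.

```python
import random

# Minimal cellular-automaton wildfire sketch (illustrative only).
# States: 0 = empty/burnt, 1 = fuel (tree), 2 = burning.
# p_spread is a hypothetical spread probability, not a calibrated value.

def step(grid, p_spread=0.6, rng=random.Random(42)):
    n = len(grid)
    new = [row[:] for row in grid]  # synchronous update: read grid, write new
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 2:
                new[i][j] = 0  # a burning cell burns out in one step
                # ignite fuel in the 4-neighbourhood with probability p_spread
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] == 1:
                        if rng.random() < p_spread:
                            new[ni][nj] = 2
    return new

# Start with a fully fuelled 20x20 grid and one ignition in the centre.
n = 20
grid = [[1] * n for _ in range(n)]
grid[n // 2][n // 2] = 2
for _ in range(30):  # iterate; the fire front spreads and leaves burnt cells
    grid = step(grid)
burnt = sum(row.count(0) for row in grid)  # size of the burnt footprint
```

Real wildfire automata extend this skeleton with fuel heterogeneity, wind-dependent spread probabilities, and terrain, but the synchronous state-update loop is the core of the method.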
Dietary strategies for weight loss typically place an emphasis on achieving a prescribed energy intake. Depending on the approach taken, this may be achieved by restricting certain nutrients or food groups, which may lower overall diet quality. Various studies have shown that a higher-quality diet is associated with better cardiovascular (CV) health outcomes [1]. This study aimed to evaluate the effect of an energy-restricted diet on diet quality, and associated changes in cardiovascular risk factors. One hundred and forty adults (42 M:98 F, 47.5 ± 10.8 years, BMI 30.7 ± 2.3 kg/m²) underwent an energy-restricted diet (30% reduction) with dietary counselling for 3 months, followed by 6 months of weight maintenance. Four-day weighed food diaries captured dietary data at baseline, 3 and 9 months and were analysed using a novel algorithm to score diet quality (based on the Dietary Guideline Index, DGI) [2]. Total DGI scores ranged from 0 to 120, with sub-scores for consumption of core (0–70) and non-core foods (0–50). For all scores, a higher score or an increase reflects better diet quality. The CV risk factors assessed included blood pressure (SBP and DBP) and fasting lipids (total cholesterol (TC), high- and low-density lipoprotein cholesterol (HDL-C, LDL-C), and triglycerides (TAG)). Mixed-model analyses were used to determine changes over time (reported as mean ± standard error), and Spearman rho (rs) evaluated associations between DGI score and CV risk factors. Dietary energy intake was significantly restricted at 3 months (−3222 ± 159 kJ, P < 0.001, n = 114) and 9 months (−2410 ± 167 kJ, P < 0.001, n = 100), resulting in significant weight loss (3 months −7.0 ± 0.4 kg, P < 0.001; 9 months −8.2 ± 0.4 kg, P < 0.001). Clinically meaningful weight loss (>5% body mass) was achieved by 81% of participants by 3 months. Diet quality scores were low at baseline (49.2 ± 1.5) but improved significantly by 3 months (74.7 ± 1.6, P < 0.001), primarily due to reductions in the consumption of non-core, i.e. discretionary, foods (core sub-score +4.0 ± 0.7, non-core sub-score +21.3 ± 1.6, both P < 0.001). These improvements were maintained at 9 months (total score 71.6 ± 1.7, P < 0.001; core sub-score +4.4 ± 0.7 from baseline, P < 0.001; non-core sub-score +17.9 ± 1.6 from baseline, P < 0.001). There were significant inverse relationships between changes in total DGI score and changes in DBP (rs = −0.268, P = 0.009), TC (rs = −0.298, P = 0.004), LDL-C (rs = −0.224, P = 0.032) and HDL-C (rs = −0.299, P = 0.004), but not SBP and TAG, at 3 months. These data emphasise the importance of including diet quality as a key component when planning energy-restricted diets. Automated approaches will enable researchers to evaluate subtle changes in diet quality and their effect on health outcomes.
This Handbook brings together a global team of private law experts and computer scientists to examine the interface between private law and AI, which includes issues such as whether existing private law can address the challenges of AI and whether and how private law needs to be reformed to reduce the risks of AI while retaining its benefits.
Sun Tzu's Art of War is widely regarded as the most influential military and strategic classic of all time. Through 'reverse engineering' of the text, structured around 14 Sun Tzu 'themes', this rigorous analysis furnishes a thorough picture of what the text actually says, drawing on Chinese-language analyses; historical, philological, and archaeological sources; traditional commentaries; computational ideas; and strategic and logistics perspectives. Building on this anchoring, the book provides a unique roadmap of Sun Tzu's military and intelligence insights and their applications to strategic competitions in many times and places worldwide, from Warring States China to contemporary US/China strategic competition and other 21st-century competitions involving cyber warfare, computing, other high-tech conflict, espionage, and more. Simultaneously, the analysis offers a window into Sun Tzu's limitations and blind spots relevant to managing 21st-century strategic competitions with Sun-Tzu-inspired adversaries or rivals.
Through textually grounded "reverse engineering" of Sun Tzu's ideas, this study challenges widely held assumptions. Sun Tzu is more straightforward, less "crafty," than often imagined. The concepts are more structural, less aphoristic. The fourteen-themes approach provides a way of addressing Sun Tzu's tendency to speak to multiple, often shifting, audiences at once ("multivocality"). It also sheds light on Sun Tzu's limitations, including a pervasive zero-sum mentality, a focus mostly on conventional warfare, and a narrow view of human nature. Sun Tzu's enduring value is best sought in the text's extensive attention to warfare's information aspects, where Sun Tzu made timeless contributions with implications for modern information warfare and especially its human aspects (e.g., algorithm sabotage by subverted insiders). The text points to opportunities for small, agile twenty-first-century strategic actors to exploit cover provided by modern equivalents of Sun Tzu's "complex terrain" (digital systems, social networks, complex organizations, and complex statutes) to run circles around large, sluggish, established institutional actors, reaping great profit from applying Sun Tzu's insights.
There are two Sun Tzu verses which, by Sun Tzu’s own affirmations, may be seen as summations of the active ingredient of his way of war. One is Theme #6’s centerpiece verse III.4 (Passage #6.1).
Among the rhetorical pleas that follow most instances of public dissatisfaction is the call for more or better accountability. Accountability is a lauded notion, a “golden concept” that is considered widely as critical to the success of democratic government. Such pleas, I will argue, are misplaced. Rather than starting from the premise of accountability as an idea that no one can be against, I consider the possibility that accountability undermines the very notion it ostensibly promotes: self-government. The concept of accountability in modern political theory is tied more closely to the emergence of an impersonal administrative state than it is to the hopeful horizon of a democratic one. In practice and in theory, it is a concept of irresponsibility, a technological approach to government that provides the comforts of impersonal rationality.
Suicidal behaviors are prevalent among college students; however, students remain reluctant to seek support. We developed a predictive algorithm to identify students at risk of suicidal behavior and used telehealth to reduce subsequent risk.
Methods
Data come from several waves of a prospective cohort study (2016–2022) of college students (n = 5454). All first-year students were invited to participate as volunteers (response rate range: 16.00–19.93%). A stepped-care approach was implemented: (i) all students received a comprehensive list of services; (ii) those reporting past 12-month suicidal ideation were directed to a safety-planning application; (iii) those identified by the algorithm as at high risk of suicidal behavior, or reporting a 12-month suicide attempt, were contacted by telephone within 24 hours of survey completion. The intervention for this high-risk group focused on support, safety planning, and referral to services.
Results
5454 students ranging in age from 17 to 36 (s.d. = 5.346) participated; 65% were female. The algorithm identified 77% of students reporting subsequent suicidal behavior in the top 15% of predicted probabilities (Sensitivity = 26.26 [95% CI 17.93–36.07]; Specificity = 97.46 [95% CI 96.21–98.38]; PPV = 53.06 [95% CI 40.16–65.56]; AUC range: 0.895 [95% CI 0.872–0.917] to 0.966 [95% CI 0.939–0.994]). High-risk students in the Intervention Cohort showed a 41.7% reduction in probability of suicidal behavior at 12-month follow-up compared with high-risk students in the Control Cohort.
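The sensitivity, specificity, and PPV figures reported above all derive from the screening confusion matrix. The sketch below shows how such metrics are computed; the counts are invented for illustration (chosen only to roughly mirror the reported percentages) and are not the study's data.

```python
# Sensitivity, specificity, and PPV from a screening confusion matrix.
# The counts used below are invented for illustration, not the study's data.

def screening_metrics(tp, fp, tn, fn):
    """tp/fp/tn/fn: true/false positives and negatives from a screen."""
    sensitivity = tp / (tp + fn)  # proportion of true cases the screen flags
    specificity = tn / (tn + fp)  # proportion of non-cases correctly cleared
    ppv = tp / (tp + fp)          # proportion of flagged students who are true cases
    return sensitivity, specificity, ppv

sens, spec, ppv = screening_metrics(tp=26, fp=23, tn=900, fn=74)
# sens = 0.26, spec ~ 0.975, ppv ~ 0.531; the same order as the abstract's values
```

Note the typical screening trade-off visible here: a high-specificity threshold (flagging only the top of the predicted-probability distribution) keeps PPV workable for telephone follow-up at the cost of modest sensitivity.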
Conclusions
Predictive risk algorithms embedded into universal screening, coupled with telehealth intervention, offer significant potential as a suicide prevention approach for students.
iNaturalist is a widely used platform for data collection and sharing among non-professional volunteers and is broadly employed in citizen science. The platform's data are also used in scientific studies for a wide range of purposes, including tracking changes in species distribution, monitoring the spread of alien invasive species, and assessing the impacts of urbanization and land-use change on biodiversity. Lichens, due to their year-round presence on trees, soil and rocks, and their diverse shapes and colours, have captured the attention of iNaturalist users, and lichen records are widely represented on the platform. However, given the complexity of lichen identification, the use of data collected by untrained or poorly trained volunteers in scientific investigations raises concerns among lichenologists. To address these concerns, this study assessed the reliability of lichen identification by iNaturalist users by comparing records on the platform with identifications carried out by experts (experienced lichenologists) in three cities where citizen science projects were developed. The results caution against the use of unchecked data from the platform in lichenology, demonstrating substantial inconsistency between identifications made by iNaturalist users and by experts.
One of the most innovative changes to the practice of human embryo culture was the introduction of sophisticated time-lapse imaging (TLI) systems that eventually became part of the incubation unit. TLI allows continuous, uninterrupted monitoring of embryo development. Embryo selection at either the cleavage or the blastocyst stage, using algorithms developed from tens of thousands or more embryos with known implantation outcomes, is robust and repeatable. The technology has continued to evolve, with improvements to the physical technology as well as software enhancements, including artificial intelligence (AI)-based embryo selection algorithms and machine learning.
To develop an agitation reduction and prevention algorithm intended to guide implementation of the definition of agitation developed by the International Psychogeriatric Association (IPA)
Design:
Review of literature on treatment guidelines and recommended algorithms; algorithm development through reiterative integration of research information and expert opinion
Setting:
IPA Agitation Workgroup
Participants:
IPA panel of international experts on agitation
Intervention:
Integration of available information into a comprehensive algorithm
Measurements:
None
Results
The IPA Agitation Workgroup recommends the Investigate, Plan, and Act (IPA) approach to agitation reduction and prevention. A thorough investigation of the behavior is followed by planning and acting with an emphasis on shared decision-making; the success of the plan is evaluated and adjusted as needed. The process is repeated until agitation is reduced to an acceptable level and prevention of recurrence is optimized. Psychosocial interventions are part of every plan and are continued throughout the process. Pharmacologic interventions are organized into panels of choices for nocturnal/circadian agitation; mild-moderate agitation or agitation with prominent mood features; moderate-severe agitation; and severe agitation with threatened harm to the patient or others. Therapeutic alternatives are presented for each panel. The occurrence of agitation in a variety of venues (home, nursing home, emergency department, hospice) and adjustments to the therapeutic approach are presented.
Conclusions
The IPA definition of agitation is operationalized into an agitation management algorithm that emphasizes the integration of psychosocial and pharmacologic interventions, reiterative assessment of response to treatment, adjustment of therapeutic approaches to reflect the clinical situation, and shared decision-making.
Bioinformatics is discussed in Chapter 11. The complex nature of the subject and its interaction with other disciplines are outlined, and the interdependence of bioinformatics, the development of computer hardware, and the internet is stressed. The nature and range of biological databases are outlined, from the inception of nucleic acid databases in the 1970s to the present breadth of primary and secondary databases that are repositories for information on nucleic acid and protein sequences, interactions between cellular components, biochemical pathways, pharmacological targets and many other data sets derived from existing information. Genome sequence databases are used to illustrate the tools needed to assemble, collate, annotate and interrogate the data, and the impact of bioinformatics in enabling experiments and protocols to be conducted in silico is discussed.
Media and communication influence, shape, and change our societies. Therefore, this first chapter aims to explain the implications of digital communication for our societies and the relationship between media, technology, and society. The chapter introduces the concept of society from a sociological perspective and explains how societies change because of the effects technological developments have on them, and vice versa. It illustrates this interplay with the example of digital divides.
In order to explain the significance and changes of public communication in a digital society, the chapter zooms in on the media landscape and explicates the difference between new media and old (or traditional) media. It pays particular attention to the ideas of Marshall McLuhan, as his work remains a cornerstone when studying the relationship between media, technology, and society. The chapter then outlines the discipline of media linguistics and explains how media linguistics can help to make digital media and digital communication more tangible. It focuses on three key terms crucial for understanding public digital communication: multimodality, media convergence, and mediatization.
The lack of a standardised definition of treatment-resistant depression (TRD) and of adequate criteria for therapeutic response makes it difficult to manage patients with major depressive disorder (MDD) who do not achieve remission after one or more courses of treatment. All classifications suggested to define TRD are arbitrary, only partially evidence-based, subordinate to the pharmacological findings of the time in which they were written, and marked by serious inconsistencies, making it difficult to construct a universal and enduring diagnostic system.
Objectives
Considering that the most important goal in treating a patient with MDD should be remission and a return to previous functioning, a standardised, evidence-based classification system would allow timely and effective interventions that reduce the burden of this devastating condition.
Methods
Bibliographic review
Results
The proposed therapeutic algorithm arises from combining several fundamental principles for the management of treatment-resistant depression: the different classification systems for the concept, together with the concepts of response, relapse, recurrence and remission; the scientific evidence in the current literature; routine clinical practice; knowledge of switching and augmentation strategies; the newly discovered pharmacological targets and neurobiological hypotheses; and, finally, the different clinical profiles of depressive symptomatology and the specific indications of each antidepressant.
Conclusions
Resistant depression is difficult to treat successfully and is not a uniform entity. Recently there has been a move to characterise treatment-resistant depression as ‘difficult-to-treat’ depression on the basis that the former description implies that depression treatments are normally effective and that non-response is therefore somehow abnormal.
The VML method was developed and designed to treat apraxia of speech, with a focus on the autistic population. Drawing on experience with over 2000 children in many countries around the world, we developed an algorithm that represents the VML analysis process. The algorithm includes almost 1000 conditions and was found to reliably replicate the in-person VML evaluation. The algorithm generates a treatment program with 95% accuracy in the selected treatment topics.
Objectives
The objective of the VML software is to enable VML analysis and treatment at low cost for a wide population around the world, at home. The program includes main treatment topics, detailed exercises, and pictures and videos demonstrating the proposed treatment, along with general guidelines. Software users are supported by VML experts around the world.
Methods
Based on the algorithm, we developed a software application that can produce a highly detailed motor-speech treatment program. The software is web-based and currently available in English and Mandarin, with other languages to follow. The user is required to fill in the speech data using the software interface.
Results
The unique software was tested and found to have a 90% reliability rate compared with a VML expert's treatment program. In addition, it was found able to overcome mild evaluation mistakes while still producing an effective treatment program.
Conclusions
The MYVML evaluation software is an innovation in the field of speech treatment, striving to share this knowledge and put the treatment tool in the hands of as many practitioners and families as possible.
Disclosure
I am the developer of the VML software described in the abstract
Currently, there are known problems in assessing the severity of psychopathological states based on psychometric (rank) scales [1]. The main problem is that ranks are non-numeric information that does not allow even the simplest mathematical operations (summation, averaging) [2], making it impossible to construct correct models for evaluating states.
Objectives
Development of algorithms for processing initial rank information about the severity of psychopathological states, in order to obtain results in numerical form, based on the Analytic Hierarchy Process (AHP) [2].
Methods
Clinical and statistical methods; algorithms of the AHP.
Results
The problems of assessing a patient's state are multicriteria problems. Within the framework of the AHP, they are solved by constructing numerical intensity scales when measuring the dimensions of disorders. This means a correct transition from the rank scale to a ratio scale, in which the estimates are numbers that allow any mathematical operation. The implementation of AHP procedures is based on the AHP normative approach [2], which uses expert pairwise comparisons of the ratings of the rank scale.
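The AHP step described above, deriving numerical weights from expert pairwise comparisons, can be sketched as follows. The 3x3 reciprocal matrix comparing three severity grades is an invented example (not data from the cited papers), and the geometric-mean method is a standard approximation to the principal-eigenvector weights.

```python
import math

# Sketch of the AHP step that turns expert pairwise comparisons into
# numerical weights, using the geometric-mean approximation of the
# principal eigenvector. The matrix below is a hypothetical example
# comparing three severity grades, not data from the cited papers.

def ahp_weights(matrix):
    n = len(matrix)
    # geometric mean of each row, then normalise to sum to 1
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Reciprocal matrix: grade i judged k times as severe as grade j => m[i][j] = k,
# and m[j][i] = 1/k. Diagonal entries compare a grade with itself (= 1).
comparisons = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
weights = ahp_weights(comparisons)  # numerical severity scale summing to 1
```

The resulting weights form a ratio scale, so sums, averages, and regression over them are mathematically legitimate, which is exactly the property the rank scores lack.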
Conclusions
The fundamental difference between AHP-based results and rank-based results is that the former yield numerical estimates of the severity of states, which can be used for any mathematical processing and for constructing correct models that relate and predict patients' states from many factors, taking their weights into account. References: 1. Zimmerman M., Morgan T.A., Stanton K. World Psychiatry. 2018; 17: 258–275. 2. Mitikhin V.G., Solokhina T.A. S.S. Korsakov Journal of Neurology and Psychiatry. 2019; 119(2): 49–54.
Artificial intelligence (AI) is an emerging technology with beneficial potential for the maritime domain, which comprises all natural and man-made features, events, or activities appertaining to the seas, oceans, or other navigable waterways. AI is not a single technology but a continuum of capabilities designed to combine computational processing power with human creativity. This chapter introduces key AI concepts, including, but not limited to, algorithms, reinforcement learning, deep learning, and artificial general intelligence. Science has not yet achieved sentient machines, and fully autonomous vessels may not become commonplace for a number of years; nevertheless, current AI technologies offer risk-reduction methodologies for human-crewed vessels operating in dynamic and often dangerous conditions. In general, AI can enhance compliance with the law of the sea and reduce marine casualties. Specifically, this chapter proposes that AI technologies should be adopted to facilitate safer navigation through improved hydrographic services and AI-supported decision-making for vessel masters and human crews at sea.
Algorithmic transparency is the basis of machine accountability and the cornerstone of policy frameworks that regulate the use of artificial intelligence techniques. The goal of algorithmic transparency is to ensure accuracy and fairness in decisions concerning individuals. AI techniques replicate bias, and as these techniques become more complex, bias becomes more difficult to detect. But the principle of algorithmic transparency remains important across a wide range of sectors. Credit determinations, employment assessments, educational tracking, as well as decisions about government benefits, border crossings, communications surveillance, and even inspections in sports stadiums, increasingly rely on black-box techniques that produce results that are unaccountable, opaque, and often unfair. Even the organizations that rely on these methods often do not fully understand their impact or their weaknesses.