In patients with treatment-resistant depression (TRD), the ESCAPE-TRD study showed that esketamine nasal spray was superior to quetiapine extended release.
Aims
To determine the robustness of the ESCAPE-TRD results and confirm the superiority of esketamine nasal spray over quetiapine extended release.
Method
ESCAPE-TRD was a randomised, open-label, rater-blinded, active-controlled phase IIIb trial. Patients had TRD (i.e. non-response to two or more antidepressant treatments within a major depressive episode). Patients were randomised 1:1 to flexibly dosed esketamine nasal spray or quetiapine extended release, while continuing an ongoing selective serotonin reuptake inhibitor/serotonin–norepinephrine reuptake inhibitor. The primary end-point was achieving a Montgomery–Åsberg Depression Rating Scale score of ≤10 at Week 8, while the key secondary end-point was remaining relapse-free through Week 32 after achieving remission at Week 8. Sensitivity analyses were performed on these end-points by varying the definition of remission based on time point, threshold and scale.
Results
Of 676 patients, 336 were randomised to esketamine nasal spray and 340 to quetiapine extended release. All sensitivity analyses on the primary and key secondary end-points favoured esketamine nasal spray over quetiapine extended release, with relative risks ranging from 1.462 to 1.737 and from 1.417 to 1.838, respectively (all p < 0.05). Treatment with esketamine nasal spray shortened the time to first remission and to confirmed remission (hazard ratios: 1.711 [95% confidence interval: 1.402, 2.087] and 1.658 [1.337, 2.055]; both p < 0.001).
Conclusion
Esketamine nasal spray consistently demonstrated significant superiority over quetiapine extended release using all pre-specified definitions of remission and relapse. Sensitivity analyses supported the conclusions of the primary ESCAPE-TRD analysis and demonstrated the robustness of its results.
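For readers who want to see the arithmetic behind such effect measures, here is a minimal Python sketch of how a relative risk and its 95% confidence interval are computed from two-arm counts. The remission counts used are hypothetical, not the ESCAPE-TRD data.

```python
# Minimal sketch: relative risk of remission with a 95% CI via the
# log-RR normal approximation. The counts below are hypothetical,
# chosen only for illustration; they are not the ESCAPE-TRD data.
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of arm A vs arm B with a Wald-type 95% CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    # Standard error of log(RR)
    se = math.sqrt((1 - p_a) / events_a + (1 - p_b) / events_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical example: 91/336 remitters vs 60/340 remitters
rr, lo, hi = relative_risk(91, 336, 60, 340)
print(f"RR = {rr:.3f} (95% CI {lo:.3f}, {hi:.3f})")
```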
This Element provides a non-technical overview of agent-based modelling (ABM), a methodology that can be applied to economics as well as to other natural and social sciences. It presents the introductory notions and historical background of ABM and a general overview of the tools and characteristics of this kind of model, with particular focus on more advanced topics such as validation and sensitivity analysis. Agent-based simulation is an increasingly popular methodology that is well suited to studying problems of computational complexity in systems populated by heterogeneous interacting agents.
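For a flavour of the approach, below is a minimal, entirely hypothetical agent-based model (not taken from the Element): heterogeneous agents with random endowments interact pairwise, and an aggregate statistic emerges from the micro-level interactions.

```python
# A minimal agent-based model sketch: heterogeneous agents with random
# pairwise exchange (a toy "wealth transfer" economy). Illustrative only;
# the Element covers far richer designs, validation and sensitivity analysis.
import random

random.seed(42)

N, STEPS = 100, 10_000
# Heterogeneous initial endowments
wealth = [random.uniform(1.0, 10.0) for _ in range(N)]

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)      # two interacting agents
    transfer = 0.1 * min(wealth[i], wealth[j])
    wealth[i] -= transfer                  # i gives a fixed share...
    wealth[j] += transfer                  # ...to j

wealth.sort()
top_10_share = sum(wealth[-N // 10:]) / sum(wealth)
print(f"Share of total wealth held by the top 10%: {top_10_share:.2%}")
```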
Carefully designing blade geometric parameters is necessary, as they determine the aerodynamic performance of a rotor. However, manufacturing inaccuracies cause the blade geometric parameters to deviate randomly from the ideal design. It is therefore essential to quantify the uncertainty and analyse the sensitivity of the compressor performance to the blade geometric deviations. This work considers a subsonic compressor rotor stage and examines samples with different geometry features using three-dimensional Reynolds-averaged Navier–Stokes simulations. A method combining a Halton sequence with non-intrusive polynomial chaos is adopted to perform the uncertainty quantification (UQ) analysis. The Sobol’ index and the Spearman correlation coefficient are used to analyse, respectively, the sensitivity of the compressor performance to the blade geometric deviations and the correlation between them. The results show that the fluctuation amplitude of the compressor performance decreases at lower mass flow rates, and that the sensitivity of the compressor performance to the blade geometric parameters varies with the working conditions. The effects of the various blade geometric deviations on the compressor performance are independent and linearly superimposed, and the combined effects of different geometric deviations on the compressor performance are small.
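The sampling-and-correlation step can be sketched with a hypothetical surrogate standing in for the RANS solver: quasi-random geometric deviations are drawn from a Halton sequence (here via scipy.stats.qmc) and each deviation is rank-correlated with a performance output. The deviation names and the surrogate are illustrative assumptions, not the paper's setup.

```python
# Sketch: Halton sampling of geometric deviations plus Spearman rank
# correlation against a toy performance surrogate. The surrogate below
# is hypothetical; the paper evaluates 3D RANS simulations instead.
import numpy as np
from scipy.stats import qmc, spearmanr

names = ["stagger_dev", "thickness_dev", "chord_dev"]   # hypothetical deviations
sampler = qmc.Halton(d=3, seed=0)
# Scale unit-hypercube samples to +/-1 (normalised deviation range)
X = qmc.scale(sampler.random(n=256), [-1, -1, -1], [1, 1, 1])

def efficiency(x):
    """Toy performance surrogate standing in for the CFD solver."""
    return 0.90 - 0.010 * x[:, 0] - 0.004 * x[:, 1] ** 2 + 0.002 * x[:, 2]

Y = efficiency(X)
for k, name in enumerate(names):
    rho, p = spearmanr(X[:, k], Y)
    print(f"{name}: Spearman rho = {rho:+.3f} (p = {p:.1e})")
```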
Open rotors can play a critical role in the transition towards more sustainable aviation by providing a fuel-efficient alternative. This paper considers the sensitivity of an open-rotor engine to variations in three operational parameters during take-off, focusing on both aerodynamics and aeroacoustics. A sensitivity analysis offers insight into the complex interactions between the two. Numerical methods have been implemented for both the aerodynamics and the aeroacoustics of the engine: the flowfield has been solved using the unsteady Reynolds-averaged Navier–Stokes equations, and the acoustic footprint of the engine has been quantified through the Ffowcs Williams–Hawkings equations. The analysis concluded that the aerodynamic performance of the open rotor can be decisively impacted by small variations in the operational parameters. Specifically, blade loading increased by 9.8% for a 5% decrease in inlet total temperature, with the uncertainty being amplified through the engine. In comparison, the aeroacoustic footprint of the engine showed more moderate variations, with the overall sound pressure level increasing by up to 2.4 dB for a microphone lying on the engine axis aft of the inlet. The results signify that there is considerable sensitivity in the model, which should be systematically examined during the design or optimisation process.
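The overall sound pressure level quoted above follows from a standard formula; the sketch below shows the computation on a synthetic microphone signal. The signal and all values are illustrative, not the paper's data.

```python
# Sketch of computing an overall sound pressure level (OASPL) from a
# pressure time history, as one would post-process an FW-H microphone
# signal. The synthetic signal below is illustrative, not engine data.
import numpy as np

P_REF = 20e-6                      # reference pressure in air, Pa

def oaspl(p, p_ref=P_REF):
    """OASPL in dB from a pressure fluctuation time series (Pa)."""
    p_rms = np.sqrt(np.mean((p - np.mean(p)) ** 2))
    return 20.0 * np.log10(p_rms / p_ref)

# Synthetic blade-passing tone plus broadband noise
t = np.linspace(0.0, 0.1, 48_000, endpoint=False)
p = (2.0 * np.sin(2 * np.pi * 1_200 * t)
     + 0.3 * np.random.default_rng(0).normal(size=t.size))
print(f"OASPL = {oaspl(p):.1f} dB")
```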
This chapter applies the total error framework presented in Chapter 5 to a case example of preelection polling during the 2016 US presidential election. Here, the focus is on problems with a single poll.
The United States Congress passed the 21st Century Cures Act mandating the development of Food and Drug Administration guidance on regulatory use of real-world evidence. The Forum on the Integration of Observational and Randomized Data conducted a meeting with various stakeholder groups to build consensus around best practices for the use of real-world data (RWD) to support regulatory science. Our companion paper describes in detail the context and discussion of the meeting, which includes a recommendation to use a causal roadmap for study designs using RWD. This article discusses one step of the roadmap: the specification of a sensitivity analysis for testing robustness to violations of causal model assumptions.
Methods:
We present an example of a sensitivity analysis from an RWD study on the effectiveness of Nifurtimox in treating Chagas disease, together with an overview of various methods, emphasizing practical considerations for their use for regulatory purposes.
Results:
Sensitivity analyses must be accompanied by careful design of the other aspects of the causal roadmap. Their prespecification is crucial to avoid erroneous conclusions arising from researcher degrees of freedom. Sensitivity analysis methods require auxiliary information to produce meaningful conclusions; it is important that they have at least two properties: the validity of the conclusions should not rely on unverifiable assumptions, and the auxiliary information required by the method should be learnable from the corpus of current scientific knowledge.
Conclusions:
Prespecified and assumption-lean sensitivity analyses are a crucial tool that can strengthen the validity and trustworthiness of effectiveness conclusions for regulatory science.
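One concrete instance of such an assumption-lean method is the E-value of VanderWeele and Ding, which reports the minimum strength of unmeasured confounding, on the risk-ratio scale, needed to explain away an observed association. The sketch below uses a hypothetical estimate, not the Nifurtimox study result.

```python
# E-value sketch: minimum confounding strength (risk-ratio scale) that
# could fully explain an observed association. Values are hypothetical.
import math

def e_value(rr):
    """E-value for a risk ratio; ratios below 1 are inverted first."""
    if rr < 1:
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

rr_point, rr_ci_low = 1.8, 1.2          # hypothetical estimate and CI bound
print(f"E-value (point estimate): {e_value(rr_point):.2f}")
print(f"E-value (CI bound):       {e_value(rr_ci_low):.2f}")
```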
The curse of dimensionality confounds the comprehensive evaluation of computational structural mechanics problems. Adequately capturing complex material behavior and interacting physical phenomena in models can lead to long run times and large memory requirements, so that substantial computational resources are needed to analyze a single scenario for a single set of input parameters. The computational requirements are then compounded by the number and range of input parameters (spanning material properties, loading, boundary conditions, and model geometry) that must be evaluated to characterize behavior, identify dominant parameters, perform uncertainty quantification, and optimize performance. To reduce model dimensionality, global sensitivity analysis (GSA) enables the identification of the input parameters that dominate a specific structural performance output. However, many distinct GSA methods are available, presenting a challenge when selecting the optimal approach for a specific problem. While substantial documentation in the literature details the methodology and derivation of GSA methods, application-based case studies focus on fields such as finance, chemistry, and environmental science. To inform the nonexpert user's selection and implementation of a GSA method for structural mechanics problems, this article investigates five of the most widespread GSA methods on commonly used structural mechanics methods and models of varying dimensionality and complexity. It is concluded that all methods can identify the most dominant parameters, although with significantly different computational costs and quantitative capabilities. Method selection therefore depends on the computational resources, the information required from the GSA, and the available data.
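To make the variance-based branch of GSA concrete, here is a minimal NumPy sketch of Sobol first-order and total-order indices estimated with the Saltelli pick-freeze scheme on a toy structural response. The response function is a hypothetical stand-in; packages such as SALib provide production implementations of these and other GSA methods.

```python
# Sobol index sketch (Saltelli/Jansen pick-freeze estimators) on a toy
# "tip deflection" response. The response function is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
D, N = 3, 20_000                       # inputs: E-modulus, load, thickness

def response(x):
    """Toy tip deflection ~ load / (E * thickness^3); inputs in [0, 1]."""
    E, P, t = 1.0 + x[:, 0], 1.0 + x[:, 1], 0.5 + x[:, 2]
    return P / (E * t ** 3)

A, B = rng.random((N, D)), rng.random((N, D))
yA, yB = response(A), response(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(D):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # "pick-freeze": swap column i
    yABi = response(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var_y          # first-order index
    ST = 0.5 * np.mean((yA - yABi) ** 2) / var_y    # total-order index
    print(f"x{i + 1}: S1 = {S1:.3f}, ST = {ST:.3f}")
```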
Increasing emphasis on the use of real-world evidence (RWE) to support clinical policy and regulatory decision-making has led to a proliferation of guidance, advice, and frameworks from regulatory agencies, academia, professional societies, and industry. A broad spectrum of studies use real-world data (RWD) to produce RWE, ranging from randomized trials with outcomes assessed using RWD to fully observational studies. Yet, many proposals for generating RWE lack sufficient detail, and many analyses of RWD suffer from implausible assumptions, other methodological flaws, or inappropriate interpretations. The Causal Roadmap is an explicit, itemized, iterative process that guides investigators to prespecify study design and analysis plans; it addresses a wide range of guidance within a single framework. By supporting the transparent evaluation of causal assumptions and facilitating objective comparisons of design and analysis choices based on prespecified criteria, the Roadmap can help investigators to evaluate the quality of evidence that a given study is likely to produce, specify a study to generate high-quality RWE, and communicate effectively with regulatory agencies and other stakeholders. This paper aims to disseminate and extend the Causal Roadmap framework for use by clinical and translational researchers; three companion papers demonstrate applications of the Causal Roadmap for specific use cases.
Causal inference from observational data is notoriously difficult, and relies upon many unverifiable assumptions, including no confounding or selection bias. Here, we demonstrate how to apply a range of sensitivity analyses to examine whether a causal interpretation from observational data may be justified. These methods include: testing different confounding structures (as the assumed confounding model may be incorrect), exploring potential residual confounding and assessing the impact of selection bias due to missing data. We aim to answer the causal question ‘Does religiosity promote cooperative behaviour?’ as a motivating example of how these methods can be applied. We use data from the parental generation of a large-scale (n = approximately 14,000) prospective UK birth cohort (the Avon Longitudinal Study of Parents and Children), which has detailed information on religiosity and potential confounding variables, while cooperation was measured via self-reported history of blood donation. In this study, there was no association between religious belief or affiliation and blood donation. Religious attendance was positively associated with blood donation, but could plausibly be explained by unmeasured confounding. In this population, evidence that religiosity causes blood donation is suggestive, but rather weak. These analyses illustrate how sensitivity analyses can aid causal inference from observational research.
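One of the residual-confounding checks described can be illustrated with the Ding–VanderWeele bounding factor, which caps how much an unmeasured confounder of a given strength could shrink an observed risk ratio. The numbers below are hypothetical, not the ALSPAC estimates.

```python
# Residual-confounding sensitivity sketch: bound the confounding-adjusted
# risk ratio for a grid of confounder strengths. Values are hypothetical.
def bias_bound(rr_obs, rr_eu, rr_ud):
    """Lower bound on the adjusted RR given a confounder with
    exposure-confounder association rr_eu and confounder-outcome
    association rr_ud (both on the risk-ratio scale)."""
    bf = rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)
    return rr_obs / bf

rr_obs = 1.3                           # hypothetical attendance-donation RR
for strength in (1.2, 1.5, 2.0, 3.0):
    adj = bias_bound(rr_obs, strength, strength)
    print(f"confounder strength RR = {strength}: adjusted RR >= {adj:.2f}")
```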
Survey weighting allows researchers to account for bias in survey samples due to unit nonresponse or convenience sampling, using measured demographic covariates. Unfortunately, in practice, it is impossible to know whether the estimated survey weights are sufficient to alleviate concerns about bias due to unobserved confounders or incorrect functional forms used in weighting. In this paper, we propose two sensitivity analyses for the exclusion of important covariates: (1) a sensitivity analysis for partially observed confounders (i.e., variables measured in the survey sample but not in the target population) and (2) a sensitivity analysis for fully unobserved confounders (i.e., variables measured in neither the survey nor the target population). We provide graphical and numerical summaries of the potential bias arising from such confounders, and we introduce a benchmarking approach that allows researchers to reason quantitatively about the sensitivity of their results. We demonstrate the proposed sensitivity analyses using state-level 2020 U.S. Presidential Election polls.
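To fix ideas, here is a toy post-stratification example in Python, together with a crude bound on how much an omitted within-cell confounder could move the weighted estimate. All shares and outcomes are invented for illustration; the paper's sensitivity analyses are more refined.

```python
# Toy post-stratification weighting and a crude omitted-covariate bound.
# Population shares, sample shares and outcomes are invented.
import numpy as np

# Cells defined by one observed covariate (education: low/high)
pop_share    = np.array([0.60, 0.40])        # known population shares
sample_share = np.array([0.35, 0.65])        # overrepresented high-education
weights      = pop_share / sample_share      # post-stratification weights

cell_mean    = np.array([0.52, 0.41])        # outcome mean per cell
# weights * sample_share == pop_share, so this is the post-stratified mean
weighted_est = np.sum(weights * sample_share * cell_mean)
print(f"weighted estimate: {weighted_est:.3f}")

# If an unobserved confounder could shift each within-cell mean by up to
# +/- delta, the weighted estimate (a convex combination of cell means)
# shifts by at most +/- delta:
for delta in (0.02, 0.05, 0.10):
    print(f"within-cell shift +/-{delta:.2f} -> estimate in "
          f"[{weighted_est - delta:.3f}, {weighted_est + delta:.3f}]")
```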
The Welfare Quality® (WQ) protocol for on-farm dairy cattle welfare assessment describes 33 measures and a step-wise method for integrating the outcomes into 12 criteria scores, which are grouped into four principle scores and into an overall welfare categorisation with four possible levels. The relative contribution of the various welfare measures to the integrated scores has been contested. Using a European dataset (491 herds), we investigated: i) the variation in sensitivity of the integrated outcomes to extremely low and high values of measures, criteria and principles, by replacing each actual value with the minimum and maximum observed and theoretically possible values; and ii) the reasons for this variation in sensitivity. As intended by the WQ consortium, the sensitivity of the integrated scores depends on: i) the observed value of the specific measures/criteria; ii) whether the change was positive/negative; and iii) the relative weight attributed to the measures. Additionally, two unintended factors of considerable influence appear to be side-effects of the complexity of the integration method, namely: i) the number of measures integrated into the criteria and principle scores; and ii) the aggregation method of the measures. As a result, resource-based measures related to drinkers (which have been criticised with respect to their validity for assessing the absence of prolonged thirst) have a much larger influence on the integrated scores than health-related measures such as ‘mortality rate’ and ‘lameness score’. Hence, the integration method of the WQ protocol for dairy cattle should be revised to ensure that the relative contributions of the various welfare measures to the integrated scores more accurately reflect their relevance for dairy cattle welfare.
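The first unintended factor is easy to see in a toy calculation: the fewer measures aggregated into a score, the more a single extreme measure moves it. The sketch below uses a simple mean as the aggregate, a deliberate simplification of the WQ protocol's far more complex integration method.

```python
# Toy illustration: impact of one extreme measure on an aggregated score
# shrinks as the number of aggregated measures grows. Scores are invented,
# and a plain mean stands in for the WQ protocol's actual aggregation.
import numpy as np

def shift_from_one_extreme(n_measures, baseline=60.0):
    scores = np.full(n_measures, baseline)
    scores[0] = 0.0                        # push one measure to its minimum
    return baseline - scores.mean()        # drop in the mean aggregate

for n in (2, 4, 8, 16):
    print(f"{n:2d} measures: score drops by {shift_from_one_extreme(n):.1f} points")
```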
The theoretical background of sensitivity analysis, especially the deterministic approach, is described, along with definitions of the forward sensitivity coefficient, the adjoint sensitivity coefficient and the relative sensitivity coefficient, together with examples of their practical applications. Concepts, strategies and applications of adaptive (targeted) observations are discussed, using adjoint sensitivity analysis, singular vectors, the ensemble transform Kalman filter and conditional nonlinear optimal perturbations. Forecast sensitivity to observations is also discussed as a tool for assessing the impact of observations. In addition, various targeting field programs are introduced.
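For reference, the standard definitions following common usage (the chapter's own notation may differ):

```latex
% Standard definitions, following common usage; the chapter's own
% notation may differ. y_j is a model output, x_i an input or initial
% condition, J a scalar forecast aspect.
\begin{align}
  S_{ij} &= \frac{\partial y_j}{\partial x_i}
    && \text{forward sensitivity coefficient} \\
  \hat{S}_i &= \frac{\partial J}{\partial x_i}
    && \text{adjoint sensitivity of a scalar forecast aspect } J \\
  R_{ij} &= \frac{x_i}{y_j}\,\frac{\partial y_j}{\partial x_i}
    && \text{relative (normalised) sensitivity coefficient}
\end{align}
```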
This chapter illustrates how to apply explicit Bayesian analysis to scrutinize qualitative research, pinpoint sources of disagreement on inferences, and facilitate consensus-building discussions among scholars, highlighting examples of intuitive Bayesian reasoning as well as departures from Bayesian principles in published research.
We establish sufficient conditions for differentiability of the expected cost collected over a discrete-time Markov chain until it enters a given set. The parameter with respect to which differentiability is analysed may simultaneously affect the Markov chain and the set defining the stopping criterion. The general statements on differentiability lead to unbiased gradient estimators.
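One classical construction of such an unbiased estimator is the likelihood-ratio (score-function) method; the sketch below applies it to a toy random walk whose up-step probability depends on the parameter, estimating the derivative of the expected hitting time. The paper's conditions also cover parameters that affect the stopping set itself, which this sketch does not attempt.

```python
# Likelihood-ratio gradient estimator sketch: d/dtheta of the expected
# cost (here, hitting time) of a Markov chain absorbed at {0, n_states}.
# Chain and cost are toy choices, not the paper's setting.
import math
import random

random.seed(0)

def sigmoid(theta):
    return 1.0 / (1.0 + math.exp(-theta))

def run(theta, n_states=10, start=5):
    """Random walk on {0, ..., n_states}, absorbed at either end.
    Returns (total cost, d/dtheta of the path log-likelihood)."""
    p = sigmoid(theta)                 # up-step probability
    x, cost, score = start, 0.0, 0.0
    while 0 < x < n_states:
        cost += 1.0                    # unit cost per step: hitting time
        if random.random() < p:
            x, score = x + 1, score + (1.0 - p)   # d/dtheta log p
        else:
            x, score = x - 1, score - p           # d/dtheta log (1 - p)
    return cost, score

theta, n_rep = 0.2, 100_000
samples = (run(theta) for _ in range(n_rep))
grad = sum(c * s for c, s in samples) / n_rep      # unbiased for dJ/dtheta
print(f"estimated dJ/dtheta at theta = {theta}: {grad:.2f}")
```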
Laser Powder Bed Fusion is the most widespread additive manufacturing process for metals. In the literature, there are several analytical models for estimating the manufacturing cost. However, few papers present sensitivity analyses evaluating which product and process parameters are most relevant to the production cost. This paper presents a cost model, elaborated from previous studies, which is used in a sensitivity analysis. The most relevant process parameters observed in the sensitivity analysis are the 3D printer load factor, layer thickness, raw material price and laser speed.
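A toy cost model and a one-at-a-time elasticity calculation illustrate the kind of sensitivity analysis meant; the model form, parameter names and values below are illustrative stand-ins, not the paper's model.

```python
# Toy LPBF cost-per-part model with one-at-a-time elasticities
# (percent change in cost per percent change in a parameter).
# All names, values and the model form are hypothetical.
base = {
    "load_factor": 0.7,        # machine utilisation (0-1)
    "layer_thickness_mm": 0.04,
    "material_eur_kg": 90.0,
    "laser_speed_mm_s": 900.0,
}

def cost_per_part(p, part_kg=0.5, machine_eur_h=45.0, exposure_k=2.0e5):
    """Hypothetical cost model: material plus machine time."""
    build_h = exposure_k / (p["layer_thickness_mm"] * p["laser_speed_mm_s"]) / 3600.0
    machine = machine_eur_h * build_h / p["load_factor"]
    return part_kg * p["material_eur_kg"] + machine

c0 = cost_per_part(base)
print(f"baseline cost per part: {c0:.2f} EUR")
for name in base:
    bumped = dict(base)
    bumped[name] *= 1.01                          # +1% perturbation
    elasticity = (cost_per_part(bumped) - c0) / c0 / 0.01
    print(f"{name:>18}: elasticity = {elasticity:+.2f}")
```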
As an important index for quantitatively measuring the motion performance of a manipulator, motion reliability is affected by many factors, such as joint clearance. The present research used a UR10 manipulator as the research object. A model mapping the factors that influence motion reliability was established, comprehensively considering the link flexibility factor, joint flexibility factor, joint clearance factor and Denavit–Hartenberg (DH) parameters. The coupling relationships among the various factors were concisely expressed. Subsequently, the nonlinear response surface method was used to calculate the reliability and sensitivity of the manipulator, providing an applicable reference for its trajectory planning and motion control. In addition, a data-driven fault diagnosis method based on kernel principal component analysis (KPCA) was used to verify the motion accuracy and sensitivity of the manipulator, with joint rotation failure considered as an example to verify the accuracy of the KPCA method. This study of the motion reliability of a manipulator is of great significance for evaluating its current motion performance, adjusting its control strategy and optimising the completion of its motion tasks.
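A minimal sketch of a KPCA-based fault check of the sort described, using scikit-learn on synthetic joint residuals with an injected joint-rotation fault. The data and the simplified score statistic are assumptions for illustration; the paper's diagnosis pipeline is more elaborate.

```python
# KPCA fault-check sketch: fit kernel PCA on nominal joint residuals,
# then flag test points far from the nominal score cloud (a simplified
# Hotelling-T2-style statistic). Data are synthetic.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
nominal = rng.normal(0.0, 0.05, size=(300, 6))          # 6 joint residuals
faulty = nominal[:20].copy()
faulty[:, 2] += 0.4                                     # joint-3 rotation fault

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=5.0)
z_nom = kpca.fit_transform(nominal)
z_test = kpca.transform(faulty)

mu, sd = z_nom.mean(axis=0), z_nom.std(axis=0)
t2 = np.sum(((z_test - mu) / sd) ** 2, axis=1)          # simplified T^2 score
threshold = np.percentile(np.sum(((z_nom - mu) / sd) ** 2, axis=1), 99)
print(f"faulty samples flagged: {(t2 > threshold).sum()} / {len(t2)}")
```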
The knowledge gained in the previous two chapters leads to procedures for computing solutions to the Navier–Stokes equations in 2D and 3D. Chapter 6 explains the major components and functions of a typical Reynolds-averaged Navier–Stokes (RANS) code, including the modeling of turbulence in steady or unsteady flows. Convergence acceleration devices, including multigrid techniques, are explained. The finite-volume formulation and standard physical modeling of turbulence yield the RANS equations used in most computational fluid dynamics (CFD) codes directed toward compressible-flow aeronautical applications. By taking the reader through a RANS application step by step, this chapter illustrates the process that an informed CFD user needs to know when applying a typical code of this genus to aerodynamic design. Two practical cases of transonic flow over an airfoil – one in steady flow and the other in unsteady buffeting flow – demonstrate execution of the workflow. Computing a Mach sweep across the entire transonic regime, the steady-flow example exhibits the nonlinear phenomenon of shock stall. Mastering this chapter makes the student a reasonably well-informed CFD user who understands how to carry out a sensitivity analysis to demonstrate CFD due diligence.
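One common form of such due diligence is a mesh sensitivity (grid convergence) study; the sketch below applies Richardson extrapolation and Roache's grid convergence index (GCI) to three hypothetical drag values from systematically refined grids.

```python
# Grid convergence sketch: observed order of accuracy, Richardson
# extrapolation, and fine-grid GCI from three grid solutions. The drag
# coefficients below are hypothetical.
import math

f1, f2, f3 = 0.02834, 0.02866, 0.02940   # fine, medium, coarse drag coefficient
r = 2.0                                   # grid refinement ratio

p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)      # observed order
f_exact = f1 + (f1 - f2) / (r ** p - 1.0)              # Richardson estimate
gci_fine = 1.25 * abs((f2 - f1) / f1) / (r ** p - 1.0) # fine-grid GCI

print(f"observed order p   = {p:.2f}")
print(f"extrapolated value = {f_exact:.5f}")
print(f"fine-grid GCI      = {gci_fine:.2%}")
```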
The Scenario Weights for Importance Measurement (SWIM) package implements a flexible sensitivity analysis framework, based primarily on results and tools developed by Pesenti et al. (2019). SWIM provides a stressed version of a stochastic model, subject to model components (random variables) fulfilling given probabilistic constraints (stresses). Stresses can be applied to moments, probabilities of given events, and risk measures such as Value-at-Risk and Expected Shortfall. SWIM operates on a single set of simulated scenarios from a stochastic model, returning scenario weights, which encode the required stress and allow monitoring of the impact of the stress on all model components. The scenario weights are calculated to minimise the relative entropy with respect to the baseline model, subject to the stress applied. As well as calculating scenario weights, the package provides tools for the analysis of stressed models, including plotting facilities and the evaluation of sensitivity measures. SWIM requires neither additional evaluations of the simulation model nor explicit knowledge of its underlying statistical and functional relations; it is therefore suitable for the analysis of black-box models. The capabilities of SWIM are demonstrated through a case study of a credit portfolio model.
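SWIM itself is an R package; the Python sketch below only mirrors the underlying principle for the simplest case, a stress on the mean of one component, where the minimum-relative-entropy weights reduce to an exponential tilt. It does not reproduce SWIM's API.

```python
# Minimum-relative-entropy scenario weights for a mean stress: the
# optimal weights are w_i ~ exp(theta * x_i), with theta chosen so the
# weighted mean hits the stressed target. Scenario data are simulated.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
x = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # baseline loss scenarios

def tilted_mean(theta):
    w = np.exp(theta * x)
    return np.sum(w * x) / np.sum(w)

target = 1.25 * x.mean()                               # stress: +25% on the mean
theta = brentq(lambda t: tilted_mean(t) - target, 0.0, 5.0)
w = np.exp(theta * x)
w /= w.mean()                                          # normalise to mean 1

print(f"theta = {theta:.4f}, stressed mean = {np.sum(w * x) / w.sum():.4f}")
# The same weights can then be used to monitor the stress's impact on
# any other component simulated alongside x.
```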
Written in a conversational tone, this classroom-tested text introduces the fundamentals of linear programming and game theory, showing readers how to apply serious mathematics to practical real-life questions by modelling linear optimization problems and strategic games. The treatment of linear programming includes two distinct graphical methods. The game theory chapters include a novel proof of the minimax theorem for 2×2 zero-sum games. In addition to zero-sum games, the text presents variable-sum games, ordinal games, and n-player games as the natural result of relaxing or modifying the assumptions of zero-sum games. All concepts and techniques are derived from motivating examples, building in complexity, which encourages students to think creatively and leads them to understand how the mathematics is applied. With no prerequisite besides high school algebra, the text will be useful to motivated high school students and undergraduates studying business, economics, mathematics, and the social sciences.
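For a taste of the 2×2 theory, the classical equalising-strategy formulas solve any 2×2 zero-sum game without a saddle point; the payoff matrix below is an arbitrary example, not one from the text.

```python
# Solve a 2x2 zero-sum game by the classical equalising-strategy
# formulas (mixed strategies), falling back to the pure-strategy value
# when a saddle point exists. The payoff matrix is an arbitrary example.
def solve_2x2(a, b, c, d):
    """Row player's payoffs [[a, b], [c, d]]; returns (p, q, value)."""
    # Saddle-point check: pure maximin equals pure minimax
    row_mins = [min(a, b), min(c, d)]
    col_maxs = [max(a, c), max(b, d)]
    if max(row_mins) == min(col_maxs):
        return None, None, max(row_mins)   # value at the saddle point
    denom = a - b - c + d
    p = (d - c) / denom                    # P(row player plays row 1)
    q = (d - b) / denom                    # P(column player plays column 1)
    value = (a * d - b * c) / denom
    return p, q, value

p, q, v = solve_2x2(3, -1, -2, 4)
print(f"row mixes (p, 1-p) = ({p:.2f}, {1 - p:.2f}); value = {v:.2f}")
```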