This Element examines how climate scientists have arrived at answers to three key questions about climate change: How much is Earth's climate warming? What is causing this warming? What will the climate be like in the future? Resources from philosophy of science are employed to analyse the methods that climate scientists use to address these questions and the inferences that they make from the evidence collected. Along the way, the analysis contributes to broader philosophical discussions of data modelling and measurement, robustness analysis, explanation, and model evaluation.
Noise is a ubiquitous feature of the environment for all organisms growing in nature. Noise (defined here as stochastic variation) in the availability of nutrients, water and light profoundly impacts their growth and development. Noise is not only present as an external factor; cellular processes themselves are noisy. It is therefore remarkable that organisms display robust control of growth and development despite noise. To survive, various mechanisms to suppress noise have evolved. However, it is also becoming apparent that noise is not just a nuisance that organisms must suppress: it can be beneficial, as low levels of noise can facilitate the response of an organism to a sub-threshold input signal through a stochastic resonance mechanism. This review discusses mechanisms capable of noise suppression or noise leveraging that might play a significant role in robust temporal regulation of an organism's response to its noisy environment.
Text classification methods have been widely investigated as a way to detect content of low credibility: fake news, social media bots, propaganda, etc. Quite accurate models (typically based on deep neural networks) help in moderating public electronic platforms and often cause content creators to face rejection of their submissions or removal of already published texts. Having the incentive to evade further detection, content creators try to come up with a slightly modified version of the text (known as an attack with an adversarial example) that exploits the weaknesses of classifiers and results in a different output. Here we systematically test the robustness of common text classifiers against available attacking techniques and discover that, indeed, meaning-preserving changes in input text can mislead the models. The approaches we test focus on finding vulnerable spans in text and replacing individual characters or words, taking into account the similarity between the original and replacement content. We also introduce BODEGA: a benchmark for testing both victim models and attack methods on four misinformation detection tasks in an evaluation framework designed to simulate real use cases of content moderation. The attacked tasks include (1) fact checking and detection of (2) hyperpartisan news, (3) propaganda, and (4) rumours. Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions, e.g. attacks on GEMMA being up to 27% more successful than those on BERT. Finally, we manually analyse a subset of adversarial examples and check what kinds of modifications are used in successful attacks.
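The word-replacement attacks described above can be sketched in a few lines. This is an illustrative toy, not the BODEGA code: the greedy search, the synonym table, and the keyword-based "victim" classifier are all invented here for demonstration.

```python
# Illustrative sketch of a greedy word-substitution attack: try
# meaning-preserving replacements one word at a time and keep the first
# change that flips the victim classifier's label.

def greedy_word_attack(text, classify, synonyms):
    """Return an adversarial variant of `text` that changes the label,
    or None if no single-word substitution succeeds."""
    original_label = classify(text)
    words = text.split()
    for i, w in enumerate(words):
        for candidate in synonyms.get(w.lower(), []):
            trial = words.copy()
            trial[i] = candidate
            trial_text = " ".join(trial)
            if classify(trial_text) != original_label:
                return trial_text  # successful adversarial example
    return None

# Toy "victim": flags any text containing the word "fake".
def toy_classifier(text):
    return "low_credibility" if "fake" in text.lower() else "credible"

synonyms = {"fake": ["fabricated", "bogus"]}
adv = greedy_word_attack("this fake story spread fast", toy_classifier, synonyms)
# adv == "this fabricated story spread fast", which the victim labels "credible"
```

Real attack methods score candidate replacements by semantic similarity to the original rather than using a fixed synonym table, but the control flow is essentially this greedy loop.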
This Element aims to build, promote, and consolidate a new social science research agenda by defining and exploring the concepts of turbulence and robustness, and subsequently demonstrating the need for robust governance in turbulent times. Turbulence refers to the unpredictable dynamics that public governance is currently facing in the wake of the financial crisis, the refugee crisis, the COVID-19 pandemic, the inflation crisis, etc. The heightened societal turbulence calls for robust governance aiming to maintain core functions, goals and values by means of flexibly adapting and proactively innovating the modus operandi of the public sector. This Element identifies a broad repertoire of robustness strategies that public governors may use and combine to respond robustly to turbulence. This title is also available as Open Access on Cambridge Core.
The idea that plants are efficient, frugal or optimised echoes the recurrent semantics of 'blueprint' and 'program' in molecular genetics. However, when analysing plants with quantitative approaches and systems thinking, we instead find that plants are the result of stochastic processes whose many inefficiencies, incoherences and delays fuel their robustness. If one had to highlight the main value of quantitative biology, this could be it: plants are robust systems because they are not efficient. Such systemic insights extend to the way we conduct plant research and open plant science publication to a much broader framework.
This chapter explores ways to diagnose the potential for nonignorable nonresponse to cause problems. Section 7.1 describes how to define the range of population values that are consistent with the observed data. These calculations require virtually no assumptions and are robust to nonignorable nonresponse; they are simple, yet they tend to be uninformative. Section 7.2 shows how to postulate possible levels of nonignorability and assess how results would change.
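The "virtually no assumptions" calculation has a simple worst-case form for a binary outcome, in the style of Manski bounds: nonrespondents could all be 0 or all be 1, so the population mean must lie between those extremes. A hedged sketch (the function name and numbers are illustrative, not from the chapter):

```python
# Worst-case bounds on the population mean of a 0/1 variable under
# arbitrary nonignorable nonresponse: assume every nonrespondent is 0
# for the lower bound, and 1 for the upper bound.

def worst_case_bounds(mean_among_respondents, response_rate):
    lower = mean_among_respondents * response_rate                        # nonrespondents all 0
    upper = mean_among_respondents * response_rate + (1 - response_rate)  # nonrespondents all 1
    return lower, upper

# 60% answered "yes" among the 70% who responded:
lo, hi = worst_case_bounds(0.6, 0.7)
# lo ≈ 0.42, hi ≈ 0.72
```

The width of the interval equals the nonresponse rate, which illustrates why these assumption-free bounds are robust but often uninformative.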
Opinion formation and information processing are affected by unconscious affective responses to stimuli—particularly in politics. Yet we still know relatively little about such affective responses and how to measure them. In this study, we focus on emotional valence and examine facial electromyography (fEMG) measures. We demonstrate the validity of these measures, discuss ways to make measurement and analysis more robust, and consider validity trade-offs in experimental design. In doing so, we hope to support scholars in designing studies that will advance scholarship on political attitudes and behavior by incorporating unconscious affective responses to political stimuli—responses that have too often been neglected by political scientists.
Robustness is a property of system analyses, namely monotonic maps from the complete lattice of subsets of a (system’s state) space to the two-point lattice. The definition of robustness requires the space to be a metric space. Robust analyses cannot discriminate between a subset of the metric space and its closure; therefore, one can restrict to the complete lattice of closed subsets. When the metric space is compact, the complete lattice of closed subsets ordered by reverse inclusion is $\omega$-continuous, and robust analyses are exactly the Scott-continuous maps. Thus, one can also ask whether a robust analysis is computable (with respect to a countable base). The main result of this paper establishes a relation between robustness and Scott continuity when the metric space is not compact. The key idea is to replace the metric space with a compact Hausdorff space, and relate robustness and Scott continuity by an adjunction between the complete lattice of closed subsets of the metric space and the $\omega$-continuous lattice of closed subsets of the compact Hausdorff space. We demonstrate the applicability of this result with several examples involving Banach spaces.
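For reference, the two notions this abstract relies on can be stated compactly; these are standard textbook definitions and the closure property stated above, not results specific to the paper:

```latex
% Scott continuity (standard definition): a monotone map f between
% complete lattices is Scott-continuous iff it preserves directed suprema.
\[
  f\Bigl(\bigvee D\Bigr) \;=\; \bigvee_{d \in D} f(d)
  \qquad \text{for every directed subset } D .
\]
% Closure-insensitivity of a robust analysis A on a metric space X:
% a robust analysis cannot distinguish a set from its closure.
\[
  A(S) \;=\; A\bigl(\,\overline{S}\,\bigr)
  \qquad \text{for every } S \subseteq X .
\]
```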
A domain-theoretic framework is presented for validated robustness analysis of neural networks. First, global robustness of a general class of networks is analyzed. Then, using the fact that Edalat’s domain-theoretic L-derivative coincides with Clarke’s generalized gradient, the framework is extended for attack-agnostic local robustness analysis. The proposed framework is ideal for designing algorithms which are correct by construction. This claim is exemplified by developing a validated algorithm for estimation of the Lipschitz constant of feedforward regressors. The completeness of the algorithm is proved over differentiable networks and also over general position ${\mathrm{ReLU}}$ networks. Computability results are obtained within the framework of effectively given domains. Using the proposed domain model, differentiable and non-differentiable networks can be analyzed uniformly. The validated algorithm is implemented using arbitrary-precision interval arithmetic, and the results of some experiments are presented. The software implementation is truly validated, as it handles floating-point errors as well.
Based on a multisector general equilibrium framework, we show that the sectoral elasticity of substitution plays the key role in the evolution of asymmetric tails of macroeconomic fluctuations and in establishing robustness against productivity shocks. A non-unitary elasticity of substitution renders Domar aggregation nonlinear, whereby normal sectoral productivity shocks translate into non-normal aggregated shocks with variable expected output growth. We empirically estimate 100 sectoral elasticities of substitution using the time-series linked input-output tables for Japan and find that the production economy is elastic overall, relative to a Cobb-Douglas economy with unitary elasticity. Combined with the previous assessment of an inelastic production economy for the USA, this explains the contrasting tail asymmetry of the distribution of aggregated shocks between the USA and Japan. Moreover, the robustness of an economy is assessed by expected output growth, the level of which is driven by the sectoral elasticities of substitution under zero-mean productivity shocks.
For linear stochastic differential equations with bounded coefficients, we establish the robustness of nonuniform mean-square exponential dichotomy (NMS-ED) on $[t_{0},\,+\infty )$, $(-\infty,\,t_{0}]$ and the whole ${\Bbb R}$ separately, in the sense that such an NMS-ED persists under a sufficiently small linear perturbation. The result for the nonuniform mean-square exponential contraction is also discussed. Moreover, in the process of proving the existence of NMS-ED, we use the observation that the projections of the ‘exponential growing solutions’ and the ‘exponential decaying solutions’ on $[t_{0},\,+\infty )$, $(-\infty,\,t_{0}]$ and ${\Bbb R}$ are different but related. Thus, the relations of three types of projections on $[t_{0},\,+\infty )$, $(-\infty,\,t_{0}]$ and ${\Bbb R}$ are discussed.
The question of whether global norms are experiencing a crisis allows for two concurrent answers. From a facticity perspective, certain global norms are in crisis, given their worldwide lack of implementation and effectiveness. From a validity perspective, however, a crisis is not obvious, as these norms are not openly contested discursively and institutionally. In order to explain the double diagnosis (crisis/no crisis), this article draws on international relations research on norm contestation and norm robustness. It proposes the concept of hidden discursive contestation and distinguishes it from three other key types of norm contestation: open discursive, open non-discursive and hidden non-discursive contestation. We identify four manifestations of hidden discursive contestation in: (1) the deflection of responsibility; (2) forestalling norm strengthening; (3) displaying norms as functional means to an end; and (4) downgrading or upgrading single norm elements. Our empirical focus is on the decent work norm, which demonstrates the double diagnosis. While it lacks facticity, it enjoys far-reaching verbal acceptance and high validity. Our qualitative analysis of hidden discursive contestation draws on two case studies: the International Labour Organization’s compliance procedures, which monitor international labour standards, and the United Nations Treaty Process on a binding instrument for business and human rights. Although the two fora have different contexts and policy cycles, they exhibit similar strategies of hidden discursive contestation.
The Markov True and Error (MARTER) model (Birnbaum & Wan, 2020) has three components: a risky decision making model with one or more parameters, a Markov model that describes stochastic variation of parameters over time, and a true and error (TE) model that describes probabilistic relations between true preferences and overt responses. In this study, we simulated data according to 57 generating models that either did or did not satisfy the assumptions of the true and error fitting model, that either did or did not satisfy the error independence assumptions, that either did or did not satisfy transitivity, and that had various patterns of error rates. A key assumption in the TE fitting model is that a person’s true preferences do not change in the short time within a session; that is, preference reversals between two responses by the same person to two presentations of the same choice problem in the same brief session are due to random error. In a set of 48 simulations, data generating models either satisfied this assumption or they implemented a systematic violation, in which true preferences could change within sessions. We used the TE fitting model to analyze the simulated data, and we found that it did a good job of distinguishing transitive from intransitive models and of estimating parameters, not only when the generating model satisfied the model assumptions but also when model assumptions were violated in this way. When the generating model violated the assumptions, statistical tests of the TE fitting models correctly detected the violations. Even when the data contained violations of the TE model, the parameter estimates representing probabilities of true preference patterns were surprisingly accurate, except for error rates, which were inflated by model violations.
In a second set of simulations, the generating model either had error rates that were or were not independent of true preferences and transitivity either was or was not satisfied. It was found again that the TE analysis was able to detect the violations of the fitting model, and the analysis correctly identified whether the data had been generated by a transitive or intransitive process; however, in this case, estimated incidence of a preference pattern was reduced if that preference pattern had a higher error rate. Overall, the violations could be detected and did not affect the ability of the TE analysis to discriminate between transitive and intransitive processes.
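The core TE idea is that an observed response pattern is a mixture over latent true preference patterns, with independent response errors per choice problem. A hedged sketch of that likelihood computation (the two-problem setup, names, and numbers are illustrative, not Birnbaum & Wan's exact parameterization):

```python
# P(observed pattern) = sum over true patterns of
#   P(true pattern) * product over problems of the error/no-error term.

def pattern_probability(observed, true_pattern_probs, error_rates):
    """observed: tuple of 0/1 responses, one per choice problem.
    true_pattern_probs: dict mapping true patterns (tuples) to probabilities.
    error_rates: per-problem probability of responding against one's
    true preference."""
    total = 0.0
    for true_pattern, p in true_pattern_probs.items():
        prob = p
        for t, o, e in zip(true_pattern, observed, error_rates):
            prob *= (1 - e) if t == o else e
        total += prob
    return total

# Two choice problems; true pattern is (1, 1) with prob 0.8, (0, 0) with 0.2.
true_probs = {(1, 1): 0.8, (0, 0): 0.2}
errors = [0.1, 0.1]
p = pattern_probability((1, 0), true_probs, errors)
# (1,1): 0.8*0.9*0.1 = 0.072 ; (0,0): 0.2*0.1*0.9 = 0.018 ; total ≈ 0.09
```

Fitting the TE model amounts to choosing the true-pattern probabilities and error rates that best reproduce the observed frequencies of all response patterns; the "error independence" assumption tested in the second simulation set is the per-problem product inside the loop.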
This volume focuses on the assessments political actors make of the relative fragility and robustness of political orders. The core argument developed and explored throughout its different chapters is that such assessments are subjective and informed by contextually specific historical experiences that have important implications for how leaders respond. Their responses, in turn, feed into processes by which political orders change. The volume's contributions span analyses of political orders at the state, regional and global levels. They demonstrate that assessments of fragility and robustness have important policy implications but that the accuracy of assessments can only be known with certainty ex post facto. The volume will appeal to scholars and advanced students of international relations and comparative politics working on national and international orders.
This chapter defines robustness and fragility, argues that they can only be determined confidently in retrospect, and contends that assessments made by political actors, whilst subjective, have important political implications. We suggest some of the considerations that may shape these assessments. They include ideology, historical lessons, and the Zeitgeist. We go on to describe the following chapters, providing an outline of the book.
I make two related claims: (1) assessments of stability made by political actors and analysts are largely hit or miss; and (2) leader responses to fear of fragility or confidence in robustness are unpredictable in their consequences. Leader assessments are often made with respect to historical lessons derived from dramatic past events that appear relevant to the present. These lessons may or may not be based on good history and may or may not be relevant to the case at hand. Leaders and elites who believe their orders to be robust can help make their beliefs self-fulfilling. However, overconfidence can help make these orders fragile. I argue that leader and elite assessments of robustness and fragility are influenced by cognitive biases and are also often highly motivated. Leaders and their advisors use information selectively and can confirm tautologically the lessons they apply.
We review our theoretical claims in light of the empirical chapters and their evidence that leader assessments matter, are highly subjective, and are very much influenced by ideology and role models. They are also influenced by leader estimates of what needs to be done and their political freedom to act. This, in turn, varies across leaders. The most common response to fragility is denial, although some leaders convince themselves – usually unrealistically – that they can enact far-reaching reforms to address it.
The study of threshold functions has a long history in random graph theory. It is known that the thresholds for minimum degree k, k-connectivity and k-robustness coincide for a binomial random graph. In this paper we consider an inhomogeneous random graph model, which is obtained by including each possible edge independently with an individual probability. Based on an intuitive concept of neighborhood density, we show two sufficient conditions guaranteeing k-connectivity and k-robustness, respectively, which are asymptotically equivalent. Our framework sheds some light on extending uniform threshold values in homogeneous random graphs to threshold landscapes in inhomogeneous random graphs.
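The model described above is easy to state in code. A minimal sketch (not the paper's construction; function names and the degree check are illustrative): each potential edge {i, j} is included independently with its own probability p[i][j], generalizing the single-parameter binomial random graph.

```python
import random

# Sample an inhomogeneous random graph on n vertices: edge {i, j} is
# included independently with probability p[i][j].
def inhomogeneous_random_graph(n, p, seed=None):
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p[i][j]:
                edges.append((i, j))
    return edges

def min_degree(n, edges):
    """Minimum degree, the simplest of the three coinciding thresholds."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return min(deg) if edges else 0

# Sanity check: with all probabilities 1 the graph is complete on 4
# vertices (6 edges), so the minimum degree is 3.
p_all = [[1.0] * 4 for _ in range(4)]
edges = inhomogeneous_random_graph(4, p_all, seed=0)
```

Setting every p[i][j] to the same value recovers the homogeneous binomial model, which is why uniform thresholds become "threshold landscapes" once the probabilities vary per edge.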