The popularity of green, social and sustainability-linked bonds (GSS bonds) continues to rise, with circa US$939 billion of such bonds issued globally in 2023. Given the rising popularity of ESG-related investment solutions, their relatively recent emergence, and the limited research in this field, continued investigation is essential. Extending non-traditional techniques such as neural networks to this field offers a promising blend of innovation and practical potential. This paper follows on from our initial publication, in which we aimed to replicate the S&P Green Bond Index (a time series problem) over a period using non-traditional techniques (neural networks), predicting 1 day ahead. We take the novel approach of applying an N-BEATS model architecture. N-BEATS is a deep feedforward neural network architecture built from basic blocks organised into stacks, and it introduces the novel doubly residual stacking of backcasts and forecasts. In this paper, we also revisit the neural network architectures from our initial publication, which include DNNs, CNNs, GRUs and LSTMs. We continue the univariate time series problem, increasing the data input window from 1 day to 2 and 5 days respectively, whilst still aiming to predict 1 day ahead.
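For concreteness, a minimal PyTorch sketch of an N-BEATS-style block and its doubly residual stack is given below. The layer sizes, stack depth and window length are illustrative assumptions for a 5-day input window, not the configuration trained in the paper.

```python
# Sketch of an N-BEATS-style block and doubly residual stack (illustrative only).
import torch
import torch.nn as nn

class NBeatsBlock(nn.Module):
    def __init__(self, input_size=5, hidden=64, horizon=1):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(input_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.backcast_head = nn.Linear(hidden, input_size)  # reconstruction of the input window
        self.forecast_head = nn.Linear(hidden, horizon)     # contribution to the prediction

    def forward(self, x):
        h = self.fc(x)
        return self.backcast_head(h), self.forecast_head(h)

class NBeatsStack(nn.Module):
    def __init__(self, n_blocks=3, input_size=5, horizon=1):
        super().__init__()
        self.blocks = nn.ModuleList(
            [NBeatsBlock(input_size, horizon=horizon) for _ in range(n_blocks)]
        )

    def forward(self, x):
        residual, forecast = x, 0.0
        for block in self.blocks:
            backcast, partial = block(residual)
            residual = residual - backcast   # doubly residual: remove what the block explained
            forecast = forecast + partial    # accumulate the partial forecasts
        return forecast

# e.g. a 5-day input window predicting 1 day ahead
model = NBeatsStack(input_size=5, horizon=1)
print(model(torch.randn(8, 5)).shape)  # torch.Size([8, 1])
```

Each block emits a backcast (its explanation of the input window) and a forecast; the next block sees only the residual left unexplained, and the stack's output is the sum of the partial forecasts.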
With the emerging developments in millimeter-wave/5G technologies, the potential for wireless Internet of Things devices to achieve widespread sensing, precise localization, and high-data-rate communication becomes increasingly viable. The surge in interest surrounding virtual reality (VR) and augmented reality (AR) technologies is attributed to the vast array of applications they enable, ranging from surgical training to motion capture and daily interactions in VR spaces. To further elevate the user experience through real-time, accurate detection of the user's orientation, the authors propose the use of a frequency-modulated continuous-wave (FMCW) radar system coupled with an ultra-low-power, sticker-like millimeter-wave identification (mmID). The mmID features four backscattering elements, multiplexed in the amplitude, frequency, and spatial domains. The design relies on training a supervised classification convolutional neural network, enabling accurate real-time three-axis orientation detection of the user. The proposed orientation detection system exhibits exceptional performance, achieving a noteworthy accuracy of 90.58% over three axes at a distance of 8 m. This high accuracy underscores the precision of the orientation detection system, particularly tailored for medium-range VR/AR applications. The integration of the FMCW-based mmID system with machine learning proves to be a promising advancement, contributing to seamless and immersive interaction within virtual and augmented environments.
The standard two-step scheme for modeling extracellular signals is to first compute the neural membrane currents using multicompartment neuron models (step 1) and next use the volume-conductor theory to compute the extracellular potential resulting from these membrane currents (step 2). We here give a brief introduction to the multicompartment modeling of neurons in step 1. The formalism presented, which has become the gold standard within the field, combines a Hodgkin-Huxley-type description of membrane mechanisms with the cable theory description of the membrane potential in dendrites and axons.
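For reference, the two steps can be written in their standard textbook forms (a summary in common notation, not a verbatim excerpt from the chapter): step 1 is the cable equation for the membrane potential $V_m$ along a neurite of diameter $d$, axial resistivity $R_a$ and specific membrane capacitance $c_m$, with Hodgkin-Huxley-type ionic currents; step 2 is the point-source volume-conductor formula giving the extracellular potential $\phi$ generated by the $N$ compartmental membrane currents $I_n(t)$ at positions $\mathbf{r}_n$ in a homogeneous medium of conductivity $\sigma$.

```latex
\begin{align}
  c_m \frac{\partial V_m}{\partial t}
    &= \frac{d}{4 R_a}\,\frac{\partial^2 V_m}{\partial x^2}
       \;-\; \sum_k \bar{g}_k\, m_k^{p_k}\, h_k^{q_k}\,\bigl(V_m - E_k\bigr), \\[4pt]
  \phi(\mathbf{r}, t)
    &= \frac{1}{4\pi\sigma}\sum_{n=1}^{N} \frac{I_n(t)}{\lVert \mathbf{r} - \mathbf{r}_n \rVert}.
\end{align}
```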
Chapter 13 discusses neural networks and deep learning; included is a presentation of deep convolutional networks, which seem to have great potential in the classification of medical images.
Guaranteed minimum accumulation benefits (GMABs) are retirement savings vehicles that protect the policyholder against downside market risk. This article proposes a valuation method for these contracts based on physics-inspired neural networks (PINNs), in the presence of multiple financial and biometric risk factors. A PINN integrates principles from physics into its learning process to enhance its efficiency in solving complex problems. In this article, the driving principle is the Feynman–Kac (FK) equation, which is a partial differential equation (PDE) governing the GMAB price in an arbitrage-free market. In our context, the FK PDE depends on multiple variables and is difficult to solve using classical finite difference approximations. In comparison, PINNs constitute an efficient alternative that can evaluate GMABs with various specifications without the need for retraining. To illustrate this, we consider a market with four risk factors. We first derive a closed-form expression for the GMAB that serves as a benchmark for the PINN. Next, we propose a scaled version of the FK equation that we solve using a PINN. Pricing errors are analyzed in a numerical illustration.
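For orientation, the generic form of the FK PDE being solved is the following (a standard statement for a price $V(t,x)$ of a claim with terminal payoff $g(x)$ on risk factors $x$ with drift $\mu$ and diffusion matrix $\Sigma$ under the risk-neutral measure; the specific drift, diffusion, discounting and payoff for the four risk factors considered are those derived in the article itself):

```latex
\begin{equation}
  \frac{\partial V}{\partial t}
  + \sum_i \mu_i(t,x)\,\frac{\partial V}{\partial x_i}
  + \frac{1}{2}\sum_{i,j}\bigl(\Sigma\Sigma^{\top}\bigr)_{ij}(t,x)\,
      \frac{\partial^2 V}{\partial x_i\,\partial x_j}
  - r(t,x)\,V = 0,
  \qquad V(T,x) = g(x).
\end{equation}
```

A PINN approximates $V$ by minimising the squared residual of this equation at sampled interior points, plus the squared error on the terminal (and any boundary) condition.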
In a Model Predictive Control (MPC) setting, precise simulation of the system's behavior over a finite time window is essential. This application-oriented benchmark study focuses on a robot arm that exhibits various nonlinear behaviors. For this arm, we have a physics-based model with approximate parameter values and an open benchmark dataset for system identification. However, the long-term simulation of this model quickly diverges from the actual arm's measurements, indicating its inaccuracy. We compare the accuracy of black-box and purely physics-based approaches with several physics-informed approaches. These either combine a neural network's output with information from the physics-based model or feed the physics-based model's information into the neural network. One of the physics-informed model structures can improve accuracy over a fully black-box model.
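As an illustration of one such combination (not the benchmark's actual models), a "residual" hybrid lets the network learn a correction to the physics-based model's one-step prediction; the placeholder physics function, state dimension and network below are assumptions made for the sketch.

```python
# Residual physics-informed hybrid: physics-based one-step prediction + learned correction.
import torch
import torch.nn as nn

def physics_step(x, u, dt=0.01):
    # placeholder approximate physics model x_{k+1} = x_k + dt * f(x_k, u_k);
    # here f is a toy 2-state oscillator standing in for the arm's equations of motion
    return x + dt * (torch.cat([x[..., 1:], -x[..., :1]], dim=-1) + u)

class ResidualHybrid(nn.Module):
    def __init__(self, n_states=2, n_inputs=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states + n_inputs, hidden), nn.Tanh(),
            nn.Linear(hidden, n_states),
        )

    def forward(self, x, u):
        # physics prediction plus a learned correction of its error
        return physics_step(x, u) + self.net(torch.cat([x, u], dim=-1))

model = ResidualHybrid()
print(model(torch.zeros(1, 2), torch.ones(1, 2)).shape)  # torch.Size([1, 2])
```

The alternative direction mentioned in the abstract, feeding the physics-based model's output into the network as an extra input, amounts to passing `physics_step(x, u)` into `self.net` instead of adding it outside.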
High-cardinality categorical features are pervasive in actuarial data (e.g., occupation in commercial property insurance). Standard categorical encoding methods like one-hot encoding are inadequate in these settings.
In this work, we present a novel Generalised Linear Mixed Model Neural Network (“GLMMNet”) approach to the modelling of high-cardinality categorical features. The GLMMNet integrates a generalised linear mixed model in a deep learning framework, offering the predictive power of neural networks and the transparency of random effects estimates, the latter of which cannot be obtained from the entity embedding models. Further, its flexibility to deal with any distribution in the exponential dispersion (ED) family makes it widely applicable to many actuarial contexts and beyond. In order to facilitate the application of GLMMNet to large datasets, we use variational inference to estimate its parameters—both traditional mean field and versions utilising textual information underlying the high-cardinality categorical features.
We illustrate and compare the GLMMNet against existing approaches in a range of simulation experiments as well as in a real-life insurance case study. A notable feature for both our simulation experiment and the real-life case study is a comparatively low signal-to-noise ratio, which is a feature common in actuarial applications. We find that the GLMMNet often outperforms or at least performs comparably with an entity-embedded neural network in these settings, while providing the additional benefit of transparency, which is particularly valuable in practical applications.
Importantly, while our model was motivated by actuarial applications, it can have wider applicability. The GLMMNet would suit any applications that involve high-cardinality categorical variables and where the response cannot be sufficiently modelled by a Gaussian distribution, especially where the inherent noisiness of the data is relatively high.
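As a rough PyTorch illustration of the idea (not the authors' implementation), a fixed-effects network can be combined with a Gaussian random intercept per category and trained by mean-field variational inference on the ELBO; the Gaussian response, the priors and the architecture below are assumptions made for the sake of the example.

```python
# Mean-field variational sketch of a GLMM-style random intercept + fixed-effects network.
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class GLMMNetSketch(nn.Module):
    def __init__(self, n_features, n_categories, hidden=32):
        super().__init__()
        self.fixed = nn.Sequential(                        # fixed-effects network f(x)
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # mean-field variational posterior q(u_c) = N(m_c, s_c^2) per category
        self.u_mean = nn.Parameter(torch.zeros(n_categories))
        self.u_logstd = nn.Parameter(torch.full((n_categories,), -2.0))
        self.prior_logstd = nn.Parameter(torch.zeros(()))  # random-effect scale (assumed prior)

    def elbo(self, x, cat, y):
        u = self.u_mean + torch.randn_like(self.u_mean) * self.u_logstd.exp()  # reparameterised draw
        mu = self.fixed(x).squeeze(-1) + u[cat]            # linear predictor: fixed + random intercept
        log_lik = Normal(mu, 1.0).log_prob(y).sum()        # assumed Gaussian response
        kl = kl_divergence(
            Normal(self.u_mean, self.u_logstd.exp()),
            Normal(torch.zeros_like(self.u_mean), self.prior_logstd.exp()),
        ).sum()
        return log_lik - kl

model = GLMMNetSketch(n_features=5, n_categories=100)
x, cat, y = torch.randn(32, 5), torch.randint(0, 100, (32,)), torch.randn(32)
loss = -model.elbo(x, cat, y)   # minimise the negative ELBO with any optimiser
```

The learned posterior means `u_mean` play the role of the transparent random-effect estimates highlighted above; swapping the Gaussian likelihood for another ED-family member is the natural generalisation.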
Aiming at alleviating the adverse influence of coupled unmodeled dynamics, actuator faults and external disturbances in the attitude tracking control system of tilt tri-rotor unmanned aerial vehicles (UAVs), a neural network (NN)-based robust adaptive super-twisting sliding mode fault-tolerant control scheme is designed in this paper. Firstly, in order to suppress the unmodeled dynamics coupled with the system states, a dynamic auxiliary signal, exponential input-to-state practical stability and some special mathematical tools are used. Secondly, benefiting from adaptive control and super-twisting sliding mode control (STSMC), the influence of the unexpected chattering phenomenon of sliding mode control (SMC) and of the unknown system parameters can be handled well. Moreover, NNs are employed to estimate and compensate for some unknown nonlinear terms decomposed from the system model. Based on a decomposed quadratic Lyapunov function, both the bounded convergence of all signals of the closed-loop system and the stability of the system are proved. Numerical simulations are conducted to demonstrate the effectiveness of the proposed control method for tilt tri-rotor UAVs.
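For readers unfamiliar with STSMC, the standard (non-adaptive) super-twisting algorithm drives a sliding variable $s$ to zero with a continuous control, which is the source of the chattering attenuation mentioned above; its generic form is given below (the adaptive, fault-tolerant law actually designed in the paper is more elaborate).

```latex
\begin{equation}
  u = -k_1\,|s|^{1/2}\,\operatorname{sign}(s) + v,
  \qquad
  \dot{v} = -k_2\,\operatorname{sign}(s),
  \qquad k_1,\, k_2 > 0 .
\end{equation}
```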
Physics-informed neural networks (PINNs), a recent development that incorporates physics-based knowledge into neural networks (NNs) in the form of constraints or loss-function terms (e.g., displacement and force boundary conditions, and governing equations), offer promise for generating digital twins of physical systems and processes. Although recent advances in PINNs have begun to address the challenges of structural health monitoring, significant issues remain unresolved, particularly in modeling the governing physics through partial differential equations (PDEs) under temporally variable loading. This paper investigates potential solutions to these challenges. Specifically, it examines the performance of PINNs that enforce the structure's boundary conditions and utilize sensor data from a limited number of locations within it, demonstrated through three case studies. Case Study 1 assumes a constant uniformly distributed load (UDL) and analyzes several setups of PINNs for four distinct simulated measurement cases obtained from a finite element model. In Case Study 2, the UDL is included as an input variable for the NNs. Results from these two case studies show that modeling the structure's boundary conditions enables the PINNs to approximate the behavior of the structure without requiring satisfaction of the PDEs across the whole domain of the plate. In Case Study 3, we explore the efficacy of PINNs in a setting resembling real-world conditions, wherein the simulated measurement data incorporate deviations from idealized boundary conditions and contain measurement noise. Results illustrate that PINNs can effectively capture the overall physics of the system while managing deviations from idealized assumptions and data noise.
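To make the ingredients concrete, the sketch below shows a generic PINN composite loss of the kind discussed: a data term at a few "sensor" locations, a boundary-condition term, and a PDE-residual term obtained by automatic differentiation. For brevity it uses a simply supported Euler-Bernoulli beam, EI w'''' = q, as a stand-in for the plate problem; the beam equation, the constants and the sensor locations are illustrative assumptions, not the paper's setup.

```python
# Generic PINN composite loss: sparse data + boundary conditions + PDE residual (beam stand-in).
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
EI, q, L = 1.0, 1.0, 1.0   # assumed stiffness, uniform load and span

def derivatives(x):
    x = x.detach().requires_grad_(True)
    w = net(x)
    d = [w]
    for _ in range(4):                                    # w', w'', w''', w''''
        d.append(torch.autograd.grad(d[-1].sum(), x, create_graph=True)[0])
    return w, d[2], d[4]                                   # deflection, curvature, 4th derivative

def pinn_loss(x_sensors, w_measured, x_collocation):
    w_s, _, _ = derivatives(x_sensors)
    data_loss = torch.mean((w_s - w_measured) ** 2)        # fit the sparse sensor data
    w_b, w_xx_b, _ = derivatives(torch.tensor([[0.0], [L]]))
    bc_loss = torch.mean(w_b ** 2) + torch.mean(w_xx_b ** 2)   # w = w'' = 0 at the supports
    _, _, w_xxxx = derivatives(x_collocation)
    pde_loss = torch.mean((EI * w_xxxx - q) ** 2)           # residual of EI * w'''' = q
    return data_loss + bc_loss + pde_loss

# e.g. three assumed sensor readings plus 50 random collocation points
loss = pinn_loss(torch.tensor([[0.25], [0.5], [0.75]]), torch.zeros(3, 1), torch.rand(50, 1))
```

Dropping or down-weighting the `pde_loss` term while keeping the boundary-condition term mirrors the setups in Case Studies 1 and 2, where enforcing boundary conditions alone let the PINN approximate the structure's behavior.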
The market for green bonds, and environmentally aligned investment solutions more broadly, is growing. As of 2022, the green bond market exceeded USD 2 trillion in issuance, with India, for example, having issued its first-ever sovereign green bonds totalling R80bn (c. USD 1bn) in January 2023. This paper lays the foundation for future papers and summarises the initial stages of our analysis, in which we try to replicate the S&P Green Bond Index (a time series problem) over a period using non-traditional techniques. The models we use include neural networks such as CNNs, LSTMs and GRUs. We extend our analysis with an open-source decision tree model called XGBoost. For the purposes of this paper, we use 1 day's prior index information to predict today's value and repeat this over a period of time. We ignore, for example, stationarity considerations and extending the input window/output horizon, as these will be discussed in future papers. The paper explains the methodology used in our analysis and gives general background on the model architectures (CNNs, LSTMs, GRUs and XGBoost), as well as on regularisation techniques (specifically L2 regularisation), loss curves and hyperparameter optimisation, in particular the open-source library Optuna.
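As an illustration of the kind of hyperparameter optimisation referred to (not the paper's actual search space, model or data), a minimal Optuna study tuning an L2-regularised network on a toy univariate series might look like this:

```python
# Minimal Optuna study: tune hidden units, L2 strength and learning rate on a toy series.
import numpy as np
import optuna
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# toy univariate series: predict today's value from yesterday's (illustrative data only)
series = np.cumsum(np.random.randn(500))
X, y = series[:-1].reshape(-1, 1), series[1:]
X_tr, X_val, y_tr, y_val = train_test_split(X, y, shuffle=False, test_size=0.2)

def objective(trial):
    model = MLPRegressor(
        hidden_layer_sizes=(trial.suggest_int("units", 8, 128),),
        alpha=trial.suggest_float("l2", 1e-6, 1e-1, log=True),   # L2 regularisation strength
        learning_rate_init=trial.suggest_float("lr", 1e-4, 1e-2, log=True),
        max_iter=500,
    )
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_val, model.predict(X_val))       # validation loss to minimise

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```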
Antisocial behaviour arises from a complex interplay of innate and environmental factors, with the brain’s adaptability to shifting environmental demands playing a pivotal role. An important but scantily studied environmental factor – micro-geographic hot spots of crime – covers a broad array of problems that produce frequent triggers for antisocial behaviour. Despite the established influence of neural substrates and various environmental factors on antisocial behaviour, the impact of residing in high-risk, violent crime hot spots in Israel, as well as in other global locales, remains understudied. This paper aims to elucidate the intricate interplay between neurobiological mechanisms and crime hot spots in the context of antisocial behaviour. Its objectives are twofold: first, to acquaint researchers with the existing literature on the subject; and second, to catalyse further research and robust discourse in this domain. The article commences by reviewing the behavioural manifestations of antisocial tendencies within the framework of crime hot spots. Subsequently, it delves into the influence of crime hot spots on neurocognitive substrates, particularly emphasizing their impact on developmental trajectories associated with antisocial tendencies and the expression of antisocial behaviours. In closing, the paper offers implications and conclusions pertinent to crime hot spots in Israel.
Given the peculiarly linguistic approach that contemporary philosophers use to apply St. Thomas Aquinas’s arguments on the immateriality of the human soul, this paper will present a Thomistic-inspired evaluation of whether artificial intelligence/machine learning (AI/ML) chatbots’ composition and linguistic performance justify the assertion that AI/ML chatbots have immaterial souls. The first section of the paper will present a strong, but ultimately crucially flawed, argument that AI/ML chatbots do have souls based on contemporary Thomistic argumentation. The second section of the paper will provide an overview of the actual computer science models that make artificial neural networks and AI/ML chatbots function, which I hope will assist other theologians and philosophers writing about technology. The third section will present some of Emily Bender’s and Alexander Koller’s objections to AI/ML chatbots being able to access meaning from computational linguistics. The final section will highlight the similarities of Bender’s and Koller’s argument to a fuller presentation of St. Thomas Aquinas’s argument for the immateriality of the human soul, ultimately arguing that the current mechanisms and linguistic activity of AI/ML programming do not constitute activity sufficient to conclude that they have immaterial souls on the strength of St. Thomas’s arguments.
Flares on the Sun are often associated with ejected plasma: these events are known as coronal mass ejections (CMEs). Although studied in detail on the Sun, such events have only a few dozen known examples on other stars, mainly detected using Doppler-shifted absorption/emission features in Balmer lines and tedious manual analysis. We present a possible way to find stellar CMEs with the help of high-resolution solar spectra.
Psychologists and neuroscientists extensively rely on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. Whereas the former require the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter necessitate a commitment to a particular prior and a likelihood function that – in combination with Bayes' rule – determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, that is, by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing to date. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.
Prognostics and Health Management (PHM) models aim to estimate the remaining useful life (RUL) of complex systems, enabling lower maintenance costs and increased availability. A substantial body of work considers the development and testing of new models using the NASA C-MAPSS dataset as a benchmark. In recent work, the use of ensemble methods has been prevalent. This paper proposes two adaptations to one of the best-performing ensemble methods, namely the Convolutional Neural Network – Long Short-Term Memory (CNN-LSTM) network developed by Li et al. (IEEE Access, 2019, 7, pp 75464–75475). The first adaptation (adaptable time window, or ATW) increases the accuracy of RUL estimates, with performance surpassing that of the state of the art, whereas the second (sub-network learning) does not improve performance. The results give greater insight into further development of innovative methods for prognostics, with future work focusing on translating the ATW approach to real-life industrial datasets and leveraging findings towards practical uptake in industrial applications.
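For orientation, a bare-bones CNN-LSTM of the general kind adapted here is sketched below: 1-D convolutions extract features from each sensor window, an LSTM models the temporal sequence, and a dense head regresses RUL. The layer sizes, sensor count and window length are assumptions for illustration, not the configuration of Li et al. or of the proposed adaptations.

```python
# Illustrative CNN-LSTM regressor for remaining useful life (RUL) estimation.
import torch
import torch.nn as nn

class CNNLSTMRul(nn.Module):
    def __init__(self, n_sensors=14, window=30):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, x):            # x: (batch, window, n_sensors)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, window, 32)
        _, (h_n, _) = self.lstm(h)
        return self.head(h_n[-1])    # RUL estimate per window

print(CNNLSTMRul()(torch.randn(4, 30, 14)).shape)  # torch.Size([4, 1])
```

The ATW adaptation described above would, in this framing, vary the `window` length fed to the model rather than fixing it in advance.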
Two stories of brain development are described. The first is from the perspective of developmental neuroscience, describing the core story of developing neurons, synapses, and neural networks, and showing how these create the brain’s developing capacities for memory, language, self-regulation, and other abilities. The second story is from the perspective of developmental science, summarizing the large literature on developing concepts and reasoning skills and the influence of early relationships in their growth. The two stories are then compared to reveal how complementary they are (as we should expect, since they both concern the developing child), but how their integration is still a work in progress, especially because the stories of brain and mind have somewhat different perspectives on development deriving from different research methods, levels of analysis, vocabulary, and concepts. The last section describes overlooked topics in the brain development story that was publicly messaged: the effects of poverty, fetal brain “programming,” and adolescence as a period of renewed brain plasticity and growth. The chapter shows how the science of brain development is constantly evolving, how the interaction of mind and brain is only slowly becoming understood, and the selectivity of the public communication of developmental brain science.
This paper proposes a robust generalised dynamic inversion (GDI) control system design with adaptive neural network (NN) estimation for spacecraft attitude tracking in the absence of knowledge of the spacecraft inertia parameters. The robust GDI control system works to enforce attitude tracking, and the adaptive NN augmentation compensates for the lack of knowledge of the spacecraft inertia parameters. The baseline GDI control law consists of a particular part and an auxiliary part. The particular part of the GDI control law works to realise a desired attitude dynamics of the spacecraft, and the auxiliary part works for finite-time stabilisation of the spacecraft angular velocity. Robustness against modeling uncertainties and external disturbances is provided by augmenting a sliding mode control element within the particular part of the GDI control law. The singularity that accompanies GDI control is avoided by modifying the Moore-Penrose generalised inverse by means of a dynamic scaling factor. The NN weighting matrices are updated adaptively through a control Lyapunov function. A detailed stability analysis shows that the closed-loop system is semi-globally practically stable. For performance assessment, a spacecraft model is developed, and GDI-NN control of its attitude is investigated through numerical simulations. Simulation results reveal the efficacy, robustness and adaptive attributes of the proposed GDI-NN control for its application to spacecraft attitude control.
Predicting the laminar to turbulent transition is an important aspect of computational fluid dynamics because of its impact on skin friction. Traditional transition prediction methods such as local stability theory or the parabolized stability equation method do not allow for the consideration of strongly non-parallel boundary layer flows, as in the presence of surface defects (bumps, steps, gaps, etc.). A neural network approach, based on an extensive database of two-dimensional incompressible boundary layer stability studies in the presence of gap-like surface defects, is used. These studies consist of linearized Navier–Stokes calculations and provide information on the effect of surface irregularity geometry and aerodynamic conditions on the transition to turbulence. The physical and geometrical parameters characterizing the defect and the flow are then provided to a neural network whose outputs inform about the effect of a given gap on the transition through the ${\rm \Delta} N$ method (where N represents the amplification of the boundary layer instabilities).
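As a reminder of the bookkeeping behind the ${\rm \Delta} N$ method (standard $e^N$-type reasoning, not a formula quoted from the paper): the defect is represented as a shift of the $N$-factor envelope obtained from the integrated instability growth rates, and transition is predicted where the shifted factor reaches the critical value calibrated for the smooth configuration.

```latex
\begin{equation}
  N(x) = \max_{f} \int_{x_0(f)}^{x} -\alpha_i(\xi; f)\,\mathrm{d}\xi ,
  \qquad
  N_{\mathrm{defect}}(x) = N_{\mathrm{smooth}}(x) + {\rm \Delta} N ,
  \qquad
  N_{\mathrm{defect}}(x_{\mathrm{tr}}) = N_{\mathrm{cr}} .
\end{equation}
```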
Gamma-ray bursts (GRBs) and double neutron star merger gravitational-wave events are followed by afterglows that shine from X-rays to radio, and these broadband transients are generally interpreted using analytical models. Such models are relatively fast to execute and thus easily allow estimates of the energy and geometry parameters of the blast wave through many trial-and-error model calculations. One problem, however, is that such analytical models do not capture the underlying physical processes as well as more realistic relativistic numerical hydrodynamic (RHD) simulations do. Ideally, those simulations would be used for parameter estimation instead, but their computational cost makes this intractable. To this end, we present DeepGlow, a highly efficient neural network architecture trained to emulate a computationally costly RHD-based model of GRB afterglows to within a few percent accuracy. As a first scientific application, we compare both the emulator and a different analytical model calibrated to RHD simulations to estimate the parameters of a broadband GRB afterglow. We find consistent results between these two models and also give further evidence for a stellar wind progenitor environment around this GRB source. DeepGlow thus brings simulations that are otherwise too costly to run over all parameters to bear on real broadband data of current and future GRB afterglows.
Finite element (FE) simulations can be used both in the early product development phase to evaluate the performance of developed components and in later stages to verify the reliability of functions and components that would otherwise require a large number of physical prototype tests. This requires calibrated material cards capable of realistically representing the specific material behavior. The necessary material parameter identification process is usually time-consuming and resource-intensive, which is why the direct inverse method based on machine learning has recently become increasingly popular. The generated domain knowledge can be stored within the neural network (NN) and retrieved within milliseconds, which makes this method time- and resource-efficient. This research paper describes the advantages and potential of the direct inverse method in the context of the product development process (PDP). Additionally, transformation opportunities arising for the PDP are discussed, and an application scenario of the method is presented, followed by possible links with existing development methods such as shape optimization.