Machine learning is increasingly being utilised across various domains of nutrition research due to its ability to analyse complex data, especially as large datasets become more readily available. However, at times, this enthusiasm has led to the adoption of machine learning techniques prior to a proper understanding of how they should be applied, leading to non-robust study designs and results of questionable validity. To ensure that research standards do not suffer, key machine learning concepts must be understood by the research community. The aim of this review is to facilitate a better understanding of machine learning in research by outlining good practices and common pitfalls in each of the steps in the machine learning process. Key themes include the importance of generating high-quality data, employing robust validation techniques, quantifying the stability of results, accurately interpreting machine learning outputs, adequately describing methodologies, and ensuring transparency when reporting findings. Achieving this aim will facilitate the implementation of robust machine learning methodologies, which will reduce false findings and make research more reliable, as well as enable researchers to critically evaluate and better interpret the findings of others using machine learning in their work.
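As a concrete illustration of the robust validation and stability quantification advocated above, the following is a minimal sketch using repeated stratified cross-validation; the synthetic dataset, model and metric are placeholders chosen for this example, not anything drawn from the review.

```python
# Minimal sketch (not from the review): repeated stratified k-fold cross-validation
# to estimate predictive performance and quantify the stability of the estimate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Placeholder data standing in for a real nutrition dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")

# Report both the average performance and its spread across folds and repeats,
# rather than a single optimistic number.
print(f"AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```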
Making best use of collected weather observations is simplified where thought is given to record management and storage: gathering meteorological records is usually a means to an end, rather than an end in itself. The more effectively records are stored, the quicker and easier it becomes to analyse and use them productively – a statement which applies equally to both professional and amateur observers. This chapter provides tried and tested suggestions for collecting, storing and archiving data from both manual observations and automatic weather stations (AWSs).
This paper presents a comprehensive strategy to improve the locomotion performance of humanoid robots on various slippery floors. The strategy involves the implementation and adaptation of a divergent component of motion (DCM) based control architecture for the humanoid NAO, and the introduction of an embedded yaw controller (EYC), which is based on a proportional-integral-derivative (PID) control algorithm. The EYC is designed not only to address the slip behavior of the robot on low-friction floors but also to tackle the issue of non-straight walking patterns that we observed in this humanoid, even on non-slippery floors. To fine-tune the PID gains for the EYC, a systematic trial-and-error approach is employed. We iteratively adjusted the P (Proportional), I (Integral), and D (Derivative) parameters while keeping the others fixed. This process allowed us to optimize the PID controller’s response to different walking conditions and floor types. A series of locomotion experiments are conducted in a simulated environment, where the humanoid step frequency and PID gains are varied for each type of floor. The effectiveness of the strategy is evaluated using metrics such as robot stability, energy consumption, and task duration. The results of the study demonstrate that the proposed approach significantly improves humanoid locomotion on different slippery floors, by enhancing stability and reducing energy consumption. The study has practical implications for designing more versatile and effective solutions for humanoid locomotion on challenging surfaces and highlights the adaptability of the existing controller for different humanoid robots.
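For readers unfamiliar with the control law involved, the following is a minimal sketch of a discrete PID correction of yaw error; the gains, time step and variable names are illustrative assumptions, not the tuned values or interface used in the paper.

```python
# Minimal sketch (illustrative, not the paper's implementation) of the discrete
# PID law behind an embedded yaw controller: the measured yaw error is driven
# towards zero at every control step.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the integral term and approximate the derivative
        # with a backward difference over one control period.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical usage with placeholder gains and yaw readings (radians).
pid = PID(kp=1.0, ki=0.1, kd=0.05, dt=0.02)
desired_yaw, measured_yaw = 0.0, 0.05
yaw_correction = pid.update(desired_yaw - measured_yaw)
print(yaw_correction)
```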
This article explores digitalization’s impacts on the existing international investment law regime. In particular, it examines whether international investment agreements (IIAs) apply to the digital economy, analyzing their scope of application, including the definitions of protected investment and protected investor, as well as the territorial application of those treaties. We conclude that IIAs and their provisions are, in principle, not intended for the digital era. However, their usually broad definitions are likely to cover investments in digital assets, provided the required territorial nexus is interpreted flexibly. Nevertheless, we believe caution should be exercised about including digital transformation commitments in IIAs, as they could increase the likelihood of investor-state dispute settlement (ISDS).
The Centre for Advanced Laser Applications in Garching, Germany, is home to the ATLAS-3000 multi-petawatt laser, dedicated to research on laser particle acceleration and its applications. A control system based on Tango Controls is implemented for both the laser and four experimental areas. The device server approach features high modularity, which, in addition to the hardware control, enables a quick extension of the system and allows for automated data acquisition of the laser parameters and experimental data for each laser shot. In this paper we present an overview of our implementation of the control system, as well as our advances in terms of experimental operation, online supervision and data processing. We also give an outlook on advanced experimental supervision and online data evaluation – where the data can be processed in a pipeline – which is being developed on the basis of this infrastructure.
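To give a flavour of the device server approach, the sketch below shows a minimal Tango device written with the high-level PyTango API; the device class, attribute and returned value are hypothetical examples, not part of the ATLAS-3000 control system described in the paper.

```python
# Minimal sketch of a Tango device server using the high-level PyTango API.
# The device class, attribute name and hardware call are hypothetical,
# not taken from the control system described in the paper.
from tango.server import Device, attribute, run

class LaserDiagnostics(Device):
    """Hypothetical device exposing one laser parameter per shot."""

    @attribute(dtype=float, unit="J")
    def pulse_energy(self):
        # A real server would query the measurement hardware here.
        return self._read_energy_from_hardware()

    def _read_energy_from_hardware(self):
        return 2.5  # placeholder value

if __name__ == "__main__":
    run((LaserDiagnostics,))
```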
Optimizing research on the developmental origins of health and disease (DOHaD) involves implementing initiatives that maximize the use of the available cohort study data; achieving sufficient statistical power to support subgroup analysis; and using participant data with adequate follow-up and exposure heterogeneity. It also involves being able to undertake comparison, cross-validation, or replication across data sets. To meet these requirements, cohort study data need to be findable, accessible, interoperable, and reusable (FAIR), and, in particular, they often need to be harmonized. Harmonization is required to achieve or improve comparability of the putatively equivalent measures collected by different studies on different individuals. Although the characteristics of the research initiatives generating and using harmonized data vary extensively, all are confronted by similar issues. Having to collate, understand, process, host, and co-analyze data from individual cohort studies is particularly challenging. The scientific success and timely management of projects can be facilitated by a combination of factors. The current document provides an overview of the ‘life course’ of research projects requiring harmonization of existing data and highlights key elements to be considered from the inception to the end of the project.
Distributed adaptive filtering is considered an effective approach for data processing and estimation over distributed networks. Most existing algorithms focus on designing different information diffusion rules, without considering the evolutionary characteristics of the distributed network. In this chapter, we study the adaptive network from a game-theoretic perspective and formulate the distributed adaptive filtering problem as a graphical evolutionary game. With this formulation, the nodes in the network are regarded as players, and the local combination of estimates from different neighbors is regarded as a form of strategy selection. We show that this graphical evolutionary game framework is very general and can unify the existing adaptive network algorithms. Based on this framework, two error-aware adaptive filtering algorithms are discussed as examples. Moreover, we use graphical evolutionary game theory to analyze the information diffusion process over adaptive networks and the evolutionarily stable strategy of the system. Finally, simulation results are presented to verify the effectiveness of the method discussed in this chapter.
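As background, the sketch below implements the standard adapt-then-combine diffusion LMS recursion that such frameworks build on; the ring topology, uniform combination weights and signal model are assumptions made for this example only, and it does not implement the error-aware, game-theoretic algorithms of the chapter.

```python
# Minimal sketch (illustrative) of adapt-then-combine diffusion LMS, the kind of
# distributed adaptive filtering recursion that the game-theoretic framework
# unifies. Topology, weights and signal model are assumptions for this example.
import numpy as np

rng = np.random.default_rng(0)
N, M, mu = 10, 5, 0.01                 # number of nodes, filter length, step size
w_true = rng.standard_normal(M)        # common parameter vector to estimate

# Combination matrix over a ring network; each row sums to one.
A = np.eye(N) * 0.5
for k in range(N):
    A[k, (k - 1) % N] = A[k, (k + 1) % N] = 0.25

W = np.zeros((N, M))                   # local estimates, one row per node
for _ in range(2000):
    psi = np.empty_like(W)
    for k in range(N):                 # adaptation step at every node
        u = rng.standard_normal(M)     # local regressor
        d = u @ w_true + 0.1 * rng.standard_normal()  # noisy local measurement
        psi[k] = W[k] + mu * (d - u @ W[k]) * u
    W = A @ psi                        # combination step with the neighbours

print(np.linalg.norm(W - w_true, axis=1))  # per-node estimation error
```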
The purpose of this chapter is to set the stage for the book and for the upcoming chapters. We first overview classical information-theoretic problems and solutions. We then discuss emerging applications of information-theoretic methods in various data-science problems and, where applicable, refer the reader to related chapters in the book. Throughout this chapter, we highlight the perspectives, tools, and methods that play important roles in classic information-theoretic paradigms and in emerging areas of data science. Table 1.1 provides a summary of the different topics covered in this chapter and highlights the different chapters that can be read as a follow-up to these topics.
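For reference, the central quantities behind these classical problems are entropy, mutual information and channel capacity, written below in standard notation (the notation is assumed here, not taken from the chapter).

```latex
% Standard definitions, for reference: entropy, mutual information and the
% capacity of a discrete memoryless channel.
H(X) = -\sum_{x} p(x)\log p(x), \qquad
I(X;Y) = \sum_{x,y} p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)}, \qquad
C = \max_{p(x)} I(X;Y).
```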
This study was designed to elucidate the biological variation in the expression of many metabolites due to environment, genotype, or both, and to investigate the potential of metabolomics to supplement compositional analysis for the design of a new, resilient cultivar of Brassica napus whose phytochemical content remains stable across different regions of France. Eight rapeseed varieties, grown in eight regions of France, were compared using a non-targeted metabolomics approach. The statistical analysis highlighted the distance and closeness between the samples in terms of both genotype and geographical region. A major environmental impact was observed on the polar metabolome, with different trends depending on the variety. Some varieties were very sensitive to the environment, while others were quite resilient. The identified secondary metabolites were mapped onto the KEGG pathway database to reveal the target proteins most susceptible to environmental influences. A glucosyltransferase encoded by the UGT84A1 gene, involved in phenylpropanoid biosynthesis, was identified. This protein could be rate-limiting or rate-promoting in this pathway, depending on environmental conditions. The metabolomics approach used in this study demonstrated its efficiency in characterizing the environmental influence on various cultivars of Brassica napus seeds and may help identify targets for crop improvement.
Steps required for proper acquisition and processing of laser Doppler velocimetry data for turbomachinery research applications are addressed. Turbomachinery applications are difficult due to the small internal passages, high-frequency fluctuations, large turbulence intensities, and strong secondary flows resulting in low overall signal-to-noise ratios and narrowband noise sources that cannot be removed by simple band-pass filters. Special aspects that must be considered for successful and accurate laser Doppler velocimetry studies to be conducted in turbomachinery are discussed. Specifically, the design of the measurement volume size, reflection mitigation, engineering of seed particle size and injection schema, and alignment of the traverse mechanism are addressed in terms of their importance (from literature sources) and the solutions implemented by the authors. These techniques have been applied to successfully obtain three-component, unsteady velocity data in a high-speed centrifugal compressor for aeroengine application. Processing techniques are also presented including a novel mixture-model-based statistical method for narrowband noise isolation developed by the authors. The method, validation steps, and example results are presented, showing the successful rejection of noise with high accuracy, a low failure rate, and a significant reduction in required manual inspection. This newly developed method elucidated flow features that were not clear prior to the noise removal.
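The sketch below conveys the general idea behind mixture-model-based separation of narrowband noise from laser Doppler velocimetry samples, using a two-component Gaussian mixture on synthetic data; the actual method, its validation steps and its rejection criteria are those described in the paper, not this simplification.

```python
# Minimal sketch (illustrative of the general idea, not the authors' exact method):
# fit a two-component Gaussian mixture to LDV velocity samples and keep the broad
# "flow" component while rejecting a narrowband noise component.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
flow = rng.normal(150.0, 20.0, 5000)      # synthetic turbulent velocity samples (m/s)
noise = rng.normal(80.0, 1.0, 500)        # synthetic narrowband contamination
samples = np.concatenate([flow, noise]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
labels = gmm.predict(samples)

# Keep the component with the larger variance, i.e. the broadband flow signal.
flow_component = int(np.argmax(gmm.covariances_.ravel()))
cleaned = samples[labels == flow_component]
print(cleaned.size, "samples retained of", samples.size)
```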
Electron tomography has become an essential tool for three-dimensional (3D) characterization of nanomaterials. In recent years, advances have been made in specimen preparation and mounting, acquisition geometries, and reconstruction algorithms. All of these components work together to optimize the resolution and clarity of an electron tomogram. However, one important component of the data processing has received less attention: the 2D tilt series alignment. This step is challenging for a number of reasons, chiefly the nature of the data sets and the need for them to be coherently aligned over the full range of tilt angles. An inaccurate alignment may be difficult to identify, yet it can significantly limit the final 3D resolution. In this work, we present an improved center-of-mass alignment model that allows us to overcome discrepancies caused by unwanted objects that enter the imaging area throughout the tilt series. In particular, we develop an approach to overcome changes in the total mass of the imaging area upon rotation. We apply our approach to accurately recover small Pt nanoparticles embedded in a zeolite that may otherwise go undetected both in the 2D microscopy images and in the 3D reconstruction. In addition, we highlight the particular effectiveness of compressed sensing methods with this data set.
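For reference, the sketch below implements plain center-of-mass tilt-series alignment, the baseline that the improved model extends; it deliberately omits the corrections for mass entering or leaving the field of view that this work introduces.

```python
# Minimal sketch of basic centre-of-mass (CoM) tilt-series alignment: each
# projection is translated so that its intensity CoM sits at the image centre.
# This is the plain baseline model only; it does not include the corrections
# for mass changes in the field of view that the improved approach adds.
import numpy as np
from scipy.ndimage import center_of_mass, shift

def com_align(tilt_series):
    """tilt_series: array of shape (n_tilts, ny, nx) of projection images."""
    tilt_series = np.asarray(tilt_series, dtype=float)
    aligned = np.empty_like(tilt_series)
    centre = (np.array(tilt_series.shape[1:]) - 1) / 2.0
    for i, proj in enumerate(tilt_series):
        com = np.array(center_of_mass(proj))
        # Translate the projection so its CoM coincides with the image centre.
        aligned[i] = shift(proj, centre - com, order=1, mode="nearest")
    return aligned
```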
The Figaro data reduction system was originally written for DEC VAXes running VMS, and little attention was paid to making it portable. Recently, however, a cooperative effort between the AAO, MSSSO, UNSW, the UK Starlink network and Caltech has resulted in a version for UNIX. This new version has been run under VMS and three different versions of UNIX. The files produced by any version may be read directly by any other version, although Figaro has a particularly complex file format which contains an extensible, self-defining, hierarchical structure of data items. This complexity has allowed the addition of error and quality data, as well as specific structures used, for example, for echelle data. Figaro is written mainly in Fortran (with numerous DEC extensions) but there is also a significant use of C. While C and Fortran are reasonably portable, the way one is called from the other is less portable and needs careful handling. Ports to other systems are possible, with effort; a Macintosh version is being considered.
The discrete Fourier transform is among the most routine tools used in high-resolution scanning/transmission electron microscopy (S/TEM). However, when calculating a Fourier transform, periodic boundary conditions are imposed, and sharp discontinuities between the edges of an image cause a cross-patterned artifact along the reciprocal space axes. This artifact can interfere with the analysis of reciprocal lattice peaks in an atomic resolution image. Here we demonstrate that the recently developed Periodic Plus Smooth Decomposition technique provides a simple, efficient method for reliable removal of artifacts caused by edge discontinuities. In this method, edge artifacts are reduced by subtracting a smooth background that solves Poisson’s equation with boundary conditions set by the image’s edges. Unlike traditional windowed Fourier transforms, Periodic Plus Smooth Decomposition maintains sharp reciprocal lattice peaks from the image’s entire field of view.
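A minimal sketch of the decomposition, as commonly implemented from Moisan's formulation, is given below; the function name and array conventions are this example's own assumptions.

```python
# Minimal sketch (illustrative) of periodic-plus-smooth decomposition: the smooth
# component solves a periodic Poisson equation whose source is the intensity jump
# across opposite image edges; subtracting it leaves a periodic image whose FFT
# is free of the cross-shaped edge artifact.
import numpy as np

def periodic_smooth_decomposition(u):
    u = np.asarray(u, dtype=np.float64)
    M, N = u.shape

    # Boundary image: jumps between opposite edges of the image.
    v = np.zeros_like(u)
    v[0, :]  = u[-1, :] - u[0, :]
    v[-1, :] = u[0, :] - u[-1, :]
    v[:, 0]  += u[:, -1] - u[:, 0]
    v[:, -1] += u[:, 0] - u[:, -1]

    # Solve the periodic Poisson equation in Fourier space.
    q = np.arange(M).reshape(M, 1)
    r = np.arange(N).reshape(1, N)
    denom = 2.0 * np.cos(2 * np.pi * q / M) + 2.0 * np.cos(2 * np.pi * r / N) - 4.0
    denom[0, 0] = 1.0                      # avoid 0/0; the DC term is set below
    s_hat = np.fft.fft2(v) / denom
    s_hat[0, 0] = 0.0                      # smooth component has zero mean
    s = np.real(np.fft.ifft2(s_hat))

    return u - s, s                        # periodic component, smooth component
```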
The two-wave and three-wave methods are the conventional ways of processing SHPB (split Hopkinson pressure bar) experiment data. Because of the presence of transient waves in dynamic experiments, the stress and strain fields within a specimen are seldom perfectly uniform, so different data processing methods lead to different results. In this paper, we have developed a program to compare the results obtained from the different methods; the difference between the strains corresponding to the ultimate stress can reach 20%. Which result should be trusted? A single material should not have different constitutive curves. To resolve this problem, we have developed a three-wave mutual-checking method based on the conservation of momentum of the whole system. The method provides a checking mechanism, so some human error can be avoided when processing the same experimental data, and different analysts can obtain a single credible stress-strain curve from the same test data.
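For reference, the classical two-wave and three-wave reduction formulas being compared are written below in one common sign convention; the notation (bar strains, bar and specimen geometry, bar wave speed) is assumed here rather than taken from the paper, which builds its mutual-checking scheme on top of these relations.

```latex
% Classical SHPB reduction formulas (one common sign convention; notation assumed):
% eps_I, eps_R, eps_T are the incident, reflected and transmitted bar strains;
% E, A_b, c_0 are the bar modulus, cross-section and wave speed;
% A_s, L_s are the specimen cross-section and length.
\text{Three-wave:}\quad
\sigma_s = \frac{E A_b}{2 A_s}\,(\varepsilon_I + \varepsilon_R + \varepsilon_T),
\qquad
\dot{\varepsilon}_s = \frac{c_0}{L_s}\,(\varepsilon_I - \varepsilon_R - \varepsilon_T)

\text{Two-wave (stress equilibrium, } \varepsilon_I + \varepsilon_R = \varepsilon_T\text{):}\quad
\sigma_s = \frac{E A_b}{A_s}\,\varepsilon_T,
\qquad
\dot{\varepsilon}_s = -\frac{2 c_0}{L_s}\,\varepsilon_R
```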
Gaia is an ambitious space astrometry mission of ESA whose main objective is to map the sky in astrometry and photometry down to magnitude 20 by the end of the next decade. Given its extreme astrometric accuracy and the repeated observations over five years, the observation modelling is done in a fully relativistic framework, and several tests of General Relativity or of its extensions can be carried out during the data processing. The paper presents an overview of the current activities in this area and of the expected performances.
Energy filtered convergent beam electron diffraction was used to investigate localized strain in aluminum interconnects. By analyzing the position of higher order Laue zone lines, it is possible to measure the three-dimensional lattice strain with high accuracy (∼10⁻⁴) and high spatial resolution (10 to 100 nm). In the present article, important details of the strain analysis procedure are outlined. Subsequently, results of measurements of the local variation of thermal strains in narrow, free-standing interconnects are presented. The strain development in single grains during thermal cycling between −170°C and +100°C was measured in situ, and local stress variations along the interconnect were investigated. The interconnects show reversible elastic behavior over the whole temperature range, leading to large stresses at low temperatures. The strain state varies locally within single grains, as well as from grain to grain, by as much as 50% in both types of samples. By comparing the experimental findings with elastic finite element modeling, a detailed understanding of the triaxial strain state could be achieved.
Animal performance recording and breeding in Greece aim at improving the milk production of pure-bred cattle under intensive systems and of sheep and goats under semi-intensive or extensive production systems. Although milk recording was established in Greece by the Ministry of Agriculture in 1952, it is only since 1978 that it has been carried out more systematically, on larger populations and within the framework of a specific genetic improvement programme for each animal species and breed. For the application of this programme, close co-operation has been set up among the competent services of the Ministry of Agriculture, the Agricultural Universities of the country and the relevant farmers' organisations, which are in the process of being established. Milk is recorded on 61 867 dairy cows (29% of the total dairy population) in 1 425 herds (average herd size 43 cows), 31 611 dairy sheep (0.36% of the total sheep population) in 429 flocks (average flock size 72 ewes) and 3 296 goats (0.06% of the total goat population) in 36 flocks (average flock size 92). The procedures of performance recording and the plans for the future, which aim at increasing the number of animals and recorded traits as well as supporting farmers in managing their herds and flocks, are presented.
The paper was prepared by an informal working party as an introduction to an open forum discussion. It considers from several viewpoints the relationship between the actuary and the computer, both in its function as a calculating tool and as a data processing and storage medium. The paper considers in detail the requirements of the Appointed Actuary for adequate data, the problems of allocating and pricing for computer costs and the increasing uses of computer modelling in financial reporting. The place of computer literacy in actuarial education, future developments and standards are also considered.
Tianeptine is a new antidepressant that, in animals, has a facilitating effect on both working and reference memory. To investigate the effects of tianeptine on vigilance and memory in humans, a placebo-controlled cross-over study was performed in 20 healthy volunteers. The duration of each treatment period was 7 days, and the dosage of tianeptine was 37.5 mg per day. The evaluations consisted of three computerized tests assessing alertness, continuous recognition and semantic facilitation; a memory questionnaire and the Rey test (15 words) were added. The analysis of variance (cross-over design) performed on the results did not show any significant difference between the evolution of these tests on tianeptine and on placebo in young healthy subjects at the height of their intellectual capacities. In this study, tianeptine preserved the vigilance and performance of the healthy volunteers and did not impair their memory or cognitive processes, which distinguishes it from many psychotropic drugs. Nevertheless, tianeptine did not produce a facilitating effect on these processes in healthy volunteers. These results allow us to propose a new clinical trial of tianeptine to investigate its effects on older volunteers and on patients complaining of memory disorders.