Many documents are produced over the years of managing assets, particularly those with long lifespans. Over this time, however, the assets may deviate from their original as-designed or as-built state. This presents a significant challenge for tasks that occur in later life phases but require precise knowledge of the asset, such as retrofit, where the asset is equipped with new components. For a third party who is neither the original manufacturer nor the operator, obtaining a comprehensive understanding of the asset can be a tedious process, as it requires going through all available but often fragmented information and documents. While common knowledge about the domain or the general type of asset can be helpful, it is often based on the experience of engineers and is therefore only implicitly available. This article presents a graph-based information management system that complements traditional product lifecycle management (PLM) systems and helps connect these fragments by utilizing generic information about assets. To achieve this, techniques from systems engineering and data science are used. The overarching management platform also includes geometric analyses and operations that can be performed with geometric and product information extracted from STEP files. The management approach is first described generically and then applied to cabin retrofit in aviation. A mock-up of an Airbus A320 serves as the case study to demonstrate how the platform can benefit the retrofit of such long-living assets.
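As a rough illustration of the graph idea (not the authors' implementation), the sketch below links a hypothetical cabin component to the document fragments that describe it, so that everything known about one part can be collected by a neighborhood query; all names are invented:

```python
import networkx as nx

# Asset components and document fragments become nodes; edges connect
# fragments to the components they describe (invented example data).
g = nx.Graph()
g.add_edge("seat_row_12", "installation_drawing_4711", relation="documented_by")
g.add_edge("seat_row_12", "maintenance_report_2019_03", relation="documented_by")
g.add_edge("seat_row_12", "cabin_zone_B", relation="installed_in")

# Everything known about one component, gathered from fragmented sources:
print(list(g.neighbors("seat_row_12")))
```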
Confounding refers to a mixing or muddling of effects that can occur when the relationship we are interested in is confused by the effect of something else. It arises when the groups we are comparing are not completely exchangeable and so differ with respect to factors other than their exposure status. If one (or more) of these other factors is a cause of both the exposure and the outcome, then some or all of an observed association between the exposure and outcome may be due to that factor.
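To make the mechanism concrete, here is a minimal simulation (not from the text) in which a confounder drives both exposure and outcome, producing a crude association that largely disappears once the confounder is held roughly constant:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder C (e.g., age) causes both the exposure and the outcome.
confounder = rng.normal(size=n)
exposure = (confounder + rng.normal(size=n)) > 0   # C -> exposure
outcome = 2 * confounder + rng.normal(size=n)      # C -> outcome; no direct effect

# Crude comparison: exposed vs. unexposed differ, despite no causal effect.
crude_diff = outcome[exposure].mean() - outcome[~exposure].mean()

# Stratifying on (adjusting for) the confounder removes the spurious gap.
stratum = np.abs(confounder) < 0.1                 # thin slice where C is ~constant
adjusted_diff = (outcome[stratum & exposure].mean()
                 - outcome[stratum & ~exposure].mean())

print(f"crude difference:    {crude_diff:.2f}")    # clearly nonzero
print(f"within-stratum diff: {adjusted_diff:.2f}") # near zero
```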
While there are cases where it is straightforward and unambiguous to define a network from data, a researcher must often make choices about how to define the network, and those choices, which precede most of the work of analyzing the network, have outsized consequences for that subsequent analysis. Sitting between gathering the data and studying the network is the upstream task: how to define the network from the underlying or original data. Defining the network precedes all subsequent, downstream tasks, which we will focus on in later chapters. Those tasks are often the focus of network scientists, who take the network as a given and concentrate their efforts on methods using those data. Envision the upstream task by asking: what are the nodes, and what are the links? The network follows from those definitions. You will find these questions a useful guiding star as you work, and you can gain new insights by reevaluating their answers from time to time.
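A small illustration of the upstream task, using toy data assumed here for the example: the same author–paper records yield different networks depending on how nodes and links are defined (networkx shown; names are hypothetical):

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical raw data: records of which authors wrote which papers.
records = [
    ("alice", "paper1"), ("bob", "paper1"),
    ("bob", "paper2"), ("carol", "paper2"),
]
B = nx.Graph()
B.add_edges_from(records)

# One answer to "what are the nodes/links?": authors are nodes,
# and coauthoring a paper is a link (a bipartite projection).
authors = {a for a, _ in records}
coauthor_net = bipartite.projected_graph(B, authors)
print(coauthor_net.edges())  # e.g. [('alice', 'bob'), ('bob', 'carol')]

# A different answer -- papers as nodes, shared authors as links --
# yields a different network from the same underlying data.
papers = {p for _, p in records}
paper_net = bipartite.projected_graph(B, papers)
```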
Drawing examples from real-world networks, this essential book traces the methods behind network analysis and explains how network data are first gathered, then processed and interpreted. The text will equip you with a toolbox of diverse methods and data modelling approaches, allowing you to quickly start making your own calculations on a huge variety of networked systems. This book sets you up to succeed, addressing the questions of what you need to know and what to do with it when beginning to work with network data. The hands-on approach adopted throughout means that beginners quickly become capable practitioners, guided by a wealth of interesting examples that demonstrate key concepts. Exercises using real-world data extend and deepen your understanding and develop effective working patterns in network calculations and analysis. Suitable for both graduate students and researchers across a range of disciplines, this novel text provides a fast track to network data expertise.
Aging ships and offshore structures face harsh environmental and operational conditions in remote areas, leading to age-related damage such as corrosion wastage, fatigue cracking, and mechanical denting. If left unattended, this deterioration can escalate into catastrophic failures, causing casualties, property damage, and marine pollution. Ensuring the safety and integrity of aging ships and offshore structures is therefore paramount, and achievable through innovative healthcare schemes. One such paradigm, digital healthcare engineering (DHE), initially introduced by the final coauthor, aims to provide lifetime healthcare for engineered structures, infrastructure, and individuals (e.g., seafarers) by harnessing advances in digitalization and communication technologies. The DHE framework comprises five interconnected modules: on-site health parameter monitoring; data transmission to analytics centers; data analytics, simulation, and visualization via digital twins; artificial intelligence-driven diagnosis and remedial planning using machine and deep learning; and predictive health condition analysis for future maintenance. This article surveys recent technological advances pertinent to each DHE module, with a focus on application to aging ships and offshore structures. The primary objective is to identify cost-effective and accurate techniques for establishing a DHE system for the lifetime healthcare of aging ships and offshore structures, a project currently in progress by the authors.
To better understand and prevent research errors, we conducted a first-of-its-kind scoping review of clinical and translational research articles that were retracted because of problems in data capture, management, and/or analysis.
Methods:
The scoping review followed a preregistered protocol and used retraction notices from the Retraction Watch Database in relevant subject areas, excluding gross misconduct. Abstracts of original articles published between January 1, 2011 and January 31, 2020 were reviewed to determine if articles were related to clinical and translational research. We reviewed retraction notices and associated full texts to obtain information on who retracted the article, types of errors, authors, data types, study design, software, and data availability.
Results:
After reviewing 1,266 abstracts, we reviewed 884 associated retraction notices and 786 full-text articles. Authors initiated the retraction over half the time (58%). Over two-fifths of retraction notices (42%) described problems generating or acquiring data, and 28% described problems with preparing or analyzing data. Among the full texts that we reviewed: 77% were human research; 29% were animal research; and 6% were systematic reviews or meta-analyses. Most articles collected data de novo (77%), but only 5% described the methods used for data capture and management, and only 11% described data availability. Over one-third of articles (38%) did not specify the statistical software used.
Conclusions:
Authors may improve scientific research by reporting methods for data capture and statistical software. Journals, editors, and reviewers should advocate for this documentation. Journals may help the scientific record self-correct by requiring detailed, transparent retraction notices.
Decentralized E/E architectures (EEAs) face challenges and bottlenecks when implementing new features and technologies. The shift towards centralized EEAs brings its own challenges and needs to be handled pragmatically, considering concurrency with the existing EEAs. To address the challenges of this architectural shift, the paper presents a quantitative comparison of EEAs and visualizes the flow of shifting sub-function and hardware blocks using a Sankey diagram. The resulting observations will support OEMs in analysing and making decisions on the shift while developing EEAs.
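As a sketch of the kind of visualization described (not the paper's actual data), a Sankey diagram of block flows can be drawn with Plotly; the node labels and values below are invented:

```python
import plotly.graph_objects as go

# Invented flows: sub-function blocks migrating from distributed ECUs
# toward zone controllers and central compute in a centralized EEA.
labels = ["Door ECU", "Seat ECU", "Zone controller", "Central compute"]
fig = go.Figure(go.Sankey(
    node=dict(label=labels),
    link=dict(
        source=[0, 1, 2],   # decentralized ECUs -> zone -> central
        target=[2, 2, 3],
        value=[4, 3, 7],    # e.g., number of sub-function blocks migrated
    ),
))
fig.update_layout(title="Shift of sub-function blocks between E/E architectures")
fig.show()
```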
Despite their increasing popularity, n-of-1 designs employ data analyses that might not be as complete and powerful as they could be. Borrowing from existing advances in educational and psychological research, this article presents a few techniques and references for rigorous data analytic techniques in n-of-1 research.
This study aimed to identify and understand the major topics of discussion under the #sustainability hashtag on Twitter (now known as “X”) and to understand user engagement. The sharp increase in social media usage, combined with a rise in climate anomalies in recent years, makes sustainability on social media a critical topic. Python was used to gather Twitter posts between January 1, 2023, and March 1, 2023. User engagement metrics were analyzed using a variety of statistical methods, including keyword-frequency analysis and Latent Dirichlet Allocation (LDA), which were used to identify significant topics of discussion under the #sustainability hashtag. Additionally, histograms and scatter plots were used to visualize user engagement. The LDA analysis was conducted with seven topics, a number chosen after trials with varying topic counts were analyzed to determine which best fit the dataset. The frequency analysis provided a basic overview of the discourse surrounding #sustainability, covering the topics of technology, business and industry, environmental awareness, and discussion of the future. The LDA model provided a more comprehensive view, including additional topics such as Environmental, Social, and Governance (ESG) and infrastructure, investing, collaboration, and education. These findings have implications for researchers, businesses, organizations, and politicians seeking to align their strategies and actions with the major topics surrounding sustainability on Twitter in order to have a greater impact on their audience. Researchers can use the results of this study to guide further research on the topic or to contextualize their work within the existing sustainability literature.
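For readers unfamiliar with the technique, a minimal LDA pipeline in this style might look like the following sketch (scikit-learn shown here; the study's actual tooling and corpus are not reproduced, and the toy tweets are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["solar panels cut business costs", "invest in green infrastructure",
          "education is key to sustainability", "ESG reporting for industry"]

# Bag-of-words counts, then LDA with a chosen topic count (the study used 7).
counts = CountVectorizer(stop_words="english").fit(tweets)
X = counts.transform(tweets)
lda = LatentDirichletAllocation(n_components=7, random_state=0).fit(X)

# Top words per topic, mirroring the keyword-frequency style of analysis.
terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```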
PEPAdb (Prehistoric Europe's Personal Adornment Database) is a long-term, open-ended project that aims to improve access to archaeological data online. Its website (https://pepadb.us.es) publishes and analyses datasets about prehistoric personal adornment, drawing on the results of various research projects and bibliographic references.
Edited by
Rob Waller, NHS Lothian; Omer S. Moghraby, South London & Maudsley NHS Foundation Trust; Mark Lovell, Esk and Wear Valleys NHS Foundation Trust
As the use of big data in psychiatry continues to expand, it is crucial to involve patients and the public in decisions about its development and application. Mental Health Data Science Scotland has co-produced a best practice checklist involving both researchers and people with lived experience. This guidance emphasises the need for data to be securely accessible and carefully anonymised and for processes and analyses to be transparent, with participants or patients prioritised throughout.
To create early-warning capabilities for upcoming space weather disturbances, we selected a dataset of 61 emerging active regions, which allows us to identify characteristic features in the evolution of acoustic power density and thereby predict continuum intensity emergence. For our study, we utilized Doppler shift and continuum intensity observations from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). Local tracking of 30.66 × 30.66-degree patches in the vicinity of active regions allowed us to trace the evolution of active regions starting from the pre-emergence state. We developed a machine learning model to capture the acoustic power flux density variations associated with upcoming magnetic flux emergence. The trained Long Short-Term Memory (LSTM) model is able to predict, 5 hours ahead, whether continuum intensity values will decrease in a given area of the solar surface. This study demonstrates the potential of the machine learning approach to predict the emergence of active regions using acoustic power maps as input.
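A minimal sketch of such a sequence classifier, with invented input shapes and hyperparameters rather than the authors' settings (Keras shown here):

```python
import numpy as np
import tensorflow as tf

# Hypothetical input: 48 time samples of 16 acoustic-power map features each.
timesteps, features = 48, 16
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(intensity decrease in 5 h)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data standing in for acoustic-power time series and emergence labels.
X = np.random.rand(100, timesteps, features).astype("float32")
y = np.random.randint(0, 2, size=(100, 1))
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```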
With the increasing prevalence of big data and sparse data, and rapidly growing data-centric approaches to scientific research, students must develop effective data analysis skills at an early stage of their academic careers. This detailed guide to data modeling in the sciences is ideal for students and researchers keen to develop their understanding of probabilistic data modeling beyond the basics of p-values and fitting residuals. The textbook begins with basic probabilistic concepts; models of dynamical systems and likelihoods are then presented to build the foundation for Bayesian inference, Monte Carlo samplers, and filtering. Modeling paradigms are then developed in turn, including mixture models, regression models, hidden Markov models, state-space models and Kalman filtering, continuous-time processes, and uniformization. The text is self-contained and includes practical examples and numerous exercises. It is an excellent resource for courses on data analysis within the natural sciences, or as a reference text for self-study.
Team composition in Project-Based Learning is the first task for the class and has a great impact on the learning experience. However, the literature devotes little space to composing teams based on students' personal inclinations towards design tasks.
For these reasons, we propose a tool that maps the design skills of students to optimise team composition. The tool is based on a questionnaire grounded in design theory and aims to measure students' willingness to perform certain design tasks. The questionnaire results are analysed using Principal Component Analysis to normalise each student's answers against the whole class and to show the distribution of students in the space of engineering design skills.
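As a rough illustration of this analysis step (a sketch with invented data, not the authors' pipeline):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical questionnaire matrix: one row per student, one column per
# design-task willingness item (e.g., rated on a 1-5 scale).
answers = np.random.randint(1, 6, size=(72, 10)).astype(float)

# Standardize each item against the whole class, then project onto two
# components to visualize students in the "design skill" space.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(answers))
print(scores.shape)  # (72, 2) -> one point per student for plotting
```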
We present the design process of the tool and a first experiment on two classes of master's degree students in Management Engineering and Data Science, testing the tool on a total of 72 students. The results are promising and demonstrate the robustness of the questionnaire and of the analytical method. We also propose next steps for our research activity and call on other researchers to test our method in different contexts.
Experimental research designs feature two essential ingredients: manipulation of an independent variable and random assignment of subjects. However, in a quasi-experimental design, subjects are assigned to groups based on non-random criteria. This design allows for manipulation of the independent variable with the aim of examining causality between an intervention and an outcome. In social and behavioral research, this design is useful when it may not be logistically or ethically feasible to use a randomized control design – the “gold standard.” Although not as strong as an experiment, non-equivalent control group pretest–posttest designs are usually higher in internal validity than correlational designs. Overcoming possible threats to internal and external validity in a non-equivalent control group pretest–posttest design, such as confounding variables, is discussed in relation to sample selection, power, effect size, and specific methods of data analysis.
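As a small, hedged illustration of the power and effect-size considerations mentioned (the values below are illustrative, not prescriptive), statsmodels can solve for the sample size a two-group comparison would need:

```python
from statsmodels.stats.power import TTestIndPower

# Assumed medium standardized effect (Cohen's d = 0.5), conventional alpha
# and power; solve_power returns the required size of the first group.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,
    alpha=0.05,
    power=0.80,
)
print(f"~{n_per_group:.0f} subjects per group")  # roughly 64
```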
The categorisation of minerals and their related names, such as synonyms, obsolete or historical names, varieties, or mixtures, is an asset for designing an interoperable and consistent mineralogical data warehouse. An enormous amount of such data, provided by mindat.org and other resources, was reviewed and analysed during this research. The analysis indicates the existence of two broad categories: (1) abstract titles or designations that link to the original material, or to a group of names or substances, without an actual physical representation; and (2) unique names representing actual physical material, compounds, or an aggregate of one or more minerals. Revising the dependency between the category attributes stored in a database (e.g. chemical properties, physical properties) and the assigned classification status allowed us to design a robust prototype for maintaining database integrity and consistency. The proposed scheme allows the standardisation and structuring of officially regulated and maintained species (e.g. IMA-approved) and, in addition, unregulated ones.
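One way to picture the integrity idea, under an invented schema (not the project's actual database design): names that are mere designations must not carry physical-property values, which a database constraint can enforce:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE mineral_name (
        name     TEXT PRIMARY KEY,
        category TEXT NOT NULL CHECK (category IN ('designation', 'material')),
        density  REAL,
        -- only names for actual material may hold physical measurements
        CHECK (density IS NULL OR category = 'material')
    )
""")
con.execute("INSERT INTO mineral_name VALUES ('quartz', 'material', 2.65)")
# The next line would raise sqlite3.IntegrityError: a designation (e.g. the
# historical synonym 'rock crystal') cannot carry a physical measurement.
# con.execute("INSERT INTO mineral_name VALUES ('rock crystal', 'designation', 2.65)")
```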
Data analysis starts with preprocessing raw lidar data. Algorithms are presented and explained for digital filtering, background subtraction, range correction, and merging profiles from multiple receiver channels or from hybrid analog/digital data systems. Analysis techniques for cloud and aerosol lidar data are then illustrated, with examples of raw and range-corrected data followed by the scattering ratio, which can be used to find the transmittance of a cloud or aerosol layer. Analysis of depolarization data from co-polar and cross-polar receiver channels is discussed, and an algorithm is included for separating aerosol depolarization from the total atmospheric depolarization. Other simple techniques that do not require data inversion are then covered, including the slope method and multi-angle lidar. Finally, elastic backscatter lidar inversions are described, with a derivation of the Klett method for a single-component atmosphere (aerosols). The algorithms for a two-component atmosphere (molecules and aerosols) are presented, along with the limitations of this method.
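A minimal sketch of the first preprocessing steps on synthetic data (the signal model, variable names, and background-estimation window are illustrative choices, not the chapter's algorithms verbatim):

```python
import numpy as np

r = np.arange(30.0, 15000.0, 7.5)                  # range gates [m]
# Fake elastic return (1/r^2 falloff with extinction) plus a constant
# solar/dark background and detector noise.
raw = 1e9 * np.exp(-2 * 1e-4 * r) / r**2 + 0.05
raw += np.random.default_rng(1).normal(0, 0.01, r.size)

background = raw[-200:].mean()       # estimate background from far-range gates
signal = raw - background            # background subtraction
range_corrected = signal * r**2      # range correction: X(r) = [P(r) - P_bg] * r^2
```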
This chapter reviews some basic principles for applying the book's prior concepts to reading a typical research study article and interpreting its results efficiently and correctly.
Lakshmi Balachandran Nair, Libera Università Internazionale degli Studi Sociali Guido Carli, Italy; Michael Gibbert, Università della Svizzera Italiana, Switzerland; Bareerah Hafeez Hoorani, Radboud University Nijmegen, Institute for Management Research, The Netherlands
The final chapter in this book discusses some methodological considerations and debates surrounding case study research and its quality. In particular, we revisit the topic of research paradigms (i.e. positivism and interpretivism). Relatedly, we discuss the different quality criteria proposed by prior researchers from both paradigmatic camps, focusing on the rigor versus trustworthiness discussion and the internal versus external validity debate. Afterwards, we briefly discuss the iterative cycles of data collection and analysis one would encounter during a qualitative case study research process. We end the chapter, and the book, with a guiding framework that helps researchers sequence case study designs by acknowledging the weaknesses of individual designs and leveraging their strengths. The framework can be adopted and adapted to suit the specific research objectives of the study at hand.