Transfer learning has been highlighted as a promising framework for increasing the accuracy of data-driven models in the case of data sparsity, specifically by leveraging pretrained knowledge in the training of the target model. The objective of this study is to evaluate whether the number of requisite training samples can be reduced with the use of various transfer learning models for predicting, for example, the chemical source terms of a data-driven reduced-order model (ROM) that represents the homogeneous ignition of a hydrogen/air mixture. Principal component analysis is applied to reduce the dimensionality of the hydrogen/air mixture in composition space. Artificial neural networks (ANNs) are used to regress the reaction rates of the principal components, and subsequently, a system of ordinary differential equations is solved. As the number of training samples decreases in the target task, the ROM fails to predict the ignition evolution of a hydrogen/air mixture. Three transfer learning strategies are then applied to the training of the ANN model with a sparse dataset. The performance of the ROM with a sparse dataset is remarkably enhanced if the training of the ANN model is constrained by a regularization term that controls the degree of knowledge transfer from the source to the target task. To this end, a novel transfer learning method is introduced, Parameter control via Partial Initialization and Regularization (PaPIR), whereby the amount of knowledge transferred is systematically adjusted in terms of the initialization and regularization schemes of the ANN model in the target task.
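As a rough illustration of the regularized transfer described above, the sketch below initializes the target network from pretrained source weights and adds an L2 penalty on drift away from them (in the spirit of L2-SP-style regularization). This is a minimal sketch, not the authors' PaPIR implementation; the layer sizes, the strength `lam`, and the toy data are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical MLP regressor for principal-component reaction rates.
def make_mlp(n_in, n_out, hidden=32):
    return nn.Sequential(nn.Linear(n_in, hidden), nn.Tanh(),
                         nn.Linear(hidden, n_out))

source = make_mlp(4, 4)                       # assume pretrained on the source task
target = make_mlp(4, 4)
target.load_state_dict(source.state_dict())   # initialization transfer

src_params = [p.detach().clone() for p in source.parameters()]
lam = 1e-3                                    # transfer-control strength (assumed)

opt = torch.optim.Adam(target.parameters(), lr=1e-3)
mse = nn.MSELoss()

def train_step(x, y):
    opt.zero_grad()
    loss = mse(target(x), y)
    # Regularizer: penalize drift of target weights away from source weights.
    for p, p0 in zip(target.parameters(), src_params):
        loss = loss + lam * (p - p0).pow(2).sum()
    loss.backward()
    opt.step()
    return loss.item()

# Toy sparse target dataset (placeholder for PC source-term samples).
x, y = torch.randn(16, 4), torch.randn(16, 4)
print(train_step(x, y))
```

Setting `lam` to zero recovers plain fine-tuning; a large `lam` keeps the target model pinned near the source solution.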
This chapter uses a range of quotes and findings from the internet and the literature. The key premises of this chapter, which are illustrated with examples, are as follows. First, Big Data requires the use of algorithms. Second, algorithms can create misleading information. Third, algorithms can lead to destructive outcomes. But we should not forget that humans program algorithms. With Big Data come algorithms to run many, often involved, computations. We cannot oversee all these data ourselves, so we need the help of algorithms to make computations for us. We might label these algorithms as Artificial Intelligence, but this might suggest that they can do things on their own. They can run massive computations, but they need to be fed with data. And this feeding is usually done by us, by humans, and we also choose the algorithms to be used.
Currently we may have access to large databases, sometimes referred to as Big Data, and for those large datasets simple econometric models will not do. When you have a million people in your database, as insurance firms, telephone providers, or charities do, and you have collected information on these individuals for many years, you simply cannot summarize these data using a small econometric model with just a few regressors. In this chapter we address diverse options for how to handle Big Data. We kick off with a discussion of what Big Data is and why it is special. Next, we discuss a few options such as selective sampling, aggregation, nonlinear models, and variable reduction. Methods such as ridge regression, the lasso, the elastic net, and artificial neural networks are also addressed; these latter concepts are nowadays described as machine learning methods. We see that with these methods the number of choices rapidly increases, and that reproducibility can suffer. The analysis of Big Data therefore comes at the cost of more analysis and of more choices to make and to report.
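To make the shrinkage methods named above concrete, here is a minimal sketch comparing ridge, lasso, and elastic net on synthetic data with many regressors but only a few true signals; the data, penalty strengths, and settings are illustrative only.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))          # many regressors
beta = np.zeros(50); beta[:5] = 2.0      # only five truly matter
y = X @ beta + rng.normal(size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (Ridge(alpha=1.0), Lasso(alpha=0.1),
              ElasticNet(alpha=0.1, l1_ratio=0.5)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__,
          "R2:", round(model.score(X_te, y_te), 3),
          "nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```

The lasso and elastic net zero out irrelevant coefficients, which is why they are attractive when the number of regressors is large.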
Soft robotics is rapidly advancing, particularly in medical device applications. A particular miniaturized manipulator design that offers high dexterity, multiple degrees of freedom, and better lateral force rendering than competing designs has great potential for minimally invasive surgery. However, it faces challenges such as a tendency to suddenly and unpredictably deviate in bending plane orientation at higher pressures. In this work, we identified the cause of this deviation as buckling of the partition wall and proposed design alternatives, along with their manufacturing process, to address the problem without compromising the original design features. In both simulation and experiment, the novel design achieved better bending performance in terms of stiffness and reduced deviation of the bending plane. We also developed an artificial neural network-based inverse kinematics model to further improve the performance of the prototype during vectorization. This approach yielded mean absolute errors in orientation of the bending plane below $5^{\circ }$.
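For orientation, an ANN inverse-kinematics model of this kind can be sketched as a regressor from a desired pose (bending angle and bending-plane orientation) to actuation pressures. The features, architecture, and placeholder data below are assumptions; in practice the training pairs would come from measured pressure-pose data, not random labels.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Placeholder pairs: in practice these would be measured pressure-pose data.
poses = rng.uniform([0.0, 0.0], [90.0, 360.0], size=(500, 2))  # bend, plane (deg)
pressures = rng.uniform(0.0, 1.5, size=(500, 3))               # chamber pressures

ik = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ik.fit(poses, pressures)
print(ik.predict([[45.0, 120.0]]))   # predicted pressures for a desired pose
```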
Multicomponent systems are representative of the most common real situations, as many industrial discharges contain a mixture of several pollutants. This study examines the concurrent adsorption of phenol (PHE) and ciprofloxacin (CIP) onto three types of polyethylene terephthalate microplastics (PET MPs): pristine, acid-modified, and thermal-oxidatively aged. Using extended Langmuir (EL) and extended Freundlich (EF) isotherms and a new artificial neural network (ANN) model, equilibrium adsorption capacities were predicted. The EL isotherm fit best for pristine and aged PET MPs, while the EF isotherm fit best for modified PET MPs. Monolayer adsorption capacities ranged from 342.10–3715.73 mg/g for PHE and 2518.23–14498.79 mg/g for CIP, exceeding single-component adsorption. The ANN model used one hidden layer with three neurons for pristine and aged PET MPs, and two hidden layers with five neurons for modified PET MPs, with a hyperbolic tangent activation function. The models showed excellent performance metrics, including R2 values of 0.989–0.999, RMSE of 0.001–0.413, and AAE of 0.009–0.327. Synergistic interactions were observed in the binary system, with PET MPs showing higher selectivity toward CIP. The study demonstrates the effectiveness of PET MPs for binary adsorption of PHE and CIP in aqueous solutions, highlighting their potential for multicomponent pollutant removal.
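The extended Langmuir isotherm used above has the standard binary form q_i = qmax_i K_i C_i / (1 + K_1 C_1 + K_2 C_2). The sketch below evaluates it; the capacities and affinity constants are placeholders, not the fitted values from this study.

```python
# Extended Langmuir isotherm for a binary system:
# q_i = qmax_i * K_i * C_i / (1 + K_1*C_1 + K_2*C_2)
def extended_langmuir(C, qmax, K):
    denom = 1.0 + sum(Kj * Cj for Kj, Cj in zip(K, C))
    return [qm * Ki * Ci / denom for qm, Ki, Ci in zip(qmax, K, C)]

C = [10.0, 5.0]            # equilibrium concentrations of PHE, CIP (mg/L)
qmax = [500.0, 3000.0]     # hypothetical monolayer capacities (mg/g)
K = [0.02, 0.05]           # hypothetical affinity constants (L/mg)
q_phe, q_cip = extended_langmuir(C, qmax, K)
print(q_phe, q_cip)
```

The shared denominator is what encodes competition between the two solutes for the same adsorption sites.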
This study introduces a hybrid model that utilizes a model-based optimization method to generate training data and an artificial neural network (ANN)-based learning method to offer real-time exoskeleton support in lifting activities. For the model-based optimization method, the torque of the knee exoskeleton and the optimal lifting motion are predicted using a two-dimensional (2D) human–exoskeleton model. The control points for the exoskeleton motor current profiles and human joint angle profiles from cubic B-spline interpolation represent the design variables. Minimizing the square of the normalized human joint torque is taken as the cost function. Subsequently, the lifting optimization problem is tackled using a sequential quadratic programming (SQP) algorithm in the sparse nonlinear optimizer (SNOPT). For the learning-based approach, the control model is trained using a general regression neural network (GRNN). The anthropometric parameters of the human subjects and the lifting boundary postures are used as input parameters, while the control points for exoskeleton torque are treated as output parameters. Once trained, the learning-based control model can provide exoskeleton assistive torque in real time for lifting tasks. Comparisons of joint angles and ground reaction forces (GRFs) between experimental and simulation results are presented for two test subjects. Furthermore, the utilization of the exoskeleton significantly reduces activations of the four knee extensor and flexor muscles compared to lifting without the exoskeleton for both subjects. Overall, the learning-based control method can generate assistive torque profiles in real time, faster than the model-based optimal control approach.
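A GRNN of the kind named above is essentially a Nadaraya-Watson kernel-weighted average of stored training outputs, which is why prediction is fast enough for real-time use. The sketch below shows that core; the input/output dimensions, bandwidth `sigma`, and random data are assumptions standing in for the study's anthropometric features and torque control points.

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of training outputs."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w @ Y_train / np.sum(w)

rng = np.random.default_rng(0)
# Hypothetical data: anthropometry + boundary postures -> torque control points.
X = rng.random((200, 6))   # e.g., height, mass, four posture angles
Y = rng.random((200, 8))   # e.g., eight B-spline control points
x_new = rng.random(6)
print(grnn_predict(X, Y, x_new))
```

Because there is no iterative training, new optimization samples can simply be appended to `X` and `Y`.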
Complex physical processes inherent to rainfall make its prediction a challenging task. To contribute to the improvement of rainfall prediction, artificial neural network (ANN) models were developed using a multilayer perceptron (MLP) approach to predict monthly rainfall 2 months in advance for six geographically diverse weather stations across the Benin Republic. For this purpose, 12 lagged values of atmospheric data were used as predictors. The models were trained using data from 1959 to 2017 and tested for 4 years (2018–2021). The proposed method was compared to long short-term memory (LSTM) and climatology forecasts (CFs). The prediction performance was evaluated using five statistical measures: root mean square error, mean absolute error, mean absolute percentage error, coefficient of determination, and the Nash–Sutcliffe efficiency (NSE) coefficient. Furthermore, Taylor diagrams, violin plots, box error plots, and the Kruskal–Wallis test were used to assess the robustness of the model's forecasts. The results revealed that the MLP gives better results than LSTM and CF. The NSE obtained with the MLP, LSTM, and CF models during the test period ranges from 0.373 to 0.885, 0.297 to 0.875, and 0.335 to 0.845, respectively, depending on the weather station. Rainfall predictions were more accurate at higher latitudes across the country, with an NSE improvement of 0.512 using the MLP, showing the effect of geographic region on prediction model results. In summary, this research has revealed the potential of ANN techniques for predicting monthly rainfall 2 months ahead, providing valuable insights for decision-makers in the Republic of Benin.
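The lagged-predictor setup and the NSE score can be sketched as follows on a synthetic monthly series; the 12-lag window and 2-month lead follow the description above, while the data, network size, and train/test split are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rain = np.sin(np.linspace(0, 60, 720)) + rng.normal(0, 0.2, 720)  # toy series

lags, lead = 12, 2
X = np.array([rain[t - lead - lags + 1 : t - lead + 1]     # 12 lagged values
              for t in range(lags + lead - 1, len(rain))])
y = rain[lags + lead - 1:]                                 # value 2 steps ahead

split = int(0.9 * len(y))
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X[:split], y[:split])
pred, obs = mlp.predict(X[split:]), y[split:]

# Nash-Sutcliffe efficiency: 1 means perfect, 0 means no better than the mean.
nse = 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
print("NSE:", round(float(nse), 3))
```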
Real-time and efficient path planning is critical for all robotic systems. It is of particular importance for industrial robots, since the overall planning and execution time directly impact the cycle time and automation economics in production lines. While the problem may not be complex in static environments, classical approaches are inefficient in high-dimensional environments in terms of planning time and optimality. Collision checking poses another challenge in obtaining a real-time solution for path planning in complex environments. To address these issues, we propose an end-to-end learning-based framework, the Path Planning and Collision Checking Network (PPCNet). The PPCNet generates the path by computing waypoints sequentially using two networks: the first network generates a waypoint, and the second one determines whether the waypoint is on a collision-free segment of the path. The end-to-end training process is based on imitation learning that uses data aggregation from the experience of an expert planner to train the two networks simultaneously. We utilize two approaches for training a network that efficiently approximates the exact geometrical collision-checking function. Finally, the PPCNet is evaluated in two different simulation environments and in a practical implementation on a robotic arm for a bin-picking application. Compared to state-of-the-art path-planning methods, our results show significant improvement in performance, greatly reducing planning time with comparable success rates and path lengths.
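The sequential two-network inference loop described above can be sketched as follows. The networks themselves are placeholders (toy stand-ins are provided so the sketch runs); the threshold, tolerance, and fallback behavior are assumptions, not details from the paper.

```python
import numpy as np

def plan(start, goal, waypoint_net, collision_net, max_steps=100, tol=0.05):
    q, goal = np.asarray(start, float), np.asarray(goal, float)
    path = [q]
    for _ in range(max_steps):
        q_next = waypoint_net(q, goal)        # network 1: propose next waypoint
        if collision_net(q, q_next) < 0.5:    # network 2: segment predicted unsafe
            return None                       # defer to a fallback planner
        path.append(q_next)
        q = q_next
        if np.linalg.norm(q - goal) < tol:    # close enough to the goal
            return path
    return None

# Toy stand-ins so the sketch runs: straight-line steps, always "free".
wp = lambda q, g: q + 0.1 * (g - q)
cc = lambda a, b: 1.0
print(len(plan([0.0, 0.0], [1.0, 1.0], wp, cc)), "waypoints")
```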
In climate modeling, the stratospheric ozone layer is typically considered only in a highly simplified form due to computational constraints. For climate projections, it would be advantageous to include the mutual interactions between stratospheric ozone, temperature, and atmospheric dynamics to accurately represent radiative forcing. The overarching goal of our research is to replace the ozone layer in climate models with a machine-learned neural representation of stratospheric ozone chemistry that allows for a particularly fast, yet accurate and stable, simulation. We created a benchmark data set from pairs of input and output variables that we stored from simulations of the ATLAS Chemistry and Transport Model. We analyzed several variants of multilayer perceptrons suitable for physical problems to learn a neural representation of a function that predicts 24-h ozone tendencies based on input variables. We performed a comprehensive hyperparameter optimization of the multilayer perceptron using Bayesian search and Hyperband early stopping. We validated our model by replacing the full chemistry module of ATLAS and comparing computation time, accuracy, and stability. We found that our model ran a factor of 700 faster than the full chemistry module. The accuracy of our model compares favorably to the full chemistry module within a 2-year simulation run, it outperforms a previous polynomial approach for fast ozone chemistry, and it reproduces seasonality well in both hemispheres. In conclusion, the neural representation of stratospheric ozone chemistry resulted in an ozone layer that showed high accuracy, a significant speed-up, and stability in a long-term simulation.
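The emulator pattern described above — an MLP mapping input variables to 24-h ozone tendencies, with the host model adding the predicted tendency each day — can be sketched as follows. Everything below (features, units, data, network size) is illustrative and is not the ATLAS setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Placeholder input/output pairs standing in for the stored ATLAS samples.
X = rng.random((5000, 4))           # e.g., ozone, temperature, pressure, zenith
dy = rng.normal(0, 1e-2, 5000)      # 24-h ozone tendency (toy values)

emu = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
emu.fit(X, dy)

ozone = 1.0
for day in range(5):                # stand-in for the host model's time loop
    state = np.array([[ozone, 0.5, 0.3, 0.2]])
    ozone += emu.predict(state)[0]  # add the predicted 24-h tendency
print(ozone)
```

The speed-up comes from replacing the stiff chemistry integration with a single forward pass per grid point per day.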
Identifying early predictors of dialysis requirements in earthquake-related injuries is crucial for optimal resource allocation and timely intervention. This study aimed to develop a predictive scoring system, named SAFE-QUAKE (Seismic Assessment of Kidney Function to Rule Out Dialysis Requirement), to identify patients at high risk of developing acute kidney injury (AKI) and requiring dialysis.
Methods:
A retrospective analysis was conducted on a cohort of 205 patients presenting with earthquake-related injuries. Patients were divided into two groups based on their need for dialysis: the no-dialysis group (n = 170) and the dialysis group (n = 35). Demographic, clinical, and laboratory data were collected and compared between the two groups to identify significant predictors of dialysis requirements. The parameters forming the score were selected by an importance analysis using artificial neural networks (ANNs), applied to the parameters that exhibited statistically significant differences in univariate analysis.
Results:
The dialysis group had a significantly longer median duration of being trapped under debris (48 hours) compared to the no-dialysis group (8 hours). Blood gas and laboratory analyses revealed significant differences in pH levels, lactate values, creatinine levels, lactate dehydrogenase (LDH) levels, and the aspartate transaminase (AST)-to-alanine transaminase (ALT) ratio between the two groups. Based on these findings, the SAFE-QUAKE rule-out scoring system was developed, incorporating entrapment duration (<45 hours), pH level (>7.31), creatinine level (<2 mg/dL), LDH level (<1600 mg/dL), and the AST-to-ALT ratio (<2.4) as key predictors of dialysis requirements. The score classified 139 patients as low risk, and among them only one patient required dialysis, resulting in a negative predictive value of 99.29%.
Conclusions:
The SAFE-QUAKE scoring system demonstrated a high negative predictive value of 99.29% in ruling out the need for dialysis among earthquake-related injury cases. This scoring system offers a practical approach for health care providers to identify patients at high risk of developing AKI and requiring dialysis in earthquake-affected regions.
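The five rule-out criteria listed in the Results transcribe directly into a screening function, sketched below. The handling of values exactly at a threshold is an assumption, as the abstract does not specify it.

```python
def safe_quake_rule_out(entrapment_h, ph, creatinine_mg_dl, ldh, ast_alt_ratio):
    """Return True if all five low-risk criteria are met (dialysis ruled out)."""
    return (entrapment_h < 45
            and ph > 7.31
            and creatinine_mg_dl < 2
            and ldh < 1600
            and ast_alt_ratio < 2.4)

print(safe_quake_rule_out(8, 7.38, 1.1, 900, 1.2))    # True: low risk
print(safe_quake_rule_out(48, 7.25, 3.0, 2100, 3.0))  # False: not ruled out
```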
Hydroinformatics is a technology that combines information and communications technologies with disciplinary optimization and simulation models that focus on the management of water. This paper reviews the historical development of hydroinformatics and summarizes the current state of the technology. It describes the range of modeling tools and applications currently described in the hydroinformatics literature. The paper concludes with some speculations about possible future developments in hydroinformatics.
Chapter 9 focuses on superalloys operating at high temperature, where high strength as well as creep and corrosion resistance are demanded. We take Ni-based single-crystal superalloys and Ni–Fe-based superalloys for advanced ultrasupercritical (A-USC) power plants as examples to demonstrate how alloy design is accomplished in these multicomponent alloy systems. The first case study introduces the design procedure for a Ni-based single-crystal superalloy using a multicriterion constrained multistart optimization algorithm. In the second case study, the design procedure for a Ni–Fe-based superalloy, combining an artificial neural network (ANN) model with a genetic algorithm (GA) based on an experimental dataset, is presented.
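The ANN-plus-GA pattern in the second case study — a trained surrogate scoring candidate compositions while a GA searches over them — can be sketched as below. The composition bounds, the toy surrogate, and all GA settings are assumptions, not the chapter's actual design space.

```python
import numpy as np

def ga_over_surrogate(surrogate, lo, hi, pop=40, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = surrogate(X)                          # ANN-predicted property
        parents = X[np.argsort(fit)[-pop // 2:]]    # keep the best half
        n_kids = pop - len(parents)
        idx = rng.integers(0, len(parents), size=(n_kids, 2))
        mask = rng.random((n_kids, len(lo))) < 0.5
        kids = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])  # crossover
        kids = kids + rng.normal(0, 0.02, kids.shape)                  # mutation
        X = np.clip(np.vstack([parents, kids]), lo, hi)
    return X[np.argmax(surrogate(X))]

# Toy surrogate standing in for the trained ANN property model.
toy = lambda X: -np.sum((X - 0.3) ** 2, axis=1)
print(ga_over_surrogate(toy, np.zeros(5), np.ones(5)))
```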
Inspired by the human brain, neural network (NN) models have emerged as the dominant branch of machine learning, with the multi-layer perceptron (MLP) model being the most popular. Non-linear optimization and the presence of local minima during optimization led to interest in other NN architectures that only require linear least squares optimization, e.g. extreme learning machines (ELM) and radial basis functions (RBF). Such models readily adapt to online learning, where a model can be updated inexpensively as new data arrive continually. Applications of NNs to predict conditional distributions (by the conditional density network and the mixture density network) and to perform quantile regression are also covered.
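The quantile regression mentioned above rests on the pinball loss rho_tau(e) = max(tau·e, (tau − 1)·e). A minimal NumPy sketch with a linear predictor is shown below (an NN version would swap the linear model for an MLP); the data and learning settings are illustrative.

```python
import numpy as np

def pinball(e, tau):
    """Pinball (quantile) loss for residuals e at quantile level tau."""
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = 2 * x + rng.normal(0, 0.3 + 0.5 * x)     # heteroscedastic noise

tau, w, b, lr = 0.9, 0.0, 0.0, 0.05
for _ in range(2000):                        # subgradient descent
    e = y - (w * x + b)
    g = np.where(e > 0, tau, tau - 1)        # subgradient w.r.t. prediction
    w += lr * np.mean(g * x)
    b += lr * np.mean(g)
print(f"tau={tau}: y ~ {w:.2f}x + {b:.2f}, "
      f"loss={pinball(y - (w * x + b), tau):.3f}")
```

With tau = 0.9 the fitted line tracks the conditional 90th percentile rather than the conditional mean.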
In through-wall radar systems, the wall parameters, including permittivity and wall thickness, are of crucial importance for locating targets precisely. Recently, approaches based on machine learning were introduced to obtain quick and accurate estimates of wall parameters. However, these approaches are less reliable, as only simulation results are presented. One of the major concerns with machine learning-based approaches is the generation of training and testing data, which requires the fabrication of walls with different permittivity, thickness, and conductivity. Creating walls with different permittivity, thickness, and conductivity can be challenging and expensive. Therefore, an effort has been made in this paper to establish a cost-effective and robust machine learning-based wall parameter estimation process using the transmission line method and an artificial neural network. The implementation and efficacy of the proposed approach have been demonstrated through simulation and experimental results. The proposed approach quickly and accurately predicted the relative permittivity and thickness of a real building wall. The merit of the proposed approach is that it is less complex and computationally efficient, as it can extract wall parameters from only one measurement, and it can therefore be used in conjunction with any commercial through-wall radar system.
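In the transmission-line view, a wall is a dielectric slab whose reflection at normal incidence follows from the interface coefficients; such a forward model can generate (permittivity, thickness) to reflection training samples cheaply. The sketch below uses the standard lossless-slab formula; the frequency and wall values are illustrative, and the paper's own transmission line model may differ in detail.

```python
import numpy as np

def slab_reflection(er, d, f):
    """Reflection coefficient of a lossless dielectric slab in air."""
    c = 3e8
    k1 = 2 * np.pi * f / c * np.sqrt(er)           # wavenumber inside the wall
    r01 = (1 - np.sqrt(er)) / (1 + np.sqrt(er))    # air -> wall interface
    e = np.exp(-2j * k1 * d)                       # round-trip phase in the slab
    # Slab formula with wall -> air coefficient r12 = -r01.
    return (r01 - r01 * e) / (1 - r01 ** 2 * e)

print(abs(slab_reflection(4.0, 0.2, 2.4e9)))       # er = 4, 20 cm wall, 2.4 GHz
```

Sweeping `er` and `d` over plausible ranges yields the labeled dataset on which the ANN inverse model is trained.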
In this work, an artificial neural network model is established to understand the relationship among the tensile properties of as-printed Ti6Al4V parts, annealing parameters, and the tensile properties of annealed Ti6Al4V parts. The database was established by collecting published reports on the annealing treatment of selective laser melting (SLM) Ti6Al4V from 2006 to 2020. Using the established model, it is possible to prescribe annealing parameters and predict properties after annealing for SLM Ti6Al4V parts with high confidence. The model shows high accuracy in the prediction of yield strength (YS) and ultimate tensile strength (UTS). It is found that the YS and UTS are sensitive to the annealing parameters, including temperature and holding time. The YS and UTS are also sensitive to the initial YS and UTS of the as-printed parts. The model suggests that an annealing process with a holding time of less than 4 h and a holding temperature below 850°C is desirable for as-printed Ti6Al4V parts to reach the YS required by the ASTM standard. By studying the collected data on microstructure and tensile properties of annealed Ti6Al4V, a new Hall-Petch relationship is proposed in this work to correlate grain size and YS for annealed SLM Ti6Al4V parts. The prediction of strain to failure shows lower accuracy compared with the predictions of YS and UTS due to the large scatter in the experimental data collected from the published reports.
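A Hall-Petch relationship has the classical form sigma_y = sigma_0 + k / sqrt(d), relating yield strength to grain size d. The sketch below evaluates that form; the coefficients are placeholders, not the fitted values proposed in this paper.

```python
import numpy as np

def hall_petch(d_um, sigma0=800.0, k=250.0):
    """Yield strength (MPa) from grain size d (um); illustrative constants."""
    return sigma0 + k / np.sqrt(d_um)

for d in (0.5, 1.0, 5.0, 10.0):
    print(f"d = {d:4.1f} um -> YS ~ {hall_petch(d):6.1f} MPa")
```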
Surface roughness (SR) is one of the major parameters used to govern the quality of fused deposition modeling (FDM)-printed products, and the FDM process parameters can be easily regulated in order to obtain a good surface finish. The surface quality of a product produced by FDM is generally affected by the staircase effect, which needs to be managed. In addition, the production time (PT) to fabricate the product and the volume percentage error (VPE) should be minimized to make the FDM process more efficient. The aim of this paper is to accomplish these three objectives with a parametric optimization technique integrating an artificial neural network (ANN) and the whale optimization algorithm (WOA). The FDM parameters taken into consideration are layer thickness, nozzle temperature, printing speed, and raster width. Experiments were conducted on printed samples to examine the impact of the input parameters on SR, VPE, and PT according to Taguchi's L27 orthogonal array. The ANN model was built from the experimental data and then used as the objective function in the WOA with the aim of minimizing the output responses. The robustness of the proposed method has been validated on the optimal combinations of FDM process parameters.
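The ANN-in-the-loop optimization can be sketched with a simplified WOA minimizing a stand-in objective (a toy function replaces the trained ANN here). The parameter bounds, the per-dimension coefficient vectors, and the single-objective form are all simplifying assumptions relative to the paper.

```python
import numpy as np

def woa_minimize(f, lo, hi, agents=20, iters=100, seed=0):
    """Simplified whale optimization algorithm over box constraints."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    X = rng.uniform(lo, hi, (agents, dim))
    best = X[np.argmin([f(x) for x in X])].copy()
    for t in range(iters):
        a = 2 - 2 * t / iters                       # a decreases from 2 to 0
        for i in range(agents):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):           # encircle the best agent
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                               # explore: random agent
                    ref = X[rng.integers(agents)]
                    X[i] = ref - A * np.abs(C * ref - X[i])
            else:                                   # spiral bubble-net move
                l = rng.uniform(-1, 1, dim)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
        cand = X[np.argmin([f(x) for x in X])]
        if f(cand) < f(best):
            best = cand.copy()
    return best

# Toy stand-in for the ANN mapping (layer thickness, nozzle temperature,
# printing speed, raster width) -> weighted SR/VPE/PT objective.
obj = lambda x: float(np.sum((x - np.array([0.2, 210, 40, 0.45])) ** 2))
lo, hi = np.array([0.1, 190, 20, 0.3]), np.array([0.3, 230, 60, 0.6])
print(woa_minimize(obj, lo, hi))
```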
The objective of the research is to estimate the change in ecosystem service value (ESV) due to the Rohingya refugee influx in the Ukhiya and Teknaf upazilas of Bangladesh.
Methods:
An artificial neural network (ANN) supervised classification technique was used to estimate land use/land cover (LULC) dynamics between 2017 (ie, before the Rohingya refugee influx) and 2021. The ESV changes between 2017 and 2021 were assessed using the benefit transfer approach.
Results:
According to the findings, forest cover declined by 54.88 km2 (9.58%) because of the refugee influx during the study period. Settlement area increased by around 47.26 km2 (8.25%) due to the need to provide shelter for Rohingya refugees in camp areas. Due to the increase in Rohingya refugee settlements, the total ESV increased from US $310.13 million in 2017 to US $332.94 million in 2021. Because of the disappearance of forest areas, the ESV for raw materials and biodiversity fell by 13.58% and 14.57%, respectively.
Conclusion:
Natural resource conservation for long-term development will benefit from the findings of this study.
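The benefit transfer approach named in the Methods amounts to multiplying each land-cover area by a published per-hectare value coefficient and summing, as in the sketch below; the areas and coefficients shown are illustrative, not the study's figures.

```python
def total_esv(areas_ha, coef_usd_per_ha):
    """Benefit transfer: sum of land-cover area times its value coefficient."""
    return sum(areas_ha[k] * coef_usd_per_ha[k] for k in areas_ha)

areas_2017 = {"forest": 57000, "settlement": 21000, "water": 8000}   # ha (toy)
coef = {"forest": 900, "settlement": 6500, "water": 8500}            # US$/ha/yr (toy)
print(f"ESV ~ US$ {total_esv(areas_2017, coef) / 1e6:.1f} million")
```

Repeating the calculation with the 2021 areas and differencing the totals gives the ESV change.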
In this paper, the design of a frequency-reconfigurable planar antenna incorporating a metasurface superstrate (FRPA-MSS) is presented using an artificial neural network. The dual-layer radiating structure is created on a 1.524 mm thick Rogers RO4350B substrate board (εr = 3.48, tan δ = 0.0037). The candidate antenna is designed and analyzed using the high-frequency structure simulator (HFSS) tool. The transfer matrix method is employed for retrieval of the electromagnetic properties of the metamaterial. Frequency reconfiguration is achieved by placing the metasurface superstrate onto the rectangular patch antenna. A simplified ANN approach has been employed for the design of the proposed metasurface-incorporated antenna. The presented prototypes are characterized through experimental measurements. Practical observations show that the proposed antenna effectively reconfigures over the tuning range from 5.03 to 6.13 GHz. Moreover, the presented antenna operates efficiently with agreeable gain, good impedance matching, and stable pattern characteristics across the entire operational bandwidth. The experimental results validate the simulated performance.
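For context, a common way to retrieve effective metamaterial properties from simulated S-parameters of a slab of thickness d is the standard impedance/refractive-index inversion sketched below. This is a generic retrieval with simplified branch selection, not necessarily the exact transfer matrix procedure of the paper; the S-parameter values are placeholders.

```python
import numpy as np

def retrieve(s11, s21, k0, d):
    """Effective impedance z, index n, permittivity, permeability of a slab."""
    z = np.sqrt(((1 + s11) ** 2 - s21 ** 2) / ((1 - s11) ** 2 - s21 ** 2))
    if z.real < 0:                      # enforce passivity: Re(z) >= 0
        z = -z
    n = np.arccos((1 - s11 ** 2 + s21 ** 2) / (2 * s21)) / (k0 * d)
    return z, n, n / z, n * z           # z, n, eps_eff, mu_eff

k0 = 2 * np.pi * 5.5e9 / 3e8            # free-space wavenumber at 5.5 GHz
print(retrieve(0.3 + 0.2j, 0.8 - 0.1j, k0, 1.524e-3))
```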
The main motivation of this study is the COVID-19 pandemic, a major threat to the whole world from the day it first emerged in the Chinese city of Wuhan. Predictions of the number of COVID-19 cases are crucial in order to prevent and control the outbreak. In this research study, an artificial neural network with a rectified linear unit (ReLU)-based technique is implemented to predict the number of deaths, recovered, and confirmed cases of COVID-19 in Pakistan, using data from 137 days of COVID-19 cases, from 25 February 2020, when the first two cases were confirmed, until 10 July 2020. The collected data were divided into training and test sets, which were used to test the efficiency of the proposed technique. Furthermore, future predictions were made by the proposed technique for the next 7 days, with the model trained on all available data.
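The setup described above — a ReLU network fit on a daily series, trained on an initial segment, tested on the remainder, then refit on everything to forecast 7 days ahead — can be sketched as follows. The series, features, network size, and split point are assumptions; the real study used the reported Pakistani case counts.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
days = np.arange(137).reshape(-1, 1)                     # day index as feature
cases = 2 * np.exp(0.03 * days.ravel()) + rng.normal(0, 5, 137)  # toy series

split = 120
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                   max_iter=5000, random_state=0)
mlp.fit(days[:split], cases[:split])
print("test R^2:", round(mlp.score(days[split:], cases[split:]), 3))

# Refit on all data, then forecast the next 7 days.
mlp.fit(days, cases)
print(mlp.predict(np.arange(137, 144).reshape(-1, 1)).round(0))
```

Note that MLPs extrapolate poorly beyond the training range, which is one reason short forecast horizons such as 7 days are used.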
Ti–6Al–4V alloy has superior material properties such as a high strength-to-weight ratio, good corrosion resistance, and excellent fracture toughness. Therefore, it is widely used in the aerospace, medical, and automotive industries, where machining is an essential process. However, Ti–6Al–4V is a material with extremely low machinability; thus, conventional machining methods are not appropriate for such materials. Ultrasonic-assisted machining (UAM) is a novel hybrid machining method with numerous advantages over conventional machining processes. In addition, minimum quantity lubrication (MQL) is an alternative type of metal cutting fluid application that is used instead of conventional lubrication in machining. One of the parameters that can be used to measure the performance of the machining process is the cutting force. Nevertheless, only a limited number of studies compare the changes in cutting forces when using UAM and MQL together, as such experiments are time-consuming and not cost-effective. An artificial neural network (ANN) is an alternative method that may eliminate the limitations mentioned above by estimating the outputs from a limited amount of data. In this study, a model was developed and coded in the Python programming environment in order to predict cutting forces using an ANN. The results showed that experimental cutting forces were estimated with a successful prediction rate of 0.99, with a mean absolute percentage error of 1.85% and a mean squared error of 13.1. Moreover, considering the very limited experimental data, the ANN provided acceptable results in a cost- and time-effective way.
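A Python sketch of this workflow — a small ANN regressor mapping machining parameters to cutting force on a limited dataset — is shown below. The feature names, synthetic data, and network size are assumptions; the study's own inputs and measured forces differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical features: cutting speed, feed, UAM on/off, MQL on/off.
X = rng.uniform([40, 0.05, 0, 0], [120, 0.25, 1, 1], size=(60, 4))
F = (200 + 1.5 * X[:, 0] + 800 * X[:, 1] - 30 * X[:, 2] - 20 * X[:, 3]
     + rng.normal(0, 5, 60))                    # toy cutting force (N)

# Scaling matters for small ANNs on physical units, hence the pipeline.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X[:48], F[:48])                       # train on 48, hold out 12
print("test R^2:", round(model.score(X[48:], F[48:]), 3))
```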