This chapter zeroes in on the similarities and differences between first and second language acquisition. First, the chapter breaks down the term “second language acquisition” by discussing each of those words. It revisits the components of language (grammar, vocabulary, pronunciation, and pragmatics) from second language acquisition perspectives. It then introduces different second language acquisition theories such as input processing theory, skill acquisition theory, usage-based theory, sociocultural theory, complex dynamic systems theory, translanguaging, and Monitor Theory. The applicability of those theories to classroom second language teaching is discussed.
In this paper, a complete introduction to the dead reckoning navigation technique is offered after a discussion of the various forms of navigation and the benefits and drawbacks of each. The dead reckoning navigation solution is then presented as a low-cost option, together with the governing equations used by the system. Moreover, to achieve the highest level of accuracy in navigation, the navigation errors introduced by dead reckoning are analysed. Employing the suggested dead reckoning navigation system, the final position of an underwater vehicle can be established with a high degree of accuracy using experimental sensor data and the uncertainties associated with the system. Finally, to illustrate the correctness of the dead reckoning navigation process, the uncertainty-based error analysis carried out on the experimental dead reckoning data is compared with GPS data.
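As a rough illustration of the basic idea (not the paper's full underwater navigation system), the sketch below propagates a 2-D position from assumed speed and heading measurements and accumulates a crude first-order position uncertainty; all sensor values, noise levels and the uncertainty model are hypothetical.

    import numpy as np

    # Minimal dead-reckoning sketch (illustrative only, not the paper's system):
    # propagate a 2-D position from speed-over-ground and heading measurements,
    # and grow a crude position uncertainty with travelled distance.

    def dead_reckon(p0, speeds, headings_rad, dt, speed_sigma=0.05, heading_sigma=0.01):
        """Return the track of positions and a rough 1-sigma radial uncertainty.

        speed_sigma in m/s and heading_sigma in rad are assumed sensor noise levels.
        """
        positions = [np.asarray(p0, dtype=float)]
        sigma = 0.0
        for v, psi in zip(speeds, headings_rad):
            step = v * dt * np.array([np.sin(psi), np.cos(psi)])  # east, north
            positions.append(positions[-1] + step)
            # crude first-order error growth: speed and heading errors scale with distance
            sigma += dt * np.hypot(speed_sigma, v * heading_sigma)
        return np.array(positions), sigma

    # hypothetical sensor log: constant 1.5 m/s, slowly turning heading, 1 s samples
    track, sigma = dead_reckon([0.0, 0.0],
                               speeds=[1.5] * 60,
                               headings_rad=np.linspace(0.0, 0.3, 60),
                               dt=1.0)
    print("final position (m):", track[-1], " approx 1-sigma radius (m):", round(sigma, 2))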
Calculation of loss scenarios is a fundamental requirement of simulation-based capital models, and these calculations are commonly approximated. Within a life insurance setting, a loss scenario may involve an asset-liability optimization. When cashflows and asset values depend on only a small number of risk factor components, low-dimensional approximations may be used as inputs to the optimization, resulting in an approximation of the loss. By considering these loss approximations as perturbations of linear optimization problems, approximation errors in loss scenarios can be bounded to first order and attributed to specific proxies. This attribution creates a mechanism for improving the approximations and for the eventual elimination of approximation errors in capital estimates through targeted exact computation. The results are demonstrated through a stylized worked example and a corresponding numerical study. Advances in the error analysis of proxy models enhance confidence in capital estimates. Beyond error analysis, the presented methods can be applied to general sensitivity analysis and the calculation of risk.
In this paper, the pricing of equity warrants under a class of fractional Brownian motion models is investigated numerically. By establishing a new nonlinear partial differential equation (PDE) system governing the price in terms of the observable stock price, we solve the pricing system effectively by a robust implicit-explicit numerical method. This is fundamentally different from the documented methods, which first solve the price with respect to the firm value analytically, by assuming that the volatility of the firm is constant, and then compute the price with respect to the stock price and estimate the firm volatility numerically. It is shown that the proposed method is stable in the maximum-norm sense. Furthermore, a sharp theoretical error estimate for the current method is provided, which is also verified numerically. Numerical examples suggest that the current method is efficient and can produce results that are, overall, closer to real market prices than other existing approaches. A great advantage of the current method is that it can be extended easily to price equity warrants under other complicated models.
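The implicit-explicit idea can be illustrated on a generic 1-D parabolic problem: the stiff linear (diffusion-like) term is advanced implicitly while the nonlinear term is treated explicitly. The sketch below shows only this generic idea with an assumed nonlinear term; it is not the paper's pricing scheme for the fractional Brownian motion model.

    import numpy as np

    # Generic implicit-explicit (IMEX) Euler sketch on u_t = a*u_xx + N(u):
    # the stiff linear diffusion term is treated implicitly, the nonlinear term explicitly.

    a, L, T = 0.5, 1.0, 0.1
    nx, nt = 101, 200
    x = np.linspace(0.0, L, nx)
    dx, dt = x[1] - x[0], T / nt

    # second-difference matrix with homogeneous Dirichlet boundaries
    A = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
         + np.diag(np.ones(nx - 1), -1)) / dx**2
    A[0, :] = 0.0
    A[-1, :] = 0.0

    N = lambda u: -u**3              # an assumed, illustrative nonlinear term
    u = np.sin(np.pi * x)            # initial condition

    lhs = np.eye(nx) - dt * a * A    # implicit part; would be factorised once in a real code
    for _ in range(nt):
        u = np.linalg.solve(lhs, u + dt * N(u))
        u[0] = u[-1] = 0.0           # enforce boundary values

    print("max of solution at T:", u.max())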
Little is known about the productive morphosyntax of Norwegian children with developmental language disorder (DLD). The current study examined morphosyntax in Norwegian-speaking children with DLD (n = 19) and a control group that was pairwise matched for age, gender, and intelligence quotient (IQ; n = 19). The children’s sentence repetitions were studied through the lens of Processability Theory. The group differences were largest for grammatical structures at the latest developmental stage of the processability hierarchy. The Norwegian subordinate clause word order, belonging to the latest stage of the processability hierarchy, stood out as particularly challenging for children with DLD. Only 2 children with DLD but 16 children in the control group produced a subordinate clause with subordinate clause word order. Categorization of children’s errors revealed that children with DLD made more errors of all types (addition, omission, substitution, inflection, and word order) but especially errors of omission and inflection.
Determining accurate capital requirements is a central activity across the life insurance industry. This is computationally challenging and often involves the acceptance of proxy errors that directly impact capital requirements. Within simulation-based capital models, where proxies are being used, capital estimates are approximations that contain both statistical and proxy errors. Here, we show how basic error analysis combined with targeted exact computation can entirely eliminate proxy errors from the capital estimate. Consideration of the possible ordering of losses, combined with knowledge of their error bounds, identifies an important subset of scenarios. When these scenarios are calculated exactly, the resulting capital estimate can be made devoid of proxy errors. Advances in the handling of proxy errors improve the accuracy of capital requirements.
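A schematic of the targeted-exact-computation idea, with synthetic losses standing in for the expensive scenario calculations: scenarios are ranked by their proxy losses, the known error bound identifies the scenarios whose position relative to the capital quantile is ambiguous, and only those are recomputed exactly. The loss data, error bound and quantile level below are illustrative assumptions.

    import numpy as np

    # Schematic sketch of eliminating proxy error from a VaR-style capital estimate.
    # true_loss here is a stand-in for the expensive exact asset-liability calculation.

    rng = np.random.default_rng(0)
    n, alpha = 10_000, 0.995
    true_loss = rng.normal(size=n)                              # hypothetical "exact" losses
    proxy_loss = true_loss + rng.uniform(-0.05, 0.05, size=n)   # proxy with bounded error
    bound = 0.05                                                # known per-scenario error bound

    k = int(np.ceil(alpha * n))                                 # rank of the VaR scenario
    order = np.argsort(proxy_loss)
    threshold = proxy_loss[order[k - 1]]

    # any scenario whose error interval could straddle the quantile may change the ranking
    ambiguous = np.abs(proxy_loss - threshold) <= 2 * bound
    resolved = proxy_loss.copy()
    resolved[ambiguous] = true_loss[ambiguous]                  # targeted exact computation

    capital_exact = np.sort(true_loss)[k - 1]
    capital_resolved = np.sort(resolved)[k - 1]
    print("scenarios recomputed exactly:", int(ambiguous.sum()))
    print("capital free of proxy error:", np.isclose(capital_resolved, capital_exact))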
This paper develops the conceptual design and error analysis of a cable-driven parallel robot (CDPR). Earlier error analyses of CDPRs generally treated the cable's contact with the pulley as a single center point and neglected the pulley radius. In this paper, the conceptual design of a CDPR with pulleys on its base platform is carried out, and an error mapping model accounting for the influence of the pulley radii is established through kinematics analysis and a full-matrix complete differential method. Monte Carlo simulation is adopted for the sensitivity analysis, which, by virtue of the error model, directly describes the contribution of each error component to the total orientation error of the CDPR. The results show that the sensitivity coefficients of the pulleys' geometric errors and the cables' geometric errors are relatively large, which confirms that cable length errors and pulleys' geometric errors should be given higher priority in design and manufacturing.
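A generic Monte Carlo sensitivity sketch of the kind described: each error source is sampled, propagated through an error map, and ranked by its share of the output variance. The error-mapping matrix, source labels and standard deviations below are placeholders, not the paper's CDPR error model.

    import numpy as np

    # Generic Monte Carlo sensitivity sketch; J is an assumed placeholder error map.

    rng = np.random.default_rng(1)
    n_samples = 100_000
    labels = ["cable length", "pulley radius", "pulley position",
              "anchor point", "attachment point", "cable tension"]
    sigmas = np.array([0.5, 0.8, 0.3, 0.2, 0.2, 0.1])    # hypothetical source std devs
    J = rng.normal(size=len(labels))                     # placeholder single-output error map

    E = rng.normal(scale=sigmas, size=(n_samples, len(labels)))  # sampled error sources
    total_error = E @ J                                  # propagated total orientation error

    contrib = np.var(E * J, axis=0)                      # per-source variance contribution
    for name, share in sorted(zip(labels, contrib / contrib.sum()), key=lambda t: -t[1]):
        print(f"{name:17s} {share:6.1%}")
    print("Monte Carlo total std:", round(total_error.std(), 3),
          " sum of contributions:", round(np.sqrt(contrib.sum()), 3))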
The main goal of this paper is to solve a class of Darboux problems by converting them into two-dimensional nonlinear Volterra integral equations of the second kind. The scheme approximates the solution of these integral equations using the discrete Galerkin method together with local radial basis functions, which use a small set of data points instead of all points in the solution domain. We also employ the Gauss–Legendre integration rule on the influence domains of the shape functions to compute the local integrals appearing in the method. Since the scheme is constructed on a set of scattered points and does not require any background mesh, it is meshless. The error bound and the convergence rate of the presented method are provided. Some illustrative examples are included to show the validity and efficiency of the new technique. Furthermore, the results obtained demonstrate that this method uses much less computer memory than the method established using global radial basis functions.
This chapter provides an in-depth look at Arabic second language acquisition (SLA) by reviewing past and present approaches to teaching and how research into SLA has resulted in a number of hypotheses that relate directly to acquisition of Arabic as a second or foreign language. The account here is very rich in data that provide a substantive background for application and experimentation. A persistent challenge for Arabic is how to measure achievement and proficiency. Alhawary reviews Arabic testing strategies and how they fit into the overall picture of assessment. Blending the teaching of spoken and written Arabic has been a particular problem for the field, and Alhawary dedicates the last part of his chapter to evaluation of attempts to teach variation, underscoring the difficulties and challenges that face teachers, students, and materials developers.
The surface restitution method we present reconstructs the evolution of a glacier surface between two time-separated surface topographies using seasonal surface mass balance (SMB) data. A conservative and systematic error analysis is included, based on the availability of surface elevation measurements within the period. The method is applied from 2001 to 2013 at Hurd Glacier (a 4 km2 glacier), where we have sufficient SMB and elevation data. We estimate surface elevation changes in two steps: (1) elevation change due to SMB and (2) elevation change due to glacier dynamics. Four variants of the method are compared, depending on whether or not accumulation is memorised at each time step and whether balance profiles or SMB maps are employed. The models are validated by comparing a set of surface measurements retrieved in 2007 with the corresponding restituted elevations. Although the surface elevation change between 2001 and 2007 was larger than 10 m, more than 80% of the points restituted by the four models showed errors below ±1 m, compared with only 33% when predicted by a linear interpolator. As the error estimates of the models differ by only 0.10 m, we recommend the simplest model, which does not memorise accumulation and interpolates SMB by elevation profiles.
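A schematic of the two-step decomposition at a single surface point, with hypothetical numbers rather than Hurd Glacier data: step 1 converts cumulative SMB into an ice-equivalent thickness change, and step 2 spreads the remaining, dynamics-driven elevation change linearly between the two reference surface topographies.

    # Schematic two-step restitution at a single point (hypothetical values only).

    RHO_ICE, RHO_WATER = 900.0, 1000.0            # kg m-3, assumed densities

    h_start, h_end = 210.0, 198.0                 # surface elevations from the two DEMs (m)
    seasonal_smb_we = [-0.9, -1.1, -0.8, -1.0]    # hypothetical SMB per step, m water equivalent

    def restitute(t_frac, smb_steps_we):
        """Surface elevation at a fraction t_frac of the period between the two DEMs."""
        n = len(smb_steps_we)
        k = int(round(t_frac * n))                              # completed SMB steps
        dh_smb = sum(smb_steps_we[:k]) * RHO_WATER / RHO_ICE    # step 1: SMB-driven change
        dh_dyn_total = (h_end - h_start) - sum(smb_steps_we) * RHO_WATER / RHO_ICE
        dh_dyn = t_frac * dh_dyn_total                          # step 2: dynamic change, linear in time
        return h_start + dh_smb + dh_dyn

    print("restituted elevation at mid-period:", round(restitute(0.5, seasonal_smb_we), 2), "m")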
In recent years, marked gains in the accuracy of machine translation (MT) outputs have greatly increased its viability as a tool to support the efforts of English as a foreign language (EFL) students to write in English. This study examines error corrections made by 58 Korean university students by comparing their original L2 texts with the corresponding MT outputs. Based on the results of the error analysis, the error types were categorized into 12 categories. Students were divided into three distinct groups according to the frequency of errors in their writing, in order to determine differences among them. The t-test results reveal that the number of errors decreased significantly in the revised versions for most of the error types among all groups. The results of the regression analysis also reveal a positive correlation between the number of changes and the reduction of errors. However, the results also indicate that although all groups made error corrections at similar rates, students who committed errors less frequently in their L2 texts (the higher language proficiency groups) generally tended to correct a higher proportion of errors. Based on the findings, pedagogical implications are discussed regarding how EFL teachers can effectively incorporate MT into the classroom.
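The kind of paired comparison described above can be illustrated as follows: error counts for one error type in the original texts versus the MT-assisted revisions are compared with a paired t-test, and the error reduction is regressed on the number of changes. All counts below are simulated for illustration; they are not the study's data.

    import numpy as np
    from scipy import stats

    # Simulated error counts for 58 students, one error type, before and after revision.
    rng = np.random.default_rng(2)
    original_errors = rng.poisson(lam=6.0, size=58)
    revised_errors = np.maximum(original_errors - rng.poisson(2.0, size=58), 0)

    t_stat, p_value = stats.ttest_rel(original_errors, revised_errors)
    print(f"mean errors: original {original_errors.mean():.2f}, revised {revised_errors.mean():.2f}")
    print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

    # simple linear regression of error reduction on a hypothetical number of changes made
    changes = original_errors - revised_errors + rng.poisson(1.0, size=58)
    slope, intercept, r, p, se = stats.linregress(changes, original_errors - revised_errors)
    print(f"regression: reduction ~ {slope:.2f} * changes + {intercept:.2f} (r = {r:.2f})")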
This chapter starts with basic definitions such as types of machine learning (supervised vs. unsupervised learning, classifiers vs. regressors), types of features (binary, categorical, discrete, continuous), metrics (precision, recall, f-measure, accuracy, overfitting), and raw data, and then defines the machine learning cycle and the feature engineering cycle. The feature engineering cycle hinges on two types of analysis: exploratory data analysis at the beginning of the cycle, and error analysis at the end of each feature engineering cycle. Domain modelling and feature construction conclude the chapter, with particular emphasis on feature ideation techniques.
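For reference, the classification metrics listed above can be computed directly from a confusion matrix; the small sketch below uses a hypothetical set of binary predictions.

    # Precision, recall, f-measure and accuracy from hypothetical binary predictions.

    y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 1, 0, 1, 0, 1, 1]

    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / len(y_true)
    print(f"precision={precision:.2f} recall={recall:.2f} "
          f"f-measure={f_measure:.2f} accuracy={accuracy:.2f}")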
When machine learning engineers work with data sets, they may find the results aren't as good as they need. Instead of improving the model or collecting more data, they can use the feature engineering process to help improve results by modifying the data's features to better capture the nature of the problem. This practical guide to feature engineering is an essential addition to any data scientist's or machine learning engineer's toolbox, providing new ideas on how to improve the performance of a machine learning solution. Beginning with the basic concepts and techniques, the text builds up to a unique cross-domain approach that spans data on graphs, texts, time series, and images, with fully worked out case studies. Key topics include binning, out-of-fold estimation, feature selection, dimensionality reduction, and encoding variable-length data. The full source code for the case studies is available on a companion website as Python Jupyter notebooks.
The concepts of nodal value and grid average in the cell-centered finite volume method (FVM) are clarified in this work, a strict distinction between the two concepts in constructing numerical schemes is made, and a common fault of conflating the two is pointed out. An expansion based on grid averages, analogous to the Taylor expansion, is derived to construct correct schemes in terms of grid averages and to obtain the modified partial differential equation (MPDE), which theoretically determines the order of accuracy of a numerical scheme. A correct high-order scheme, taking the QUICK (Quadratic Upstream Interpolation for Convective Kinematics) scheme as an example, is constructed via different approaches. Furthermore, the properties of the interpolation coefficients are analyzed. We also point out that, for high-order schemes, round-off error dominates the absolute error on fine grids while truncation error dominates on coarse grids.
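For context, the familiar nodal-value form of the QUICK face interpolation on a uniform grid is sketched below (the paper's point is that the coefficients must be rederived when the stored quantities are grid averages rather than nodal values); the smooth test profile is only used to exhibit the expected third-order behaviour.

    import numpy as np

    # Standard QUICK face interpolation, written for nodal values on a uniform grid.

    def quick_face_value(phi_W, phi_P, phi_E):
        """QUICK value at the face between P and E for flow travelling W -> P -> E."""
        return 0.75 * phi_P + 0.375 * phi_E - 0.125 * phi_W

    # verify third-order behaviour on the smooth profile phi(x) = sin(x)
    for h in (0.2, 0.1, 0.05):
        x_W, x_P, x_E = -h, 0.0, h
        x_face = 0.5 * h
        approx = quick_face_value(np.sin(x_W), np.sin(x_P), np.sin(x_E))
        print(f"h={h:5.2f}  error={abs(approx - np.sin(x_face)):.2e}")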
This paper is devoted to an extension of the finite-energy condition for extended Runge-Kutta-Nyström (ERKN) integrators and applications to nonlinear wave equations. We begin with an error analysis of the integrators for multi-frequency highly oscillatory systems q'' + Mq = f(q), where M is a positive semi-definite matrix whose norm ‖M‖ may be arbitrarily large. The highly oscillatory system arises from the semi-discretisation of conservative, or dissipative, nonlinear wave equations. The structure of such a matrix M and the initial conditions depend on the particular spatial discretisation. Similarly to the error analysis for Gautschi-type methods of order two, where a finite-energy condition bounding the amplitudes of the high oscillations is satisfied by the solution, a finite-energy condition for the semi-discretisation of nonlinear wave equations is introduced and analysed. This ensures that the error bound of ERKN methods is independent of ‖M‖. Since the stepsizes are not restricted by the frequencies of M, large stepsizes can be employed by our ERKN integrators of arbitrarily high order. Numerical experiments provided in this paper demonstrate that our results are truly promising, and consistent with our analysis and prediction.
The productivity of herbicides used in corn and soybeans was calculated from field data collected in Ontario from 1967 to 1985. Combinations of preplant incorporated and preemergence treatments were selected and were evaluated to determine their effect on crop yield. Corn and soybean yields increased from herbicide use, thereby resulting in a positive net benefit to growers. Benefit/cost ratios for herbicide use in corn and soybeans were calculated to be 2.8/1 and 2.6/1, respectively, at an average price of $132/1000 kg for corn and $275/1000 kg for soybeans. The benefit/cost ratios varied with test location, method of herbicide application, and prevailing market price.
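To show how a benefit/cost ratio of the reported form is assembled, the sketch below uses the quoted corn price together with hypothetical per-hectare yield-gain and treatment-cost figures; it does not reproduce the Ontario trial data.

    # Benefit/cost ratio sketch with hypothetical per-hectare figures.

    corn_price = 132.0 / 1000.0        # $/kg, as quoted above
    yield_gain_kg_per_ha = 2500.0      # hypothetical yield increase from herbicide use
    herbicide_cost_per_ha = 120.0      # hypothetical treatment + application cost, $/ha

    benefit = yield_gain_kg_per_ha * corn_price
    ratio = benefit / herbicide_cost_per_ha
    print(f"benefit ${benefit:.0f}/ha over cost ${herbicide_cost_per_ha:.0f}/ha -> {ratio:.1f}/1")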
LiDAR technology is one option to collect spatial data about canopy geometry in many crops. However, the method of data acquisition involves many errors related to the LiDAR sensor, the GNSS receiver and the data acquisition setup. Therefore, the objective of this study was to evaluate the errors involved in data acquisition with a mobile terrestrial laser scanner (MTLS). Regular-shaped objects were scanned with the developed MTLS in two different tests: i) with the system mounted on a vehicle and ii) with the system mounted on a platform running over a rail. The errors in area estimation varied between 0.001 and 0.071 m2 for the circle, square and triangle objects. The errors in volume estimation were between 0.0003 and 0.0017 m3 for the cylinders and the truncated cone.
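The area-error comparison for a regular shape can be illustrated with a synthetic, noisy "scan" of a circle whose convex-hull area is compared with the true area; the noise level and point count below are assumptions, not MTLS measurements.

    import numpy as np
    from scipy.spatial import ConvexHull

    # Synthetic noisy "scan" of a circular object and the resulting area error.

    rng = np.random.default_rng(3)
    radius = 0.5                                   # m
    true_area = np.pi * radius**2

    theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
    r = radius + rng.normal(scale=0.01, size=theta.size)   # assumed 1 cm ranging noise
    points = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

    hull = ConvexHull(points)
    estimated_area = hull.volume                   # in 2-D, .volume is the enclosed area
    print(f"true {true_area:.4f} m2, estimated {estimated_area:.4f} m2, "
          f"error {abs(estimated_area - true_area):.4f} m2")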
We propose a class of numerical methods for solving nonlinear random differential equations with piecewise constant argument, called gPCRK methods as they combine generalised polynomial chaos with Runge-Kutta methods. An error analysis is presented involving the error arising from a finite-dimensional noise assumption, the projection error, the aliasing error and the discretisation error. A numerical example is given to illustrate the effectiveness of this approach.
A Celestial Navigation System (CNS) is a feasible and economical autonomous navigation system for deep-space probes. Ephemeris errors have a great influence on the performance of CNSs during the Mars approach phase, but there has been little research on this problem. In this paper, the analysis shows that the ephemeris error of Mars is slowly varying, while the ephemeris errors of Phobos and Deimos are periodic. The influence of the ephemeris errors of Mars and its satellites is analysed in relation to both the Sun-centred frame and the Mars-centred frame. The simulations show that the position error of a probe relative to the Sun caused by the Mars ephemeris error is almost equal to the ephemeris error itself, that the velocity error is affected only slightly, and that the position and velocity relative to Mars are hardly affected. The navigation result of a Mars probe is also greatly affected by the magnitudes and periodicities of the ephemeris errors of Phobos and Deimos, especially that of Deimos.
We introduce a multiple-interval Chebyshev-Gauss-Lobatto spectral collocation method for initial value problems of nonlinear ordinary differential equations (ODEs). This method is easy to implement and possesses high-order accuracy. In addition, it is very stable and suitable for long-time calculations. We also obtain an hp-version bound on the numerical error of the multiple-interval collocation method in the H1-norm. Numerical experiments confirm the theoretical expectations.
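A minimal multiple-interval Chebyshev-Gauss-Lobatto collocation sketch for the linear test problem u' = -u, u(0) = 1, is given below as a simple stand-in for the paper's general nonlinear setting; each interval uses the standard Chebyshev differentiation matrix and the intervals are chained through their endpoint values.

    import numpy as np

    # Minimal multiple-interval Chebyshev-Gauss-Lobatto collocation for u' = -u, u(0) = 1.

    def cheb(N):
        """Chebyshev-Gauss-Lobatto points on [-1, 1] and the differentiation matrix."""
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        dX = x[:, None] - x[None, :]
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))                # negative-sum trick for the diagonal
        return D, x

    N, intervals, T = 8, 4, 2.0
    edges = np.linspace(0.0, T, intervals + 1)
    u_left = 1.0                                   # initial value u(0)
    for a, b in zip(edges[:-1], edges[1:]):
        D, x = cheb(N)                             # x runs from +1 down to -1
        Dt = D * (-2.0 / (b - a))                  # chain rule for t = a + (b - a) * (1 - x) / 2
        A = Dt + np.eye(N + 1)                     # collocate u' + u = 0
        A[0, :] = 0.0
        A[0, 0] = 1.0                              # node x = +1 corresponds to t = a
        rhs = np.zeros(N + 1)
        rhs[0] = u_left
        u = np.linalg.solve(A, rhs)
        u_left = u[-1]                             # value at t = b feeds the next interval

    print("computed u(2) =", u_left, " exact =", np.exp(-2.0))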