In this paper, the linear space $\mathcal F$ of a special type of fractal interpolation functions (FIFs) on an interval $I$ is considered. Each FIF in $\mathcal F$ is constructed from a continuous function on $I$. We show that a finite set of linearly independent continuous functions on $I$ yields linearly independent FIFs. Then we study a finite-dimensional reproducing kernel Hilbert space (RKHS) $\mathcal F_{\mathcal B}\subset\mathcal F$, whose reproducing kernel $\mathbf k$ is defined by a basis of $\mathcal F_{\mathcal B}$. For a given data set $\mathcal D=\{(t_k, y_k) : k=0,1,\ldots,N\}$, we apply our results to curve fitting problems of minimizing the regularized empirical error over functions of the form $f_{\mathcal V}+f_{\mathcal B}$, where $f_{\mathcal V}\in C_{\mathcal V}$ and $f_{\mathcal B}\in \mathcal F_{\mathcal B}$. Here $C_{\mathcal V}$ is another finite-dimensional RKHS of certain classes of regular continuous functions with reproducing kernel $\mathbf k^*$. We show that the solution function can be written in the form $f_{\mathcal V}+f_{\mathcal B}=\sum_{m=0}^N\gamma_m\mathbf k^*_{t_m} +\sum_{j=0}^N \alpha_j\mathbf k_{t_j}$, where ${\mathbf k}_{t_m}^\ast(\cdot)={\mathbf k}^\ast(\cdot,t_m)$ and $\mathbf k_{t_j}(\cdot)=\mathbf k(\cdot,t_j)$, and that the coefficients $\gamma_m$ and $\alpha_j$ can be obtained by solving a system of linear equations.
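To make the last step concrete, the following is a minimal sketch of how coefficients $\gamma_m$ and $\alpha_j$ can be obtained from a linear system once the two Gram matrices are assembled. The Gaussian kernel, the regularization weights and the data below are illustrative stand-ins, not the FIF kernel $\mathbf k$ or the kernel $\mathbf k^*$ constructed in the paper.

```python
# Two-kernel regularized curve fitting: a sketch under illustrative assumptions.
import numpy as np

def gaussian_kernel(s, t, width=0.2):
    # Stand-in kernel for both k* and the FIF kernel k.
    return np.exp(-((s - t) / width) ** 2)

def fit_two_kernel(t, y, k_star, k, lam1=1e-3, lam2=1e-3):
    """Solve for gamma, alpha in f = sum_m gamma_m k*_{t_m} + sum_j alpha_j k_{t_j}."""
    T = np.asarray(t, dtype=float)
    Ks = k_star(T[:, None], T[None, :])   # Gram matrix of k* at the data sites
    K = k(T[:, None], T[None, :])         # Gram matrix of k  at the data sites
    n = len(T)
    # Stationarity conditions of the regularized empirical error
    # (assuming invertible Gram matrices):
    #   (K* + lam1 I) gamma + K alpha = y,
    #   K* gamma + (K + lam2 I) alpha = y.
    A = np.block([[Ks + lam1 * np.eye(n), K],
                  [Ks, K + lam2 * np.eye(n)]])
    b = np.concatenate([y, y])
    coef = np.linalg.solve(A, b)
    return coef[:n], coef[n:]             # gamma, alpha

t = np.linspace(0.0, 1.0, 11)
y = np.sin(2 * np.pi * t) + 0.05 * np.random.default_rng(0).standard_normal(11)
gamma, alpha = fit_two_kernel(t, y, gaussian_kernel, gaussian_kernel)
fitted = gaussian_kernel(t[:, None], t[None, :]) @ (gamma + alpha)
```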
In this paper, the semilocal convergence of ameliorated super-Halley methods in Banach spaces is considered. In contrast to the results in [J. M. Gutiérrez and M. A. Hernández, Comput. Math. Appl. 36 (1998) 1–8], these ameliorated methods do not require the computation of a second derivative, the cost of computing inverses is reduced, and the $R$-order of convergence is increased. Under a weaker condition, an existence–uniqueness theorem for the solution is proved.
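For context, here is a minimal scalar sketch of the classical super-Halley iteration, i.e. the baseline whose ameliorated, second-derivative-free variants are analysed in the paper. Note that this classical scheme still evaluates $f''$ explicitly, which is exactly what the ameliorated methods avoid (typically by replacing it with divided differences); the sketch is not the paper's method.

```python
# Classical super-Halley iteration for a scalar equation f(x) = 0.
def super_halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        L = fx * d2f(x) / dfx ** 2                      # degree of logarithmic convexity
        x_new = x - (1.0 + 0.5 * L / (1.0 - L)) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: cube root of 2 as a root of f(x) = x**3 - 2.
root = super_halley(lambda x: x**3 - 2, lambda x: 3 * x**2, lambda x: 6 * x, x0=1.5)
```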
In two-phase flow simulations, a difficult issue is usually the treatment of surface tension effects. These cause a pressure jump that is proportional to the curvature of the interface separating the two fluids. Since the evaluation of the curvature incorporates second derivatives, it is prone to numerical instabilities. Within this work, the interface is described by a level-set method based on a discontinuous Galerkin discretization. In order to stabilize the evaluation of the curvature, a patch-recovery operation is employed. There are numerous ways in which this filtering operation can be applied in the whole process of curvature computation. Therefore, an extensive numerical study is performed to identify optimal settings for the patch-recovery operations with respect to computational cost and accuracy.
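As a one-dimensional illustration of patch recovery used as a filtering operation, the sketch below fits a low-order polynomial over a sliding patch of grid points in a least-squares sense and evaluates it at each point. The DG level-set setting of the paper is multidimensional; the patch size, polynomial degree and noisy signed-distance-like data here are illustrative choices only.

```python
# Patch-recovery filtering of sampled level-set values: a 1D sketch.
import numpy as np

def patch_recover(x, values, degree=2, half_width=2):
    """Least-squares polynomial fit over a sliding patch of grid points."""
    recovered = np.empty_like(values)
    for i in range(len(x)):
        lo, hi = max(0, i - half_width), min(len(x), i + half_width + 1)
        coeffs = np.polyfit(x[lo:hi], values[lo:hi], degree)   # local LSQ fit
        recovered[i] = np.polyval(coeffs, x[i])                # evaluate at the point
    return recovered

x = np.linspace(-1.0, 1.0, 101)
phi = np.abs(x) - 0.5 + 0.01 * np.random.default_rng(1).standard_normal(x.size)
phi_smooth = patch_recover(x, phi)   # filtered samples, ready for differentiation
```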
We analyse the mask associated with the $2n$-point interpolatory Dubuc–Deslauriers subdivision scheme $S_{a^{[n]}}$. Sharp bounds are presented for the magnitude of the coefficients $a^{[n]}_{2i-1}$ of the mask. For scales $i \in [1,\sqrt{n}]$ it is shown that $|a^{[n]}_{2i-1}|$ is comparable to $i^{-1}$, and for larger power scales, exponentially decaying bounds are obtained. Using our bounds, we may precisely analyse the summability of the mask as a function of $n$ by identifying which coefficients of the mask contribute to the essential behaviour in $n$, recovering and refining the recent result of Deng–Hormann–Zhang that the operator norm of $S_{a^{[n]}}$ on $\ell ^\infty $ grows logarithmically in $n$.
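The odd mask coefficients can be checked numerically: the value inserted at a midpoint by the $2n$-point scheme is the degree $2n-1$ Lagrange interpolant of the $2n$ nearest data points evaluated at $1/2$, so the coefficients are Lagrange basis values at $1/2$. The indexing convention below (nodes $-n+1,\ldots,n$) may differ from the paper's, but the logarithmic growth of the absolute sum is visible either way.

```python
# Odd mask coefficients of the 2n-point Dubuc-Deslauriers scheme and their absolute sum.
import numpy as np

def dd_odd_mask(n):
    nodes = np.arange(-n + 1, n + 1, dtype=float)   # the 2n interpolation nodes
    coeffs = []
    for j, xj in enumerate(nodes):
        others = np.delete(nodes, j)
        coeffs.append(np.prod((0.5 - others) / (xj - others)))   # Lagrange basis at 1/2
    return np.array(coeffs)

for n in (2, 4, 8, 16, 32):
    c = dd_odd_mask(n)
    print(n, np.abs(c).sum())   # grows roughly like log(n)
```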
We propose some new weighted averaging methods for gradient recovery and present an analytical and numerical investigation of their performance. It is shown analytically that harmonic averaging yields a superconvergent gradient for any mesh in one dimension and for rectangular meshes in two dimensions. Numerical results indicate that these new weighted averaging methods recover gradients better than the simple averaging and geometry averaging methods on triangular meshes.
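The sketch below illustrates the idea in one dimension for piecewise linear elements, using a harmonic-type weighting that cancels the leading error term at interior nodes. The precise weights studied in the paper may differ, so this is an illustrative variant rather than the proposed method.

```python
# Weighted-averaging gradient recovery in 1D: an illustrative harmonic variant.
import numpy as np

def recover_gradient_harmonic(h, g):
    """Recovered gradients at interior nodes from element gradients g and lengths h."""
    hL, hR = h[:-1], h[1:]
    gL, gR = g[:-1], g[1:]
    return (hL + hR) / (hR / gL + hL / gR)   # weighted harmonic average

# Piecewise linear interpolant of u(x) = exp(x) on a non-uniform mesh.
x = np.sort(np.random.default_rng(2).uniform(0.0, 1.0, 20))
x = np.concatenate([[0.0], x, [1.0]])
u = np.exp(x)
h = np.diff(x)
g = np.diff(u) / h                            # element-wise constant gradients
err = recover_gradient_harmonic(h, g) - np.exp(x[1:-1])
print(np.max(np.abs(err)))                    # small compared with the O(h) element error
```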
This paper explores a very simple analysis for deriving spiral regions for a single cubic-function segment matching positional, tangential, and curvature end conditions. Spirals are curves of monotone curvature with constant sign and have the potential advantage that their minimum and maximum curvature occur at the end points. Spirals are therefore free from singularities, inflection points, and local curvature extrema. These properties make the study of spiral segments an interesting problem in both practical and aesthetic applications, such as highway or railway design or the path planning of non-holonomic mobile robots. Our main contribution is to simplify the procedure of existing methods while keeping it stable and providing flexible constraints for easy application of spiral segments.
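A straightforward numerical check of the spiral property (curvature of one sign and monotone) for a planar parametric cubic segment looks as follows; the coefficients used are arbitrary illustrative values, not a segment constructed by the end conditions above.

```python
# Checking the spiral property of a planar parametric cubic segment on [0, 1].
import numpy as np

def curvature(px, py, t):
    """Signed curvature of (x(t), y(t)) given polynomial coefficient sequences."""
    dx, dy = np.polyval(np.polyder(px), t), np.polyval(np.polyder(py), t)
    ddx, ddy = np.polyval(np.polyder(px, 2), t), np.polyval(np.polyder(py, 2), t)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def is_spiral(px, py, samples=1000):
    t = np.linspace(0.0, 1.0, samples)
    k = curvature(px, py, t)
    one_sign = np.all(k > 0) or np.all(k < 0)
    monotone = np.all(np.diff(k) >= 0) or np.all(np.diff(k) <= 0)
    return one_sign and monotone

px = [0.1, 0.0, 1.0, 0.0]   # x(t) = 0.1 t^3 + t  (np.polyval coefficient order)
py = [0.0, 0.5, 0.0, 0.0]   # y(t) = 0.5 t^2
print(is_spiral(px, py))    # True for this illustrative segment
```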
In this paper, a new type of gradient recovery method based on vertex-edge-face interpolation is introduced and analyzed. This method gives a new way to recover gradient approximations and has the same simplicity, efficiency, and superconvergence properties as the superconvergent patch recovery and polynomial preserving recovery methods. Here, we introduce the recovery technique and analyze its superconvergence properties. We also show a simple application in a posteriori error estimation. Some numerical examples illustrate the effectiveness of this recovery method.
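The following sketch shows, in one dimension, how a recovered gradient drives a recovery-based a posteriori error estimator of the form $\eta_K=\|G(u_h)-\nabla u_h\|_{L^2(K)}$; the recovery operator used here is a generic placeholder, not the vertex-edge-face interpolation introduced in the paper.

```python
# Recovery-based a posteriori error estimator for 1D linear elements.
import numpy as np

def recovery_estimator(x, u, nodal_grad):
    """Element-wise eta_K = ||G(u_h) - u_h'||_{L2(K)}."""
    h = np.diff(x)
    g_elem = np.diff(u) / h                   # piecewise constant FE gradient
    gL, gR = nodal_grad[:-1], nodal_grad[1:]  # recovered gradient at element end points
    # With a linear recovered gradient on each element, the squared L2 difference
    # integrates exactly to h/3 * ((gL-g)^2 + (gL-g)(gR-g) + (gR-g)^2).
    dL, dR = gL - g_elem, gR - g_elem
    return np.sqrt(h / 3.0 * (dL**2 + dL * dR + dR**2))

x = np.linspace(0.0, 1.0, 21)
u = np.sin(np.pi * x)                         # nodal values of u_h
nodal_grad = np.gradient(u, x)                # placeholder recovery operator G
eta = recovery_estimator(x, u, nodal_grad)
print(eta.sum())
```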
The backfitting algorithm is an iterative procedure for fitting additive models in which, at each step, one component is estimated while the other components are kept fixed; the algorithm proceeds component by component and iterates until convergence. Convergence of the algorithm has been studied by Buja, Hastie, and Tibshirani (1989). We give a simple, but more general, geometric proof of the convergence of the backfitting algorithm when the additive components are estimated by penalized least squares. Our treatment covers spline smoothers and structural time series models, and we give a full discussion of the degenerate case. Our proof is based on Halperin's (1962) generalization of von Neumann's alternating projection theorem.
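A minimal sketch of backfitting for a two-component additive model, with each component estimated by penalized least squares on a small polynomial basis (a stand-in for the spline smoothers and structural time series models covered by the proof), is given below; basis, penalty and data are illustrative.

```python
# Backfitting for y = alpha + f1(x1) + f2(x2) + noise, with penalized-LS smoothers.
import numpy as np

def pls_smoother(x, degree=3, lam=1e-2):
    """Return a linear smoother r -> fitted values via ridge on a polynomial basis."""
    B = np.vander(x, degree + 1, increasing=True)[:, 1:]   # drop the constant column
    H = B @ np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T)
    return lambda r: H @ r

rng = np.random.default_rng(3)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 1.0 + np.sin(np.pi * x1) + x2**2 + 0.1 * rng.standard_normal(200)

S1, S2 = pls_smoother(x1), pls_smoother(x2)
alpha, f1, f2 = y.mean(), np.zeros_like(y), np.zeros_like(y)
for _ in range(50):                           # backfitting sweeps
    f1 = S1(y - alpha - f2); f1 -= f1.mean()  # update component 1, others fixed
    f2 = S2(y - alpha - f1); f2 -= f2.mean()  # update component 2, others fixed
```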
We define an iterative interpolation process for data spread over a closed discrete subgroup of Euclidean space and describe the main algebraic properties of this process. Under very weak assumptions, the interpolation process is always convergent in the sense of Schwartz distributions. We also find a convenient necessary and sufficient condition for the continuity of each interpolation function of a given iterative interpolation process.
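As an illustration in the simplest case of the lattice $\mathbb{Z}\subset\mathbb{R}$, one refinement step of such an iterative interpolation process keeps the known values and fills in half-integer values by a fixed local rule; the classical four-point weights $-1/16, 9/16, 9/16, -1/16$ used below are just one example of such a rule.

```python
# One dyadic refinement step of an iterative interpolation process on Z.
import numpy as np

def refine(values):
    """Insert midpoint values by a fixed 4-point rule (interior points only, for brevity)."""
    v = np.asarray(values, dtype=float)
    mid = (-v[:-3] + 9 * v[1:-2] + 9 * v[2:-1] - v[3:]) / 16.0
    out = np.empty(2 * len(mid) + 1)
    out[0::2] = v[1:-1]          # existing values survive at even positions
    out[1::2] = mid              # new values at half-integer positions
    return out

data = np.sin(np.arange(-4, 5, dtype=float))   # samples on part of the lattice Z
fine = refine(refine(data))                    # two refinement levels
```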