The orthogonal least squares (OLS) algorithm is an efficient sparse recovery algorithm that has received much attention in recent years. On the one hand, this paper considers recovery of the supports of sparse signals by the OLS algorithm in the noisy case. We show that the OLS algorithm exactly recovers the support of a $K$-sparse signal $\boldsymbol{x}$ from $\boldsymbol{y}=\boldsymbol{\Phi}\boldsymbol{x}+\boldsymbol{e}$ in $K$ iterations, provided that the sensing matrix $\boldsymbol{\Phi}$ satisfies the restricted isometry property (RIP) with restricted isometry constant (RIC) $\delta_{K+1}<1/\sqrt{K+1}$ and that the minimum magnitude of the nonzero elements of $\boldsymbol{x}$ satisfies a certain constraint. On the other hand, this paper demonstrates that the OLS algorithm exactly recovers the support of the best $K$-term approximation of an almost sparse signal $\boldsymbol{x}$ in the general perturbations case, in which both $\boldsymbol{y}$ and $\boldsymbol{\Phi}$ are perturbed. We show that the support of the best $K$-term approximation of $\boldsymbol{x}$ can be recovered under reasonable conditions based on the RIP.
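The abstract does not spell out the selection rule; the following is a minimal NumPy sketch of a standard OLS iteration, in which the column whose inclusion most reduces the least-squares residual is added at each step (function and variable names are ours, not the paper's):

```python
import numpy as np

def ols_support(Phi, y, K):
    """Greedy K-step support recovery by orthogonal least squares (OLS).

    At each iteration, every remaining column is tried, and the one whose
    inclusion yields the smallest least-squares residual is kept.
    """
    n = Phi.shape[1]
    support = []
    for _ in range(K):
        best_j, best_res = -1, np.inf
        for j in range(n):
            if j in support:
                continue
            cols = Phi[:, support + [j]]
            coef = np.linalg.lstsq(cols, y, rcond=None)[0]
            res = np.linalg.norm(y - cols @ coef)
            if res < best_res:
                best_j, best_res = j, res
        support.append(best_j)
    # Final least-squares estimate restricted to the recovered support.
    x_hat = np.zeros(n)
    x_hat[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
    return sorted(support), x_hat
```

This brute-force scan costs one least-squares solve per candidate column and is meant only to make the selection criterion concrete; practical implementations update a QR factorization incrementally instead.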
The generalized orthogonal matching pursuit (gOMP) algorithm has received much attention in recent years as a natural extension of orthogonal matching pursuit (OMP). It is used to recover sparse signals in compressive sensing. In this paper, a new bound is obtained for the exact reconstruction of every $K$-sparse signal via the gOMP algorithm in the noiseless case. That is, if the restricted isometry constant (RIC) $\delta_{NK+1}$ of the sensing matrix $A$ satisfies
$$\delta_{NK+1}<\frac{1}{\sqrt{K/N+1}},$$
then gOMP can perfectly recover every $K$-sparse signal $x$ from $y=Ax$. Furthermore, the bound is proved to be sharp. In the noisy case, the above bound on the RIC, combined with an extra condition on the minimum magnitude of the nonzero components of $K$-sparse signals, guarantees that gOMP selects all of the support indices of the $K$-sparse signals.
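A minimal NumPy sketch of the gOMP iteration (our naming; gOMP with $N=1$ reduces to OMP):

```python
import numpy as np

def gomp(A, y, K, N, tol=1e-12):
    """Generalized OMP: at each iteration, add the N columns most
    correlated with the current residual, then re-fit y on the enlarged
    support by least squares.  At most K iterations suffice for a
    K-sparse signal when the RIC condition holds."""
    n = A.shape[1]
    support = np.array([], dtype=int)
    r = y.astype(float)
    for _ in range(K):
        corr = np.abs(A.T @ r)
        corr[support] = -np.inf                 # never re-pick a column
        picks = np.argpartition(corr, -N)[-N:]  # N largest correlations
        support = np.union1d(support, picks)
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        r = y - A[:, support] @ coef
        if np.linalg.norm(r) <= tol:            # noiseless: exact fit
            break
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat
```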
We consider a hybrid variational model for the restoration of texture images corrupted by blur and Gaussian noise; it combines total variation regularisation with a fractional-order regularisation term, and is solved by an alternating direction minimisation algorithm. Numerical experiments demonstrate the advantage of this model over the adaptive fractional-order variational model in both image quality and computational time.
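The abstract does not display the model; a plausible form of such a hybrid objective, with a blur operator $K$, observed image $f$, fractional order $\gamma$, and weights $\alpha,\beta$ (all of this notation is our assumption, not the paper's), is
$$\min_{u}\;\alpha\,\|\nabla u\|_{1}\;+\;\beta\,\|\nabla^{\gamma}u\|_{1}\;+\;\frac{1}{2}\,\|Ku-f\|_{2}^{2},$$
where the total variation term $\|\nabla u\|_{1}$ favours piecewise-smooth structure and the fractional-order term $\|\nabla^{\gamma}u\|_{1}$ is better suited to texture.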
Image segmentation is a fundamental problem in both image processing and computer vision, with numerous applications. In this paper, we propose a two-stage image segmentation scheme based on an inexact alternating direction method. Specifically, we first solve the convex variant of the Mumford-Shah model to get a smooth solution; the segmentation is then obtained by applying the K-means clustering method to that solution. Numerical comparisons show the effectiveness of the proposed scheme on many kinds of images, such as artificial images, natural images, and brain MRI images.
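A short Python sketch of the two-stage structure. The paper's first stage solves a convex Mumford-Shah variant by an inexact alternating direction method; here a Gaussian filter stands in for that stage purely for illustration, and all names are ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def two_stage_segment(image, n_segments, sigma=2.0):
    """Stage 1: produce a smooth approximation of the image.
    Stage 2: partition it by K-means clustering on pixel intensities."""
    smooth = gaussian_filter(image.astype(float), sigma=sigma)  # stage 1 (stand-in)
    pixels = smooth.reshape(-1, 1)
    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(pixels)  # stage 2
    return labels.reshape(image.shape)
```

A virtue of the two-stage design is that the expensive smoothing stage is computed once; re-segmenting with a different number of phases only repeats the cheap clustering step.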
In this paper, a new stopping rule is proposed for orthogonal multi-matching pursuit (OMMP). We show that, in the ℓ2-bounded noise case, OMMP with the new stopping rule can recover the true support of any K-sparse signal x from the noisy measurements y = Φx + e in at most K iterations, provided that the nonzero components of x and the elements of the matrix Φ satisfy certain requirements. The proposed method improves on the existing result. In particular, in the noiseless case, OMMP can exactly recover any K-sparse signal under the same RIP condition.
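The abstract does not reproduce the rule; for context only, the classical residual-based stopping rule in the ℓ2-bounded noise setting (with noise level ‖e‖2 ≤ ε) halts the greedy iteration as soon as
$$\|y-\Phi\hat{x}^{(k)}\|_{2}\le\varepsilon,$$
where $\hat{x}^{(k)}$ is the least-squares estimate after $k$ iterations; this classical criterion is shown only as a baseline for the new rule studied in the paper.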
In this paper, a Cauchy problem for the two-dimensional heat conduction equation is investigated. This is a severely ill-posed problem. Starting from the solution of this Cauchy problem, we propose to solve it by modifying the kernel, which generates a well-posed problem. Error estimates between the exact solution and the regularized solution are given. We provide a numerical experiment to illustrate the main results.
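To make the kernel-modification idea concrete, consider (as one common instance of such a Cauchy problem, assumed here for illustration) the backward heat problem $u_t=\Delta u$ in two space dimensions with final data $u(\cdot,\cdot,T)=g$. In Fourier variables $\xi=(\xi_1,\xi_2)$ the formal solution is
$$\hat{u}(\xi,t)=e^{|\xi|^{2}(T-t)}\,\hat{g}(\xi),\qquad |\xi|^{2}=\xi_1^{2}+\xi_2^{2},$$
and the unbounded multiplier $e^{|\xi|^{2}(T-t)}$ is the source of the severe ill-posedness: high-frequency data errors are amplified exponentially. A modified-kernel method replaces this multiplier with a bounded approximation, for example
$$\frac{e^{|\xi|^{2}(T-t)}}{1+\alpha e^{|\xi|^{2}T}},$$
which is bounded for every regularization parameter $\alpha>0$ and so defines a well-posed problem.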
The alternating direction method of multipliers (ADMM) is applied to a constrained linear least-squares problem, where the objective function is a sum of two least-squares terms and there are box constraints. The original problem is decomposed into two easier least-squares subproblems at each iteration, and to speed up the inner iteration we linearize the relevant subproblem whenever it has no known closed-form solution. We prove the convergence of the resulting algorithm, and apply it to solve some image deblurring problems. Its efficiency is demonstrated, in comparison with Newton-type methods.
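A compact NumPy sketch of ADMM for this problem class. The splitting below, in which the box constraint is moved onto an auxiliary variable, is one standard choice and not necessarily the paper's decomposition; all names are ours:

```python
import numpy as np

def admm_box_lsq(A, b, C, d, lo, hi, rho=1.0, iters=300):
    """ADMM for  min 0.5*||Ax-b||^2 + 0.5*||Cx-d||^2  s.t. lo <= x <= hi.

    x-step: unconstrained least squares (closed form, fixed matrix);
    z-step: projection onto the box [lo, hi];
    w-step: scaled dual update.
    """
    n = A.shape[1]
    H = A.T @ A + C.T @ C + rho * np.eye(n)  # factor once in practice
    g = A.T @ b + C.T @ d
    x = np.zeros(n); z = np.zeros(n); w = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(H, g + rho * (z - w))  # least-squares step
        z = np.clip(x + w, lo, hi)                 # box projection
        w = w + x - z                              # dual ascent
    return z
```

When the least-squares subproblem has no closed form (for instance, when forming or factoring $H$ is too expensive), linearizing it, as the paper does, corresponds roughly to replacing the exact x-step above by a single gradient-type step.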
We consider an iterated form of Lavrentiev regularization, using a null sequence (αk) of positive real numbers to obtain a stable approximate solution for ill-posed nonlinear equations of the form F(x)=y, where F:D(F)⊆X→X is a nonlinear operator and X is a Hilbert space. Recently, Bakushinsky and Smirnova [“Iterative regularization and generalized discrepancy principle for monotone operator equations”, Numer. Funct. Anal. Optim. 28 (2007) 13–25] considered an a posteriori strategy to find a stopping index kδ corresponding to inexact data yδ with ‖yδ−y‖≤δ, resulting in the convergence of the method as δ→0. However, they provided no error estimates. We consider an alternate strategy to find a stopping index which not only leads to the convergence of the method, but also provides an order-optimal error estimate under a general source condition. Moreover, the condition that we impose on (αk) is weaker than that considered by Bakushinsky and Smirnova.
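The shape of the iteration is easiest to see in the linear case. Below is an illustrative NumPy sketch for a linear monotone operator A (the paper treats nonlinear F on a Hilbert space); the generic discrepancy-principle stop is included only as a placeholder for an a posteriori rule, and all names are ours:

```python
import numpy as np

def iterated_lavrentiev(A, y_delta, delta, alphas, tau=1.5):
    """Iterated Lavrentiev regularization for A x = y with noisy data
    y_delta, ||y_delta - y|| <= delta, and a null sequence alphas.

    Each step solves  (A + alpha_k I) x_k = y_delta + alpha_k x_{k-1},
    stopping once the discrepancy ||A x_k - y_delta|| falls below tau*delta.
    """
    n = A.shape[0]
    x = np.zeros(n)
    for k, alpha in enumerate(alphas):
        x = np.linalg.solve(A + alpha * np.eye(n), y_delta + alpha * x)
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            return x, k  # stopping index k_delta
    return x, len(alphas) - 1
```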