We present a novel way to combine snapshot compressive imaging and lateral shearing interferometry in order to capture the spatio-spectral phase of an ultrashort laser pulse in a single shot. A deep unrolling algorithm is used for the snapshot compressive imaging reconstruction because of its parameter efficiency and superior speed relative to other methods, potentially allowing for online reconstruction. The algorithm's regularization term is represented by a neural network with 3D convolutional layers, which exploits the spatio-spectral correlations that exist in laser wavefronts. Compressed sensing is not typically applied to modulated signals, but we demonstrate its success here. Furthermore, we train a neural network to predict the wavefronts from a lateral shearing interferogram in terms of Zernike polynomials, which further increases the speed of our technique without sacrificing fidelity. The method is validated with simulation-based results. While applied here to lateral shearing interferometry, the methods presented are generally applicable to a wide range of signals, including those from Shack–Hartmann-type sensors. The results may be of interest beyond laser wavefront characterization, for example in quantitative phase imaging.
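For readers who want a concrete picture of the unrolling step, the following is a minimal sketch in Python/PyTorch, assuming a coded-aperture (CASSI-style) snapshot forward model; the layer sizes, number of stages, and all names here are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class Conv3dPrior(nn.Module):
    """Learned regularizer acting on the (lambda, y, x) cube, exploiting the
    spatio-spectral correlations in laser wavefronts mentioned above."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)  # residual denoising

def unrolled_stage(x, y, mask, prior, step=0.5):
    """One stage of a deep-unrolled SCI reconstruction: a gradient step on
    ||y - sum_l mask_l * x_l||^2 followed by the learned 3D-conv prior.
    x: (B, L, H, W) spectral-cube estimate; y: (B, 1, H, W) coded snapshot."""
    residual = (mask * x).sum(dim=1, keepdim=True) - y
    x = x - step * mask * residual               # data-fidelity gradient step
    return prior(x.unsqueeze(1)).squeeze(1)      # learned regularization step

In a trained network several such stages are stacked, with step sizes and prior weights learned end to end; this is what makes the approach fast enough for the online reconstruction mentioned in the abstract.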
Motivated by problems from compressed sensing, we determine the threshold behaviour of a random $n\times d$ $\pm 1$ matrix $M_{n,d}$ with respect to the property ‘every $s$ columns are linearly independent’. In particular, we show that for every $0<\delta <1$ and $s=(1-\delta )n$, if $d\leq n^{1+\frac{1}{2}(1-\delta )-o(1)}$ then with high probability every $s$ columns of $M_{n,d}$ are linearly independent, and if $d\geq n^{1+\frac{1}{2}(1-\delta )+o(1)}$ then with high probability there are some $s$ linearly dependent columns.
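As a quick numerical illustration of the property in question (not part of the proof, and only a sampled check, since testing all $s$-subsets is infeasible), one can draw a random $\pm 1$ matrix and rank-test random column subsets over the reals:

import numpy as np

def s_columns_independent(n, d, s, trials=200, seed=0):
    """Sample an n x d +/-1 matrix and rank-test `trials` random s-subsets
    of its columns; returns False if a dependent subset is found."""
    rng = np.random.default_rng(seed)
    M = rng.choice([-1.0, 1.0], size=(n, d))
    for _ in range(trials):
        cols = rng.choice(d, size=s, replace=False)
        if np.linalg.matrix_rank(M[:, cols]) < s:
            return False
    return True

print(s_columns_independent(n=40, d=200, s=20))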
To merge the advantages of the traditional compressed sensing (CS) methodology and the data-driven deep network scheme, this paper proposes a physical model-driven deep network, termed CS-Net, for solving target image reconstruction problems in through-the-wall radar imaging. The proposed method consists of two consecutive steps. First, a learned convolutional neural network prior is introduced to replace the regularization term in the traditional iterative CS-based method, capturing the redundancy of the radar echo signal; moreover, the physical model of the radar signal is used in a data-consistency layer to encourage consistency with the measurements. Second, the iterative CS optimization is unrolled to yield a deep learning network in which the weights, the regularization parameter, and the other parameters are learnable. A large quantity of training data enables the network to extract high-dimensional characteristics of the radar echo signal and reconstruct the spatial target image. Simulation results demonstrate that the proposed method achieves accurate target image reconstruction and is superior to the traditional CS method in terms of mean squared error and the preservation of target texture details.
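A minimal sketch of this unrolling, in Python/PyTorch, assuming a generic linear radar forward operator A; the actual CS-Net stage count, CNN prior, and training procedure are not specified here.

import torch
import torch.nn as nn

class CSNet(nn.Module):
    def __init__(self, A, n_stages=8):
        super().__init__()
        self.A = A                                              # physical model (m x n matrix)
        self.steps = nn.Parameter(torch.full((n_stages,), 0.1)) # learnable step sizes
        self.priors = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
            for _ in range(n_stages)
        )

    def forward(self, y, shape):
        x = self.A.t() @ y                                      # back-projection initialization
        for step, prior in zip(self.steps, self.priors):
            x = x - step * (self.A.t() @ (self.A @ x - y))      # data-consistency layer
            img = x.view(1, 1, *shape)
            x = (img - prior(img)).view(-1)                     # learned CNN prior (residual form)
        return x.view(shape)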
Accurate, robust and fast image reconstruction is a critical task in many scientific, industrial and medical applications. Over the last decade, image reconstruction has been revolutionized by the rise of compressive imaging. It has fundamentally changed the way modern image reconstruction is performed. This in-depth treatment of the subject commences with a practical introduction to compressive imaging, supplemented with examples and downloadable code, intended for readers without extensive background in the subject. Next, it introduces core topics in compressive imaging – including compressed sensing, wavelets and optimization – in a concise yet rigorous way, before providing a detailed treatment of the mathematics of compressive imaging. The final part is devoted to recent trends in compressive imaging: deep learning and neural networks. With an eye to the next decade of imaging research, and using both empirical and mathematical insights, it examines the potential benefits and the pitfalls of these latest approaches.
This chapter provides an introduction to uncertainty relations underlying sparse signal recovery. We start with the seminal work by Donoho and Stark (1989), which defines uncertainty relations as upper bounds on the operator norm of the band-limitation operator followed by the time-limitation operator, generalize this theory to arbitrary pairs of operators, and then develop, out of this generalization, the coherence-based uncertainty relations due to Elad and Bruckstein (2002), plus uncertainty relations in terms of concentration of the 1-norm or 2-norm. The theory is completed with set-theoretic uncertainty relations which lead to best possible recovery thresholds in terms of a general measure of parsimony, the Minkowski dimension. We also elaborate on the remarkable connection between uncertainty relations and the “large sieve,” a family of inequalities developed in analytic number theory. We show how uncertainty relations allow one to establish fundamental limits of practical signal recovery problems such as inpainting, declipping, super-resolution, and denoising of signals corrupted by impulse noise or narrowband interference.
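For orientation, the seminal Donoho–Stark relation takes the following standard form for a nonzero signal $x\in\mathbb{C}^{N}$ with discrete Fourier transform $\hat{x}$ (quoted here for context, not verbatim from the chapter):

\[ \lvert \operatorname{supp}(x)\rvert \cdot \lvert \operatorname{supp}(\hat{x})\rvert \;\geq\; N , \]

and the coherence-based refinement of Elad and Bruckstein bounds the support sizes $n_{1}, n_{2}$ of a signal in a pair of orthonormal bases with mutual coherence $\mu$ via $n_{1}n_{2}\geq 1/\mu^{2}$.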
In compressed sensing (CS) a signal $x \in \mathbb{R}^{n}$ is measured as $y = Ax + z$, where $A \in \mathbb{R}^{m\times n}$ ($m<n$) and $z \in \mathbb{R}^{m}$ denote the sensing matrix and measurement noise. The goal is to recover $x$ from the measurements $y$ when $m<n$. CS is possible because we typically want to capture highly structured signals, and recovery algorithms take advantage of a signal's structure to solve the under-determined system of linear equations. As in CS, data-compression codes take advantage of a signal's structure to encode it efficiently. Structures used by compression codes are much more elaborate than those used by CS algorithms. Using more complex structures in CS, like those employed by data-compression codes, potentially leads to more efficient recovery methods requiring fewer linear measurements or giving better reconstruction quality. We establish connections between data compression and CS, giving CS recovery methods based on compression codes, which indirectly take advantage of all structures used by compression codes. This elevates the class of structures used by CS algorithms to those used by compression codes, leading to more efficient CS recovery methods.
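A minimal sketch of the compression-code-based recovery idea, in Python, assuming a generic compress/decompress pair standing in for an actual data-compression code; this shows the projected-gradient flavour of such schemes, not the authors' exact algorithm.

import numpy as np

def compression_based_recovery(y, A, compress, decompress, n_iter=100, step=None):
    """Recover x from y = A x + z by alternating a gradient step on
    ||y - A x||^2 with a projection onto the compression code's range."""
    step = step or 1.0 / np.linalg.norm(A, 2) ** 2
    x = A.T @ y                              # crude initialization
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)     # gradient step on the data fit
        x = decompress(compress(x))          # snap back to the set of structured signals
    return x

The projection step is where the compression code's structural knowledge enters: any signal the code represents compactly is (approximately) a fixed point of compress-then-decompress.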
Fast and accurate unveiling of power-line outages is of paramount importance not only for preventing faults that may lead to blackouts but also for routine monitoring and control tasks of the smart grid. This chapter presents a sparse overcomplete model to represent the effects of (potentially multiple) line outages on synchronized bus voltage angle measurements. Based on this model, efficient compressive sensing algorithms can be adopted to identify outaged lines with complexity linear in the total number of lines. Furthermore, the effects of uncertainty in synchronized measurements are analyzed, along with the optimal placement of measurement units.
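A minimal sketch of the identification step, in Python, assuming a precomputed dictionary B whose k-th column is the bus-angle signature of an outage on line k; a greedy pursuit stands in for whichever compressive sensing solver is adopted. Each iteration costs on the order of (number of lines) x (number of buses), consistent with the linear-in-lines complexity noted above.

import numpy as np

def identify_outages(delta_theta, B, max_outages=3, tol=1e-6):
    """Greedily explain the observed angle change as a sparse combination
    of single-line outage signatures (columns of B)."""
    residual, support = delta_theta.copy(), []
    for _ in range(max_outages):
        k = int(np.argmax(np.abs(B.T @ residual)))  # best-matching line signature
        support.append(k)
        coef, *_ = np.linalg.lstsq(B[:, support], delta_theta, rcond=None)
        residual = delta_theta - B[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    return support                                   # indices of suspected outaged lines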
Highly directional image artifacts such as ion mill curtaining, mechanical scratches, or image striping from beam instability degrade the interpretability of micrographs. These unwanted, aperiodic features extend the image along a primary direction and occupy a small wedge of information in Fourier space. Deleting this wedge of data replaces stripes, scratches, or curtaining with more complex streaking and blurring artifacts, known within the tomography community as "missing wedge" artifacts. Here, we overcome this problem by recovering the missing region using total variation minimization, which leverages image sparsity-based reconstruction techniques, colloquially referred to as compressed sensing (CS), to reliably restore images corrupted by stripe-like features. Our approach removes beam instability, ion mill curtaining, mechanical scratches, or any stripe-like features, and remains robust at low signal-to-noise ratios. The success of this approach comes from exploiting CS's inability to recover directional structures that are highly localized and missing in Fourier space.
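A minimal sketch of the wedge-recovery idea, in Python, with scikit-image's TV denoiser standing in for the TV-minimization step, alternated with a Fourier data-consistency projection; the paper's exact solver is not reproduced here.

import numpy as np
from skimage.restoration import denoise_tv_chambolle

def destripe(img, wedge_mask, n_iter=50, tv_weight=0.1):
    """wedge_mask: boolean array in Fourier space, True on the deleted wedge."""
    F_known = np.fft.fft2(img)
    x = img.astype(float).copy()
    for _ in range(n_iter):
        x = denoise_tv_chambolle(x, weight=tv_weight)  # TV step fills the wedge smoothly
        F = np.fft.fft2(x)
        F[~wedge_mask] = F_known[~wedge_mask]          # keep measured coefficients outside it
        x = np.real(np.fft.ifft2(F))
    return x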
Scanning transmission electron microscopy (STEM) has become the mainstay of materials characterization at the atomic level, with applications ranging from the visualization of localized and extended defects to the mapping of order-parameter fields. In recent years, attention has focused on the potential of STEM to explore beam-induced chemical processes and especially to manipulate atomic motion, enabling atom-by-atom fabrication. These applications, as well as traditional imaging of beam-sensitive materials, necessitate increasing the dynamic range of STEM in imaging and manipulation modes, and increasing the absolute scanning speed, which can be achieved by combining sparse sensing methods with non-rectangular scanning trajectories. Here we develop a general method for real-time reconstruction of sparsely sampled images from high-speed, noninvasive, and diverse scanning pathways, including spiral and Lissajous scans. The approach is demonstrated on both synthetic data and experimental STEM data for the beam-sensitive material graphene. This work opens the door to comprehensive investigation and optimal design of dose-efficient scanning strategies and to real-time adaptive inference and control of e-beam-induced atomic fabrication.
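A minimal sketch of sparse sampling along a Lissajous trajectory followed by a simple gridding reconstruction, in Python; plain interpolation here stands in for the real-time sparse reconstruction described above, and the frequencies and sampling density are arbitrary assumptions.

import numpy as np
from scipy.interpolate import griddata

def lissajous_positions(n_samples, size, fx=7, fy=11):
    """(row, col) positions along a Lissajous scan path."""
    t = np.linspace(0, 2 * np.pi, n_samples)
    rows = (size - 1) * (np.sin(fy * t) + 1) / 2
    cols = (size - 1) * (np.sin(fx * t) + 1) / 2
    return np.column_stack([rows, cols])

def reconstruct(image, n_samples=4000):
    size = image.shape[0]
    pts = lissajous_positions(n_samples, size)
    vals = image[pts[:, 0].astype(int), pts[:, 1].astype(int)]  # sparse measurements
    grid = np.mgrid[0:size, 0:size]
    return griddata(pts, vals, (grid[0], grid[1]), method='cubic',
                    fill_value=float(vals.mean()))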
Soft X-ray spectro-tomography provides three-dimensional (3D) chemical mapping based on natural X-ray absorption properties. Since radiation damage is intrinsic to X-ray absorption, it is important to find ways to maximize signal within a given dose. For tomography, using the smallest number of tilt series images that gives a faithful reconstruction is one such method. Compressed sensing (CS) methods have relatively recently been applied to tomographic reconstruction algorithms, providing faithful 3D reconstructions with a much smaller number of projection images than when conventional reconstruction methods are used. Here, CS is applied in the context of scanning transmission X-ray microscopy tomography. Reconstructions by weighted back-projection, the simultaneous iterative reconstruction technique, and CS are compared. The effects of varying tilt angle increment and angular range for the tomographic reconstructions are examined. Optimization of the regularization parameter in the CS reconstruction is explored and discussed. The comparisons show that CS can provide improved reconstruction fidelity relative to weighted back-projection and simultaneous iterative reconstruction techniques, with increasingly pronounced advantages as the angular sampling is reduced. In particular, missing wedge artifacts are significantly reduced and there is enhanced recovery of sharp edges. Examples of using CS for low-dose scanning transmission X-ray microscopy spectroscopic tomography are presented.
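For reference, CS tomographic reconstructions of this kind are commonly posed as (a generic form; the paper's exact formulation may differ)

\[ \hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\lVert R\,x - b\rVert_{2}^{2} \;+\; \lambda\,\mathrm{TV}(x), \]

where $R$ is the tilt-series projection operator, $b$ the measured projections, and $\lambda$ the regularization parameter whose optimization is explored above.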
The sign truncated matching pursuit (STrMP) algorithm is presented in this paper. STrMP is a new greedy algorithm for the recovery of sparse signals from sign measurements, combining the principle of consistent reconstruction with orthogonal matching pursuit (OMP). The main part of STrMP is as concise as OMP, and hence STrMP is simple to implement. In contrast to previous greedy algorithms for one-bit compressed sensing, STrMP only needs to solve a convex and unconstrained subproblem at each iteration. Numerical experiments show that STrMP is fast and accurate for one-bit compressed sensing compared with other algorithms.
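For orientation, the following Python sketch shows the one-bit measurement model and a standard baseline iteration (binary iterative hard thresholding); STrMP's own OMP-style iteration, with its convex unconstrained subproblem, is described in the paper and not reproduced here.

import numpy as np

def biht(y, A, s, n_iter=100, tau=None):
    """Recover (the direction of) an s-sparse x from y = sign(A x)."""
    m, n = A.shape
    tau = tau or 1.0 / m
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - np.sign(A @ x))  # push toward sign consistency
        x[np.argsort(np.abs(x))[:-s]] = 0.0       # keep only the s largest entries
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x              # sign data fix x only up to scale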
In this paper, we consider signal recovery via $l_{1}$-analysis optimisation. The signals we consider are not sparse in an orthonormal basis or incoherent dictionary, but sparse or nearly sparse in terms of some tight frame $D$. The analysis in this paper is based on the restricted isometry property adapted to a tight frame $D$ (abbreviated as $D$-RIP), which is a natural extension of the standard restricted isometry property. Assuming that the measurement matrix $A\in \mathbb{R}^{m\times n}$ satisfies $D$-RIP with constant $\delta_{tk}$ for integer $k$ and $t>1$, we show that the condition $\delta_{tk}<\sqrt{(t-1)/t}$ guarantees stable recovery of signals through $l_{1}$-analysis. This condition is sharp in the sense explained in the paper. The results improve those of Li and Lin [‘Compressed sensing with coherent tight frames via $l_{q}$-minimization for $0<q\leq 1$’, Preprint, 2011, arXiv:1105.3299] and Baker [‘A note on sparsification by frames’, Preprint, 2013, arXiv:1308.5249].
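For reference, the $l_{1}$-analysis program in question takes the standard form

\[ \hat{x} \;=\; \arg\min_{z\in\mathbb{R}^{n}} \lVert D^{*}z\rVert_{1} \quad\text{subject to}\quad \lVert Az-y\rVert_{2}\leq \varepsilon, \]

where $D^{*}$ is the adjoint of the tight frame $D$ and $\varepsilon$ bounds the measurement noise; the $D$-RIP condition above guarantees stable recovery through this program.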
Standard techniques in matrix factorization (MF), a popular method for latent factor model-based design, result in dense matrices for both users and items. Users are likely to have some affinity toward all the latent factors, making a dense user matrix plausible; however, it is not possible for items to possess all the latent factors simultaneously, so the item matrix is more likely to be sparse. Therefore, we propose to factor the rating matrix into a dense user matrix and a sparse item matrix, leading to the blind compressed sensing (BCS) framework. To further enhance the prediction quality of our design, we incorporate user and item metadata into the BCS framework. The additional information helps reduce the underdetermined nature of the rating prediction problem caused by the extreme sparsity of the rating dataset. Our design is based on the belief that users sharing similar demographic profiles have similar preferences and thus can be described by similar latent factor vectors. We also use item metadata (genre information) to group together similar items. We modify our BCS formulation to include item metadata under the assumption that items belonging to a common genre share a similar sparsity pattern, and we design an efficient algorithm to solve the resulting formulation. Extensive experimentation conducted on the MovieLens dataset validates our claim that our modified MF framework utilizing auxiliary information improves upon existing state-of-the-art techniques.
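One common way to write such a dense-user/sparse-item factorization objective (shown generically; the full formulation in the paper additionally couples in the metadata terms) is

\[ \min_{U,V}\; \lVert \mathcal{P}_{\Omega}(R - UV)\rVert_{F}^{2} \;+\; \lambda_{1}\lVert U\rVert_{F}^{2} \;+\; \lambda_{2}\lVert V\rVert_{1}, \]

where $R$ is the rating matrix observed on the index set $\Omega$, $U$ is the dense user matrix, $V$ is the sparse item matrix, and the $\ell_{1}$ penalty induces the sparsity in $V$ that the BCS framework posits.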
In this article, we demonstrate the application of a new compressed sensing three-dimensional reconstruction algorithm for electron tomography that increases the accuracy of morphological characterization of nanostructured materials such as nanocrystalline iron oxide particles. A powerful feature of the algorithm is an anisotropic total variation norm for the L1 minimization during algebraic reconstruction that effectively reduces the elongation artifacts caused by limited angle sampling during electron tomography. The algorithm provides faithful morphologies that have not been feasible with existing techniques.
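A generic weighted (anisotropic) total variation term of the kind referred to, with the weight along the elongation/missing-wedge direction chosen differently from the in-plane weights, is

\[ \mathrm{TV}_{w}(x) \;=\; \sum_{i} w_{x}\lvert(\nabla_{x}x)_{i}\rvert + w_{y}\lvert(\nabla_{y}x)_{i}\rvert + w_{z}\lvert(\nabla_{z}x)_{i}\rvert ; \]

the exact norm and weighting used in the article may differ.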
In this paper, a novel approach for quantifying the parametric uncertainty associated with a stochastic problem output is presented. As with Monte Carlo and stochastic collocation methods, only point-wise evaluations of the stochastic output response surface are required, allowing the use of legacy deterministic codes and precluding the need for any dedicated stochastic code to solve the uncertain problem of interest. The new approach differs from these standard methods in that it is based on ideas directly linked to the recently developed compressed sensing theory. The technique allows the retrieval of the modes that contribute most significantly to the approximation of the solution using a minimal amount of information. The generation of this information, via many solver calls, is almost always the bottleneck of an uncertainty quantification procedure. If the stochastic model output has a reasonably compressible representation in the retained approximation basis, the proposed method makes the best use of the available information and retrieves the dominant modes. Uncertainty quantification of the solution of both a 2-D and an 8-D stochastic shallow-water problem is used to demonstrate the significant performance improvement of the new method, which requires up to several orders of magnitude fewer solver calls than the usual sparse-grid-based polynomial chaos (Smolyak scheme) to achieve comparable approximation accuracy.
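A minimal sketch of the CS-for-UQ idea, in Python, assuming a one-dimensional Legendre polynomial chaos basis and an off-the-shelf l1 solver from scikit-learn; the basis, solver, and dimensionality of the actual study differ.

import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

def recover_pc_modes(solver, n_samples=40, order=30, seed=0):
    """Recover dominant polynomial-chaos coefficients from few solver calls
    by l1-regularized regression on a random sample of the stochastic input."""
    rng = np.random.default_rng(seed)
    xi = rng.uniform(-1, 1, n_samples)            # random stochastic samples
    u = np.array([solver(x) for x in xi])         # the (expensive) solver calls
    Psi = legendre.legvander(xi, order)           # measurement matrix of PC basis functions
    fit = Lasso(alpha=1e-4, max_iter=50000).fit(Psi, u)
    return fit.coef_                              # sparse vector of dominant modes

# Example with a stand-in "solver" whose response is compressible in this basis:
coeffs = recover_pc_modes(lambda x: np.exp(-x) + 0.1 * x ** 3)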