This chapter gives a brief overview of the book. Notation for signal representation in continuous time and discrete time is introduced. Both one-dimensional and two-dimensional signals are covered, and simple examples of images are presented. Examples of noise removal and image smoothing (filtering) are demonstrated. The concept of frequency is introduced, and its importance and role in signal representation are explained using musical notes as examples. The history of signal processing, the role of theory, and the connections to real-life applications are surveyed in an introductory way. The chapter also draws attention to the impact of signal processing in digital communications (e.g., cell-phone communications), gravitational wave detection, deep-space communications, and so on.
Principal component analysis (PCA) plays an important role in the analysis of cryo-electron microscopy (cryo-EM) images for various tasks such as classification, denoising, compression, and ab initio modeling. We introduce a fast method for estimating a compressed representation of the 2-D covariance matrix of noisy cryo-EM projection images affected by radial point spread functions that enables fast PCA computation. Our method is based on a new algorithm for expanding images in the Fourier–Bessel basis (the harmonics on the disk), which provides a convenient way to handle the effect of the contrast transfer functions. For $N$ images of size $L \times L$, our method has time complexity $O(NL^3 + L^4)$ and space complexity $O(NL^2 + L^3)$. In contrast to previous work, these complexities are independent of the number of different contrast transfer functions of the images. We demonstrate our approach on synthetic and experimental data and show acceleration by factors of up to two orders of magnitude.
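As a point of reference, a minimal sketch of stack PCA in the plain pixel basis is shown below; it ignores the per-image CTFs and the Fourier–Bessel expansion that are central to the paper's method, and the function name and component count are illustrative. Note that forming the full $L^2 \times L^2$ pixel-basis covariance would already cost $O(L^4)$ memory, which is part of the motivation for a compressed representation.

```python
import numpy as np

# Minimal sketch: PCA denoising of a stack of N noisy L-by-L images in the
# plain pixel basis. The paper's method instead works in the Fourier-Bessel
# basis and corrects for per-image CTFs; none of that is shown here.
def pca_denoise_stack(images, n_components):
    N, L, _ = images.shape
    X = images.reshape(N, L * L)            # one image per row
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered stack avoids forming the L^2 x L^2 covariance.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    top = Vt[:n_components]                 # top principal components
    denoised = (Xc @ top.T) @ top + mean    # project and reconstruct
    return denoised.reshape(N, L, L)
```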
Analytical studies of nanoparticles (NPs) are frequently based on huge datasets derived from hyperspectral images acquired using scanning transmission electron microscopy. These large datasets require machine learning computational tools to reduce dimensionality and extract relevant information. Principal component analysis (PCA) is a commonly used procedure to reconstruct information and generate a denoised dataset; however, several open questions remain regarding the accuracy and precision of reconstructions. Here, we use experiments and simulations to test the effect of PCA processing on data obtained from AuAg alloy NPs a few nanometers wide with different compositions. This study aims to address the reliability of chemical quantification after PCA processing. Our results show that the PCA treatment mitigates the contribution of Poisson noise and leads to better quantification, indicating that denoised results may be reliable from the point of view of both uncertainty and accuracy for properly planned experiments. However, the initial data need to be of sufficient quality: these results can only be obtained if the signal-to-noise ratio of input data exceeds a minimal value to avoid the occurrence of random noise bias in the PCA reconstructions.
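A hedged sketch of the general workflow (not the authors' code) follows: variance-stabilize the Poisson counts with an Anscombe transform, truncate the SVD, and invert. The function name, the component count k, and the crude algebraic inverse transform are all assumptions.

```python
import numpy as np

# Hedged sketch: PCA denoising of a spectrum image with Poisson noise.
# The Anscombe transform approximately stabilizes the Poisson variance
# before SVD truncation; in practice the component count k would be chosen
# with the kind of scree and bias analysis the abstract describes.
def pca_denoise_spectrum_image(counts, k):
    ny, nx, nchan = counts.shape
    A = 2.0 * np.sqrt(counts.reshape(-1, nchan) + 3.0 / 8.0)  # Anscombe
    mean = A.mean(axis=0)
    U, s, Vt = np.linalg.svd(A - mean, full_matrices=False)
    A_k = (U[:, :k] * s[:k]) @ Vt[:k] + mean                  # rank-k model
    denoised = (A_k / 2.0) ** 2 - 3.0 / 8.0                   # crude inverse
    return denoised.reshape(ny, nx, nchan)
```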
Scanning transmission electron microscopy is a crucial tool for nanoscience, achieving sub-nanometric spatial resolution in both image and spectroscopic studies. This generates large datasets that cannot be analyzed without computational assistance. The so-called machine learning procedures can exploit redundancies and find hidden correlations. Principal component analysis (PCA) is the most popular approach to denoise data by reducing data dimensionality and extracting meaningful information; however, there are many open questions on the accuracy of reconstructions. We have used experiments and simulations to analyze the effect of PCA on quantitative chemical analysis of binary alloy (AuAg) nanoparticles using energy-dispersive X-ray spectroscopy. Our results demonstrate that it is possible to obtain very good fidelity of chemical composition distribution when the signal-to-noise ratio exceeds a certain minimal level. Accurate denoising derives from a complex interplay between redundancy (data matrix size), counting noise, and noiseless data intensity variance (associated with sample chemical composition dispersion). We have suggested several quantitative bias estimators and noise evaluation procedures to help in the analysis and design of experiments. This work demonstrates the high potential of PCA denoising, but it also highlights the limitations and pitfalls that need to be avoided to minimize artifacts and perform reliable quantification.
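One generic way to choose the number of retained components, sketched below under the assumption of i.i.d. noise with known standard deviation sigma, is to compare singular values against the Marchenko–Pastur bulk edge. This is a stand-in random-matrix heuristic, not the quantitative bias estimators proposed in the paper.

```python
import numpy as np

# Hedged sketch: count PCA components whose singular values exceed the
# largest value expected from an n x p pure-noise matrix (the
# Marchenko-Pastur bulk edge), given a noise standard deviation sigma.
def n_significant_components(X, sigma):
    n, p = X.shape                          # n pixels, p channels
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    bulk_edge = sigma * (np.sqrt(n) + np.sqrt(p))
    return int(np.sum(s > bulk_edge))
```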
Low probability of intercept (LPI) radars utilize specially designed waveforms for intra-pulse modulation, and hence cannot be easily intercepted by passive receivers. The waveforms include linear frequency modulation, nonlinear frequency modulation, polyphase, and polytime codes. The advantages of LPI radar are wide bandwidth, frequency variability, low power, and the ability to hide its emissions. The main purpose of an intercept receiver, on the other hand, is to classify the waveforms and estimate their parameters even when the signals are contaminated with noise. Precise measurement of the parameters provides the necessary information about a threat to the radar, so that the electronic attack or electronic warfare support system can take instantaneous counteraction against the enemy. In this work, noisy polyphase and polytime coded waveforms are analyzed using a cyclostationary (CS) algorithm. To improve the signal quality, the noisy signal is pre-processed using two types of denoising filters. The denoised signal is analyzed using CS techniques, and the coefficients of the spectral correlation density are computed. With this method, the modulation parameters of nine types of waveforms are extracted with better than 95% accuracy at signal-to-noise ratios down to −12 dB. The results are superior to values reported in the literature.
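For illustration, the spectral correlation density at a single cyclic frequency can be estimated with frequency-shifted Welch cross-spectra, as in the hedged sketch below. The denoising filters and the waveform classification stage of the paper are not shown, and the function name and parameters are assumptions.

```python
import numpy as np
from scipy.signal import csd

# Hedged sketch: estimate the spectral correlation density S_x^alpha(f) at
# cyclic frequency alpha by shifting the signal spectrum by +/- alpha/2 and
# taking a Welch cross-spectrum of the two shifted copies.
def spectral_correlation(x, alpha, fs=1.0, nperseg=256):
    t = np.arange(len(x)) / fs
    u = x * np.exp(-1j * np.pi * alpha * t)   # spectrum becomes X(f + alpha/2)
    v = x * np.exp(+1j * np.pi * alpha * t)   # spectrum becomes X(f - alpha/2)
    f, S = csd(u, v, fs=fs, nperseg=nperseg, return_onesided=False)
    return f, S
```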
A deep convolutional neural network has been developed to denoise atomic-resolution transmission electron microscope image datasets of nanoparticles acquired using direct electron counting detectors, for applications where the image signal is severely limited by shot noise. The network was applied to a model system of CeO2-supported Pt nanoparticles. We leverage multislice image simulations to generate a large and flexible dataset for training the network. The proposed network outperforms state-of-the-art denoising methods on both simulated and experimental test data. Factors contributing to the performance are identified, including (a) the geometry of the images used during training and (b) the size of the network's receptive field. Through a gradient-based analysis, we investigate the mechanisms learned by the network to denoise experimental images. This shows that the network exploits both extended and local information in the noisy measurements, for example, by adapting its filtering approach when it encounters atomic-level defects at the nanoparticle surface. Extensive analysis has been done to characterize the network's ability to correctly predict the exact atomic structure at the nanoparticle surface. Finally, we develop an approach based on the log-likelihood ratio test that provides a quantitative measure of the agreement between the noisy observation and the atomic-level structure in the network-denoised image.
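As a rough illustration of this architecture class (not the authors' network), a small DnCNN-style residual denoiser can be sketched in PyTorch; the depth and width below are placeholder values, and deeper networks enlarge the receptive field the abstract identifies as a performance factor.

```python
import torch
import torch.nn as nn

# Hedged sketch: a small residual denoising CNN. The paper's network,
# its multislice-simulation training data, and its receptive-field design
# differ; this only illustrates the general idea of predicting the noise
# and subtracting it from the input.
class SmallDenoiser(nn.Module):
    def __init__(self, depth=8, width=64):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)   # predict the noise, subtract it
```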
Time-resolved imaging of molecules and materials made of light elements is an emerging field of transmission electron microscopy (TEM), and the recent development of direct electron detection cameras, capable of taking as many as 1,600 fps, has potentially broadened the scope of the time-resolved TEM imaging in chemistry and nanotechnology. However, such a high frame rate reduces electron dose per frame, lowers the signal-to-noise ratio (SNR), and renders the molecular images practically invisible. Here, we examined image noise reduction to take the best advantage of fast cameras and concluded that the Chambolle total variation denoising algorithm is the method of choice, as illustrated for imaging of a molecule in the 1D hollow space of a carbon nanotube with ~1 ms time resolution. Through the systematic comparison of the performance of multiple denoising algorithms, we found that the Chambolle algorithm improves the SNR by more than an order of magnitude when applied to TEM images taken at a low electron dose as required for imaging at around 1,000 fps. Open-source code and a standalone application to apply Chambolle denoising to TEM images and video frames are available for download.
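The Chambolle total variation algorithm named here is available off the shelf, for example in scikit-image; a minimal usage sketch follows, with an illustrative regularization weight and a synthetic stand-in for a low-dose frame rather than the authors' settings and data.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Stand-in for a low-electron-dose frame: Poisson counts with mean 2.
frame = np.random.poisson(2.0, size=(256, 256)).astype(float)

# Chambolle total variation denoising; weight=0.1 is illustrative only
# (larger weights smooth more at the cost of fine detail).
denoised = denoise_tv_chambolle(frame, weight=0.1)
```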
During pulsar navigation, the high-frequency noise carried by the pulsar profile signal reduces the accuracy of pulse time-of-arrival (TOA) estimation. At present, the main approach to removing signal noise with the wavelet transform is to redesign the threshold function and decomposition level; however, the signal-to-noise ratio and other indicators of the filtered signal still need improvement, so a more appropriate wavelet basis needs to be designed. This paper proposes a wavelet basis design method based on frequency-domain analysis to improve the denoising of pulsar signals. The method first analyses the pulsar profile signal in the frequency domain and then designs a Crab pulsar wavelet basis (CPn, where n denotes the wavelet basis length) based on its frequency-domain characteristics. To improve the real-time performance of the algorithm, a wavelet lifting scheme is implemented. The method is evaluated in simulation on pulsar profile data from domestic and international sources. Results show that the signal-to-noise ratio is increased by 4 dB, the mean square error is reduced by 61%, and the peak error is reduced by 45%, demonstrating the method's superior filtering performance.
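For orientation, standard wavelet shrinkage of a 1-D profile looks like the hedged sketch below; it uses an off-the-shelf Daubechies basis and the universal threshold, whereas the paper designs a custom CPn basis from the profile's frequency-domain characteristics and implements it through a lifting scheme.

```python
import numpy as np
import pywt

# Hedged sketch: generic wavelet shrinkage of a noisy 1-D pulse profile.
def wavelet_denoise(profile, wavelet="db4", level=4):
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(profile)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(profile)]
```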
Many studies have addressed the denoising of noisy digital images. In regression filters, a convolution kernel is determined from the spatial distance or the photometric distance. In non-local mean (NLM) filters, the pixel-wise calculation of the distance is replaced with a patch-wise one. NLM filters were later made adaptive to the local statistics of an image by introducing prior knowledge in a Bayesian framework. Unlike these existing approaches, we introduce prior knowledge not on the local patch, as in NLM filters, but on the noise bias (NB), which has not been utilized so far. Although the mean of the noise is assumed to be zero before tone mapping (TM), it becomes non-zero after TM due to the non-linearity of TM. Exploiting this fact, we propose a new denoising method for tone mapped noisy images: pixels in the noisy image are classified into several subsets according to the observed pixel value, and the pixel values in each subset are compensated, based on the prior knowledge, so that the NB of the subset becomes close to zero. Experiments confirm the effectiveness of the proposed method.
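A minimal sketch of the pixel-classification step might look as follows, assuming the per-subset bias table has already been derived (for example, by pushing zero-mean noise through the known TM curve); the function and parameter names are illustrative, not the paper's.

```python
import numpy as np

# Hedged sketch: group pixels of a tone mapped noisy image by observed
# value and subtract a precomputed per-group noise bias, driving each
# subset's NB toward zero. bias_per_bin[b] is the assumed bias of pixels
# falling in [bin_edges[b], bin_edges[b + 1]).
def compensate_noise_bias(img, bin_edges, bias_per_bin):
    out = img.astype(float).copy()
    labels = np.digitize(out, bin_edges)      # classify pixels into subsets
    for b, bias in enumerate(bias_per_bin):
        out[labels == b + 1] -= bias          # compensate each subset
    return out
```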
A new algorithm for the removal of additive uncorrelated Gaussian noise from a digital image is presented. The algorithm is based on a data driven methodology for the adaptive thresholding of wavelet coefficients. This methodology is derived from higher order statistics of the residual image, and requires no a priori estimate of the level of noise contamination of an image.
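One simple data-driven variant, sketched below under assumptions not taken from the paper, sweeps a threshold and keeps the one whose residual has excess kurtosis (a higher order statistic) closest to the Gaussian value of zero; the paper's estimator is constructed more rigorously from the residual's higher order statistics.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

# Hedged sketch: choose a wavelet threshold so that the residual
# (noisy minus denoised) looks as Gaussian as possible, measured by
# excess kurtosis (zero for a Gaussian).
def kurtosis_guided_denoise(img, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    best, best_score = None, np.inf
    for thr in np.linspace(0.5, 5.0, 10) * np.std(img):
        shrunk = [coeffs[0]] + [tuple(pywt.threshold(d, thr, "soft") for d in c)
                                for c in coeffs[1:]]
        est = pywt.waverec2(shrunk, wavelet)[: img.shape[0], : img.shape[1]]
        score = abs(kurtosis((img - est).ravel()))
        if score < best_score:
            best, best_score = est, score
    return best
```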
Denoising of images corrupted by multiplicative noise is an important task in various applications, such as laser imaging, synthetic aperture radar and ultrasound imaging. We propose a combined first-order and second-order variational model for removal of multiplicative noise. Our model substantially reduces the staircase effects while preserving edges in the restored images, since it combines advantages of the first-order and second-order total variation. The issues of existence and uniqueness of a minimizer for this variational model are analysed. Moreover, a gradient descent method is employed to solve the associated Euler–Lagrange equation, and several numerical experiments are given to show the efficiency of our model. In particular, a comparison with an existing model in terms of peak signal-to-noise ratio and structural similarity index is provided.
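As a hedged sketch, such a combined model can be written with an Aubert–Aujol-type fidelity term for multiplicative Gamma noise (the paper's exact functional may differ):

```latex
\min_{u > 0} \; \int_\Omega \Big( \log u + \frac{f}{u} \Big)\, dx
\;+\; \alpha \int_\Omega |\nabla u|\, dx
\;+\; \beta \int_\Omega |\nabla^2 u|\, dx ,
```

where $f$ is the noisy observation and the weights $\alpha, \beta$ balance the edge-preserving first-order term against the staircase-suppressing second-order term.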
Fluorescence images present a low signal-to-noise ratio (SNR), are corrupted by a type of multiplicative noise with Poisson distribution, and are affected by a time intensity decay due to photoblinking and photobleaching (PBPB) effects. Together, the noise and the PBPB effects make long-term biological observation very difficult. Here, a theoretical model based on the underlying quantum-mechanical physics of the observation process associated with this type of image is presented, and the common empirical weighted sum of two decaying exponentials is derived from the model. The improvement in SNR obtained in denoising with the proposed method is particularly important in the last images of a sequence, where temporal correlation is used to recover information that is sometimes faded and therefore useless from a visual-inspection point of view. The proposed PBPB model is included in a Bayesian denoising algorithm previously proposed by the authors. Experiments with synthetic and real data are presented to validate the PBPB model and to illustrate its effectiveness in the denoising and reconstruction results.
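The derived double-exponential decay can be fitted to a measured intensity trace with standard tools; the sketch below uses illustrative parameter names and synthetic data, not the authors' values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit the weighted sum of two decaying exponentials that
# models the PBPB intensity decay. Parameter names and starting values
# are illustrative.
def pbpb_decay(t, a, tau1, b, tau2):
    return a * np.exp(-t / tau1) + b * np.exp(-t / tau2)

t = np.linspace(0, 100, 200)                  # frame times (arbitrary units)
y = pbpb_decay(t, 0.7, 10.0, 0.3, 60.0)       # synthetic ground truth
y_obs = np.random.poisson(200 * y) / 200.0    # Poisson-noisy observation
params, _ = curve_fit(pbpb_decay, t, y_obs, p0=(0.5, 5.0, 0.5, 50.0))
```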
In this paper, we propose a generalized penalization technique and a convex constraint minimization approach for the p-harmonic flow problem, following the ideas in [Kang & March, IEEE T. Image Process., 16 (2007), 2251-2261]. We use fast algorithms to solve the subproblems, such as dual projection methods, primal-dual methods and augmented Lagrangian methods. With a special penalization term, some special algorithms are presented. Numerical experiments are given to demonstrate the performance of the proposed methods. Our algorithms are effective and efficient for two reasons: the subproblem solver is fast in essence, and there is no need to solve the subproblem accurately (even two inner iterations of the subproblem are enough). It is also observed that the new algorithms produce better PSNR values.
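For context, the classical penalized relaxation of the unit-norm constraint in p-harmonic flow reads as follows; this is a sketch of the standard formulation, and the paper's generalized penalization differs in its choice of penalty term:

```latex
\min_{u} \; \frac{1}{p} \int_\Omega |\nabla u|^p \, dx
\;+\; \frac{1}{4\varepsilon} \int_\Omega \big( |u|^2 - 1 \big)^2 \, dx ,
\qquad u : \Omega \to \mathbb{R}^m, \;\; \varepsilon > 0,
```

which relaxes the pointwise constraint $|u| = 1$ as $\varepsilon \to 0$.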
Directional multiscale representations such as shearlets and curvelets have gained increasing recognition in recent years as superior methods for the sparse representation of data. Thanks to their ability to sparsely encode images and other multidimensional data, transform-domain denoising algorithms based on these representations are among the best performing methods currently available. As already observed in the literature, the performance of many sparsity-based data processing methods can be further improved by using appropriate combinations of dictionaries. In this paper, we consider the problem of 3D data denoising and introduce a denoising algorithm which uses combined sparse dictionaries. Our numerical demonstrations show that the realization of the algorithm which combines 3D shearlets and local Fourier bases provides highly competitive results as compared to other 3D sparsity-based denoising algorithms based on both single and combined dictionaries.
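A toy version of the combined-dictionary idea, with 2D wavelets and a global DCT as stand-ins for the paper's 3D shearlets and local Fourier bases, might look like this: threshold the data in each dictionary separately and combine the estimates.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

# Hedged sketch of combined-dictionary denoising with two stand-in
# dictionaries; the averaging weights and threshold are illustrative.
def combined_dictionary_denoise(img, thr):
    # Estimate 1: hard thresholding of 2D wavelet coefficients.
    coeffs = pywt.wavedec2(img, "db4", level=3)
    coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, thr, "hard") for d in c)
                            for c in coeffs[1:]]
    est1 = pywt.waverec2(coeffs, "db4")[: img.shape[0], : img.shape[1]]
    # Estimate 2: hard thresholding of global DCT coefficients.
    C = dctn(img, norm="ortho")
    est2 = idctn(np.where(np.abs(C) > thr, C, 0.0), norm="ortho")
    return 0.5 * (est1 + est2)                # simple unweighted combination
```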
Using integration by parts on Gaussian space, we construct a Stein Unbiased Risk Estimator (SURE) for the drift of Gaussian processes, based on their local and occupation times. By almost-sure minimization of the SURE risk of shrinkage estimators, we derive an estimation and denoising procedure for an input signal perturbed by a continuous-time Gaussian noise.
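The discrete-time analogue for soft-thresholding shrinkage under i.i.d. unit-variance Gaussian noise is the classical SURE formula, sketched below; the paper's continuous-time construction via local and occupation times is not reproduced here.

```python
import numpy as np

# Hedged sketch: classical SURE for soft thresholding of x = signal + noise
# with i.i.d. N(0, 1) noise. The risk of threshold t is
#   SURE(t) = n - 2 * #{i : |x_i| <= t} + sum_i min(|x_i|, t)^2 .
def sure_soft_threshold(x, thresholds):
    n = len(x)
    risks = [n - 2.0 * np.sum(np.abs(x) <= t)
             + np.sum(np.minimum(np.abs(x), t) ** 2)
             for t in thresholds]
    t_best = thresholds[int(np.argmin(risks))]        # SURE-minimizing threshold
    return np.sign(x) * np.maximum(np.abs(x) - t_best, 0.0)
```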