
Hyperspectral compressive wavefront sensing

Published online by Cambridge University Press:  21 March 2023

Sunny Howard
Affiliation:
Department of Physics, Clarendon Laboratory, University of Oxford, Oxford, UK; Centre for Advanced Laser Applications, Ludwig-Maximilians-Universität München, Garching, Germany
Jannik Esslinger
Affiliation:
Centre for Advanced Laser Applications, Ludwig-Maximilians-Universität München, Garching, Germany
Robin H. W. Wang
Affiliation:
Department of Physics, Clarendon Laboratory, University of Oxford, Oxford, UK
Peter Norreys
Affiliation:
Department of Physics, Clarendon Laboratory, University of Oxford, Oxford, UK; John Adams Institute for Accelerator Science, Oxford, UK
Andreas Döpp*
Affiliation:
Department of Physics, Clarendon Laboratory, University of Oxford, Oxford, UK; Centre for Advanced Laser Applications, Ludwig-Maximilians-Universität München, Garching, Germany
*Correspondence to: Andreas Döpp, Centre for Advanced Laser Applications, Ludwig-Maximilians-Universität München, Am Coulombwall 1, 85748 Garching, Germany. Email: a.doepp@lmu.de

Abstract

Presented is a novel way to combine snapshot compressive imaging and lateral shearing interferometry in order to capture the spatio-spectral phase of an ultrashort laser pulse in a single shot. A deep unrolling algorithm is utilized for snapshot compressive imaging reconstruction due to its parameter efficiency and superior speed relative to other methods, potentially allowing for online reconstruction. The algorithm’s regularization term is represented using a neural network with 3D convolutional layers to exploit the spatio-spectral correlations that exist in laser wavefronts. Compressed sensing is not typically applied to modulated signals, but we demonstrate its success here. Furthermore, we train a neural network to predict the wavefronts from a lateral shearing interferogram in terms of Zernike polynomials, which again increases the speed of our technique without sacrificing fidelity. This method is supported with simulation-based results. While applied to the example of lateral shearing interferometry, the methods presented here are generally applicable to a wide range of signals, including Shack–Hartmann-type sensors. The results may be of interest beyond the context of laser wavefront characterization, including within quantitative phase imaging.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press in association with Chinese Laser Press

1 Introduction

Ultrashort laser pulses possess a necessarily broad spectral bandwidth[1]. The chromatic properties of the optical elements used to generate or apply such pulses can then couple the spatial and temporal profiles, creating so-called spatio-temporal couplings (STCs)[2]. These phenomena can lead to a variety of effects including, for example, the broadening of a focused laser pulse either spatially or temporally, thereby reducing its peak intensity[3]. Deliberately introduced STCs can produce exotic light pulses that behave very differently from ‘normal’ pulses; examples include the so-called flying focus[4], with its potential application in laser-driven wakefield accelerators[5], and orbital angular momentum beams[6]. Across all of these areas, the expanding range of applications of ultrafast laser pulses has intensified the need for a robust way to measure their properties.

To resolve STCs, one must gain wavefront information over the 3D hypercube ( $x,y,t$ ) or, equivalently, its spatio-spectral analogue ( $x,y,\omega$ ). Because array sensors (such as complementary metal–oxide–semiconductor (CMOS) cameras) capture information in at most two dimensions, the majority of current techniques resort to scanning over one or two dimensions, whether spatial[7,8], spectral[9,10] or temporal[11]. Such techniques are time consuming and are blind to shot-to-shot variations and drift of the laser. While some single-shot methods exist[12] – that is, methods that capture the hypercube in one shot – these currently lack resolution and spectral range and are cumbersome to implement.

Inspired by recent progress in machine-learning-based laser science[13], here we present the concept for a single-shot method that utilizes compressed sensing (CS) to resolve the wavefront in both the spectral and spatial domains. The paper is structured as follows. In Section 2 we discuss the wavefront sensor, and in Section 3 we introduce snapshot compressive imaging (SCI) as a way to extend the wavefront sensor to measuring multiple colours at once. Our implementation is based on deep unrolling, which yields high performance in both reconstruction fidelity and speed, as required for use as a real-time diagnostic. Section 4 describes all neural network architectures used, Section 5 describes how the training data were generated, and Section 6 presents the results of the proposed method.

2 Wavefront sensing

The wavefront sensor that was simulated in this example was a quadriwave lateral shearing interferometer (QWLSI), which is known for its high resolution and reconstruction fidelity. Nonetheless, our method can in general be applied to any kind of wavefront retrieval technique, including the popular Shack–Hartmann sensor or multi-plane techniques, such as Gerchberg–Saxton phase retrieval.

A lateral shearing interferometer (LSI) measures the spatially varying phase of a light beam, and was first applied to the measurement of ultrashort laser pulses in the late 1990s[14]. The LSI works by creating multiple copies of the laser pulse and shearing them laterally relative to each other before their interference pattern is captured on a sensor. Due to the shear, information about the spatial gradient of the wavefront is encoded in the interferogram. The gradients can be extracted using Fourier filtering[15] and stitched together to form the wavefront via methods such as modal reconstruction[16] or Fourier integration[17]. The most popular implementation is the aforementioned QWLSI. By generating and shearing four ( $2\times 2$ ) copies of the pulse under investigation, this setup also enables the extraction of two separate pairs of orthogonal gradients, meaning that two distinct estimates of the wavefront can be obtained, providing error estimation. This property is highly desirable for a sensor based on CS, because inevitable noise in the measurement can corrupt the wavefront with reconstruction artifacts, as would, for instance, be the case in a two-plane Gerchberg–Saxton algorithm. In contrast, the redundancy of phase information in the QWLSI provides direct validation and thus makes the wavefront retrieval much more resilient to noise. A sketch illustrating the concept of the QWLSI is shown in the red box of Figure 1.
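As an illustration of this classical extraction step, a minimal NumPy sketch of Takeda-style Fourier demodulation[15] might look as follows; the function name and the pixel-unit carrier argument are ours, not part of any standard library.

```python
import numpy as np

def extract_gradient(interferogram, carrier, window=10):
    """Fourier-filtering (Takeda-style) demodulation: isolate the sideband
    at the carrier frequency, shift it to the origin and take the phase of
    the inverse transform, which is proportional to one shear gradient."""
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    ny, nx = interferogram.shape
    cy, cx = ny // 2 + carrier[1], nx // 2 + carrier[0]
    sideband = np.zeros_like(F)
    sideband[cy - window:cy + window, cx - window:cx + window] = \
        F[cy - window:cy + window, cx - window:cx + window]
    # Re-centre the sideband so that the carrier modulation is removed.
    sideband = np.roll(sideband, (-carrier[1], -carrier[0]), axis=(0, 1))
    return np.angle(np.fft.ifft2(np.fft.ifftshift(sideband)))
```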

Figure 1 Schematic of the experimental setup that was simulated. The pulse first travels through a quadriwave lateral shearing interferometer, yielding a hypercube of interferograms, a slice of which is shown in the green box. The hypercube is then passed through a CASSI setup. This consists of a random mask and a relay system encompassing a prism, before the coded shot is captured with the camera. This diagram is not to scale.

2.1 QWLSI simulation

A physical implementation of the QWLSI usually consists of a phase grating with ‘pixels’ of alternating phase arranged in a checkerboard pattern[18], which diffracts the beam predominantly into $2\times 2$ copies. Instead of simulating this process, we consider an idealized setup in which we generate the four copies analytically. We begin by creating the pulse of interest, defining a spatio-spectral intensity, ${I}_0$ , and phase, ${\phi}_0$ :

(1) $$\begin{align}{E}_0\left(x,y,z=0,\omega \right)=\sqrt{I_0\left(x,y,z=0,\omega \right)}{e}^{i{\phi}_0\left(x,y,z=0,\omega \right)}.\end{align}$$
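As an illustration, a single spectral channel of Equation (1) could be constructed as follows; the grid, beam radius and defocus-like phase are arbitrary assumed values.

```python
import numpy as np

nx = 512
x = np.linspace(-1.5e-3, 1.5e-3, nx)           # transverse grid [m] (assumed)
X, Y = np.meshgrid(x, x)
w0 = 0.5e-3                                     # beam radius [m] (assumed)

I0 = np.exp(-2 * (X**2 + Y**2) / w0**2)         # Gaussian spatial intensity
phi0 = 2.0 * (2 * (X**2 + Y**2) / w0**2 - 1)    # defocus-like phase [rad]

E0 = np.sqrt(I0) * np.exp(1j * phi0)            # Equation (1), one channel
```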

The pulse is copied four times, and each copy’s field is propagated to the detector plane according to the following rules. Note that, for brevity, $z=0$ is implied whenever the $z$ argument is not stated, and the $\omega$ dependence of the electric field is suppressed.

2.1.1 Propagation

Consider the $j$ th copy after it has travelled a distance $\Delta z$ ; if its poloidal angle is ${\theta}_{{j}}$ and its azimuthal angle is $\zeta$ , then its displacements in the $x$ and $y$ directions are as follows:

$$\begin{align*}\Delta {x}_{{j}}\left(\omega \right)=\Delta z\sin \left({\theta}_{{j}}\right)\sin \left(\zeta \left(\omega \right)\right),\\\Delta {y}_{{j}}\left(\omega \right)=\Delta z\cos \left({\theta}_{{j}}\right)\sin \left(\zeta \left(\omega \right)\right).\end{align*}$$

In a QWLSI, the poloidal angles are ${\theta}_{{j}}\in \left\{0,\frac{\pi }{2},\pi, \frac{3\pi }{2}\right\}$ . In Figure 1, one identifies the azimuthal angle, $\zeta$ , as that between the white copy lines and the central yellow line. This is related to both the pitch of the grating $\Lambda$ and the wavelength $\lambda$ by the following:

$$\begin{align*}\zeta \left(\omega \right)=\arcsin \left(2\pi \frac{\lambda }{\Lambda}\right).\end{align*}$$

The resulting electric field of the copy is as follows:

(2) $$\begin{align}{E}_{{j}}\left(x,y,\Delta z\right)=\frac{1}{4}\sqrt{I_0\left(x-\Delta {x}_{{j}},y-\Delta {y}_{{j}}\right)}{e}^{i{\phi}_0\left(x-\Delta {x}_{{j}},y-\Delta {y}_{{j}}\right)}.\end{align}$$

2.1.2 Tilt

As diffraction occurs at an angle $\zeta$ , the grating imparts a tilt onto the copy. This translates to an additional phase shift dependent on both the spectral and spatial domains:

$$\begin{align*}\Delta {\phi}_{{j}}\left(x,y,\omega \right)=k\left(\omega \right)\left(x\cos \left({\theta}_{{j}}\right)+y\sin \left({\theta}_{{j}}\right)\right)\sin \left(\zeta \right),\end{align*}$$

where $k=\frac{2\pi }{\lambda }$ is the wavenumber of the pulse. This tilt is crucial for the reconstruction, as it provides a high-frequency modulation that separates the gradients in Fourier space.

Combining these two effects and summing over copies, we obtain the final changes to the field:

(3) $$\begin{align}\begin{array}{l} E\left(x,y,\Delta z\right)=\frac{1}{4}\sum \limits_{{{j}}=1}^4\;\sqrt{I_0\left(x-\Delta {x}_{{j}},y-\Delta {y}_{{j}}\right)}\\ {}\kern7em \cdot {e}^{i(\phi (x-\Delta {x}_{{j}},y-\Delta {y}_{{j}})+\Delta {\phi}_{{j}}(x,y))}. \end{array}\end{align}$$

At the Talbot self-imaging plane, $\Delta z=2{\Lambda}^2/\lambda$ , one obtains a hypercube of interferograms. An example of a single frequency-channel slice is shown in the green box of Figure 1.
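The complete forward model of Equations (2) and (3) can be sketched in a few lines of NumPy. This is an illustration only: the grid, beam size and flat input phase are assumed values, displacements are rounded to whole pixels (and np.roll wraps at the grid edges; a larger, padded grid avoids this), and the expression for $\zeta$ follows the text above.

```python
import numpy as np

# One spectral channel of the input field (cf. the sketch after Equation (1));
# the grid is chosen just fine enough to sample the carrier fringes.
nx = 512
x = np.linspace(-1.5e-3, 1.5e-3, nx)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
E0 = np.exp(-(X**2 + Y**2) / (0.5e-3)**2)       # flat-phase Gaussian field

lam = 800e-9                                     # wavelength [m]
Lam = 80e-6                                      # grating pitch [m]
k = 2 * np.pi / lam
zeta = np.arcsin(2 * np.pi * lam / Lam)          # diffraction angle (as in text)
dz = 2 * Lam**2 / lam                            # Talbot distance [m]

E = np.zeros_like(E0, dtype=complex)
for theta in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
    # Lateral displacements of this copy, rounded to whole pixels here.
    sx = int(round(dz * np.sin(theta) * np.sin(zeta) / dx))
    sy = int(round(dz * np.cos(theta) * np.sin(zeta) / dx))
    copy = np.roll(E0, (sy, sx), axis=(0, 1))    # E0(x - dx_j, y - dy_j)
    tilt = k * (X * np.cos(theta) + Y * np.sin(theta)) * np.sin(zeta)
    E += 0.25 * copy * np.exp(1j * tilt)         # Equation (3)

interferogram = np.abs(E)**2                     # one slice of the hypercube
```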

In other applications one would collapse the cube onto a sensor at this point; however, this would eliminate the possibility of retrieving the spectrally resolved phase. Instead, as discussed in Section 3, we use SCI to capture the cube.

2.2 Wavefront reconstruction

Once the interferogram is captured, one must extract the wavefront. As previously mentioned, current reconstruction methods usually involve multiple steps, that is, extracting the gradients, integrating and stitching them together. This can be a time-consuming process, especially in a hyperspectral setting where the reconstruction has to be done for every channel. To address this problem, we present a deep learning approach to wavefront reconstruction for the LSI. While similar work has been done in the context of Shack–Hartmann sensors[19,20], this is, to the best of our knowledge, the first application of deep learning to LSI reconstruction. The network that was used is discussed in Section 4.

3 Snapshot compressive imaging

CS describes the highly efficient acquisition of a sparse signal from fewer samples than classically required by the Nyquist theorem, using optimization methods to solve the resulting underdetermined equations. SCI is an example of CS in which 3D data are captured on a 2D sensor in a single shot.

Fundamentally, two requirements must be fulfilled for CS to work. Firstly, the signal must be sparse in some basis and, secondly, it must be sampled in a basis that is incoherent with respect to the sparse basis[21]. The first condition was hypothesized to be satisfied here, since laser wavefronts are known to be well described by a few coefficients of the Zernike basis. When one has no prior knowledge of the basis in which the signal is sparse, the second condition is commonly met by random sampling. While this is trivial for 2D data, it is challenging in the context of SCI, as the 3D hypercube must be randomly sampled onto a 2D sensor. To do so, nearly all research in this area uses hardware based on the coded aperture snapshot compressive imaging (CASSI) system[22,23].

3.1 CASSI

The hypercube is first imaged onto a coded aperture. This is a binary random mask with each pixel transmitting either 100% or 0% of the light. The cube is then also passed through some dispersive media, for example, a prism or grating, before being captured by a sensor, resulting in what is known as the coded shot. The effect of this optical system is that when the hypercube reaches the detector plane, each spectral channel is encoded with a different coded aperture, thereby approximating random sampling across the whole cube. It is then possible for a reconstruction algorithm to retrieve the cube. A diagram of a CASSI system is shown in the yellow box in Figure 1, with an example of a coded shot for an interferogram hypercube shown on the far left of Figure 3. The setup can easily be simulated by multiplying the cube by the mask, then shifting the channels of the cube according to the amount of (angular) dispersion imparted onto them, and finally summing over the spectral axis.
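A minimal sketch of this forward model, under the assumption of one pixel of dispersion per spectral channel (as used later in Section 5), is as follows.

```python
import numpy as np

def cassi_forward(cube, mask, shift_per_channel=1):
    """Simulate a CASSI coded shot: mask each spectral channel with the
    coded aperture, shear the channels by the prism dispersion and
    integrate over the spectral axis. cube has shape (ny, nx, n_omega)."""
    ny, nx, nw = cube.shape
    coded = cube * mask[..., None]                 # coded aperture
    shot = np.zeros((ny, nx + shift_per_channel * (nw - 1)))
    for c in range(nw):
        s = c * shift_per_channel                  # dispersion shift [px]
        shot[:, s:s + nx] += coded[..., c]         # sum onto the detector
    return shot

# Example: random binary mask and a random stand-in hypercube.
cube = np.random.rand(64, 64, 31)
mask = (np.random.rand(64, 64) > 0.5).astype(float)
shot = cassi_forward(cube, mask)
```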

Mathematically, the CASSI system discussed above is summarized into a matrix $\boldsymbol{\varPhi}$ , which operates on $\boldsymbol{m}$ , a vectorized representation of the hypercube, to give $\boldsymbol{n}$ , a vectorized coded shot:

(4) $$\begin{align}\boldsymbol{n}=\boldsymbol{\varPhi} \boldsymbol{m}.\end{align}$$

In order to reconstruct $\boldsymbol{m}$ , one can solve the following:

(5) $$\begin{align}\tilde{\boldsymbol{m}}={\mathrm{argmin}}_{\boldsymbol{m}}\left[\underset{\mathrm{data}\ \mathrm{term}}{\underbrace{{\left\Vert \boldsymbol{n}-\boldsymbol{\varPhi} \boldsymbol{m}\right\Vert}^2}}+\underset{\mathrm{regularizer}}{\underbrace{\eta \mathcal{R}\left(\boldsymbol{m},\psi \right)}}\right].\end{align}$$

The first term on the right-hand side is labelled the data term, and enforces that the hypercube must match the coded shot when captured. This alone would be an underdetermined system, so a regularization term, parameterized by $\psi$ , is added that restricts the solution space and selects the correct hypercube.

Most methods that have been developed to solve this non-convex equation can be sorted into two classes: iterative algorithms or end-to-end neural networks. The former offers good generalization but lacks abstraction capability and is slow, whilst deep nets are fast and have been shown to learn almost any function, but can be prone to overfitting[24]. A middle ground that offers state-of-the-art performance is deep unrolling.

3.2 Deep unrolling

An end-to-end neural network would attempt to solve Equation (5) directly; however, if the equation can be split, the data term can be solved analytically. This is desirable as it reduces the abstraction required of the network, resulting in better generalization and parameter efficiency[25]. To perform such a separation, half quadratic splitting is employed. Firstly, an auxiliary variable $\boldsymbol{p}$ is substituted into the regularization term, with Equation (6) being equivalent to Equation (5). Then, the constraint is relaxed and replaced by a quadratic loss term:

(6) $$\begin{align}\widehat{\boldsymbol{m}},\widehat{\boldsymbol{p}}={\mathrm{argmin}}_{\boldsymbol{m},\boldsymbol{p}}\left[{\left\Vert \boldsymbol{n}-\boldsymbol{\varPhi} \boldsymbol{m}\right\Vert}^2+\eta \mathcal{R}\left(\boldsymbol{p}\right)\right]\quad \mathrm{s}.\mathrm{t}.\ \boldsymbol{m}=\boldsymbol{p},\end{align}$$
(7) $$\begin{align}\approx {\mathrm{argmin}}_{\boldsymbol{m},\boldsymbol{p}}\left[{\left\Vert \boldsymbol{n}-\boldsymbol{\varPhi} \boldsymbol{m}\right\Vert}^2+\eta \mathcal{R}\left(\boldsymbol{p}\right)+\beta {\left\Vert \boldsymbol{m}-\boldsymbol{p}\right\Vert}^2\right].\end{align}$$

Here, $\beta$ is a variable that controls the strength of the constraint: high values of $\beta$ strongly enforce $\boldsymbol{m}=\boldsymbol{p}$ and thereby approximate the subject-to condition of Equation (6).

The benefit of this formulation is that Equation (7) can then be split into two minimization sub-problems in $\boldsymbol{m}$ and $\boldsymbol{p}$ , effectively separating the data term from the regularization term. When minimized iteratively, the following sub-problems approximate Equation (7):

(8) $$\begin{align}{\widehat{\boldsymbol{p}}}^{k+1}={\mathrm{argmin}}_{\boldsymbol{p}}\left[\beta {\left\Vert \boldsymbol{p}-{\boldsymbol{m}}^k\right\Vert}^2+\eta \mathcal{R}\left(\boldsymbol{p}\right)\right]\sim \mathcal{S}\left({\boldsymbol{m}}^k\right),\end{align}$$
(9) $$\begin{align}{\widehat{\boldsymbol{m}}}^{k+1}={\mathrm{argmin}}_{\boldsymbol{m}}\left[{\left\Vert \boldsymbol{n}-\boldsymbol{\varPhi} \boldsymbol{m}\right\Vert}^2+\beta {\left\Vert {\boldsymbol{p}}^{k+1}-\boldsymbol{m}\right\Vert}^2\right].\end{align}$$

Equation (9) is convex and can be solved via a conjugate gradient algorithm, which provides better numerical stability than the analytic solution. On the right-hand side of Equation (8), $\mathcal{S}$ denotes the neural network that is used to solve this sub-problem.

The deep unrolling process is shown in Figure 2(b) panel (i). Firstly, ${\boldsymbol{m}}^{(0)}$ is initialized as ${\boldsymbol{m}}^{(0)}={\boldsymbol{\varPhi}}^\mathrm{T}\boldsymbol{n}$ . Then the two equations are solved for a fixed number of iterations, with the same network architecture representing Equation (8) in each iteration. However, the network has its own set of weights for each iteration; hence the algorithm is ‘unrolled’. The architecture of the network is discussed in Section 4.
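To make the structure of this loop concrete, the following is a minimal NumPy/SciPy sketch of the unrolled iteration under stated assumptions: Phi and PhiT are placeholder callables for the sensing operator and its transpose, and any callable (here a simple moving-average smoother) stands in for the trained prior network of each iteration. It is not the authors' trained model.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def unrolled_hqs(n, Phi, PhiT, denoisers, beta=1.0):
    """Unrolled half quadratic splitting (Equations (8) and (9))."""
    m = PhiT(n)                                   # initialisation m0 = Phi^T n
    N = m.size
    # Normal equations of Equation (9): (Phi^T Phi + beta I) m = Phi^T n + beta p
    A = LinearOperator((N, N), matvec=lambda v: PhiT(Phi(v)) + beta * v)
    for S in denoisers:                           # one "network" per iteration
        p = S(m)                                  # Equation (8): prior step
        b = PhiT(n) + beta * p                    # Equation (9), solved with
        m, _ = cg(A, b)                           # a conjugate gradient step
    return m

# Toy example: random binary sensing matrix and a smoothing "prior".
rng = np.random.default_rng(0)
M = (rng.random((50, 200)) < 0.5).astype(float)
n = M @ rng.random(200)
smooth = lambda v: np.convolve(v, np.ones(5) / 5, mode='same')
m_hat = unrolled_hqs(n, lambda v: M @ v, lambda w: M.T @ w,
                     denoisers=[smooth] * 10)
```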

Figure 2 A diagram showing the full reconstruction process of the wavefront from the coded shot. (a) A flow chart of the reconstruction process. (b) (i) The deep unrolling process, where sub-problems ① and ② are solved recursively for 10 iterations. Also shown is the neural network structure used to represent $\mathcal{S}\left({\boldsymbol{m}}^k\right)$ . (ii) The training curve for the deep unrolling algorithm. Plotted are the training and validation PSNR for the 3D ResUNet prior that was used, as well as the validation score for a local–nonlocal prior, demonstrating the superior power of 3D convolutions in this setting. (c) (i) The network design for the Xception-LSI network. The Xception* block indicates that the last two layers were stripped from the conventional Xception network. (ii) The training curve for Xception-LSI for training and validation sets, with the loss shown in log mean squared error. Also plotted is the validation loss when further training the model on the deep unrolling reconstruction of the data (transfer).

4 Network architecture

This section contains the architectures of the neural networks that were used. They will be discussed in the order they are used in the reconstruction process, which is displayed in the flow chart of Figure 2(a). Firstly, the deep unrolling algorithm performs reconstruction of the interferogram hypercube from the coded shot, and secondly another network, Xception-LSI, reconstructs the spatial-spectral wavefront from the hypercube.

4.1 Deep unrolling regularizer

As previously discussed, the neural network, $\mathcal{S}$ , represents a regularization term. This means one can exploit prior knowledge about the data to choose a suitable architecture. As will be discussed in the following section, STCs can be described by a correlation between Zernike polynomial coefficients and the wavelength. Accordingly, there will likely be a strong similarity in spot positions for neighbouring spectral channels. Due to this, an architecture with 3D convolutions was developed, which can exploit these relations. Inspired by recent work in video SCI[26], a simplified ResUNet architecture was chosen[27], with the standard 2D convolutions replaced with 3D ones. We used 10 iterations for our model, as it has been found that adding more than this produces negligible performance gains[28]. A diagram of the network is displayed in Figure 2(b) panel (i).
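The exact network is shown in Figure 2; as an illustration of the key design choice, the following Keras sketch shows a residual block in which standard 2D convolutions are replaced with 3D ones, so that kernels span the spectral axis as well as the two spatial axes. The filter counts, depth and input patch shape are our assumptions, not the authors' trained model.

```python
import tensorflow as tf
from tensorflow.keras import layers

def res_block_3d(x, filters):
    """Residual block with 3D convolutions, so each kernel spans the two
    spatial axes and the spectral axis of the interferogram hypercube."""
    skip = layers.Conv3D(filters, 1, padding='same')(x)   # match channel count
    y = layers.Conv3D(filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv3D(filters, 3, padding='same')(y)
    return layers.ReLU()(layers.Add()([y, skip]))

# Toy prior acting on hypercube patches, shape (batch, x, y, omega, features).
inp = tf.keras.Input(shape=(64, 64, 31, 1))
h = res_block_3d(inp, 16)
h = res_block_3d(h, 16)
out = layers.Conv3D(1, 3, padding='same')(h)
model = tf.keras.Model(inp, out)
```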

4.2 Xception-LSI

A wavefront retrieval network was developed that takes a single spectral channel QWLSI interferogram and predicts the spatial wavefront in terms of Zernike coefficients. The network is based on the Xception network[29], but as the original 71-layer network is designed for classification, some changes were made to adapt Xception to our application. Firstly, the final two layers were removed. A max pool layer and a convolutional layer were added to shrink the output in the spatial and spectral dimensions, respectively. Dropout was applied before using three dense layers with 1000, 500 and 100 nodes using the ReLU activation function[30]. The output layer consists of 15 nodes with linear activation, corresponding to the number of Zernike coefficients to predict. We name the network Xception-LSI; it is shown in Figure 2(c) panel (i).
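A sketch of this architecture in Keras might look as follows. Here the library's Xception with include_top=False approximates the stripped ‘Xception*’ backbone, and the pool size, filter count and dropout rate are assumptions; only the dense-layer widths and the 15-node linear output follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Backbone: Keras' Xception without its classification head, standing in
# for the stripped "Xception*" block. Single-channel input, no pretrained
# weights (ImageNet weights would require three input channels).
backbone = tf.keras.applications.Xception(
    include_top=False, weights=None, input_shape=(512, 512, 1))

x = layers.MaxPooling2D(pool_size=2)(backbone.output)   # shrink spatially
x = layers.Conv2D(64, 1, activation='relu')(x)          # shrink channel axis
x = layers.Flatten()(x)
x = layers.Dropout(0.5)(x)                              # dropout rate assumed
x = layers.Dense(1000, activation='relu')(x)
x = layers.Dense(500, activation='relu')(x)
x = layers.Dense(100, activation='relu')(x)
out = layers.Dense(15, activation='linear')(x)          # Zernike coefficients

xception_lsi = tf.keras.Model(backbone.input, out)
```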

5 Training data generation

To represent the initial pulse, a total of 300 cubes were generated with dimensions $\left({n}_{{x}}\times {n}_{{y}}\times {n}_{\omega}\right)=\left(512\times 512\times 31\right)$ . The data was randomly split at a ratio of $4:1:1$ into training, validation and test sets, respectively. The wavelength range considered was 750–850 nm, representing a broadband Ti:sapphire laser, giving $\Delta \lambda \approx 3.23$ nm per channel. For each cube, the wavefront of each channel was first initialized to a randomly weighted sum of 15 Zernike basis functions. Then, to simulate an STC, one Zernike function was chosen and made to vary either linearly or quadratically with frequency; indeed, common STCs, such as pulse front tilt and pulse front curvature, can be represented in this way[1]. The mean amplitude of this coefficient was also made higher. This choice of Zernike coefficients is arbitrary, but allows for a demonstration that the method can identify all Zernike basis functions. The intensity of the slices of the cube was set to an image of a measured laser intensity profile.
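A minimal sketch of how such a spectral coefficient set could be drawn is shown below; the baseline amplitudes, the STC amplitude and the frequency-independence of the baseline are our assumptions.

```python
import numpy as np

rng = np.random.default_rng()
n_zernike, n_omega = 15, 31
omega = np.linspace(-1, 1, n_omega)          # normalised frequency axis

# Baseline wavefront: 15 random Zernike coefficients, here held constant
# over frequency for simplicity (amplitudes assumed).
coeffs = np.tile(rng.normal(0, 0.1, (n_zernike, 1)), (1, n_omega))

# Imprint an STC: one coefficient varies linearly or quadratically with
# frequency and has a larger mean amplitude.
j = rng.integers(n_zernike)
order = rng.choice([1, 2])
coeffs[j] += rng.normal(0.5, 0.1) * omega**order
```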

Each cube was then processed according to Figure 1. Firstly, it was passed through the QWLSI simulation (see Section 2.1), yielding a hypercube of interferograms – these are the training labels for the deep unrolling algorithm. This hypercube was then passed through the SCI simulation, yielding a coded shot – the training data. The wavefront was reconstructed via the process in Figure 2(a). The interferogram hypercube was reconstructed via deep unrolling, before being passed into the Xception-LSI network to predict the spectral Zernike coefficients.

The pitch of the LSI was set to $\Lambda =80$ μm, and the dispersion of the prism, measured at the camera plane, was set to 1 pixel per channel (each channel having a width of 3.23 nm).

Before being passed through the deep unrolling network, the cubes and coded shots were split spatially into $64\times 64$ oblique parallelepiped patches, allowing for a one-to-one reconstruction between the input and output[28]. The initial learning rate was set to 0.01 and decayed by 10% every five epochs. The total number of epochs was 70, and the batch size was 8.

The Xception-LSI network was fed individual channels of the ground truth interferogram hypercubes and predicted Zernike coefficients. Normal random noise ( $\mathcal{N}(\mu =0,\sigma =0.1$ )) was added to the input to make the model robust to the noise produced by the SCI reconstruction. The initial learning rate was set to ${10}^{-5}$ and decayed by 10% every five epochs. The total number of epochs was 40, and the batch size was 16. Once trained on the ground truth hypercubes, the model was trained on interferogram hypercubes that had been reconstructed by deep unrolling for a further eight epochs. The aim of this transfer learning was to allow the network to account for any systematic noise in the SCI reconstruction, resulting in a more accurate wavefront reconstruction.

6 Results and discussion

6.1 Snapshot compressive imaging

Crucial to this method’s success is the SCI reconstruction of the hypercube of interferograms. As can be seen from the green box of Figure 1, the image slices are modulated and appear as spot patterns. As a result, the images do not exhibit the same sparsity in, for example, the wavelet domain as the natural images typically used in SCI research. Because of this, it was initially uncertain whether the cube could be recovered.

Here it is demonstrated that it is indeed possible to reconstruct such modulated signals with SCI. The training curve can be seen in Figure 2(b) panel (ii). Also plotted is the validation loss for a local–nonlocal prior[31], which is state-of-the-art for natural images. When both architectures were used with 10 iterations of unrolling, the 3D convolutional model achieved a far superior peak signal-to-noise ratio (PSNR) of 36 compared to 29, while containing approximately 45% fewer parameters.
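For reference, the PSNR quoted here can be computed as follows, assuming the conventional definition in decibels with respect to the peak of the ground-truth cube.

```python
import numpy as np

def psnr(truth, estimate):
    """Peak signal-to-noise ratio in dB between a ground-truth hypercube
    and its reconstruction (conventional definition assumed)."""
    mse = np.mean((truth - estimate) ** 2)
    return 10 * np.log10(truth.max() ** 2 / mse)
```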

6.2 QWLSI

In order to reconstruct the wavefront for a full hypercube, each spectral channel is fed through the network sequentially. After training, the final mean squared error on the ground truth test set was $6.80\times {10}^{-4}$ . Figure 2(c) panel (ii) displays the training curve with the training, validation and transfer loss curves. The additional transfer learning proves to be extremely effective in reducing the error of the wavefront predictions when working with reconstructed interferogram hypercubes. The final mean squared error on the reconstructed test set was $9.18\times {10}^{-4}$ .

6.3 Hyperspectral compressive wavefront sensing

An example of the full reconstruction process, from coded shot to spatial-spectral wavefront, is displayed in Figure 3. It is apparent that the deep unrolling network was able to accurately reconstruct the interferogram hypercube, and the Xception-LSI network was able to reconstruct the wavefront.

Figure 3 Example results of the reconstruction process. (a) An example of the coded shot, along with a zoomed section. (b) Deep unrolling reconstruction of the interferogram hypercube in the same zoomed section at different wavelength slices. (c) The Xception-LSI reconstruction of the spatio-spectral wavefront displayed in terms of Zernike coefficients, where the x-axis of each plot is the Zernike function, the y-axis is the wavelength and the colour represents the value of the coefficient. (d) The spatial wavefront resulting from a Zernike basis expansion of the coefficients in (c) at the labelled spectral channels.

7 Summary and outlook

In this report we have demonstrated the possibility of combining a wavefront sensor with SCI in order to achieve a single-shot measurement of the spatial-spectral phase. Crucially, it has been shown that SCI has the ability to reconstruct modulated signals, such as those produced by a QWLSI.

A natural progression of this study is to realize the results in an experimental setting, where challenges arise from more complicated dispersion, transfer functions and noise. Further work could also include extending the deep learning LSI analysis to the hyperspectral setting: by passing the network a hypercube of interferograms, rather than individual slices, it may be possible to exploit spectral correlations in order to improve accuracy and detect STCs more easily. In addition, the model could be tested with a more varied set of Zernike polynomials. Finally, there has been recent interest in the possibility of extending phase contrast imaging to a hyperspectral setting; however, current methods take many seconds to capture a hypercube of phase[32]. The proposed method would be able to collect information with higher spectral resolution in a single shot, allowing dynamic events to be recorded hyperspectrally.

Acknowledgements

We would like to acknowledge the useful discussions with Dr. Ramy Aboushelbaya and the rest of Professor Peter Norreys’ group. This work was supported by the Independent Junior Research Group ‘Characterization and control of high-intensity laser pulses for particle acceleration’, DFG Project No. 453619281. We would also like to acknowledge UKRI-STFC grant ST/V001655/1.

References

1. Jolly, S., Gobert, O., and Quere, F., J. Opt. 22, 103501 (2020).
2. Jeandet, A., Jolly, S. W., Borot, A., Bussière, B., Dumont, P., Gautier, J., Gobert, O., Goddet, J.-P., Gonsalves, A., Irman, A., Leemans, W. P., Lopez-Martens, R., Mennerat, G., Nakamura, K., Ouillé, M., Pariente, G., Pittman, M., Püschel, T., Sanson, F., Sylla, F., Thaury, C., Zeil, K., and Quéré, F., Opt. Express 30, 3262 (2022).
3. Bourassin-Bouchet, C., Stephens, M., de Rossi, S., Delmotte, F., and Chavel, P., Opt. Express 19, 17357 (2011).
4. Froula, D. H., Palastro, J. P., Turnbull, D., Davies, A., Nguyen, L., Howard, A., Ramsey, D., Franke, P., Bahk, S.-W., Begishev, I. A., Boni, R., Bromage, J., Bucht, S., Follett, R. K., Haberberger, D., Jenkins, G. W., Katz, J., Kessler, T. J., Shaw, J. L., and Vieira, J., Phys. Plasmas 26, 032109 (2019).
5. Caizergues, C., Smartsev, S., Malka, V., and Thaury, C., Nat. Photonics 14, 475 (2020).
6. Aboushelbaya, R., “Orbital angular momentum in high-intensity laser interactions”, PhD thesis (University of Oxford, 2021).
7. Bowlan, P., Gabolde, P., Shreenath, A., McGresham, K., Trebino, R., and Akturk, S., Opt. Express 14, 11892 (2006).
8. López-Ripa, M., Sola, I. J., and Alonso, B., Photon. Res. 10, 922 (2022).
9. Cousin, S. L., Bueno, J. M., Forget, N., Austin, D. R., and Biegert, J., Opt. Lett. 37, 3291 (2012).
10. Weiße, N., Esslinger, J., Howard, S., Foerster, F. M., Haberstroh, F., Doyle, L., Norreys, P., Schreiber, J., Karsch, S., and Döpp, A., arXiv:2303.01360 (2023).
11. Pariente, G., Gallet, V., Borot, A., Gobert, O., and Quéré, F., Nat. Photonics 10, 547 (2016).
12. Gabolde, P. and Trebino, R., Opt. Express 14, 11460 (2006).
13. Döpp, A., Eberle, C., Howard, S., Irshad, F., Lin, J., and Streeter, M., arXiv:2212.00026 (2022).
14. Chanteloup, J.-C., Druon, F., Nantel, M., Maksimchuk, A., and Mourou, G., Opt. Lett. 23, 621 (1998).
15. Takeda, M., Ina, H., and Kobayashi, S., J. Opt. Soc. Am. 72, 156 (1982).
16. Dai, F., Tang, F., Wang, X., Sasaki, O., and Feng, P., Appl. Opt. 51, 5028 (2012).
17. Velghe, S., Primot, J., Guérineau, N., Cohen, M., and Wattellier, B., Opt. Lett. 30, 245 (2005).
18. Velghe, S., Primot, J., Guérineau, N., Haïdar, R., Demoustier, S., Cohen, M., and Wattellier, B., Proc. SPIE 6292, 62920E (2006).
19. Hu, L., Hu, S., Gong, W., and Si, K., Opt. Express 27, 33504 (2019).
20. Hu, L., Hu, S., Gong, W., and Si, K., Opt. Lett. 45, 3741 (2020).
21. Candes, E. J., Comptes Rendus Math. 346, 589 (2008).
22. Gehm, M. E., John, R., Brady, D. J., Willett, R. M., and Schulz, T. J., Opt. Express 15, 14013 (2007).
23. Wagadarikar, A., John, R., Willett, R., and Brady, D., Appl. Opt. 47, B44 (2008).
24. Yuan, X., Brady, D. J., and Katsaggelos, A. K., IEEE Signal Process. Mag. 38, 65 (2021).
25. Monga, V., Li, Y., and Eldar, Y. C., IEEE Signal Process. Mag. 38, 18 (2021).
26. Wu, Z., Zhang, J., and Mou, C., arXiv:2109.06548 (2021).
27. Zhang, Z., Liu, Q., and Wang, Y., IEEE Geosci. Remote Sens. Lett. 15, 749 (2018).
28. Wang, L., Sun, C., Fu, Y., Kim, M. H., and Huang, H., in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019), p. 8024.
29. Chollet, F., in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), p. 1800.
30. Agarap, A. F., arXiv:1803.08375 (2018).
31. Wang, L., Sun, C., Zhang, M., Fu, Y., and Huang, H., in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020), p. 1658.
32. Ba, C., Tsang, J.-M., and Mertz, J., Opt. Lett. 43, 2058 (2018).