1 Introduction
Equilibrium reconstruction is the process of inferring properties of the magnetohydrodynamic (MHD) equilibrium from experimental measurements. The specification of an equilibrium is the first step for many subsequent analyses, such as transport calculations or stability determination. An accurate experimental equilibrium is needed to compare the theoretical models used in these subsequent analyses with experimental results. Reconstruction is an essential tool in toroidal magnetic confinement fusion research.
During equilibrium reconstruction one must infer multiple radial profiles, which depend only on the flux-surface label, along with the appropriate boundary conditions. Two or more radial profiles are needed to specify an MHD equilibrium. For example, axisymmetric stationary equilibria are solutions of the Grad–Shafranov equation (Grad & Rubin 1958; Shafranov 1966), and they depend on the pressure profile, $p(\unicode[STIX]{x1D713})$, and the toroidal magnetic field function, $F(\unicode[STIX]{x1D713})$. Non-axisymmetric equilibria are often determined from their pressure profile and their toroidal current density profile (Hirshman & Whitson 1983). During the reconstruction process it is often necessary to introduce auxiliary profiles that relate MHD equilibrium quantities to measured diagnostic signals. A temperature profile (or a density profile combined with an equation of state) is needed to relate temperatures measured by a Thomson scattering diagnostic to the MHD equilibrium. Soft x-ray measurements, which can be used to infer the shape of flux surfaces, require an emissivity profile (Ma et al. 2018).
Many equilibrium reconstruction codes, such as EFIT (Lao & Ferron 1990) and V3FIT (Hanson et al. 2009), define the radial profiles using a parametric representation. The radial profiles are assumed to have a specified functional form characterized by multiple free parameters $\boldsymbol{p}$. The best-fit equilibrium is defined by the set of parameters that minimizes the error between the observed diagnostic signals, $\boldsymbol{S}^{O}$, and the modelled diagnostic signals, $\boldsymbol{S}^{M}(\boldsymbol{p})$. Often least-squares minimization is used, and the error between the observed signals and the modelled signals is defined as
$$\chi ^{2}=\sum _{i=1}^{N}\left(\frac{S_{i}^{O}-S_{i}^{M}(\boldsymbol{p})}{\sigma _{ni}}\right)^{2}.\quad (1.1)$$
Here $\unicode[STIX]{x1D70E}_{ni}$ is the experimental error associated with the $i$th signal.
This paper discusses a non-parametric approach to reconstructing the radial profiles. Non-parametric approaches infer the profile shape directly from the measurements without specifying a particular functional form for the profile. The approach taken here uses Gaussian process regression (GPR) to infer the amplitude of the profile at a finite number of radial locations. The fundamental assumption of GPR is that the radial profile belongs to a Gaussian process (GP), which generalizes multivariate Gaussian distributions to a continuous domain (Rasmussen & Williams 2006). Bayesian analysis is then used to select the profile in the Gaussian process that gives the best fit.
Gaussian process regression has several advantages over standard parametric regression. The achievable accuracy of a parametric representation of experimental data is limited by the choice of parameterization. In contrast, certain Gaussian processes contain a complete basis, and their accuracy is limited only by the measured data (Rasmussen & Williams 2006). The quality of fit of a GP will continually improve with additional measurements. Parametric models have to be specially designed to capture complex features like a pedestal or islands. The design of these complex parametric models can introduce systematic errors into the reconstruction process. With sufficient data a Gaussian process can naturally capture these complex features.
Many of the weaknesses of a parametric representation can be addressed by increasing the complexity of the model and thereby increasing the number of free parameters. However, increasing the number of parameters increases the computational cost of minimizing the error. Increasing the number of free parameters also increases the risk of overfitting. Gaussian process regression minimizes issues associated with overfitting by penalizing overly complex models. This feature of Bayesian inference is commonly known as Occam’s razor.
Gaussian process regression has been used in several applications in fusion research. J. Svensson developed a framework for Gaussian process tomography (Svensson 2010). This framework has been used to perform soft x-ray (SXR) tomographic analysis of W7-AS and TJ-II stellarator plasmas (Li et al. 2013). GPR has also been used to infer edge density profiles on the Joint European Torus (JET) (Kwak et al. 2017), and for uncertainty analysis in transport calculations (Chilenski et al. 2015).
This paper is organized as follows. Section 2 provides an introduction to Gaussian process regression. The implementation of the Gaussian process model in V3FIT is discussed in § 3. Several example reconstructions that use both synthetic data and real experimental data are shown in § 4. Finally, a discussion of the results is presented in § 5.
2 Gaussian process regression
Gaussian processes generalize multivariate Gaussian probability distributions to an infinite domain. A GP is a collection of infinitely many random variables such that any finite collection of these random variables is normally distributed. A random function can be represented as an element of a Gaussian process, where the amplitude of the function at every point in the domain is treated as a normal random variable. The amplitudes of the function at any finite collection of points will have a joint multivariate Gaussian distribution.
Univariate normal distributions are uniquely defined by a mean value, $\unicode[STIX]{x1D707}$, and a variance $\unicode[STIX]{x1D70E}^{2}$. Multivariate normal distributions are defined by a mean vector, $\unicode[STIX]{x1D741}$, and a covariance matrix $\unicode[STIX]{x1D72E}$. Similarly, Gaussian processes are uniquely defined by a mean function, $\unicode[STIX]{x1D707}(x)$, and a covariance kernel $K(x,x^{\prime })$.
The covariance kernel determines the properties of the Gaussian process. For example, random Gaussian noise is generated using the delta function kernel,
$$K(x_{1},x_{2})=\sigma _{f}^{2}\,\delta (x_{1}-x_{2}).\quad (2.1)$$
The hyper-parameter $\unicode[STIX]{x1D70E}_{f}$ is the standard deviation about the mean function. It characterizes how far the amplitude of the function can vary from the mean function, $\unicode[STIX]{x1D707}(x)$, at each point $x$. The delta function kernel imposes no correlation in the amplitudes of the function evaluated at two distinct points $x_{1}$ and $x_{2}$. The variation of the function from its mean value at $x_{1}$ is unrelated to the variation of the function at $x_{2}$. In the case where the mean function is zero, $\unicode[STIX]{x1D707}(x)=0$, the delta function kernel implies that there is no correlation in the amplitude of the function across the domain.
The squared exponential kernel is another kernel that is frequently used in GPR,
$$K(x_{1},x_{2})=\sigma _{f}^{2}\exp \left[-\frac{(x_{1}-x_{2})^{2}}{2\sigma _{l}^{2}}\right].\quad (2.2)$$
This kernel is characterized by two hyper-parameters, $\unicode[STIX]{x1D70E}_{f}$ and $\unicode[STIX]{x1D70E}_{l}$. Here $\unicode[STIX]{x1D70E}_{f}$ is once again the standard deviation about the mean function. The hyper-parameter $\unicode[STIX]{x1D70E}_{l}$ is a correlation length. It characterizes how strongly deviations of the function from the mean function are correlated across the domain. If $|x_{1}-x_{2}|\ll \unicode[STIX]{x1D70E}_{l}$ then the variations from the mean function at these two points are strongly correlated: if the variation at $x_{1}$ is positive then the variation at $x_{2}$ will be positive by a similar amount. Conversely, if $|x_{1}-x_{2}|\gg \unicode[STIX]{x1D70E}_{l}$, then variations from the mean function at these two distant points are not correlated. In the case where the mean function is zero, $\unicode[STIX]{x1D707}(x)=0$, the correlation length describes how rapidly the amplitude of the function varies across the domain. Thus this correlation length introduces a degree of smoothness into the Gaussian process. The Gaussian process defined by the squared exponential kernel has a basis that spans the space of smooth functions (Rasmussen & Williams 2006).
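To make the role of the two hyper-parameters concrete, the short Python sketch below (not part of V3FIT; the function name and numerical values are ours) builds a squared exponential covariance matrix and draws a few sample functions from the corresponding zero-mean GP prior. Increasing $\unicode[STIX]{x1D70E}_{f}$ stretches the samples vertically, while increasing $\unicode[STIX]{x1D70E}_{l}$ makes them smoother.

```python
import numpy as np

def squared_exponential(x1, x2, sigma_f, sigma_l):
    """Squared exponential kernel evaluated between two sets of points."""
    d = np.subtract.outer(x1, x2)
    return sigma_f**2 * np.exp(-0.5 * (d / sigma_l)**2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)

# Covariance matrix of the zero-mean GP prior evaluated at the points x.
K = squared_exponential(x, x, sigma_f=1.0, sigma_l=0.3)

# Draw three sample functions from the prior.  The small jitter keeps the
# Cholesky factorization of the (nearly singular) covariance matrix stable.
L = np.linalg.cholesky(K + 1e-8 * np.eye(x.size))
prior_samples = L @ rng.standard_normal((x.size, 3))
```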
Gaussian process regression can be understood using Bayesian statistics. Here we provide a qualitative sketch of the derivation. A detailed derivation can be found in the works of Rasmussen & Williams (2006) and Svensson (2010).
The derivation begins with Bayes’ theorem,
$$p(f(x)|\boldsymbol{S}^{O},\boldsymbol{I})=\frac{p(\boldsymbol{S}^{O}|\,f(x),\boldsymbol{I})\,p(f(x)|\boldsymbol{I})}{p(\boldsymbol{S}^{O}|\boldsymbol{I})}.\quad (2.3)$$
The posterior distribution $p(f(x)|\boldsymbol{S}^{O},\boldsymbol{I})$ is the probability that the experimental profile is the function $f(x)$ given the experimental measurements $\boldsymbol{S}^{O}$ and other prior knowledge $\boldsymbol{I}$. The likelihood $p(\boldsymbol{S}^{O}|\,f(x),\boldsymbol{I})$ is the probability of measuring the signals $\boldsymbol{S}^{O}$ for a given profile $f(x)$. The prior distribution $p(f(x)|\boldsymbol{I})$ characterizes the knowledge about the experimental profile before measuring the experimental signals. The evidence $p(\boldsymbol{S}^{O}|\boldsymbol{I})$ is a normalizing factor. Often the goal is to calculate the posterior distribution, but in practice the likelihood is much easier to calculate. Bayes’ theorem provides a convenient way to calculate the posterior from the likelihood.
Gaussian process regression starts by assuming that the experimental profile is an element of a Gaussian process. This assumption is equivalent to saying that the prior distribution is a Gaussian process,
$$f(x)\sim {\mathcal{G}}{\mathcal{P}}\big(\mu (x),K(x,x^{\prime })\big).\quad (2.4)$$
Here, ${\mathcal{G}}{\mathcal{P}}$ represents a Gaussian process. The experimental measurements are assumed to have normally distributed noise with zero mean, characterized by the covariance $\unicode[STIX]{x1D72E}_{n}$. The likelihood is
$$p(\boldsymbol{S}^{O}|\,f(x),\boldsymbol{I})\propto \exp \left[-\tfrac{1}{2}\big(\boldsymbol{S}^{O}-\boldsymbol{L}f(x)\big)^{\text{T}}\boldsymbol{\Sigma }_{n}^{-1}\big(\boldsymbol{S}^{O}-\boldsymbol{L}f(x)\big)\right],\quad (2.5)$$
where $\unicode[STIX]{x1D72E}_{n}$ is the noise covariance matrix and $L^{i}$ is the mathematical operator that models the $i$th noise-free signal: $\boldsymbol{S}^{M}=\boldsymbol{L}f(x)$, with $\boldsymbol{L}=(L^{1},L^{2},\ldots ,L^{N})^{T}$, where $N$ is the number of signals. In this paper a signal is called a (non)linear signal if the operator modelling that signal is a (non)linear operator. The posterior distribution is calculated from the prior and the likelihood using Bayes’ theorem. The posterior distribution is then sampled at a finite number of points $\boldsymbol{x}_{\ast }$.
If the signals are linear, then the posterior distribution sampled at the points $\boldsymbol{x}_{\ast }$ is a multivariate normal distribution with a mean vector $\unicode[STIX]{x1D741}_{\ast }$ and covariance matrix $\unicode[STIX]{x1D72E}_{\ast }$ (Svensson 2010)
$$\boldsymbol{\mu }_{\ast }=\boldsymbol{K}_{\ast L}\left(\boldsymbol{K}_{LL}+\boldsymbol{\Sigma }_{n}\right)^{-1}\boldsymbol{S}^{O},\quad (2.6)$$
$$\boldsymbol{\Sigma }_{\ast }=\boldsymbol{K}_{\ast \ast }-\boldsymbol{K}_{\ast L}\left(\boldsymbol{K}_{LL}+\boldsymbol{\Sigma }_{n}\right)^{-1}\boldsymbol{K}_{L\ast },\quad (2.7)$$
where it is assumed that the prior Gaussian process has a zero mean, $\unicode[STIX]{x1D707}(x)=0$. The covariance matrices, $\unicode[STIX]{x1D646}_{pq}$, are calculated from the Gaussian process kernel, and they characterize the covariance between the profile amplitude at the different sample locations and the modelled signal values. The subscripts indicate whether the respective argument in the kernel is evaluated at the points $\boldsymbol{x}_{\ast }$ or operated on by the operators $\boldsymbol{L}$. The first (second) subscript corresponds to the first (second) argument in the kernel. The ($i$th, $j$th) elements of the $\unicode[STIX]{x1D646}_{pq}$ matrices are
$$(\boldsymbol{K}_{\ast \ast })_{ij}=K(x_{\ast i},x_{\ast j}),\quad (\boldsymbol{K}_{LL})_{ij}=L^{i}L^{j}K(x,x^{\prime }),\quad (\boldsymbol{K}_{L\ast })_{ij}=L^{i}K(x,x_{\ast j}),\quad (\boldsymbol{K}_{\ast L})_{ij}=L^{j}K(x_{\ast i},x^{\prime })=(\boldsymbol{K}_{L\ast })_{ji},\quad (2.8)$$
where $L^{i}$ acts on the first kernel argument $x$ and $L^{j}$ acts on the second kernel argument $x^{\prime }$.
The last equality uses the fact that $\unicode[STIX]{x1D646}_{\ast L}=\unicode[STIX]{x1D646}_{L\ast }^{T}$.
For example, consider the case where the value of a function $f(x)$ is measured at two points $l_{1}$ and $l_{2}$. The operator representing these measurements is a delta function,
$$L^{i}f(x)=\int \delta (x-l_{i})\,f(x)\,\text{d}x=f(l_{i}),\quad i=1,2.\quad (2.9)$$
Suppose further that the goal is to sample the function at three points: $s_{1},s_{2},s_{3}$. In this case the three $\unicode[STIX]{x1D646}$ matrices are
$$\boldsymbol{K}_{\ast \ast }=\begin{pmatrix}K(s_{1},s_{1}) & K(s_{1},s_{2}) & K(s_{1},s_{3})\\ K(s_{2},s_{1}) & K(s_{2},s_{2}) & K(s_{2},s_{3})\\ K(s_{3},s_{1}) & K(s_{3},s_{2}) & K(s_{3},s_{3})\end{pmatrix},\quad \boldsymbol{K}_{LL}=\begin{pmatrix}K(l_{1},l_{1}) & K(l_{1},l_{2})\\ K(l_{2},l_{1}) & K(l_{2},l_{2})\end{pmatrix},\quad \boldsymbol{K}_{L\ast }=\begin{pmatrix}K(l_{1},s_{1}) & K(l_{1},s_{2}) & K(l_{1},s_{3})\\ K(l_{2},s_{1}) & K(l_{2},s_{2}) & K(l_{2},s_{3})\end{pmatrix}.\quad (2.10)$$
The matrix $\unicode[STIX]{x1D646}_{\ast \ast }$ characterizes the correlation between the amplitudes of the function evaluated at each of the points in $\boldsymbol{x}_{\ast }$. The matrix $\unicode[STIX]{x1D646}_{LL}$ characterizes the correlation between each of the noise-free modelled signals. The matrix $\unicode[STIX]{x1D646}_{L\ast }$ characterizes the correlation between the amplitude of the function evaluated at each point in $\boldsymbol{x}_{\ast }$ and each noise-free model signal.
The posterior distribution is a multivariate Gaussian distribution, and it is completely specified by solving (2.6) and (2.7) for $\unicode[STIX]{x1D741}_{\ast }$ and $\unicode[STIX]{x1D72E}_{\ast }$. These equations are linear, and they involve the inverse of the matrix $(\unicode[STIX]{x1D646}_{LL}+\unicode[STIX]{x1D72E}_{n})$. This matrix is a symmetric positive definite matrix, and it can be factored using a Cholesky decomposition.
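As a concrete numerical illustration of (2.6) and (2.7), the following Python sketch works through the point-measurement example above: the two delta-function operators reduce the $\unicode[STIX]{x1D646}$ matrices to plain kernel evaluations, and the posterior mean and covariance at three sample points are obtained with a Cholesky factorization of $\unicode[STIX]{x1D646}_{LL}+\unicode[STIX]{x1D72E}_{n}$. The measurement locations, observed values and noise level are arbitrary illustrative numbers, and the code is a sketch rather than the V3FIT implementation.

```python
import numpy as np

def kernel(x1, x2, sigma_f=1.0, sigma_l=0.3):
    """Squared exponential kernel between two sets of points."""
    d = np.subtract.outer(x1, x2)
    return sigma_f**2 * np.exp(-0.5 * (d / sigma_l)**2)

# Point measurements f(l_1), f(l_2): the delta-function operators reduce the
# K matrices to plain kernel evaluations at the measurement locations.
l = np.array([0.2, 0.7])                 # measurement locations l_1, l_2
S_obs = np.array([0.9, 0.4])             # observed (noisy) values, arbitrary
Sigma_n = np.diag([0.05**2, 0.05**2])    # measurement noise covariance

x_star = np.array([0.1, 0.5, 0.9])       # sample locations s_1, s_2, s_3

K_LL = kernel(l, l)                      # 2 x 2
K_star_L = kernel(x_star, l)             # 3 x 2
K_star_star = kernel(x_star, x_star)     # 3 x 3

# Posterior mean (2.6) and covariance (2.7) via a Cholesky factorization of
# the symmetric positive definite matrix K_LL + Sigma_n.
C = np.linalg.cholesky(K_LL + Sigma_n)
alpha = np.linalg.solve(C.T, np.linalg.solve(C, S_obs))
mu_star = K_star_L @ alpha
V = np.linalg.solve(C, K_star_L.T)
Sigma_star = K_star_star - V.T @ V
```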
The kernels used to define a GP often have one or more hyper-parameters. An optional step in GPR is then to find the optimal set of hyper-parameters. This is done by using Bayes’ theorem to define a hyper-posterior distribution for the hyper-parameters $\unicode[STIX]{x1D748}_{hp}$,
$$p(\boldsymbol{\sigma }_{hp}|\boldsymbol{S}^{O})=\frac{p(\boldsymbol{S}^{O}|\boldsymbol{\sigma }_{hp})\,p(\boldsymbol{\sigma }_{hp})}{p(\boldsymbol{S}^{O})},\quad (2.11)$$
and then finding the set of hyper-parameters that maximizes this hyper-posterior. A common choice is to use a uniform hyper-prior $p(\unicode[STIX]{x1D748}_{hp})$. In this case the hyper-posterior distribution is
$$p(\boldsymbol{\sigma }_{hp}|\boldsymbol{S}^{O})\propto p(\boldsymbol{S}^{O}|\boldsymbol{\sigma }_{hp}).\quad (2.12)$$
Here, the hyper-posterior is proportional to the Bayesian evidence in (2.3).
The evidence is calculated by marginalizing the joint distribution of the data and the profile over all possible functions,
$$p(\boldsymbol{S}^{O}|\boldsymbol{\sigma }_{hp})=\int p(\boldsymbol{S}^{O}|\,f(x),\boldsymbol{\sigma }_{hp})\,p(f(x)|\boldsymbol{\sigma }_{hp})\,{\mathcal{D}}f.\quad (2.13)$$
The result of the integration yields the expression for the log of the evidence,
$$\log p(\boldsymbol{S}^{O}|\boldsymbol{\sigma }_{hp})=-\frac{1}{2}\left[N\log (2\pi )+\log \left|\boldsymbol{K}_{LL}+\boldsymbol{\Sigma }_{n}\right|+\boldsymbol{S}^{O\,\text{T}}\left(\boldsymbol{K}_{LL}+\boldsymbol{\Sigma }_{n}\right)^{-1}\boldsymbol{S}^{O}\right].\quad (2.14)$$
Maximizing the evidence is equivalent to minimizing the bracketed quantity in the equation for the log of the evidence. The bracketed quantity is composed of three terms. The first term is a positive normalizing constant that accounts for the number of signals, $N$, in the Gaussian process. The second term characterizes the complexity of the model, and it can be either positive or negative. The third term characterizes the quality of the fit, and it is positive definite. The competition between the second and third terms prevents overfitting by penalizing overly complex models. Increasing the complexity of the model will generally improve the quality of fit, i.e. the third term will be smaller. However, increasing the complexity also increases the second term. Thus an increase in complexity is only advantageous if the resulting improvement in the fit outweighs the cost of the complexity.
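The sketch below evaluates the bracketed quantity in the log evidence and returns its three contributions separately (normalizing constant, complexity penalty and data-fit term), which is a convenient way to see the Occam's razor trade-off numerically. It is a minimal illustration, not the V3FIT routine, and the toy matrices at the end are arbitrary.

```python
import numpy as np

def neg_log_evidence_terms(S_obs, K_LL, Sigma_n):
    """Return the three terms of the bracketed quantity in the log evidence:
    normalizing constant, complexity penalty (log determinant) and data fit."""
    A = K_LL + Sigma_n
    C = np.linalg.cholesky(A)                      # A is symmetric positive definite
    norm_const = S_obs.size * np.log(2.0 * np.pi)
    complexity = 2.0 * np.sum(np.log(np.diag(C)))  # log|K_LL + Sigma_n|
    alpha = np.linalg.solve(C.T, np.linalg.solve(C, S_obs))
    data_fit = S_obs @ alpha                       # S^T (K_LL + Sigma_n)^{-1} S
    return norm_const, complexity, data_fit

# Toy example with two correlated signals.
K_LL = np.array([[1.0, 0.3],
                 [0.3, 1.0]])
Sigma_n = 0.05**2 * np.eye(2)
terms = neg_log_evidence_terms(np.array([0.9, 0.4]), K_LL, Sigma_n)
neg_log_evidence = 0.5 * sum(terms)
```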
3 Implementation of GPR model in V3FIT
V3FIT is a three-dimensional equilibrium reconstruction code (Hanson et al. 2009). The code is designed to be fast, with the goal of being able to run reconstructions in between experimental discharges. The code is also designed to be modular and extensible. This enables rapid development of new functionality needed to meet the requirements of different experiments. VMEC (Hirshman & Whitson 1983) is the primary equilibrium solver used by V3FIT; however, the code modularity allows V3FIT to use other equilibrium solvers with minor code modifications. Recent modifications to V3FIT allow it to use the SIESTA three-dimensional (3-D) equilibrium code (Cianciosa et al. 2018).
V3FIT finds the parameters $\boldsymbol{p}$ that minimize the cost function
$$g^{2}(\boldsymbol{p})=\sum _{i=1}^{N}\omega _{i}\left(\frac{S_{i}^{O}-S_{i}^{M}(\boldsymbol{p})}{\sigma _{ni}}\right)^{2}.\quad (3.1)$$
The cost function $g^{2}$, which is closely related to the $\unicode[STIX]{x1D712}^{2}$ error, measures the difference between the experimentally determined signals, $\boldsymbol{S}^{O}$, and the modelled signals, $\boldsymbol{S}^{M}$, for an equilibrium defined by the parameters $\boldsymbol{p}$. Note that the form for $g^{2}$ is consistent with an assumption that the signal noise is uncorrelated (the signal covariance matrix is diagonal). The $\unicode[STIX]{x1D70E}_{ni}$ are the square roots of the signal variances. Weighting factors, $\unicode[STIX]{x1D714}_{i}$, allow one to emphasize or deemphasize selected signals. The cost function $g^{2}$ is a positive definite quantity, and V3FIT uses a modified quasi-Newton algorithm to find the equilibrium parameters that minimize $g^{2}$.
The implementation of the GPR model in V3FIT is designed to work in conjunction with the standard parametric representation. The guiding philosophy is to use GPR to infer profiles where it is easy and efficient to do so. A parametric representation is then used to represent the remaining profiles. For speed and efficiency the implementation of the GPR model is currently limited to use with linear diagnostics. These diagnostics can be modelled by a linear operator acting on the radial profile. Some examples of profiles that can be modelled with our GPR model include the emissivity inferred from soft x-ray data, the temperature inferred from Thomson or ECE measurements, and a density profile inferred from interferometry. The use of soft x-ray data to infer the temperature profile is an example of a nonlinear diagnostic where our GPR model is not applicable. Here the soft x-ray emissivity is nonlinearly related to the temperature.
Let us look in more detail at the observed soft x-ray signals and their relation to the soft x-ray emissivity. The VMEC code labels flux surfaces with a minor-radius variable $s$ with $0\leqslant s\leqslant 1$. The soft x-ray emissivity $\unicode[STIX]{x1D716}$ depends on the position within the plasma, $\boldsymbol{r}$. The $\unicode[STIX]{x1D6FC}$th soft x-ray signal is viewed along the chord $\boldsymbol{r}_{\unicode[STIX]{x1D6FC}}(t)=\boldsymbol{r}_{\unicode[STIX]{x1D6FC}i}+t(\boldsymbol{r}_{\unicode[STIX]{x1D6FC}f}-\boldsymbol{r}_{\unicode[STIX]{x1D6FC}i})$ between the initial position $\boldsymbol{r}_{\unicode[STIX]{x1D6FC}i}$ and the final position $\boldsymbol{r}_{\unicode[STIX]{x1D6FC}f}$. The observed signal is modelled as a line integral of the emissivity along this chord,
$$S_{\alpha }^{M}=\int _{0}^{1}\epsilon (\boldsymbol{r}_{\alpha }(t))\,\text{d}t.\quad (3.2)$$
The soft x-ray emissivity is assumed to be constant on a flux surface, described by the profile function $\unicode[STIX]{x1D716}(s)$. Any position in the plasma can be mapped to a particular flux surface through the function $s(\boldsymbol{r})$. Thus the observed signal can be written as a linear function of the emissivity profile,
$$S_{\alpha }^{M}=\int _{0}^{1}\epsilon \big(s(\boldsymbol{r}_{\alpha }(t))\big)\,\text{d}t=L^{\alpha }\epsilon (s).\quad (3.3)$$
The covariance matrix element characterizing the correlation between two x-ray signals is then a double integral of the kernel along the two chords,
$$(\boldsymbol{K}_{LL})_{\alpha \beta }=\int _{0}^{1}\int _{0}^{1}K\big(s(\boldsymbol{r}_{\alpha }(t)),s(\boldsymbol{r}_{\beta }(t^{\prime }))\big)\,\text{d}t\,\text{d}t^{\prime }.\quad (3.4)$$
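A short sketch of how such chord-averaged signals and their covariances can be evaluated numerically is given below. The flux-label map `s_of_r`, the chord end points and the emissivity profile are stand-ins chosen for illustration (a real reconstruction obtains $s(\boldsymbol{r})$ from the equilibrium solution); the midpoint quadrature simply discretizes the single and double integrals above.

```python
import numpy as np

def kernel(s1, s2, sigma_f=1.0, sigma_l=0.4):
    """Squared exponential kernel on the flux-surface label."""
    d = np.subtract.outer(s1, s2)
    return sigma_f**2 * np.exp(-0.5 * (d / sigma_l)**2)

def s_of_r(r):
    """Stand-in flux-surface label s(r); V3FIT would obtain this map from the
    equilibrium (e.g. VMEC).  Here a circular toy plasma is assumed."""
    axis = np.array([0.75, 0.0])
    return np.clip(np.linalg.norm(r - axis, axis=-1) / 0.2, 0.0, 1.0)

def epsilon(s):
    """Toy emissivity profile, constant on a flux surface."""
    return (1.0 - s**2)**2

n_quad = 64
t = (np.arange(n_quad) + 0.5) / n_quad   # midpoint quadrature nodes on [0, 1]

# Flux labels sampled along two illustrative chords r(t) = r_i + t (r_f - r_i).
chords = [(np.array([0.55, -0.30]), np.array([0.95, 0.30])),
          (np.array([0.55,  0.10]), np.array([0.95, 0.15]))]
s_chord = [s_of_r(r_i + t[:, None] * (r_f - r_i)) for r_i, r_f in chords]

# Noise-free modelled signals: midpoint-rule approximation of the chord integral.
S_model = np.array([np.mean(epsilon(s)) for s in s_chord])

# K_LL elements: double quadrature of the kernel along each pair of chords.
K_LL = np.array([[np.mean(kernel(sa, sb)) for sb in s_chord] for sa in s_chord])
```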
The restriction to linear diagnostics allows for the direct calculation of the posterior mean and covariance using (2.6) and (2.7). Here the posterior distribution is also a Gaussian process, and thus it is completely defined by the mean and covariance. The posterior mean profile is the best fit to the experimental data, and the covariance matrix characterizes the uncertainty in the fit.
A new term is added to V3FIT’s cost function that allows V3FIT to optimize the hyper-parameters,
$$g^{2}=\sum _{i=1}^{N}\omega _{i}\left(\frac{S_{i}^{O}-S_{i}^{M}}{\sigma _{ni}}\right)^{2}+\sum _{j=1}^{T}\Theta _{j}^{2}.\quad (3.5)$$
Here $N$ is the number of experimental signals, $T$ is the number of radial profiles modelled by independent Gaussian processes and $\unicode[STIX]{x1D6E9}_{j}^{2}$ is related to the negative log evidence (2.14) for the $j$th Gaussian process.
The nonlinear optimization routine in V3FIT assumes that the cost function is a quadratic form. The constant $c_{j}$ is added to ensure that $\unicode[STIX]{x1D6E9}_{j}^{2}$ is positive for each Gaussian process, and thus $\unicode[STIX]{x1D6E9}_{j}$ is real.
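The sketch below shows one simple way the two pieces of the cost function can be combined. The exact definition of $\unicode[STIX]{x1D6E9}_{j}^{2}$ and of the constants $c_{j}$ in V3FIT is not reproduced here; the code merely assumes that each $\unicode[STIX]{x1D6E9}_{j}^{2}$ is a shifted negative log evidence, with the shift chosen large enough to keep the term positive.

```python
import numpy as np

def total_cost(S_obs, S_model, sigma_n, weights, neg_log_evidences, offsets):
    """Illustrative combined cost: weighted signal mismatch plus one
    Theta_j^2 = (negative log evidence + c_j) term per Gaussian process."""
    signal_term = np.sum(weights * ((S_obs - S_model) / sigma_n)**2)
    theta_sq = np.asarray(neg_log_evidences) + np.asarray(offsets)
    assert np.all(theta_sq > 0.0), "offsets c_j must keep Theta_j^2 positive"
    return signal_term + np.sum(theta_sq)
```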
The added term to the cost function treats each Gaussian process and its hyper-parameters in a way that is similar to the experimentally measured signals. This treatment allows for the simultaneous optimization of both the model parameters and the Gaussian process hyper-parameters. The dependence of the cost function on the hyper-parameters enters through the $\unicode[STIX]{x1D646}_{LL}$ matrix.
Figure 1 shows an outline of the V3FIT algorithm. First, an initial guess is made for each of the model parameters and Gaussian process hyper-parameters. The model parameters are then used to solve for an initial magnetic equilibrium. Then GPR is used to calculate the GP modelled radial profiles. After that, each of the modelled signals is calculated for each diagnostic, and then $g^{2}$ is calculated. V3FIT then checks to see if an optimal set of parameters and hyper-parameters has been found. If a converged solution has been found then V3FIT exits. Otherwise V3FIT calculates a new set of parameters and repeats the process. In order to update the parameters V3FIT numerically calculates a parameter Jacobian matrix. During this calculation V3FIT has to vary each of the parameters and hyper-parameters. The variations of the parameters often require that the equilibrium be re-solved and/or the GPR profiles be recalculated.
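A toy Python sketch of this loop is shown below. The 'model' here is a trivial two-parameter function standing in for the full chain of equilibrium solve, GPR profile evaluation and signal modelling, the convergence test is crude, and a simple Gauss–Newton step replaces V3FIT's quasi-Newton update; the sketch is only meant to show the structure of the iteration in figure 1.

```python
import numpy as np

def model_signals(params):
    """Toy stand-in for the chain: solve equilibrium -> GPR profiles -> signals."""
    a, b = params
    x = np.linspace(0.0, 1.0, 8)
    return a * (1.0 - x**2) + b * x

rng = np.random.default_rng(1)
S_obs = model_signals(np.array([1.0, 0.3])) + 0.01 * rng.standard_normal(8)
sigma_n = 0.01 * np.ones(8)

params = np.array([0.5, 0.0])                 # initial guess
for iteration in range(50):
    residual = (S_obs - model_signals(params)) / sigma_n
    g2 = np.sum(residual**2)
    if g2 < len(S_obs):                       # crude convergence test
        break
    # Finite-difference Jacobian: vary each parameter and recompute the signals
    # (in V3FIT this step re-solves the equilibrium and/or the GPR profiles).
    J = np.empty((len(S_obs), len(params)))
    for k in range(len(params)):
        dp = np.zeros_like(params)
        dp[k] = 1.0e-6
        J[:, k] = (model_signals(params + dp) - model_signals(params)) / (1.0e-6 * sigma_n)
    # Gauss-Newton update of the parameters.
    step, *_ = np.linalg.lstsq(J, residual, rcond=None)
    params = params + step
```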
4 Testing
This section presents three example reconstructions to illustrate and test the performance of the GPR model. The first test case is a synthetic equilibrium based on the Compact Toroidal Hybrid experiment (the experiment is described below). The case uses a realistic set of diagnostics; however, the Thomson scattering diagnostic is fictitious. The Thomson scattering diagnostic is treated as a localized measurement of the electron temperature profile. First, the synthetic equilibrium is used to illustrate the behaviour of the Gaussian process reconstruction as the hyper-parameters are varied. Here, the use of localized measurements is instructive and helps illustrate the GPR behaviour. Second, a full reconstruction of the synthetic data is performed to test the performance of the GPR reconstruction model.
The second test case uses real data from the Compact Toroidal Hybrid experiment. This reconstruction does not use the fictitious Thomson system, but instead uses the experiment’s two-colour soft x-ray system. GPR is used to reconstruct two emissivity profiles, one for each soft x-ray colour. In addition, this case uses fixed hyper-parameters to test the performance of the model when reasonable but non-optimal hyper-parameters are used.
The final test case uses real data from the Madison Symmetric Torus (described below). Here GPR is used to reconstruct the temperature profile from an experimental Thomson diagnostic. In this case the optimal set of hyper-parameters is calculated as part of the reconstruction.
Combined, these examples serve as a comprehensive test of the GPR model. The model is tested on both synthetic data and experimental data. The model is tested on multiple machines using different diagnostics. Finally, the model is tested both with fixed hyper-parameters and with hyper-parameters that are calculated as part of the reconstruction.
4.1 Synthetic data
The implementation of the Gaussian process model is tested using a synthetic equilibrium based on the Compact Toroidal Hybrid experiment (CTH) (Hartwell et al. 2017). CTH is a low-aspect-ratio, five-field-period torsatron designed to study current-carrying stellarator equilibria. A flexible set of external magnetic coils allows for the creation of a diverse set of vacuum magnetic equilibria with rotational transforms ranging from $\unicode[STIX]{x1D704}_{\text{vac}}/2\unicode[STIX]{x03C0}\approx 0.02$ to $\unicode[STIX]{x1D704}_{\text{vac}}/2\unicode[STIX]{x03C0}\approx 0.35$. The maximum on-axis toroidal magnetic field is $B_{0}=0.7~\text{T}$. An ohmic transformer is used to drive up to 80 kA of toroidal plasma current.
The synthetic equilibrium uses a two-power parameterization for the toroidal current profile,
$$I^{\prime }(s)\propto (1-s^{\alpha _{I}})^{\beta _{I}}.\quad (4.1)$$
Here $s$ is the flux-surface label, where $s=0$ labels the magnetic axis and $s=1$ labels the last closed flux surface; $I(s)$ is the net toroidal current enclosed by the flux surface $s$; and the exponents $\unicode[STIX]{x1D6FC}_{I}$ and $\unicode[STIX]{x1D6FD}_{I}$ are positive real numbers that control the shape of the current profile. For a fixed $\unicode[STIX]{x1D6FD}_{I}$, a larger value of $\unicode[STIX]{x1D6FC}_{I}$ leads to a broader profile, and a smaller value of $\unicode[STIX]{x1D6FC}_{I}$ leads to a more peaked profile. The net toroidal current $I(1)$ is calculated by integrating $I^{\prime }(s)$ from $s=0$ to $s=1$.
The pressure profile $p(s)$ and the electron temperature profile $T_{e}(s)$ are also specified using two-power profiles,
$$p(s)=p_{0}(1-s^{\alpha _{p}})^{\beta _{p}},\quad T_{e}(s)=T_{e0}(1-s^{\alpha _{T}})^{\beta _{T}},\quad (4.2)$$
where $p_{0}$ ($T_{e0}$) is the pressure (electron temperature) at the magnetic axis. The density $n$ is inferred from the pressure and electron temperature assuming that the electron and ion temperatures are equal: $p=2n\unicode[STIX]{x1D612}_{B}T_{e}$ with the Boltzmann constant $\unicode[STIX]{x1D612}_{B}$. The values of the parameters that define the synthetic equilibrium are summarized in table 1. The magnetic flux surfaces of the synthetic equilibrium are shown at the full period and half-period in figure 2.
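A short sketch of the two-power profiles used for the synthetic equilibrium is given below. The profile amplitudes and exponents shown here are illustrative placeholders rather than the values in table 1, and the normalization of $I^{\prime }(s)$ is arbitrary.

```python
import numpy as np

def two_power(s, amplitude, alpha, beta):
    """Two-power profile amplitude * (1 - s**alpha)**beta on 0 <= s <= 1."""
    return amplitude * (1.0 - s**alpha)**beta

s = np.linspace(0.0, 1.0, 101)
p = two_power(s, amplitude=1.0e3, alpha=2.0, beta=6.0)    # pressure (Pa), illustrative
T_e = two_power(s, amplitude=150.0, alpha=2.0, beta=6.0)  # electron temperature (eV)

# Toroidal current: parameterize I'(s) and integrate to obtain the enclosed
# current I(s); I(1) is then the net toroidal current.
dI_ds = two_power(s, amplitude=1.0, alpha=3.0, beta=2.0)
I_enclosed = np.concatenate(
    ([0.0], np.cumsum(0.5 * (dI_ds[1:] + dI_ds[:-1]) * np.diff(s))))
```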
The reconstructions use a realistic set of diagnostics that are based on CTH’s diagnostics. CTH’s full set of magnetic diagnostics is used. CTH’s three-channel interferometer is used to constrain the pressure profile via the inferred density. Two fictitious 10-channel Thomson scattering diagnostics, one located at the full period and the other located at the half-period, are used to infer the electron temperature. This Thomson scattering system is used to illustrate the behaviour of the Gaussian processes. The sampling locations of the Thomson scattering system are indicated by the bullets (●) in figure 2.
Synthetic data are calculated for each diagnostic used in the reconstruction. First the noise-free modelled signal is calculated for each synthetic diagnostic. Noisy signals are then generated using these modelled signals assuming 5 % Gaussian noise. The noisy signals are then used as input to a V3FIT reconstruction. The accuracy of the reconstruction is tested by comparing the reconstructed parameter values with their prescribed equilibrium values.
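The synthetic signal generation step can be summarized by the short sketch below, assuming the 5 % noise is Gaussian and proportional to each noise-free modelled signal; the modelled Thomson temperatures used here are illustrative numbers only.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(S_model, relative_error=0.05):
    """Return synthetic 'observed' signals with Gaussian noise proportional to
    the noise-free modelled signal, together with the corresponding sigmas."""
    sigma = relative_error * np.abs(S_model)
    return S_model + sigma * rng.standard_normal(S_model.shape), sigma

# Example: noise-free modelled Thomson temperatures at ten viewing locations.
S_model = np.linspace(150.0, 20.0, 10)   # eV, illustrative values
S_obs, sigma_n = add_noise(S_model)
```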
First, to illustrate the behaviour of GPR, we start by only reconstructing the electron temperature profile. Here the equilibrium current and pressure profiles are specified to have their true equilibrium values, and the electron temperature profile is reconstructed using GPR. V3FIT’s minimization routines are used to converge on the optimal set of hyper-parameters by minimizing the negative log evidence.
The reconstructed temperature at the optimized set of hyper-parameters is shown in figure 3. Figure 3(a) shows the electron temperature at each of the Thomson scattering sampling locations. The noisy synthetic signals, which are used in the reconstruction, are shown in blue. The error bars indicate $5\,\%$ uncertainty in the synthetic measured signals. The modelled signals, shown in black, are calculated from the reconstructed temperature profile. Most of the modelled signals agree with their corresponding synthetic measured signals within one or two standard deviations.
Figure 3(b) shows a comparison of the reconstructed temperature profile (black) and the prescribed temperature profile (blue). The $2\unicode[STIX]{x1D70E}$ uncertainty region for the reconstructed temperature profile is indicated by the shaded region. The noisy signals used in the reconstruction, along with their corresponding error bars, are indicated by the green markers. The reconstructed profile agrees with the synthetic profile within $2\unicode[STIX]{x1D70E}$ across the entire domain.
The reconstructed temperature profile begins to diverge from the synthetic profile for $s>0.6$, and the reconstructed profile is negative for $s\gtrsim 0.75$. This discrepancy between the reconstructed temperature profile and the synthetic profile is understood by observing that there are no measurements of the temperature for $s\gtrsim 0.6$. In this region the slope of the synthetic temperature profile drastically changes and the temperature asymptotes to zero at $s=1$. There are no measurements that capture this transition, and the Gaussian process uses a smooth gradient scale length to extrapolate the temperature profile beyond the last measurement. The Thomson diagnostic in this case has equally spaced viewing locations in physical space. This spacing leads to clustering of the measured data at small $s$ in flux coordinate space.
The reconstructed temperature profile shown in figure 3 is calculated using an optimized set of hyper-parameters: $\unicode[STIX]{x1D70E}_{f}=167,\unicode[STIX]{x1D70E}_{l}=0.495$. Figure 4(a) shows contours of the negative log evidence as a function of the hyper-parameters. The contours show a well-defined minimum around $\unicode[STIX]{x1D70E}_{f}=146\pm 5$, $\unicode[STIX]{x1D70E}_{l}=0.48\pm 0.05$, indicated by ($+$). The V3FIT optimized values of the hyper-parameters are indicated by (●). The negative log evidence has a shallow minimum, and the difference between the two sets of hyper-parameters is due to V3FIT’s convergence criteria. More stringent convergence criteria could be used to converge more accurately on the minimum; however, in practice this is not necessary. As illustrated in figure 3, a reasonable value for the hyper-parameters usually results in a good fit.
As illustrated in figure 4(a), the negative log evidence rapidly increases at small $\unicode[STIX]{x1D70E}_{f}$ and small $\unicode[STIX]{x1D70E}_{l}$. The maximum contour level in the figure has been set to 300 for clarity, but the negative log evidence far exceeds this limit at small $\unicode[STIX]{x1D70E}_{f}$ and small $\unicode[STIX]{x1D70E}_{l}$. At small $\unicode[STIX]{x1D70E}_{f}$ the increase in the negative log evidence results from a bad fit. Here, the GP profile is too tightly constrained to be zero. At small $\unicode[STIX]{x1D70E}_{l}$ the Gaussian process is strongly penalized for having too small a correlation length scale. The small correlation length causes overfitting, and at this point the GP is essentially fitting the noise in the data. Conversely, the negative log evidence gradually increases at large $\unicode[STIX]{x1D70E}_{f}$ and $\unicode[STIX]{x1D70E}_{l}$. The increase at large $\unicode[STIX]{x1D70E}_{f}$ results from penalizing the GP for having too large a variance. At large $\unicode[STIX]{x1D70E}_{l}$ the correlation length is too large, resulting in a bad fit to the data.
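The qualitative shape of this negative log evidence surface is easy to reproduce with a brute-force scan, as in the sketch below. The toy measurement locations, profile and 5 % error bars are stand-ins for the synthetic Thomson data, so the numbers will not match figure 4(a), but the same qualitative trends can be explored.

```python
import numpy as np

rng = np.random.default_rng(3)

def neg_log_evidence(sigma_f, sigma_l, x_meas, S_obs, noise_sd):
    """Negative log evidence for a squared exponential GP with point data."""
    d = np.subtract.outer(x_meas, x_meas)
    K = sigma_f**2 * np.exp(-0.5 * (d / sigma_l)**2)
    C = np.linalg.cholesky(K + np.diag(noise_sd**2))
    alpha = np.linalg.solve(C.T, np.linalg.solve(C, S_obs))
    return 0.5 * (len(S_obs) * np.log(2.0 * np.pi)
                  + 2.0 * np.sum(np.log(np.diag(C)))
                  + S_obs @ alpha)

# Toy 'measured' temperatures (eV) at ten locations in s with 5 % error bars.
x_meas = np.linspace(0.05, 0.60, 10)
T_true = 150.0 * (1.0 - x_meas**2)**2
noise_sd = 0.05 * T_true
S_obs = T_true + noise_sd * rng.standard_normal(10)

# Brute-force scan of the negative log evidence over the hyper-parameters.
sigma_f_grid = np.linspace(10.0, 400.0, 40)
sigma_l_grid = np.linspace(0.05, 2.0, 40)
nle = np.array([[neg_log_evidence(sf, sl, x_meas, S_obs, noise_sd)
                 for sl in sigma_l_grid] for sf in sigma_f_grid])
i, j = np.unravel_index(np.argmin(nle), nle.shape)
sigma_f_best, sigma_l_best = sigma_f_grid[i], sigma_l_grid[j]
```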
Figure 4(b–d) further illustrates the dependence of the reconstructed temperature profile on the hyper-parameters. The reconstructed temperature profile is shown in figure 4(b) for two values of the correlation length $\unicode[STIX]{x1D70E}_{l}$. Overfitting is observed when a small correlation length is used (black). Here the Gaussian process has the flexibility to fit the noise in the data (shown in green). The uncertainty in the fit, indicated by the grey shaded region, is large between the measured data. When a large correlation length is used (blue) the GP does not have the flexibility to conform to the profile. Here the Gaussian process is effectively fitting a straight line to the data. Both fits in figure 4(b) use the optimized value $\unicode[STIX]{x1D70E}_{f}=145.5$.
The hyper-parameter $\unicode[STIX]{x1D70E}_{f}$ quantifies the standard deviation from zero in the prior Gaussian process. This is illustrated in figures 4(c) and 4(d), which show the GP fit for two values of $\unicode[STIX]{x1D70E}_{f}$ at the optimized correlation length $\unicode[STIX]{x1D70E}_{l}=0.48$. Figure 4(c) uses the value $\unicode[STIX]{x1D70E}_{f}=4.0$ to illustrate the behaviour when $\unicode[STIX]{x1D70E}_{f}$ is too small. Here the GP is tightly constrained to be close to zero, and the resulting fit under-predicts the synthetic temperature. Figure 4(d) uses $\unicode[STIX]{x1D70E}_{f}=400$ to illustrate what happens when $\unicode[STIX]{x1D70E}_{f}$ is too large. Here, the GP still produces a good fit; however, the uncertainty in the fit is larger than the uncertainty at the optimized value of $\unicode[STIX]{x1D70E}_{f}$. This behaviour is most easily observed by comparing the shaded regions in figures 3(b) and 4(d) at $s=1$. At the optimized set of hyper-parameters the $2\unicode[STIX]{x1D70E}$ uncertainty region at $s=1$ is bounded by $T_{e}=100~\text{eV}$; in contrast, for $\unicode[STIX]{x1D70E}_{f}=400$ the uncertainty region is bounded by $T_{e}\approx 175~\text{eV}$.
Figure 4(b–d) is designed to illustrate the physical significance of the hyper-parameters. Extreme values of the hyper-parameters were chosen for illustrative purposes, and a reasonable choice of the hyper-parameters will often yield a good fit. As a general rule, the optimal value of $\unicode[STIX]{x1D70E}_{f}$ tends to be comparable to the maximum amplitude of the profile. This is a consequence of Occam’s razor, which in this context states that the deviation from the zero mean should be as small as possible while still large enough to account for all the measured data. The optimal value of $\unicode[STIX]{x1D70E}_{l}$ is a little harder to predict. We have found that $\unicode[STIX]{x1D70E}_{l}\approx 0.5$ is usually a reasonable guess for smooth profiles that vary across the entire domain. It is worth noting that non-stationary kernels, where the correlation length can vary across the domain, have been useful for capturing the sharp transitions in the gradient scale length that occur in many fusion plasmas (Li et al. 2013; Chilenski et al. 2015).
Second, a full reconstruction of the synthetic equilibrium is considered. Here, standard parametric techniques are used to reconstruct all the equilibrium parameters except for the electron temperature profile. GPR is used to infer the temperature profile. The cost function $g^{2}$ is minimized to simultaneously calculate the optimal set of parameters and GP hyper-parameters.
Table 2 shows the true value, the initial guess used to seed the reconstruction, and the reconstructed value for each reconstructed equilibrium parameter.
The largest discrepancy between the reconstructed parameters and their equilibrium values is in the current shaping factor $\unicode[STIX]{x1D6FC}_{I}$. Here, the disagreement between the reconstructed $\unicode[STIX]{x1D6FC}_{I}$ and the equilibrium value is two standard deviations. The difficulty in reconstructing this parameter is due to the nonlinear dependence of the flux-surface shape on the current shaping factor. Modest changes in $\unicode[STIX]{x1D6FC}_{I}$ have only a small impact on the shape of the flux surfaces. This is apparent in figure 5, which is discussed in the next section. The other reconstructed parameters agree with their equilibrium values within one standard deviation, and overall the reconstructed equilibrium agrees reasonably well with the synthetic equilibrium.
4.2 CTH experimental discharge
An experimental CTH discharge is used as a second benchmark of the Gaussian process model. In this test case Gaussian processes are used to represent two soft x-ray emissivity profiles. CTH has a three-camera, two-colour soft x-ray system (Herfindal et al. 2014), and a different Gaussian process is used to infer each colour’s emissivity profile. Each emissivity profile is approximated as a flux-surface quantity. The soft x-ray system can be used to tomographically infer the shape of internal flux surfaces. The shape of these flux surfaces is determined by the internal current profile. Thus, the soft x-ray measurements help to constrain the internal current profile in equilibrium reconstructions (Ma et al. 2015, 2018).
Six parameters are inferred in this reconstruction: the edge toroidal flux $\unicode[STIX]{x1D6F7}_{\text{edge}}$, the net toroidal current $I_{p}$, a toroidal current shaping factor $\unicode[STIX]{x1D6FC}_{I}$, the pressure at the magnetic axis $p_{0}$, a pressure profile shaping factor $\unicode[STIX]{x1D6FC}_{p}$ and the vertical offset of the magnetic axis from the mid-plane $z_{0}$. In this reconstruction a two-power profile is used for the current profile (4.1) and the pressure profile (4.2). The second exponent in the current profile, $\unicode[STIX]{x1D6FD}_{I}$, and the second exponent in the pressure profile, $\unicode[STIX]{x1D6FD}_{p}$, are both assumed to be six. This case uses fixed values for the Gaussian process hyper-parameters. The first colour’s kernel has the standard deviation set to $\unicode[STIX]{x1D70E}_{f1}=2$ and the correlation length scale set to $\unicode[STIX]{x1D70E}_{l1}=0.4$. The second colour’s kernel has the standard deviation set to $\unicode[STIX]{x1D70E}_{f2}=0.3$ and the correlation length scale set to $\unicode[STIX]{x1D70E}_{l2}=0.4$. These values are typical of the hyper-parameter values found for the soft x-ray system when hyper-parameter optimization is used. We refer to this reconstruction as a hybrid reconstruction since some radial profiles are represented using a parametric representation and others are represented using a Gaussian process.
A fully parametric reconstruction is also performed to compare with the Gaussian process model. This reconstruction uses two ten-segment linear splines, one for each of the emissivity profiles. The knots of the splines are equally spaced in $s$, and the amplitudes of the knots are treated as free parameters. In total this model has 26 free parameters.
Table 3 shows the reconstructed values of the six equilibrium parameters for the hybrid Gaussian process reconstruction and the parametric reconstruction. The values of the equilibrium parameters for the two reconstructions agree within the $2\unicode[STIX]{x1D70E}$ error. The soft x-ray emissivity is needed to constrain the current shaping factor $\unicode[STIX]{x1D6FC}_{I}$, and the fact that the two values are close indicates that the method is successfully using the GP profiles to constrain the equilibrium profiles. The large uncertainty in the peak pressure and the pressure profile shaping factor is due to the fact that these reconstructions do not use any direct measurements of the temperature, density or pressure. While the uncertainty in these parameters is large, including these pressure parameters in the reconstruction gives the $\unicode[STIX]{x1D712}^{2}$-minimization routine extra degrees of freedom, which aids convergence.
Figure 5 shows reconstructed magnetic flux surfaces produced by the two methods at the full period and half-period. The observation that the flux surfaces nearly lie on top of each other is another indication that the two reconstructions are in good agreement. This also illustrates the weak dependence of the flux-surface shape on the current shaping parameter $\unicode[STIX]{x1D6FC}_{I}$.
The reconstructed soft x-ray emissivity profiles for the two colours are shown in figure 6. The mean of the GP posterior distribution is shown in black, and the $2\unicode[STIX]{x1D70E}$ uncertainty region is indicated by the grey shaded region. The amplitudes of the linear spline knots for the parametric reconstruction are indicated by the green bullets, and the error bars also indicate the $2\unicode[STIX]{x1D70E}$ uncertainty. The reconstructed profiles are in good agreement; however, the uncertainty in the parametrically reconstructed profiles is larger than the uncertainty in the GP reconstructed profiles throughout most of the domain. The second soft x-ray emissivity profile (figure 6b) has a small amplitude in the outer flux region $s>0.5$, and the corresponding soft x-ray measurements have a small signal-to-noise ratio. Thus the uncertainty in this region is large.
In general, the reconstruction of the CTH equilibrium shows good agreement between the two methods. The two reconstructions used a total of 135 diagnostic signals to constrain the equilibrium profiles. The hybrid GP reconstruction used a total of six free parameters, and the final equilibrium had a chi-squared value of $\unicode[STIX]{x1D712}^{2}=132$. This equates to an average chi-squared value of $\langle \unicode[STIX]{x1D712}^{2}\rangle =0.98$ per signal. In contrast, the fully parametric reconstruction used a total of 26 free parameters. It had a chi-squared value of $\unicode[STIX]{x1D712}^{2}=114$ and an average value of $\langle \unicode[STIX]{x1D712}^{2}\rangle =0.85$ per signal.
4.3 MST single helicity axis state
A reconstruction of a single-helical-axis (SHAx) state in the Madison Symmetric Torus (MST) (Dexter et al. 1991) is used as a second experimental test case. MST is normally operated as a reversed field pinch with a minor radius of $a=0.52~\text{m}$ and a major radius of $R=1.52~\text{m}$. The toroidal magnetic field has a maximum value of $B_{\unicode[STIX]{x1D719}}\leqslant 0.5~\text{T}$ at the magnetic axis; the field decreases away from the magnetic axis, and it reverses sign near the edge. During standard operation MST is characterized by an axisymmetric equilibrium. At high plasma current, $I_{p}\approx 500~\text{kA}$, the MST plasma spontaneously transitions to a three-dimensional SHAx equilibrium (Bergerson et al. 2011). V3FIT has previously been used to reconstruct these SHAx states using the standard parametric representation (Koliner et al. 2016). Here one of these previously reported SHAx equilibria is used to benchmark the hybrid Gaussian process model in V3FIT.
This MST reconstruction uses approximately 200 diagnostics, which include both a poloidal array and a toroidal array of magnetic diagnostics, a single-colour multi-chord soft x-ray system, a multipoint Thomson scattering diagnostic and a far-infrared interferometry/polarimetry diagnostic. The only difference between the two reconstructions is that the parametric reconstruction uses a cubic spline to represent the temperature profile, whereas the hybrid reconstruction uses a GP to reconstruct this profile. Here, the optimal hyper-parameters are calculated as part of the minimization of the error. Both reconstructions use cubic splines to represent the pressure profile and the safety-factor profile. A two-power profile is used to represent the soft x-ray emissivity profile. The density profile is inferred from the pressure profile and the electron temperature profile. This is in contrast to the reconstructions presented by Koliner et al. (2016), where the density profile was directly reconstructed and the temperature profile was inferred from the pressure and density profiles. This reconstruction is designed to test the GPR implementation for a Thomson scattering diagnostic using real experimental data; hence the choice to explicitly model the temperature profile instead of the density profile.
The hybrid GP reconstruction and the fully parametric reconstruction provide similar values for all of their shared parameters. This is illustrated in figure 7, which shows the flux surfaces from the two reconstructions. The fact that the two sets of flux surfaces lie on top of each other indicates that the two MHD equilibria are in close agreement. However, the Thomson system on MST does not tomographically constrain the flux-surface shape, and the temperature profile only weakly constrains the equilibrium flux-surface shape via the pressure profile.
Figure 8 compares the reconstructed temperature profiles for the two methods. Figure 8(a) shows the modelled temperature at each of the Thomson viewing locations. Both the GP temperature profile and the parametric temperature profile agree with the experimentally measured temperature within $2\unicode[STIX]{x1D70E}$ at each of the Thomson viewing locations. Channel 14 produced a bad signal for this particular shot, and it is not used in the reconstruction. Figure 8(b) compares the Gaussian process temperature profile with the parametric cubic spline temperature profile. The two profiles agree within the $2\unicode[STIX]{x1D70E}$ uncertainty, and the two reconstructions give similar estimates of the uncertainty in the reconstruction.
One advantage of the GP profile is apparent near the core, $s\lesssim 0.4$. In this region the parametric reconstruction exhibits oscillatory behaviour characteristic of overfitting, whereas the Gaussian process is not oscillatory. These oscillations can negatively impact core transport and stability analyses, which are sensitive to derivatives of the equilibrium profiles. The oscillations can be removed by reducing the number of knots in the spline and/or adjusting their locations, but this process requires trial and error. In contrast, the Gaussian process automatically eliminates this behaviour in this particular example.
The parametric reconstruction has an average of $\langle \unicode[STIX]{x1D712}^{2}\rangle =1.36$ per signal and uses 26 free parameters. The Gaussian process reconstruction has a comparable average value of $\langle \unicode[STIX]{x1D712}^{2}\rangle =0.47$ per signal (this excludes the log evidence term). The Gaussian process reconstruction uses 20 free parameters and two free hyper-parameters.
5 Discussion and conclusions
This paper introduced a new hybrid regression model that has been implemented in the 3-D reconstruction code V3FIT. This hybrid regression model uses a combination of parametric and non-parametric techniques to infer radial profiles from experimental data during equilibrium reconstruction. Gaussian process regression is used to infer radial profiles from diagnostic signals that are modelled using a linear operator acting on the underlying profile. Standard parametric techniques are used to infer the remaining profiles and equilibrium quantities. The linear assumption simplifies the calculation of the Gaussian process posterior, and the resulting mean and covariance matrices can then be calculated using standard linear algebra techniques.
One of the advantages of using Gaussian process regression is that the profile shape is inferred directly from the experimental data. The equilibrium reconstruction process depends strongly on the choices that the user makes. A bad parameterization can introduce systematic errors that exclude certain types of behaviour, introduce oscillations due to overfitting, etc. Gaussian process regression helps address these issues by reducing the number of inputs that a user has to make. For example, instead of asking the user to specify a functional form for the profile, use of a GP requires only that the user specify a much broader class of functions (defined by specifying a covariance kernel) to which the profile belongs.
Another advantage of the Gaussian process formalism is that the calculation yields both the posterior mean and the posterior covariance matrix for linear models. These two quantities completely specify the posterior distribution of the reconstructed profile. In post-processing one can use these quantities to randomly sample the posterior profile. These randomly sampled profiles can then be used for sensitivity analysis. For example, M. Chilenski used temperature and density profiles sampled from a Gaussian process to do error propagation analysis of turbulent transport fluxes (Chilenski et al. 2015). A similar analysis can be applied to other fusion calculations.
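As a sketch of how such post-processing might look, the Python fragment below draws random profiles from a multivariate normal defined by a GPR posterior mean and covariance; the toy mean profile and covariance used here are placeholders, and each sampled profile could then be passed through a downstream calculation to propagate the profile uncertainty.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_posterior_profiles(mu_star, Sigma_star, n_samples=100):
    """Draw random profiles from the multivariate normal posterior defined by
    the GPR mean vector and covariance matrix."""
    jitter = 1e-8 * np.eye(len(mu_star))
    L = np.linalg.cholesky(Sigma_star + jitter)
    z = rng.standard_normal((n_samples, len(mu_star)))
    return mu_star[None, :] + z @ L.T

# Toy posterior: a smooth mean profile with spatially correlated uncertainty.
s = np.linspace(0.0, 1.0, 21)
mu_star = 150.0 * (1.0 - s**2)**2
Sigma_star = 25.0 * np.exp(-0.5 * np.subtract.outer(s, s)**2 / 0.2**2)
profiles = sample_posterior_profiles(mu_star, Sigma_star)
```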
Comparisons between the fully parametric reconstructions and the hybrid GP reconstructions show good agreement between the two methods. These comparisons use experimental data from CTH and MST. In both cases the fully parametric reconstructions use linear or cubic splines to model the radial profiles. These profiles are instead inferred using GPR in the hybrid reconstructions.
The work presented in this paper represents the first steps to incorporating Gaussian process regression into V3FIT. A simple, yet powerful, covariance kernel is used, and the implementation is only valid for linear diagnostics. The modularity of the code makes it straightforward to extend the functionality of the GP model. Minor code modifications would be needed to implement alternative kernel functions.
Acknowledgements
The authors would like to thank J. Svensson for helpful conversations and the CTH and MST experimental teams for providing the data used in this work. This work is supported by Auburn University and by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences program under award nos. DE-FG02-03ER54692 and DE-SC0018313.