A numerical method is proposed for a class of one-dimensional stochastic control problems with unbounded state space. This method solves an infinite-dimensional linear program, equivalent to the original formulation based on a stochastic differential equation, using a finite element approximation. The discretization scheme itself and the necessary assumptions are discussed, and a convergence argument for the method is presented. Its performance is illustrated by examples featuring long-term average and infinite horizon discounted costs, and additional optimization constraints.
We study a 2-stage game-theoretic problem oriented 3-stage service policy: its computation, a convolutional neural network (CNN) based algorithm design, and simulation for a blockchained buffering system with federated learning. More precisely, based on the game-theoretic problem consisting of both “win-lose” and “win-win” 2-stage competitions, we derive a 3-stage dynamical service policy via a saddle point of a zero-sum game problem and a Nash equilibrium point of a non-zero-sum game problem. The policy concerns user selection, dynamic pricing, and online rate resource allocation via stable digital currency for the system. The main focus is on the design and analysis of the joint 3-stage service policy for given queue/environment state dependent pricing and utility functions. The asymptotic optimality and fairness of this dynamic service policy are justified by diffusion modeling with approximation theory. A general CNN based policy computing algorithm flow chart, along the lines of the so-called big model framework, is presented. Simulation case studies are conducted for a system with three users, where at any time point only two of the three users can be selected into service by a zero-sum dual cost game competition policy. The selected two users then enter service and share the system's rate service resource through a non-zero-sum dual cost game competition policy. Applications of our policy to the future blockchain based Internet (e.g., the metaverse and Web 3.0) and to supply chain finance are also briefly illustrated.
The principle of maximum entropy is a well-known approach to produce a model for data-generating distributions. In this approach, if partial knowledge about the distribution is available in terms of a set of information constraints, then the model that maximizes entropy under these constraints is used for the inference. In this paper, we propose a new three-parameter lifetime distribution using the maximum entropy principle under the constraints on the mean and a general index. We then present some statistical properties of the new distribution, including hazard rate function, quantile function, moments, characterization, and stochastic ordering. We use the maximum likelihood estimation technique to estimate the model parameters. A Monte Carlo study is carried out to evaluate the performance of the estimation method. In order to illustrate the usefulness of the proposed model, we fit the model to three real data sets and compare its relative performance with respect to the beta generalized Weibull family.
In this article we consider the estimation of the log-normalization constant associated to a class of continuous-time filtering models. In particular, we consider ensemble Kalman–Bucy filter estimates based upon several nonlinear Kalman–Bucy diffusions. Using new conditional bias results for the mean of the aforementioned methods, we analyze the empirical log-scale normalization constants in terms of their $\mathbb{L}_n$-errors and $\mathbb{L}_n$-conditional bias. Depending on the type of nonlinear Kalman–Bucy diffusion, we show that these are bounded above by terms such as $\mathsf{C}(n)\left[t^{1/2}/N^{1/2} + t/N\right]$ or $\mathsf{C}(n)/N^{1/2}$ ($\mathbb{L}_n$-errors) and $\mathsf{C}(n)\left[t+t^{1/2}\right]/N$ or $\mathsf{C}(n)/N$ ($\mathbb{L}_n$-conditional bias), where $t$ is the time horizon, $N$ is the ensemble size, and $\mathsf{C}(n)$ is a constant that depends only on $n$, not on $N$ or $t$. Finally, we use these results for online static parameter estimation for the above filtering models and implement the methodology for both linear and nonlinear models.
We introduce an approach and a software tool for solving coupled energy networks composed of gas and electric power networks. These networks are subject to stochastic fluctuations that model uncertain demands and supplies. The presented approach is tested computationally on networks of realistic size.
Oscillatory systems of interacting Hawkes processes with Erlang memory kernels were introduced by Ditlevsen and Löcherbach (Stoch. Process. Appl., 2017). They are piecewise deterministic Markov processes (PDMP) and can be approximated by a stochastic diffusion. In this paper, first, a strong error bound between the PDMP and the diffusion is proved. Second, moment bounds for the resulting diffusion are derived. Third, approximation schemes for the diffusion, based on the numerical splitting approach, are proposed. These schemes are proved to converge with mean-square order 1 and to preserve the properties of the diffusion, in particular the hypoellipticity, the ergodicity, and the moment bounds. Finally, the PDMP and the diffusion are compared through numerical experiments, where the PDMP is simulated with an adapted thinning procedure.
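The thinning idea used above to simulate the PDMP can be illustrated on a single Hawkes process with an exponential (Erlang order-one) kernel. The following is a minimal Ogata-style thinning sketch, not the paper's adapted procedure; the parameter values `mu`, `alpha`, `beta`, `T` are hypothetical.

```python
import numpy as np

def simulate_hawkes_thinning(mu, alpha, beta, T, rng):
    """Ogata-style thinning for a Hawkes process with intensity
    lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i))."""
    t, events = 0.0, []
    while True:
        # the intensity only decays between events, so its current value
        # is a valid upper bound until the next accepted point
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + alpha * sum(np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:
            events.append(t)  # accept the candidate point
    return events

rng = np.random.default_rng(0)
events = simulate_hawkes_thinning(mu=1.0, alpha=0.5, beta=1.0, T=50.0, rng=rng)
```

Stability requires the branching ratio `alpha / beta` to stay below one, which holds for the values chosen here.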
Self-exciting point processes have been proposed as models for the location of criminal events in space and time. Here we consider the case where the triggering function is isotropic and takes a non-parametric form that is determined from data. We pay special attention to normalisation issues and to the choice of spatial distance measure, thereby extending the current methodology. After validating these ideas on synthetic data, we perform inference and prediction tests on public domain burglary data from Chicago. We show that the algorithmic advances that we propose lead to improved predictive accuracy.
In the first part of this paper we study thinning-based approximations of trajectories of piecewise deterministic processes (PDPs) when the flow is not given explicitly. We also establish a strong error estimate for PDPs as well as a weak error expansion for piecewise deterministic Markov processes (PDMPs). These estimates are the building blocks of the multilevel Monte Carlo (MLMC) method, which we study in the second part. The coupling required by MLMC is based on the thinning procedure. In the third part we apply these results to a two-dimensional Morris–Lecar model with stochastic ion channels. In the range of our simulations the MLMC estimator outperforms classical Monte Carlo.
The deferred correction (DC) method is a classical method for solving ordinary differential equations; one of its key features is the iterative use of lower-order numerical methods so that a high-order numerical scheme can be obtained. The main advantage of the DC approach is its simplicity and robustness. In this paper, the DC idea is adopted to solve forward backward stochastic differential equations (FBSDEs), which have practical importance in many applications. Note that it is difficult to design high-order and relatively “clean” numerical schemes for FBSDEs due to the involvement of randomness and the coupling of the forward and backward equations. This paper describes how to use the simplest Euler method in each DC step, which keeps the computational complexity low, while still achieving a high-order rate of convergence.
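The core DC mechanism of sweeping a low-order method to raise the order can be sketched on a deterministic ODE. This is a simplified deferred-correction iteration with trapezoidal quadrature, not the paper's FBSDE scheme; the substep and sweep counts are illustrative. Each correction sweep re-runs an Euler-type update against the previous iterate.

```python
import numpy as np

def sdc_solve(f, y0, t0, t1, n_sub, n_sweeps):
    """Deferred correction on [t0, t1]: a provisional explicit-Euler pass
    over the substeps, then correction sweeps that each raise the order
    (up to the order of the trapezoidal quadrature used here)."""
    ts = np.linspace(t0, t1, n_sub + 1)
    h = ts[1] - ts[0]
    # provisional solution by explicit Euler
    y = np.empty(n_sub + 1)
    y[0] = y0
    for n in range(n_sub):
        y[n + 1] = y[n] + h * f(ts[n], y[n])
    for _ in range(n_sweeps):
        ynew = np.empty_like(y)
        ynew[0] = y0
        for n in range(n_sub):
            # trapezoidal quadrature of f along the previous iterate
            quad = 0.5 * h * (f(ts[n], y[n]) + f(ts[n + 1], y[n + 1]))
            ynew[n + 1] = ynew[n] + h * (f(ts[n], ynew[n]) - f(ts[n], y[n])) + quad
        y = ynew
    return y[-1]

# y' = -y, y(0) = 1 on [0, 1]; exact solution exp(-1)
approx = sdc_solve(lambda t, y: -y, 1.0, 0.0, 1.0, 20, 2)
```

With two sweeps the result is close to the trapezoidal (second-order) solution, far more accurate than the provisional Euler pass alone.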
In order to study the local refinement of horizontal resolution for a global model with Spherical Centroidal Voronoi Tessellations (SCVTs), the SCVTs are set to 10242 cells and 40962 cells respectively using the density function. The ratio between the grid resolutions in the high- and low-resolution regions (hereafter RHL) is set to 1:2, 1:3 and 1:4 for both 10242 cells and 40962 cells, and the width of the grid transition zone (for simplicity, WTZ) is set to 18° and 9° to investigate their impacts on the model simulation. The idealized test cases, i.e. the cosine bell and the global steady-state nonlinear zonal geostrophic flow, are carried out with the above settings. Simulation results show that the larger the RHL, the larger the resulting error: the 1:4 ratio gives rise to much larger errors than the 1:2 or 1:3 ratio, while the errors resulting from the WTZ are much smaller than those from the RHL. No significant wave distortion or reflected waves are found when the fluctuation passes through the refinement region, and the error is notably small in the refinement region. Therefore, when designing a local refinement scheme in a global model with SCVTs, the RHL should be smaller than 1:4; the error is acceptable when the RHL is 1:2 or 1:3.
It is well known that the traditional fully integrated quadrilateral element fails to provide accurate results for the Helmholtz equation with large wave numbers due to the “pollution error” caused by numerical dispersion. To overcome this deficiency, this paper proposes an element decomposition method (EDM) for analyzing 2D acoustic problems using quadrilateral elements. In the present EDM, the quadrilateral element is first subdivided into four sub-triangles, and the local acoustic gradient in each sub-triangle is obtained using a linear interpolation function. The acoustic gradient field of the whole quadrilateral is then formulated through a weighted averaging operation, which means only one integration point is adopted to construct the system matrix. To cure the numerical instability of one-point integration, a variation gradient term constructed from the variance of the local gradients is added. The discretized system equations are derived using the generalized Galerkin weak form. Numerical examples demonstrate that the EDM achieves better accuracy and higher computational efficiency. Besides, as no mapping or coordinate transformation is involved, restrictions on element shape can easily be removed, which makes the EDM work well even for severely distorted meshes.
In this paper, we investigate the mean-square convergence of the split-step θ-scheme for nonlinear stochastic differential equations with jumps. Under some standard assumptions, we rigorously prove that the strong convergence rate of the split-step θ-scheme is one half. Some numerical experiments are carried out to support our theoretical result.
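A minimal sketch of a split-step θ-scheme for a scalar jump SDE follows; the drift `f`, diffusion `g`, jump coefficient `c` and Poisson rate `lam` are all hypothetical, and the paper's precise scheme and assumptions may differ. The implicit drift stage is solved by fixed-point iteration, after which noise and jumps are added explicitly.

```python
import numpy as np

def split_step_theta(f, g, c, y0, T, n, lam, theta, rng):
    """Split-step theta scheme for dY = f(Y) dt + g(Y) dW + c(Y-) dN,
    with N a Poisson process of rate lam.  The drift stage
        Y* = Y_n + h * (theta * f(Y*) + (1 - theta) * f(Y_n))
    is solved by fixed-point iteration; noise and jumps are explicit."""
    h = T / n
    y = y0
    for _ in range(n):
        ystar = y
        for _ in range(50):  # fixed-point solve of the implicit drift stage
            ystar = y + h * (theta * f(ystar) + (1 - theta) * f(y))
        y = ystar + g(ystar) * rng.normal(0.0, np.sqrt(h)) \
                  + c(ystar) * rng.poisson(lam * h)
    return y

# sanity check: with no noise or jumps this reduces to the theta method for y' = -y
det = split_step_theta(lambda y: -y, lambda y: 0.0, lambda y: 0.0,
                       1.0, 1.0, 100, 0.0, 0.5, np.random.default_rng(0))
# a path with multiplicative noise and jumps (hypothetical coefficients)
ysto = split_step_theta(lambda y: -y, lambda y: 0.1 * y, lambda y: 0.05 * y,
                        1.0, 1.0, 100, 2.0, 0.5, np.random.default_rng(1))
```

For θ = 1/2 the deterministic limit matches the trapezoidal rule, which is why `det` lands very close to exp(-1).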
We discuss modelling and simulation of volumetric rainfall in a catchment of the Murray–Darling Basin – an important food production region in Australia that was seriously affected by a recent prolonged drought. Consequently, there has been sustained interest in development of improved water management policies. In order to model accumulated volumetric catchment rainfall over a fixed time period, it is necessary to sum weighted rainfall depths at representative sites within each sub-catchment. Since sub-catchment rainfall may be highly correlated, the use of a Gamma distribution to model rainfall at each site means that catchment rainfall is expressed as a sum of correlated Gamma random variables. We compare four different models and conclude that a joint probability distribution for catchment rainfall constructed by using a copula of maximum entropy is the most effective.
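Summing correlated Gamma variables can be illustrated with a shared-component construction rather than the maximum-entropy copula of the paper; because independent Gammas with a common scale sum to a Gamma, the two sites below have Gamma marginals with a controllable correlation. All shapes, scales, and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
theta = 2.0                   # common scale parameter (hypothetical)
a0, a1, a2 = 3.0, 1.0, 2.0    # shared and site-specific shapes (hypothetical)

# shared-component construction: X_i = G0 + G_i with a common scale theta
g0 = rng.gamma(a0, theta, n)
x1 = g0 + rng.gamma(a1, theta, n)   # Gamma(a0 + a1, theta) marginal
x2 = g0 + rng.gamma(a2, theta, n)   # Gamma(a0 + a2, theta) marginal

w1, w2 = 0.6, 0.4                   # site weights (hypothetical)
catchment = w1 * x1 + w2 * x2       # weighted catchment rainfall total

mean_est = catchment.mean()
mean_true = (w1 * (a0 + a1) + w2 * (a0 + a2)) * theta  # analytic mean
corr = np.corrcoef(x1, x2)[0, 1]
corr_true = a0 / np.sqrt((a0 + a1) * (a0 + a2))        # analytic correlation
```

The shared shape `a0` sets the inter-site correlation, so the construction reproduces the "sum of correlated Gamma random variables" structure while every marginal stays Gamma.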
The modified ghost fluid method (MGFM), due to its reasonable treatment of the ghost fluid state, has been shown to be robust and efficient when applied to compressible multi-medium flows. Other feasible definitions of the ghost fluid state, however, have yet to be systematically presented. By analyzing all possible wave structures and relations for a multi-medium Riemann problem, we derive all the conditions to define the ghost fluid state. Under these conditions, the solution in the real fluid region can be obtained exactly, regardless of the wave pattern in the ghost fluid region. According to the analysis herein, a practical ghost fluid method (PGFM) is proposed to simulate compressible multi-medium flows. In contrast with the MGFM, where three degrees of freedom at the interface are required to define the ghost fluid state, only one degree of freedom is required in this treatment. However, when these methods, though proved correct in theory, are used in computations for the multi-medium Riemann problem, numerical errors at the material interface may be inevitable. We show that these errors are in essence mainly induced by the single-medium numerical scheme, rather than by the ghost fluid method itself. Equipped with some density-correction techniques, the PGFM is found to suppress these unphysical solutions dramatically.
By introducing a new Gaussian process and a new compensated Poisson random measure, we propose an explicit prediction-correction scheme for solving decoupled forward backward stochastic differential equations with jumps (FBSDEJs). For this scheme, we first theoretically obtain a general error estimate result, which implies that the scheme is stable. Then using this result, we rigorously prove that the accuracy of the explicit scheme can be of second order. Finally, we carry out some numerical experiments to verify our theoretical results.
Based on a set of backward orthogonal polynomials, we propose a novel multi-step numerical scheme for solving decoupled forward-backward stochastic differential equations (FBSDEs). Under Lipschitz conditions on the coefficients of the FBSDEs, we first obtain a general error estimate result which implies zero-stability of the proposed scheme, and then we further prove that the convergence rate of the scheme can be of high order for Markovian FBSDEs. Some numerical experiments are presented to demonstrate the accuracy of the proposed multi-step scheme and to numerically verify the theoretical results.
Convergence analysis is presented for recently proposed multistep schemes, when applied to a special type of forward-backward stochastic differential equations (FBSDEs) that arises in finance and stochastic control. The corresponding k-step scheme admits a k-order convergence rate in time, when the exact solution of the forward stochastic differential equation (SDE) is given. Our analysis assumes that the terminal conditions and the FBSDE coefficients are sufficiently regular.
Collocation has become a standard tool for approximation of parameterized systems in the uncertainty quantification (UQ) community. Techniques for least-squares regularization, compressive sampling recovery, and interpolatory reconstruction are becoming standard tools used in a variety of applications. Selection of a collocation mesh is frequently a challenge, but methods that construct geometrically unstructured collocation meshes have shown great potential due to attractive theoretical properties and direct, simple generation and implementation. We investigate properties of these meshes, presenting stability and accuracy results that can be used as guides for generating stochastic collocation grids in multiple dimensions.
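A minimal sketch of least-squares recovery on a geometrically unstructured (random) collocation mesh follows; the model `f(y) = exp(y)` and the Legendre degree are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# geometrically unstructured "mesh": random draws of the parameter
y = rng.uniform(-1.0, 1.0, 200)
f = np.exp(y)                      # model output at the collocation points

# least-squares recovery in a Legendre basis of degree <= 5
V = np.polynomial.legendre.legvander(y, 5)   # design (Vandermonde-type) matrix
coef, *_ = np.linalg.lstsq(V, f, rcond=None)

# accuracy of the surrogate on a uniform test grid
yt = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(np.polynomial.legendre.legval(yt, coef) - np.exp(yt)))
```

Oversampling (200 points for 6 coefficients) is what keeps the least-squares problem well conditioned here; stability results of the kind discussed above quantify how much oversampling a given mesh requires.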
The multi-level Monte Carlo method proposed by Giles (2008) approximates the expectation of some functionals applied to a stochastic process with optimal order of convergence for the mean-square error. In this paper a modified multi-level Monte Carlo estimator is proposed with significantly reduced computational costs. As the main result, it is proved that the modified estimator reduces the computational costs asymptotically by a factor $(p/\alpha)^2$ if weak approximation methods of orders $\alpha$ and $p$ are applied in the case of computational costs growing with the same order as the variances decay.
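For context, the standard (unmodified) MLMC estimator of Giles can be sketched for a linear scalar SDE with known mean; the coefficients and level/path counts below are hypothetical, and the paper's modified estimator is not implemented here. The key point is that fine and coarse Euler paths on each level share the same Brownian increments, so the level corrections have small variance and the estimator telescopes to the finest-level expectation.

```python
import numpy as np

rng = np.random.default_rng(7)
a, sig, y0, T = -1.0, 0.5, 1.0, 1.0   # linear SDE dY = a*Y dt + sig*Y dW

def level_estimate(level, n_paths):
    """Monte Carlo estimate of the level-l correction E[P_l - P_{l-1}],
    where P_l is the Euler endpoint on 2**l steps; fine and coarse paths
    are driven by the same Brownian increments (the MLMC coupling)."""
    nf = 2 ** level
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n_paths, nf))
    yf = np.full(n_paths, y0)
    for k in range(nf):
        yf = yf + a * yf * hf + sig * yf * dW[:, k]
    if level == 0:
        return yf.mean()
    yc = np.full(n_paths, y0)
    for k in range(nf // 2):
        dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]   # summed fine increments
        yc = yc + a * yc * (2 * hf) + sig * yc * dWc
    return (yf - yc).mean()

est = sum(level_estimate(l, 100_000) for l in range(6))
exact = y0 * np.exp(a * T)   # E[Y_T] = y0*exp(a*T) for this linear SDE
```

The residual discrepancy between `est` and `exact` is the weak (bias) error of the finest Euler discretization plus Monte Carlo noise, which is exactly the trade-off the variance-decay/weak-order analysis above quantifies.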
A Newton/LU-SGS (lower-upper symmetric Gauss-Seidel) implicit iteration method was developed to solve the two-dimensional Euler and Navier-Stokes equations with DG/FV hybrid schemes on arbitrary grids. The Newton iteration was employed to solve the nonlinear system, while the linear system was solved with LU-SGS iteration. The effect of several parameters in the implicit scheme, such as the CFL number, the number of Newton sub-iterations, and the update frequency of the Jacobian matrix, was investigated to evaluate the convergence behaviour. Several typical test cases were simulated and compared with the traditional explicit Runge-Kutta (RK) scheme. First, Couette flow was tested to validate the order of accuracy of the present DG/FV hybrid schemes. Then a subsonic inviscid flow over a bump in a channel was simulated, and the effect of the parameters was also investigated. Finally, the implicit algorithm was applied to simulate a subsonic inviscid flow over a circular cylinder and the viscous flow in a square cavity. The numerical results demonstrate that the present implicit scheme accelerates convergence efficiently, and that choosing proper parameters improves its efficiency further. Moreover, in the same framework, the DG/FV hybrid schemes are more efficient than DG schemes of the same order.