Aggregate implements an efficient fast Fourier transform (FFT)-based algorithm to approximate compound probability distributions. Leveraging FFT-based methods offers advantages over recursion and simulation-based approaches, providing speed and accuracy to otherwise time-consuming calculations. Combining user-friendly features and an expressive domain-specific language called DecL, Aggregate enables practitioners and nonprogrammers to work with complex distributions effortlessly. The software verifies the accuracy of its FFT-based numerical approximations by comparing their first three moments to those calculated analytically from the specified frequency and severity. This moment-based validation, combined with carefully chosen default parameters, allows users without in-depth knowledge of the underlying algorithm to be confident in the results. Aggregate supports a wide range of frequency and severity distributions, policy limits and deductibles, and reinsurance structures and has applications in pricing, reserving, risk management, teaching, and research. It is written in Python.
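The core FFT idea can be illustrated independently of the package. The following sketch is a minimal NumPy illustration of an FFT-based compound Poisson approximation; it does not use Aggregate's API or its DecL language, and the frequency, severity, bucket size, and variable names are assumptions chosen for the example.

```python
import numpy as np
from scipy import stats

# Illustrative FFT-based compound Poisson approximation (not the Aggregate API).
# Assumed inputs for the example: Poisson(lam) frequency, lognormal severity.
lam, mu, sigma = 4.0, 0.5, 0.75        # frequency mean and lognormal parameters
n, dx = 2**16, 0.05                    # number of buckets and bucket width

x = np.arange(n) * dx
sev = stats.lognorm(sigma, scale=np.exp(mu))
# Discretize severity with a "rounding" method: each bucket's mass sits at its centre.
edges = (np.arange(n + 1) - 0.5) * dx
p_sev = np.diff(sev.cdf(edges))

# Aggregate distribution via the frequency PGF applied to the severity transform:
# for a Poisson frequency, P(z) = exp(lam * (z - 1)).
ft_sev = np.fft.fft(p_sev)
p_agg = np.real(np.fft.ifft(np.exp(lam * (ft_sev - 1))))

# Moment check, mirroring in spirit the moment-based validation described above.
mean_fft = np.sum(x * p_agg)
mean_analytic = lam * sev.mean()
print(mean_fft, mean_analytic)         # the two values should agree closely
```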
Covers differentiation and integration, higher derivatives, partial derivatives, series expansion, integral transforms, convolution integrals, Laplace transforms, linear and time-invariant systems, linear ordinary differential equations, periodic functions, Fourier series and transforms, and matrix algebra.
This chapter introduces the continuous-time Fourier transform (CTFT) and its properties. Many examples are presented to illustrate the properties. The inverse CTFT is derived. As one example of its application, the impulse response of the ideal lowpass filter is obtained. The derivative properties of the CTFT are used to derive many Fourier transform pairs. One result is that the normalized Gaussian signal is its own Fourier transform, and constitutes an eigenfunction of the Fourier transform operator. Many such eigenfunctions are presented. The relation between the smoothness of a signal in the time domain and its decay rate in the frequency domain is studied. Smooth signals have rapidly decaying Fourier transforms. Spline signals are introduced, which have provable smoothness properties in the time domain. For causal signals it is proved that the real and imaginary parts of the CTFT are related to each other. This is called the Hilbert transform, Poisson’s transform, or the Kramers–Kronig transform. It is also shown that Mother Nature “computes” a Fourier transform when a plane wave is propagating across an aperture and impinging on a distant screen – a well-known result in optics, crystallography, and quantum physics.
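The claim that the normalized Gaussian is its own Fourier transform can be checked numerically. The sketch below is illustrative only: it assumes the unitary convention F(ω) = (2π)^(-1/2) ∫ f(t) e^{-iωt} dt and approximates the integral by a Riemann sum rather than reproducing the chapter's analytical derivation.

```python
import numpy as np

# Numerical check that the Gaussian exp(-t^2/2) is its own Fourier transform
# under the assumed unitary convention F(w) = (2*pi)^(-1/2) * int f(t) exp(-i*w*t) dt.
n, dt = 4096, 0.01
t = (np.arange(n) - n // 2) * dt                  # time grid covering roughly [-20, 20]
f = np.exp(-t**2 / 2)

w = np.linspace(-5.0, 5.0, 201)                   # frequencies at which to evaluate F
# Approximate the defining integral by a Riemann sum at each frequency.
F = np.array([dt / np.sqrt(2 * np.pi) * np.sum(f * np.exp(-1j * wk * t)) for wk in w])

# Compare with the analytic result exp(-w^2/2); the discrepancy should be tiny.
print(np.max(np.abs(F - np.exp(-w**2 / 2))))
```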
In this chapter, the aim is to visualize wave dynamics in one dimension as dictated by the Schrödinger equation. The necessary numerical tools are introduced in the first part of the chapter. Via discretization, the wave function is represented as a column vector and the Hamiltonian, which enters into the Schrödinger equation, as a square matrix. It is also seen how different approximations behave as the numerical wave function reaches the numerical boundary – where artefacts appear. This numerical framework is first used to see how a Gaussian wave packet would change its width in time and, eventually, spread out. The interference of two waves is also simulated, and wave packets are sent towards barriers to see how they bounce back or, possibly, tunnel through to the other side. In the last part of the chapter, it is explained how quantum measurements provide eigenvalues as answers – for any observable physical quantity. This, in turn, is related to what is called the collapse of the wave function. It is also discussed how a quantity whose operator commutes with the Hamiltonian is conserved in time. Finally, the concept of stationary solutions is introduced in order to motivate the following chapter.
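A minimal version of such a framework can be assembled from a finite-difference Hamiltonian and a Crank–Nicolson time step. The sketch below is not the chapter's implementation; the grid sizes, the Gaussian packet parameters, and the natural units (ħ = m = 1) are assumptions chosen for illustration.

```python
import numpy as np

# Minimal 1D Schrodinger propagation sketch (natural units, hbar = m = 1 assumed).
n, dx, dt = 512, 0.1, 0.01
x = (np.arange(n) - n // 2) * dx

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V(x) as a square matrix.
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / dx**2
V = np.zeros(n)                        # free particle; add a barrier here to see tunnelling
H = -0.5 * lap + np.diag(V)

# Initial Gaussian wave packet (a column vector on the grid) with mean momentum k0.
sigma, k0 = 1.0, 2.0
psi = np.exp(-x**2 / (2 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Crank-Nicolson propagator: (1 + i*dt*H/2) psi_new = (1 - i*dt*H/2) psi_old.
U = np.linalg.solve(np.eye(n) + 0.5j * dt * H, np.eye(n) - 0.5j * dt * H)
for _ in range(200):
    psi = U @ psi

# The packet spreads: the width of |psi|^2 has grown relative to its initial value.
mean_x = np.sum(x * np.abs(psi)**2) * dx
width = np.sqrt(np.sum((x - mean_x)**2 * np.abs(psi)**2) * dx)
print(width)
```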
Polynuclear hydroxy-Al cations were prepared by partially neutralizing dilute solutions of aluminum chloride. These cations were introduced in the interlayer space of montmorillonite by cation exchange, which formed heat-stable pillars between the silicate layers. Polynuclear hydroxy-Al was preferentially adsorbed on montmorillonite compared with monomer-Al; the maximum amount adsorbed was ∼400 meq/100 g of montmorillonite. Of this amount 320 meq was non-exchangeable. The 001 X-ray powder diffraction reflection of the polynuclear hydroxy-Al-montmorillonite complex was at 27 Å, with four additional higher-order basal reflections, giving an average d(001) value of 28.4 Å. This complex was thermally stable to 700°C. An analysis of the basal reflections by the Fourier transform method indicated that the 28-Å complex had a relatively regular interstratified structure of 9.6- and 18.9-Å component layers with a mixing ratio of 0.46:0.54. This ratio implies that the hydroxy-Al pillars occupied every second layer. Considering the relatively small amount of Al adsorbed and the thermally stable nature of the structure, the hydroxy-Al pillars must have been sparsely but homogeneously distributed in the interlayer space.
Synthetic aluminous hematites and goethites have been examined by Fourier-transform infrared spectroscopy. For aluminous hematites prepared at 950°C a linear relationship exists between Al content and the location of the band near 470 cm−1, up to 10 mole % Al substitution which is shown to be the solubility limit. The spectra of aluminous goethites prepared in two different ways are qualitatively similar to each other, but differ as to the relationship between the position of the band near 900 cm−1 and the Al content. The spectra of the two series of hematites produced by calcining the goethites at 590°C also show a strong dependence of band position and intensity on the goethite preparative method.
The notion of the indicator of an analytic function, which describes the function’s growth along rays, was introduced by Phragmén and Lindelöf. Trigonometric convexity is a defining property of the indicator. For the multivariate case, an analogous property of trigonometric convexity was not previously known. We prove the property of trigonometric convexity for the indicator of multivariate analytic functions, introduced by Ivanov. The results that we obtain are sharp. An important step of our proof is the derivation of a multidimensional analogue of the inverse Fourier transform in a sector and estimates on its decay.
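For reference, the classical one-variable definitions read as follows; this formulation is the standard Phragmén–Lindelöf one and is not quoted from the article.

```latex
% Indicator of an entire function f of exponential type (one variable):
\[
  h_f(\theta) \;=\; \limsup_{r \to \infty} \frac{\log\bigl|f\bigl(re^{i\theta}\bigr)\bigr|}{r}.
\]
% Trigonometric convexity: for \theta_1 < \theta_2 < \theta_3 with \theta_3 - \theta_1 < \pi,
\[
  h_f(\theta_2)\,\sin(\theta_3 - \theta_1)
  \;\le\; h_f(\theta_1)\,\sin(\theta_3 - \theta_2) + h_f(\theta_3)\,\sin(\theta_2 - \theta_1).
\]
```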
Halloysite is used for targeted delivery of drugs and other biomolecules. This has prompted renewed interest in examination by X-ray diffraction (XRD) to predict the size of particles that can be loaded onto the nanotubes. Anhydrous halloysite consists of spiraled tubules whose length and diameter can be determined by measurement with an electron microscope. In spite of ample evidence regarding the spiral structure of halloysite, current programs for evaluating the structure of halloysite nanotubes treat it as a hollow tube or cylinder, which prevents accurate prediction of its structure and leads to misinformation about the sizes of materials that can be loaded onto the nanotubes. The overall objective of the current study was to derive equations for estimating the structure of halloysite nanotubes that take its spiral structure into consideration. Study of the Fourier transform, by either electron diffraction or XRD, led to measurement of the spiral thickness and the nature of the spiral. Calculations of the nanotube dimensions may determine the ability of these carriers to allow the mechanical delivery of certain drugs. Here the structures of hydrated halloysite (hollow cylindrical tubes with a doughnut-like cross-section) and anhydrous halloysite (spiraled or helical structure) are described as previously reported in the literature. The Fourier transform of the spiraled structure was examined for three different kinds of spirals: the Archimedean spiral, the power spiral, and the logarithmic spiral. Programs used to define the crystal structure of materials and to calculate the Fourier transform need to take the spiral structure into consideration.
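For reference, the three spiral families can be written in the standard polar forms below; the symbols a, b, and n are generic shape parameters, not values from the study.

```latex
% Standard polar-form parametrizations (a, b, n > 0 are shape parameters):
\[
  r(\theta) = a\,\theta \quad\text{(Archimedean spiral)}, \qquad
  r(\theta) = a\,\theta^{n} \quad\text{(power spiral)}, \qquad
  r(\theta) = a\,e^{b\theta} \quad\text{(logarithmic spiral)}.
\]
```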
In this article, we study the recent development of the qualitative uncertainty principle on certain Lie groups. In particular, we consider that if the Weyl transform on certain step-two nilpotent Lie groups is of finite rank, then the function has to be zero almost everywhere as long as the nonvanishing set for the function has finite measure. Further, we consider that if the Weyl transform of each Fourier–Wigner piece of a suitable function on the Heisenberg motion group is of finite rank, then the function has to be zero almost everywhere whenever the nonvanishing set for each Fourier–Wigner piece has finite measure.
Between the 1970s and the 2000s, more than 180 studies of the elastic thickness of the lithosphere were published. The results of these studies have provided a wealth of new information on the long-term mechanical properties of the lithosphere and their relationship to plate and load age. Although the results of individual studies are subject to uncertainties, the analysis of large, global data sets tends to ‘smooth’ out local discrepancies and, hence, to make it more likely that they will reveal the main features that describe the long-term behaviour of the lithosphere.
Oceanic and continental flexure studies suggest that the long-term behaviour of the lithosphere can be modelled, to first order, as a thin elastic plate that overlies an inviscid fluid. The thickness of the elastic plate, Te, varies both spatially and temporally, and this has provided information on the relationship between load and plate age.
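A concrete form of this model is the standard thin elastic plate (flexure) equation; the notation below is the conventional textbook one rather than that of this chapter.

```latex
% Flexure of a thin elastic plate overlying an inviscid fluid (standard form):
\[
  D\,\nabla^{4} w(x,y) \;+\; (\rho_m - \rho_{\mathrm{infill}})\,g\,w(x,y) \;=\; q(x,y),
  \qquad
  D \;=\; \frac{E\,T_e^{3}}{12\,(1-\nu^{2})},
\]
% where w is the plate deflection, q the applied load, D the flexural rigidity,
% E Young's modulus, nu Poisson's ratio, and Te the elastic thickness.
```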
This chapter is a collection of facts, ideas, and techniques regarding the analysis of boundary value, initial value, and initial boundary value problems for partial differential equations. We begin by deriving some of the representative equations of mathematical physics, which then give rise to the classification of linear, second-order, constant-coefficient partial differential equations into elliptic, parabolic, and hyperbolic equations. For each of these classes we then discuss the main ideas behind the problems associated with them and the existence of solutions, both classical and weak.
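The classification rests on a discriminant condition; the familiar two-variable, constant-coefficient statement is recalled below as a reminder rather than as an excerpt from the chapter.

```latex
% Second-order, constant-coefficient equation in two variables:
\[
  a\,u_{xx} + 2b\,u_{xy} + c\,u_{yy} + \text{(lower-order terms)} = 0,
\]
% classified by the sign of the discriminant b^2 - ac:
\[
  b^{2} - ac < 0 \;\text{(elliptic)}, \qquad
  b^{2} - ac = 0 \;\text{(parabolic)}, \qquad
  b^{2} - ac > 0 \;\text{(hyperbolic)}.
\]
```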
We learn about unbound states and find that the energies are no longer quantized. We learn about momentum eigenstates and superposing momentum eigenstates in a wave packet. We apply unbound states to the problem of scattering from potential wells and barriers in one dimension.
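The superposition described here can be written out explicitly; the normalization convention below is one common choice and is not necessarily the one used in the book.

```latex
% Momentum eigenstates and a free-particle wave packet built from them (one dimension):
\[
  \psi_p(x) \;=\; \frac{1}{\sqrt{2\pi\hbar}}\, e^{\,i p x/\hbar},
  \qquad
  \Psi(x,t) \;=\; \int_{-\infty}^{\infty} \phi(p)\,
      \frac{1}{\sqrt{2\pi\hbar}}\, e^{\,i\left(p x - \frac{p^{2}}{2m}\,t\right)/\hbar}\, dp .
\]
```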
This article is the second within a three-part series on Fourier ptychography, which is a computational microscopy technique for high-resolution, large field-of-view imaging. While the first article laid out the basics of Fourier ptychography, this second part sheds light on its algorithmic ingredients. We present a non-technical discussion of phase retrieval, which allows for the synthesis of high-resolution images from a sequence of low-resolution raw data. Fourier ptychographic phase retrieval can be carried out on standard, widefield microscopy platforms with the simple addition of a low-cost LED array, thus offering a convenient alternative to other phase-sensitive techniques that require more elaborate hardware such as differential interference contrast and digital holography.
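Phase retrieval, in its simplest form, alternates between known constraints in the object and Fourier domains. The toy Gerchberg–Saxton loop below illustrates only that basic idea; it is far simpler than the Fourier ptychographic reconstruction discussed in the article, and the synthetic data and parameters are assumptions.

```python
import numpy as np

# Toy Gerchberg-Saxton phase retrieval on synthetic data: recover a phase map
# from amplitude measurements in the object plane and in the Fourier plane.
# This shows only the basic alternating-projection idea, not Fourier ptychography.
rng = np.random.default_rng(0)
n = 64
yy, xx = np.mgrid[0:n, 0:n]

true_phase = 0.6 * np.pi * np.sin(2 * np.pi * xx / n) * np.cos(2 * np.pi * yy / n)
obj_amp = 1.0 + 0.5 * np.exp(-((xx - n / 2)**2 + (yy - n / 2)**2) / 100.0)
field = obj_amp * np.exp(1j * true_phase)
fourier_amp = np.abs(np.fft.fft2(field))          # the "measured" Fourier-plane amplitudes

# Start from a random phase guess and alternate between the two amplitude constraints.
estimate = obj_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, (n, n)))
for _ in range(200):
    F = np.fft.fft2(estimate)
    F = fourier_amp * np.exp(1j * np.angle(F))    # impose the Fourier-plane amplitude
    estimate = np.fft.ifft2(F)
    estimate = obj_amp * np.exp(1j * np.angle(estimate))   # impose the object-plane amplitude

# The mismatch in the reproduced Fourier amplitudes shrinks as the loop runs.
err = np.linalg.norm(np.abs(np.fft.fft2(estimate)) - fourier_amp) / np.linalg.norm(fourier_amp)
print(err)
```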
This chapter discusses the transition from the Fourier series to the Fourier transform, which is the tool for spectrum analysis. Generally, the use of linearly independent base functions allows a wide range of linear regression models that work in a least squares sense, such that the total squared error is minimized in finding the coefficients of the base functions. A special case uses sinusoidal functions based on a fundamental frequency and all its harmonics up to infinity. This leads to the Fourier series for periodic functions. In this chapter, we start from the original Fourier series expression and convert the sinusoidal base functions to exponential functions. We then consider the limit as the length of the function and the period of the original function approach infinity (so that the fundamental frequency approaches 0, thereby including aperiodic functions), leading to the Fourier integral and the Fourier transform. We then define the inverse Fourier transform and establish the relationship between the coefficients of the Fourier series and the discrete-form Fourier transform. All of this prepares for the fast Fourier transform (FFT), an efficient algorithm for computing the discrete Fourier transform that is widely used in data analysis for oceanography and other applications.
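The relationship between Fourier series coefficients and the discrete Fourier transform is easy to verify numerically: for a band-limited periodic signal sampled N times over one period, the DFT divided by N reproduces the complex Fourier coefficients. The short check below uses an assumed test signal and is not taken from the chapter.

```python
import numpy as np

# Relationship between complex Fourier series coefficients and the DFT:
# for a band-limited periodic signal sampled N times over one period T,
# c_k is reproduced (up to rounding error) by FFT(samples)[k] / N.
T, N = 2.0, 64
t = np.arange(N) * T / N
# Assumed test signal: 3*cos gives c_1 = c_{-1} = 1.5; 2*sin gives c_3 = -1j, c_{-3} = +1j.
f = 3.0 * np.cos(2 * np.pi * t / T) + 2.0 * np.sin(3 * 2 * np.pi * t / T)

c = np.fft.fft(f) / N
print(c[1], c[-1])    # both approximately 1.5
print(c[3], c[-3])    # approximately -1j and +1j
```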
Fourier ptychography is an emerging computational microscopy technique that can generate gigapixel-scale images of biological samples. With only the addition of a low-cost LED array to a standard digital microscope and a reconstruction algorithm, Fourier ptychography overcomes the fundamental trade-off between a microscope's resolution and field-of-view without any moving parts. This article is the first in a three-part series that aims to introduce the fundamentals of the technology to the broader microscopy community and beyond, using intuitive explanations.
We develop a new analytical solution for three-dimensional atmospheric pollutant dispersion. The main idea is to subdivide the planetary boundary layer vertically into sub-layers, where the wind speed and eddy diffusivity assume average values for each sub-layer. The model is assessed and validated using data obtained from the Copenhagen diffusion and Prairie Grass experiments. Our findings show good agreement between the predicted and observed crosswind-integrated concentrations. Moreover, the calculated statistical indices are within the range of acceptable model performance.
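For context, analytical dispersion models of this kind typically solve the steady-state, crosswind-integrated advection-diffusion equation; the generic form below uses notation chosen here, not the article's.

```latex
% Steady-state, crosswind-integrated advection-diffusion equation:
\[
  u(z)\,\frac{\partial \bar{c}(x,z)}{\partial x}
  \;=\;
  \frac{\partial}{\partial z}\!\left( K_z(z)\,\frac{\partial \bar{c}(x,z)}{\partial z} \right),
\]
% where u(z) is the wind speed, K_z(z) the vertical eddy diffusivity, and \bar{c} the
% crosswind-integrated concentration; a layered model assigns constant average values
% of u and K_z within each vertical sub-layer.
```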
Based on our preceding discussions of atomic-resolution characterization techniques in Chapter 4, we conclude that no technique has yet achieved ASAT. Combining information from FIM or (S)TEM with APT has demonstrated some very promising results, and each combination seems to be a likely path toward ASAT. In this chapter, we propose how ASAT might be achieved using correlative and/or combined techniques such as (S)TEM + APT. Such a combination would allow several routes for determining the ion transfer function, that is, how imaging occurs during an APT experiment. We argue that, if the transfer function can be determined with high enough fidelity, it should be possible to achieve ASAT using a combination of (S)TEM and APT with inputs from simulations.
This chapter provides an introduction to the notion of physical dimension and to the specific notations used in physics, as well as a brief review of some basic mathematics: an introduction to informal distribution theory, the delta function, and the Fourier transform.