
Impact of spatial aliasing on sea-ice thickness measurements

Published online by Cambridge University Press:  26 July 2017

Cathleen Geiger
Affiliation:
Geography, University of Delaware, Newark, DE, USA E-mail: cgeiger@udel.edu Electrical and Computer Engineering, University of Delaware, Newark, DE, USA
Hans-Reinhard Müller
Affiliation:
Physics and Astronomy, Dartmouth College, Hanover, NH, USA
Jesse P. Samluk
Affiliation:
Electrical and Computer Engineering, University of Delaware, Newark, DE, USA
E. Rachel Bernstein
Affiliation:
Geography, University of Delaware, Newark, DE, USA E-mail: cgeiger@udel.edu
Jacqueline Richter-Menge
Affiliation:
Terrestrial and Cryospheric Sciences, US Army Cold Regions Research and Engineering Laboratory, Hanover, NH, USA

Abstract

We explore spatial aliasing of non-Gaussian distributions of sea-ice thickness. Using a heuristic model and >1000 measurements, we show how different instrument footprint sizes and shapes can cluster thickness distributions into artificial modes, thereby distorting the frequency distribution and making it difficult to compare and communicate information across spatial scales. This problem has not been dealt with systematically in sea ice until now, largely because it appears to incur no significant change in integrated thickness, which often serves as a volume proxy. Concomitantly, demands are increasing for thickness distribution as a resource for modeling, monitoring and forecasting air–sea fluxes and growing human infrastructure needs in a changing polar environment. New demands include the characterization of uncertainties both regionally and seasonally for spaceborne, airborne, in situ and underwater measurements. To serve these growing needs, we quantify the impact of spatial aliasing by computing resolution error (Er) over a range of horizontal scales (x) from 5 to 500 m. Results are summarized through a power law (Er = bx^m) with distinct exponents (m) from 0.3 to 0.5 using example mathematical functions including Gaussian, inverse linear and running mean filters. Recommendations and visualizations are provided to encourage discussion, new data acquisitions, analysis methods and metadata formats.

Type
Research Article
Copyright
Copyright © The Author(s) 2015

Introduction

The target area an instrument measures is called an instrument footprint. This area must be considered relative to the size of physiographic features being measured. In the case of sea ice, dominant features are ridges. Ridges are deformed ice features created through mechanical deformation (kinematic) events that pile masses of ice floes together as long, narrow (linear) features only meters wide along their narrowest axis. Ridges are several meters thicker than the surrounding level ice, which is grown thermodynamically through the more uniform process of freezing sea water. Ridges are important because they contain a disproportionately larger amount of sea-ice volume per unit surface area. Consequently, volume is the primary variable scientists need for monitoring the mass balance of sea ice in the context of planetary thermal stability (IPCC, 2013).

Electromagnetic induction (EM) devices are currently the most accessible instruments for scientists measuring sea-ice thickness and mapping its features. EM devices assume an instrument footprint from a roughly conical beam (illumination) proportional to 3.7 times the flying altitude (Reid and Vrbancich, 2004), i.e. footprint increases radially with distance. EM systems have a long-standing reputation for measuring level sea-ice thickness to within 10% accuracy (e.g. Kovacs and Mellor, 1971; Kovacs, 1975; McNeill, 1980). Unfortunately, relative errors of 40–60% are commonly reported near deformed sea ice when measured from airborne EM systems relative to ground surveys (e.g. Reid and others, 2006; Pfaffling and others, 2007). This error is not unique to EM instruments. It is a problem of footprint size – a problem of resolution error and therefore an issue of scale.

Growing uncertainty due to scale is undesired because such a problem impacts efforts to develop integrated observing systems from multiple platforms to monitor sea ice. The specific scaling problem just referenced involves the smoothing of narrow, deep features into wider, shallower features with larger instrument footprints smoothing more than smaller footprint instruments. This is problematic for snow and ice thickness because the thickness of these features is not normally distributed (i.e. non-Gaussian). Mathematically, we know that averaging computes the mean, but distorts skew, median and mode information of any measurement which is not normally distributed. Hence, any measurement system or processing method that averages part or all of a non-Gaussian distribution may not adequately capture a physical thickness and may thereby distort true thickness distribution.

This scaling problem has long existed for sea ice, but is more relevant today given large changes in the Arctic that require increased accuracy of thickness and its distribution (e.g. SEARCH Project Office, 2008; SCICEX Science Advisory Committee, 2010; Wadhams and others, 2011; IPCC, 2013). Better projections and predictions of sea ice (e.g. Zhang and others, 2008; Hunke, 2010; Holland and others, 2011; Schweiger and others, 2011) demand better modeled thickness distribution parameterizations (Bitz and others, 2001). These demands need to be validated and supported by more accurate measurements. Resolution error has long been identified in upward-looking sonar (ULS) draft measurements, albeit under the more specific ULS problem known as beamwidth error (e.g. Wadhams and Davy, 1986). It has also long been known that ice topography changes the shape of a returning footprint waveform as a function of backscatter and incidence angle (e.g. Fetterer and others, 1992). However, it has been years since this fundamental measurement topic was re-examined, particularly in a form that communicates clearly in interdisciplinary discourse.

From first principles, large-scale estimates of sea-ice thickness are impacted by ubiquitous small-scale processes (e.g. Hopkins and others, 2004; Wadhams and Doble, 2008; Doble and others, 2011; Geiger and others, 2011; Thomas and others, 2011). As an example, small-scale deformed ice features play a central role in air–sea momentum transfer (Banke and others, 1980), with Andreas (2011) showing a strong coupling between physical and aerodynamic roughness of snow and ice. For clarity, physical roughness includes those features through which momentum is transferred. According to Andreas (2011), momentum-transferring features are 12.6 m and smaller (i.e. the width of ridge sails and keels, and deformed ice blocks), with 0.5–1 m sampling intervals recommended along survey lines that are at least 255 m long. These recommendations ensure sufficient sampling for rendering geometric shapes relevant to volume estimates. Currently, such measurement practices are not standardized.

Another relevant finding comes from a new full-physics, finite-volume, small-scale electromagnetic geophysical model, in which the instrument footprint is found to be more variable than previously assumed. Findings in Samluk and others (2015) show how sea-ice conductivity impacts both electromagnetic penetration (so-called skin depth) in vertical extent and, more importantly, the lateral skin depth or footprint size of returned secondary eddy currents. More notably, at large scales, Bernstein and others (2015) show that regionally integrated thickness exceeds area-weighted average thickness due to a skew towards thick and deformed ice. Using Southern Ocean thickness proxy archives from ice charts, as few as five bins in thickness distribution already make a substantial difference in sea-ice volume estimates. Bernstein and others (2015) found that volume from integrated thickness exceeds propagated averages by as much as 60%. Such results occur when strong bimodal summer ice is distributed between thinning seasonal ice and thicker surviving ice. This problem amplifies as measurements are propagated through multiple resolution changes. In short, resolution errors grow and modify data records each time data are interpolated to a new grid. When unchecked, such problems make it difficult to compare results, as resolution errors change thickness distribution between measurements, archives, reanalyses and model inputs by way of simple interpolation onto a new grid or other smoothing processes.

Turning this problem on its head, we consider here the hypothesis that resolution error can be leveraged as a tool to quantify and improve the accuracy of snow and sea-ice thickness, distribution, and variability. Questions we pose in this paper are: (1) What is the underlying cause of the problem? (2) How much distortion is incurred? (3) Is there a way to quantify distortions as a function of scale? (4) What is the impact on climate data records (CDR) and stakeholders of community datasets? Most importantly, (5) how can we use this knowledge to improve data synthesis capabilities? We address question (1) by examining the underlying cause of resolution error using a heuristic model. We apply our heuristic model to a sample dataset to answer question (2) and devise a power law to relate errors between scales to address (3). Questions (4) and (5) are discussion points relevant to model prediction, measurement strategies and outlining of new steps forward.

Heuristic Model

Resolution error is difficult to validate with existing coincident sea-ice datasets of drifting pack ice because geolocation errors are still too large for conclusive certainty (personal communication from C. Haas, 2012). Hence, for this study, we defer to a simple heuristic experiment to explain general characteristics from first principles. We begin with an idealized model (Fig. 1) using a two-dimensional (2-D) triangular ridge described with both discrete and statistical representations of thickness. We build this model from earlier work (Worby and others, 2008) with center draft of 1 and area of 1 (dimensionless units) and call our initial ridge shape Case 0: High Resolution. Next, we consider an idealized running mean of length 3 and call this Case 1: Low Resolution. From a spatial context (Fig. 1a), we see that the volume (a 2-D area in this case) is conserved as an integrated value between Cases 0 and 1. However, in both spatial and frequency domains (Fig. 1), thickness distribution between cases is quite different. In both frames of reference, a bimodal distribution of thin and thick ice in Case 0 becomes a single, averaged mode of intermediate thickness in Case 1. This behavior is analogous to aliasing in time series (e.g. Emery and Thomson, 2001), so we identify the problem here as spatial aliasing. As with temporal aliasing (e.g. Geiger and Drinkwater, 2005), it quickly becomes difficult to compare, let alone combine, measurements into a larger archive for modeling and remote sensing when one dataset is aliased and one is not, or two datasets are aliased to different degrees. The underlying cause is the smoothing of non-Gaussian data, with bimodal cases producing the most extreme effects.

Fig. 1. Heuristic model of spatial aliasing. An idealized triangular ridge (a) with normalized units is well represented by discrete points (solid blue) when simply connected by line segments, or discrete area rectangles (dashed blue) when interpreted as a piecewise constant function. Both solutions conserve volume and thickness distribution. When smoothed by an example running-mean filter of length 3, the feature changes shape, with discrete points (solid red) and discrete area (dashed red) still conserving volume but no longer conserving thickness distribution. The impact is most pronounced on thickness distribution in the frequency domain (b) when the distribution is bimodal. The underlying cause of thickness distortion is loss of bimodal structure due to averaging of a non-Gaussian feature.
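The heuristic smoothing step can be sketched numerically. A minimal Python version follows, assuming the idealized ridge is reduced to the discrete profile [0, 0, 1, 0, 0] on a unit grid (an assumed discretization of Case 0):

```python
import numpy as np

# Case 0 (high resolution): idealized ridge with center draft 1.
z0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0])

# Case 1 (low resolution): running mean of length 3, same grid.
z1 = np.convolve(z0, np.ones(3) / 3, mode="same")

# Volume (the 2-D integrated area) is conserved between cases ...
volume_conserved = abs(z0.sum() - z1.sum()) < 1e-9

# ... but the thickness distribution is not: the bimodal Case 0
# (thin ice plus one thick point) collapses into a single
# intermediate mode of 1/3.
modes_case0 = np.unique(z0)               # two modes: 0 and 1
modes_case1 = np.unique(np.round(z1, 3))  # modes: 0 and 0.333
```

This is exactly the behavior of Figure 1: the integrated value survives the filter while the bimodal structure does not.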

This heuristic exercise leads to our second question: What is the size of these differences, preferably in the context of scale? Differences between two data series are traditionally summed together in quadrature for a measure of disagreement (e.g. Geiger, 2006). Mathematically, this is equivalent to interpreting each data series as a vector and subtracting one vector from the other in the form of a Euclidean distance, or L2 norm, with the superscript ‘2’ denoting the exponent applied to each term in the summation. The measure of disagreement is often made independent of sample size by dividing the sum by the number of points. For the heuristic example (Fig. 1a), normalized points X = −1 and X = 1 yield a difference of 1/3 in Case 1 relative to Case 0, while the center incurs a difference of −2/3. When these terms are summed in quadrature, the result is ±0.27 from using a formalism presented later.

Unfortunately, it is difficult to interpret a normalized error (disagreement) of ±0.27 (i.e. ∼±1/4) from an intuitive perspective (Fig. 1). The individual errors are never that small, even though the formula itself is essentially identical to a standard deviation calculation. The intuitive disconnect arises because this problem is a measure of a changing perimeter rather than a statistical length.

Our toy model (Fig. 1) suggests that while volume is conserved, perimeter is changing between Case 0 and Case 1, with the profile shrinking in one dimension but growing in the other. We therefore consider here the alternative L1 norm (e.g. Black, 2006; Donoho, 2006), which essentially sums the absolute differences (‘TaxiCab distance’). In the heuristic example, the L1 normalized error is ±(|1/3| + |−2/3| + |1/3|)/3 = ±4/9 = ±0.44. This numerical value of average disagreement makes a more intuitive connection to typical errors measured (e.g. one location is losing 2/3 while two locations are gaining 1/3, with an average error somewhere between and closer to the smaller repeated values). Thought processes like these are the rationale behind works by Willmott and Johnson (2005) and Stampone and others (2012), where an emphasis on absolute-value error provides an understanding for area-based problems along a 2-D geographic surface.
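Both disagreement measures can be checked directly on the three heuristic differences. A short sketch, using the per-point normalization described above:

```python
import numpy as np

# Differences between Case 1 and Case 0 at X = -1, 0, +1 (from the text).
d = np.array([1/3, -2/3, 1/3])
N = d.size

l2 = np.sqrt(np.sum(d**2)) / N    # Euclidean (L2) distance per point
l1 = np.sum(np.abs(d)) / N        # absolute ('TaxiCab', L1) distance per point

print(round(l2, 2))  # 0.27
print(round(l1, 2))  # 0.44
```

The L1 value of 0.44 sits between the individual errors of 1/3 and 2/3, which is the intuitive behavior the text argues for.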

Generalizing the principles above, we devise a two-step algorithm to first create a lower-resolution result (zn,L) that retains the original high sampling frequency, and then measure the resolution difference (error) between this lower-resolution product and its higher-resolution source (zn):

(1)    zn,L = (Σj wj zn+j) / (Σj wj),  j = −J/2, . . ., J/2;    Er(L) = (1/N) Σn |zn,L − zn|,  n = 1, . . ., N

Here wj represents a set of weighting coefficients for j = −J/2 to J/2. The index range J is associated with the length scale (L) through L = J Δx, where Δx (m) is the resolution, making J an integer-indexed function of L and of the spacing Δx. We generalize resolution error as a deviation in thickness, taken as a function not only of the absolute-value sums of differences between resolutions but also of the weights involved (i.e. both the sizes and shapes of the filter functions). Here Er expresses an average deviation between a highly resolved signal zn and a smooth signal zn,L, where n denotes the discrete data point and L denotes the scale of the filter. Our toy model (Fig. 1) represents an example of these definitions, where zn and zn,L are represented by blue and red curves, respectively, with J = 3, Δx = 1, L = J, j = [−1, 0, +1] and wj = [1, 1, 1], noting that the integer value of j runs from INT(−J/2) to INT(J/2) (e.g. INT(3/2) = 1). In that example, only one point, X = 0, is investigated such that n = N = 1 (N > 1 is investigated later using an observed data series).

We note for clarity that, if zn is any linear function of n, then Er = 0, provided the wj are symmetric with respect to the center j = 0. The actual value of Er also depends on the weights wj, such that weights can be varied to minimize Er and find an optimal weight shape (unpublished work), also called the filter shape. Furthermore, if data are normally distributed, then resolution error reduces to the mean absolute error. Hence, growth of resolution error occurs in a manner distinct from mean absolute error when skewed information is introduced, either by non-Gaussian distributions of targeted materials or by inclination of the filter shape at an incidence angle relative to a target face (not shown, for brevity).
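The two-step algorithm of Eqn (1) can be sketched in a few lines. This is a minimal version assuming the weighted-average form described above, with the mirror-buffered ends introduced later in the Model Processing section:

```python
import numpy as np

def smooth_and_error(z, w):
    """Step 1: low-resolution product zn,L at the original sampling.
    Step 2: resolution error Er as the mean absolute deviation."""
    w = np.asarray(w, float) / np.sum(w)      # normalized weights wj
    half = len(w) // 2
    zp = np.pad(z, half, mode="reflect")      # mirror-buffer the two ends
    z_L = np.convolve(zp, w, mode="valid")[:z.size]
    Er = np.mean(np.abs(z_L - z))             # L1-norm deviation
    return z_L, Er

# Heuristic example: J = 3, wj = [1, 1, 1] (running mean of length 3).
z = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
z_L, Er = smooth_and_error(z, [1, 1, 1])

# Restricted to the three ridge points, the mean |difference| is
# 4/9 ≈ 0.44, matching the L1-norm value derived in the text.
ridge_error = np.mean(np.abs(z_L - z)[1:4])
```

Over the full five-point profile Er is smaller, because the two flat end points contribute zero deviation; the restriction to the ridge reproduces the heuristic value.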

Building on the understanding from above, we provide answers to question (3) by devising a relationship between resolution error and scale in a generalized sense. We craft our experiment using Eqn (1) together with (1) predefined filter shapes (sets of wj) and (2) a dataset. Because different filters have different shapes which a priori we assume will impact results, we choose a range of representative shapes (Table 1) to assess the impact of both narrow and wide filters. For this study, we choose four symmetric shapes that are commonly seen in models, instrumentation and geophysical studies in general.

Table 1. Symmetric smoothing functions

The first choice (Table 1) is a centered Gaussian, which often describes the shape of a signal from an instrument transmitter/receiver pair (side lobes excluded for simplicity). The second is an inverse linear filter, which is commonly used to interpolate data from one resolution to another in model applications. The remaining two shapes are a tapered Gaussian, essentially a nonlinear curve fit, and a running mean as used in our heuristic model (Table 1; Fig. 2) and commonly applied in many high-data-volume real-time acquisitions for initial data reduction. Below, we apply these filter shapes to real data to explore possible hardware and software responses between input (high resolution) and output (low resolution).

Fig. 2. Filter shapes. Four normalized shapes are mathematically constructed from Gaussian (thick line), inverse linear (dashed line), tapered Gaussian (thin line) and running-average (dotted line) functions. Each function is expanded to needed length scales (L) to filter any measured point relative to neighboring points.

Filter functions (Fig. 2) are provided to represent a wide variety of symmetric filters and their properties. The compact kernels shown are nonzero in the normalized interval [−1, 1], but zero outside this interval. Regions where filter functions are larger than zero are defined in this paper as length scale L for each filter. This is also called the ‘kernel length scale’. Note that a good case can be made for other length definitions. For example, the Gaussian filter appears narrower than the running mean, with the area underneath each curve helping to quantify an ‘intrinsic’ length scale (i.e. the length proportional to the width covered by a fixed area (say 90%) underneath the filter curve). As an example, the running-mean filter has an intrinsic length identical to the kernel length scale. In contrast, the Gaussian achieves 90% of its area within the narrower normalized interval [−0.41, 0.41]. A similar analysis finds an inverse linear filter intrinsic scale of ∼75% of the kernel length scale. While these matters and their impacts are well known to signal-processing specialists, the use of averaging techniques and symmetric filters to smooth instrument signals or post-process data is largely routine in many fields, where smoothing methods are often applied without considering these subtle relationships between a zone of influence (kernel size) and the intrinsic weighting of each kernel shape. Hence, these simple filters are chosen for this paper as a means to communicate issues across a broad interdisciplinary community.
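The intrinsic length scale of the Gaussian filter can be verified numerically. The sketch below assumes a Gaussian truncated to the normalized interval [−1, 1] with sigma = 0.25 (the exact width used in Figure 2 is an assumption here, chosen so ~90% of the area falls near [−0.41, 0.41] as stated above):

```python
import numpy as np

# Discretize the truncated Gaussian kernel on [-1, 1].
x = np.linspace(-1.0, 1.0, 20001)
g = np.exp(-x**2 / (2 * 0.25**2))   # sigma = 0.25 (assumed width)
p = g / g.sum()                      # discrete kernel weights, unit sum

# Upper bound of the smallest symmetric interval holding 90% of the
# area (5% in each tail, so the cumulative sum crosses 0.95 at +a).
cdf = np.cumsum(p)
a = x[np.searchsorted(cdf, 0.95)]

# By contrast, a running mean is uniform, so its intrinsic length
# equals the full kernel length scale.
print(round(a, 2))  # ≈ 0.41
```

The same procedure applied to an inverse linear (triangular) kernel recovers the ∼75% figure quoted in the text.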

Data

Data are retrieved from the SEDNA archive (Sea-ice Experiment: Dynamic Nature of the Arctic; Hutchings and others, 2011), specifically the Geonics EM-31-MK2 (hereafter referred to as EM-31) for sea-ice thickness, and MagnaProbe (Sturm and others, 2006) readings for snow depth, plus drilled holes for calibration. These field measurements were taken in April 2007 in the Beaufort Sea (∼73° N, 147° W). Fieldwork was a collaboration between the SEDNA project (Hutchings and others, 2008) and the European DAMOCLES project (Developing Arctic Modeling and Observing Capabilities for Long-term Environmental Studies; Gascard and others, 2008).

Archive records show that the first half of the experiment (1–7 April 2007) included a thickness survey along the array set out near the ice camp (Fig. 3a). The array consisted of six 1 km long transect legs. During the transect survey, the EM-31 was carried by one person at a steady height in a horizontal orientation (perpendicular to the survey track) with a shoulder strap used to support and maintain a constant reference (zo = 1.00 ± 0.05 m). Distance was paced out at ∼5 m intervals between 25 m survey stakes. Following the EM-31 was the MagnaProbe, carried by a second person who measured the snow thickness where EM-31 readings had just been taken. Following the survey, calibration sites were chosen, with additional EM-31 samples collected with coincident drilled holes, enabling snow depth and ice thickness measurements to centimeter accuracy. At some locations, the EM-31 was held at two different heights (carrying height and ground) plus different orientations to account for local ice features and variability.

Fig. 3. Arctic ice camp survey. (a) Photograph with superimposed lines taken from light-wing aircraft at oblique angle over 1 km long survey legs. Survey samples are taken along each leg every 5 m using coincident EM-31 and MagnaProbe in tandem. Arrow is bearing true north; camp outlined. (b) Calibration results of EM-31 expressed as conductivity measurements based on 52 vertical distance samples collected coincidentally at drillhole sites, with regression analysis summarized in Table 2. Ice types in legend identified as first-year level ice (FY), first-year deformed ice (FYD) and multi-year ice (MY).

Data processing

EM-31 relative conductivity records are calibrated with an exponential fit between a recorded apparent conductivity (mS m−1) and the distance z between the instrument and a highly conductive material (sea water assumed for this study). The conversion equations used here follow Eicken and others (2001) as

(2)    σa(z) = A + B exp(Cz)

where coefficients A, B and C are solved using nonlinear regression. The nonlinear regression routine requires input of a function such as Eqn (2), a series of matched values for apparent conductivity and distance z from coincident measurements at drilled holes, and an initial guess of coefficients. Once coefficients are found, the inverse solution

(3)    z = (1/C) ln[(σa − A)/B]

describes the distance z between instrument and water surface at any site given coefficients and input apparent conductivity value. Sea-ice thickness zi is determined subsequently by

(4)    zi = z − zo − zs

Here zo is the distance between the instrument and the top surface (a mixture of snow and ice) and zs is the snow thickness (MagnaProbe used in this study).

By definition, an exponential relationship varies rapidly relative to its e-folding anchor point. Subsequently, values that are further away from the anchor point are increasingly sensitive to the coefficients chosen via a fitted-curve solution. In this case, deeper sea-ice thickness values are the most sensitive. Concomitantly, effective nonlinear regression techniques provide tight confidence intervals for each computed coefficient given an appropriate number of input samples. Hence, we perturb our calibration dataset into three sample sizes to estimate sensitivity beyond drillhole depths. The first sampling includes all pairings of drillhole data with EM-31 readings, from which we compute an initial set of coefficients (Table 2) and a fitted exponential curve. We then subsample the initial dataset into values which are below and above the initial fit. Each of these two subsets is subsequently subject to nonlinear regression to generate two more unique sets of coefficients, which we call the Low and High solutions (Table 2). These form the boundaries of the gray shading of uncertainty (Fig. 3b): the fitted curves remain tight where data values span the exponential fit, but uncertainty grows beyond the data-availability range as the solution extrapolates. The end result is better visual communication of ice-thickness uncertainty to subsequent users.
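The calibration workflow can be sketched with a standard nonlinear least-squares routine. This is an illustrative version only: the exponential form follows Eqn (2), but the drillhole pairs and coefficients below are synthetic stand-ins, not SEDNA values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Forward model (Eqn (2)): apparent conductivity vs distance to sea water.
def sigma_a(z, A, B, C):
    return A + B * np.exp(C * z)

# Synthetic 'drillhole' calibration pairs (assumed coefficients + noise).
rng = np.random.default_rng(0)
z_true = np.linspace(1.0, 4.0, 52)            # instrument-to-water distance (m)
cond = sigma_a(z_true, 60.0, 1200.0, -1.1)
cond += rng.normal(0.0, 5.0, z_true.size)     # measurement noise

# Nonlinear regression needs the model, matched (z, sigma_a) pairs,
# and an initial guess of the coefficients.
(A, B, C), _ = curve_fit(sigma_a, z_true, cond, p0=(50.0, 1000.0, -1.0))

# Inverse solution (Eqn (3)): distance from a measured conductivity.
def z_from_cond(c, A, B, C):
    return np.log((c - A) / B) / C

round_trip = z_from_cond(sigma_a(2.0, A, B, C), A, B, C)  # recovers 2.0
```

Refitting the below-fit and above-fit subsets in the same way yields the Low and High coefficient sets that bound the shaded uncertainty in Figure 3b.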

Table 2. Summary of EM calibration coefficients

Survey lines are concatenated to form a single long 2-D thickness profile (Fig. 4) with associated uncertainties. We create a profile from 1156 measurements and call the total thickness (snow plus ice thickness) our zn values for n = 1, . . ., 1156. Values are spaced ∼5 m such that Δx = 5 m along the concatenated lines. Three realizations of zn profiles are created using the Low (thin profile), Central Tendency (mean profile) and High (thick profile) calibration coefficients (Table 2).

Fig. 4. Concatenated profile from survey lines. Survey lines sampled at 5 m intervals for ice thickness (using EM-31) and snow depth (using MagnaProbe). All six survey lines are concatenated into one synthetic profile with typical properties listed (MagnaProbe depths also indicated at drill sites). Field measurements such as these are often provided as climate data records (CDR) for modelers, remote-sensing calibration and other applications. Note that uncertainties are provided as gray shadow to communicate uncertainties as in Figure 3b. In this way, we explore propagated uncertainties and their compounding effects with other error sources.

Model Processing

For each of the three realizations just described, we apply each filter (Fig. 2) at increasing filter lengths from L = 10, 20, . . ., 500 m length scales. In this way, we generate sets of smooth solutions zn,L for a range of length scales using the algorithm in Eqn (1) and subsequently solve for Er(L) for each generated profile. To maintain the same number of data points for a growing scale problem, we always start with the original data and buffer the two ends of the concatenated profile with a mirror of end values to the needed lengths. For brevity, we only show solutions for the Central Tendency (Fig. 5).
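The scale sweep just described can be sketched as follows, using a running-mean filter and a synthetic non-Gaussian (lognormal) stand-in for the 1156-point profile, since the SEDNA data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
dx = 5.0                                    # m, sampling interval
z = rng.lognormal(0.5, 0.6, 1156)           # stand-in total thickness (m)

def Er_at_scale(z, L, dx):
    J = max(1, int(round(L / dx)))          # filter length in samples
    w = np.ones(J) / J                      # running-mean weights
    half = J // 2
    zp = np.pad(z, half, mode="reflect")    # mirror-buffer the two ends
    z_L = np.convolve(zp, w, mode="valid")[:z.size]
    return np.mean(np.abs(z_L - z))         # Er from Eqn (1)

scales = np.arange(10, 501, 10)             # L = 10, 20, ..., 500 m
Er = np.array([Er_at_scale(z, L, dx) for L in scales])
# Resolution error grows systematically with scale (cf. Fig. 6).
```

Swapping the running-mean weights for the Gaussian, inverse linear or tapered Gaussian shapes of Figure 2 produces the other three filter families.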

Fig. 5. Impact of instrument footprint. Using mathematical functions (Fig. 2) to simulate instrument footprints of different sizes and shapes, we show how width and depth of narrow features are widened and flattened spatially (a–d). In frequency space (e–h, respectively), observed (black line) frequency distributions (FD) develop artificial modes which grow with scale and exceed white-noise levels in wider filter cases, especially in (g, h). While volume and mean thickness are conserved in all cases (inset cumulative frequency distribution (CDF) shown for L = 5, 250, 500 m; e–h), thickness distribution and the thickest ice types are altered considerably, as noted by artificial peaks and loss of ice at the 10 m bin, respectively.

Results

Results (Fig. 5) show the Central Tendency profile for all four filters over all scales at 10 m increments. Four filter solutions are arranged from narrowest (top panels) to widest (bottom panels) shapes. We see a direct relationship between increasing filter width and increasing resolution error. Beside each spatial profile, we show the corresponding thickness distribution at only three scales (original measurements, 250 m, and the largest at 500 m) in frequency space to avoid confusion among the many growing distribution peaks at different scales. Deviations from the original distribution are aliased artifacts which appear in two forms: (1) loss of the thickest ice and (2) development of additional modes; neither deviation represents any real sea-ice features. For a sense of significant clustering, a white-noise level is shown. White noise is calculated as a constant power across all frequencies (i.e. as the inverse of bin number, 1/B). Here 51 bins (i.e. B = 51) are used with 0.2 m bin intervals, which equates to a white-noise level of 2%. In other words, when new aliased peaks differ from the original signal by more than white noise, there is a significant difference at that frequency bin relative to the original signal. By mixing shapes and sizes of filters, one can create a new smooth thickness distribution which no longer matches the original high-resolution signal (not shown, for brevity) but still conserves volume. The most telling detail is the loss of the thickest ice.
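The white-noise significance test can be sketched as a bin-by-bin comparison of frequency distributions. The profile below is again a synthetic non-Gaussian stand-in; bin sizes follow the text (B = 51 bins of 0.2 m, white-noise level 1/B ≈ 2%):

```python
import numpy as np

B = 51
edges = np.arange(0.0, 10.3, 0.2)    # 52 edges -> B = 51 bins of 0.2 m
white_noise = 1.0 / B                # ≈ 0.02, the 2% significance level

rng = np.random.default_rng(2)
z = rng.lognormal(0.5, 0.6, 1156)    # stand-in thickness profile (m)
# Wide running mean (51 samples ≈ 255 m at 5 m spacing).
z_L = np.convolve(np.pad(z, 25, mode="reflect"),
                  np.ones(51) / 51, mode="valid")[:z.size]

fd0 = np.histogram(z, bins=edges)[0] / z.size    # original FD
fd1 = np.histogram(z_L, bins=edges)[0] / z.size  # smoothed FD

# Bins where the smoothed FD departs from the original by more than
# the white-noise level mark significant aliasing at that thickness.
significant = np.abs(fd1 - fd0) > white_noise
```

On the real profiles, the significant bins are exactly the artificial peaks and the emptied thickest-ice bins visible in Figure 5e–h.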

To summarize results from all realizations, we plot resolution error (Er) as a function of scale in a log–log relationship (Fig. 6). These results are sufficiently uniform to fit solutions with regression analysis. For clarity, we introduce the dimensionless variable x = L/L0 and set L0 = 1 m to align our mathematical model with the 1 m length Andreas (2011) recommends. Using the well-recognized form Y = mX + B, we define Y = log(Er/L0), X = log(x) and B = log(b/L0) to solve for m and b as exponent and amplitude in the relationship Er(L) = bx^m (Tables 3 and 4).
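The log–log regression reduces to an ordinary linear fit. A minimal sketch, using an assumed exponent and amplitude within the ranges reported in Tables 3 and 4 rather than the actual fitted values:

```python
import numpy as np

# Synthetic Er(L) obeying an exact power law (b and m are assumed
# illustrative values, not the paper's fitted coefficients).
L0 = 1.0
scales = np.arange(10.0, 501.0, 10.0)          # L in m
b_true, m_true = 0.05, 0.4
Er = b_true * (scales / L0) ** m_true

# Linear regression in log-log space: Y = mX + B with B = log10(b).
X = np.log10(scales / L0)
Y = np.log10(Er)
m, B_int = np.polyfit(X, Y, 1)                 # slope m, intercept B
b = 10 ** B_int

print(round(m, 3), round(b, 3))                # recovers 0.4 and 0.05
```

On the real profiles the scatter is nonzero, which is where the 95% confidence intervals of Tables 3 and 4 come from.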

Table 3. Power-law exponent (m) and 95% confidence interval

Table 4. Power-law amplitude (b) from intercept B= log(b) and 95% confidence interval

Example Applications

Two visual perspectives are provided as example applications. The first tests for spatial aliasing of a low-resolution instrument which oversamples to collect data coincident with a high-resolution instrument; oversampling is essential for this to work. Using the dots (Fig. 6) as a guide, high-resolution data are filtered to a length scale matching the low-resolution instrument using the footprint shape and size of the low-resolution instrument. If the filtered result (purple dot in Fig. 6) yields the same thickness distribution as the low-resolution instrument, then both instruments see the same thing at low resolution. Equation (1) estimates the error introduced to the higher-resolution instrument and, more importantly, the increase in error at any scale for instruments with similar footprint shapes (Figs 5a–d and 6). Such an application is invaluable when analyzing new prototype airborne instruments intended for spaceborne missions.

Fig. 6. Systematic increase in resolution error as a function of scale. Growing resolution errors (Er) are shown based on four filter shapes (Fig. 2), each applied to the Central Tendency calibrated profiles (Figs 3b and 4). Log–log slopes are used to estimate the power-law fit Er = bx^m for length scale x, with fit parameters listed (Tables 3 and 4). Each slope and intercept pair is significantly distinct at the 95% confidence interval, thereby providing a predictable trend of growing resolution error as a function related to instrument waveform response, footprint shape and size, and/or post-process smoothing algorithms. Colored dots are used as an example application to demonstrate how a high-resolution instrument (blue) can be used to test a low-resolution instrument (red) for aliasing by filtering the high-resolution data using the footprint characteristics of the low-resolution instrument (purple).

The second example shows how resolution error grows in a non-Gaussian distribution. Changes at each point (dZn = zn,L − zn) are accumulated into 0.2 m bins as percentages for each scale, with the histogram changing as a function of length scale (L) and filter shape (Fig. 7). Through such visualization, ± values of Er are intuitively related to high concentrations of thickness differences dZn. From such a perspective, we see power-law increases of Er growing more slowly than the overall distribution of differences, and a general trend of under-reported thicknesses (i.e. more dZn < 0) with growing footprint size. Hence, while Er represents many values within a Central Tendency, the actual distribution of error grows much faster at the extreme ends, where differences really matter in terms of heat fluxes (at the thin end) and human infrastructure (at the thick end). Non-systematic distribution patterns are present across increasing scales, which suggests that spatial aliasing of sea-ice thickness is difficult to anti-alias using techniques such as those found in Smith and others (2000). However, this form of analysis is invaluable for tracking and understanding changes in non-Gaussian properties, especially for cases where instruments are collecting smoothed returns for new ground-to-airborne-to-spaceborne systems and their comparisons. Such assessment provides interesting relationships between an emitted footprint shape and the return shape after interaction with rough topography. In this regard, Er is an effective tool for characterizing upscaling processes.
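The binning of pointwise differences described above can be sketched as follows; the profile and 3-point running mean are toy examples, not the field data:

```python
import numpy as np

def dz_distribution(z, z_L, bin_width=0.2):
    """Percent of points in each bin of dZ_n = z_{n,L} - z_n."""
    dz = z_L - z
    edges = np.arange(-5.0, 5.0 + bin_width, bin_width)
    counts, _ = np.histogram(dz, bins=edges)
    return edges[:-1], 100.0 * counts / dz.size

# Toy profile: level ice with one ridge, filtered by a 3-point running mean
z = np.array([1.0, 1.0, 1.0, 4.0, 4.0, 4.0, 1.0, 1.0, 1.0])
z_L = np.convolve(z, np.ones(3) / 3.0, mode="same")
left, pct = dz_distribution(z, z_L)

# Ridge shoulders lose thickness (dZ < 0) while flanks gain it (dZ > 0)
print(pct[left < 0.0].sum() > 0 and pct[left >= 0.0].sum() > 0)
```

Stacking one such histogram per length scale L reproduces the style of visualization shown in Figure 7.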

Fig. 7. Example distributions of growing resolution error. Respective to each filter shape (Fig. 5a–d), resolution error is visualized as a distribution of dZn = zn,L − zn for Central Tendency calibration results. Grey shading identifies the percent of values found within bins discretized by 0.2 m changes in dZ at each incremental 10 m length scale. Thick black lines are positive and negative representations of Er (from Fig. 6). Note that the black lines envelop only the highest concentrations of error, which increase at a lower rate than the surrounding non-Gaussian error distributions. Variability in error propagation is strong enough to impede effective downscaling solutions to reverse the aliasing process (Smith and others, 2000) at this time.

Discussion

Spatial aliasing is a ubiquitous problem that is not limited to sea ice or instruments. We simply use sea ice as a non-Gaussian example and encourage the application of the analysis tools provided herein. To address impact, we ask: Why is such detail so important in the first place? How important is a true thickness distribution to a data user? How do we use these results to translate an observed thickness distribution into a common reference between different types of observations? We begin by acknowledging that volume is conserved only in a relative sense within the experiments shown herein. No one instrument measures the true elevation and draft of snow and sea ice above and below sea level, smoothed or not. Hence, volume is a derived quantity and can only be conserved if combinations of instruments both above and below sea level measure the same features coincidentally with the same footprint sizes. Geolocation of measurements is a substantial cutting-edge research problem (e.g. Gardner and others, 2012), in addition to measurement accuracy (Geiger and others, 2015). In short, we argue that good estimates of volume result from higher confidence in thickness distributions.

As for thickness distribution, we must first recognize that horizontal length scales of sea ice are vast, spanning six or more orders of magnitude from the smallest features to basin-scale extent. Conversely, sea ice is only meters thick everywhere, with nearly all of its roughness at high spatial frequencies in the form of rafting, ridges, rubble fields and deformed linear features. At all scales larger than 1 m, these features consistently distribute into an iconic skewed shape partitioned into (1) thin ice, (2) a strong central mode of thermodynamically grown seasonal ice, and (3) a long tail of thick ice from deformation processes. Essentially, sea-ice thickness distribution is our ‘Rosetta Stone’ through which we communicate and translate knowledge about sea-ice thickness properties and processes across scales and between data users.
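As an illustration only, this three-part skewed shape can be mimicked with a schematic mixture model; the proportions and parameters below are invented for demonstration, not fitted to observations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic three-part mixture (all values illustrative):
thin     = rng.uniform(0.0, 0.4, 1500)            # (1) thin ice
seasonal = rng.normal(1.6, 0.3, 6000)             # (2) thermodynamic central mode
deformed = 2.0 + rng.lognormal(0.5, 0.6, 2500)    # (3) long deformed-ice tail
z = np.concatenate([thin, seasonal, deformed])

# Right-skewed: the mean sits above the median, pulled up by the tail
print(z.mean() > np.median(z))
```

Because the tail carries much of the skewness, any filter that redistributes tail mass toward the central mode changes exactly the part of the distribution that matters most for fluxes and infrastructure.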

An aliased peak in a thickness distribution does not represent a real feature, though it is real information, simply in the wrong place. We see this happening for the thick ice categories as we look from top to bottom through the case studies (Fig. 5a–d), where thick ice moves into packets of thinner ice categories at a faster rate for wider filter shapes. Therefore, a first recommended best practice is to test data records intended for data assimilation or other data synthesis approaches by adding the following four pieces of critical information to analysis routines and metadata archives: (1) the beamwidth (or projection angle) of an instrument, (2) the angle of incidence of instrument projection, (3) the height of measurement above the snow/ice interface (even if just a mean estimated height) and (4) the approximate shape of the signal (or at least a measure of how broadly the signal is shaped – its intrinsic length scale). These four points are needed for Eqn (1) to quantify resolution error and scaling effects. Adding this information as metadata to archives will support a growing body of literature on upscaling and downscaling, for which resolution error is already being included in larger-scale climate works (Willmott and Johnson, 2005; Stampone and others, 2012; Bernstein and others, 2015).
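With Er estimated at several length scales, the power-law parameters reported in Tables 3 and 4 can be recovered by linear regression in log–log space. A hedged sketch with synthetic data (the values of b and m below are invented for the check, not taken from the tables):

```python
import numpy as np

def fit_power_law(x, er):
    """Fit Er = b * x**m by least squares in log-log space.

    The log-log intercept B gives the amplitude via b = 10**B,
    matching the convention B = log(b) used for Table 4.
    """
    m, B = np.polyfit(np.log10(x), np.log10(er), 1)
    return 10.0 ** B, m

# Synthetic check over the paper's 5-500 m range: parameters used to
# generate the data (b = 0.02, m = 0.4) are recovered by the fit
x = np.arange(5.0, 505.0, 10.0)
er = 0.02 * x ** 0.4
b, m = fit_power_law(x, er)
print(round(b, 3), round(m, 3))   # → 0.02 0.4
```

Confidence intervals on the slope and intercept, as in Tables 3 and 4, follow from the standard errors of the same log–log regression.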

In terms of compounding impacts, imagine how a thickness distribution must change when high-resolution data are collected but then reduced through common practices such as a running mean. Imagine how many aliased artifacts are introduced through subsequent objective analysis, where correlation length scales and Gaussian noise functions are added to the process after a running-mean filter is applied. Such products are often imported into numerical models with inverse linear interpolation to gridcell resolutions. And so we wonder: how many different filter shapes and sizes did specific raw data encounter before being merged with other datasets? Since sea-ice volume is a derived product, one of the most effective ways to make observations more consistent across scales is to archive thickness distributions with the four metadata points just identified, generate ice volumes from these native inputs, and test for resolution error using the algorithms and application tools demonstrated here. In this way, data centers can leverage their large computational and data-mining capabilities with cross checks that characterize upscale problems such as resolution error. We demonstrate one such test of data quality assurance (Fig. 6).

Furthermore, we strongly recommend an effective alternative to the running mean: the inverse linear filter, which is simple to implement (Table 1) and, surprisingly, surpasses the Gaussian filter in terms of lowest increase of resolution error in this study. The low error is likely related to the absolute value common to both the inverse linear filter and the TaxiCab Geometry used in our resolution error calculations. Moreover, inverse linear filters are already commonly used in numerical models for data interpolation, so adopting them for in-the-field data reduction offers consistent filtering shapes between modelers and measurement teams. If snow and sea-ice communities could also standardize data collection practices, then we could avoid many of the obvious sources of large aliasing problems by eliminating running-mean filters for underway data reduction, especially for non-Gaussian variables. Removing this one filter alone will decrease a number of aliasing problems that currently make it very difficult to compare and combine different in situ and airborne measurements.
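Since the exact tabulated form of the inverse linear filter appears in Table 1 (not reproduced here), the sketch below uses one common inverse-linear weighting as an assumed stand-in, contrasted with the running mean it would replace:

```python
import numpy as np

def inverse_linear_kernel(half_width):
    """Symmetric inverse-linear weights w_k = 1/(1 + |k|), normalized.

    Assumed form for illustration; the paper's exact definition is
    given in its Table 1.
    """
    k = np.arange(-half_width, half_width + 1)
    w = 1.0 / (1.0 + np.abs(k))
    return w / w.sum()

def running_mean_kernel(half_width):
    """Boxcar weights of the running mean, for comparison."""
    n = 2 * half_width + 1
    return np.full(n, 1.0 / n)

# The inverse-linear kernel tapers toward its edges; the boxcar does not.
# The abrupt box edges are what aggravate aliasing of non-Gaussian data.
il = inverse_linear_kernel(5)
rm = running_mean_kernel(5)
print(il[5] == il.max(), il[0] < il[5], np.allclose(rm, rm[0]))
```

The taper is the design point: a kernel that decays toward its edges suppresses the side lobes that a boxcar's sharp cutoff introduces in frequency space.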

Conclusion

Instruments with footprints larger than the length scales of deformed sea-ice features encounter spatial aliasing, which impacts the non-Gaussian shape of the estimated sea-ice thickness distribution. Using a power law based on resolution error, one can estimate the impact of aliasing when upscaling results or comparing high- and low-resolution instruments, given knowledge of an instrument’s footprint size and the shape of the emitted (though preferably returned) signal. Data users who need such records for upscaling and downscaling applications or process studies should test for this condition, especially if they are integrating a diversity of measurements for data assimilation. Much more work is needed on this topic, as this is only one case study of a pervasive data-integration issue. However, much can be advanced by applying the matters described herein to new data acquisition campaigns, archives and metadata studies. Based on the results found here, a critical best practice is the elimination of wide filters as soon as possible, especially the running average. Gaussian filters, and surprisingly the simple inverse linear filter, are tapered sufficiently to minimize many aliasing situations introduced through post-processing and interpolation activities. But even the most idealized transmitted signals will return distorted pulses of extensive complexity after interacting with sea-ice topography. In summary, this problem demands sustained analysis for some time to come.

Acknowledgements

This work is supported by US National Science Foundation grants ARC-0612105 and ARC-1107725. The International Space Science Institute, Bern, Switzerland, is acknowledged through project No. 169 (2009–2011: Space-borne monitoring of polar sea ice). C.A.G. thanks the College of Earth, Ocean, and Environment, University of Delaware, for partial support from January to August 2010, and M. Hilchenbach (Max Planck Institute for Solar System Research, Germany) from August to September 2011. Finally, we thank Chief Editor P. Heil, Scientific Editor J. Renwick and two anonymous reviewers.

References

Andreas, EL (2011) A relationship between the aerodynamic and physical roughness of winter sea ice. Q. J. R. Meteorol. Soc., 137(659), 1581–1588 (doi: 10.1002/qj.842)
Banke, EG, Smith, SD and Anderson, RJ (1980) Drag coefficient at AIDJEX from sonic anemometer measurements. In Pritchard, RS, ed. Sea ice processes and models. University of Washington, Seattle, WA, 430–442
Bernstein, ER, Geiger, CA, DeLiberty, TL and Stampone, M (2015) Antarctic sea-ice thickness and volume estimates from ice charts between 1995 and 1998. Ann. Glaciol., 56(69) (see paper in this issue) (doi: 10.3189/2015AoG69A763)
Black, PE (2006) Manhattan distance. In Pieterse, V and Black, PE, eds. Dictionary of algorithms and data structures. National Institute of Standards and Technology, Gaithersburg, MD http://www.nist.gov/dads/HTML/manhattanDistance.html
Doble, MJ, Skourup, H, Wadhams, P and Geiger, CA (2011) The relation between Arctic sea ice surface elevation and draft: a case study using coincident AUV sonar and airborne scanning laser. J. Geophys. Res., 116(C8), C00E03 (doi: 10.1029/2011JC007076)
Donoho, DL (2006) For most large underdetermined systems of linear equations the minimal L1-norm solution is also the sparsest solution. Commun. Pure Appl. Math., 59(6), 797–829 (doi: 10.1002/cpa.20132)
Eicken, H, Tucker, WB, III and Perovich, DK (2001) Indirect measurements of the mass balance of summer Arctic sea ice with an electromagnetic induction technique. Ann. Glaciol., 33, 194–200 (doi: 10.3189/172756401781818356)
Emery, WJ and Thomson, RE (2001) Data analysis methods in physical oceanography, 2nd edn. Elsevier, Amsterdam
Fetterer, FM, Drinkwater, MR, Jezek, KC, Laxon, SWC, Onstott, RG and Ulander, LMH (1992) Sea ice altimetry. In Carsey, FD and 7 others, eds. Microwave remote sensing of sea ice. (Geophysical Monograph Series 68) American Geophysical Union, Washington, DC, 111–135
Gardner, J, Richter-Menge, J, Farrell, S and Brozena, J (2012) Coincident multiscale estimates of Arctic sea ice thickness. Eos, 93(6), 57–58 (doi: 10.1029/2012EO060001)
Gascard, JC and 25 others (2008) Exploring Arctic transpolar drift during dramatic sea ice retreat. Eos, 89(3), 21–23 (doi: 10.1029/2008EO030001)
Geiger, CA (2006) Propagation of uncertainties in sea ice thickness calculations from basin-scale operational observations. ERDC/CRREL Tech. Rep. TR-06-16
Geiger, CA and Drinkwater, MR (2005) Coincident buoy- and SAR-derived surface fluxes in the western Weddell Sea during Ice Station Weddell 1992. J. Geophys. Res., 110(C4), C04002 (doi: 10.1029/2003JC002112)
Geiger, CA and 9 others (2011) A case study testing the impact of scale on Arctic sea ice thickness distribution. In Proceedings of the 20th IAHR International Symposium on Ice, 14–18 June 2010, Lahti, Finland. International Association for Hydro-Environment Engineering and Research (IAHR), Madrid (doi: 10.13140/2.1.2890.3366)
Geiger, CA and 6 others (2015) On the uncertainty of sea-ice isostasy. Ann. Glaciol., 56(69) (see paper in this issue) (doi: 10.3189/2015AoG69A633)
Holland, M, Bailey, D and Vavrus, S (2011) Inherent sea ice predictability in the rapidly changing Arctic environment of the Community Climate System Model, version 3. Climate Dyn., 36(7–8), 1239–1253 (doi: 10.1007/s00382-010-0792-4)
Hopkins, MA, Frankenstein, S and Thorndike, AS (2004) Formation of an aggregate scale in Arctic sea ice. J. Geophys. Res., 109(C1), C01032 (doi: 10.1029/2003JC001855)
Hunke, EC (2010) Thickness sensitivities in the CICE sea ice model. Ocean Model., 34(3–4), 137–149 (doi: 10.1016/j.ocemod.2010.05.004)
Hutchings, JK and 15 others (2008) Exploring the role of ice dynamics in the sea ice mass balance. Eos, 89(50), 515–516 (doi: 10.1029/2008EO500003)
Intergovernmental Panel on Climate Change (IPCC) Working Group I (2013) Final Draft Report of Working Group I contribution to the IPCC Fifth Assessment Report. Climate change 2013: the physical science basis. http://www.climate-change2013.org/report/review-drafts
Kovacs, A (1975) Study of multi-year pressure ridges and shore ice pile-up. (APOA Project Report 89) Arctic Petroleum Operators’ Association, Calgary, Alta
Kovacs, A and Mellor, M (1971) Sea ice pressure ridges and ice islands. (Tech. Note TN-122) Creare Inc., Hanover, NH
McNeill, JD (1980) Electromagnetic terrain conductivity measurements at low induction numbers. (Tech. Note TN-6) Geonics Ltd, Mississauga, Ont.
Pfaffling, A, Haas, C and Reid, JE (2007) A direct helicopter EM sea ice thickness inversion, assessed with synthetic and field data. Geophysics, 72(4), F127–F137 (doi: 10.1190/1.2732551)
Reid, JE and Vrbancich, J (2004) A comparison of the inductive-limit footprints of airborne electromagnetic configurations. Geophysics, 69(5), 1229–1239
Reid, JE, Pfaffling, A and Vrbancich, J (2006) Airborne electromagnetic footprints in one-dimensional earths. Geophysics, 71(2), G63–G72
Samluk, JP, Geiger, CA, Weiss, CJ and Kolodzey, J (2015) Full physics 3-D heterogeneous simulations of electromagnetic induction fields on level and deformed ice. Ann. Glaciol., 56(69) (see paper in this issue) (doi: 10.3189/2015AoG69A737)
Schweiger, A, Lindsay, R, Zhang, J, Steele, M, Stern, H and Kwok, R (2011) Uncertainty in modeled Arctic sea ice volume. J. Geophys. Res., 116(C8), C00D06 (doi: 10.1029/2011JC007084)
SCICEX (Scientific Ice Expeditions) Science Advisory Committee (2010) SCICEX Phase II Science Plan, Part 1: technical guidance for planning science accommodation missions. US Arctic Research Commission, Arlington, VA
Smith, AJE, Ambrosius, BAC and Wakker, KF (2000) Ocean tides from T/P, ERS-1, and GEOSAT altimetry. J. Geod., 74(5), 399–413 (doi: 10.1007/s001900000101)
SEARCH (Study of Environmental Arctic Change) Project Office (2008) Arctic Observation Integration Workshops Report, 17–20 March 2008, Palisades, New York, USA. SEARCH Project Office, Arctic Research Consortium of the United States, Fairbanks, AK http://www.arcus.org/search-program/meetings/2008/aow/report
Stampone, MD, Geiger, CA, DeLiberty, TL and Bernstein, ER (2012) Data-derived spatial-resolution errors of Antarctic sea-ice thickness. Polar Geogr., 36(3), 202–220 (doi: 10.1080/1088937X.2012.691120)
Sturm, M and 8 others (2006) Snow depth and ice thickness measurements from the Beaufort and Chukchi Seas collected during the AMSR-Ice03 Campaign. IEEE Trans. Geosci. Remote Sens., 44(11), 3009–3020 (doi: 10.1109/TGRS.2006.878236)
Thomas, M, Kambhamettu, C and Geiger, CA (2011) Motion tracking of discontinuous sea ice. IEEE Trans. Geosci. Remote Sens., 49(12), 5064–5079 (doi: 10.1109/TGRS.2011.2158005)
Wadhams, P and Davy, T (1986) On the spacing and draft distributions for pressure ridge keels. J. Geophys. Res., 91(C9), 10697–10708
Wadhams, P and Doble, MJ (2008) Digital terrain mapping of the underside of sea ice from a small AUV. Geophys. Res. Lett., 35(1), L01501 (doi: 10.1029/2007GL031921)
Wadhams, P, Hughes, N and Rodrigues, J (2011) Arctic sea ice thickness characteristics in winter 2004 and 2007 from submarine sonar transects. J. Geophys. Res., 116(C8), C00E02 (doi: 10.1029/2011JC006982)
Willmott, CJ and Johnson, ML (2005) Resolution errors associated with gridded precipitation fields. Int. J. Climatol., 25(15), 1957–1963 (doi: 10.1002/joc.1235)
Worby, AP, Geiger, CA, Paget, MJ, Van Woert, ML, Ackley, SF and DeLiberty, TL (2008) Thickness distribution of Antarctic sea ice. J. Geophys. Res., 113(C5), C05S92 (doi: 10.1029/2007JC004254)
Zhang, J, Lindsay, R, Steele, M and Schweiger, A (2008) What drove the dramatic retreat of arctic sea ice during summer 2007? Geophys. Res. Lett., 35(11), L11505 (doi: 10.1029/2008GL034005)

Fig. 1. Heuristic model of spatial aliasing. An idealized triangular ridge (a) with normalized units is well represented by discrete points (solid blue) when simply connected by line segments, or by discrete area rectangles (dashed blue) when interpreted as a piecewise constant function. Both solutions conserve volume and thickness distribution. When smoothed by an example running-mean filter of length 3, the feature changes shape, with discrete points (solid red) and discrete area (dashed red) still conserving volume but no longer conserving thickness distribution. The impact is most pronounced on the thickness distribution in the frequency domain (b) when the distribution is bimodal. The underlying cause of thickness distortion is loss of bimodal structure due to averaging of a non-Gaussian feature.
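The heuristic in this caption is easy to reproduce numerically; a minimal sketch with an illustrative ridge (values normalized, not the paper's exact digitization):

```python
import numpy as np

# Idealized triangular ridge in normalized units (illustrative values)
z = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 0.0])

# Running-mean filter of length 3; because the profile ends in zeros,
# the discrete 'volume' (the sum) is conserved exactly in this toy case
zp = np.pad(z, 1, mode="edge")
z_sm = (zp[:-2] + zp[1:-1] + zp[2:]) / 3.0

print(np.isclose(z.sum(), z_sm.sum()))           # volume conserved
print(z_sm.max() < z.max())                      # peak flattened
print(len(np.unique(z_sm)) > len(np.unique(z)))  # new artificial thickness values
```

The three checks mirror the caption's point: the integrated quantity survives the filter, but the set of thickness values, and hence the distribution, does not.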

Table 1. Symmetric smoothing functions

Fig. 2. Filter shapes. Four normalized shapes are mathematically constructed from Gaussian (thick line), inverse linear (dashed line), tapered Gaussian (thin line) and running-average (dotted line) functions. Each function is expanded to needed length scales (L) to filter any measured point relative to neighboring points.

Fig. 3. Arctic ice camp survey. (a) Photograph with superimposed lines taken from a light-wing aircraft at an oblique angle over 1 km long survey legs. Survey samples are taken along each leg every 5 m using coincident EM-31 and MagnaProbe in tandem. Arrow bears true north; camp outlined. (b) Calibration results of the EM-31 expressed as conductivity measurements based on 52 vertical distance samples collected coincidentally at drillhole sites, with regression analysis summarized in Table 2. Ice types in the legend are identified as first-year level ice (FY), first-year deformed ice (FYD) and multi-year ice (MY).

Table 2. Summary of EM calibration coefficients

Fig. 4. Concatenated profile from survey lines. Survey lines are sampled at 5 m intervals for ice thickness (using EM-31) and snow depth (using MagnaProbe). All six survey lines are concatenated into one synthetic profile with typical properties listed (MagnaProbe depths also indicated at drill sites). Field measurements such as these are often provided as climate data records (CDR) for modelers, remote-sensing calibration and other applications. Uncertainties are shown as a gray shadow, as in Figure 3b, so that we can explore propagated uncertainties and their compounding effects with other error sources.

Fig. 5. Impact of instrument footprint. Using mathematical functions (Fig. 2) to simulate instrument footprints of different sizes and shapes, we show how the width and depth of narrow features are widened and flattened spatially (a–d). In frequency space (e–h, respectively), observed (black line) frequency distributions (FD) develop artificial modes which grow with scale and exceed white-noise levels in the wider filter cases, especially in (g, h). While volume and mean thickness are conserved in all cases (inset cumulative frequency distribution (CDF) shown for L = 5, 250, 500 m; e–h), thickness distribution and the thickest ice types are altered considerably, as noted by artificial peaks and loss of ice at the 10 m bin, respectively.
