Many papers have been directed at estimating multi-decadal ocean heat uptake (Purkey and Johnson, 2010; Lyman and Johnson, 2014), salinity change as an indicator of fresh-water injection (Wadhams and Munk, 2004; Boyer et al., 2005), sea level (elevation) changes (Nerem et al., 2006; Cazenave et al., 2014), or all of these together (Levitus et al., 2005; Peltier, 2009; Forget and Ponte, 2015). Many more such calculations have been published than can be listed here. A great difficulty with most of these estimates is the historical inhomogeneity of the various data sets employed, and the consequent reliance on nearly untestable statistical hypotheses to extrapolate and interpolate into data-sparse times and places (see Boyer et al., 2016, and Wunsch, 2016, for generic discussions). A number of papers have claimed 'closure' of the sea level change budget, but that closure is accomplished only through large and uncertain error budgets of the various components.
Ocean general circulation models (GCMs) and coupled climate models have also been used to calculate space- and time-mean oceanic temperature, salinity, and sea surface elevation. Most models, including the ECCO system (Estimating the Circulation and Climate of the Ocean; Wunsch and Heimbach, 2013; Forget et al., 2015), compute the ocean state in a deterministic fashion. Given initial conditions and time-varying meteorological boundary conditions, the model time-steps the state vector as though the external fields, including initial conditions, were fully known. Ensemble (Monte Carlo) methods attempt to estimate the uncertainties of the state at a particular time, usually a forecast time, by computing families of disturbed initial and/or boundary conditions.
A general discussion of the accuracy or precision of general circulation and climate models does not appear to exist. As in all systems, errors will always include systematic ones, e.g. from lack of adequate resolution or improperly represented air-sea transfer processes, amongst many others. Stochastic errors will arise from noisy initial and boundary conditions, as well as rounding errors, and interior instabilities of many types, both numerical and physical. Analysis of systematic and stochastic errors requires completely different methods (Henrion and Fischhoff, 1986, and especially their Fig. 1, reproduced in Wunsch, 2015, p. 48).
The central purpose of this paper is to suggest a potentially useful direction towards partially solving the problem of determining uncertainties in global-scale quantities calculated with both observations and models. Examples are produced for oceanic values and their variability from the nearly homogeneous (in the observational network sense) data sets 1994–2013. Specifically, a separation is attempted between the stochastic and systematic errors inevitably present. Stochastic errors are those often labeled as ‘formal’ and are derived from estimates of random noise present. Estimation of systematic errors involves a near line-by-line discussion of the individual computer codes used to calculate oceanic states. Errors will be different in calculations of the mean state, and in their temporal and spatial changes. For reasons discussed below, as the duration of a computation is extended, the distinction between stochastic and systematic errors can no longer be made and paradoxically, it is the comparatively short (20-year) time interval of the state estimate that supports the methodology outlined.
The ECCO version 4 solution employed here represents the output of an oceanic general circulation model whose initial and boundary (including meteorological) conditions and interior empirical coefficients have been adjusted in a least-squares sense to fit 25 years of global data. All data are assigned an error variance or covariance and the final state is obtained from the free-running adjusted system. For ‘state estimation’ as done in ECCO (Forget et al., 2015; ECCO Consortium 2017a,b; Fukumori et al., 2018), two major obstacles loom if Monte Carlo methods are to be used: (1) the immense state and control vector dimensions; (2) the absence of quantitative estimates (probability distributions) of the stochastic contributions in the initial/boundary conditions and stochastic structures generated by internal instability and turbulence. The same obstacles to uncertainty calculation loom in any ocean or coupled-climate model run for a long time whether or not based upon combinations with data.
A number of methods exist for calculating uncertainties in systems such as that of ECCO. To the extent that the system is linearizable, the method-of-Lagrange-multipliers/adjoint used there can be shown (Wunsch, 2006) to have uncertainties identical to those obtained from sequential estimates, such as the Rauch-Tung-Striebel (RTS) smoother.1 This approach is very well understood and is practical for small systems (Goodwin and Sin, 1984; Brogan, 1991; Wunsch, 2006). It involves calculated covariance matrices that are square, of the dimensions of the state vector and of the control vector, at any time t. For ECCO version 4, the state vector dimension at each time step is approximately N = 11 million, and updating the state and its covariance requires running the model N + 1 times at each time step. Similar dimensions and issues apply to the system control vector.
Other methods include calculation of inverse Hessians (Kalmikov and Heimbach, 2014), sometimes using Lanczos methods. Hypothetically, one could solve a Fokker–Planck equation corresponding to the model (Gardiner, 2004) and its initial/boundary condition, or the prediction-particle filtering methods of Majda and Harlim (2012). None of these methods is computationally practical for the global ocean or climate system with today’s computers – although that should gradually change in the future.
Nonetheless, some form of useful uncertainty estimate is necessary for values calculated from models, whether from ordinary forward calculations or from a state estimate. For example, as described by ECCO Consortium (2017a) and Fukumori et al. (2018), the 20-year average ocean temperature found from the volume-weighted grid points of the adjusted model (centers of cells) is 3.5310 °C. How reliable is that number? On the one hand, it is reproducible up to the machine precision of the computation. A standard error might be calculated by dividing the variance of the volume-weighted elements by the number of grid values, but such a number is meaningless: (1) much of the thermal structure of the ocean is deterministic on the large scale and, along with other effectively permanent sub-basin scale structures, stable over 20+ years; treating that structure as stochastic would be a major distortion. (2) The distribution of values is very inhomogeneous over the three-dimensional volume, and any supposition of uniform probability densities or of near-Gaussian values is incorrect.
Parts of the ocean structure and of the meteorological forcing fields are deterministic processes over decades. For example, the depth and properties of the main thermocline, or of the dominant wind systems, do not vary significantly over 20 years and the response times of the global ocean extend to thousands of years and beyond (Wunsch, 2015, p. 338). Superimposed upon the initial and boundary conditions are noise fields best regarded, in contrast, as stochastic – but which cannot build up global-scale covarying structures in a restricted time period.
When integrated through a time-stepping fluid model, the stochastic elements, even disturbances that are white noise in space and/or time, will give rise to complex structured fields (Fig. B5 of Wunsch, 2002). A crux of the uncertainty problem for model outputs then is to separate the deterministic from the stochastic elements. Ensemble methods, generated by stochastic perturbations of initial/boundary conditions/parameters, face the same difficulty: What are the appropriate joint probability distributions to use in generating the ensembles (Evensen, 2009)?2 To the extent that the stochastic influence can be regarded as perturbations about a stable deterministic evolution, the probability densities will be centered about deterministic fields, as in Eq. (3.5.9) of Gardiner (2004). Systematic errors will remain as part of the deterministic components, and must be dealt with separately. A literature on systematic errors in numerical models does exist, much of it directed at the accuracies of advection schemes and other numerical approximations (Hecht et al., 1998). As is always the case with real systems, the stochastic error provides a lower bound on the true uncertainty.
What follows is largely heuristic and without any statistical rigor. Methods for separation of deterministic from stochastic elements in large volumes of numbers do not appear to have been widely explored. (This issue should not be confused with the problem of separating ‘deterministic chaos’ from true stochastic elements familiar in dynamical systems theory; Strogatz, 2015).
A start is made with time-mean three-dimensional fields, which permits introducing the basic ideas while greatly reducing the volume of numbers required. A supposition is thus made that only the time average fields are available and sampled, temporarily suppressing the information contained in the time-variability. Suppression of the deterministic component, so as to leave a stochastic field, is required for both mean and time variations.
Consider the problem of determining the 20-year global ocean average temperature and its corresponding uncertainty. A 20-year average, computed 50 years in the future, might usefully be compared with the present 20-year average. Hourly values of the state estimate, averaged over 20 years, 1994–2013, produce point-wise mean potential temperatures at each grid point. The mean temperature at one depth is shown in Fig. 1, displaying the classical large-scale features that are clearly deterministic over 20 years, with superimposed stochastic elements. A histogram of the gridded mean temperatures is shown in Fig. 2; its heavily skewed behavior is apparent.
To form an average, the volume represented by each grid point is accounted for by weighting each value by the relative volume contribution $a_i = V_i / \sum_j V_j$, such that

$$\bar{T} = \sum_i a_i T_i.$$
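As a minimal sketch, the volume-weighted mean can be computed as follows; the temperature and volume arrays here are synthetic stand-ins, not the ECCO fields:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the gridded fields: potential temperature at
# each model cell and the corresponding cell volume (illustrative only).
T = rng.normal(3.5, 1.0, size=10_000)     # deg C
vol = rng.uniform(0.5, 2.0, size=10_000)  # arbitrary volume units

# Relative volume weights a_i = V_i / sum_j V_j, as in the text.
a = vol / vol.sum()

# Volume-weighted global mean temperature.
T_mean = np.sum(a * T)
```

The weights sum to one by construction, so the result is identical to summing volume-times-temperature and dividing by total volume.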
Here the bootstrap and related jackknife methods, in the elementary sense described by Efron and Tibshirani (1993), Mudelsee (2014), and others, are used. The basic assumption of the bootstrap is that the values making up the subsampled population are independent, identically distributed (iid) values. Any assumption that stochastic elements in cold, deep temperatures are drawn from the same population as the much warmer near-surface values, or that this structure is dominantly stochastic, cannot be correct.
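In the elementary form used here, the bootstrap resamples the (assumed iid) values with replacement and examines the spread of the resampled means. A sketch, with a synthetic residual population standing in for the de-structured field:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic iid residual population, standing in for the volume-weighted
# values after removal of the deterministic structure (assumed values).
residual = rng.normal(0.0, 0.05, size=100_000)

# Elementary bootstrap: resample with replacement, recompute the mean.
n_boot = 50
means = np.empty(n_boot)
for k in range(n_boot):
    sample = rng.choice(residual, size=residual.size, replace=True)
    means[k] = sample.mean()

# Two-standard-deviation ('formal') uncertainty of the mean.
two_sigma = 2.0 * means.std(ddof=1)
```

With 100,000 values of standard deviation 0.05, the two-standard-deviation uncertainty is of order 3 × 10⁻⁴ in the units of the residual.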
An assumption is now made that the strongest globally spatially varying structures represent the deterministic component. As already alluded to, this assumption is based on considerations of physics alone, and not upon any statistical methodology: any three-dimensional, globally correlated structure can only have been generated by very long-term, effectively systematic, processes. If a process can be rendered indistinguishable from white noise, then at zero-order most covariance structure has been removed.3 Integration of stochastic fields does, of course, produce globally correlated structures (Wunsch, 2002, Appendix B), but these take many decades and longer to appear. To the extent that the stochastic background has generated large-scale covarying fields, the results here will be biased low. Experiments with simpler models (not shown) support the assumption that, over time intervals short compared to the full adjustment time of the system, stochastic disturbances can be so evaluated.
To render the assumption concrete, a residual population that is iid over the full water column everywhere is generated by subtracting the globally correlated structures. In particular, let the three-dimensional matrix of volume-weighted temperatures be written with columns in longitude, latitude, and depth. Setting temperatures to zero over land is the simplest choice. One might, alternatively, mask out these regions from the computation. Experiments (not shown) with the mean temperature field showed only very slight differences from those obtained with the zero assumption (apart from weak structures appearing in the zero region, masked out here). The choice has the virtue that the same method can be used in conjunction with coupled ocean-atmosphere models, where land values must be specified. Note, too, that zeros are also assigned to regions below the local water depth, so that the singular vectors vi (defined below) can capture the influence of topography over a fixed interval of orthogonality.
To proceed, map this three-dimensional matrix into two dimensions by stacking the latitude columns, forming a matrix whose second index is just a reordering of longitude and latitude. Write this matrix, $\tilde{T}$, as its singular value decomposition,

$$\tilde{T} = U \Lambda V^{T} = \sum_i \lambda_i u_i v_i^{T},$$

where the $\lambda_i$ are the singular values and the $u_i$, $v_i$ are the corresponding singular-vector pairs.
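The stacking and truncation can be sketched as follows; the matrix here is a synthetic rank-one 'deterministic' structure plus weak noise, standing in for the stacked temperature field:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stacked field: rows = depth levels, columns = reordered
# horizontal positions. A rank-one, large-scale structure plus weak
# noise stands in for the real stacked temperatures (assumed values).
nz, nh = 50, 2000
deterministic = np.outer(np.linspace(5.0, 0.5, nz), np.ones(nh))
X = deterministic + 0.1 * rng.standard_normal((nz, nh))

# Singular value decomposition X = U diag(s) V^T.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Remove the first q singular-vector pairs (the globally covarying,
# presumed-deterministic component), leaving a residual field.
q = 1
residual = X - (U[:, :q] * s[:q]) @ Vt[:q, :]

# Fraction of the total variance carried by the removed pair(s).
frac = (s[:q] ** 2).sum() / (s ** 2).sum()
```

With the strongly structured synthetic field, the first pair carries well over 90% of the variance, and the residual is essentially the noise alone.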
The crux of the statistical problem is the decision about how many pairs, q, should be removed. A rigorous answer to this question is not provided here. Three basic criteria are used: (1) the residual should be unimodal; (2) the residual variance should not show a major depth dependence; (3) the dominant singular value should account for no less than 30% of the total variance. These criteria permit use of the estimated variance as a useful uncertainty parameter, and support the iid assumption. A statistically rigorous result requires a method analogous to the AIC (Akaike Information Criterion) or BIC (Bayesian Information Criterion) used in linear regression analyses (Priestley, 1981).
For temperature, removal of only the first pair (q = 1) reduces the norm of the field by over 90%, and the histogram of volume-weighted values (Fig. 2) is now unimodal. Over 20 years, the response of the ocean is dominantly linear, producing a normal stochastic field; this is supported, e.g., by the discussion of Gebbie and Huybers (2018) and the known physics of short-time-scale adjustment. With the risk of over-estimating the uncertainty, q = 1 is chosen, and the residual elements are assumed to be iid and thus suitable for computing a bootstrap mean and standard error. Fifty bootstrap reconstructions produce a two-standard-deviation uncertainty of 1.9 × 10−3 °C owing to the stochastic elements, the 'formal' error. (For a Gaussian process, two standard deviations correspond approximately to a 95% confidence interval.)
Much structure exists both in the suppressed and retained singular vector pairs, which ultimately need examination and physical explanation.
The time-mean salinity (34.727, dimensionless on the practical scale), determined from the volume-weighted values, has the histograms shown in Fig. 4. Taking now q = 2 removes about 94% of the variance, and two standard deviations of the bootstrapped residual produce 34.72 ± 0.02. The variance with depth is rendered much more nearly uniform, albeit imperfectly so, and, as for temperature, the residual histogram is now unimodal. That q = 2 for salinity rather than q = 1 for temperature is likely explained by the much smaller deviations of S about its volumetric mean, but for present purposes that is immaterial.
For reference, under the assumption that total oceanic salt content remains unchanged, a mean salinity change ΔS corresponds to a depth change of magnitude Δh ≈ h0 ΔS/S0, where h0, S0 are the mean values of depth and salinity (Munk, 2003). Ignoring the density change due to salinity and the contribution of melting sea ice, the uncertainty corresponds to a sea level change uncertainty of about ±2.2 m. This value may seem surprisingly large, but it simply says that the salinity data permit inference of the total amount of freshwater to about ±2.2 m out of a total average depth of about h = 3800 m, or about 0.06%, which by most standards is remarkable accuracy. One can hope that a comparison 50 years hence will not find changes in the mean salinity that are significantly different from zero! Although the formal uncertainty in the present state is sufficiently small, in determining systematic errors care must be taken over the definition/calibration changes that may take place in the future as they have in the past; see Millero et al. (2008) for a discussion of systematic changes in definitions.
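The quoted conversion is easily checked numerically; the values below are those given in the text:

```python
# Values as given in the text.
h0 = 3800.0   # mean ocean depth (m)
S0 = 34.72    # mean salinity (practical scale, dimensionless)
dS = 0.02     # two-standard-deviation salinity uncertainty

# Holding total salt content fixed, a salinity change dS corresponds to
# a freshwater-equivalent depth change of magnitude h0 * dS / S0
# (Munk, 2003), ignoring density effects.
dh = h0 * dS / S0          # about 2.2 m
frac_depth = dh / h0       # about 0.06% of the mean depth
```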
Mean sea surface height, the 'dynamic topography' in the present ocean state, can in principle be compared to its value determined as a 20-year average, 50 years, or any other time interval, into the future. Values in the ECCO version 4 state estimate are determined relative to the best available geoid known today (Reigber et al., 2005). The dynamical variables are the horizontal gradient elements, and thus if, in the future, a different geoid is used, offset by a constant from the one used in ECCO, that change would be of no significance. On the other hand, care would be needed in the future to accommodate changed geoids, for example with higher spatial resolution, temporal changes, and significant regional implications. Because dynamics respond only to horizontal gradients in η, the result is most directly connected to the altimeter, gravity, and hydrographic data.
The assumption used so far, that globally covarying fields over 20 years can be interpreted as the deterministic components, is physically sensible for temperature and salinity. For η, however, the ability of the ocean to transmit barotropic (depth-independent) signals globally within a few days makes the assumption dubious. Nonetheless, with this caveat, the global time-mean value of η and an estimate of its accuracy are calculated within the model context. Because of the complexities of geoid error and the rapid achievement of barotropic motion equilibrium, only the time differences of the mean surface height are discussed. For the record, the area-weighted mean of the values in Fig. 5 is 8.48 ± 0.13 cm, with the error computed as above, but no more discussion is provided here.
Time changes, represented for now by the difference between two yearly averages for years t1 and t2, should largely remove the deterministic components contained in the initial/boundary conditions. A trend, e.g. in the exchange of heat between ocean and atmosphere as part of the global warming signal and part of the surface boundary conditions, might be regarded as deterministic. But, as has been noted in numerous publications (Ocaña et al., 2016), a 20-year record is far too short to distinguish a true deterministic trend from the long-term stochastic shifts characteristic of red-noise processes. Thus, any trend present is treated as though arising from a stochastic process. Temporal changes are discussed in two ways: (1) from the difference between the first and last years (the 20-year difference), which makes no assumptions about the nature of the trend; (2) from bootstrapped or jackknifed estimates of the trends, assumed linear, using all of the intermediate year averages.
One interesting example is the difference between the mean ocean temperature in 2013 and that in 1994 (shown for two depths in Figs. 6–7), as a constraint on the rates of global warming. This difference is again a static field and can be analyzed in the same fashion as the time-mean. The spatial pattern of warming and cooling is complicated, with large-scale structures corresponding to known physical regimes, e.g. the eastern tropical Pacific, the near-Gulf Stream system/subpolar gyre, and the Southern Ocean. As one might expect, temperature-difference variances are much larger near the surface than in the abyss, and the gridded temperature field itself is far from iid. One can proceed as above, subtracting the appropriate three-dimensional singular-vector pairs. To shorten the discussion, the temperature differences are here integrated in the vertical to different depths, including the bottom. The top-to-bottom area-weighted integrals and their histogram distribution are shown in Fig. 8.
Note that the two yearly estimates are not independent: they are connected through the time-evolving equations of motion. To the extent that any systematic error in the ECCO system is time-independent, it is subtracted out in the time-difference. The vertical integral, top-to-bottom, has no dominant singular-vector pair, the largest squared singular value accounting for less than 30% of the variance. Proceeding under the assumption that the 20-year time-difference has a vertical integral that is close to iid (q = 0), the bootstrap produces the temperature change and its two-standard-deviation uncertainty. The underlying ECCO GCM uses a constant heat capacity, so that, given the ocean mass, the temperature change corresponds to a net heat change. Over 20 years, the heating rate is approximately 0.48 ± 0.16 W/m2, including 0.095 W/m2 from geothermal heating (ECCO Consortium, 2017a). As a rough comparison, Levitus et al. (2012) estimated a comparable change, but to 2000 m only and over the interval 1955–2010, for a rate of 0.4 W/m2, and using an entirely different basis for the uncertainty estimate.
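The heating-rate conversion can be checked roughly with round-number constants for heat capacity, ocean mass, and surface area (assumed here, not the exact values used by the ECCO GCM), and the 20-year temperature change of 0.0213 °C from the trend fit discussed below:

```python
# Assumed round-number constants (not the exact ECCO values).
cp = 3994.0       # J/(kg K), constant seawater heat capacity
mass = 1.4e21     # kg, approximate ocean mass
area = 3.6e14     # m^2, approximate ocean surface area

dT = 0.0213                        # deg C change over 20 years
seconds = 20.0 * 365.25 * 86400.0  # 20 years in seconds

heat = cp * mass * dT              # net heat change, J
rate = heat / (seconds * area)     # surface heating rate, W/m^2
```

With these assumed constants the rate comes out near 0.5 W/m², consistent in magnitude with the 0.48 ± 0.16 W/m² quoted above.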
As with all the results here, these values represent lower bounds on the uncertainty. Geothermal heating itself has uncertainties, which should be separately analyzed, and the dependence of heat capacity and density on temperature, salinity, and pressure will introduce systematic errors.
The pattern of vertically integrated differences of salinity between 1994 and 2013 (Fig. 9) is already visually somewhat stochastic in character, and thus no further structure is removed (the largest singular value corresponds to only 20% of the variance). The mean salinity change between the two years is (−5.4 ± 0.84) × 10−4 from the bootstrap estimate with q = 0. Using the above expression for the addition of freshwater, the net increase in water depth is about 6 cm over the 20 years, or 3.0 ± 0.5 mm/y.
The difference in height over 20 years (Fig. 10) is 6.37 ± 0.3 cm, or an average change of 3.2 ± 0.15 mm/y, where the standard error is obtained from the bootstrap. Nerem et al. (2006) quote a rate from altimeter data alone of 3.1 ± 0.4 mm/y. Although the estimates are not independent (the state estimate uses all the altimeter data) and the altimeter data do not extend over the ice-covered regions, the results are approximately consistent. (See Lanzante, 2005, for the interpretation of overlapping uncertainty estimates.)
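Converting the 20-year height difference of 6.37 ± 0.3 cm to an annual rate is simple arithmetic:

```python
# 20-year mean sea surface height difference and its uncertainty (cm).
dh_cm, err_cm = 6.37, 0.3

# Convert to an annual rate in mm/y over the 20-year interval.
rate = dh_cm * 10.0 / 20.0       # mm/y
rate_err = err_cm * 10.0 / 20.0  # mm/y
```

Note that ±0.3 cm over 20 years corresponds to ±0.15 mm/y, not ±0.015 mm/y.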
η is calculated in the model using the full equation of state, thus accounting for the addition of freshwater, the thermal expansion from external heating, and any interior redistribution of heat and salt as functions of position and depth. Of the change in elevation over 20 years of 6.4 ± 0.3 cm, crudely about 5–6 cm is then from the addition of freshwater, with any remainder attributable to thermal expansion.
A generalization for both temperature and salinity is that the top 100 m are noisy year-to-year, but that integrals to 700 m are much cleaner and visually very close to linear. In both fields, the abyssal region, defined as the levels below 3600 m, shows a counter-trend to that of the water-column total.
The integrated temperature to various depths is shown in Fig. 11. The best-fitting linear trend over 20 years is sought; whether it is deterministic or a red-noise random walk is immaterial at this stage. The least-squares mean slope for the top-to-bottom change implies a change over 20 years of 0.0213 ± 0.0014 °C, differing only slightly from the value computed between the first and last years, as might have been inferred from Fig. 11. The standard error is computed from a bootstrap of the full field, under the assumption that the time differences are basically stochastic, which likely slightly overestimates the uncertainty. (A jackknife estimate was identical.)
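A sketch of the trend-plus-bootstrap procedure on synthetic yearly values (the slope and noise level are illustrative assumptions, not the ECCO series):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic yearly global-mean temperature anomalies: a linear trend
# plus noise (assumed values for illustration only).
years = np.arange(1994, 2014)
T = 1.1e-3 * (years - years[0]) + 2e-4 * rng.standard_normal(years.size)

# Least-squares linear trend (deg C / y).
slope = np.polyfit(years, T, 1)[0]

# Bootstrap the slope by resampling (year, value) pairs with replacement.
n_boot = 200
slopes = np.empty(n_boot)
for k in range(n_boot):
    idx = rng.integers(0, years.size, size=years.size)
    slopes[k] = np.polyfit(years[idx], T[idx], 1)[0]

# Two-standard-deviation uncertainty of the fitted slope.
two_sigma = 2.0 * slopes.std(ddof=1)
```

A jackknife (leave-one-year-out) version simply replaces the resampling loop with systematic deletion of one year at a time.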
Integrated salt anomalies are displayed for each year to several depths in Fig. 12. An overall freshening, top-to-bottom is evident, including a slight increase in salinity at and below 3600 m. This abyssal change accompanies the general cooling seen below 3600 m in Fig. 11, but this physics is not further described here.
As with temperature, the best-fitting least-squares straight line differs slightly from the calculation using only the first and last years. (For comparison, Boyer et al., 2005, estimated the trend from a much longer and much more inhomogeneous data set. No uncertainty was specified.)
Fig. 10 displays the annual spatial average anomaly values of η and the first differences in each year. The spatial patterns do not show a single dominant singular value (10 of them are required to account for 90% of the variance).
An estimate of the trend, again from a bootstrap average, is about 2 mm/y; the corresponding mean surface height change is then 4.4 ± 0.15 cm over the 20 years. Attribution of the combined heating and salinity change to an equivalent sea level trend is a complex matter that must include gravity field changes; see Church et al. (2010), Pugh and Woodworth (2014), and Forget and Ponte (2015) for complete discussions.
Although quantitative 20-year time-means and changes in the global average oceanic heat, salt, and dynamic topography (sea surface height) have been estimated, the main goal here is not those numbers per se: the intention is to begin the discussion of the separation of random and systematic errors in model property estimates. Results here are almost entirely heuristic, but the approach using resampling (bootstrap) methods applied to spatially decorrelated fields can perhaps be made rigorous. In particular, methods for separating deterministic and stochastic elements of the three-dimensional, time-dependent fields, in the absence of real knowledge of the probability distributions, should be explored. Formal errors here are sufficiently small that a reasonable expectation is that systematic ones dominate, but that is only surmise. Attention must then turn to the issue of systematic errors in the model and state estimate. These will never be zero, but because of the data-fitting in the state estimation process, they are expected to be much-reduced compared to those found in unadjusted climate models.
A full discussion of the structures and causes of the various fields appearing in the means and in the heating/cooling, salinification/freshening, elevation increases/decreases in time and space requires a specialized study of each field separately and is not attempted here.
1 The RTS smoother employs the Kalman filter as a sub-component in the numerical algorithm. Kalman filters are predictors and should not be confused with general smoothing estimators. In any case, true Kalman filters, which require continual updating of the covariance matrices, are never used with realistic large-scale fluid problems – the dimensionality is overwhelming. In practice, the prediction numerics are usually approximated forms of Wiener filters, which employ temporally fixed, guessed, covariances.
3 An alternative, not used here, would be a spectral expansion in spherical harmonics and a choice of vertical basis functions, and the exploitation of the non-random character of the coefficients of the deterministic elements.
4 Worthington’s (1981) value for temperature was 3.51 °C and for salinity was 34.72 g/kg on the older salinity scale, again with no stated uncertainty. Both are very close to the present estimated values, although pertaining to the historical period prior to about 1977.
G. Gebbie and D. Amrhein had useful comments on an early version. Suggestions from the anonymous reviewers also proved helpful.
No potential conflict of interest was reported by the author.
Boyer, T., Domingues, C. M., Good, S. A., Johnson, G. C., Lyman, J. M. and co-authors. 2016. Sensitivity of global upper-ocean heat content estimates to mapping methods, XBT bias corrections, and baseline climatologies. J. Clim. 29(13), 4817–4842.
Cazenave, A., Dieng, H.-B., Meyssignac, B., Von Schuckmann, K., Decharme, S. and co-authors. 2014. The rate of sea-level rise. Nature Clim. Change 4, 358–361. DOI: https://doi.org/10.1038/nclimate2159.
ECCO Consortium. 2017a. A twenty-year dynamical oceanic climatology: 1994–2013. Part 1: Active scalar fields: temperature, salinity, dynamic topography, mixed-layer depth, bottom pressure. MIT DSpace. Online at: http://hdl.handle.net/1721.1/107613
ECCO Consortium. 2017b. A twenty-year dynamical oceanic climatology: 1994–2013. Part 2: Velocities and property transports. MIT DSpace. Online at: http://hdl.handle.net/1721.1/109847
Forget, G., Campin, J.-M., Heimbach, P., Hill, C., Ponte, R. and co-authors. 2015. ECCO version 4: an integrated framework for non-linear inverse modeling and global ocean state estimation. Geosci. Model Dev. 8, 3071–3104.
Levitus, S., Antonov, J. I., Boyer, T. P., Garcia, H. E. and Locarnini, R. A. 2005. Linear trends of zonally averaged thermosteric, halosteric, and total steric sea level for individual ocean basins and the world ocean, (1955–1959)–(1994–1998). Geophys. Res. Lett. 32(16), L16601. DOI: https://doi.org/10.1029/2005gl023761.
Levitus, S., Antonov, J. I., Boyer, T. P., Baranova, O. K. and co-authors. 2012. World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010. Geophys. Res. Lett. 39. DOI: https://doi.org/10.1029/2012gl051106.
Millero, F. J., Feistel, R., Wright, D. G. and McDougall, T. J. 2008. The composition of standard seawater and the definition of the Reference-Composition Salinity Scale. Deep Sea Res. Part I Oceanogr. Res. Pap. 55(1), 50–72.
Ocaña, V., Zorita, E. and Heimbach, P. 2016. Stochastic trends in sea level rise. J. Geophys. Res. 121. DOI: https://doi.org/10.1002/2015JC011301.
Peltier, W. R. 2009. Closure of the budget of global sea level rise over the GRACE era: the importance and magnitudes of the required corrections for global glacial isostatic adjustment. Quat. Sci. Rev. 28(17–18), 1658–1674.
Purkey, S. G. and Johnson, G. C. 2010. Warming of global abyssal and deep Southern Ocean waters between the 1990s and 2000s: contributions to global heat and sea level rise budgets. J. Clim. 23(23), 6336–6351.
Wadhams, P. and Munk, W. 2004. Ocean freshening, sea level rising, sea ice melting. Geophys. Res. Lett. 31, L11311. DOI: https://doi.org/10.1029/2004gl020039.
Worthington, L. V. 1981. The water masses of the world ocean: some results of a fine-scale census. In: Evolution of Physical Oceanography. Scientific Surveys in Honor of Henry Stommel (eds. B. A. Warren and C. Wunsch), The MIT Press, Cambridge, pp. 42–69. Online at: http://ocw.mit.edu/ans7870/textbooks/Wunsch/wunschtext.htm
Wunsch, C. and Heimbach, P. 2013. Dynamically and kinematically consistent global ocean circulation state estimates with land and sea ice. In: Ocean Circulation and Climate, 2nd ed. (eds. J. C. G. Siedler, W. J. Gould and S. M. Griffies), Elsevier, Amsterdam, pp. 553–579.