Data assimilation combines a background state with observational data to obtain an improved state, called the analysis. In this process the weights given to the observation vector and the background state are inversely proportional to their respective error covariance matrices. Operational data assimilation systems are typically designed to use a background error covariance matrix with non-zero spatial correlations, and a diagonal observation error covariance matrix. However, if the observation errors are in fact correlated, it is known that the incorrect assumption of uncorrelated observation errors will lead to suboptimal analyses, in the sense that the root-mean-square (RMS) analysis error will be larger than if the error correlations were correctly taken into account (Liu and Rabier, 2003). To partially compensate for this, variance inflation, observation thinning, or a combination thereof, are used (Dando et al., 2007; Collard and McNally, 2009). These practices reduce the information the observations are able to provide to the analysis (Stewart et al., 2008). To overcome this shortcoming, recent progress has been made toward including the inter-channel error correlations present in satellite radiance observations (Campbell et al., 2017; Weston et al., 2014; Bormann and Bauer, 2010) and the spatially correlated observation errors present in Doppler radar radial wind observations (Waller et al., 2016). This study considers spatially correlated observation errors in the context of sea ice thickness data assimilation.
Spatially correlated observation errors can arise from various sources. In numerical weather prediction (NWP), observations typically correspond to directly observed quantities (e.g. satellite radiance), and the main contributions to observation errors are measurement errors from instrumentation (instrument errors), forward model errors (e.g. from the radiative transfer model), errors of representation, which refer to differences between the scales resolved by the model and those resolved by the observations, and errors that can arise in preprocessing (Stewart et al., 2013). For sea ice data assimilation, observations are typically obtained using retrieval algorithms. Additional sources of observation error therefore arise from errors in the auxiliary data that are input to the retrieval algorithm, interpolation of the data to a common grid, and assumptions in the retrieval. For example, ice thickness can be retrieved using brightness temperature data from the MIRAS radiometer, which is part of the Soil Moisture Ocean Salinity (SMOS) mission. The retrieval of ice thickness observations from SMOS uses the bulk ice salinity and temperature, surface air temperature and 2 m wind speed. These variables are typically provided by ice-ocean model weekly climatologies and atmospheric reanalyses (Tian-Kunze et al., 2014). In the retrieval of ice thickness from SMOS, the uncertainty associated with these variables, as well as with the brightness temperatures themselves, is used to compile a total uncertainty, which is provided in the data product. This uncertainty has been used as the observation error standard deviation in recent studies assimilating SMOS ice thickness into coupled ice-ocean models (Xie et al., 2016; Yang et al., 2014).
While the retrievals from SMOS data are limited to thin ice (ice of thickness less than 0.5 m), as the community moves toward a merged SMOS/CryoSat sea ice product that covers the entire range of observed ice thickness values (Ricker et al., 2017), it is likely that assimilation of sea ice thickness data will continue to be investigated in operational sea ice analysis systems.
To appreciate the possible need to consider correlated observation errors in future sea ice assimilation systems, consider the situation in which an analysis system is running a sea ice model at a high spatial resolution ($\approx $1 km) and is using an ensemble of model states to represent the background error covariance matrix, $\mathbf{B}$, for example as is done in an ensemble-variational approach (Shlyaeva et al., 2015). Due to the limited ensemble size, localization of the spatial correlations in the ensemble will be necessary to reduce the spurious long-range correlations. The localization scale is often chosen to be one or two orders of magnitude larger than the grid scale. For the present example, with an analysis grid scale of 1 km, consider a localization length scale for the background error correlations of 30 km. If observations having correlated errors with length scales larger than 30 km are assimilated, these observation error correlation length scales would be larger than those of the background error correlations. This would mean that at the large scales the observations are less accurate than the background state, assuming the background error standard deviations and observation error standard deviations are equal. If the analysis is calculated according to (Lahoz et al., 2010)
$${\mathit{x}}^{a}={\mathit{x}}^{b}+\mathbf{K}\left(\mathit{y}-\mathbf{H}{\mathit{x}}^{b}\right),$$
where ${\mathit{x}}^{b}$ is the background state, ${\mathit{x}}^{a}$ is the analysis, $\mathit{y}$ is the observation vector, $\mathbf{H}$ is the observation operator and $\mathbf{K}$ is the Kalman gain, $\mathbf{K}=\mathbf{B}{\mathbf{H}}^{T}{\left(\mathbf{H}\mathbf{B}{\mathbf{H}}^{T}+\mathbf{R}\right)}^{-1}$, then the magnitude of the Kalman gain should be small at the large scales, to reduce the contribution from the observations at these scales and keep the analysis close to the background state. An example of the spectral variances of the Kalman gain matrix for this case is shown in Fig. 1 by the line with markers. However, if the observations are in fact assimilated with a diagonal observation error covariance matrix and a background error covariance matrix with non-zero spatial correlations, the analysis will be heavily weighted toward the observations at the large scales. This scenario is shown by the solid line in Fig. 1. It is this latter scenario that is usually implemented in operational sea ice analysis systems.
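The analysis update and Kalman gain described above can be expressed in a few lines. This is an illustrative sketch using dense matrices and standard linear algebra, not the implementation used in the study; all function names are ours.

```python
import numpy as np

def kalman_gain(B, H, R):
    """Kalman gain K = B H^T (H B H^T + R)^{-1}, dense for clarity."""
    return B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

def analysis(xb, y, B, H, R):
    """Analysis update x^a = x^b + K (y - H x^b)."""
    K = kalman_gain(B, H, R)
    return xb + K @ (y - H @ xb)
```

With equal background and observation error variances, the analysis falls halfway between the background and the observation, as expected from the inverse-variance weighting.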
The present study builds on previous work examining the role of observation error correlations on the analysis in a data assimilation system. Some of the issues addressed herein have been examined in earlier studies: for example, incorrectly specifying the correlation structure (Stewart et al., 2013), the impact of observation error variance inflation (Stewart et al., 2008), and the difference between the estimated and actual analysis error covariance matrices (Rainwater et al., 2015). The novelty of the present study is that the impact of correlations in $\mathbf{R}$ is specifically addressed for a situation that represents observations of sea ice thickness from a simulated SMOS sensor being assimilated into a sea ice state containing sea ice thickness, velocity and concentration. Unlike previous studies, the $\mathbf{B}$ matrix is generated from an ensemble of model runs, in this case a one-dimensional dynamic-thermodynamic sea ice model that is coupled to toy models of the atmosphere and the ocean. This model is also used to generate the true state, from which the observation error standard deviation is obtained based on knowledge of the retrieval algorithm (Tian-Kunze et al., 2014). These observations have a footprint that is larger than the spacing of the analysis grid, and hence the system is not fully observed. A set of experiments is carried out in which analysis errors for ice thickness, concentration and velocity are examined when the assumed observation error covariance matrix is diagonal, but the true $\mathbf{R}$ contains spatial correlations. The difference between the actual and estimated analysis error standard deviations is then assessed as a function of wavenumber to determine the scales over which the assimilation results may be misleading due to the misspecification of $\mathbf{R}$.
The outline of the paper is as follows: the sea ice model and error covariance matrices are discussed in Section 2; the experimental setup and results are given in Sections 3 and 4, respectively; concluding remarks are given in Section 5.
Experimental data assimilation methods are often tested on lower-dimensional models before moving to full-scale prediction systems to rapidly and inexpensively screen hypotheses, and quantify the impact of alternative data assimilation methodologies. In the present study, such a model has been developed for idealized studies of sea ice data assimilation. While the model equations and solution procedure are not novel, this is the first time that such an approach has been used in sea ice data assimilation.
The sea ice model used herein implements a two-category representation of sea ice in one dimension and is based on the formulation of Hibler (1979). The grid spacing is chosen as 1 km, and there are 1000 grid points in the domain. The model variables are concentration $a$, thickness $h$ and sea ice velocity $v$. Each model grid cell has a fraction $a$ of thick ice with thickness $h$ and a fraction $(1-a)$ of either open water or very thin ice.
A two-category model was selected for this study over more complicated multi-category models to simplify the assimilation procedure. Multi-category models represent ice thickness at a given location as a distribution of partial concentrations of ice across a range of thicknesses, which can complicate the assimilation of the observational data. For example, assimilation of thickness as a single quantity is not straightforward in a multi-category model. In addition to the momentum equation, transport equations for both thickness and concentration are solved. These latter two equations have source terms representing thermodynamic growth and melt of ice, specified following Hibler (1979). The model is forced with winds and ocean currents from shallow water models representing the atmosphere and ocean, respectively. For both the model and the forcing data, periodic boundary conditions are used. The model is referred to herein as the toy model.
The toy model was designed to meet two requirements: first, that it be inexpensive and suitable for use in data assimilation experiments on a desktop computer; and second, that it produce a dynamic sea ice state with thickness and velocity distributions that are qualitatively similar to observational datasets. The model equations and validation are described in Appendix 1, with full details given in Stonebridge (2017).
The model described in Section 2.1 was used to generate an ensemble of sea ice states. A set of 500 (${n}_{\mathit{ens}}$ = 500) identical model states was initially created, in addition to a true state, ${\mathit{x}}^{t}$. Each ensemble member, ${\mathit{x}}_{i}^{b}$, had an initial thickness of 0.1 m, concentration of 10%, and zero velocity. To generate an ensemble of sea ice states with reasonable spread, both forcing data and selected model parameters were perturbed. The model parameters chosen for perturbation were the ice strength parameter, ${P}^{\ast}$, the ice-atmosphere and ice-ocean drag coefficients, ${C}_{a}$ and ${C}_{o}$, and the ice density, ${\rho}_{i}$. Perturbations were applied by multiplying the default parameter value by a factor sampled from a Gaussian distribution with a mean of 1 and a standard deviation of 0.1. The parameters were perturbed once at the beginning of the spin-up, with the values thereafter held constant. In addition to the model parameters, the ocean and wind velocities were also perturbed through multiplication, each time step, by a factor sampled from a Gaussian distribution with mean 1 and standard deviation 0.2, following Shlyaeva et al. (2015). The ocean and atmosphere velocity fields were also shifted spatially by a randomly sampled distance with a mean of zero and a standard deviation of 50 km. Note that no perturbations were applied to the true state. This follows the general procedure presented by Houtekamer and Mitchell (1998).
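The two perturbation schemes above can be sketched as follows. This is an illustrative sketch, not the study's code; the function names are ours, and the use of `np.roll` for the spatial shift assumes the periodic boundary conditions mentioned earlier.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_parameter(default, sd=0.1):
    """Multiply a default parameter (e.g. P*, C_a, C_o, rho_i) once,
    at the start of spin-up, by a factor drawn from N(1, 0.1)."""
    return default * rng.normal(1.0, sd)

def perturb_forcing(field, dx_km=1.0, sd_factor=0.2, sd_shift_km=50.0):
    """Each time step, scale a wind/current field by a factor from
    N(1, 0.2) and shift it spatially by a distance from N(0, 50 km);
    np.roll implements the shift on the periodic domain."""
    factor = rng.normal(1.0, sd_factor)
    shift_cells = int(round(rng.normal(0.0, sd_shift_km) / dx_km))
    return factor * np.roll(field, shift_cells)
```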
For each ensemble member the model was spun up over a period of thirty days. For the first fifteen days, every twenty-four hours, the growth scaling factor (see Appendix 1) was resampled from a uniform distribution spanning [$-0.2,0.5$], representing both freezing and melting conditions, while for the last fifteen the sampling was from a uniform distribution spanning [$-0.2,0.2$], representing milder thermodynamic conditions. The thirty-day freeze-up period was found sufficient to allow the ensemble spread to reach stable values, and is consistent with the spin-up periods used in Shlyaeva et al. (2015). The evolution of the ensemble spread is shown in Fig. 2. At the end of the thirty-day growth period, the mean concentration in the true state had increased to a value of 96%, while the thickness increased from 0.1 m to a mean of 0.7 m, with minimum and maximum values of 0.07 and 1.2 m. This relatively thin ice state was well-suited to experiments with a SMOS-based thickness sensor. Fig. 3 illustrates the mean and 10th and 90th percentiles of the ensemble at the end of the thirty-day growth period.
The $\mathbf{B}$ matrix was constructed from the ensemble according to Evensen (2003) and Oke et al. (2007),
$$\mathbf{B}=\frac{1}{{n}_{\mathit{ens}}-1}\sum_{i=1}^{{n}_{\mathit{ens}}}\left({\mathit{x}}_{i}^{b}-{\overline{\mathit{x}}}^{b}\right){\left({\mathit{x}}_{i}^{b}-{\overline{\mathit{x}}}^{b}\right)}^{T},$$
where ${\overline{\mathit{x}}}^{b}$ is the ensemble mean, from which the background error correlation matrix, ${\mathbf{C}}_{b}$, can be extracted using
$${\mathbf{C}}_{b}={\mathbf{D}}_{b}^{-1/2}\,\mathbf{B}\,{\mathbf{D}}_{b}^{-1/2},$$
where ${\mathbf{D}}_{b}^{1/2}$ is a diagonal matrix with elements equal to the background error standard deviations. It was found that while most of the rows of ${\mathbf{C}}_{b}$ indicated the error correlations had decayed to zero after approximately 20-100 km, there were still some cases for which there were non-zero values of correlation at much greater distances. This was due to the fact that some of the ensemble members are physically different from each other (e.g. having different ${P}^{\ast}$ values) and hence evolve with different thickness values. To reduce these long-range correlations a localization function was applied to $\mathbf{B}$ by taking the element-wise product of each of the nine $1000\times 1000$ submatrices of the background error covariance matrix with a $1000\times 1000$ circulant correlation matrix, $\mathbf{L}$. Each of the nine $1000\times 1000$ submatrices corresponds to error correlation structure between different combinations of variables from the set (a, h, v). The correlation structure for $\mathbf{L}$ was Gaussian with a decorrelation length of 150 km. This effectively damped the long-range correlations while minimally affecting the correlations up to a distance of 100 km.
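The construction of $\mathbf{B}$, the extraction of ${\mathbf{C}}_{b}$, and the localization can be sketched as below. This is an illustrative sketch: the particular Gaussian form $\exp(-d^{2}/2L^{2})$ is one plausible convention for a "decorrelation length" of 150 km, and the periodic distance matches the model's periodic domain; both are our assumptions.

```python
import numpy as np

def ensemble_covariance(X):
    """Sample covariance from an ensemble X (n_state x n_ens),
    members as columns, as in Evensen (2003)."""
    Xp = X - X.mean(axis=1, keepdims=True)
    return Xp @ Xp.T / (X.shape[1] - 1)

def correlation_matrix(B):
    """C_b = D_b^{-1/2} B D_b^{-1/2}."""
    s = np.sqrt(np.diag(B))
    return B / np.outer(s, s)

def gaussian_localization(n, dx_km=1.0, L_km=150.0):
    """Circulant Gaussian correlation matrix on a periodic 1-D grid;
    applied to each submatrix of B by an element-wise product."""
    i = np.arange(n)
    sep = np.abs(i[:, None] - i[None, :])
    d = np.minimum(sep, n - sep) * dx_km   # periodic distance
    return np.exp(-0.5 * (d / L_km) ** 2)
```

The element-wise product `B_sub * gaussian_localization(1000)` would then damp the long-range correlations in each of the nine submatrices.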
The observation error covariance matrix, $\mathbf{R}$, was defined using the decomposition
$$\mathbf{R}={\mathbf{D}}_{r}^{1/2}\,{\mathbf{C}}_{r}\,{\mathbf{D}}_{r}^{1/2},$$
where ${\mathbf{D}}_{r}^{1/2}$ is a diagonal matrix with elements equal to the observation error standard deviations, and ${\mathbf{C}}_{r}$ is a prescribed observation error correlation matrix. As noted in the introduction, when assimilating ice thickness observations from SMOS, it is typical to use the observation uncertainty provided in the data product as the observation error standard deviation. This uncertainty is a function of the ice thickness (Tian-Kunze et al., 2014). To model this behaviour, the observation error standard deviations are defined using a true observation state vector ${\mathit{y}}^{t}$ obtained by mapping the true model state, ${\mathit{x}}^{t}$, to the observation space using a linear observation operator,
$${\mathit{y}}^{t}=\mathbf{H}{\mathit{x}}^{t}.$$
To mimic the SMOS sensor sampling, $\mathbf{H}$ was specified to apply a footprint of diameter 35 km and to sample the true state at a spacing of 12.5 km. This resulted in an observation operator with dimensions of 80 by 3000. The observation operator was sparse, with non-zero entries, ${H}_{\mathit{jk}}$, in locations where thickness entries at grid point $k$ fell inside the footprint of observation $j$. The value of the non-zero coefficients in the observation operator was a constant 1/35, meaning that each grid cell within the footprint contributed equally to the observation. This footprint operator was similar to the operator used in the Regional Ice Prediction System (RIPS) for the assimilation of coarse-resolution passive microwave sea ice concentration observations (Buehner et al., 2013).
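A footprint operator with these dimensions can be sketched as below. The sketch is illustrative: it builds $\mathbf{H}$ densely for clarity, and the placement of the thickness block at columns 1000-1999 assumes a state ordering of (concentration, thickness, velocity), which is our assumption rather than something stated in the text.

```python
import numpy as np

def footprint_operator(n_grid=1000, dx_km=1.0, footprint_km=35.0,
                       spacing_km=12.5, h_offset=1000):
    """Footprint-averaging H (80 x 3000): row j averages the thickness
    cells within a 35 km footprint, with centres spaced 12.5 km apart.
    h_offset locates the thickness block in the state vector (assumed)."""
    n_state = 3 * n_grid
    n_obs = int(n_grid * dx_km / spacing_km)   # 80 observations
    w = int(footprint_km / dx_km)              # 35 cells, each weighted 1/35
    H = np.zeros((n_obs, n_state))
    for j in range(n_obs):
        centre = int(round(j * spacing_km / dx_km))
        for k in range(centre - w // 2, centre - w // 2 + w):
            H[j, h_offset + (k % n_grid)] = 1.0 / w   # periodic wrap
    return H
```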
To model the observation error standard deviations, the reported SMOS-based ice thickness uncertainties are used, which are 0.06 m for ice less than 0.3 m thick and 1.0 m for ice 1.0 m thick (Tian-Kunze et al., 2014). In the present study the error standard deviation was constant for $h\le 0.3$ m and set to a quadratic function of thickness for $h>0.3$ m, as shown in Fig. 4. The observation error standard deviations, calculated as a function of the true ice thickness, are plotted alongside the square root of the diagonal components of $\mathbf{H}\mathbf{B}{\mathbf{H}}^{T}$ in Fig. 5, where the latter represents the background error standard deviations mapped to observation space. For most of the domain, the background error standard deviation is less than the observation error standard deviation. However, where the true ice thickness is thinnest, the observation error standard deviations are comparable to or less than those of the background state.
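One such piecewise error model can be sketched as follows. Only the two endpoints (0.06 m at $h\le 0.3$ m and 1.0 m at $h=1.0$ m) are given above; the particular quadratic used here, chosen to be continuous at 0.3 m and to pass through both endpoints, is our assumption.

```python
import numpy as np

def obs_error_std(h):
    """Observation error std dev (m) as a function of true thickness h:
    constant 0.06 m for h <= 0.3 m, quadratic above, reaching 1.0 m at
    h = 1.0 m. The quadratic coefficient is an assumed fit through the
    two reported endpoints, continuous at h = 0.3 m."""
    h = np.asarray(h, dtype=float)
    c = (1.0 - 0.06) / (1.0 - 0.3) ** 2
    return np.where(h <= 0.3, 0.06, 0.06 + c * (h - 0.3) ** 2)
```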
The observation error correlation matrix was chosen to be either the identity, for the case when $\mathbf{R}$ is diagonal, or a matrix with entries given by a specified correlation function. In the present study the exponential correlation function was used, and the entries of ${\mathbf{C}}_{r}$ were ${C}_{\mathit{ij}}(d)=\exp(-3d/r)$, where $d$ is the distance between the two points $i$ and $j$, and $r$ represents the length at which the correlation decreases to a value of 0.05. Results will be shown for $r=150\phantom{\rule{0.166667em}{0ex}}km$ and $r=50\phantom{\rule{0.166667em}{0ex}}km$. Fig. 6 illustrates the correlation structures of each observation error covariance matrix alongside the correlation structure extracted from the background error covariance matrix. Additional experiments were carried out using a Gaussian correlation function for $\mathbf{R}$ (Stonebridge, 2017), with similar results to those found using the exponential function. Hence, only the latter are shown here.
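Assembling $\mathbf{R}$ from the standard deviations and the exponential correlation function can be sketched as below (an illustrative sketch; function names and the use of observation positions along the 1-D domain are ours).

```python
import numpy as np

def build_R(sigma, positions_km, r_km=None):
    """R = D_r^{1/2} C_r D_r^{1/2}. With r_km=None, C_r is the identity
    (diagonal R); otherwise C_ij = exp(-3 d_ij / r), so the correlation
    falls to about 0.05 at distance r."""
    sigma = np.asarray(sigma, dtype=float)
    positions_km = np.asarray(positions_km, dtype=float)
    if r_km is None:
        return np.diag(sigma ** 2)
    d = np.abs(positions_km[:, None] - positions_km[None, :])
    C = np.exp(-3.0 * d / r_km)
    return np.outer(sigma, sigma) * C
```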
Experiments were carried out to compare analyses from different specifications of $\mathbf{R}$. These experiments are not cycling experiments, and analyses are also not directly computed. Instead, the comparison examined the analysis error standard deviations, corresponding to the square root of the diagonal elements of the analysis error covariance matrices, which are a function of $\mathbf{R}$, $\mathbf{B}$ and $\mathbf{H}$. By comparing the analysis error covariance calculated for different specifications of $\mathbf{R}$, errors associated with sampling of the background state and observations were avoided. A similar approach was followed by Liu and Rabier (2003).
Three different experiments were carried out, corresponding to different forms for ${\mathbf{R}}_{\mathit{true}}$ (Table 1). For each experiment, three variants of the analysis error covariance matrix were computed, as described in Table 2. For the first analysis error covariance matrix, ${\mathbf{A}}_{\mathit{ideal}}$, both the Kalman gain, $\mathbf{K}$, and the analysis error covariance matrix were computed using the true observation error covariance matrix. This represents the analysis error covariance for the ideal case where the exact observation error covariance matrix was used in the assimilation. For the second variant, ${\mathbf{A}}_{\mathit{est}}$, both the Kalman gain and the analysis error covariance matrix were computed using a diagonal approximation to the observation error covariance matrix. This matrix represents the naïve estimate of the analysis error covariance matrix that may be obtained when the true observation error correlation structure is assumed to be diagonal. Based on Stewart et al. (2013), for a diagonal approximation to the true observation error covariance matrix, ${\mathbf{A}}_{\mathit{est}}$ is expected to underestimate the analysis error standard deviations for the cases where the true observation error covariance matrix is not diagonal. For the third matrix, ${\mathbf{A}}_{\mathit{actual}}$, an incorrect observation error covariance matrix was used in the computation of the Kalman gain. The analysis error covariance matrix was then constructed using the correct observation error covariance matrix and the incorrect Kalman gain matrix. Effectively, this actual analysis error covariance matrix is what would be obtained if an incorrect observation error covariance matrix were used in the assimilation while also knowing the true state.
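The three variants can be realized with standard formulas: $(\mathbf{I}-\mathbf{K}\mathbf{H})\mathbf{B}$ when the gain is optimal for the $\mathbf{R}$ used in its evaluation, and the Joseph form, which is valid for any gain, when the gain and the evaluating $\mathbf{R}$ differ. The paper's exact formulation is not reproduced here; this is a sketch under those standard assumptions.

```python
import numpy as np

def gain(B, H, R):
    """Kalman gain for a given R."""
    return B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

def analysis_error_covariances(B, H, R_true, R_est):
    """A_ideal: true R in gain and evaluation; A_est: the assumed
    (e.g. diagonal) R in both; A_actual: the assumed R in the gain but
    the true R in the evaluation (Joseph form)."""
    I = np.eye(B.shape[0])
    K_true = gain(B, H, R_true)
    K_est = gain(B, H, R_est)
    A_ideal = (I - K_true @ H) @ B
    A_est = (I - K_est @ H) @ B
    ImKH = I - K_est @ H
    A_actual = ImKH @ B @ ImKH.T + K_est @ R_true @ K_est.T
    return A_ideal, A_est, A_actual
```

When the assumed and true $\mathbf{R}$ coincide, all three variants reduce to the same matrix, which provides a useful consistency check.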
It would not be possible to calculate this analysis error covariance matrix, ${\mathbf{A}}_{\mathit{actual}}$, in an operational system where the true state was not known exactly. Healy and White (2005) introduced the concept of ${\mathbf{A}}_{\mathit{actual}}$, using a different, but equivalent formulation. Rainwater et al. (2015) referred to this matrix as the performance analysis error covariance matrix. By comparing ${\mathbf{A}}_{\mathit{est}}$ and ${\mathbf{A}}_{\mathit{actual}}$, it is possible to explore how the diagonal approximation to the observation error covariance matrix affects the perceived analysis quality. The background error covariance matrix was also included in Table 2 to represent the case where no data is assimilated. It was considered especially important to identify instances where the estimated analysis error covariance matrix contained larger error variances than the background error covariance matrix, since this would indicate that the assimilation had a negative impact on the state estimate.
The analysis error standard deviation, ${\sigma}^{a}$, was calculated for each experiment as the square root of the trace of the block submatrix of $\mathbf{A}$ corresponding to either thickness, concentration or velocity. With these measurements, the total decrease or increase of ${\sigma}^{a}$ relative to ${\sigma}^{b}$ due to the assimilation of thickness observations using an approximation to the true observation error covariance matrix can be quantified. To compare ${\sigma}^{a}$ and ${\sigma}^{b}$, the relative change in mean background error standard deviation was computed as $({\sigma}^{b}-{\sigma}^{a})/{\sigma}^{b}$. To investigate the spatial scales in the background and analysis errors, the spectral densities of the error covariances are shown. The spectral densities of a given analysis error covariance matrix $\mathbf{A}$ were computed by taking the diagonal of the matrix $\mathbf{S}\mathbf{A}{\mathbf{S}}^{T}$, where $\mathbf{S}$ denotes the discrete Fourier transform matrix.
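These two diagnostics can be sketched as below. The sketch uses the conjugate transpose of the DFT matrix so that the diagonal is real for a symmetric covariance, and the $1/n$ normalization is one common convention (for $\mathbf{A}=\mathbf{I}$ it gives a flat unit spectrum); both are our choices, not taken from the text.

```python
import numpy as np

def relative_improvement(sigma_b, sigma_a):
    """Relative change in mean error std dev, (sigma_b - sigma_a)/sigma_b."""
    return (sigma_b - sigma_a) / sigma_b

def spectral_density(A):
    """Diagonal of S A S^H / n, with S the DFT matrix built by applying
    the FFT to the identity."""
    n = A.shape[0]
    S = np.fft.fft(np.eye(n))
    return np.real(np.diag(S @ A @ S.conj().T)) / n
```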
Finally, since a common method to compensate for lack of knowledge of the true $\mathbf{R}$ matrix is to use a diagonal $\mathbf{R}$ and inflate the error variances, an additional experiment is carried out to assess this approximation. Usually the inflation factor is determined experimentally. Here, since the actual analysis error standard deviations can be computed, the inflation factor that minimizes the actual analysis error variance can be identified by repeating the experiments for which $\mathbf{R}$ was assumed to be diagonal, with $\mathbf{R}$ matrices for which the variances were inflated by factors spanning [0.1,6].
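The inflation sweep can be sketched as below: scan factors over [0.1, 6], evaluate the actual analysis error (Joseph form, as in the sketch of the three covariance variants) for each inflated diagonal $\mathbf{R}$, and keep the minimizer. The grid step of 0.1 and the use of the mean diagonal as the summary statistic are our assumptions.

```python
import numpy as np

def gain(B, H, R):
    return B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

def actual_sigma_a(B, H, R_true, R_assumed):
    """Mean actual analysis error std dev for a gain built from R_assumed
    but evaluated against R_true (Joseph form)."""
    I = np.eye(B.shape[0])
    K = gain(B, H, R_assumed)
    A = (I - K @ H) @ B @ (I - K @ H).T + K @ R_true @ K.T
    return np.sqrt(np.mean(np.diag(A)))

def best_inflation(B, H, R_true, R_diag,
                   factors=np.arange(0.1, 6.05, 0.1)):
    """Return the inflation factor in [0.1, 6] minimizing the actual
    analysis error when the assumed R is the inflated diagonal matrix."""
    sigmas = [actual_sigma_a(B, H, R_true, f * R_diag) for f in factors]
    return factors[int(np.argmin(sigmas))]
```

As a sanity check, when the true $\mathbf{R}$ is itself the diagonal matrix, the optimal factor should be 1, consistent with the result reported in Section 4.3.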
The mean analysis error standard deviations for each of the experiments are given in Table 3. The first point to note concerns the cases where ${\mathbf{R}}_{\mathit{true}}$ is used in the analysis, which corresponds to ${\mathbf{A}}_{\mathit{ideal}}$ in Table 2. It is found that the use of $\mathbf{R}$ matrices with spatial correlations resulted in analysis error standard deviations that are either smaller than or very similar to those for which $\mathbf{R}$ was diagonal, with longer correlation length scales corresponding to lower ${\sigma}^{a}$ values. Stewart et al. (2008, 2013) made a similar observation in a twin experiment with a shallow water model. The best analyses were found when the observation error covariance matrix had the largest off-diagonals, which can be represented by the largest Frobenius norm (Stewart et al., 2013). This was explained in Stewart et al. (2013) from an information theory perspective. Effectively, the use of spatially correlated observation errors incorporates more information into the analysis than the use of observations with uncorrelated errors. Consider that the goal in data assimilation is to use observational data to correct the background state, ideally over the entire range of spatial scales represented by the state. The largest scale in the background state that needs to be corrected is a single number, the spatial mean. As an increasing range of scales are considered, the number of unknowns that need to be estimated to correct the background state at a given scale increases. While all of the observations provide information about the spatial mean error, only a few observations provide information about each of the many unknowns required to correct the small scales.
However, as the observation error correlation scale increases, the small scales in the observations are more able to provide accurate information to correct the background state at these scales, as they become relatively more accurate (for fixed error variance).
To examine the impact of the diagonal approximation to ${\mathbf{R}}_{\mathit{true}}$, compare the fourth and fifth rows of Table 3, where ${\mathbf{R}}_{\mathit{est}}={\mathbf{R}}_{\mathit{diag}}$. For cases where the estimated $\mathbf{R}$ matrix was diagonal, ${\sigma}_{\mathit{actual}}^{a}$ is greater than ${\sigma}_{\mathit{est}}^{a}$, reflecting overconfidence in the analysis. For example, when ${\mathbf{R}}_{\mathit{true}}={\mathbf{R}}_{150}$, ${\sigma}_{\mathit{est}}^{a}=0.197$ m whereas ${\sigma}_{\mathit{actual}}^{a}=0.223$ m, reflecting an underestimate of the analysis error standard deviation by 13.2%. In this situation, if an ensemble Kalman filter were used in an operational setting, extra attention would likely need to be paid to avoid filter divergence. Note that for the experiment where ${\mathbf{R}}_{\mathit{true}}$ was diagonal, ${\sigma}_{\mathit{est}}^{a}$ was equal to ${\sigma}_{\mathit{actual}}^{a}$ as expected, and was 0.197 m, which represents an 11.7% improvement over the background state.
Inflating the entries of the diagonal $\mathbf{R}$ matrices had a generally positive effect for an ${\mathbf{R}}_{\mathit{true}}$ that contains correlations. When the inflated diagonal approximation to the observation error covariance matrix was used in the assimilation, the estimated mean analysis error standard deviations were approximately 0.209 m, 7.11%, and $1.32\times {10}^{-2}$ m/s, for thickness, concentration, and velocity, respectively, reflecting relative improvements over ${\sigma}^{b}$ of 6.3%, 3%, and 4.6%. For each of the experiments, ${\sigma}_{\mathit{est}}^{a}$ was consistently slightly larger than ${\sigma}_{\mathit{actual}}^{a}$, which is desirable since it represents a conservative underestimate of the analysis quality. Note that the exact values of ${\sigma}^{a}$ presented in Table 3 are sensitive to the formulation of both $\mathbf{B}$ and the model state. Larger values of ${\sigma}^{b}$ combined with smaller observation error standard deviations, ${\sigma}^{o}$, could be expected to exaggerate the results presented herein.
In the background error covariance matrix there are cross correlations between ice thickness and both ice concentration and ice velocity. These correlations lead to changes in the analysis errors for these variables when ice thickness is assimilated. It can be seen in Table 3 that there is a reduction in ${\sigma}^{a}$ for these variables relative to ${\sigma}^{b}$. While this demonstrates a benefit of assimilating sea ice thickness on the other state variables, a possible drawback is that if a poor estimate of $\mathbf{R}$ is used for thickness observations this can have an adverse impact on the ice concentration or ice velocity. It should also be noted that for the conditions simulated in this study, ice thickness assimilation has a greater impact on ice concentration than on ice velocity, as shown by the greater reduction in error standard deviation, relative to the background state, for ice concentration than for ice velocity.
The spectral densities of the background and analysis error covariance matrices for ice thickness are shown in Fig. 7. These plots can be used to determine the range of scales for which the estimated and actual analysis error covariance matrices depict an improvement (or degradation) over the background errors. From Fig. 7 it can be seen that the greatest decrease in spectral density occurred at the lowest wavenumbers. As the wavenumber approached 0.04 km${}^{-1}$ (25 km), the assimilation had a decreased impact, as expected because the observation footprint is $\approx $35 km. In Fig. 7d it can be seen that when a diagonal approximation to the true observation error covariance matrix was made, the naïve estimate of the analysis error covariance matrix, ${\mathbf{A}}_{\mathit{est}}$, predicted a positive benefit from the assimilation at all wavenumbers less than 0.04 km${}^{-1}$. However, when considering the actual analysis error covariance matrix, ${\mathbf{A}}_{\mathit{actual}}$, the spectral density of the analysis was in fact larger than that of the background error at low wavenumbers. The implication is that the analysis state may contain greater errors at very large scales when a diagonal observation error covariance matrix is specified. In Fig. 7a and b it can be seen that underestimating the correlation length scale has a less significant impact than neglecting spatial correlations entirely. This is consistent with the results from Stewart et al. (2013). It can also be seen that the use of inflation has a positive effect: the estimated and actual analysis error spectral densities are much closer with inflation than without (compare the thin solid lines with the dashed lines in Fig. 7, panels c and d). Note that the inflation factor used is the optimal value from the experiments presented in Section 4.3.
The results presented in this section are again partly sensitive to the background error covariance matrix, $\mathbf{B}$. The greatest impacts from the diagonal approximation were observed at the wavenumbers in $\mathbf{B}$ where the spectral densities were greatest. Impacts at other wavenumbers might have been observed if the background error spectral densities at these wavenumbers were larger relative to those of the observation error. Similarly, larger long-range background error correlations might have exaggerated the experimental results.
The background error covariance matrix created from the ensemble of states showed correlations between ice thickness and ice concentration of reasonable magnitude, whereas the correlations between ice thickness and ice velocity were negligible. This was also seen in examining the analysis error spectra for the ice concentration and ice velocity. The analysis error spectra for ice velocity were almost the same as the background error spectra, as expected, whereas the ice concentration spectra were slightly modified. This is shown in Fig. 8. There are two items to note in this figure. First, the ice concentration analysis errors are not very sensitive to assumptions regarding error correlations in $\mathbf{R}$, since the two curves in Fig. 8a for the analysis error look similar. This is consistent with Table 3, where ${\sigma}_{\mathit{est}}^{a}$ and ${\sigma}_{\mathit{actual}}^{a}$ reflect improvements of 6.94 and 6.98%, respectively, for ice concentration when ${\mathbf{R}}_{\mathit{true}}={\mathbf{R}}_{150}$ and ${\mathbf{R}}_{\mathit{est}}={\mathbf{R}}_{\mathit{diag}}$. Second, the range of ice concentration scales affected by the ice thickness assimilation is narrower than the range of scales affected in the ice thickness field itself.
In Section 4.2 it was demonstrated that use of an inflation factor can in part account for lack of spatial correlations in $\mathbf{R}$. Results presented in Section 4.2 used an optimal inflation factor. In this section we demonstrate the method used to find the optimal inflation factor.
Fig. 9 illustrates how the estimated and actual analysis error standard deviations vary as a function of the inflation factor. For the inflation factor experiment in which ${\mathbf{R}}_{\mathit{true}}$ is diagonal, the minimum actual analysis error standard deviation occurred at an inflation factor of 1, where the actual and estimated analysis error standard deviation curves intersected. This makes sense because when the true observation error covariance matrix is diagonal, inflation can only increase the analysis error standard deviation. For the experiments with ${\mathbf{R}}_{\mathit{true}}={\mathbf{R}}_{50}$ and ${\mathbf{R}}_{\mathit{true}}={\mathbf{R}}_{150}$, at an inflation factor of 1 the estimated analysis error standard deviation was less than the actual analysis error standard deviation. The difference between the two was largest when the observation error covariance matrix had the highest Frobenius norm, that is, ${\mathbf{R}}_{\mathit{true}}={\mathbf{R}}_{150}$. The optimal inflation factors were found to be two for ${\mathbf{R}}_{\mathit{true}}={\mathbf{R}}_{50}$ and three for ${\mathbf{R}}_{\mathit{true}}={\mathbf{R}}_{150}$. These values are consistent with Liang et al. (2012), who found typical optimal inflation factors between one and five, and with other studies (Stewart et al., 2008; Bormann and Bauer, 2010; Weston et al., 2014), which used inflation factors between two and four. When the inflation factor decreased below 0.5, the actual analysis error standard deviation exceeded the background error standard deviation while the estimated analysis error standard deviation decreased toward zero; in this regime the system is effectively grossly overconfident in a very poor analysis. This was observed in each of the experiments.
The implication is that it is very important not to underestimate the observation error standard deviation when assuming a diagonal observation error covariance matrix. When the inflation factor was greater than its optimal value, both the estimated and actual error standard deviations increased at similarly low rates, indicating that the system is less sensitive to over-inflating the observation error standard deviation. These results indicate there is value in specifying a conservatively high inflation factor.
The results presented in this study pertain to a situation in which the ensemble mean ice thickness is approximately 0.7 m and the ice cover is fairly consolidated. This was done to simulate the assimilation of ice thickness from the SMOS sensor under conditions of approximately 100% ice concentration, which is an assumption in current SMOS retrieval algorithms (Tian-Kunze et al., 2014). If the ice cover is not consolidated, and this is not taken into account in the retrieval algorithm, there will be errors in the retrieved ice thickness that are in fact due to the state of the ice concentration. In general, the brightness temperature measured by a passive microwave radiometer at low frequencies, such as those utilized by SMOS, is a function of the surface temperature and surface emissivity, which are in turn functions of the ice concentration and ice thickness (Scott et al., 2012). An alternative method to assimilate data from the SMOS sensor for sea ice estimation may be to assimilate the brightness temperature observations directly, and calculate increments for both the ice concentration and ice thickness (Scott et al., 2012). Such a method could be evaluated using the toy model described herein.
Results from this study demonstrate that when ${\mathbf{R}}_{\mathit{true}}$ has spatial correlations, diagonal approximations of $\mathbf{R}$ result in higher analysis error at low wavenumbers than when the true $\mathbf{R}$ is used. This was expected: assuming a diagonal $\mathbf{R}$ effectively assigns equal error standard deviation to all observational scales, but if ${\mathbf{R}}_{\mathit{true}}$ contains spatial correlations with a large length scale, the observations are in fact not accurate at those large scales, and this is in turn reflected in higher analysis errors at those scales.
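The mechanism can be illustrated with a minimal numerical sketch (again with hypothetical parameters and $H=I$): a gain built with a diagonal approximation is applied while the observation errors are in fact drawn from a correlated ${\mathbf{R}}_{\mathit{true}}$, and the resulting actual analysis error is compared with what the true $\mathbf{R}$ would achieve.

```python
import numpy as np

def gauss_cov(n, dx, sigma, L):
    """Gaussian covariance matrix on a periodic 1-D grid."""
    x = np.arange(n) * dx
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, n * dx - d)              # wrap-around distance
    return sigma**2 * np.exp(-0.5 * (d / L)**2)

def analysis_cov(B, R_est, R_true):
    """Believed and actual analysis error covariance when the gain is built
    with R_est but the observation errors are drawn from R_true (H = I)."""
    n = B.shape[0]
    K = B @ np.linalg.inv(B + R_est)
    ImK = np.eye(n) - K
    return ImK @ B, ImK @ B @ ImK.T + K @ R_true @ K.T

n, dx = 100, 10.0                              # 1000 km periodic domain
B = gauss_cov(n, dx, sigma=0.3, L=100.0)
R_true = gauss_cov(n, dx, sigma=0.2, L=150.0)  # 150 km correlated obs errors
R_diag = np.diag(np.diag(R_true))              # diagonal approximation

A_est, A_actual = analysis_cov(B, R_diag, R_true)
A_opt, _ = analysis_cov(B, R_true, R_true)     # gain built with the true R

sig_est = np.sqrt(np.diag(A_est).mean())       # what the system reports
sig_actual = np.sqrt(np.diag(A_actual).mean()) # what it actually achieves
sig_opt = np.sqrt(np.diag(A_opt).mean())       # achievable with the true R
```

With these illustrative parameters the diagonal approximation both underestimates its own error (`sig_est < sig_actual`) and is suboptimal relative to using the true $\mathbf{R}$.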
What is the practical impact of these results? If the observations contain spatially correlated errors, and these are properly taken into account, the small scales in the observations will be more heavily weighted in the analysis (Rainwater et al., 2015), although what constitutes a large or small scale depends on the problem definition. For the cases examined here, when both the true observation error covariance matrix and that used in the assimilation have spatial correlations with a length scale of 150 km, the analysis error standard deviation is less than the background error standard deviation for all scales greater than 50 km (Fig. 7a), whereas when a diagonal $\mathbf{R}$ is used the estimated analysis error standard deviation underestimates the actual analysis error standard deviation for scales greater than 100 km. A positive impact is also seen on the analysis error standard deviation of the ice concentration when thickness observations are assimilated using the true $\mathbf{R}$. In a continuum-based sea ice model, such as that used here, the viscosity of sea ice is a function of the spatial derivatives of concentration and thickness (see Appendix 1). Following this definition, better representation of these scales could result in a sea ice state in which the dynamics are captured more accurately.
Finally, an obvious limitation of these results is that they were obtained using a simplified 1-D ice model, rather than a quasi-operational model and actual ice thickness data. This simplification was deliberate. The results from this study may be used to guide future experiments, ranging from toy experiments to full-scale prediction systems. While little is known regarding the true error correlations of ice thickness measurements from SMOS, it may be possible to estimate the elements of the observation error covariance matrix using the Desroziers diagnostic (Desroziers et al., 2005). Although the resulting analysis equation is not trivial to solve in an operational system, owing to the large number of elements in both the background state and the observation vector, recent progress using the dual space formulation (Campbell et al., 2017) suggests that including spatially correlated observation errors in future operational systems is feasible.
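As a sketch of how the Desroziers diagnostic recovers $\mathbf{R}$ in an idealized setting (synthetic errors, $H=I$, gain built with the true covariances; all parameters hypothetical), the expectation of the outer product of analysis residuals with innovations, $E[(y - H x_a)(y - H x_b)^T]$, equals $\mathbf{R}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_cov(n, dx, sigma, L):
    """Gaussian covariance matrix on a periodic 1-D grid."""
    x = np.arange(n) * dx
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, n * dx - d)                 # wrap-around distance
    return sigma**2 * np.exp(-0.5 * (d / L)**2)

def sqrt_psd(C):
    """Symmetric square root, clipping tiny negative eigenvalues."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ V.T

n, dx, n_cycles = 60, 10.0, 6000
B = gauss_cov(n, dx, sigma=0.3, L=100.0)          # background error covariance
R = gauss_cov(n, dx, sigma=0.2, L=150.0)          # true R, unknown in practice
K = B @ np.linalg.inv(B + R)                      # gain; H = I for simplicity

Eb = rng.standard_normal((n_cycles, n)) @ sqrt_psd(B)  # background error draws
Eo = rng.standard_normal((n_cycles, n)) @ sqrt_psd(R)  # correlated obs errors
D_ob = Eo - Eb                                    # innovations  y - H x_b
D_oa = D_ob @ (np.eye(n) - K).T                   # residuals    y - H x_a

R_hat = D_oa.T @ D_ob / n_cycles                  # Desroziers estimate of R
rel_err = np.linalg.norm(R_hat - R) / np.linalg.norm(R)
```

With a few thousand assimilation cycles the sample estimate `R_hat` reproduces the correlated structure of $\mathbf{R}$ to within sampling noise; in practice the diagnostic is only approximate, since the operational gain is not built with the true covariances.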
Bormann, N. and Bauer, P. 2010. Estimates of spatial and inter-channel observation-error characteristics for numerical weather prediction. I: Methods and applications to ATOVS data. Q. J. R. Meteorol. Soc. 136, 1036–1050.
Buehner, M., Caya, A., Pogson, L., Carrieres, T. and Pestieau, P. 2013. A new Environment Canada regional ice analysis system. Atmosphere-Ocean 51(1), 18–34.
Campbell, W., Satterfield, E., Ruston, B. and Baker, N. 2017. Accounting for correlated observation error in a dual-formulation 4D variational data assimilation system. Mon. Weather Rev. 145, 1019–1032.
Collard, A. and McNally, A. 2009. The assimilation of Infrared Atmospheric Sounding Interferometer radiances at ECMWF. Q. J. R. Meteorol. Soc. 135, 1044–1058.
Dando, M., Thorpe, A. and Eyre, J. 2007. The optimal density of atmospheric sounder observations in the Met Office NWP system. Q. J. R. Meteorol. Soc. 133, 1933–1943.
Desroziers, G., Berre, L., Chapnik, B. and Poli, P. 2005. Diagnosis of observation, background and analysis-error statistics in observation space. Q. J. R. Meteorol. Soc. 131(613), 3385–3396. DOI: https://doi.org/10.1256/qj.05.108.
Evensen, G. 2003. The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dyn. 53(4), 343–367. DOI: https://doi.org/10.1007/s10236-003-0036-9.
Healy, S. and White, A. 2005. Use of discrete Fourier transforms in the 1D-Var retrieval problem. Q. J. R. Meteorol. Soc. 131(605), 63–72. DOI: https://doi.org/10.1256/qj.03.193.
Hibler, W. D. 1979. A dynamic thermodynamic sea ice model. J. Phys. Oceanogr. 9, 815–846.
Houtekamer, P. and Mitchell, H. 1998. Data assimilation using an ensemble Kalman filter technique. Mon. Weather Rev. 126(3), 796–811.
Lahoz, W., Khattatov, B. and Menard, R. 2010. Data assimilation and information. In: Data Assimilation: Making Sense of Observations (eds. W. Lahoz, B. Khattatov and R. Menard). Springer, Berlin Heidelberg, pp. 3–12.
Lemieux, J.-F., Knoll, D. A., Losch, M. and Girard, C. 2014. A second-order accurate in time implicit-explicit (IMEX) integration scheme for sea ice dynamics. J. Comput. Phys. 263, 375–392. DOI: https://doi.org/10.1016/j.jcp.2014.01.010.
Liang, X., Zheng, X., Zhang, S., Wu, G., Dai, Y. and co-authors. 2012. Maximum likelihood estimation of inflation factors on error covariance matrices for ensemble Kalman filter assimilation. Q. J. R. Meteorol. Soc. 138(662), 263–273. DOI: https://doi.org/10.1002/qj.912.
Liu, Z. and Rabier, F. 2003. The potential of high-density observations for numerical weather prediction: a study with simulated observations. Q. J. R. Meteorol. Soc. 129, 3013–3035.
Oke, P., Sakov, P. and Corney, S. 2007. Impacts of localization in the EnKF and EnOI: experiments with a small model. Ocean Dyn. 57, 32–45.
Rainwater, S., Bishop, C. H. and Campbell, W. F. 2015. The benefits of correlated observation errors for small scales. Q. J. R. Meteorol. Soc. 141(693), 3439–3445. DOI: https://doi.org/10.1002/qj.2582.
Ricker, R., Hendricks, S., Kaleschke, L., Tian-Kunze, X., King, J. and co-authors. 2017. A weekly Arctic sea-ice thickness data record from merged CryoSat-2 and SMOS satellite data. The Cryosphere, under review.
Scott, K. A., Buehner, M., Caya, A. and Carrieres, T. 2012. Direct assimilation of AMSR-E brightness temperatures for estimating sea ice concentration. Mon. Weather Rev. 140(3), 997–1013. DOI: https://doi.org/10.1175/MWR-D-11-00014.1.
Shlyaeva, A., Buehner, M., Caya, A., Lemieux, J.-F., Smith, G. C. and co-authors. 2015. Towards ensemble data assimilation for the Environment Canada regional ice prediction system. Q. J. R. Meteorol. Soc. 142, 1090–1099. DOI: https://doi.org/10.1002/qj.2712.
Stewart, L. M., Dance, S. L. and Nichols, N. K. 2008. Correlated observation errors in data assimilation. Int. J. Numer. Methods Fluids 56(8), 1521–1527. DOI: https://doi.org/10.1002/fld.1636.
Stewart, L., Dance, S. and Nichols, N. 2013. Data assimilation with correlated observation errors: experiments with a 1-D shallow water model. Tellus A 65, 1–13.
Stonebridge, G. 2017. Diagonal approximations to the observation error covariance matrix in sea ice thickness data assimilation. Master's thesis, University of Waterloo.
Thorndike, A. S., Rothrock, D. A., Maykut, G. A. and Colony, R. 1975. The thickness distribution of sea ice. J. Geophys. Res. 80(33), 4501–4513. DOI: https://doi.org/10.1029/JC080i033p04501.
Tian-Kunze, X., Kaleschke, L., Maaß, N., Mäkynen, M., Serra, N. and co-authors. 2014. SMOS-derived thin sea ice thickness: algorithm baseline, product specifications and initial verification. The Cryosphere 8(3), 997–1018. DOI: https://doi.org/10.5194/tc-8-997-2014. Online at: http://www.the-cryosphere.net/8/997/2014/
Vallis, G. K. 2006. Atmospheric and Oceanic Fluid Dynamics. Cambridge University Press, Cambridge, UK.
Waller, J., Simonin, D., Dance, S., Nichols, N. and Ballard, S. 2016. Diagnosing observation error correlations for Doppler radar radial winds in the Met Office UKV model using observation-minus-background and observation-minus-analysis statistics. Mon. Weather Rev. 144, 3533–3551.
Weston, P., Bell, W. and Eyre, J. 2014. Accounting for correlated error in the assimilation of high-resolution sounder data. Q. J. R. Meteorol. Soc. 140, 2420–2429.
Xie, J., Counillon, F., Bertino, L., Tian-Kunze, X. and Kaleschke, L. 2016. Benefits of assimilating thin sea ice thickness from SMOS into the TOPAZ system. The Cryosphere 10, 2745–2761.
Yang, Q., Losa, S. N., Losch, M., Tian-Kunze, X., Nerger, L. and co-authors. 2014. Assimilating SMOS sea ice thickness into a coupled ice-ocean model using a local SEIK filter. J. Geophys. Res. Oceans 119(10), 6680–6692. DOI: https://doi.org/10.1002/2014JC009963.
When forced by the ocean and atmosphere, the sea ice momentum equation for a two-category sea ice model in one dimension can be written as (Hibler, 1979)
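A standard form, sketched here under the assumptions that the air and water densities are absorbed into quadratic drag coefficients and that the 1-D internal ice stress is $\sigma = \zeta\,\partial u/\partial x - P/2$ (Hibler, 1979), is:

```latex
\rho_{ice}\, h \frac{\partial u}{\partial t}
  = C_a \left| u_a \right| u_a
  + C_o \left| u_o - u \right| \left( u_o - u \right)
  + \frac{\partial}{\partial x}\!\left( \zeta \frac{\partial u}{\partial x} \right)
  - \frac{1}{2}\frac{\partial P}{\partial x}
```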
where ${\rho}_{\mathit{ice}}$ represents the ice density, ${C}_{a}$ represents the ice-atmosphere drag coefficient, ${u}_{a}$ represents the wind velocity, ${C}_{o}$ represents the ice-ocean drag coefficient, and ${u}_{o}$ represents the surface ocean current velocity. The bulk viscosity is represented by $\zeta $, which can be formulated as
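A plausible formulation, following the viscous-plastic rheology of Hibler (1979) reduced to one dimension ($A$ is the ice concentration, and $\Delta_{\min}$ is a lower bound that keeps the viscosity finite as the deformation rate vanishes), is:

```latex
\zeta = \frac{P}{2 \max\!\left( \left| \partial u / \partial x \right|,\; \Delta_{\min} \right)},
\qquad
P = P^{*}\, h\, e^{-C(1-A)}
```

In two dimensions the eccentricity $e$ would enter through the shear viscosity $\eta = \zeta / e^{2}$.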
where P represents the ice strength, and ${P}^{\ast}$, C, and e are empirically-derived constants. In addition to the ice momentum equation, there are transport equations for ice thickness and concentration (Hibler, 1979):
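A sketch of the standard 1-D forms (Hibler, 1979), with $A$ the ice concentration (capped at unity), is:

```latex
\frac{\partial h}{\partial t} = -\frac{\partial (u h)}{\partial x} + S_h,
\qquad
\frac{\partial A}{\partial t} = -\frac{\partial (u A)}{\partial x} + S_a,
\qquad A \le 1
```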
where ${S}_{a}$ and ${S}_{h}$ are thermodynamic terms given by
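In Hibler (1979) these source terms take the form sketched below, with $f(h)$ the growth rate of ice of thickness $h$ and $h^{0}$ the thin-ice demarcation defined below:

```latex
S_h = f\!\left( \frac{h}{A} \right) A + (1 - A)\, f(0)
```

```latex
S_a =
\begin{cases}
\dfrac{f(0)}{h^{0}} (1 - A), & f(0) > 0 \\[4pt]
0, & f(0) \le 0
\end{cases}
\;+\;
\begin{cases}
0, & S_h > 0 \\[4pt]
\dfrac{A}{2h} S_h, & S_h < 0
\end{cases}
```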
where ${h}^{0}=0.1$ m is a fixed demarcation between thin and thick ice, and f(h) is the growth rate. The ice thickness growth rate from Hibler (1979), based on seasonal rates observed by Thorndike et al. (1975), was used. For this function the growth rates are highest when the ice is thin and has low concentration; when the ice is thicker or the concentration is 100%, there is little growth. To produce a more realistic spread of growth rates, we multiplied this growth function by a growth rate factor. For the freeze-up run, this factor was randomly sampled from a uniform distribution spanning [$-0.2,1$] every twenty-four hours to expose the ice to different thermodynamic conditions.
The sea ice equations were discretized using a central differencing scheme with velocity grid points staggered from the thickness, concentration, and viscosity grid points. To determine the ice velocities, the sea ice momentum equations were solved using the mixed implicit-explicit formulation presented by Lemieux et al. (2014). The transport equations were solved explicitly using a third-order Adams–Bashforth formulation (Gautschi, 2012). The boundary conditions were periodic. With this numerical approximation and a grid spacing of 1 km over the 1000 km domain, time steps of approximately 0.5 h could be used without instabilities developing.
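The third-order Adams–Bashforth step used for the transport equations can be sketched generically as follows (a minimal sketch, not the model code; the start-up here uses forward Euler and second-order Adams–Bashforth for the first two steps, a common bootstrapping choice):

```python
import numpy as np

def ab3_integrate(rhs, q0, dt, nsteps):
    """Advance dq/dt = rhs(q) with third-order Adams-Bashforth, bootstrapping
    the first two steps with forward Euler and second-order Adams-Bashforth."""
    q = q0.copy()
    hist = []                                   # last two rhs evaluations
    for step in range(nsteps):
        f = rhs(q)
        if step == 0:
            q = q + dt * f
        elif step == 1:
            q = q + dt * (1.5 * f - 0.5 * hist[-1])
        else:
            q = q + dt * (23.0 * f - 16.0 * hist[-1] + 5.0 * hist[-2]) / 12.0
        hist = (hist + [f])[-2:]
    return q

# Example: advect a Gaussian pulse, dq/dt = -c dq/dx, with central
# differences and periodic boundaries (as in the transport discretization)
n, dx, c = 200, 1.0, 1.0
x = np.arange(n) * dx

def rhs(q):
    return -c * (np.roll(q, -1) - np.roll(q, 1)) / (2.0 * dx)

q0 = np.exp(-0.5 * ((x - 100.0) / 10.0) ** 2)
q = ab3_integrate(rhs, q0, dt=0.2, nsteps=500)  # advect by c*t = 100 units
```

The central-difference advection operator conserves the domain integral exactly, and the three-step scheme remains stable for the purely imaginary eigenvalues of this operator at small CFL numbers.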
The novel component of the toy model is that it is driven by simulated forcing data to produce ice dynamics with a realistic velocity field. The simulated forcing data was provided by one-layer shallow water models, tuned to produce qualitatively realistic velocity fields. The governing equations for the shallow water model can be written as (Vallis, 2006)
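In one dimension, a minimal sketch of these equations is:

```latex
\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x}
  = -g \frac{\partial \eta}{\partial x},
\qquad
\frac{\partial \eta}{\partial t} + \frac{\partial (u \eta)}{\partial x} = 0
```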
where $\eta $ represents the fluid depth, u represents the fluid velocity, and g represents the acceleration due to gravity. These equations were discretized and solved on the same grid as the ice model, with u and $\eta $ at staggered grid points. The same third-order Adams–Bashforth timestepping scheme used for concentration and thickness was used to march the velocity and fluid depth forward. The time step for the shallow water model was set to one second to maintain numerical stability.
The fluid surfaces for the shallow water models were displaced from their mean elevations by adding a simulated random field of perturbations, effectively injecting potential energy across the model domain. We used a decorrelation length scale of 10 km and a Gaussian correlation function for these perturbations, as this set of parameters produced velocity fields with the widest range of scales but without resulting in numerical dispersion.
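Such perturbations can be drawn with a standard spectral (circulant) method on a periodic grid; the sketch below assumes this construction, which is not necessarily the one used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_field(n, dx, sigma, L, rng):
    """Sample a mean-zero random field on a periodic 1-D grid with variance
    sigma**2 and a Gaussian correlation function of decorrelation length L."""
    x = np.arange(n) * dx
    d = np.minimum(x, n * dx - x)                 # periodic distance from origin
    c = sigma**2 * np.exp(-0.5 * (d / L)**2)      # first row of the covariance
    s = np.maximum(np.fft.fft(c).real, 0.0)       # circulant eigenvalues (clipped)
    w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return (np.fft.ifft(np.sqrt(s) * w) * np.sqrt(n)).real

# Ensemble of perturbation fields, e.g. 256 points at 1 km spacing, L = 10 km
ens = np.stack([correlated_field(256, 1.0, 1.0, 10.0, rng) for _ in range(400)])
var_hat = ens.var(axis=0).mean()                  # should be close to sigma**2 = 1
corr_lag10 = (ens * np.roll(ens, 10, axis=1)).mean() / var_hat  # ~ exp(-0.5)
```

Because the covariance matrix is circulant on a periodic grid, its eigenvectors are Fourier modes, so each sample costs only one FFT.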
Two additional sets of parameters were included to adjust the properties of the shallow water models. First, the velocities were multiplied by a constant to produce more realistic distributions of wind and ocean current velocities. This constant was tuned to produce wind velocities between approximately $-15$ and 15 m/s and ocean velocities between $-0.3$ and 0.3 m/s. Second, the timesteps were multiplied by constant parameters to produce slower wave speeds, effectively slowing the temporal evolution of the waves. These parameters were tuned to produce wind and ocean current fields with a temporal decorrelation scale of approximately ten days. Overall, these shallow water models produced nonlinear, wave-like mechanical forcing with mixed regions of convergence and divergence. Table A1 lists the parameter values that define a standard model run.