
State-of-the-art stochastic data assimilation methods for high-dimensional non-Gaussian problems

Abstract

This paper compares several commonly used state-of-the-art ensemble-based data assimilation methods in a coherent mathematical notation. The study encompasses different methods that are applicable to high-dimensional geophysical systems, such as ocean and atmosphere models, and that provide an uncertainty estimate. Most variants of Ensemble Kalman Filters, Particle Filters and second-order exact methods are discussed, including Gaussian Mixture Filters, while methods that require an adjoint model or a tangent linear formulation of the model are excluded. The detailed description of all the methods in a mathematically coherent way provides both novices and experienced researchers with a unique overview and new insight into the workings and relative advantages of each method, theoretically and algorithmically, even leading to new filters. Furthermore, the practical implementation details of all ensemble and particle filter methods are discussed to show similarities and differences between the filters, aiding users in deciding which method to use when. Finally, pseudo-codes are provided for all of the methods presented in this paper.

Keywords:
How to Cite: Vetra-Carvalho, S., Van Leeuwen, P.J., Nerger, L., Barth, A., Altaf, M.U., Brasseur, P., Kirchgessner, P. and Beckers, J.-M., 2018. State-of-the-art stochastic data assimilation methods for high-dimensional non-Gaussian problems. Tellus A: Dynamic Meteorology and Oceanography, 70(1), p.1445364. DOI: http://doi.org/10.1080/16000870.2018.1445364
Published on 01 Jan 2018
Accepted on 19 Feb 2018. Submitted on 5 Jul 2017.
1.

Introduction

Data assimilation (DA) is the science of combining observations of a system, including their uncertainty, with estimates of that system from a dynamical model, including its uncertainty, to obtain a new and more accurate description of the system including an uncertainty estimate of that description. The uncertainty estimates point to an efficient description in terms of probability density functions, and in this paper we discuss methods that perform DA using an ensemble of model states to represent these probability density functions.

Ensemble Kalman filters are currently highly popular DA methods that are applied to a wide range of dynamical models including oceanic, atmospheric, and land surface models. The increasing popularity of ensemble Kalman filter (EnKF) methods in these fields is due to the relative ease of the filter implementation, increasing computational power and the natural evolution of the forecast errors with the dynamical model in time in EnKF schemes. However, due to technological and scientific advances, three significant developments have occurred in the last decade that force us to look beyond standard Ensemble Kalman Filtering, which is based on linear and/or Gaussian assumptions. Firstly, the continuous increase in computational capability has recently made it possible to run operational models at high resolutions, so that the dynamical models have become increasingly non-linear due to the direct resolution of small-scale non-linear processes in these models, e.g. small-scale turbulence. Secondly, in several geoscientific applications, such as atmosphere, ocean, land surface, hydrology and sea ice, it is of interest to estimate variables or parameters that are bounded, requiring DA methods that can deal with non-Gaussian distributions. Thirdly, the observational network around the world has grown manyfold for weather, ocean and land surface areas, providing more information about the real system with greater accuracy and higher spatial and temporal resolution. Often the so-called observation operators that connect model states to these new observations are non-linear, again calling for non-Gaussian DA methods.
Thus, research in non-linear DA methods, which can be applied to high-resolution dynamical models and/or complex observation operators, has seen major developments in the last decade, with the aims of understanding how existing ensemble methods cope with non-linearity in the models, of developing new ensemble methods that are more suited to non-linear dynamical models, and of exploring non-linear filters that are not limited to Gaussian distributions, such as particle filters or hybrids between particle and ensemble Kalman filters.

The origin of this paper lies in the EU-funded research project SANGOMA (Stochastic Assimilation for the Next Generation Ocean Model Applications). The project focused on generating a coherent and transparent database of current ensemble-based data assimilation methods and on the development of data assimilation tools suitable for non-linear and high-dimensional systems, concentrating on methods that do not require tangent linear approximations of the model or its adjoint. The methods described within this paper have been applied in operational oceanography, for example in the TOPAZ system (Sakov et al., 2012) or the FOAM system of the UK Met Office (see Blockley et al., 2014). While TOPAZ is already using an EnKF, FOAM applies an optimal interpolation scheme that takes in less dynamical information to estimate the error covariance matrix. That said, this paper is aimed at a very broad audience, and the data assimilation methods discussed in this paper are not limited to applications to ocean or atmosphere models; hence, the methods are presented without the context of any specific dynamical model, allowing the reader to make the most of each technique for their specific application.

A number of reviews have been published recently, each collating parts of the development in data assimilation, e.g. Bannister (2017) gives a comprehensive review of operational methods of variational and ensemble-variational data assimilation, Houtekamer and Zhang (2016) review the ensemble Kalman filter with a focus on application to atmospheric data assimilation, Law and Stuart (2012) review variational and Kalman filter methods, and Bocquet et al. (2010) present a review of concepts and ideas of non-Gaussian data assimilation methods and discuss various sources of non-Gaussianity. The merits of this paper lie within:

• coherent mathematical description of the main methods that are used in the current data assimilation community for application to high-dimensional and non-Gaussian problems allowing the reader to easily oversee the differences between the methods and compare them;
• discussing ensemble Kalman Filters, particle filters, second-order exact filters and Gaussian Mixture Filters within the same paper using consistent notation;
• inclusion of practical application aspects of these methods, discussing computational cost, parallelisation, localisation and inflation techniques;
• provision of pseudo-code algorithms for all of the presented methods;
along with inclusion of recent developments, such as the error-subspace transform Kalman filter (ESTKF) and recent particle filters, this paper goes beyond earlier reviews (e.g. Tippett et al., 2003; Hamill et al., 2006; van Leeuwen, 2009; Houtekamer and Zhang, 2016; Bannister, 2017).

The paper is organised as follows: in Section 2 the common ground through Bayes theorem is established, in Section 3 a historical overview is given for both ensemble Kalman and particle filter fields, and in Section 4 we define the basic problem solved by all of the methods presented in this paper. In Section 5 we discuss the most popular types of ensemble Kalman filter methods. Then, in Section 6, we discuss several particle filter methods that can be applied to high-dimensional problems. In Section 7, we describe ensemble filters with second-order accuracy, namely the particle filter with Gaussian resampling (PFGR), the non-linear ensemble transform filter (NETF), and the moment-matching ensemble filter. The Gaussian mixture filter is discussed in Section 8. The practical implementation of the filters including localisation, inflation, parallelisation and the computational cost as well as the aspect of non-linearity are discussed in Section 9. Finally, Appendix 1 provides pseudo codes for resampling techniques often used in particle filter methods, and Appendix 2 contains pseudo codes for all of the methods discussed in this paper.

We note that many of the filters discussed in this paper are freely available from the Sangoma project website1 along with many other tools valuable and/or necessary for data assimilation systems.

1.1.

Notation

In the data assimilation community the currently most accepted notation is described in Ide et al. (1997). We adhere to this notation where possible while also making this paper acceptable and intuitive not only to data-assimilation experts but also to a wider audience including those who might like to explore data assimilation methods simply as tools for their specific needs. To this end, throughout this paper dimensions will always be described by capital N with an underscore indicating the space in question, that is

• ${N}_{x}$ - dimension of state space;
• ${N}_{y}$ - dimension of observation space;
• ${N}_{e}$ - dimension of ensemble/particle space.
Further, the time index is always denoted in parentheses in the upper right corner of the variables, i.e. ${\left(.\right)}^{\left(m\right)}$, except for operators such as $\mathcal{M}$ (dynamical model) and $\mathcal{H}$ (observation operator), where it is in the lower right corner. However, we will omit the time index when possible to ease the notation. We will refer to each ensemble member (or each particle) by ${\mathsf{x}}_{j}$, where the index $j=1,\dots ,{N}_{e}$ and ${N}_{e}$ is the total number of ensemble members (or particles).

When discussing Bayesian theory in Sections 2 and 6, purely random variables will be denoted by capital letters, and fixed, deterministic or observed quantities will be denoted by lowercase letters. Probability density functions will be denoted by $p\left(..\right)$ and $q\left(..\right)$, and we will use lower case arguments in this context.

Throughout the paper, Greek letters will refer to various errors, e.g. observational or model errors. Finally, bold lowercase letters will denote vectors and bold uppercase letters will denote matrices.

2.

Common ground through Bayes theorem

Various types of data assimilation methods, e.g. variational, ensemble Kalman filters, particle filters, etc., have originated from different fields and backgrounds, due to the needs of a particular community or application. However, all of these methods can be unified through Bayes theorem. In this section, we will give a summary of Bayes theorem showing how both ensemble Kalman filter (EnKF) methods and particle filter (PF) methods are linked in this context and what problems each of them solve. For an introduction to the Bayesian theory for data assimilation, the reader is referred to e.g. van Leeuwen and Evensen (1996) and Wikle and Berliner (2006).

Data assimilation is an approach for combining observations with model forecasts to obtain a more accurate estimate of the state and its uncertainty. In this context, we require

• data, that is observations $\mathsf{y}$ and a knowledge of their associated error distributions and
• a prior, that is a model forecast of the state, ${\mathsf{x}}^{f}$, and knowledge of the associated forecast and model errors;
to obtain the posterior, i.e. the analysis state ${\mathsf{x}}^{a}$, and its associated error. The posterior can be computed through Bayes theorem which states that
((1) )
$\begin{array}{c}\hfill p\left(\mathsf{x}|\mathsf{y}\right)=\frac{p\left(\mathsf{y}|\mathsf{x}\right)p\left(\mathsf{x}\right)}{p\left(\mathsf{y}\right)},\end{array}$

where $p\left(\mathsf{x}|\mathsf{y}\right)$ is the posterior or analysis probability density function, $p\left(\mathsf{y}|\mathsf{x}\right)$ is the observation probability density function or also called the likelihood, $p\left(\mathsf{x}\right)$ is the prior or forecast probability density function, and $p\left(\mathsf{y}\right)$ is the marginal probability density function of the observations, which can be thought of as a normalising constant. From now on, for the ease of the readability, we will abbreviate ‘probability density function’ with ‘pdf’.
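As a minimal numerical illustration of Equation (1) (the grid, prior and observation values below are chosen purely for demonstration), the posterior of a scalar state can be computed by pointwise multiplication of prior and likelihood on a grid, with $p\left(\mathsf{y}\right)$ entering only through the normalisation:

```python
import numpy as np

# Illustrative 1-D example: posterior of a scalar state on a grid via Bayes theorem.
x = np.linspace(-5.0, 5.0, 1001)          # grid over state space
dx = x[1] - x[0]

# Prior: Gaussian forecast pdf p(x) with mean 1.0 and standard deviation 1.5
prior = np.exp(-0.5 * ((x - 1.0) / 1.5) ** 2)
prior /= prior.sum() * dx                 # normalise to a pdf on the grid

# Likelihood: observation y = 0.2 with error standard deviation 1.0 (H = identity)
y, r = 0.2, 1.0
likelihood = np.exp(-0.5 * ((y - x) / r) ** 2)

# Posterior p(x|y) ∝ p(y|x) p(x); p(y) acts only as the normalising constant
posterior = likelihood * prior
posterior /= posterior.sum() * dx

post_mean = (x * posterior).sum() * dx    # first moment of the analysis pdf
```

For this Gaussian-Gaussian case the grid result reproduces the analytic posterior mean $(\sigma_o^2\,\mu_p + \sigma_p^2\,y)/(\sigma_p^2+\sigma_o^2)$, but the same code applies unchanged to non-Gaussian priors or likelihoods.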

Typically, data assimilation methods make the Markovian assumption for the dynamical model $\mathcal{M}$ and the conditional independence assumption for the observations. That is, we assume that the model state, or the prior, at time m, when conditioned on all previous states, only depends on the state at time $m-1$,

((2) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{\left(0:T\right)}\right)=p\left({\mathsf{x}}^{\left(0\right)}\right)\prod _{m=1}^{T}p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}^{\left(m-1\right)}\right).\end{array}$

Here, the superscript 0 : T denotes the time indices from the initial time to time T; this period is typically called the assimilation window in data assimilation. Further, observations are also usually assumed to be conditionally independent, i.e. they are assumed to be independent in time,

((3) )
$\begin{array}{c}\hfill p\left({\mathsf{y}}^{\left(1:T\right)}|{\mathsf{x}}^{\left(0:T\right)}\right)=\prod _{m=1}^{T}p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}^{\left(m\right)}\right).\end{array}$

Using Equations (2) and (3) we can rewrite Bayes theorem in Equation (1) as

((4) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{\left(0:T\right)}|{\mathsf{y}}^{\left(1:T\right)}\right)\propto p\left({\mathsf{x}}^{\left(0\right)}\right)\prod _{m=1}^{T}p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}^{\left(m\right)}\right)p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}^{\left(m-1\right)}\right).\end{array}$

The Markovian assumption allows us to use new observations as they become available by updating the previous estimate of the state process without having to start the calculations from scratch. This is called sequential updating and the methods described in this paper all follow this approach.
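The sequential-updating property can be sketched for the scalar Gaussian case (all numbers below are illustrative): assimilating two independent observations one after the other yields the same posterior as assimilating them both at once, as the factorisation in Equation (4) permits.

```python
# Illustrative sketch: for a static scalar state with Gaussian pdfs, sequential
# Bayes updates reproduce the batch posterior obtained from all observations at once.
def update(mean, var, y, r):
    """Scalar Bayes update for a Gaussian prior N(mean, var) and obs error variance r."""
    k = var / (var + r)                   # scalar Kalman gain
    return mean + k * (y - mean), (1.0 - k) * var

m0, v0 = 0.0, 4.0                         # prior mean and variance
obs = [(1.0, 2.0), (-0.5, 1.0)]           # (observation, error variance) pairs

# Sequential: update with each observation in turn
m_seq, v_seq = m0, v0
for y, r in obs:
    m_seq, v_seq = update(m_seq, v_seq, y, r)

# Batch: combine both observations at once via precision (inverse variance) addition
prec = 1.0 / v0 + sum(1.0 / r for _, r in obs)
v_bat = 1.0 / prec
m_bat = v_bat * (m0 / v0 + sum(y / r for y, r in obs))
```

The two routes give identical means and variances, which is why new observations can be assimilated as they arrive without restarting the calculation.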

Ensemble Kalman filter methods solve this problem using Gaussian assumptions for both prior and likelihood pdf’s. Multiplying two Gaussian pdf’s leads again to a Gaussian pdf, i.e. the posterior or analysis pdf will also be Gaussian. The posterior pdf will have only one global maximum, which will correspond to the ensemble mean (also mode and median since the pdf is Gaussian). In other words, the posterior pdf in ensemble Kalman filter methods described in Section 5 is found in terms of the first two moments (mean and covariance) of the prior and likelihood pdf’s. This is also true when ensemble Kalman filters are applied to non-linear dynamical models or observation operators, in which case the information from higher moments in an ensemble KF analysis update is ignored. This is a shortcoming in ensemble Kalman filters when applied to non-Gaussian problems. However, in general ensemble Kalman filter methods are robust when applied to non-linear models and catastrophic filter divergence, where the filter deviates strongly from the observations while producing unrealistically small error estimates, occurs mainly due to sparse or inaccurate observations (Verlaan and Heemink, 2001; Tong et al., 2016). It should, of course, be realised that in non-linear settings the estimates of the posterior mean and covariance might be off.

In particle filter methods, the posterior is obtained using the prior and likelihood pdf’s directly in Equation (1) without restricting them to being Gaussian. If both prior and likelihood are Gaussian the resulting posterior or analysis pdf is also Gaussian and has a global maximum corresponding to the mean state. However, if either or both prior and likelihood pdf’s are non-Gaussian then the resulting posterior pdf will also not be Gaussian. In other words, if the dynamical model or mapping of the model variables to observation space are non-linear then particle filter methods will produce an analysis pdf which will provide knowledge of more than the first two statistical moments (mean and covariance), in contrast to ensemble Kalman filter methods. Thus, the analysis pdf could be skewed, multi-modal or of varying width in comparison to a Gaussian pdf. Hence, particle filters are, by design, able to produce analysis pdf’s for non-Gaussian problems. While standard particle filter methods suffer from filter divergence for large problems recently several particle filter variants have been developed that avoid this divergence.
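A minimal sketch of this importance-sampling step (the bimodal prior and all parameter values are illustrative choices, not any specific filter discussed later): particles drawn from a non-Gaussian prior are weighted by their likelihood, giving a discrete approximation of a posterior that need not be Gaussian.

```python
import numpy as np

# Illustrative importance sampling: non-Gaussian prior, Gaussian likelihood.
rng = np.random.default_rng(42)
n_e = 5000                                   # number of particles

# Bimodal prior: equal mixture of N(-2, 0.7^2) and N(2, 0.7^2)
comp = rng.random(n_e) < 0.5
particles = np.where(comp,
                     rng.normal(-2.0, 0.7, n_e),
                     rng.normal(2.0, 0.7, n_e))

# Gaussian likelihood for an observation y with error standard deviation r
y, r = 1.5, 1.0
w = np.exp(-0.5 * ((y - particles) / r) ** 2)
w /= w.sum()                                 # normalised importance weights

post_mean = np.sum(w * particles)            # weighted posterior mean
```

Here the observation near the positive mode shifts almost all weight onto that mode, so the weighted ensemble captures the skewed, effectively unimodal posterior that a Gaussian update would misrepresent.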

In what follows, we will describe numerous filtering methods in Sections 5, 6, 7, and 8 and discuss how each method attempts to produce an analysis pdf for non-Gaussian and high-dimensional problems. However, first we provide an overview of the historical development of both ensemble Kalman filter and particle filter methods to show how these fields have evolved and what has given rise to the development of each of the methods.

3.

History of filtering for data assimilation

Before we proceed to the main point of our paper – describing in unified notation the current state-of-the-art ensemble and particle filter methods for non-linear and non-Gaussian applications, their implementation, and practical application – a short summary is in order of the historical developments in both the ensemble Kalman filter and particle filter areas.

3.1.

Development history of ensemble Kalman filters

Ensemble data assimilation (EnDA) started in 1994 with the introduction of the Ensemble Kalman filter (EnKF, Evensen (1994)). The use of perturbed observations was introduced a few years later simultaneously by Burgers et al. (1998) and Houtekamer and Mitchell (1998) to correct the previously too low spread of the analysis ensemble. This filter formulation today defines the basic ‘Ensemble Kalman filter’, which we will denote as the Stochastic Ensemble Kalman Filter, with a slightly different interpretation and implementation, as will be described later. The first alternative variant of the original EnKF was introduced by Pham et al. (1998a) in the form of the Singular ‘Evolutive’ Interpolated Kalman (SEIK) filter. The SEIK filter formulates the analysis step in the space spanned by the ensemble and hence is computationally particularly efficient. In contrast to the EnKF, which was formulated as a Monte Carlo method, the SEIK filter was designed to find the analysis ensemble by writing each posterior member as a linear combination of prior members without using perturbed observations. Another ensemble Kalman filter that uses the space spanned by the ensemble was introduced with the Error-Subspace Statistical Estimation (ESSE) method (Lermusiaux and Robinson, 1999).

The filters mentioned above were all introduced for data assimilation in oceanographic problems. A new set of filter methods was introduced during the years 2001 and 2002 for meteorological applications. The Ensemble Transform Kalman Filter (ETKF, Bishop et al., 2001) was first introduced in the context of meteorological adaptive sampling. Further, the Ensemble Adjustment Kalman Filter (EAKF, Anderson, 2001) and the Ensemble Square Root Filter (EnSRF, Whitaker and Hamill, 2002) were introduced. The motivation for these three filters was to avoid the use of perturbed observations, which were found to introduce additional sampling error into the filter solution, with the meteorological community apparently being unaware of the development of the SEIK filter. The new filters were classified as ensemble square root Kalman filters and presented in a uniform notation by Tippett et al. (2003). Nerger et al. (2005a) further classified the EnKF and SEIK filters as error-subspace Kalman filters because the filters compute the correction in the error subspace spanned by the ensemble. This likewise holds for the ETKF, EAKF, and EnSRF; however, these filters do not explicitly use a basis in the error subspace but use the ensemble to represent the space. When the EAKF and EnSRF formulations are used to assimilate all observations at once, these filters exhibit a much larger computational cost compared to the ETKF. To reduce the cost, the original study on the EnSRF (Whitaker and Hamill, 2002) already introduced a variant in which observations are assimilated sequentially, which assumes that the observation errors are uncorrelated. A similar serial formulation of the EAKF was introduced by Anderson (2003). This sequential assimilation of observations was assessed by Nerger (2015), who showed that this formulation can destabilise the filtering process in cases where the observations have a strong influence.

With regard to the classification as an ensemble square root Kalman filter, the SEIK filter is the first filter method that was clearly formulated in square root form. The original EnKF uses the square root form only implicitly but an explicit square root formulation of the EnKF was presented by Evensen (2003).

The methods above all solve the original equations of the Kalman filter but use the sample covariance matrix of the ensemble to represent the state error covariance matrix. An alternative was introduced with the Maximum Likelihood Ensemble Filter (MLEF, Zupanski, 2005). This filter represents the first variant of the class of hybrid filters that were introduced in later years. The filter computes the maximum a posteriori solution (in contrast to the minimum-variance solution of the Kalman filter) by an iterative scheme.2

While the EnKFs were very successful in making the application of the Kalman filter feasible for the high-dimensional problems in oceanography and meteorology, the affordable ensemble size was always very limited. To counter the issue of sampling error in ensemble covariances (the ensemble-sampled covariance has a rank of not more than the ensemble size minus one while the applications were of very high state dimension) the method of covariance localisation was introduced by Houtekamer and Mitchell (1998) and Houtekamer and Mitchell (2001). Later, an alternative localisation was introduced for the ETKF (LETKF, Hunt et al., 2007) which uses a local analysis (also used previously, e.g. by Cohn et al. (1998)) where observations are down-weighted with increasing distance from the local analysis point through a tapering of the inverse observation covariances.

The relationship between the SEIK filter and the ETKF was investigated by Nerger et al. (2012a). The study led to a new filter formulation, the Error-Subspace Transform Kalman Filter (ESTKF), which combines the advantages of both filter formulations.

The filters mentioned above represent main developments of the ensemble Kalman filters. However, there are many other developments, which are not included here. Some of them are discussed in the sections below, in particular with regard to localisation. Overall, while there are different reviews of selections of ensemble Kalman filters, a complete and coherent overview of the different methods is still missing.

3.2.

Development history of particle filters

Particle filters, like ensemble Kalman filters, are variants of Monte Carlo methods in which the probability distribution of the model state given the observations is approximated by a number of particles; however, unlike ensemble Kalman filters, particle filters are fully non-linear data assimilation techniques. From a sampling point of view, Ensemble Kalman Filters draw samples directly from the posterior since the probability distribution function (pdf) is assumed to be a Gaussian. In a particle filter application, the shape of the posterior is not known, and hence one cannot sample directly from it. In its simplest form, samples are generated from the prior after which importance sampling is employed to turn them into samples from the posterior where each sample is weighted with its likelihood value.

Particle filters emerged before ensemble Kalman filters, and when Gordon et al. (1993) introduced resampling in the sequential scheme the method became mainstream in non-linear filtering. This basic scheme has been made more efficient for specific applications in numerous ways, like looking ahead, adding small perturbations to resampled particles to avoid them being identical, etc. (see Doucet et al., 2001 for a very useful review of the many methods available at that time). Attempts to apply the particle filter to geophysical systems are as old as 1996 (van Leeuwen and Evensen, 1996), with the first partially successful application by van Leeuwen (2003a). However, until recently, particle filters have been deemed to be computationally unfeasible for large-dimensional systems due to the filter degeneracy problem (Bengtsson et al., 2008; Snyder et al., 2008; van Leeuwen, 2009). This means that the likelihood weights vary substantially between the particles when the number of independent observations is large, such that one particle obtains a weight close to one, while all the others have weights very close to zero. New developments in the field generated particle filter variants that have been shown to work for large-dimensional systems with a limited number of particles. These methods can be divided into two classes: those that use localisation (starting with van Leeuwen, 2003b; Bengtsson et al., 2003), followed more recently by local variants of the ensemble transform particle filter (ETPF, Reich, 2013) and the Local Particle Filter (Poterjoy, 2016a), and those that exploit the future observational information via proposal densities, such as the Implicit Particle Filter (Chorin and Tu, 2009), the Equivalent Weights Particle Filter (EWPF, van Leeuwen, 2010; van Leeuwen, 2011; Ades and van Leeuwen, 2013), and the Implicit Equal Weights Particle Filter (IEWPF, Zhu et al., 2016).
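The degeneracy can be illustrated with a toy experiment (all sizes and values below are illustrative): for a fixed number of particles, the effective sample size $1/\sum_j w_j^2$ collapses towards one as the number of independent observations grows.

```python
import numpy as np

# Toy demonstration of weight degeneracy: a fixed ensemble weighted against an
# increasing number of independent observations with unit error variance.
rng = np.random.default_rng(0)
n_e = 100                                            # number of particles

def effective_sample_size(n_y):
    particles = rng.normal(0.0, 1.0, (n_e, n_y))     # samples from the prior
    obs = rng.normal(0.0, 1.0, n_y)                  # observations of a zero truth
    logw = -0.5 * np.sum((obs - particles) ** 2, axis=1)
    w = np.exp(logw - logw.max())                    # subtract max for stability
    w /= w.sum()                                     # normalised likelihood weights
    return 1.0 / np.sum(w ** 2)                      # effective sample size

ess_small = effective_sample_size(1)      # close to n_e: healthy ensemble
ess_large = effective_sample_size(1000)   # close to 1: degenerate filter
```

Working with log-weights and subtracting the maximum, as above, is also how practical particle filter implementations avoid numerical underflow of the likelihood products.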

In another development, second-order exact filters have been developed that ensure that the first two moments of the posterior pdf are consistent with the particle filter, and higher-order moments are not considered. The first paper of this kind was the Particle Filter with Gaussian Resampling of Xiong et al. (2006), followed by the Merging Particle Filter (Nakano et al., 2007) and the Moment Matching Ensemble Filter (Lei and Bickel, 2011). All these filters seem to have been developed independently. The Non-linear Ensemble Transform Filter (Tödter and Ahrens, 2015) can be considered a local version of the filter by Xiong et al. (2006), ironically again developed independently.

A further approximation to particle filtering is the Gaussian Mixture Filter first introduced in the geosciences by Bengtsson et al. (2003), followed by the adaptive Gaussian mixture filter variants (Hoteit et al., 2008; Stordal et al., 2011). The advantage of these filters over the standard particle filter is that each particle is ’dressed’ by a Gaussian such that the likelihood weights are calculated using a covariance that is broader than the pure observational covariance, leading to better behaving weights at the cost of reducing the influence of the observations on the posterior pdf (see e.g. van Leeuwen, 2009).

4.

The problem

Consider the following non-linear stochastic discrete-time dynamical system at a time when observations are available:

((5) )
$\begin{array}{cc}\hfill {\mathsf{x}}^{\left(m\right)}& ={\mathcal{M}}_{m}\left({\mathsf{x}}^{\left(m-1\right)}\right)+{\beta }^{\left(m\right)}\hfill \end{array}$
((6) )
$\begin{array}{cc}\hfill {\mathsf{y}}^{\left(m\right)}& ={\mathcal{H}}_{m}\left({\mathsf{x}}^{\left(m\right)}\right)+{\beta }_{o}^{\left(m\right)},\hfill \end{array}$

where ${\mathsf{x}}^{\left(m\right)}\phantom{\rule{3.33333pt}{0ex}}\in \phantom{\rule{3.33333pt}{0ex}}{\mathcal{R}}^{{N}_{x}}$ is the ${N}_{x}$ dimensional state vector, ${\mathsf{y}}^{\left(m\right)}\phantom{\rule{3.33333pt}{0ex}}\in {\mathcal{R}}^{{N}_{y}}$ is the observation vector of size ${N}_{y}\phantom{\rule{3.33333pt}{0ex}}\ll \phantom{\rule{3.33333pt}{0ex}}{N}_{x}$, ${\mathcal{M}}_{m}:\phantom{\rule{3.33333pt}{0ex}}{\mathcal{R}}^{{N}_{x}}\phantom{\rule{3.33333pt}{0ex}}\to {\mathcal{R}}^{{N}_{x}}$ is the forward model operator, ${\mathcal{H}}_{m}:\phantom{\rule{3.33333pt}{0ex}}{\mathcal{R}}^{{N}_{x}}\phantom{\rule{3.33333pt}{0ex}}\to \phantom{\rule{3.33333pt}{0ex}}{\mathcal{R}}^{{N}_{y}}$ is the observation operator, ${\beta }^{\left(m\right)}\phantom{\rule{3.33333pt}{0ex}}\in \phantom{\rule{3.33333pt}{0ex}}{\mathcal{R}}^{{N}_{x}}$ is the model noise (or error) distributed Gaussian with a covariance matrix ${\mathsf{Q}}^{\left(m\right)}$, and ${\beta }_{o}^{\left(m\right)}\phantom{\rule{3.33333pt}{0ex}}\in \phantom{\rule{3.33333pt}{0ex}}{\mathcal{R}}^{{N}_{y}}$ is the observation noise (or error) distributed Gaussian with covariance matrix ${\mathsf{R}}^{\left(m\right)}$.

Then we can define an ensemble of model forecasts obtained using Equation (5) for each ensemble or particle member as follows,

((7) )
$\begin{array}{c}\hfill {\mathsf{X}}^{f,\left(m\right)}=\left[{\mathsf{x}}_{1}^{f,\left(m\right)},{\mathsf{x}}_{2}^{f,\left(m\right)},\dots ,{\mathsf{x}}_{{N}_{e}}^{f,\left(m\right)}\right]\in {\mathcal{R}}^{{N}_{x}×{N}_{e}},\end{array}$

where superscript ${\left(.\right)}^{f}$ stands for forecast.

The aim of the stochastic data assimilation methods is to produce a posterior pdf or analysis distribution of the state, ${\mathsf{X}}^{a}$, at the time of the observations through combining the ensemble model forecast ${\mathsf{X}}^{f}$ with observations $\mathsf{y}$. In Section 5, we will discuss ensemble Kalman filter based methods and in Sections 6–8 we will discuss particle, second-order exact, and adaptive Gaussian mixture filter methods, all achieving this aim through different approaches.
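A minimal sketch of Equations (5)–(7) (the linear model, the operators and all dimensions below are stand-ins chosen for illustration, not from the paper):

```python
import numpy as np

# Illustrative stochastic system: a damped rotation as the forward model M.
rng = np.random.default_rng(1)
n_x, n_y, n_e = 3, 2, 10                      # N_x, N_y, N_e

A = 0.95 * np.array([[np.cos(0.1), -np.sin(0.1), 0.0],
                     [np.sin(0.1),  np.cos(0.1), 0.0],
                     [0.0,          0.0,         1.0]])
M = lambda x: A @ x                           # forward model operator M_m
Q = 0.01 * np.eye(n_x)                        # model error covariance Q
H = lambda x: x[:n_y]                         # observe the first two components
R = 0.1 * np.eye(n_y)                         # observation error covariance R

# Eq. (5): propagate each member and add model noise beta ~ N(0, Q)
X_prev = rng.normal(0.0, 1.0, (n_x, n_e))     # ensemble at time m-1
X_f = np.column_stack(
    [M(X_prev[:, j]) + rng.multivariate_normal(np.zeros(n_x), Q)
     for j in range(n_e)])                    # Eq. (7): N_x x N_e forecast matrix

# Eq. (6): a synthetic observation generated from a reference state
x_true = np.zeros(n_x)
y = H(M(x_true)) + rng.multivariate_normal(np.zeros(n_y), R)
```

The forecast matrix `X_f` and observation vector `y` are exactly the inputs that every analysis step in the following sections combines.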

5.

Ensemble Kalman filters

Given an initial ensemble ${\mathsf{X}}^{\left(0\right)}\in {\mathcal{R}}^{{N}_{x}×{N}_{e}}$, the different proposed variants of the ensemble Kalman filter have the following steps in common:

• Forecast step: the ensemble members at each time step between the observations, $0<k\le m$, are propagated using the full non-linear dynamical model:
((8) )
$\begin{array}{c}\hfill {\mathsf{x}}_{j}^{f,\left(k\right)}={\mathcal{M}}_{k}\left({\mathsf{x}}_{j}^{f,\left(k-1\right)}\right)+{\beta }_{j}^{\left(k\right)},\end{array}$
starting at the previous analysis ensemble (if $k=1$, then this would be ${\mathsf{x}}_{j}^{f,\left(1\right)}={\mathcal{M}}_{1}\left({\mathsf{x}}_{j}^{\left(0\right)}\right)+{\beta }_{j}^{\left(1\right)}$), where $j=1,\dots ,{N}_{e}$ is the ensemble member index.
• Analysis step: at the observation time $k=m$ the ensemble forecast mean and covariance are updated using the available observations to obtain a new analysis ensemble.
The various ensemble methods differ in the analysis step. Here we will discuss current methods applicable to large-dimensional systems, namely, the original ensemble Kalman filter (EnKF) (Evensen, 1994) with stochastic innovations (Burgers et al., 1998; Houtekamer and Mitchell, 1998), the singular evolutive interpolated Kalman filter (SEIK) (Pham et al., 1998a), the error-subspace statistical estimation (ESSE) (Lermusiaux and Robinson, 1999; Lermusiaux et al., 2002; Lermusiaux, 2007), the ensemble transform Kalman filter (ETKF) (Bishop et al., 2001), the ensemble adjustment Kalman filter (EAKF) (Anderson, 2001), the original ensemble square root filter (EnSRF) (Whitaker and Hamill, 2002) with synchronous and serial observation treatment, the square root formulation of the EnKF (Evensen, 2003), the error subspace transform Kalman filter (ESTKF) (Nerger et al., 2012a), and the maximum likelihood ensemble filter (MLEF) (Zupanski, 2005; Zupanski et al., 2008). We will present these methods in the square root form and point out the different ways the analysis ensemble is obtained in each of the methods. Tippett et al. (2003) give a uniform framework for EnSRFs, which we follow closely here. In the rest of this section, for ease of notation, we omit the time index ${\left(·\right)}^{\left(k\right)}$ since all of the analysis operations are done at time m.

The ensemble methods discussed in this section are based on the Kalman filter (Kalman, 1960) where the updated ensemble mean follows the Kalman update for the state, given by

((9) )
$\begin{array}{c}\hfill {\overline{\mathsf{x}}}^{a}={\overline{\mathsf{x}}}^{f}+\mathsf{K}\left(\mathsf{y}-\mathcal{H}\left({\overline{\mathsf{x}}}^{f}\right)\right)={\overline{\mathsf{x}}}^{f}+\mathsf{K}\mathsf{d}\end{array}$

where $\mathsf{d}=\mathsf{y}-\mathcal{H}\left({\overline{\mathsf{x}}}^{f}\right)$ is the innovation. The ensemble covariance update follows the covariance update equation in the Kalman Filter, given by

((10) )
$\begin{array}{c}\hfill {\mathsf{P}}^{a}=\left(\mathsf{I}-\mathsf{K}\mathsf{H}\right){\mathsf{P}}^{f},\end{array}$

where $\mathsf{K}$ is the Kalman gain given by

((11) )
$\begin{array}{c}\hfill \mathsf{K}={\mathsf{P}}^{f}{\mathsf{H}}^{\mathsf{T}}{\left(\mathsf{H}{\mathsf{P}}^{f}{\mathsf{H}}^{\mathsf{T}}+\mathsf{R}\right)}^{-1}.\end{array}$

The matrix $\mathsf{H}$ is the linearised observation operator $\mathcal{H}\left(\cdot \right)$ evaluated at the forecast mean ${\overline{\mathsf{x}}}^{f}$. Initially, the Kalman filter was derived for a linear observation operator, but in the Extended Kalman Filter the non-linear observation operator is used as above.
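As an illustration, the analysis update of Equations (9)–(11) can be sketched in a few lines of NumPy; the dimensions, the linear observation operator and the covariance matrices below are arbitrary toy choices, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the Kalman analysis update, Eqs. (9)-(11),
# for a small linear-Gaussian toy problem.
rng = np.random.default_rng(0)
Nx, Ny = 5, 3                        # state and observation dimensions

H = rng.standard_normal((Ny, Nx))    # linearised observation operator
A = rng.standard_normal((Nx, Nx))
Pf = A @ A.T + np.eye(Nx)            # forecast error covariance (SPD)
R = 0.5 * np.eye(Ny)                 # observation error covariance

xf = rng.standard_normal(Nx)         # forecast mean
y = rng.standard_normal(Ny)          # observation vector

d = y - H @ xf                                   # innovation
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain, Eq. (11)
xa = xf + K @ d                                  # analysis mean, Eq. (9)
Pa = (np.eye(Nx) - K @ H) @ Pf                   # analysis covariance, Eq. (10)
```

The assimilation of observations reduces the total analysis variance, which the trace of `Pa` makes visible.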

Since for high-dimensional systems it is computationally not feasible to form the error covariance matrix $\mathsf{P}$, the analysis update of the covariance matrix in Equation (10) is formulated in a square root form by computing a transform matrix and applying it to the ensemble perturbation matrix, which is a scaled square root of $\mathsf{P}$. That is, the analysis ensemble is then given by

((12) )
$\begin{array}{c}\hfill {\mathsf{X}}^{a}={\overline{\mathsf{X}}}^{a}+{{\mathsf{X}}^{\prime }}^{a},\end{array}$

where ${\overline{\mathsf{X}}}^{a}=\left({\overline{\mathsf{x}}}^{a},\dots ,{\overline{\mathsf{x}}}^{a}\right)\in {\mathcal{R}}^{{N}_{x}×{N}_{e}}$ is a matrix with the ensemble analysis mean in each column and the ensemble analysis perturbations are a scaled matrix square root of

((13) )
$\begin{array}{c}\hfill {\mathsf{P}}^{a}=\frac{{{\mathsf{X}}^{\prime }}^{a}{\left({{\mathsf{X}}^{\prime }}^{a}\right)}^{\mathsf{T}}}{{N}_{e}-1}.\end{array}$

To obtain the general square root form we write, using (10)

((14) )
$\begin{array}{cc}\hfill {{\mathsf{X}}^{\prime }}^{a}{\left({{\mathsf{X}}^{\prime }}^{a}\right)}^{\mathsf{T}}& =\left(\mathsf{I}-{\mathsf{P}}^{f}{\mathsf{H}}^{\mathsf{T}}{\left(\mathsf{H}{\mathsf{P}}^{f}{\mathsf{H}}^{\mathsf{T}}+\mathsf{R}\right)}^{-1}\mathsf{H}\right){{\mathsf{X}}^{\prime }}^{f}{\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}\hfill \\ \hfill & ={{\mathsf{X}}^{\prime }}^{f}\left(\mathsf{I}-{\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}^{-1}\mathsf{S}\right){\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}},\hfill \end{array}$

where $\mathsf{S}=\mathsf{H}{{\mathsf{X}}^{\prime }}^{f}$ is the ensemble perturbation matrix in observation space and

((15) )
$\begin{array}{c}\hfill \mathsf{F}=\mathsf{S}{\mathsf{S}}^{\mathsf{T}}+\left({N}_{e}-1\right)\mathsf{R}\end{array}$

is the innovation covariance.

It is possible to use a slightly different way to calculate matrix $\mathsf{S}$ using the non-linear observation operator as $\mathsf{S}=\mathcal{H}\left({\mathsf{X}}^{f}\right)-\mathcal{H}\left({\overline{\mathsf{X}}}^{f}\right)$, in which, with a slight abuse of notation, $\mathcal{H}\left({\mathsf{X}}^{f}\right)=\left(\mathcal{H}\left({\mathsf{x}}_{1}^{f}\right),\cdots ,\mathcal{H}\left({\mathsf{x}}_{{N}_{e}}^{f}\right)\right)$, and similarly for $\mathcal{H}\left({\overline{\mathsf{X}}}^{f}\right)$. This can be used in any of the ensemble Kalman filters discussed below.
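The column-wise construction of $\mathsf{S}$ with a non-linear observation operator can be sketched as follows; the quadratic operator `Hop` and all sizes are hypothetical illustrations.

```python
import numpy as np

# Sketch of forming S with a non-linear observation operator, as described
# above: S = H(X^f) - H(xbar^f), applied column-wise.
rng = np.random.default_rng(1)
Nx, Ny, Ne = 4, 2, 6

def Hop(x):
    """Hypothetical non-linear observation operator (state -> obs space)."""
    return np.array([x[0] * x[1], x[2] + x[3] ** 2])

Xf = rng.standard_normal((Nx, Ne))       # forecast ensemble
xbar = Xf.mean(axis=1)

HX = np.column_stack([Hop(Xf[:, j]) for j in range(Ne)])
S = HX - Hop(xbar)[:, None]              # Ny x Ne, used in place of H X'^f
```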

To find the updated ensemble analysis perturbations ${{\mathsf{X}}^{\prime }}^{a}$ we need to compute the square root $\mathsf{T}$ of the matrix

((16) )
$\begin{array}{c}\hfill \mathsf{I}-{\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}^{-1}\mathsf{S}=\mathsf{T}{\mathsf{T}}^{\mathsf{T}},\end{array}$

where $\mathsf{T}$ is called a transform matrix. Different ways exist to compute the transform matrix $\mathsf{T}$ and here we will discuss the current methods applicable to large-dimensional systems.

For the ensemble-based Kalman filters presented in this paper we can write the analysis update as linear transformations using a weight vector $\overline{\mathsf{w}}$ for the ensemble mean and a weight matrix ${\mathsf{W}}^{\prime }$ for the ensemble perturbations as

((17) )
$\begin{array}{cc}\hfill {\overline{\mathsf{x}}}^{a}& ={\overline{\mathsf{x}}}^{f}+{{\mathsf{X}}^{\prime }}^{f}\overline{\mathsf{w}},\hfill \end{array}$
((18) )
$\begin{array}{cc}\hfill {{\mathsf{X}}^{\prime }}^{a}& ={{\mathsf{X}}^{\prime }}^{f}{\mathsf{W}}^{\prime }.\hfill \end{array}$

Notice, that the ensemble analysis perturbation matrix, ${{\mathsf{X}}^{\prime }}^{a}$, in Equation (18) has a zero mean by construction. Further, we note that for most of the methods discussed in this section, matrix ${\mathsf{W}}^{\prime }$ is the transformation matrix $\mathsf{T}$ in Equation (16). However, this is not the case for EnKF, SEnKF and MLEF. Further, we can compute the analysis ensemble directly by

((19) )
$\begin{array}{c}\hfill {\mathsf{X}}^{a}={\overline{\mathsf{X}}}^{f}+{{\mathsf{X}}^{\prime }}^{f}\left(\overline{\mathsf{W}}+{\mathsf{W}}^{\prime }\right),\end{array}$

where $\overline{\mathsf{W}}=\left(\overline{\mathsf{w}},\dots ,\overline{\mathsf{w}}\right)$. In the sections below we will derive the weight matrices for each of the ensemble-based Kalman filter methods we discuss. The updated ensemble can then be obtained using Equation (19).
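A minimal NumPy sketch of Equations (12)–(19), using the symmetric square root of Equation (16) as ${\mathsf{W}}^{\prime }$ and mean weights $\overline{\mathsf{w}}={\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}^{-1}\mathsf{d}$, which is equivalent to the Kalman mean update of Equation (9); all sizes and operators are illustrative toy choices.

```python
import numpy as np

# Sketch of the square-root update: W' is a symmetric square root of
# I - S^T F^{-1} S (Eq. 16), wbar = S^T F^{-1} d, and the analysis
# ensemble follows Eq. (19).
rng = np.random.default_rng(2)
Nx, Ny, Ne = 6, 3, 10

Xf = rng.standard_normal((Nx, Ne))
H = rng.standard_normal((Ny, Nx))
R = 0.4 * np.eye(Ny)
y = rng.standard_normal(Ny)

xbar = Xf.mean(axis=1)
Xp = Xf - xbar[:, None]              # forecast perturbations X'^f
S = H @ Xp                           # perturbations in observation space
F = S @ S.T + (Ne - 1) * R           # Eq. (15)
d = y - H @ xbar                     # innovation

M = np.eye(Ne) - S.T @ np.linalg.solve(F, S)   # T T^T, Eq. (16)
lam, U = np.linalg.eigh(M)
T = U @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ U.T  # symmetric sqrt

wbar = S.T @ np.linalg.solve(F, d)   # mean weights
Xa = xbar[:, None] + Xp @ (wbar[:, None] + T)  # Eq. (19) with W' = T
```

The resulting analysis mean and ensemble covariance reproduce the Kalman filter update of Equations (9) and (10) exactly for this linear toy problem.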

To aid simplicity in discussing the different methods we use the same letter for the variables with the same meaning, i.e. ${\mathsf{W}}^{\prime }$ is always the perturbation analysis transform matrix that transforms ${{\mathsf{X}}^{\prime }}^{f}$ into ${{\mathsf{X}}^{\prime }}^{a}$. Clearly, such variables do not necessarily have the same values for the various methods listed below. Thus, we subscript these variables common to all methods with a specific letter for each method. This letter is underlined in the title of each subsection that follows here, e.g. for EnKF we use ${{\mathsf{W}}^{\prime }}_{N}$. Note that some of the variables can have the same values for different methods, though. At the end of this section we will provide a table of the common variables with their dimensions and whether they are equal to the same variable in a different method.

5.1.

The Stochastic Ensemble Kalman filter (E$\underline{n}$KF)

The Stochastic EnKF was introduced independently by Burgers et al. (1998) and Houtekamer and Mitchell (1998). It modifies the original under-dispersive EnKF of Evensen (1994) by adding measurement noise to the innovations, so that the filter maintains the correct spread in the analysis ensemble and filter divergence is prevented.

Although the scheme was initially interpreted as perturbing the observations, a more consistent interpretation is that the predicted observations are perturbed with the observation noise. Perturbing the observations themselves makes little sense: they already contain measurement noise (errors), e.g. from the measuring instruments, and have thus already departed from the true state of the system. Moreover, Bayes' Theorem (see Section 2) tells us that we need the probabilities of the states given this set of observations, not a perturbed set. The idea is that each ensemble member is statistically equivalent to the true state of the system, and the true observation is a perturbed measurement of the true state. To compare that observation with the predicted observations, the latter therefore have to be perturbed with the measurement noise as well, to make the comparison meaningful. The same reasoning underlies rank histograms, in which the observation is ranked among the perturbed predicted observations from the ensemble so that all are statistically equivalent.

Each ensemble member is explicitly corrected individually using the Kalman filter equations; hence the square root form is only implicit, as the transform matrix and its square root are never computed explicitly. In contrast to the other filters, the stochastic EnKF perturbs the predicted observations by forming a matrix

((20) )
$\begin{array}{c}\hfill {\mathsf{Y}}^{f}=\left(\mathcal{H}\left({\mathsf{x}}_{1}^{f}\right),\mathcal{H}\left({\mathsf{x}}_{2}^{f}\right),\dots ,\mathcal{H}\left({\mathsf{x}}_{{N}_{e}}^{f}\right)\right)+{\mathsf{Y}}^{\prime }\in {\mathcal{R}}^{{N}_{y}×{N}_{e}},\end{array}$

where the observational noise (perturbation) matrix ${\mathsf{Y}}^{\prime }$ is given by:

((21) )
$\begin{array}{c}\hfill {\mathsf{Y}}^{\prime }=\left({ϵ}_{1},{ϵ}_{2},\dots ,{ϵ}_{{N}_{e}}\right)\in {\mathcal{R}}^{{N}_{y}×{N}_{e}}\end{array}$

with the noise vectors ${ϵ}_{j}$ drawn from a Gaussian distribution with mean zero and covariance $\mathsf{R}$. We also introduce the observation matrix $\mathsf{Y}=\left(\mathsf{y},\mathsf{y},\dots ,\mathsf{y}\right)\in {\mathcal{R}}^{{N}_{y}×{N}_{e}}$ consisting of ${N}_{e}$ identical copies of the observation vector.

The Stochastic EnKF uses the matrix $\mathsf{F}$ defined in Equation (15) with prescribed matrix $\mathsf{R}$ and proceeds by transforming all ensemble members according to

((22) )
$\begin{array}{c}\hfill {\mathsf{X}}^{a}={\mathsf{X}}^{f}+{{\mathsf{X}}^{\prime }}^{f}{\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}_{N}^{-1}\left(\mathsf{Y}-{\mathsf{Y}}^{f}\right).\end{array}$

Similar to Equations (17)–(19), this can be written as

((23) )
$\begin{array}{c}\hfill {\mathsf{X}}^{a}={\mathsf{X}}^{f}+{{\mathsf{X}}^{\prime }}^{f}{{\mathsf{W}}^{\prime }}_{N}\end{array}$

with

((24) )
$\begin{array}{c}\hfill {{\mathsf{W}}^{\prime }}_{N}={\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}_{N}^{-1}\left(\mathsf{Y}-{\mathsf{Y}}^{f}\right).\end{array}$

Due to the use of the observation ensemble $\mathsf{Y}$ no explicit transformation of the ensemble mean needs to be performed.

Algorithm  in Appendix 2 gives a pseudo-algorithm of the EnKF method.

We note that, while the above description of the stochastic EnKF is widely accepted and implemented, it produces the correct posterior covariance only in a statistical sense, because of the extra sampling error introduced by the perturbations. The ensemble mean is not affected by this sampling error if the observation noise matrix ${\mathsf{Y}}^{\prime }$ is constructed to have zero mean. In the limit of infinite ensemble size, and when all sources of error (both observation and model) are correctly sampled, the stochastic EnKF does produce the correct posterior covariance (Whitaker and Hamill, 2002).
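The stochastic EnKF analysis can be sketched as follows: the predicted observations are perturbed with draws from $N(\mathsf{0},\mathsf{R})$, re-centred so that the noise has exactly zero mean, and every member is updated individually with the Kalman gain of Equation (11). All sizes, the linear observation operator and $\mathsf{R}$ are illustrative.

```python
import numpy as np

# Sketch of the stochastic EnKF update (Section 5.1): perturbed predicted
# observations (Eqs. 20-21) and a per-member Kalman correction.
rng = np.random.default_rng(3)
Nx, Ny, Ne = 6, 3, 500

Xf = rng.standard_normal((Nx, Ne))
H = rng.standard_normal((Ny, Nx))
R = 0.4 * np.eye(Ny)
y = rng.standard_normal(Ny)

Xp = Xf - Xf.mean(axis=1, keepdims=True)
Pf = Xp @ Xp.T / (Ne - 1)
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain, Eq. (11)

Eps = rng.multivariate_normal(np.zeros(Ny), R, size=Ne).T
Eps -= Eps.mean(axis=1, keepdims=True)   # noise matrix Y' with zero mean
Yf = H @ Xf + Eps                        # perturbed predicted observations
Y = np.tile(y[:, None], (1, Ne))         # Ne copies of the observation

Xa = Xf + K @ (Y - Yf)                   # every member updated individually
```

Because the noise is re-centred, the analysis ensemble mean coincides with the deterministic Kalman mean update, while the spread carries the stochastic correction.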

5.2.

The singular evolutive interpolated Kalman filter (S$\underline{E}$IK)

The SEIK filter (Pham et al., 1998b; Pham, 2001) was the first filter method that allowed for non-linear model evolution and that was explicitly formulated in square root form. The filter uses the Sherman–Morrison–Woodbury identity (Golub and Van Loan, 1996) to rewrite $\mathsf{T}{\mathsf{T}}^{\mathsf{T}}$ (Equation 16) as

((25) )
$\begin{array}{c}\hfill \mathsf{T}{\mathsf{T}}^{\mathsf{T}}=\mathsf{I}-{\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}^{-1}\mathsf{S}={\left(\mathsf{I}+\frac{1}{{N}_{e}-1}{\mathsf{S}}^{\mathsf{T}}{\mathsf{R}}^{-1}\mathsf{S}\right)}^{-1}.\end{array}$

Note, that the performance of this scheme depends on whether the product of the inverse of the observation error matrix, ${\mathsf{R}}^{-1}$, and a given vector can be efficiently computed, which is for instance the case when we assume that the observation errors are uncorrelated.

The SEIK filter computes the analysis step in the ensemble error subspace. This is achieved by defining a matrix

((26) )
$\begin{array}{c}\hfill {\mathsf{L}}_{E}={\mathsf{X}}^{f}{\mathsf{A}}_{E},\end{array}$

where ${\mathsf{A}}_{E}\in {\mathcal{R}}^{{N}_{e}×\left({N}_{e}-1\right)}$ is a matrix with full rank and zero column sums. Commonly, matrix ${\mathsf{A}}_{E}$ is identified as

((27) )
$\begin{array}{c}\hfill {\mathsf{A}}_{E}=\left[\begin{array}{c}{\mathsf{I}}_{{N}_{e}-1×{N}_{e}-1}\\ {\mathsf{0}}_{1×{N}_{e}-1}\end{array}\right]-\frac{1}{{N}_{e}}\left[{\mathsf{1}}_{{N}_{e}×{N}_{e}-1}\right],\end{array}$

where $\mathsf{0}$ is a matrix whose elements are equal to zero and $\mathsf{1}$ is a matrix whose elements are equal to one (Pham et al., 1998b). Matrix ${\mathsf{A}}_{E}$ implicitly subtracts the ensemble mean when the matrix $\mathsf{L}$ is computed. In addition, ${\mathsf{A}}_{E}$ removes the last column of ${{\mathsf{X}}^{\prime }}^{f}$. Thus, $\mathsf{L}$ is an ${N}_{x}×{N}_{e}-1$ matrix that holds the first ${N}_{e}-1$ ensemble perturbations. The product of the square root matrices in the ensemble error space now becomes

((28) )
$\begin{array}{c}\hfill {\mathsf{T}}_{E}{\mathsf{T}}_{E}^{\mathsf{T}}={\left({\mathsf{A}}_{E}^{\mathsf{T}}{\mathsf{A}}_{E}+\frac{1}{{N}_{e}-1}{\left(\mathsf{H}{\mathsf{L}}_{E}\right)}^{\mathsf{T}}{\mathsf{R}}^{-1}\left(\mathsf{H}{\mathsf{L}}_{E}\right)\right)}^{-1}.\end{array}$

The matrix ${\mathsf{T}}_{E}{\mathsf{T}}_{E}^{\mathsf{T}}$ is of size ${N}_{e}-1×{N}_{e}-1$. The square root ${\mathsf{T}}_{E}$ is obtained from the Cholesky decomposition of ${\left({\mathsf{T}}_{E}{\mathsf{T}}_{E}^{\mathsf{T}}\right)}^{-1}$. Then, the ensemble transformation weight matrices in Equations (17)–(19) are given by

((29) )
$\begin{array}{cc}\hfill {{\mathsf{W}}^{\prime }}_{E}& ={\mathsf{A}}_{E}{\mathsf{T}}_{E}\mathrm{\Omega },\hfill \end{array}$
((30) )
$\begin{array}{cc}\hfill {\overline{\mathsf{w}}}_{E}& =\frac{1}{{N}_{e}-1}{\mathsf{A}}_{E}{\mathsf{T}}_{E}{\mathsf{T}}_{E}^{\mathsf{T}}{\left(\mathsf{H}{\mathsf{L}}_{E}\right)}^{\mathsf{T}}{\mathsf{R}}^{-1}\mathsf{d}.\hfill \end{array}$

Here, the columns of $\mathrm{\Omega }\in {\mathcal{R}}^{{N}_{e}-1×{N}_{e}}$ are orthonormal and orthogonal to the vector ${\left(1,\dots ,1\right)}^{\mathsf{T}}$. $\mathrm{\Omega }$ can be either random or a deterministic rotation matrix. However, if a deterministic $\mathrm{\Omega }$ is used, then Nerger et al. (2012a) show that a symmetric square root of ${\mathsf{T}}_{E}{\mathsf{T}}_{E}^{\mathsf{T}}$ should be used for a more stable ensemble.
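The matrix ${\mathsf{A}}_{E}$ of Equation (27) is easily constructed explicitly; the small sketch below verifies that applying it subtracts the ensemble mean and retains the first ${N}_{e}-1$ perturbations. Sizes are illustrative.

```python
import numpy as np

# Sketch of the SEIK matrix A_E, Eq. (27): identity stacked on a zero row,
# minus a constant matrix 1/Ne, giving full rank and zero column sums.
rng = np.random.default_rng(4)
Nx, Ne = 5, 4

A = np.vstack([np.eye(Ne - 1), np.zeros((1, Ne - 1))]) \
    - np.ones((Ne, Ne - 1)) / Ne

Xf = rng.standard_normal((Nx, Ne))
L = Xf @ A                                   # Eq. (26)
Xp = Xf - Xf.mean(axis=1, keepdims=True)     # explicit perturbations X'^f
```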

Algorithm  in Appendix 2 gives a pseudo-algorithm of the SEIK method.

5.3.

The error-subspace statistical estimation (E$\underline{S}$SE)

The ESSE (Lermusiaux and Robinson, 1999; Lermusiaux et al., 2002; Lermusiaux, 2007) method is based on evolving an error subspace of variable size, that spans and tracks the scales and processes where the dominant errors occur (Lermusiaux et al., 2002). Here, we follow the formulation of Lermusiaux (2007) adapted to the unified notation used here.

The consideration of an evolving error subspace is analogous to the motivation of the SEIK filter. The main difference to other subspace filters mentioned here is how the ensemble matrix is truncated. That is, the full ensemble perturbation matrix ${{\mathsf{X}}^{\prime }}^{f}$ at the current analysis time with columns

$\begin{array}{c}\hfill {{\mathsf{x}}^{\prime }}_{j}^{f}={\mathsf{x}}_{j}^{f}-{\overline{\mathsf{x}}}^{f},\phantom{\rule{4pt}{0ex}}j=1,\dots ,{N}_{e}\end{array}$

is approximated by the fastest growing singular vectors. The full ensemble perturbation matrix is decomposed using the reduced or thin singular value decomposition (SVD), (e.g. p. 72, Golub and Van Loan, 1996),

((31) )
$\begin{array}{c}\hfill {\mathsf{U}}_{S}{\mathrm{\Sigma }}_{S}{\mathsf{V}}_{S}^{\mathsf{T}}={{\mathsf{X}}^{\prime }}^{f},\end{array}$

where ${\mathsf{U}}_{S}\in {\mathcal{R}}^{{N}_{x}×{N}_{e}}$ is an orthogonal matrix of left singular vectors, ${\mathrm{\Sigma }}_{S}\in {\mathcal{R}}^{{N}_{e}×{N}_{e}}$ is a diagonal matrix with singular values on the diagonal, and ${\mathsf{V}}_{S}^{\mathsf{T}}\in {\mathcal{R}}^{{N}_{e}×{N}_{e}}$ is an orthogonal matrix of right singular vectors of ${{\mathsf{X}}^{\prime }}^{f}$. Next, normalised eigenvalues are computed via

((32) )
$\begin{array}{c}\hfill {\mathsf{E}}_{S}=\frac{1}{{N}_{e}-1}{\mathrm{\Sigma }}_{S}^{2}.\end{array}$

The matrices ${\mathsf{U}}_{S}$ and ${\mathsf{E}}_{S}$ are truncated to the leading eigenvalues. Using ${\stackrel{~}{\mathsf{U}}}_{S}$, ${\stackrel{~}{\mathsf{E}}}_{S}$ with rank ${\stackrel{~}{N}}_{e}\le {N}_{e}$ and ${\stackrel{^}{\mathsf{U}}}_{S}$, ${\stackrel{^}{\mathsf{E}}}_{S}$ with rank $\stackrel{^}{p}<{\stackrel{~}{N}}_{e}$, the similarity coefficient $\rho$ is computed via

((33) )
$\begin{array}{c}\hfill \rho =\frac{\phantom{\rule{0.333333em}{0ex}}\text{Tr}\left({\stackrel{^}{\mathsf{E}}}_{S}^{\frac{1}{2}}{\stackrel{^}{\mathsf{U}}}_{S}^{\mathsf{T}}{\stackrel{~}{\mathsf{U}}}_{S}{\stackrel{~}{\mathsf{E}}}_{S}^{\frac{1}{2}}\right)}{\phantom{\rule{0.333333em}{0ex}}\text{Tr}\left({\stackrel{~}{\mathsf{E}}}_{S}\right)}\end{array}$

and $\phantom{\rule{0.333333em}{0ex}}\text{Tr}\left(.\right)$ is the trace of a matrix. $\rho$ measures the similarity between two subspaces of different sizes. The process of reducing the subspace is repeated until $\rho$ is close to one, i.e. $\rho >\alpha$ where $1-ϵ\le \alpha \le 1$ is a user-selected scalar limit. The dimension of the error subspace thus varies with time, in accord with the model dynamics (Lermusiaux, 2007). Hence, in the following analysis update the reduced rank approximations

((34) )
$\begin{array}{ccc}\hfill {\stackrel{~}{\mathsf{U}}}_{S}& \approx \hfill & \hfill {\mathsf{U}}_{S},\end{array}$
((35) )
$\begin{array}{ccc}\hfill {\stackrel{~}{\mathsf{E}}}_{S}& \approx \hfill & \hfill {\mathsf{E}}_{S},\end{array}$
((36) )
$\begin{array}{ccc}\hfill {\stackrel{~}{\mathsf{V}}}_{S}& \approx \hfill & \hfill {\mathsf{V}}_{S}\end{array}$

are used where the right singular vector matrix ${\stackrel{~}{\mathsf{V}}}_{S}$ is also truncated to have size ${\stackrel{~}{N}}_{e}×{\stackrel{~}{N}}_{e}$.
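The truncation test of Equations (31)–(33) can be sketched as follows. The helper `rho` compares a rank-$p$ subspace against a larger rank-$q$ one; both come from the same SVD here, so comparing a subspace with itself returns one. Sizes are illustrative.

```python
import numpy as np

# Sketch of the ESSE similarity coefficient, Eqs. (31)-(33): thin SVD of
# the perturbation matrix, normalised eigenvalues, and the trace ratio rho.
rng = np.random.default_rng(5)
Nx, Ne = 40, 10

Xp = rng.standard_normal((Nx, Ne))
Xp -= Xp.mean(axis=1, keepdims=True)                 # zero-mean perturbations

U, sig, Vt = np.linalg.svd(Xp, full_matrices=False)  # Eq. (31)
E = sig ** 2 / (Ne - 1)                              # Eq. (32)

def rho(p, q):
    """Similarity of the rank-p subspace (hat) to the rank-q one (tilde)."""
    Uh, Eh = U[:, :p], E[:p]
    Ut, Et = U[:, :q], E[:q]
    num = np.trace(np.diag(np.sqrt(Eh)) @ Uh.T @ Ut @ np.diag(np.sqrt(Et)))
    return num / Et.sum()
```

In practice $p$ would be reduced until `rho(p, q)` drops towards the limit $\alpha$; a strict subspace gives a value below one.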

The product of the square root matrices, using Equation (14), in the error subspace becomes

((37) )
$\begin{array}{c}\hfill {\mathsf{T}}_{S}{\mathsf{T}}_{S}^{\mathsf{T}}=\mathsf{I}-{\stackrel{~}{\mathsf{S}}}^{\mathsf{T}}{\stackrel{~}{\mathsf{F}}}^{-1}\stackrel{~}{\mathsf{S}}\end{array}$

where ensemble errors in observation space are given by $\stackrel{~}{\mathsf{S}}=\mathsf{H}{\stackrel{~}{\mathsf{U}}}_{S}{\stackrel{~}{\mathsf{E}}}_{S}^{\frac{1}{2}}{\stackrel{~}{\mathsf{V}}}_{S}^{\mathsf{T}}$ and the innovation covariance by $\stackrel{~}{\mathsf{F}}=\mathsf{H}{\stackrel{~}{\mathsf{U}}}_{S}{\stackrel{~}{\mathsf{E}}}_{S}{\stackrel{~}{\mathsf{U}}}_{S}^{\mathsf{T}}{\mathsf{H}}^{\mathsf{T}}+\mathsf{R}$.

The inverse of the ${N}_{y}×{N}_{y}$-matrix $\stackrel{~}{\mathsf{F}}$ is obtained by performing the eigenvalue decomposition (EVD)

((38) )
$\begin{array}{c}\hfill \stackrel{~}{\mathsf{F}}=\mathrm{\Gamma }\mathrm{\Lambda }{\mathrm{\Gamma }}^{\mathsf{T}}\end{array}$

so that Equation (37) becomes

((39) )
$\begin{array}{c}\hfill {\mathsf{T}}_{S}{\mathsf{T}}_{S}^{\mathsf{T}}=\mathsf{I}-{\stackrel{~}{\mathsf{S}}}^{\mathsf{T}}\mathrm{\Gamma }{\mathrm{\Lambda }}^{-1}{\mathrm{\Gamma }}^{\mathsf{T}}\stackrel{~}{\mathsf{S}}.\end{array}$

Performing another EVD in Equation (39),

((40) )
$\begin{array}{c}\hfill {\mathsf{T}}_{S}{\mathsf{T}}_{S}^{\mathsf{T}}=\mathsf{Z}\mathrm{\Pi }{\mathsf{Z}}^{\mathsf{T}},\end{array}$

the symmetric square root becomes

((41) )
$\begin{array}{c}\hfill {\mathsf{T}}_{S}=\mathsf{Z}{\mathrm{\Pi }}^{\frac{1}{2}}{\mathsf{Z}}^{T}.\end{array}$

Hence, the ensemble transformation weight matrices needed to form the ensemble analysis mean and analysis perturbations in Equations (17)–(19) are given by

((42) )
$\begin{array}{cc}\hfill {{\mathsf{W}}^{\prime }}_{S}^{a}& =\mathsf{Z}{\mathrm{\Pi }}^{\frac{1}{2}}{\mathsf{Z}}^{\mathsf{T}}\hfill \end{array}$
((43) )
$\begin{array}{cc}\hfill {\overline{\mathsf{w}}}_{S}^{a}& =\frac{1}{\sqrt{{N}_{e}-1}}{\stackrel{~}{\mathsf{S}}}^{\mathsf{T}}{\stackrel{~}{\mathsf{F}}}^{-1}\mathsf{d}.\hfill \end{array}$

Note, that when computing the analysis ensemble mean and perturbations, the truncated ensemble perturbation matrix ${\stackrel{~}{{\mathsf{X}}^{\prime }}}^{f}$ is used in the pseudo-algorithm in Appendix 2. The truncation to rank ${\stackrel{~}{N}}_{e}$ results in a reduction of the ensemble size. To prevent the ensemble size from shrinking, Lermusiaux (2007) described an optional adaptive method to generate new ensemble members.

5.4.

The ensemble transform Kalman filter (E$\underline{T}$KF)

The ETKF (Bishop et al., 2001) was derived to explicitly transform the ensemble in a way that results in the correct spread of the analysis ensemble. Like the SEIK filter, the ETKF uses the Sherman–Morrison–Woodbury identity to write

((44) )
$\begin{array}{c}\hfill {\mathsf{T}}_{T}{\mathsf{T}}_{T}^{\mathsf{T}}={\left(\mathsf{I}+\frac{1}{{N}_{e}-1}{\mathsf{S}}^{\mathsf{T}}{\mathsf{R}}^{-1}\mathsf{S}\right)}^{-1}.\end{array}$

In contrast to the SEIK filter, ${\mathsf{T}}_{T}{\mathsf{T}}_{T}^{\mathsf{T}}$ is of size ${N}_{e}×{N}_{e}$ and hence represents the error-subspace of dimension ${N}_{e}-1$ indirectly by the full ensemble.

Currently, the most widespread method to compute the update in the ETKF appears to be the formulation of the LETKF by Hunt et al. (2007), which we describe here. By performing the EVD of the symmetric matrix ${\left({\mathsf{T}}_{T}{\mathsf{T}}_{T}^{\mathsf{T}}\right)}^{-1}={\mathsf{U}}_{T}{\mathrm{\Sigma }}_{T}{\mathsf{U}}_{T}^{\mathsf{T}}$ we obtain the symmetric square root

((45) )
$\begin{array}{c}\hfill {\mathsf{T}}_{T}={\mathsf{U}}_{T}{\mathrm{\Sigma }}_{T}^{-\frac{1}{2}}{\mathsf{U}}_{T}^{\mathsf{T}}.\end{array}$

Using this decomposition, the ensemble transformation weight matrices needed to form the ensemble analysis mean and analysis perturbations in Equations (17)–(19) are given by

((46) )
$\begin{array}{cc}\hfill {{\mathsf{W}}^{\prime }}_{T}& ={\mathsf{U}}_{T}{\mathrm{\Sigma }}_{T}^{-\frac{1}{2}}{\mathsf{U}}_{T}^{\mathsf{T}},\hfill \end{array}$
((47) )
$\begin{array}{cc}\hfill {\overline{\mathsf{w}}}_{T}& =\frac{1}{{N}_{e}-1}{\mathsf{U}}_{T}{\mathrm{\Sigma }}_{T}^{-1}{\mathsf{U}}_{T}^{\mathsf{T}}{\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}{\mathsf{H}}^{\mathsf{T}}{\mathsf{R}}^{-1}\mathsf{d}.\hfill \end{array}$

Using the symmetric square root produces a transform matrix which is closest to the identity matrix in the Frobenius norm (Hunt et al., 2007). Thus, the ETKF results in a minimum transform in the ensemble space, which is different from the notion of ’optimal transportation’ used in the ETPF (see Section 6.3).

The original publication introducing the ETKF (Bishop et al., 2001) did not specify the form of the matrix square root ${\mathsf{T}}_{T}$. There are different possibilities to compute it, and taking a simple one-sided square root can lead to implementations with a biased transformation, such that the transformation by ${\mathsf{W}}^{\prime }$ does not preserve the ensemble mean. Using the symmetric square root avoids this bias. Livings (2005) proposed another variant that first normalises the forecast observation ensemble perturbation matrix, so that the observations are dimensionless with unit standard deviation:

((48) )
$\begin{array}{c}\hfill \stackrel{~}{\mathsf{S}}=\frac{1}{\sqrt{{N}_{e}-1}}{\mathsf{R}}^{-\frac{1}{2}}\mathsf{S}.\end{array}$

Substituting (48) into (44) gives

((49) )
$\begin{array}{c}\hfill {\mathsf{T}}_{T}{\mathsf{T}}_{T}^{\mathsf{T}}={\left(\mathsf{I}+{\stackrel{~}{\mathsf{S}}}^{\mathsf{T}}\stackrel{~}{\mathsf{S}}\right)}^{-1}.\end{array}$

To find the square root form next we perform the SVD

((50) )
$\begin{array}{c}\hfill {\stackrel{~}{\mathsf{S}}}^{\mathsf{T}}={\mathsf{U}}_{T}{\stackrel{~}{\mathrm{\Sigma }}}_{T}{\stackrel{~}{\mathsf{V}}}_{T}^{\mathsf{T}}.\end{array}$

In this case, the ensemble transformation weight matrices in Equations (17)–(19) become

((51) )
$\begin{array}{cc}\hfill {{\mathsf{W}}^{\prime }}_{T}& ={\mathsf{U}}_{T}{\left(\mathsf{I}+{\stackrel{~}{\mathrm{\Sigma }}}_{T}{\stackrel{~}{\mathrm{\Sigma }}}_{T}^{\mathsf{T}}\right)}^{-\frac{1}{2}}{\mathsf{U}}_{T}^{\mathsf{T}},\hfill \end{array}$
((52) )
$\begin{array}{cc}\hfill {\overline{\mathsf{w}}}_{T}& =\frac{1}{\sqrt{{N}_{e}-1}}{\mathsf{U}}_{T}{\left(\mathsf{I}+{\stackrel{~}{\mathrm{\Sigma }}}_{T}{\stackrel{~}{\mathrm{\Sigma }}}_{T}^{\mathsf{T}}\right)}^{-1}{\stackrel{~}{\mathrm{\Sigma }}}_{T}{\stackrel{~}{\mathsf{V}}}_{T}^{\mathsf{T}}{\mathsf{R}}^{-\frac{1}{2}}\mathsf{d}.\hfill \end{array}$

This formulation avoids the multiplication ${\mathsf{S}}^{\mathsf{T}}{\mathsf{R}}^{-1}\mathsf{S}$ and can hence prevent possible loss of accuracy due to rounding errors. However, this formulation also requires the computation of the square root of $\mathsf{R}$, which itself can result in rounding errors if $\mathsf{R}$ is not diagonal.
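The ETKF transform can be sketched as follows: an eigendecomposition of ${\left({\mathsf{T}}_{T}{\mathsf{T}}_{T}^{\mathsf{T}}\right)}^{-1}$ from Equation (44) gives the symmetric square root used as ${\mathsf{W}}^{\prime }$, and the mean weights are computed in ensemble space; by the Sherman–Morrison–Woodbury identity they equal ${\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}^{-1}\mathsf{d}$ with $\mathsf{F}$ from Equation (15). Sizes, the linear operator and $\mathsf{R}$ are illustrative.

```python
import numpy as np

# Sketch of the ETKF analysis, Eqs. (44)-(46): symmetric square root from
# the EVD of (T_T T_T^T)^{-1} = I + S^T R^{-1} S / (Ne - 1).
rng = np.random.default_rng(6)
Nx, Ny, Ne = 6, 3, 8

Xf = rng.standard_normal((Nx, Ne))
H = rng.standard_normal((Ny, Nx))
R = 0.4 * np.eye(Ny)                 # diagonal R, so R^{-1} is cheap
Rinv = np.linalg.inv(R)
y = rng.standard_normal(Ny)

xbar = Xf.mean(axis=1)
Xp = Xf - xbar[:, None]
S = H @ Xp
d = y - H @ xbar

Ainv = np.eye(Ne) + S.T @ Rinv @ S / (Ne - 1)   # (T_T T_T^T)^{-1}, Eq. (44)
sig, U = np.linalg.eigh(Ainv)
W = U @ np.diag(sig ** -0.5) @ U.T              # symmetric square root
wbar = U @ np.diag(1.0 / sig) @ U.T @ S.T @ Rinv @ d / (Ne - 1)
Xa = xbar[:, None] + Xp @ (wbar[:, None] + W)   # Eq. (19)
```

The symmetric square root leaves the vector ${\left(1,\dots ,1\right)}^{\mathsf{T}}$ invariant, so the zero mean of the perturbations is preserved.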

Algorithm  in Appendix 2 gives a pseudo-algorithm of the ETKF method.

5.5.

The ensemble adjustment Kalman filter (E$\underline{A}$KF)

The EAKF was introduced by Anderson (2001). Similarly to the SEIK filter and the ETKF we require here that the matrix ${\mathsf{R}}^{-1}$ is readily available. Using scaled ensemble perturbations as discussed for the ETKF-formulation by Livings (2005) in Equations (48)–(49) we can write

((53) )
$\begin{array}{c}\hfill {\mathsf{T}}_{A}{\mathsf{T}}_{A}^{\mathsf{T}}={\left(\mathsf{I}+{\stackrel{~}{\mathsf{S}}}^{\mathsf{T}}\stackrel{~}{\mathsf{S}}\right)}^{-1}.\end{array}$

We perform the SVD on the scaled forecast ensemble observation perturbation matrix

((54) )
$\begin{array}{c}\hfill {\stackrel{~}{\mathsf{S}}}^{\mathsf{T}}={\mathsf{U}}_{A}{\mathrm{\Sigma }}_{A}{\mathsf{V}}_{A}^{\mathsf{T}}.\end{array}$

Note that ${\mathsf{U}}_{A}={\mathsf{U}}_{T}$, related to the similarity between the EAKF and the ETKF. We also use an EVD to obtain

((55) )
$\begin{array}{c}\hfill {\mathsf{P}}^{f}={\mathsf{Z}}_{A}{\mathrm{\Gamma }}_{A}{\mathsf{Z}}_{A}^{\mathsf{T}}.\end{array}$

The decomposition in Equation (55) is usually performed as an SVD of the ensemble perturbation matrix ${{\mathsf{X}}^{\prime }}^{f}$, which approximates ${\mathsf{P}}^{f}$ using ${N}_{e}$ ensemble members.

Due to the ranks of the matrices decomposed in Equations (54) and (55) there are at most $q=min\left({N}_{e}-1,{N}_{y}\right)$ non-zero singular values in ${\mathrm{\Sigma }}_{A}$ and at most ${N}_{e}-1$ non-zero eigenvalues in ${\mathrm{\Gamma }}_{A}$. Thus, the matrices in the equations below can be truncated as follows: ${\mathsf{U}}_{A}\in {\mathcal{R}}^{{N}_{e}×q}$, ${\mathrm{\Sigma }}_{A}\in {\mathcal{R}}^{q×q}$, ${\mathsf{V}}_{A}^{\mathsf{T}}\in {\mathcal{R}}^{q×{N}_{y}}$, ${\mathrm{\Gamma }}_{A}\in {\mathcal{R}}^{{N}_{e}-1×{N}_{e}-1}$ and ${\mathsf{Z}}_{A}\in {\mathcal{R}}^{{N}_{x}×{N}_{e}-1}$. Then, the ensemble transformation weight matrices in Equations (17)–(19) are given by

((56) )
$\begin{array}{cc}\hfill {{\mathsf{W}}^{\prime }}_{A}& ={\mathsf{U}}_{A}{\left(\mathsf{I}+{\mathrm{\Sigma }}_{A}{\mathrm{\Sigma }}_{A}^{\mathsf{T}}\right)}^{-\frac{1}{2}}{\mathrm{\Gamma }}_{A}^{-\frac{1}{2}}{\mathsf{Z}}_{A}^{\mathsf{T}}{{\mathsf{X}}^{\prime }}^{f},\hfill \end{array}$
((57) )
$\begin{array}{cc}\hfill {\overline{\mathsf{w}}}_{A}& =\frac{1}{\sqrt{{N}_{e}-1}}{\mathsf{U}}_{A}{\left(\mathsf{I}+{\mathrm{\Sigma }}_{A}^{\mathsf{T}}{\mathrm{\Sigma }}_{A}\right)}^{-1}{\mathrm{\Sigma }}_{A}{\mathsf{V}}_{A}^{\mathsf{T}}{\mathsf{R}}^{-\frac{1}{2}}\mathsf{d}.\hfill \end{array}$

Note, that the EAKF perturbation weight matrix in Equation (56) is the same as applying the matrix ${\mathrm{\Gamma }}_{A}^{-\frac{1}{2}}{\mathsf{Z}}_{A}^{\mathsf{T}}{{\mathsf{X}}^{\prime }}^{f}$ instead of the orthogonal matrix ${\mathsf{U}}_{T}^{\mathsf{T}}$ in the ETKF perturbation transform matrix given by Equation (51) (Tippett et al., 2003).

The decomposition in Equation (55) is costly due to the size of the matrix to be decomposed. For this reason, the EAKF is typically applied with serial observation processing as will be described for the EnSRF in Section 5.7.

Algorithm  in Appendix 2 gives a pseudo-algorithm of the EAKF method.

5.6.

The ensemble square root filter (EnS$\underline{R}$F)

The EnSRF was introduced by Whitaker and Hamill (2002) to avoid the use of perturbed observations by a square root formulation. In the EnSRF the transform matrix is given by

${\mathsf{T}}_{R}{\mathsf{T}}_{R}^{\mathsf{T}}=\mathsf{I}-{\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}^{-1}\mathsf{S}.$

We first perform an EVD of $\mathsf{F}$ to obtain its inverse

((58) )
$\begin{array}{c}\hfill {\mathsf{F}}^{-1}={\mathrm{\Gamma }}_{R}{\mathrm{\Lambda }}_{R}^{-1}{\mathrm{\Gamma }}_{R}^{\mathsf{T}}.\end{array}$

Then, we can write the ensemble analysis covariance as

((59) )
$\begin{array}{cc}\hfill {{\mathsf{X}}^{\prime }}^{a}{\left({{\mathsf{X}}^{\prime }}^{a}\right)}^{\mathsf{T}}& ={{\mathsf{X}}^{\prime }}^{f}\left(\mathsf{I}-{\mathsf{S}}^{\mathsf{T}}{\mathrm{\Gamma }}_{R}{\mathrm{\Lambda }}_{R}^{-1}{\mathrm{\Gamma }}_{R}^{\mathsf{T}}\mathsf{S}\right){\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}\hfill \\ \hfill & ={{\mathsf{X}}^{\prime }}^{f}\left(\mathsf{I}-{\mathsf{G}}_{R}{\mathsf{G}}_{R}^{\mathsf{T}}\right){\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}\hfill \end{array}$

where ${\mathsf{G}}_{R}={\mathsf{S}}^{\mathsf{T}}{\mathrm{\Gamma }}_{R}{\mathrm{\Lambda }}_{R}^{-\frac{1}{2}}$. Decomposing ${\mathsf{G}}_{R}={\mathsf{U}}_{R}{\mathrm{\Sigma }}_{R}{\mathsf{V}}_{R}^{\mathsf{T}}$ using an SVD we obtain

$\begin{array}{cc}\hfill {{\mathsf{X}}^{\prime }}^{a}{\left({{\mathsf{X}}^{\prime }}^{a}\right)}^{\mathsf{T}}& ={{\mathsf{X}}^{\prime }}^{f}\left(\mathsf{I}-\left[{\mathsf{U}}_{R}{\mathrm{\Sigma }}_{R}{\mathsf{V}}_{R}^{\mathsf{T}}\right]{\left[{\mathsf{U}}_{R}{\mathrm{\Sigma }}_{R}{\mathsf{V}}_{R}^{\mathsf{T}}\right]}^{\mathsf{T}}\right){\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}\hfill \\ \hfill & ={{\mathsf{X}}^{\prime }}^{f}{\mathsf{U}}_{R}\left(\mathsf{I}-{\mathrm{\Sigma }}_{R}{\mathrm{\Sigma }}_{R}^{\mathsf{T}}\right){\mathsf{U}}_{R}^{\mathsf{T}}{\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}.\hfill \end{array}$

The diagonal matrix holding the singular values is of dimension ${\mathrm{\Sigma }}_{R}\in {\mathcal{R}}^{{N}_{e}×{N}_{y}}$ and has thus at most $min\left({N}_{e},{N}_{y}\right)$ nonzero singular values. To reduce the computational cost for the case of high dimensional models with ${N}_{e}\ll {N}_{y}$, we can truncate to get the much smaller matrix ${\mathrm{\Sigma }}_{R}\in {\mathcal{R}}^{{N}_{e}×min\left({N}_{e},{N}_{y}\right)}$ (see Table 1). The square root form for the ensemble analysis perturbations is given by

((60) )
$\begin{array}{c}\hfill {{\mathsf{X}}^{\prime }}^{a}={{\mathsf{X}}^{\prime }}^{f}{\mathsf{U}}_{R}{\left(\mathsf{I}-{\mathrm{\Sigma }}_{R}{\mathrm{\Sigma }}_{R}^{\mathsf{T}}\right)}^{\frac{1}{2}},\end{array}$

and the ensemble transformation weight matrices needed to form the ensemble analysis mean and analysis perturbations in Equations (17)–(19) are given by

((61) )
$\begin{array}{cc}\hfill {{\mathsf{W}}^{\prime }}_{R}& ={\mathsf{U}}_{R}{\left(\mathsf{I}-{\mathrm{\Sigma }}_{R}{\mathrm{\Sigma }}_{R}^{\mathsf{T}}\right)}^{\frac{1}{2}}{\mathsf{U}}_{R}^{\mathsf{T}},\hfill \end{array}$
((62) )
$\begin{array}{cc}\hfill {\overline{\mathsf{w}}}_{R}& ={\mathsf{S}}^{\mathsf{T}}{\mathrm{\Gamma }}_{R}{\mathrm{\Lambda }}_{R}^{-1}{\mathrm{\Gamma }}_{R}^{\mathsf{T}}\mathsf{d},\hfill \end{array}$

where in Equation (61) we have post-multiplied the ensemble analysis perturbations by the orthogonal matrix of the left singular vectors ${\mathsf{U}}_{R}^{\mathsf{T}}$ to ensure that the analysis ensemble is unbiased (Livings et al., 2008; Sakov and Oke, 2008).
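The bulk EnSRF update of Equations (58)–(62) can be sketched as follows; sizes, the linear operator and $\mathsf{R}$ are illustrative, and the clipping only guards against tiny negative rounding errors under the square root.

```python
import numpy as np

# Sketch of the bulk EnSRF: EVD of F (Eq. 58), G_R = S^T Gamma Lambda^{-1/2},
# its SVD, and the unbiased weight matrix of Eq. (61).
rng = np.random.default_rng(7)
Nx, Ny, Ne = 6, 3, 8

Xf = rng.standard_normal((Nx, Ne))
H = rng.standard_normal((Ny, Nx))
R = 0.4 * np.eye(Ny)
y = rng.standard_normal(Ny)

xbar = Xf.mean(axis=1)
Xp = Xf - xbar[:, None]
S = H @ Xp
F = S @ S.T + (Ne - 1) * R                       # Eq. (15)
d = y - H @ xbar

lam, Gam = np.linalg.eigh(F)                     # Eq. (58)
G = S.T @ Gam @ np.diag(lam ** -0.5)             # G_R
U, sig, Vt = np.linalg.svd(G, full_matrices=True)
Sig2 = np.zeros(Ne)
Sig2[:sig.size] = sig ** 2
Wp = U @ np.diag(np.sqrt(np.clip(1 - Sig2, 0.0, None))) @ U.T  # Eq. (61)
wbar = S.T @ Gam @ np.diag(1.0 / lam) @ Gam.T @ d              # Eq. (62)
Xa = xbar[:, None] + Xp @ (wbar[:, None] + Wp)   # Eq. (19)
```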

A pseudo-algorithm of the EnSRF method is given in Appendix 2.
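As an illustration, the bulk EnSRF update of Equations (60)–(62) can be sketched in a few lines of NumPy. The sketch assumes the definitions used earlier in the paper, $\mathsf{S}=\mathsf{H}{{\mathsf{X}}^{\prime }}^{f}$, $\mathsf{F}=\mathsf{S}{\mathsf{S}}^{\mathsf{T}}+\left({N}_{e}-1\right)\mathsf{R}$, ${\mathsf{G}}_{R}={\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}^{-1/2}$ and $\mathsf{d}=\mathsf{y}-\mathsf{H}{\overline{\mathsf{x}}}^{f}$; all function and variable names are ours, not the paper's.

```python
import numpy as np

def ensrf_bulk_update(Xf, HXf, y, R):
    """One EnSRF analysis step assimilating all observations at once.

    A minimal sketch of Eqs. (60)-(62).
    Xf  : (Nx, Ne) forecast ensemble
    HXf : (Ny, Ne) observed forecast ensemble
    y   : (Ny,)    observations
    R   : (Ny, Ny) observation error covariance
    """
    Ne = Xf.shape[1]
    xbar = Xf.mean(axis=1)
    Xp = Xf - xbar[:, None]                      # forecast perturbations X'^f
    S = HXf - HXf.mean(axis=1)[:, None]          # observed ensemble perturbations
    d = y - HXf.mean(axis=1)                     # innovation (linear H assumed)
    F = S @ S.T + (Ne - 1) * R

    # symmetric inverse square root of F via an eigendecomposition
    lam, G = np.linalg.eigh(F)
    F_inv_sqrt = G @ np.diag(lam ** -0.5) @ G.T

    # SVD of G_R = S^T F^{-1/2}; all singular values are < 1 by construction
    U, sig, _ = np.linalg.svd(S.T @ F_inv_sqrt, full_matrices=True)
    s2 = np.zeros(Ne)
    s2[:sig.size] = sig ** 2

    # Eq. (61): W'_R = U (I - Sigma Sigma^T)^{1/2} U^T; the trailing U^T
    # keeps the analysis ensemble unbiased
    Wp = U @ np.diag(np.sqrt(1.0 - s2)) @ U.T
    # Eq. (62): weights for the update of the ensemble mean
    wbar = S.T @ np.linalg.solve(F, d)

    Xa = xbar[:, None] + Xp @ (wbar[:, None] + Wp)
    return Xa, Wp, wbar
```

Note that ${{\mathsf{W}}^{\prime }}_{R}{{{\mathsf{W}}^{\prime }}_{R}}^{\mathsf{T}}=\mathsf{I}-{\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}^{-1}\mathsf{S}$, which provides a simple numerical self-check of the implementation.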

5.7.

EnSRF with serial observation treatment

The serial observation treatment in the EnSRF was introduced by Whitaker and Hamill (2002) together with the bulk EnSRF that assimilates all observations at once. The serial treatment reduces the computing cost. Hence, the EnSRF is typically not applied with the bulk update described above, but with a serial treatment of the observations, which is possible if $\mathsf{R}$ is diagonal. In this case, each single observation can be assimilated separately. Thus, $\mathsf{F}$ reduces to the scalar F and $\mathsf{S}{\mathsf{S}}^{\mathsf{T}}$ to the scalar ${S}^{2}$. For a single observation (${N}_{y}=1$), the matrix ${\mathsf{G}}_{R}$ becomes a vector given by

((63) )
$\begin{array}{c}\hfill {\mathsf{G}}_{R}=\frac{1}{\sqrt{F}}{\mathsf{S}}^{\mathsf{T}}.\end{array}$

All singular values of ${\mathsf{G}}_{R}$ are zero except the first, which is its norm,

((64) )
$\begin{array}{c}\hfill {\mathrm{\Sigma }}_{R}=\frac{S}{\sqrt{F}}\mathsf{e}\end{array}$

where $\mathsf{e}$ is a vector with ${N}_{e}$ zero elements except the first, which is one. The first column of ${\mathsf{U}}_{R}$ corresponds to the normalised vector ${\mathsf{S}}^{\mathsf{T}}$

((65) )
$\begin{array}{c}\hfill {\mathsf{U}}_{R}\mathsf{e}=\frac{1}{S}{\mathsf{S}}^{\mathsf{T}}.\end{array}$

The square root of the diagonal matrix in Equation (61) can be written as a sum of the identity matrix and a matrix proportional to $\mathsf{e}{\mathsf{e}}^{\mathsf{T}}$:

((66) )
$\begin{array}{c}\hfill {\left(\mathsf{I}-{\mathrm{\Sigma }}_{R}{\mathrm{\Sigma }}_{R}^{\mathsf{T}}\right)}^{\frac{1}{2}}=\mathsf{I}-\left(1-\sqrt{\left({N}_{e}-1\right)R/F}\right)\phantom{\rule{0.277778em}{0ex}}\mathsf{e}{\mathsf{e}}^{\mathsf{T}}.\end{array}$

Using Equation (65) and the fact that all columns of $\mathsf{U}$ are orthonormal, one obtains

((67) )
$\begin{array}{c}\hfill {{\mathsf{W}}^{\prime }}_{R}=\mathsf{I}-\frac{1-\sqrt{\left({N}_{e}-1\right)R/F}}{{S}^{2}}{\mathsf{S}}^{\mathsf{T}}\mathsf{S}\end{array}$

and the weight vector for the update of the ensemble mean is

((68) )
$\begin{array}{c}\hfill {\overline{\mathsf{w}}}_{R}=\frac{1}{F}{\mathsf{S}}^{\mathsf{T}}\mathsf{d}.\end{array}$

The equations above are then applied serially to each single observation. They are likewise valid when the EAKF is formulated with a serial observation treatment.

A pseudo-algorithm of the EnSRF method with serial observation treatment is given in Appendix 2.
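Assuming a linear observation operator applied row by row and a diagonal $\mathsf{R}$, the serial loop of Equations (63)–(68) can be sketched as follows; all names are illustrative.

```python
import numpy as np

def ensrf_serial_update(Xf, H, y, r_diag):
    """Serial EnSRF: assimilate scalar observations one at a time.

    A sketch of Eqs. (63)-(68) for a linear observation operator H (rows h)
    and a diagonal R with variances r_diag.  Each observation updates the
    ensemble before the next one is assimilated.
    """
    X = Xf.copy()
    Ne = X.shape[1]
    for h, yi, r in zip(H, y, r_diag):
        xbar = X.mean(axis=1)
        Xp = X - xbar[:, None]
        s = h @ Xp                                # observed perturbations S
        S2 = s @ s                                # scalar S^2
        if S2 == 0.0:                             # no ensemble spread in obs space
            continue
        F = S2 + (Ne - 1) * r                     # scalar F
        wbar = s * ((yi - h @ xbar) / F)          # Eq. (68)
        alpha = 1.0 - np.sqrt((Ne - 1) * r / F)   # factor in Eq. (67)
        Wp = np.eye(Ne) - (alpha / S2) * np.outer(s, s)   # Eq. (67)
        X = xbar[:, None] + Xp @ (wbar[:, None] + Wp)
    return X
```

Since $2\alpha -{\alpha }^{2}={S}^{2}/F$, this transformation yields the analysis covariance ${{\mathsf{X}}^{\prime }}^{f}\left(\mathsf{I}-{\mathsf{S}}^{\mathsf{T}}\mathsf{S}/F\right){\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}$ for each observation.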

5.8.

The square root formulation of the stochastic ensemble Kalman filter (SEnKF)

The SEnKF was introduced by Evensen (2003) as a square root formulation of the stochastic EnKF. Defining ${\mathsf{Y}}^{\prime }$ as for the stochastic EnKF and using a matrix

((69) )
$\begin{array}{c}\hfill {\mathsf{F}}_{F}=\mathsf{S}{\mathsf{S}}^{\mathsf{T}}+{\mathsf{Y}}^{\prime }{{\mathsf{Y}}^{\prime }}^{\mathsf{T}}\end{array}$

we obtain the matrix

((70) )
$\begin{array}{c}\hfill {\mathsf{T}}_{F}{\mathsf{T}}_{F}^{\mathsf{T}}=\mathsf{I}-{\mathsf{S}}^{\mathsf{T}}{\mathsf{F}}_{F}^{-1}\mathsf{S}.\end{array}$

We could decompose ${\mathsf{F}}_{F}$ using an EVD but this is costly if ${N}_{y}\gg {N}_{e}$ (Evensen, 2003). Instead, we assume that forecast and observation errors are uncorrelated, i.e.

((71) )
$\begin{array}{c}\hfill \mathsf{S}{{\mathsf{Y}}^{\prime }}^{\mathsf{T}}\equiv 0,\end{array}$

so that

((72) )
$\begin{array}{c}\hfill {\mathsf{F}}_{F}=\mathsf{S}{\mathsf{S}}^{\mathsf{T}}+{\mathsf{Y}}^{\prime }{{\mathsf{Y}}^{\prime }}^{\mathsf{T}}=\left(\mathsf{S}+{\mathsf{Y}}^{\prime }\right){\left(\mathsf{S}+{\mathsf{Y}}^{\prime }\right)}^{\mathsf{T}}.\end{array}$

Now we can use an SVD to decompose $\mathsf{S}+{\mathsf{Y}}^{\prime }={\mathsf{U}}_{F}{\mathrm{\Sigma }}_{F}{\mathsf{V}}_{F}^{\mathsf{T}}$, giving

((73) )
$\begin{array}{c}\hfill {\mathsf{F}}_{F}={\mathsf{U}}_{F}{\mathrm{\Sigma }}_{F}{\mathrm{\Sigma }}_{F}^{\mathsf{T}}{\mathsf{U}}_{F}^{\mathsf{T}},\end{array}$

which has a much smaller computational cost than decomposing ${\mathsf{F}}_{F}$ using an EVD when ${N}_{y}\gg {N}_{e}$.

The ensemble transformation is then computed according to Equation (23) with the weight matrix given by

((74) )
$\begin{array}{c}\hfill {{\mathsf{W}}^{\prime }}_{F}={\mathsf{S}}^{\mathsf{T}}{\mathsf{U}}_{F}{\mathrm{\Sigma }}_{F}^{-1}{\left({\mathrm{\Sigma }}_{F}^{-1}\right)}^{\mathsf{T}}{\mathsf{U}}_{F}^{\mathsf{T}}\left(\mathsf{Y}-{\mathsf{Y}}^{f}\right).\end{array}$

A pseudo-algorithm of the EnKF in square root form is given in Appendix 2.
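A sketch of this square root formulation, with illustrative names and the observation perturbations supplied by the caller, might look as follows. It relies on the assumption ${\mathsf{S}{\mathsf{Y}}^{\prime }}^{\mathsf{T}}=0$ of Equation (71), and uses a truncated SVD so that a rank-deficient ${\mathsf{F}}_{F}$ (the usual case when ${N}_{y}>{N}_{e}$) is handled as a pseudo-inverse.

```python
import numpy as np

def senkf_sqrt_update(Xf, HXf, y, eps):
    """Square-root form of the stochastic EnKF (Sect. 5.8), a sketch.

    eps : (Ny, Ne) one draw from N(0, R) per ensemble member (columns).
    """
    Xp = Xf - Xf.mean(axis=1, keepdims=True)      # X'^f
    S = HXf - HXf.mean(axis=1, keepdims=True)     # S
    Yp = eps - eps.mean(axis=1, keepdims=True)    # Y'
    Y = y[:, None] + eps                          # perturbed observations
    # Eqs. (72)-(73): an SVD of S + Y' replaces the costly EVD of F_F
    U, sig, _ = np.linalg.svd(S + Yp, full_matrices=False)
    keep = sig > 1e-10 * sig[0]                   # pseudo-inverse truncation
    U, sig = U[:, keep], sig[keep]
    # Eq. (74): W'_F = S^T U Sigma^{-1} (Sigma^{-1})^T U^T (Y - Y^f)
    W = S.T @ (U @ np.diag(sig ** -2.0) @ U.T @ (Y - HXf))
    return Xf + Xp @ W
```

When ${\mathsf{F}}_{F}$ has full rank, the SVD route reproduces the direct inversion of ${\mathsf{F}}_{F}$ exactly.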

5.9.

The error-subspace transform Kalman filter (ESTKF)

The ESTKF has been derived from the SEIK filter (Nerger et al., 2012a) by combining the advantages of the SEIK filter and the ETKF. The ESTKF exhibits better properties than the SEIK filter, such as the minimum ensemble transformation of the ETKF. However, unlike the ETKF, the ESTKF computes the ensemble transformation in the error subspace spanned by the ensemble rather than using the ensemble representation of it. That is, the error subspace of dimension ${N}_{e}-1$ is represented directly in the ESTKF (similarly to the SEIK filter), while in the ETKF the error subspace is represented indirectly using the full ensemble of size ${N}_{e}$.

Similar to the SEIK filter, a projection matrix ${\mathsf{A}}_{K}\in {\mathcal{R}}^{{N}_{e}×{N}_{e}-1}$ is used whose elements are defined by

((75) )
$\begin{array}{c}\hfill {{\mathsf{A}}_{K}}_{\left\{i,j\right\}}:=\left\{\begin{array}{cc}\hfill 1-\frac{1}{{N}_{e}}\frac{1}{\frac{1}{\sqrt{{N}_{e}}}+1}\hfill & \text{for}\phantom{\rule{4pt}{0ex}}i=j,\phantom{\rule{4pt}{0ex}}i<{N}_{e}\hfill \\ \hfill -\frac{1}{{N}_{e}}\frac{1}{\frac{1}{\sqrt{{N}_{e}}}+1}\hfill & \text{for}\phantom{\rule{4pt}{0ex}}i\ne j,\phantom{\rule{4pt}{0ex}}i<{N}_{e}\hfill \\ \hfill -\frac{1}{\sqrt{{N}_{e}}}\hfill & \text{for}\phantom{\rule{4pt}{0ex}}i={N}_{e}\hfill \end{array}\right.\end{array}$

With this projection, the basis vectors for the error subspace are given by

((76) )
$\begin{array}{c}\hfill {\mathsf{L}}_{K}={\mathsf{X}}^{f}{\mathsf{A}}_{K}.\end{array}$

As for the matrix $\mathrm{\Omega }$ in the SEIK filter, the columns of matrix ${\mathsf{A}}_{K}$ are orthonormal and orthogonal to the vector ${\left(1,\cdots ,1\right)}^{\mathsf{T}}$. When the matrix ${\mathsf{L}}_{K}$ is computed, the multiplication with ${\mathsf{A}}_{K}$ implicitly subtracts the ensemble mean. Further, ${\mathsf{A}}_{K}$ subtracts a fraction of the last column of ${{\mathsf{X}}^{\prime }}^{f}$ from all other columns. In this way, the last column of ${{\mathsf{X}}^{\prime }}^{f}$ is not just dropped as in the SEIK filter, but its information is distributed over the other columns. The product of the square root matrices in the error subspace becomes now

((77) )
$\begin{array}{c}\hfill {\mathsf{T}}_{K}{\mathsf{T}}_{K}^{\mathsf{T}}={\left(\mathsf{I}+\frac{1}{{N}_{e}-1}{\left(\mathsf{H}{\mathsf{L}}_{K}\right)}^{\mathsf{T}}{\mathsf{R}}^{-1}\left(\mathsf{H}{\mathsf{L}}_{K}\right)\right)}^{-1}.\end{array}$

By performing the EVD of the symmetric matrix ${\left({\mathsf{T}}_{K}{\mathsf{T}}_{K}^{\mathsf{T}}\right)}^{-1}={\mathsf{U}}_{K}{\mathrm{\Sigma }}_{K}{\mathsf{U}}_{K}^{\mathsf{T}}$ we obtain the symmetric square root

((78) )
$\begin{array}{c}\hfill {\mathsf{T}}_{K}={\mathsf{U}}_{K}{\mathrm{\Sigma }}_{K}^{-\frac{1}{2}}{\mathsf{U}}_{K}^{\mathsf{T}}.\end{array}$

Then, the ensemble transformation weight matrices needed to form the ensemble analysis mean and perturbations in Equations (17)–(19) are given by

((79) )
$\begin{array}{cc}\hfill {{\mathsf{W}}^{\prime }}_{K}& ={\mathsf{A}}_{K}{\mathsf{T}}_{K}{\mathsf{A}}_{K}^{\mathsf{T}},\hfill \end{array}$
((80) )
$\begin{array}{cc}\hfill {\overline{\mathsf{w}}}_{K}& =\frac{1}{\sqrt{{N}_{e}-1}}{\mathsf{A}}_{K}{\mathsf{U}}_{K}{\mathrm{\Sigma }}_{K}^{-1}{\mathsf{U}}_{K}^{\mathsf{T}}{\left(\mathsf{H}{\mathsf{L}}_{K}\right)}^{\mathsf{T}}{\mathsf{R}}^{-1}\mathsf{d}.\hfill \end{array}$

Compared to the SEIK filter, both the matrices ${\mathsf{A}}_{E}$ and $\mathrm{\Omega }$ are replaced by ${\mathsf{A}}_{K}$ in the ESTKF. In addition, the ESTKF uses the symmetric square root of ${\mathsf{T}}_{K}{\mathsf{T}}_{K}^{\mathsf{T}}$. The use of ${\mathsf{A}}_{K}$ leads to a consistent projection onto the error subspace and back onto the state space, while the symmetric square root ensures that the minimum transformation is obtained. It is also possible to apply the ESTKF with a random ensemble transformation. For this case, the rightmost matrix ${\mathsf{A}}_{K}$ in Equation (79) is replaced by a random matrix with the same properties as the deterministic ${\mathsf{A}}_{K}$.

A pseudo-algorithm of the ESTKF method is given in Appendix 2.
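The properties of the projection matrix in Equation (75), namely orthonormal columns that are orthogonal to the vector ${\left(1,\cdots ,1\right)}^{\mathsf{T}}$ so that ${\mathsf{L}}_{K}={\mathsf{X}}^{f}{\mathsf{A}}_{K}$ implicitly subtracts the ensemble mean, can be verified numerically; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def estkf_projection(Ne):
    """Projection matrix A of Eq. (75), mapping the N_e ensemble members
    onto the (N_e - 1)-dimensional error subspace."""
    c = 1.0 / (Ne * (1.0 / np.sqrt(Ne) + 1.0))
    A = np.full((Ne, Ne - 1), -c)              # off-diagonal entries, i < Ne
    A[np.diag_indices(Ne - 1)] += 1.0          # diagonal entries become 1 - c
    A[-1, :] = -1.0 / np.sqrt(Ne)              # last row, i = Ne
    return A
```

Multiplying the forecast ensemble by this matrix then gives the error-subspace basis of Equation (76).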

5.10.

The Maximum Likelihood Ensemble Filter (MLEF)

The MLEF (Zupanski, 2005; Zupanski et al., 2008) calculates the state estimate as the maximum of the posterior probability density function (pdf). This is in contrast to the ensemble Kalman filter methods described in this paper, which are based on the minimum variance approach and thus target the mean. The maximum of the pdf is found by an iterative minimisation of the cost function using a generalised non-linear conjugate-gradient method.

The original MLEF filter (Zupanski, 2005) uses a second-order Taylor approximation to the analysis increments, which requires that the cost function is twice differentiable. However, this requirement is not necessarily satisfied in many real-life non-linear applications, for example when parameterisations of some processes are used in models or when observation operators are strongly non-linear. Here, we present the revised MLEF by Zupanski et al. (2008), which avoids this requirement by using a non-differentiable minimisation algorithm.

In contrast to all other ensemble filters discussed above, the MLEF maintains the actual state estimate separately from the ensemble, which is used to provide a measure of the estimation error. Thus, the analysis perturbations in the MLEF are computed for each ensemble member in a square root form without re-centring them onto the analysis ensemble mean. Hence, this filter does not follow the same square root form as the filters described above and is presented last in this section.

In the MLEF, the ensemble analysis perturbations are defined using the difference between analysis and forecast for each ensemble member, ${{\mathsf{x}}^{\prime }}_{j}^{a}={\mathsf{x}}_{j}^{a}-{\mathsf{x}}_{j}^{f}$ and not between ensemble analysis states and the analysis mean. They are found using generalised Hessian preconditioning in state space. A change of variable is performed as follows

((81) )
$\begin{array}{cc}\hfill {\mathsf{x}}_{j}^{a}& ={\mathsf{x}}_{j}^{f}+{{\mathsf{x}}^{\prime }}_{j}^{a},\hfill \end{array}$
((82) )
$\begin{array}{cc}\hfill {{\mathsf{x}}^{\prime }}_{j}^{a}& ={\mathsf{G}}^{\frac{1}{2}}{\xi }_{j},\hfill \end{array}$

where the matrix ${\mathsf{G}}^{\frac{1}{2}}={{\mathsf{X}}^{\prime }}^{f}{\left(\mathsf{I}+\mathsf{C}\right)}^{-\frac{1}{2}}\in {\mathcal{R}}^{{N}_{x}×{N}_{e}}$ represents the inverse square root of the generalised Hessian estimated at the initial point of minimisation, and ${\xi }_{j}$ is a control variable defined in ensemble subspace. Matrix $\mathsf{C}$ is the covariance matrix

((83) )
$\begin{array}{c}\hfill \mathsf{C}={\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}{\mathsf{H}}^{\mathsf{T}}{\mathsf{R}}^{-1}\mathsf{H}{{\mathsf{X}}^{\prime }}^{f}.\end{array}$

Equation (82) can be written as a transformation of ensemble perturbations by

((84) )
$\begin{array}{c}\hfill {{\mathsf{x}}^{\prime }}_{j}^{a}={{\mathsf{X}}^{\prime }}^{f}{\mathsf{w}}_{M,j}\end{array}$

where the elements of the weight vector ${\mathsf{w}}_{M,j}\in {\mathcal{R}}^{{N}_{e}}$ for ensemble member j are given by

((85) )
$\begin{array}{c}\hfill {\mathsf{w}}_{M,j}={\left(\mathsf{I}+\mathsf{C}\right)}^{-\frac{1}{2}}{\xi }_{j}.\end{array}$

Now we use an EVD of $\mathsf{C}={\mathrm{\Gamma }}_{M}{\mathrm{\Lambda }}_{M}{\mathrm{\Gamma }}_{M}^{\mathsf{T}}$ to write Equation (85) as

((86) )
$\begin{array}{c}\hfill {\mathsf{w}}_{M,j}={\mathrm{\Gamma }}_{M}{\left(\mathsf{I}+{\mathrm{\Lambda }}_{M}\right)}^{-\frac{1}{2}}{\mathrm{\Gamma }}_{M}^{\mathsf{T}}{\xi }_{j}.\end{array}$

We note that in the linear case the matrix ${\mathsf{G}}^{\frac{1}{2}}$ is a square root of ${\mathsf{P}}^{a}$. Indeed, the same decomposition and inversion was used to find the square root analysis perturbations for the ETKF, see Equation (45).

After successfully accomplishing the Hessian preconditioning, the next step in the iterative minimisation is to calculate the gradient in the ensemble-spanned subspace. The preconditioned generalised gradient at the k-th minimisation iteration is obtained by

((87) )
$\begin{array}{c}\hfill {\mathrm{\nabla }}_{G}J\left({\mathsf{x}}_{k}\right)={\left[\mathsf{I}+\mathsf{C}\right]}^{-1}{\xi }_{k}-\mathsf{Z}{\left({\mathsf{x}}_{k}\right)}^{\mathsf{T}}{\mathsf{R}}^{-\frac{1}{2}}\left[\mathsf{y}-\mathcal{H}\left({\mathsf{x}}_{k}\right)\right],\end{array}$

where

((88) )
$\begin{array}{cc}\hfill \mathsf{Z}\left(\mathsf{x}\right)& =\left[{\mathsf{z}}_{1}\left(\mathsf{x}\right),{\mathsf{z}}_{2}\left(\mathsf{x}\right),\dots ,{\mathsf{z}}_{{N}_{e}}\left(\mathsf{x}\right)\right]\hfill \end{array}$
((89) )
$\begin{array}{cc}\hfill {\mathsf{z}}_{j}\left(\mathsf{x}\right)& ={\mathsf{R}}^{-\frac{1}{2}}\left[\mathcal{H}\left(\mathsf{x}+{{\mathsf{x}}^{\prime }}_{j}^{f}\right)-\mathcal{H}\left(\mathsf{x}\right)\right].\hfill \end{array}$

Upon convergence we have obtained an optimal state analysis

$\begin{array}{c}\hfill {\mathsf{x}}^{a}={\mathsf{x}}_{k}.\end{array}$

To complete the non-differential formulation of the MLEF, ensemble analysis perturbations are computed as follows

((90) )
$\begin{array}{c}\hfill {{\mathsf{X}}^{\prime }}^{a}={{\mathsf{X}}^{\prime }}^{f}{\left[\mathsf{I}+{\left(\mathsf{Z}\left({\mathsf{x}}^{a}\right)\right)}^{\mathsf{T}}\mathsf{Z}\left({\mathsf{x}}^{a}\right)\right]}^{-\frac{1}{2}}\end{array}$

where $\mathsf{Z}\left({\mathsf{x}}^{a}\right)$ is obtained by substituting $\mathsf{x}={\mathsf{x}}^{a}$ into Equation (89).

A pseudo-algorithm of the MLEF method is given in Appendix 2.
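Equations (89) and (90) lend themselves to a compact implementation; the following NumPy sketch (all names are ours) computes $\mathsf{Z}\left(\mathsf{x}\right)$ by ensemble differences for a possibly non-linear operator and forms the analysis perturbations via the symmetric square root:

```python
import numpy as np

def mlef_Z(x, Xp, Hop, R_inv_sqrt):
    """Columns z_j of Eq. (89): observation-space ensemble perturbations
    about the state x, for a (possibly non-linear) operator Hop."""
    Hx = Hop(x)
    cols = [R_inv_sqrt @ (Hop(x + Xp[:, j]) - Hx) for j in range(Xp.shape[1])]
    return np.column_stack(cols)

def mlef_analysis_perturbations(Xp, Z):
    """Eq. (90): X'^a = X'^f [I + Z^T Z]^{-1/2}, with the symmetric inverse
    square root computed from an eigendecomposition of Z^T Z."""
    lam, U = np.linalg.eigh(Z.T @ Z)
    lam = np.maximum(lam, 0.0)                 # guard tiny negative eigenvalues
    return Xp @ (U @ np.diag((1.0 + lam) ** -0.5) @ U.T)
```

For a linear observation operator, $\mathsf{Z}$ reduces to ${\mathsf{R}}^{-\frac{1}{2}}\mathsf{H}{{\mathsf{X}}^{\prime }}^{f}$ and the analysis covariance is ${{\mathsf{X}}^{\prime }}^{f}{\left(\mathsf{I}+{\mathsf{Z}}^{\mathsf{T}}\mathsf{Z}\right)}^{-1}{\left({{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}$.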

5.11.

Summary of ensemble Kalman Filter methods

In this section, we have described ten of the most popular ensemble Kalman filter methods that are applicable to high-dimensional non-Gaussian problems.

This collection of methods can be categorised in different ways, for example into deterministic ensemble filters, where the analysis is found through explicit mathematical transformations (SEIK, ETKF, EAKF, EnSRF, ESTKF, MLEF), and stochastic ensemble filters, where perturbed forecasted observations are used (EnKF, SEnKF). Burgers et al. (1998) and Houtekamer and Mitchell (1998) showed that in order to maintain sufficient spread in the ensemble and prevent filter divergence, the observations should be treated as random variables, i.e. perturbed, while our interpretation is slightly different, as described above. This stochasticity, of course, leads to extra sampling noise in the filters. On the other hand, Lawson and Hansen (2004) showed that for large ensemble sizes, stochastic filters can handle non-linearity better than deterministic filters. This is due to the additional Gaussian observation spread normalising the ensemble update in the stochastic filter, which tends to erase the non-Gaussian higher moments that non-linear error growth has generated. However, current computational power restricts us to small ensemble sizes for high-dimensional problems, in which case stochastic filters add another source of sampling error, thus underestimating the analysis update (Whitaker and Hamill, 2002).

While all ensemble Kalman filter methods use low-rank approximations of the state error covariance matrix, some of the methods in this section are referred to as error-subspace ensemble filters because they directly operate in the error subspace spanned by the ensemble rather than using the ensemble representation of it. Such filters are the SEIK (see Section 5.2), the ESTKF (see Section 5.9), and the ESSE (see Section 5.3). Nerger et al. (2005b) compare the stochastic EnKF with the SEIK filter in an idealised high-dimensional shallow water model with non-linear evolution, showing that the main difference between the filters lies in the efficiency of the representation of the covariance matrix $\mathsf{P}$. In general, the EnKF will require a larger ensemble size ${N}_{e}$ to achieve the same performance as the SEIK filter. The relation of the ETKF and SEIK methods has been studied by Nerger et al. (2012a), where the ESTKF was also derived. Apart from computing the ensemble transformation in the error subspace in the case of the SEIK and ESTKF, the three filters are essentially equivalent. However, for the SEIK filter it has been found that the application of the Cholesky decomposition can lead to unevenly distributed variance in the ensemble members. To this end, the ESTKF and ETKF methods are preferable unless a random matrix $\mathrm{\Omega }$ is used in the SEIK filter.

The methods described in this section each have nuances in which they differ from one another, as well as underlying common ground. Writing these methods in a unified mathematical notation allows us to see these algorithmic differences and commonalities more readily. Many of the filters described above share several common variables; for some methods these variables have different sizes, while for others they can have not only the same size but actually the same value, too. Table 1 summarises the sizes of the common variables between the methods, and below we comment on whether they have the same value.

Apart from the matrices listed in Table 1, there are the resulting weight matrices ${\mathsf{W}}^{\prime }$ and $\overline{\mathsf{W}}$. The matrix $\overline{\mathsf{W}}$ is identical for all filters that use it. Thus, for all filters except the EnKF, SEnKF and MLEF, which do not use this matrix, the mean of the analysis ensemble is identical. The EnKF and SEnKF are an exception because they do not explicitly transform the ensemble mean and introduce sampling error due to the perturbed observations. The maximum likelihood approach of the MLEF also results in a different analysis state estimate if the ensemble distribution is not Gaussian and the observation operator is non-linear. Further, the product ${\mathsf{W}}^{\prime }{{\mathsf{W}}^{\prime }}^{\mathsf{T}}$ is identical for all filters except the EnKF, SEnKF and MLEF. Thus, the analysis covariance matrix ${\mathsf{P}}^{a}$ will be identical.

In contrast to the equality of the matrix $\overline{\mathsf{W}}$, the matrix ${\mathsf{W}}^{\prime }$ differs between almost all methods. Thus, while many methods yield the same analysis ensemble covariance matrix, their ensemble perturbations are distinct. In the ETKF and ESTKF methods, the matrices $\mathsf{T}$, $\mathsf{U}$, and $\mathrm{\Sigma }$ have distinct dimensions. However, the ensemble transformation weight matrices ${\mathsf{W}}^{\prime }$ of both methods are identical (Nerger et al., 2012a).

In general, the choice of a particular ensemble method depends on a number of components of the system at hand: the dynamical model, the model error, the number and types of observations, and the ensemble size. Given these degrees of freedom, it is not possible to declare one data assimilation method better suited for a general situation. In practice, operational meteorological applications most widely use the LETKF and the serial formulations of the EnSRF and EAKF, while in oceanography there are many applications of the stochastic EnKF, the SEIK filter, and the ESTKF. There are fewer applications of the ESSE and MLEF, despite the fact that these filters are algorithmically interesting because of the ensemble-size adaptivity of the ESSE and the maximum-likelihood solution of the MLEF. From the algorithmic viewpoint, the stochastic EnKF will be useful if the stochasticity can be an advantage and if large ensembles can be used. Further, the filters SEIK, ETKF and ESTKF differ from the EnKF, EnSRF and EAKF also in the application of distinct localisation methods (see Section 9.1 for a discussion of localisation). The EnKF, EnSRF and EAKF allow for localisation in the state space, which could be advantageous for some observation types (Campbell et al., 2010). The serial formulation of the EnSRF and EAKF requires that the observation error covariance matrix is diagonal. Thus, these filters cannot be applied directly if the observation errors are correlated. A transformation into uncorrelated variables is possible in theory, but it is most likely not practical for large sets of observations.

6.

Particle filters

In this section we will consider the standard particle filter followed by three efficient variants of the particle filter: the Equivalent Weights Particle Filter (EWPF, van Leeuwen, 2010; van Leeuwen, 2011; Ades and van Leeuwen, 2013), the Implicit Equal-Weights Particle Filter (Zhu et al., 2016) and the Ensemble Transform Particle Filter (Reich, 2013). Other variants of local particle filters are discussed in Section 9 on practical implementation. Another interesting particle filter for high-dimensional systems, the so-called implicit particle filter (Chorin and Tu, 2009; Chorin et al., 2010; Morzfeld et al., 2012; van Leeuwen et al., 2015), is not discussed here as it needs a 4D-Var-like minimisation for each particle. The Multivariate Rank Histogram Filter (MRHF, Metref et al., 2014b), based on the Rank-Histogram Filter of Anderson (2010) that performs well in highly non-Gaussian regimes, has recently been developed in the European project Sangoma.4 However, it is still under development for high-dimensional systems and its idea is only briefly described in Section 9.1.5. Often particle filters are defined as providing approximations of $p\left({\mathsf{x}}^{\left(0:m\right)}|{\mathsf{y}}^{\left(1:m\right)}\right)$, but we restrict ourselves to particle filters that approximate the marginal posterior pdf $p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{y}}^{\left(1:m\right)}\right)$, as there are at present no efficient algorithms for the former for high-dimensional geophysical systems, and we have forecasting in mind. Furthermore, for ease of presentation we take all earlier observations for granted, so that the marginal posterior at time m is denoted $p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{y}}^{\left(m\right)}\right)$.

6.1.

The standard particle filter

This particle filter is also known as the bootstrap filter or Sequential Importance Resampling (SIR). The probability density function (pdf) in particle filtering, represented by ${N}_{e}$ particles or ensemble members at time m, is given by

((91) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{\left(m\right)}\right)=\frac{1}{{N}_{e}}\sum _{j=1}^{{N}_{e}}\delta \left({\mathsf{x}}^{\left(m\right)}-{\mathsf{x}}_{j}^{\left(m\right)}\right),\end{array}$

where ${\mathsf{x}}^{\left(m\right)}\in {\mathcal{R}}^{{N}_{x}}$ is the ${N}_{x}$-dimensional state of the system that has been integrated forward in time using the stochastic forward model and $\delta \left(x\right)$ is a Dirac delta function. We let time m be the time of the current set of observations, with the previous observation set at time 0. Then the stochastic forward model for times $0<k\le m$ for each particle $j=1,\dots ,{N}_{e}$ is given by

((92) )
$\begin{array}{c}\hfill {\mathsf{x}}_{j}^{\left(k\right)}={\mathcal{M}}_{k}\left({\mathsf{x}}_{j}^{\left(k-1\right)}\right)+{\beta }_{j}^{\left(k\right)},\end{array}$

where ${\beta }_{j}^{\left(k\right)}\in {\mathcal{R}}^{{N}_{x}}$ are random terms representing the Gaussian distributed model errors with mean zero and covariance matrix $\mathsf{Q}$, and ${\mathcal{M}}_{k}:{\mathcal{R}}^{{N}_{x}}\to {\mathcal{R}}^{{N}_{x}}$ is the deterministic model from time $k-1$ to k. Thus, the model state transition from time $k-1$ to k is fully described by the transition density given by

((93) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}_{j}^{\left(k\right)}|{\mathsf{x}}_{j}^{\left(k-1\right)}\right)=\mathcal{N}\left({\mathcal{M}}_{k}\left({\mathsf{x}}_{j}^{\left(k-1\right)}\right),\mathsf{Q}\right),\end{array}$

which will be of later use.

Using Bayes theorem

((94) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{y}}^{\left(m\right)}\right)=\frac{p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m\right)}\right)}{p\left({\mathsf{y}}^{\left(m\right)}\right)}p\left({\mathsf{x}}_{j}^{\left(m\right)}\right)\end{array}$

and the Markovian property of the model, the full posterior at observation time m is written as

((95) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{y}}^{\left(m\right)}\right)=\sum _{j=1}^{{N}_{e}}{w}_{j}^{\left(m\right)}\delta \left({\mathsf{x}}^{\left(m\right)}-{\mathsf{x}}_{j}^{\left(m\right)}\right)\end{array}$

where the weights ${w}_{j}^{\left(m\right)}$ are given by

((96) )
$\begin{array}{c}\hfill {w}_{j}^{\left(m\right)}\propto p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m\right)}\right)p\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right){w}_{j}^{\left(m-1\right)}\end{array}$

and each ${w}_{j}^{\left(m-1\right)}$ is the product of all the weights from all previous time steps $0<k<m$. The conditional pdf $p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}^{\left(m\right)}\right)$ is the pdf of the observations given the model state ${\mathsf{x}}^{\left(m\right)}$, which is often taken to be Gaussian

((97) )
$\begin{array}{c}\hfill p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}^{\left(m\right)}\right)\propto \mathrm{exp}\left[-\frac{1}{2}{\left({\mathsf{y}}^{\left(m\right)}-{\mathcal{H}}_{m}\left({\mathsf{x}}^{\left(m\right)}\right)\right)}^{\mathsf{T}}{\mathsf{R}}^{-1}\left({\mathsf{y}}^{\left(m\right)}-{\mathcal{H}}_{m}\left({\mathsf{x}}^{\left(m\right)}\right)\right)\right].\end{array}$

To obtain equal-weight posterior particles one applies resampling, in which particles with high weights are duplicated, while particles with low weights are abandoned. Several schemes have been developed to perform resampling, and three of the most-used schemes are presented in Appendix 1.

The problem in high-dimensional spaces with a large number of independent observations is that these weights vary enormously over the particles, with one particle obtaining a weight close to one, while all the others have a weight very close to zero. This is the so-called degeneracy problem related to the ‘curse of dimensionality’: any resampling scheme will produce ${N}_{e}$ copies of the particle with the highest weight, and all variation in the ensemble has disappeared.

Hence, as mentioned at the beginning of this section, to apply a particle filter to a high-dimensional system additional information is needed to limit the search space of the filter. One option is to use localisation directly on the standard particle filter. Local particle filters, like the so-called Local Particle Filter (Poterjoy, 2016a), will be discussed in the section on localisation in particle filters. We next discuss the proposal-density particle filters, since this technique can be applied to all filters and permits us to achieve equal weights for the particles in a different way.
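A single analysis step of the standard particle filter, combining the Gaussian weights of Equation (97) with a simple multinomial resampling (more refined schemes are given in Appendix 1), can be sketched as follows; all names are illustrative, and the weights are computed in log space to delay the underflow discussed above.

```python
import numpy as np

def sir_step(X, HX, y, R_inv, rng):
    """One analysis step of the standard particle filter (SIR).

    X    : (Nx, Ne) particles propagated with the stochastic model
    HX   : (Ny, Ne) observed particles
    y    : (Ny,)    observations
    R_inv: (Ny, Ny) inverse observation error covariance
    """
    Ne = X.shape[1]
    D = y[:, None] - HX                        # innovation of each particle
    # log of the Gaussian likelihood, Eq. (97), one value per particle
    logw = -0.5 * np.einsum('ij,ik,kj->j', D, R_inv, D)
    w = np.exp(logw - logw.max())              # numerically stable normalisation
    w /= w.sum()
    # multinomial resampling: duplicate high-weight, drop low-weight particles
    idx = rng.choice(Ne, size=Ne, p=w)
    return X[:, idx], w
```

In high-dimensional systems the normalised weights returned here collapse onto a single particle, which is exactly the degeneracy problem described above.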

6.2.

Proposal-density particle filters

To avoid ensemble degeneracy we aim to ensure that equally significant particles are picked from the posterior density. To do this we have to ensure that all particles end up in the high-probability area of the posterior pdf, and that they have very similar, or even equal, weights. For the former we can use a scheme that pulls the particles towards the observations. Several methods can be used for this, including traditional methods like 4DVar, a variational method, and ensemble Kalman filters and smoothers. However, the main ingredient in efficient particle filters is the step that ensures that the weights of the different particles are close before any resampling step.

We start by writing the prior at time m as follows:

((98) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{\left(m\right)}\right)=\int p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}^{\left(m-1\right)}\right)p\left({\mathsf{x}}^{\left(m-1\right)}\right)d{\mathsf{x}}^{\left(m-1\right)}.\end{array}$

Without loss of generality but for simplicity we assume that the particle weights in the ensemble at the previous time step $m-1$ are equal, so

((99) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{\left(m-1\right)}\right)=\frac{1}{{N}_{e}}\sum _{j=1}^{{N}_{e}}\delta \left({\mathsf{x}}^{\left(m-1\right)}-{\mathsf{x}}_{j}^{\left(m-1\right)}\right).\end{array}$

Using Equation (99) in Equation (98) leads directly to:

((100) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{\left(m\right)}\right)=\frac{1}{{N}_{e}}\sum _{j=1}^{{N}_{e}}p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right),\end{array}$

hence, from Equation (93) the prior can be seen as a mixture density, with each density centred around one of the forecast particles.

One can now multiply the numerator and denominator of Equation (100) by the same factor $q\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{1:{N}_{e}}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)$, in which ${\mathsf{x}}_{1:{N}_{e}}^{\left(m-1\right)}$ is defined as the collection of all particles at time $m-1$, and the conditioning on j denotes that each particle in general has a different parent to start from. This leads to

((101) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{\left(m\right)}\right)=\frac{1}{{N}_{e}}\sum _{j=1}^{{N}_{e}}\frac{p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)}{q\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{1:{N}_{e}}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)}q\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{1:{N}_{e}}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)\end{array}$

where $q\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{1:{N}_{e}}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)$ is the so-called proposal transition density, or proposal for short, whose support should be equal to or larger than that of $p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}^{\left(m-1\right)}\right)$. Note that the proposal density as formulated here is slightly more general than the usual $q\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)$ by allowing explicit dependence on all particles at time $m-1$.

Drawing from this density we find for the posterior:

((102) )
$\begin{array}{cc}\hfill p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{y}}^{\left(m\right)}\right)& =\frac{p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m\right)}\right)}{p\left({\mathsf{y}}^{\left(m\right)}\right)}p\left({\mathsf{x}}^{\left(m\right)}\right)\hfill \\ \hfill & =\frac{1}{{N}_{e}}\sum _{j=1}^{{N}_{e}}{w}_{j}\delta \left({\mathsf{x}}^{\left(m\right)}-{\mathsf{x}}_{j}^{\left(m\right)}\right)\hfill \end{array}$

where ${w}_{j}$ are the particle weights given by

((103) )
$\begin{array}{c}\hfill {w}_{j}=\frac{p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m\right)}\right)}{p\left({\mathsf{y}}^{\left(m\right)}\right)}\frac{p\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)}{q\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{x}}_{1:{N}_{e}}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)}.\end{array}$

Using Bayes’ theorem, the numerator in the expression for the weights can be expressed as

((104) )
$\begin{array}{cc}& p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}^{\left(m\right)}\right)p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}=p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)\hfill \end{array}$

Therefore, the particle weight of ensemble member j can be written as:

((105) )
$\begin{array}{c}\hfill {w}_{j}=\frac{p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)}{p\left({\mathsf{y}}^{\left(m\right)}\right)}\frac{p\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)}{q\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{x}}_{1:{N}_{e}}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)}.\end{array}$

In the so-called optimal proposal density (Doucet et al., 2000) one chooses

$\begin{array}{c}\hfill q\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{x}}_{1:{N}_{e}}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)=p\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right),\end{array}$

leading to weights ${w}_{j}\propto p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)$. For systems with a large number of independent observations these weights are again degenerate (see, e.g. Snyder et al., 2008; Ades and van Leeuwen, 2013; Snyder et al., 2015).
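This degeneracy can be made concrete in a minimal sketch (all names, sizes and parameter values below are illustrative assumptions, not from the paper): with $\mathsf{H}=\mathsf{I}$ and $\mathsf{Q}=\mathsf{R}=\mathsf{I}$, the normalised optimal-proposal weights are computed for a fixed ensemble size and an increasing number of independent observations, and the maximum weight rapidly approaches one.

```python
# Illustrative only: degeneracy of the optimal-proposal weights
# w_j ∝ p(y | x_j^(m-1)) as the number of observations N_y grows.
# Toy choices: H = I, Q = R = I, so H Q H^T + R = 2 I.
import numpy as np

rng = np.random.default_rng(0)
Ne = 50  # number of particles

def max_normalised_weight(Ny):
    truth = rng.standard_normal(Ny)
    y = truth + rng.standard_normal(Ny)                 # unit observation error
    forecasts = truth + rng.standard_normal((Ne, Ny))   # M(x_j^(m-1)), toy spread
    # minus log weight (up to a constant): 0.5 * d^T (HQH^T + R)^{-1} d
    phi = 0.5 * np.sum((y - forecasts) ** 2, axis=1) / 2.0
    w = np.exp(-(phi - phi.min()))                      # stabilised exponentials
    w /= w.sum()
    return w.max()

for Ny in (1, 10, 100, 1000):
    print(Ny, max_normalised_weight(Ny))
```

For a handful of observations the weights remain comparable, but for hundreds of independent observations one particle typically carries almost all of the weight.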

Several efficient particle filter schemes have been developed utilising the proposal density to avoid this degeneracy. Here we discuss the Equivalent-Weights Particle Filter (EWPF) and the Implicit Equal-Weights Particle Filter (IEWPF). As mentioned, the Implicit Particle Filter (Chorin et al., 2010), which extends the one-time-step optimal proposal particle filter to a full time window, explores a 4DVar-like method on each particle. Since it needs an adjoint of the underlying model, it is not discussed in this paper.

6.2.1.

The equivalent-weights particle filter

The EWPF works as follows:

• (1)   Determine the optimal proposal weight ${w}_{j}\propto p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)$ for each particle. Note that these weights vary enormously in high-dimensional systems.
• (2)   Choose a target weight ${w}_{\phantom{\rule{0.333333em}{0ex}}\text{target}}$ based on these weights that a certain percentage of particles can reach. For instance, if the target weight is set to the lowest of these weights we keep 100% of the particles. A choice of 50% will mean that the target weight is set to the median value of these weights.
• (3)   Calculate the position in state space of each particle such that it has a weight exactly equal to the target weight. This is where the proposal density comes in. Note that some of the particles cannot reach this target weight no matter how we move them, and these are brought back into the ensemble via the resampling step in point 5.
• (4)   Add a small random perturbation to each particle and recalculate its weight.
• (5)   Resample all particles such that their weights are equal again.
It is in step 3 that we use the fact that the proposal density is dependent on all previous particles, and not just particle j. This step is the main reason for the efficiency of the filter.

As an example, when the error in the model equations is additive Gaussian and the observation operator is linear, an analytical solution can be found for the maximum weight of each particle j, or rather for the minimum of minus the $\mathrm{log}$ of that weight, denoted ${\varphi }_{j}$:

((106) )
$\begin{array}{cc}\hfill {\varphi }_{j}& ={\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right)}^{T}{\left(\mathsf{H}\mathsf{Q}{\mathsf{H}}^{T}+\mathsf{R}\right)}^{-1}\hfill \\ \hfill & ×\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right).\hfill \end{array}$

Then a target weight is set from these ${\varphi }_{j}$’s. The target weight splits the ensemble of particles in two groups: those particles that have a higher optimal proposal weight, and those with a lower optimal proposal weight. The latter are abandoned at this point, and will be regenerated in the resampling step 5.

For the retained particles, there is an infinite number of ways to move a particle in state space such that it reaches the target weight. In the EWPF that problem is solved by assuming

((107) )
$\begin{array}{c}\hfill {\stackrel{^}{\mathsf{x}}}_{j}^{\left(m\right)}=\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)+{\alpha }_{j}\mathrm{Υ}\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right)\end{array}$

in which ${\alpha }_{j}$ is a scalar, and $\mathrm{Υ}$ is defined as

((108) )
$\begin{array}{c}\hfill \mathrm{Υ}=\mathsf{Q}{\mathsf{H}}^{T}{\left(\mathsf{H}\mathsf{Q}{\mathsf{H}}^{T}+\mathsf{R}\right)}^{-1}.\end{array}$

Under this assumption the number of solutions is reduced to two, and the two values for ${\alpha }_{j}$ are given by

((109) )
$\begin{array}{c}\hfill {\alpha }_{j}=1±\sqrt{1-{b}_{j}/{a}_{j}}\end{array}$

in which

((110) )
$\begin{array}{cc}\hfill {a}_{j}& =\frac{1}{2}{\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right)}^{T}\hfill \\ \hfill & ×{\mathsf{R}}^{-1}\mathsf{H}\mathrm{Υ}\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right)\hfill \end{array}$

and

((111) )
$\begin{array}{cc}\hfill {b}_{j}& =\frac{1}{2}{\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right)}^{T}\hfill \\ \hfill & ×{\mathsf{R}}^{-1}\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right)\hfill \\ \hfill & +\mathrm{log}\,{w}_{\phantom{\rule{0.333333em}{0ex}}\text{target}}-\mathrm{log}\,{w}_{j}^{\left(m-1\right)},\hfill \end{array}$

in which ${w}_{j}^{\left(m-1\right)}$ is the weight of particle j accumulated over previous time steps, included here for completeness. Note that ${w}_{\phantom{\rule{0.333333em}{0ex}}\text{target}}$ is the target weight selected from the ${\varphi }_{j}$’s in Equation (106): if, for example, we choose to keep 80% of the particles, then $-\mathrm{log}\left({w}_{\phantom{\rule{0.333333em}{0ex}}\text{target}}\right)={\stackrel{~}{\varphi }}_{⌈0.8{N}_{e}⌉}$, where ${\stackrel{~}{\varphi }}_{1}\le \dots \le {\stackrel{~}{\varphi }}_{{N}_{e}}$ is the sorted list of the optimal-proposal weights of the particles. Note also that ${\alpha }_{j}=1$ pushes the particle to its optimal-proposal weight position. The solution resembles the optimal proposal solution in which the deterministic part of the proposal is scaled to ensure equal weights. Also note the resemblance of the deterministic part with the shape of that used in a Kalman filter when we replace $\mathsf{Q}$ with the ensemble covariance of the state.
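The computation of ${\alpha }_{j}$ can be sketched as follows for a toy linear-Gaussian setup (all sizes, $\mathsf{H}$, $\mathsf{Q}$ and $\mathsf{R}$ are illustrative assumptions; the minus root of Equation (109) is chosen, accumulated prior weights are taken equal, and $\mathrm{log}\,{w}_{\text{target}}=-{\varphi }_{\text{target}}$):

```python
# Illustrative sketch of the EWPF analysis step, Equations (106)-(111).
# Toy linear-Gaussian setup; sizes, H, Q, R are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(1)
Nx, Ny, Ne = 8, 4, 20
H = np.eye(Ny, Nx)                         # linear observation operator
Q = 0.5 * np.eye(Nx)                       # model-error covariance
R = 0.2 * np.eye(Ny)                       # observation-error covariance

forecasts = rng.standard_normal((Ne, Nx))  # M(x_j^(m-1)), one row per particle
y = rng.standard_normal(Ny)

S = H @ Q @ H.T + R
Sinv = np.linalg.inv(S)
Upsilon = Q @ H.T @ Sinv                   # Eq. (108)

d = y - forecasts @ H.T                    # innovations
phi = np.einsum('ij,jk,ik->i', d, Sinv, d) # Eq. (106)

keep = 0.8                                 # keep 80% of the particles
phi_target = np.sort(phi)[int(np.ceil(Ne * keep)) - 1]

Rinv = np.linalg.inv(R)
a = 0.5 * np.einsum('ij,jk,ik->i', d, Rinv @ H @ Upsilon, d)   # Eq. (110)
# Eq. (111) with equal accumulated weights and log w_target = -phi_target:
b = 0.5 * np.einsum('ij,jk,ik->i', d, Rinv, d) - phi_target

retained = b / a <= 1.0                    # particles that can reach the target
alpha = 1.0 - np.sqrt(np.clip(1.0 - b / a, 0.0, None))         # Eq. (109), minus root
moved = forecasts + alpha[:, None] * (d @ Upsilon.T)           # Eq. (107)
```

Particles with ${b}_{j}/{a}_{j}>1$ cannot reach the target weight and would be regenerated in the resampling step.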

When the number of independent observations is large the optimal proposal density particle filter is degenerate, meaning that one particle gets a much larger weight than all the others. The EWPF is not degenerate because a set percentage of all particles has a similar weight (before the resampling step). The EWPF does not, however, converge for large ${N}_{e}$ to the posterior pdf because of this equivalent-weights construction, in which high-weight particles are moved such that their weight becomes lower, equal to the target weight. So the scheme is biased. However, the large ${N}_{e}$ limit is not that relevant in practice as the affordable number of particles will be low, below say 10,000, and typically of $O\left(20-100\right)$. In that setting, the Monte-Carlo error will be substantial, and the bias should be measured against the Monte-Carlo error. As long as the latter is larger than the former the scheme is a valid alternative in high-dimensional systems.

Algorithm  in Appendix 2 gives a pseudo-algorithm for the EWPF.

6.2.2.

The implicit equal-weights particle filter

This scheme is very similar to that of the EWPF:

• (1)   Determine the optimal proposal weight ${w}_{j}\propto p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)$ for each particle. Note that these weights vary enormously in high-dimensional systems.
• (2)   Choose a target weight based on these weights that a certain percentage of particles can reach. Typically the target weight is chosen as the minimum of the maximal weights, so that all particles are kept.
• (3)   Draw a random perturbation vector for each particle, and add this to the particle position that leads to maximal weight. So far the scheme is the same as that used in the optimal proposal density.
• (4)   Scale each random vector such that each particle will reach the target weight.
• (5)   Resample the particles such that their weights are equal in case the kept percentage is lower than 100%.
The main difference between this scheme and the EWPF is that in the EWPF we scale the deterministic part of the optimal proposal to reach a target weight, while here we scale the random part of the optimal proposal.

The implicit part of our scheme follows from drawing samples implicitly from a standard Gaussian distributed proposal density $q\left(\xi \right)$ instead of the original one $q\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)$, as in Chorin and Tu (2009). These two pdfs are related by:

((112) )
$\begin{array}{c}\hfill q\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{1:{N}_{e}}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)=\frac{q\left({\xi }_{j}\right)}{\left||\frac{\mathrm{d}\mathsf{x}}{\mathrm{d}{\xi }_{j}}\right||}\end{array}$

where $\left||\frac{\mathrm{d}\mathsf{x}}{\mathrm{d}{\xi }_{j}}\right||$ denotes the absolute value of the determinant of the Jacobian matrix of the ${\mathcal{R}}^{{N}_{x}}\to {\mathcal{R}}^{{N}_{x}}$ transformation ${\mathsf{x}}_{j}={g}_{j}\left({\xi }_{j}\right)$. The transformation ${g}_{j}\left(.\right)$ is now defined via the following implicit relation between the variables ${\mathsf{x}}_{j}^{\left(m\right)}$ and ${\xi }_{j}$ as

((113) )
$\begin{array}{c}\hfill {\mathsf{x}}_{j}^{\left(m\right)}={\mathsf{x}}_{j}^{a}+{\alpha }_{j}^{1/2}{\mathsf{P}}^{1/2}{\xi }_{j}^{\left(m\right)}\end{array}$

where ${\mathsf{x}}_{j}^{a}$ is the mode of $p\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)$, $\mathsf{P}$ a measure of the width of that pdf, and ${\alpha }_{j}$ a scalar that depends on ${\xi }_{j}^{\left(m\right)}$.

The ${\alpha }_{j}$ are now chosen such that all particles get the same weight ${w}_{\phantom{\rule{0.333333em}{0ex}}\text{target}}$, so the scalar ${\alpha }_{j}$ is determined for each particle from:

((114) )
$\begin{array}{cc}\hfill {w}_{j}& =\frac{p\left({\mathsf{x}}_{j}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)}{q\left({\xi }_{j}\right)}\left||\frac{\mathrm{d}\mathsf{x}}{\mathrm{d}{\xi }_{j}}\right||{w}_{j}^{\left(m-1\right)}\hfill \\ \hfill & ={w}_{\phantom{\rule{0.333333em}{0ex}}\text{target}}\hfill \end{array}$

This ensures that the filter is not degenerate in systems with arbitrary dimensions and an arbitrary number of independent observations. Because of the target-weight construction the filter does not converge to the correct posterior pdf, and the same discussion as for the EWPF applies here, namely that as long as this bias is smaller than the Monte-Carlo error this filter is a valid candidate for high-dimensional non-linear filtering.

As an example we assume now that observation errors and model errors are Gaussian and that the observation operator $\mathsf{H}\in {\mathcal{R}}^{{N}_{y}×{N}_{x}}$ is linear. Then we find that

((115) )
$\begin{array}{cc}& p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}^{\left(m\right)}\right)p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)\hfill \\ \hfill & =\frac{1}{A}exp\left[-\frac{1}{2}{\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}{\mathsf{x}}^{\left(m\right)}\right)}^{T}{\mathsf{R}}^{-1}\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}{\mathsf{x}}^{\left(m\right)}\right)\right.\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}\left.-\frac{1}{2}{\left({\mathsf{x}}^{\left(m\right)}-\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right)}^{T}{\mathsf{Q}}^{-1}\left({\mathsf{x}}^{\left(m\right)}-\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right)\right]\hfill \\ \hfill & =\frac{1}{A}exp\left[-\frac{1}{2}{\left({\mathsf{x}}^{\left(m\right)}-{\mathsf{x}}_{j}^{a}\right)}^{T}{\mathsf{P}}^{-1}\left({\mathsf{x}}^{\left(m\right)}-{\mathsf{x}}_{j}^{a}\right)\right]exp\left(-\frac{1}{2}{\varphi }_{j}\right)\hfill \\ \hfill & =p\left({\mathsf{x}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)},{\mathsf{y}}^{\left(m\right)}\right)p\left({\mathsf{y}}^{\left(m\right)}|{\mathsf{x}}_{j}^{\left(m-1\right)}\right)\hfill \end{array}$

where

((116) )
$\begin{array}{cc}\hfill \mathsf{P}& ={\left({\mathsf{Q}}^{-1}+{\mathsf{H}}^{T}{\mathsf{R}}^{-1}\mathsf{H}\right)}^{-1},\hfill \end{array}$
((117) )
$\begin{array}{cc}\hfill {\mathsf{x}}_{j}^{a}& =\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)+\mathrm{Υ}\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right),\hfill \end{array}$

and

((118) )
$\begin{array}{cc}\hfill {\varphi }_{j}& ={\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right)}^{T}{\left(\mathsf{H}\mathsf{Q}{\mathsf{H}}^{T}+\mathsf{R}\right)}^{-1}\hfill \\ \hfill & ×\left({\mathsf{y}}^{\left(m\right)}-\mathsf{H}\mathcal{M}\left({\mathsf{x}}_{j}^{\left(m-1\right)}\right)\right).\hfill \end{array}$

This leads to a complicated non-linear differential equation for ${\alpha }_{j}$ that involves the determinant of $\mathsf{P}$. Since we are interested in high-dimensional problems we consider this equation in the limit of large state dimension ${N}_{x}$. In that limit it turns out that we can integrate this equation, leading to the much simpler equation (see Appendix in Zhu et al. (2016)):

((119) )
$\begin{array}{c}\hfill \left({\alpha }_{j}-1\right){\gamma }_{j}-{N}_{x}\mathrm{log}\left({\alpha }_{j}\right)+{\varphi }_{j}-\mathrm{log}\,{w}_{j}^{\left(m-1\right)}=\mathrm{log}\,{w}_{\phantom{\rule{0.333333em}{0ex}}\text{target}}.\end{array}$

in which ${\gamma }_{j}={\xi }_{j}^{T}{\xi }_{j}$. This equation can be solved numerically, for instance with Newton’s method, but analytical solutions based on the so-called Lambert W function also exist. We do not elaborate on these here.
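A minimal sketch of a Newton iteration for Equation (119) is given below; the function name and the manufactured numbers are purely illustrative, with the known terms collected as $c={\varphi }_{j}-\mathrm{log}\,{w}_{j}^{\left(m-1\right)}-\mathrm{log}\,{w}_{\text{target}}$:

```python
# Illustrative Newton solver for Equation (119):
# (alpha - 1) * gamma - Nx * log(alpha) + c = 0,
# with c = phi_j - log w_j^(m-1) - log w_target.
import math

def solve_alpha(gamma, Nx, c, alpha0=5.0, tol=1e-12, itmax=100):
    alpha = alpha0
    for _ in range(itmax):
        f = (alpha - 1.0) * gamma - Nx * math.log(alpha) + c
        fp = gamma - Nx / alpha          # derivative of f
        step = f / fp
        alpha -= step
        if abs(step) < tol:
            break
    return alpha

# Manufactured example: pick alpha = 2, back out c, then recover alpha.
gamma, Nx = 100.0, 100
c = -((2.0 - 1.0) * gamma - Nx * math.log(2.0))
print(solve_alpha(gamma, Nx, c))         # close to 2
```

Since the function is convex with its minimum at $\alpha ={N}_{x}/\gamma$, starting the iteration to the right of the root gives monotone convergence to the larger of the two roots.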

Algorithm  in Appendix 2 gives a pseudo-algorithm for the IEWPF.

6.2.3.

Between observations: relaxation steps

If the system is not observed at every time step, the schemes mentioned above can be used over the time window between observations. No analytical solutions can be obtained in this case, so the solution has to be found iteratively. However, this procedure is rather expensive as it typically involves solving a problem similar to a 4DVar on each particle.5 Thus, simpler schemes are typically employed between observation times. These schemes will be less efficient, although we can ensure that Bayes’ theorem is fulfilled exactly for each particle.

In the following, we demonstrate the use of relaxation between observation times. We use the future observations to relax the particles at time k towards the observations at the next observation time $m>k$, using, instead of Equation (92), the modified forward model

((120) )
$\begin{array}{cc}\hfill {\mathsf{x}}_{j}^{\left(k\right)}& ={\mathcal{M}}_{k}\left({\mathsf{x}}_{j}^{\left(k-1\right)}\right)+{\stackrel{~}{\beta }}_{j}^{\left(k\right)}\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}+\stackrel{~}{\mathrm{Υ}}\left[{\mathsf{y}}^{\left(m\right)}-{\mathcal{H}}_{k}\left({\mathsf{x}}_{j}^{\left(k-1\right)}\right)\right],\hfill \end{array}$

where ${\stackrel{~}{\beta }}_{j}^{\left(k\right)}\in {\mathcal{R}}^{{N}_{x}}$ are random terms representing the model error, distributed with a given covariance matrix $\stackrel{~}{\mathsf{Q}}$,6 ${\mathcal{M}}_{k}$ is the same deterministic model as in Equation (92), and $\stackrel{~}{\mathrm{Υ}}$ is a relaxation matrix given by

((121) )
$\begin{array}{c}\hfill \stackrel{~}{\mathrm{Υ}}=\tau \left(k\right)\mathsf{Q}{\mathsf{H}}^{T}{\mathsf{R}}^{-1}.\end{array}$

Here, $\tau \left(k\right)$ is a time dependent scalar that determines the strength of the relaxation. ${\mathsf{y}}^{\left(m\right)}\in {\mathcal{R}}^{{N}_{y}}$ is the vector of ${N}_{y}$ observations at time m and ${\mathcal{H}}_{k}:{\mathcal{R}}^{{N}_{x}}\to {\mathcal{R}}^{{N}_{y}}$ is the observation operator mapping model space into observation space. Note that the observations ${\mathsf{y}}^{\left(m\right)}$ exist at the later time $m>k$. The modified transition density is now given by

((122) )
$\begin{array}{cc}\hfill q\left({\mathsf{x}}_{j}^{\left(k\right)}|{\mathsf{x}}_{j}^{\left(k-1\right)},{\mathsf{y}}^{\left(m\right)}\right)=& \mathcal{N}\left({\mathcal{M}}_{k}\left({\mathsf{x}}_{j}^{\left(k-1\right)}\right)\right.\hfill \\ \hfill & \left.+\stackrel{~}{\mathrm{Υ}}\left[{\mathsf{y}}^{\left(m\right)}-{\mathcal{H}}_{k}\left({\mathsf{x}}_{j}^{\left(k-1\right)}\right)\right],\mathsf{Q}\right),\hfill \end{array}$

and the modified weights ${w}_{j}^{\left(k\right)}$ are accumulated as

((123) )
$\begin{array}{cc}\hfill {w}_{j}^{\left(k\right)}& \propto \frac{p\left({\mathsf{x}}_{j}^{\left(k\right)}|{\mathsf{x}}_{j}^{\left(k-1\right)}\right)}{q\left({\mathsf{x}}_{j}^{\left(k\right)}|{\mathsf{x}}_{j}^{\left(k-1\right)},{\mathsf{y}}^{\left(m\right)}\right)}{w}_{j}^{\left(k-1\right)}\hfill \\ \hfill & \propto \prod _{t=1}^{k}\frac{p\left({\mathsf{x}}_{j}^{\left(t\right)}|{\mathsf{x}}_{j}^{\left(t-1\right)}\right)}{q\left({\mathsf{x}}_{j}^{\left(t\right)}|{\mathsf{x}}_{j}^{\left(t-1\right)},{\mathsf{y}}^{\left(m\right)}\right)}.\hfill \end{array}$

This simple modification of the forward model to include information about future observations using a relaxation term is only consistent with Bayes’ theorem when the weights introduced by this modification are properly taken into account, and it leads to efficient schemes if it is used in combination with an equal-weight scheme, like the EWPF or the IEWPF. Algorithm  in Appendix 2 gives a pseudo-algorithm of the relaxation step used in the EWPF and the IEWPF.
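The relaxation step and the associated weight accumulation can be sketched as follows (toy choices throughout: ${\mathcal{H}}_{k}=\mathsf{I}$, $\stackrel{~}{\mathsf{Q}}=\mathsf{Q}$, an assumed form for $\tau \left(k\right)$ and a toy model; none of these are from the paper):

```python
# Illustrative sketch of the relaxation step, Equations (120)-(123).
# Toy choices: H_k = I, Qtilde = Q, and an assumed relaxation strength tau(k).
import numpy as np

rng = np.random.default_rng(2)
Nx = 4
Q = 0.1 * np.eye(Nx)              # model-error covariance
Qinv = np.linalg.inv(Q)
Rinv = np.eye(Nx) / 0.2           # observation-error precision
y_future = np.ones(Nx)            # observation valid at the later time m

def M(x):                         # toy deterministic model M_k
    return 0.9 * x

def relax_step(x, logw, k, n_steps):
    tau = 0.05 * (k + 1) / n_steps             # assumed tau(k)
    Upsilon_t = tau * Q @ Rinv                 # Eq. (121) with H = I
    beta = rng.multivariate_normal(np.zeros(Nx), Q)   # model-error draw
    relax = Upsilon_t @ (y_future - x)         # relaxation towards y^(m)
    x_new = M(x) + beta + relax                # Eq. (120)
    # Eq. (123): accumulate the log of p(x_new | x) / q(x_new | x, y)
    dp = x_new - M(x)                          # deviation under p
    dq = beta                                  # deviation under q
    logw += -0.5 * dp @ Qinv @ dp + 0.5 * dq @ Qinv @ dq
    return x_new, logw

x, logw = rng.standard_normal(Nx), 0.0
for k in range(5):
    x, logw = relax_step(x, logw, k, 5)
```

The log-weight increment is the difference of the Gaussian exponents of the original transition density and the relaxed proposal, evaluated at the drawn state.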

Note that it would also be possible to use other methods like ensemble smoothers or ensemble 4DVar-like methods to move particles between observations, but we will not elaborate on those here.

6.3.

The Ensemble Transform Particle Filter (ETPF)

The idea of the Ensemble Transform Particle Filter (Reich, 2013) is to avoid resampling by finding a linear transportation map between the prior and the posterior ensemble such that the prior particles are minimally modified, while ensuring that the posterior particles have equal weight. We write each posterior particle as a linear combination of the prior particles as

((124) )
$\begin{array}{c}\hfill {\mathsf{x}}_{j}^{a}={N}_{e}\sum _{i=1}^{{N}_{e}}{\mathsf{x}}_{i}^{f}{t}_{\mathit{ij}}\end{array}$

in which we ensure that the particles have the correct mean via

((125) )
$\begin{array}{c}\hfill \sum _{i=1}^{{N}_{e}}{t}_{\mathit{ij}}=\frac{1}{{N}_{e}},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\sum _{j=1}^{{N}_{e}}{t}_{\mathit{ij}}={w}_{i}.\end{array}$

This still leaves ${N}_{e}^{2}-2{N}_{e}$ elements ${t}_{\mathit{ij}}$ undetermined. These are found by minimising the movement from the old to the new particles, i.e. by minimising

((126) )
$\begin{array}{c}\hfill J\left(\mathsf{T}\right)=\sum _{i,j=1}^{{N}_{e}}{t}_{\mathit{ij}}{\left|\left|{\mathsf{x}}_{i}^{f}-{\mathsf{x}}_{j}^{f}\right|\right|}^{2}\end{array}$

under the condition that ${t}_{\mathit{ij}}\ge 0$. The above formulation is an example of an optimal transportation algorithm, see e.g. the review by Chen and Reich in van Leeuwen et al. (2015). This scheme can be combined with any proposal density discussed in the previous section.
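The transport problem of Equations (124)-(126) can be sketched with a generic linear-programming solver on a tiny ensemble (this is a naive illustration on assumed toy data; dedicated optimal-transport solvers are far more efficient in practice):

```python
# Illustrative sketch of the ETPF transport problem, Equations (124)-(126):
# minimise sum_ij t_ij ||x_i^f - x_j^f||^2 subject to row sums w_i,
# column sums 1/Ne, and t_ij >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
Ne, Nx = 5, 2
Xf = rng.standard_normal((Ne, Nx))            # prior particles (rows)
w = rng.random(Ne); w /= w.sum()              # normalised likelihood weights

cost = ((Xf[:, None, :] - Xf[None, :, :]) ** 2).sum(-1).ravel()  # ||x_i - x_j||^2

# Equality constraints on the flattened T (row-major: index = i*Ne + j)
A_eq = np.zeros((2 * Ne, Ne * Ne))
b_eq = np.zeros(2 * Ne)
for i in range(Ne):
    A_eq[i, i * Ne:(i + 1) * Ne] = 1.0        # sum_j t_ij = w_i
    b_eq[i] = w[i]
for j in range(Ne):
    A_eq[Ne + j, j::Ne] = 1.0                 # sum_i t_ij = 1/Ne
    b_eq[Ne + j] = 1.0 / Ne

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
T = res.x.reshape(Ne, Ne)
Xa = Ne * T.T @ Xf                            # Eq. (124): x_j^a = Ne sum_i x_i^f t_ij
```

By construction the mean of the posterior particles equals the weighted prior mean, since the row sums of $\mathsf{T}$ are the weights ${w}_{i}$.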

If the dynamical model is deterministic one needs to add some small random noise to the particles to avoid ensemble collapse. Typically this noise is assumed to be Gaussian with zero mean and covariance ${\mathsf{P}}^{a}={h}^{2}{\mathsf{P}}^{f}$ with $0<h<1$ a free parameter. This term is an ad hoc addition related to inflation in Ensemble Kalman Filters.

Algorithm  in Appendix 2 gives a pseudo-algorithm of the ETPF.

7.

Second-order exact ensemble Kalman filters

Several extensions to ensemble Kalman filters have been proposed to overcome the linearity or Gaussianity assumptions. A large number of filters exist that try to bridge ensemble Kalman filters and particle filters by defining smoothly varying parameters that move the filter between these two extremes based on the degeneracy of the particle filter. In high-dimensional systems, however, all of these filters become ensemble Kalman filters as any particle filter contribution results in complete degeneracy. These filters (not discussed here) will become useful when localisation is applied.

In a non-linear, non-Gaussian case the ensemble Kalman filters will necessarily produce an analysis where the mean and covariance are biased due to the assumption of a Gaussian prior pdf (Lei and Bickel, 2011). Here, we will discuss four ensemble filters that concentrate on getting the first two moments of the posterior distribution correct in non-linear situations. These are the Particle Filter with Gaussian Resampling of Xiong et al. (2006), the Non-linear Ensemble Transform Filter (Tödter and Ahrens, 2015), the Moment-Matching Ensemble Filter (Lei and Bickel, 2011), and the Merging Particle Filter (Nakano et al., 2007).

7.1.

Particle Filter with Gaussian Resampling (PFGR) and Non-linear Ensemble Transform Kalman Filter (NETF)

The Particle Filter with Gaussian Resampling (PFGR; Xiong et al., 2006) introduced an explicit ensemble transformation matching the mean and covariance matrix. The Non-linear Ensemble Transform Filter (NETF; Tödter and Ahrens, 2015) is a recent reinvention of this algorithm, formulated to obtain an ensemble transformation analogous to that of the ETKF. In addition, the NETF was introduced with localisation, so that the filter can be applied to high-dimensional systems (see Section 9.1). The presentation here follows the more modern formulation of the NETF in analogy to the ETKF presented before. As a novel feature, the presented formulation avoids the explicit computation of the analysis state, which is given by the weighted ensemble mean.

The PFGR and the NETF are designed to exactly match the first two moments of the posterior pdf in Bayes theorem without assuming that the prior or likelihood are normally distributed. The forecast ensemble is transformed into an analysis ensemble by applying a weight vector to obtain the analysis mean state and a transform matrix to obtain analysis ensemble perturbations, analogous in form to a square root filter (Equations (17) to (19)).

As in most particle filters, the likelihood weights that arise from Bayes’ theorem

((127) )
$\begin{array}{c}\hfill {w}_{j}=\frac{p\left(\mathsf{y}|{\mathsf{x}}_{j}\right)}{{\sum }_{k=1}^{{N}_{e}}p\left(\mathsf{y}|{\mathsf{x}}_{k}\right)}\end{array}$

are used. For normally distributed observation errors, the weight of each member is at first given by

((128) )
$\begin{array}{c}\hfill {w}_{j}\propto exp\left[-\frac{1}{2}{\left(\mathsf{y}-\mathcal{H}\left({\mathsf{x}}_{j}^{f}\right)\right)}^{T}{\mathsf{R}}^{-1}\left(\mathsf{y}-\mathcal{H}\left({\mathsf{x}}_{j}^{f}\right)\right)\right]\end{array}$

and then normalised so that the weights sum up to one. Before the weights are computed, the ensemble perturbations should be inflated by an inflation factor $\gamma >1$ as in the ensemble-based Kalman filters (for inflation see Section 9.2). Using the weight vector $\mathsf{w}={\left({w}_{1},\dots ,{w}_{{N}_{e}}\right)}^{T}$ the transform matrix is

((129) )
$\begin{array}{cc}\hfill \mathsf{T}{\mathsf{T}}^{T}& ={N}_{e}\left[\mathrm{d}iag\left(\mathsf{w}\right)-\mathsf{w}{\mathsf{w}}^{T}\right].\hfill \end{array}$

Here, $\mathrm{d}iag\left(\mathsf{w}\right)$ is a diagonal matrix that contains the weights ${w}_{j}$ on the diagonal. The factor ${N}_{e}$ was not present in the formulation by Xiong et al. (2006). It was introduced by Tödter and Ahrens (2015) to ensure that the ensemble has the correct analysis variance. As in the ensemble Kalman filters, the eigenvalue decomposition of $\mathsf{T}{\mathsf{T}}^{T}=\mathsf{U}\mathrm{\Sigma }{\mathsf{U}}^{T}$ yields the ensemble transformation

((130) )
$\begin{array}{cc}\hfill \mathsf{T}& =\mathsf{U}{\mathrm{\Sigma }}^{1/2}{\mathsf{U}}^{T}.\hfill \end{array}$

Combining the weight vector and transform matrix as in Equation (19), the analysis ensemble is given by

((131) )
$\begin{array}{c}\hfill {\mathsf{X}}^{a}={\mathsf{X}}^{f}\left(\mathsf{T}\mathrm{\Lambda }+\left[\mathsf{w},\dots ,\mathsf{w}\right]\right).\end{array}$

Here, $\mathrm{\Lambda }$ is a random matrix. Xiong et al. (2006) use a random matrix sampled from a normal distribution with mean zero and standard deviation one. They use $\mathrm{\Lambda }$ because in Equation (130) they omit all eigenvalues that are very close to zero and need to restore an ensemble of full size. In contrast, Tödter and Ahrens (2015) use a mean-preserving orthogonal matrix (see Pham, 2001) analogous to that used in the SEIK filter. They motivate the use of $\mathrm{\Lambda }$ also by the reduction of ensemble outliers and showed experimentally that the random transformation with mean-preserving properties leads to a more stable data assimilation process.

Note that the transformation in Equation (131) is applied to the ensemble matrix ${\mathsf{X}}^{f}$ instead of the ensemble perturbation matrix ${\mathsf{X}}^{\prime f}$, without subsequent addition of the analysis mean state (see, e.g. Equation (19)). This is possible because of the property of $\mathsf{T}$ to implicitly subtract the ensemble mean, while the multiplication of ${\mathsf{X}}^{f}$ with the weight vector array adds the analysis mean state.

For high-dimensional systems, a localisation of the analysis step is required. It was introduced by Tödter and Ahrens (2015) in analogy to the localisation of the ETKF and SEIK filters (see Sec. 9.1). Algorithm  in Appendix 2 gives a pseudo-algorithm of the PFGR and NETF and Algorithm shows the computation of the weights for Gaussian observation errors.
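The analysis step of Equations (127)-(131) can be sketched on a toy ensemble as follows, taking the random matrix $\mathrm{\Lambda }$ as the identity for simplicity (Tödter and Ahrens use a mean-preserving random orthogonal matrix instead; all sizes and matrices below are illustrative):

```python
# Illustrative sketch of the PFGR/NETF analysis, Equations (127)-(131),
# with the random matrix Lambda taken as the identity for simplicity.
import numpy as np

rng = np.random.default_rng(4)
Nx, Ny, Ne = 6, 3, 10
Xf = rng.standard_normal((Nx, Ne))     # forecast ensemble, one column per member
H = np.eye(Ny, Nx)                     # toy linear observation operator
Rinv = np.eye(Ny) / 0.5
y = rng.standard_normal(Ny)

d = y[:, None] - H @ Xf                                   # innovations
phi = 0.5 * np.einsum('ij,ik,kj->j', d, Rinv, d)
w = np.exp(-(phi - phi.min()))                            # Eq. (128)
w /= w.sum()                                              # Eq. (127)

TTt = Ne * (np.diag(w) - np.outer(w, w))                  # Eq. (129)
vals, U = np.linalg.eigh(TTt)
T = U @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ U.T  # Eq. (130)

W = np.tile(w[:, None], (1, Ne))                          # [w, ..., w]
Xa = Xf @ (T + W)                                         # Eq. (131), Lambda = I
```

Because the vector of ones lies in the null space of $\mathsf{T}$, the mean of the analysis ensemble equals the weighted forecast mean ${\mathsf{X}}^{f}\mathsf{w}$, as claimed above.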

7.2.

Moment-Matching Ensemble Filter (MMEF)

A stochastic algorithm that has second-order correct statistics was developed by Lei and Bickel (2011). In this moment-matching ensemble filter (MMEF) we generate an ensemble of perturbed pseudo-observations, ${\mathsf{Y}}^{f}$, as in the SEnKF (see Equations 20 and 21)

((132) )
$\begin{array}{c}\hfill {\mathsf{Y}}_{j}^{f}\sim {p}_{\mathcal{H}\left({\mathsf{x}}_{j}\right)}\left(\mathsf{y}|{\mathsf{x}}_{j}\right)\end{array}$

using $\mathcal{H}\left({\mathsf{x}}_{j}\right)$ as the variable in the density, with $\mathsf{y}$ fixed. Then the analysis mean for each particle is generated using a corresponding pseudo-observation as follows

((133) )
$\begin{array}{c}\hfill {\overline{\mathsf{x}}}^{a}\left({\mathsf{y}}_{j}^{f}\right)=\sum _{k=1}^{{N}_{e}}{w}_{k}\left({\mathsf{y}}_{j}^{f}\right){\mathsf{x}}_{k}={\mathsf{X}}^{f}\mathbf{w}\left({\mathsf{y}}_{j}^{f}\right)\end{array}$

in which ${w}_{k}\left({\mathsf{y}}_{j}^{f}\right)$ is given by

((134) )
$\begin{array}{c}\hfill {w}_{k}\left({\mathsf{y}}_{j}^{f}\right)=\frac{p\left({\mathsf{y}}_{j}^{f}|{\mathsf{x}}_{k}\right)}{{\sum }_{l=1}^{{N}_{e}}p\left({\mathsf{y}}_{j}^{f}|{\mathsf{x}}_{l}\right)}.\end{array}$

Similarly, the analysis mean for actual observations is computed via

((135) )
$\begin{array}{c}\hfill {\overline{\mathsf{x}}}^{a}\left(\mathsf{y}\right)=\sum _{k=1}^{{N}_{e}}{w}_{k}\left(\mathsf{y}\right){\mathsf{x}}_{k}={\mathsf{X}}^{f}\mathbf{w}\left(\mathsf{y}\right).\end{array}$

Furthermore, we calculate equivalent expressions for covariances for perturbed and actual observations as follows

((136) )
$\begin{array}{cc}\hfill {\mathsf{P}}^{a}\left({\mathsf{y}}_{j}^{f}\right)& =\sum _{k=1}^{{N}_{e}}{w}_{k}\left({\mathsf{y}}_{j}^{f}\right)\left({\mathsf{x}}_{k}-{\overline{\mathsf{x}}}^{a}\left({\mathsf{y}}_{j}^{f}\right)\right)\hfill \\ \hfill & ×{\left({\mathsf{x}}_{k}-{\overline{\mathsf{x}}}^{a}\left({\mathsf{y}}_{j}^{f}\right)\right)}^{T}\hfill \end{array}$
((137) )
$\begin{array}{cc}\hfill {\mathsf{P}}^{a}\left(\mathsf{y}\right)& =\sum _{k=1}^{{N}_{e}}{w}_{k}\left(\mathsf{y}\right)\left({\mathsf{x}}_{k}-{\overline{\mathsf{x}}}^{a}\left(\mathsf{y}\right)\right)\hfill \\ \hfill & ×{\left({\mathsf{x}}_{k}-{\overline{\mathsf{x}}}^{a}\left(\mathsf{y}\right)\right)}^{T}.\hfill \end{array}$

Then each of the ensemble members or particles is updated via

((138) )
$\begin{array}{c}\hfill {\mathsf{x}}_{j}^{a}={\overline{\mathsf{x}}}^{a}\left(\mathsf{y}\right)+{\mathsf{P}}^{a}{\left(\mathsf{y}\right)}^{1/2}{\mathsf{P}}^{a}{\left({\mathsf{y}}_{j}^{f}\right)}^{-1/2}\left({\mathsf{x}}_{j}-{\overline{\mathsf{x}}}^{a}\left({\mathsf{y}}_{j}^{f}\right)\right).\end{array}$

This filter gives the correct posterior mean and covariance in the large-ensemble limit (Lei and Bickel, 2011). To see this, note that ${\mathsf{x}}_{j}-{\overline{\mathsf{x}}}^{a}\left({\mathsf{y}}_{j}^{f}\right)$ is distributed according to $N\left(0,{\mathsf{P}}^{a}\left({\mathsf{y}}_{j}^{f}\right)\right)$, so ${\mathsf{P}}^{a}{\left({\mathsf{y}}_{j}^{f}\right)}^{-1/2}\left({\mathsf{x}}_{j}-{\overline{\mathsf{x}}}^{a}\left({\mathsf{y}}_{j}^{f}\right)\right)$ is distributed $\mathcal{N}\left(0,\mathsf{I}\right)$, and hence the distribution of ${\mathsf{x}}_{j}^{a}$ is $\mathcal{N}\left({\overline{\mathsf{x}}}^{a}\left(\mathsf{y}\right),{\mathsf{P}}^{a}\left(\mathsf{y}\right)\right)$.
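For a small state dimension the update of Equations (132)-(138) can be sketched directly, forming the full covariances and their symmetric (pseudo-)square roots by eigendecomposition (toy setup with $\mathcal{H}=\mathsf{I}$; all sizes and values are illustrative assumptions):

```python
# Illustrative sketch of the MMEF update, Equations (132)-(138), for a
# small state so the full covariances can be formed directly; H = I here.
import numpy as np

rng = np.random.default_rng(5)
Nx, Ne = 2, 40
X = rng.standard_normal((Ne, Nx))      # prior particles, one row per particle
R = 0.25 * np.eye(Nx)
Rinv = np.linalg.inv(R)
y = np.array([0.5, -0.5])

def weights(obs):                      # Eq. (134)
    d = obs - X
    phi = 0.5 * np.einsum('ij,jk,ik->i', d, Rinv, d)
    w = np.exp(-(phi - phi.min()))
    return w / w.sum()

def mean_cov(obs):                     # Eqs. (133), (135)-(137)
    w = weights(obs)
    m = w @ X
    D = X - m
    return m, (D * w[:, None]).T @ D

def sqrtm(P, inv=False):               # symmetric (pseudo-)square root
    vals, V = np.linalg.eigh(P)
    vals = np.clip(vals, 1e-12, None)
    return V @ np.diag(vals ** (-0.5 if inv else 0.5)) @ V.T

Yf = X + rng.multivariate_normal(np.zeros(Nx), R, size=Ne)   # Eq. (132)

m_y, P_y = mean_cov(y)
Xa = np.empty_like(X)
for j in range(Ne):
    m_j, P_j = mean_cov(Yf[j])
    Xa[j] = m_y + sqrtm(P_y) @ sqrtm(P_j, inv=True) @ (X[j] - m_j)  # Eq. (138)
```

Each particle is recentred on the analysis mean for the actual observation and rescaled by the ratio of the two covariance square roots, as in Equation (138).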

This filter cannot be used in high-dimensional systems, even when localisation is applied, because it needs the evaluation of several full covariance matrices. However, we can exploit the ensemble perturbations that are used to calculate these covariances, as in all ensemble Kalman filter schemes. The following was not discussed by Lei and Bickel (2011), but is a practical way to make the filter useful in high-dimensional systems.

We can express each covariance matrix ${\mathsf{P}}^{a}\left({\mathsf{y}}_{j}^{f}\right)$ directly in terms of the forecast ensemble as

((139) )
$\begin{array}{c}\hfill {\mathsf{P}}^{a}\left({\mathsf{y}}_{j}^{f}\right)={\mathsf{X}}^{f}\mathsf{T}\left({\mathsf{y}}_{j}^{f}\right){{\mathsf{X}}^{f}}^{T}\end{array}$

where matrix $\mathsf{T}\left({\mathsf{y}}_{j}^{f}\right)$ is given by

((140) )
$\begin{array}{c}\hfill \mathsf{T}\left({\mathsf{y}}_{j}^{f}\right)=\phantom{\rule{0.333333em}{0ex}}\text{diag}\left(\mathbf{w}\left({\mathsf{y}}_{j}^{f}\right)\right)-\mathbf{w}\left({\mathsf{y}}_{j}^{f}\right){\mathbf{w}}^{T}\left({\mathsf{y}}_{j}^{f}\right).\end{array}$

The square root of this matrix is

((141) )
$\begin{array}{c}\hfill {\mathsf{P}}^{a}{\left({\mathsf{y}}_{j}^{f}\right)}^{1/2}={\mathsf{X}}^{f}\mathsf{T}{\left({\mathsf{y}}_{j}^{f}\right)}^{1/2}.\end{array}$

To find the inverse of this matrix we perform an SVD on the prior ensemble perturbation matrix

((142) )
$\begin{array}{c}\hfill {{\mathsf{X}}^{\prime }}^{f}=\mathsf{U}\mathrm{\Lambda }{\mathsf{V}}^{T}\end{array}$

and compute also the EVD on the much smaller square matrices $\mathsf{T}\left({\mathsf{y}}_{j}^{f}\right)$

((143) )
$\begin{array}{c}\hfill \mathsf{T}\left({\mathsf{y}}_{j}^{f}\right)={\stackrel{~}{\mathsf{U}}}_{j}{\stackrel{~}{\mathrm{\Lambda }}}_{j}{\stackrel{~}{\mathsf{U}}}_{j}^{T}.\end{array}$

Using Equations (142) and (143) we find

((144) )
$\begin{array}{c}\hfill {\mathsf{P}}^{a}{\left({\mathsf{y}}_{j}^{f}\right)}^{-\frac{1}{2}}={\stackrel{~}{\mathsf{U}}}_{j}{\stackrel{~}{\mathrm{\Lambda }}}_{j}^{-1/2}{\stackrel{~}{\mathsf{U}}}_{j}^{T}\mathsf{V}{\mathrm{\Lambda }}^{-1}{\mathsf{U}}^{T}.\end{array}$

Hence, we can write the update equation of the MMEF as

((145) )
$\begin{array}{c}\hfill {\mathsf{x}}_{j}^{a}={\overline{\mathsf{x}}}^{a}\left(\mathsf{y}\right)+{{\mathsf{X}}^{\prime }}^{f}\mathsf{T}{\left(\mathsf{y}\right)}^{1/2}{\stackrel{~}{\mathsf{U}}}_{j}{\stackrel{~}{\mathrm{\Lambda }}}_{j}^{-1/2}{\stackrel{~}{\mathsf{U}}}_{j}^{T}\mathsf{V}{\mathrm{\Lambda }}^{-1}{\mathsf{U}}^{T}\left({\mathsf{x}}_{j}-{\overline{\mathsf{x}}}^{a}\left({\mathsf{y}}_{j}^{f}\right)\right).\end{array}$

This expression is suitable for high-dimensional applications when the matrices $\mathsf{T}\left({\mathsf{y}}_{j}^{f}\right)$ are computed with localisation.
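As an illustration, the square root and its pseudo-inverse can be built directly from the perturbation matrix along the lines of Equations (139)–(143). The following sketch (with our own variable names and random toy inputs, not code from Lei and Bickel, 2011) verifies the factorisation numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Ne = 8, 5
Xf = rng.standard_normal((Nx, Ne))           # forecast ensemble
Xp = Xf - Xf.mean(axis=1, keepdims=True)     # perturbations X'^f

w = rng.random(Ne); w /= w.sum()             # posterior weights w(y_j^f)
T = np.diag(w) - np.outer(w, w)              # Eq. (140)
Pa = Xp @ T @ Xp.T                           # Eq. (139)

U, lam, Vt = np.linalg.svd(Xp, full_matrices=False)   # Eq. (142)
lt, Ut = np.linalg.eigh(T)                            # Eq. (143)
lt = np.clip(lt, 0.0, None)                           # T is PSD; guard round-off

Pa_sqrt = Xp @ (Ut * np.sqrt(lt)) @ Ut.T              # Eq. (141)

# pseudo-inverse square root: rank-deficient directions are zeroed
slt_inv = np.where(lt > 1e-12, 1.0 / np.sqrt(lt), 0.0)
lam_inv = np.where(lam > 1e-12, 1.0 / lam, 0.0)
Pa_isqrt = (Ut * slt_inv) @ Ut.T @ Vt.T @ np.diag(lam_inv) @ U.T

# sanity checks within the ensemble subspace
assert np.allclose(Pa_sqrt @ Pa_sqrt.T, Pa, atol=1e-8)
assert np.allclose(Pa_sqrt @ Pa_isqrt @ Pa, Pa, atol=1e-8)
```

Note that both pseudo-inverses zero the rank-deficient directions of ${{\mathsf{X}}^{\prime }}^{f}$ and $\mathsf{T}$, which is what makes the product well defined in the ensemble subspace.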

7.3.

Merging Particle Filter (MPF)

The merging particle filter generates several sets of posterior ensembles and merges them via a weighted average to obtain a new set of particles that has the correct mean and covariance but is more robust than the standard particle filter. Specifically, the method draws a set of q ensembles, each of size ${N}_{e}$, from the weighted prior ensemble at the resampling step. Denote each ensemble member as ${\mathsf{x}}_{j,i}$ for ensemble member j in ensemble i. New merged ensemble members are then generated via

((146) )
$\begin{array}{c}\hfill {\mathsf{x}}_{j}^{a}=\sum _{i=1}^{q}{\alpha }_{i}{\mathsf{x}}_{j,i}.\end{array}$

To ensure that the new ensemble has the correct mean and covariance, the coefficients ${\alpha }_{i}$ need to fulfil the two conditions

((147) )
$\begin{array}{c}\hfill \sum _{i=1}^{q}{\alpha }_{i}=1;\phantom{\rule{1em}{0ex}}\sum _{i=1}^{q}{\alpha }_{i}^{2}=1,\end{array}$

where each ${\alpha }_{i}$ also has to be a real number.

For $q>3$ there is no unique solution for the $\alpha$’s, while for $q=3$ we find:

((148) )
$\begin{array}{cc}\hfill {\alpha }_{1}& =\frac{3}{4}\hfill \\ \hfill {\alpha }_{2}& =\frac{\sqrt{13}+1}{8}\hfill \\ \hfill {\alpha }_{3}& =-\frac{\sqrt{13}-1}{8}.\hfill \end{array}$

Although not discussed by Nakano et al. (2007), this scheme will be degenerate for high-dimensional problems. However, we can make the $\alpha$’s space-dependent when $q>3$ and then apply localisation.
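As a small numerical sketch (toy sizes, plain multinomial resampling for brevity; not the implementation of Nakano et al., 2007), the merge step of Equations (146)–(148) can be written as:

```python
import numpy as np

# Eq. (148): the q = 3 merging coefficients
a = np.array([3.0 / 4.0,
              (np.sqrt(13.0) + 1.0) / 8.0,
              -(np.sqrt(13.0) - 1.0) / 8.0])
assert np.isclose(a.sum(), 1.0)              # first condition of Eq. (147)
assert np.isclose((a ** 2).sum(), 1.0)       # second condition of Eq. (147)

rng = np.random.default_rng(0)
Nx, Ne, q = 4, 200, 3
w = rng.random(Ne); w /= w.sum()             # prior particle weights
X = rng.standard_normal((Nx, Ne))            # prior particles

# draw q ensembles of size Ne from the weighted prior, then merge, Eq. (146)
idx = [rng.choice(Ne, size=Ne, p=w) for _ in range(q)]
Xa = sum(a[i] * X[:, idx[i]] for i in range(q))

# the merged ensemble targets the weighted prior mean (exact in expectation;
# for a finite ensemble it is only close up to sampling noise)
err = np.abs(Xa.mean(axis=1) - X @ w).max()
```

Note that one coefficient is negative; the conditions of Equation (147) only constrain the first two moments, not the sign of the $\alpha$'s.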

8.

Gaussian mixture filters

Both the ensemble Kalman and the Monte Carlo-based techniques discussed in Sections 5 and 6, respectively, have their drawbacks. The Gaussian mixture filter (Anderson and Anderson, 1999; Bengtsson et al., 2003; Hoteit et al., 2008) attempts to avoid these by approximating an arbitrary prior as a combination of Gaussian densities. This has the advantage that both the local Kalman filter type correction step and the weighting and resampling step of a particle filter can be applied, which also makes the method applicable to highly non-linear and high-dimensional systems. In this paper, we discuss the adaptive Gaussian mixture filter developed by Stordal et al. (2011) as a representative of the Gaussian mixture filters that have been proposed.

In the Gaussian mixture filter, the prior distribution is approximated by a mixture density (Silverman, 1986) where each ensemble member forms the centre of a Gaussian density function

((149) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{f}\right)=\sum _{j=1}^{{N}_{e}}\frac{1}{{N}_{e}}\mathcal{N}\left({\mathsf{x}}_{j}^{f},{\stackrel{~}{\mathsf{P}}}^{f}\right),\end{array}$

where $\mathcal{N}\left({\mathsf{x}}_{j},\stackrel{~}{\mathsf{P}}\right)$ denotes a multivariate Gaussian kernel density with ensemble member ${\mathsf{x}}_{j}$ as mean and covariance matrix ${\stackrel{~}{\mathsf{P}}}^{f}={h}^{2}{\mathsf{P}}^{f}$, in which ${\mathsf{P}}^{f}$ is the covariance of the whole forecast ensemble and h is a bandwidth parameter. Stordal et al. (2011) note that the optimal choice of the bandwidth is ${h}_{\mathit{opt}}\sim {N}_{e}^{-1/5}$ if we are only interested in the marginal properties of the individual components of $\mathsf{x}$, but that it can be beneficial to choose $h>{h}_{\mathit{opt}}$ to reduce the risk of filter divergence, since the choice of the bandwidth parameter determines the magnitude of the Kalman filter update step. Thus, h is treated as a design parameter to be set by the user. Note that each particle represents the mean of a Gaussian kernel and that the uncertainty associated with each particle is given by the covariance of that Gaussian kernel (Stordal et al., 2011).

If the likelihood is Gaussian, the posterior pdf is again a Gaussian mixture, now with pdf

((150) )
$\begin{array}{c}\hfill p\left({\mathsf{x}}^{a}|\mathsf{y}\right)=\sum _{j=1}^{{N}_{e}}{w}_{j}\mathcal{N}\left({\mathsf{x}}_{j}^{a},{\stackrel{~}{\mathsf{P}}}^{a}\right).\end{array}$

Here, the weights ${w}_{j}$ are proportional to $\mathcal{N}\left(\mathsf{y}-\mathsf{H}{\mathsf{x}}_{j}^{f},{\mathsf{R}}^{a}\right)$ with ${\sum }_{j}{w}_{j}=1$ and ${\mathsf{R}}^{a}=\mathsf{H}{\stackrel{~}{\mathsf{P}}}^{f}{\mathsf{H}}^{T}+\mathsf{R}$. Compared to the particle filter, the covariance used in the weights is thus inflated by the term $\mathsf{H}{\stackrel{~}{\mathsf{P}}}^{f}{\mathsf{H}}^{T}$, leading to more equal weights. Each mean ${\mathsf{x}}_{j}^{a}$ and the covariance matrix ${\stackrel{~}{\mathsf{P}}}^{a}$ are obtained using one of the EnKF variants.
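The effect of the inflated innovation covariance on the weights can be sketched as follows (a toy example with assumed sizes and a direct observation operator; in a realistic implementation the inverses would never be formed explicitly):

```python
import numpy as np

rng = np.random.default_rng(2)
Nx, Ne, Ny, h = 6, 20, 3, 0.5
Xf = rng.standard_normal((Nx, Ne))
H = np.eye(Ny, Nx)                         # observe the first Ny variables
R = 0.25 * np.eye(Ny)
y = rng.standard_normal(Ny)

Xp = Xf - Xf.mean(axis=1, keepdims=True)
Pf = Xp @ Xp.T / (Ne - 1)
Ra = H @ (h ** 2 * Pf) @ H.T + R           # R^a = H P~^f H^T + R

D = y[:, None] - H @ Xf                    # innovations y - H x_j^f
q = np.einsum('ij,ik,kj->j', D, np.linalg.inv(Ra), D)
w = np.exp(-0.5 * (q - q.min())); w /= w.sum()     # mixture weights

# for comparison: plain particle filter weights using R only
q_pf = np.einsum('ij,ik,kj->j', D, np.linalg.inv(R), D)
w_pf = np.exp(-0.5 * (q_pf - q_pf.min())); w_pf /= w_pf.sum()
neff, neff_pf = 1.0 / np.sum(w ** 2), 1.0 / np.sum(w_pf ** 2)
```

In typical draws the effective ensemble size `neff` exceeds `neff_pf`, illustrating the flattening of the weights caused by the inflation.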

In high-dimensional systems, the covariance matrices are never formed explicitly, so the algorithm of Stordal et al. (2011) cannot be used directly. Hoteit et al. (2008) used an update based on the SEIK filter (see Section 5.2). For a more modern formulation, we provide here an algorithm based on Stordal et al. (2011) but exploit the ETKF to avoid the explicit computation of $\stackrel{~}{\mathsf{P}}$. First, the matrix

((151) )
$\begin{array}{c}\hfill {\mathsf{T}}_{\mathit{GM}}{\mathsf{T}}_{\mathit{GM}}^{\mathsf{T}}={\left(\mathsf{I}+\frac{{h}^{2}}{{N}_{e}-1}{\mathsf{S}}^{\mathsf{T}}{\mathsf{R}}^{-1}\mathsf{S}\right)}^{-1}\end{array}$

is generated with $\mathsf{S}=\mathsf{H}{{\mathsf{X}}^{\prime }}^{f}$ similar to Equation (44), but including the factor ${h}^{2}$. Then, we perform an EVD of the symmetric matrix ${\left({\mathsf{T}}_{\mathit{GM}}{\mathsf{T}}_{\mathit{GM}}^{\mathsf{T}}\right)}^{-1}={\mathsf{U}}_{\mathit{GM}}{\mathrm{\Sigma }}_{\mathit{GM}}{\mathsf{U}}_{\mathit{GM}}^{\mathsf{T}}$ and obtain the symmetric square root

((152) )
$\begin{array}{c}\hfill {\mathsf{T}}_{\mathit{GM}}={\mathsf{U}}_{\mathit{GM}}{\mathrm{\Sigma }}_{\mathit{GM}}^{-1/2}{\mathsf{U}}_{\mathit{GM}}^{\mathsf{T}}.\end{array}$

This is used to update the mean of each Gaussian kernel by calculating the ETKF update on each of the prior particles as

((153) )
$\begin{array}{c}\hfill {\overline{\mathsf{w}}}_{j,GM}=\frac{1}{\sqrt{\left({N}_{e}-1\right)}}{\mathsf{U}}_{\mathit{GM}}{\mathrm{\Sigma }}_{\mathit{GM}}^{-1}{\mathsf{U}}_{\mathit{GM}}^{\mathsf{T}}{\left(h{{\mathsf{X}}^{\prime }}^{f}\right)}^{\mathsf{T}}{\mathsf{H}}^{\mathsf{T}}{\mathsf{R}}^{-1}{\mathsf{d}}_{j}\end{array}$

in which ${\mathsf{d}}_{j}=\mathsf{y}-\mathsf{H}{\mathsf{x}}_{j}^{f}$. The new centres of the Gaussian mixture densities are now found as

((154) )
$\begin{array}{c}\hfill {\mathsf{x}}_{j}^{a}={\mathsf{x}}_{j}^{f}+h{{\mathsf{X}}^{\prime }}^{f}{\overline{\mathsf{w}}}_{j,GM}.\end{array}$

A square root of the posterior covariance of each Gaussian mixture density is found by

((155) )
$\begin{array}{c}\hfill \mathsf{Z}=h{{\mathsf{X}}^{\prime }}^{f}{{\mathsf{W}}^{\prime }}_{\mathit{GM}}\end{array}$

in which

((156) )
$\begin{array}{c}\hfill {{\mathsf{W}}^{\prime }}_{\mathit{GM}}={\mathsf{U}}_{\mathit{GM}}{\mathrm{\Sigma }}_{\mathit{GM}}^{-1/2}{\mathsf{U}}_{\mathit{GM}}^{\mathsf{T}}.\end{array}$

Thus, for Equation (150) we have ${\stackrel{~}{\mathsf{P}}}^{a}={\left({N}_{e}-1\right)}^{-1}\mathsf{Z}{\mathsf{Z}}^{T}$, but to evaluate the equation one can use the square root $\mathsf{Z}$, so it is not required to compute ${\stackrel{~}{\mathsf{P}}}^{a}$ explicitly.
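The ETKF-like kernel update of Equations (151)–(156) can be sketched as follows (a toy example in our own notation, not the reference implementation of Stordal et al., 2011; all kernels are updated at once by stacking the innovations):

```python
import numpy as np

rng = np.random.default_rng(3)
Nx, Ne, Ny, h = 6, 10, 4, 0.4
Xf = rng.standard_normal((Nx, Ne))
H = rng.standard_normal((Ny, Nx))
R = np.eye(Ny)
y = rng.standard_normal(Ny)

Xp = Xf - Xf.mean(axis=1, keepdims=True)
S = H @ Xp                                            # S = H X'^f

# Eq. (151): (T_GM T_GM^T)^{-1}, and its EVD
A = np.eye(Ne) + (h ** 2 / (Ne - 1)) * S.T @ np.linalg.solve(R, S)
sig, Ugm = np.linalg.eigh(A)
W_gm = (Ugm / np.sqrt(sig)) @ Ugm.T                   # T_GM, Eq. (152)

D = y[:, None] - H @ Xf                               # d_j for every kernel
Wbar = (Ugm / sig) @ Ugm.T @ (h * S.T) @ np.linalg.solve(R, D) \
       / np.sqrt(Ne - 1)                              # Eq. (153)
Xa = Xf + h * Xp @ Wbar                               # Eq. (154)
Z = h * Xp @ W_gm                                     # Eqs. (155)-(156)

# ~P^a = Z Z^T / (Ne - 1) is only ever used through its factor Z
assert np.allclose(W_gm @ W_gm.T, np.linalg.inv(A))
```

Note that the covariance of the kernels only ever appears through the factor $\mathsf{Z}$, exactly as stated above.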

Until this point, the algorithm is the standard Gaussian mixture filter. The adaptive part of the filter was introduced by Stordal et al. (2011) and has been demonstrated to avoid filter divergence due to ensemble degeneration. Further, the adaptivity allows us to choose smaller values of the bandwidth parameter h. To stabilise the Gaussian mixture filter, we interpolate the original analysis weights with a uniform weight as

((157) )
$\begin{array}{c}\hfill {w}_{j}^{\alpha }=\alpha {w}_{j}+\left(1-\alpha \right){N}_{e}^{-1}.\end{array}$

For the adaptivity, $\alpha$ is chosen to be

((158) )
$\begin{array}{c}\hfill \alpha ={N}_{\phantom{\rule{0.333333em}{0ex}}\text{eff}}{N}_{e}^{-1},\end{array}$

where ${N}_{\phantom{\rule{0.333333em}{0ex}}\text{eff}}=1/\left({\sum }_{l=1}^{{N}_{e}}{w}_{l}^{2}\right)$ is the effective ensemble size. To avoid ensemble degeneration one can further add a resampling step as in particle filters. It is performed if ${N}_{\phantom{\rule{0.333333em}{0ex}}\text{eff}}<{N}_{c}$, with ${N}_{c}$ a value that can be chosen freely, for instance ${N}_{c}=0.5{N}_{e}$. The full scheme then becomes:

• (a)   When ${N}_{\phantom{\rule{0.333333em}{0ex}}\text{eff}}\ge {N}_{c}$ no resampling is needed, so the weights are calculated as above and transported with each particle to the next set of observations.
• (b)   When ${N}_{\phantom{\rule{0.333333em}{0ex}}\text{eff}}<{N}_{c}$ we will resample according to any of the resampling schemes in Appendix 1. This leads to a new set of states for the centres of the Gaussian mixtures denoted ${\mathsf{x}}_{j,\left(i\right)}$ in which j denotes the index of the state for resampling, and i its index after resampling. Note that several of the new states will coincide. To avoid identical samples we draw our final new ensemble from the Gaussian mixtures, as follows
((159) )
$\begin{array}{ccc}\hfill {\xi }_{i}& \sim \hfill & \hfill \mathcal{N}\left(0,\mathsf{I}\right),\end{array}$
((160) )
$\begin{array}{cc}\hfill {\mathsf{x}}_{i}^{a}& ={\mathsf{x}}_{j,\left(i\right)}+\mathsf{Z}{\xi }_{i},\hfill \end{array}$
((161) )
$\begin{array}{cc}\hfill {w}_{i}& =\frac{1}{{N}_{e}}.\hfill \end{array}$
Further, we set ${\stackrel{~}{\mathsf{P}}}^{a}={\left({N}_{e}-1\right)}^{-1}{\mathsf{X}}^{a}{\left({\mathsf{X}}^{a}\right)}^{T}$, but use this only in factorised form.
Note that in this scheme we never calculate a full state covariance.
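The adaptive reweighting and redrawing steps can be sketched as follows (deliberately skewed toy weights so that the resampling branch is exercised; the multinomial draw stands in for any scheme from Appendix 1):

```python
import numpy as np

rng = np.random.default_rng(4)
Ne, Nx = 12, 5
logw = 4.0 * rng.standard_normal(Ne)       # deliberately skewed weights
w = np.exp(logw - logw.max()); w /= w.sum()
Xa = rng.standard_normal((Nx, Ne))         # kernel centres x_j^a
Z = 0.1 * rng.standard_normal((Nx, Ne))    # kernel covariance square root

Neff = 1.0 / np.sum(w ** 2)                # effective ensemble size
alpha = Neff / Ne                          # Eq. (158)
w_alpha = alpha * w + (1.0 - alpha) / Ne   # Eq. (157)

Nc = 0.5 * Ne                              # resampling threshold
if Neff < Nc:                              # case (b): resample and redraw
    j = rng.choice(Ne, size=Ne, p=w_alpha)
    xi = rng.standard_normal((Ne, Ne))     # xi_i ~ N(0, I), Eq. (159)
    Xa = Xa[:, j] + Z @ xi                 # Eq. (160): no two members coincide
    w_alpha = np.full(Ne, 1.0 / Ne)        # Eq. (161)

assert np.isclose(w_alpha.sum(), 1.0)
```

Redrawing from the kernels after resampling is what prevents duplicated posterior members, in contrast to a plain particle filter resampling step.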

It is important to realise what the adaptive part does. By construction, the filter is not degenerate, but at the expense of strongly reducing the influence of the observations when $\alpha$ is small. In high-dimensional systems with a large number of independent observations, localisation is therefore essential; otherwise the scheme acts as little more than a sum of ensemble Kalman filters.

In the scheme by Bengtsson et al. (2003), the mean of each Gaussian pdf is chosen at random from the ensemble, and the covariance in each Gaussian pdf is estimated from the ensemble members which are local in state space, including a localisation and smoothing step. Since the scheme has not been applied to high-dimensional systems it will not be discussed here.

9.

Practical implementation of the ensemble methods

This section is devoted to issues related to the practical implementation of the ensemble methods. In particular, we address the need for localisation and inflation in some of the presented ensemble methods to counteract the issues arising from ensemble undersampling in large-scale problems such as ocean and atmosphere prediction. We also discuss the computational cost of each method as presented and the parallelisation of ensemble data assimilation methods. We conclude this section with a discussion of the suitability of the ensemble data assimilation methods for non-linear dynamical models.

9.1.

Localisation in EnDA

The success of the EnDA methods is highly dependent on the size of the ensemble being adequate for the system to which we apply these methods. Thus, for large-scale problems, where the number of state variables is several orders of magnitude larger than the number of ensemble members, ensemble undersampling can cause major problems in EnDA methods: underestimated ensemble variance, filter divergence, and errors in estimated correlations, in particular spurious long-range correlations. In such cases, spatial localisation is a necessary tool to minimise the effect of undersampling.

Localisation damps long-range correlations, e.g. in the ensemble covariance matrix (’covariance localisation’, see Section 9.1.2). The damping can be applied to the extent that only correlations over limited distances are kept, while long-range correlations are erased in the analysis step. Thus, localisation decouples the analysis update at distant locations in a model grid. The underlying assumption of localisation is that the assimilation problem has in fact a local structure. This means that correlation length scales are much shorter than the extent of the model grid, so that only correlations over short distances are relevant, while for long distances the sampling error in the ensemble-estimated covariance matrices dominates (see, e.g. Morzfeld et al., 2017). This seems to be fulfilled for many oceanic and atmospheric applications. For example, Patil et al. (2001) described a locally low dimension for atmospheric dynamics. The success of localised filters in oceanic and atmospheric data assimilation applications also shows that this condition is predominantly fulfilled, even though it is known that long-range correlations (teleconnections) exist in the atmosphere and ocean. However, if a modelling problem does not have a local structure, if too few observations are available, or if the observations only represent long-range properties of the system, localisation cannot be applied.

Localisation is usually applied either explicitly, by considering only observations from a region surrounding the location of the analysis, or implicitly, by modifying $\mathsf{P}$ or $\mathsf{R}$ so that observations from beyond a certain distance cannot affect the analysis state. The way in which such localisation is applied is still an active field of research, and many variants of localisation schemes have emerged over the last decade. There are two main types of spatial localisation techniques (or simply localisation) that are widely used in ensemble data assimilation: covariance localisation (also termed P- or B-localisation) and observation localisation (also denoted R-localisation). Both methods are discussed here, together with domain localisation, which is required for the application of observation localisation. In addition, a number of adaptive localisation schemes have been developed in recent years. A selection of these schemes is discussed in Section 9.1.4.

In general, all localisation schemes are empirical. While they improve the estimations by ensemble filters, they can disturb balances in the model state (Lorenc, 2003; Kepert, 2009). Further, the interaction of localisation with the serial observation processing usually applied with the EnSRF and EAKF methods can reduce the stability of these filters (Nerger, 2015).

9.1.1.

Domain localisation

Domain localisation or local analysis is the oldest localisation technique. For ensemble Kalman filters it was first applied by Houtekamer and Mitchell (1998), but the method was also applied in earlier schemes of optimal interpolation (see Cohn et al., 1998). In domain localisation we only use the ensemble perturbations that belong to the domain ${D}_{\gamma }$ in which the analysis correction of the state estimate is computed. For example, this domain can be a vertical column of grid points or a single grid point. Thus, we use a linear transformation ${\mathsf{D}}_{\gamma }$ to obtain

((162) )
$\begin{array}{c}\hfill {{\mathsf{x}}^{\prime }}_{j,\gamma }={\mathsf{D}}_{\gamma }{{\mathsf{x}}^{\prime }}_{j}^{f},\end{array}$

where $j=1,\dots ,{N}_{e}$ and $\gamma =1,\dots ,\mathrm{\Gamma }$ with $\mathrm{\Gamma }$ being the total number of subdomains. To localise, we now only use observations within a specified distance – the localisation radius – around the local domain ${D}_{\gamma }$. This defines a local observation domain ${\stackrel{^}{D}}_{\gamma }$. Using the corresponding linear transformation ${\stackrel{^}{\mathsf{D}}}_{\gamma }$ we can transform the observation error covariance $\mathsf{R}$, the global observation vector $\mathsf{y}$, and the global observation operator $\mathsf{H}$ analogously to Equation (162) to their local parts

((163) )
$\begin{array}{cc}\hfill {\mathsf{y}}_{\gamma }& ={\stackrel{^}{\mathsf{D}}}_{\gamma }\mathsf{y},\hfill \end{array}$
((164) )
$\begin{array}{cc}\hfill {\mathsf{H}}_{\gamma }& ={\stackrel{^}{\mathsf{D}}}_{\gamma }\mathsf{H},\hfill \end{array}$
((165) )
$\begin{array}{cc}\hfill {\mathsf{R}}_{\gamma }& ={\stackrel{^}{\mathsf{D}}}_{\gamma }\mathsf{R}{\stackrel{^}{\mathsf{D}}}_{\gamma }^{\mathsf{T}}.\hfill \end{array}$

Thus, we neglect observations that are outside of the domain ${\stackrel{^}{D}}_{\gamma }$. Then a general local analysis state is given by

((166) )
$\begin{array}{c}\hfill {\mathsf{X}}_{\gamma }^{a}={\overline{\mathsf{X}}}_{\gamma }^{f}+{{\mathsf{X}}^{\prime }}_{\gamma }^{f}\left({\overline{\mathsf{W}}}_{\gamma }+{{\mathsf{W}}^{\prime }}_{\gamma }\right),\end{array}$

where ${\overline{\mathsf{W}}}_{\gamma }$ and ${{\mathsf{W}}^{\prime }}_{\gamma }$ are computed using local ensemble forecast perturbations and local observations from the domain ${\stackrel{^}{D}}_{\gamma }$. For a complete analysis update, a loop over all local analysis domains has to be performed with a local analysis update for each domain.
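For a one-dimensional grid with single-grid-point analysis domains, the selection of local observations in Equations (163)–(165) can be sketched with index slices playing the role of the transformation matrices (the local solver itself is left abstract):

```python
import numpy as np

rng = np.random.default_rng(5)
Nx, Ne = 40, 8
Xf = rng.standard_normal((Nx, Ne))         # forecast ensemble
obs_pos = np.arange(0, Nx, 5)              # observation locations on the grid
y = rng.standard_normal(obs_pos.size)
R = np.eye(obs_pos.size)
radius = 6.0                               # localisation radius

n_updated = 0
for gamma in range(Nx):                    # loop over local analysis domains
    local = np.abs(obs_pos - gamma) <= radius
    if not local.any():
        continue                           # no local observations: keep forecast
    y_loc = y[local]                       # Eq. (163)
    HX_loc = Xf[obs_pos[local], :]         # Eq. (164) applied to the ensemble
    R_loc = R[np.ix_(local, local)]        # Eq. (165)
    # ...solve the local analysis (e.g. a local ETKF) with y_loc, HX_loc and
    # R_loc, and update row gamma of the analysis ensemble only.
    n_updated += 1

assert n_updated == Nx                     # here every grid point sees observations
```

In a real implementation the loop over domains is the natural unit of parallelisation, since the local updates are independent.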

Applying domain localisation allows significant savings in computing time, since the analysis update is not solved globally but on much smaller local domains. Accordingly, the updates on the smaller domains can be done independently and therefore in parallel (Nerger et al., 2006), even if the observation domains overlap. In ensemble-based Kalman filters, domain localisation has been used predominantly with filters that use the analysis error covariance matrix for the calculation of the gain, such as the SEIK, ETKF and ESTKF, all discussed in detail in Section 5. In these algorithms, the forecast error covariance matrix is never explicitly computed. Examples of the application of domain localisation can be found, e.g. in Brusdal et al. (2003) and Testut et al. (2003).

Blindly using domain localisation can result in discontinuous (’boxed’) analysis fields if neighbouring local domains are updated using significantly different observation sets. Thus, great care needs to be taken to choose domains so that they overlap sufficiently to produce smooth global analysis fields with minimal increase in computational cost. Today, domain localisation is typically applied together with observation localisation (Hunt et al., 2007), which is discussed in Section 9.1.3.

9.1.2.

Covariance localisation

Covariance localisation (also termed P-localisation or B-localisation, depending on whether the background covariance matrix is denoted $\mathsf{P}$ or $\mathsf{B}$ as in variational assimilation schemes) is a localisation method that is directly applied to the ensemble covariance matrix. The ensemble undersampling causes spurious cross-correlations between state variables. As realistic long-range correlations are typically small, the sampling errors are particularly pronounced for long distances. The direct covariance localisation can be used to reduce the long-range correlations in the forecast error covariance and hence damp the spurious correlations. In addition, the rank of the ensemble covariance is increased, giving more degrees of freedom to the analysis update (Hamill et al., 2001; Whitaker and Hamill, 2002).

Typically, covariance localisation is applied by first forming a correlation matrix $\mathsf{C}$ and then taking a Schur product (an element by element matrix multiplication) of this correlation matrix and the forecast error covariance. Thus, given some ${\mathsf{P}}^{f}$, our localised forecast error covariance will be

((167) )
$\begin{array}{c}\hfill {\mathsf{P}}_{L}^{f}=\mathsf{C}\circ {\mathsf{P}}^{f}.\end{array}$

The localisation matrix $\mathsf{C}$ is usually formed of correlation functions with compact support, similar in shape to a Gaussian function (e.g. Gaspari and Cohn, 1999). In practice, the computation of the covariance matrix ${\mathsf{P}}^{f}$ can be avoided by applying the localisation matrix to the matrices ${\mathsf{P}}^{f}{\mathsf{H}}^{T}$ and $\mathsf{H}{\mathsf{P}}^{f}{\mathsf{H}}^{T}$ (see Equation (11)).
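As an illustration, Equation (167) can be sketched for a one-dimensional grid using the compactly supported fifth-order function of Gaspari and Cohn (1999) (the support radius is an assumed tuning choice):

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Fifth-order piecewise-rational correlation of Gaspari and Cohn (1999);
    it equals 1 at zero distance and is exactly zero beyond distance 2c."""
    z = np.abs(dist) / c
    f = np.zeros_like(z)
    m1 = z <= 1.0
    m2 = (z > 1.0) & (z < 2.0)
    z1 = z[m1]
    f[m1] = -0.25 * z1**5 + 0.5 * z1**4 + 0.625 * z1**3 - 5.0 / 3.0 * z1**2 + 1.0
    z2 = z[m2]
    f[m2] = (z2**5 / 12.0 - 0.5 * z2**4 + 0.625 * z2**3 + 5.0 / 3.0 * z2**2
             - 5.0 * z2 + 4.0 - 2.0 / (3.0 * z2))
    return f

rng = np.random.default_rng(6)
Nx, Ne = 30, 8
Xp = rng.standard_normal((Nx, Ne))         # toy ensemble perturbations
Pf = Xp @ Xp.T / (Ne - 1)

pos = np.arange(Nx, dtype=float)
C = gaspari_cohn(pos[:, None] - pos[None, :], c=4.0)
Pf_loc = C * Pf                            # Eq. (167): Schur product

assert np.allclose(np.diag(Pf_loc), np.diag(Pf))   # variances unchanged
assert np.all(Pf_loc[np.abs(pos[:, None] - pos[None, :]) >= 8.0] == 0.0)
```

The Schur product leaves the variances untouched and sets all covariances beyond twice the support parameter exactly to zero.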

We note that, of all the ensemble-based Kalman filter methods presented in Section 5, covariance localisation can only be applied in the stochastic EnKF and in the EnSRF and EAKF, since the latter two methods process observations serially.

9.1.3.

Observation localisation

In the square root filters presented in Section 5, the full covariance matrix is never formed; only the ensemble perturbation matrix ${{\mathbf{X}}^{\prime }}^{f}$ is calculated at each analysis step. Petrie and Dance (2010) showed that covariance localisation for square root filters cannot be decomposed into a Schur product with a square root $\rho$ of the correlation matrix,

((168) )
$\begin{array}{c}\hfill \left(\rho {\rho }^{\mathsf{T}}\right)\circ \left({\mathsf{X}}^{\prime }{\mathsf{X}}^{\prime \mathsf{T}}\right)\ne \left(\rho \circ {\mathsf{X}}^{\prime }\right){\left(\rho \circ {\mathsf{X}}^{\prime }\right)}^{\mathsf{T}}\end{array}$

thus covariance localisation cannot be applied directly. For such filters, e.g. the SEIK, ETKF and ESTKF, observation localisation is a more natural choice and is currently used instead of covariance localisation (Hunt et al., 2007; Miyoshi and Yamane, 2007; Janjić et al., 2011).
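A small random example suffices to confirm Equation (168) numerically (here $\rho$ is simply a random matrix of the same shape as ${\mathsf{X}}^{\prime }$; the inequality is generic):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 5, 3
Xp = rng.standard_normal((n, m))   # stand-in for the perturbation matrix X'
C = rng.standard_normal((n, m))    # stand-in for a square root rho of C

lhs = (C @ C.T) * (Xp @ Xp.T)      # (rho rho^T) o (X' X'^T)
rhs = (C * Xp) @ (C * Xp).T        # (rho o X') (rho o X')^T

assert not np.allclose(lhs, rhs)   # Eq. (168): the two sides differ
```

Even the diagonals differ, since by the Cauchy-Schwarz inequality $(\sum_k c_{ik}^2)(\sum_k x_{ik}^2) \ge (\sum_k c_{ik} x_{ik})^2$ with equality only in degenerate cases.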

Observation localisation is applied by modifying the observation error covariance matrix $\mathsf{R}$. More specifically, one modifies its inverse ${\mathsf{R}}^{-1}$ so that the inverse observation variance decreases to zero with the distance of an observation from an analysis grid point. To be able to define this distance, it is necessary to perform the analysis with the domain localisation method described in Section 9.1.1. An abrupt cutoff could be obtained by setting observation variances to zero beyond a given distance. This would be equivalent to the simple domain localisation of Section 9.1.1 and could result in non-smooth analysis updates. For a smooth analysis, Brankart et al. (2003), for example, proposed increasing the observation error variance with increasing distance from the analysis grid point. Hunt et al. (2007) proposed a gradual observation localisation in the LETKF acting on ${\mathsf{R}}^{-1}$, which is likewise applicable with the SEIK filter and the ESTKF. In this case, elements of ${\mathsf{R}}^{-1}$ are multiplied by a smoothly decreasing function of the distance from the analysis grid point. This modification smoothly reduces the observation influence and excludes observations outside a defined radius by prescribing their error to be infinitely large. As for covariance localisation, the method uses a Schur product as

((169) )
$\begin{array}{c}\hfill \stackrel{~}{\mathsf{R}}=\stackrel{~}{\mathsf{C}}\circ \mathsf{R}\phantom{\rule{4pt}{0ex}}.\end{array}$

Here, the same correlation function (Gaspari and Cohn, 1999) as for covariance localisation can be used to construct the localisation matrix. However, in contrast to covariance localisation, $\stackrel{~}{\mathsf{C}}$ is not a correlation matrix, as the values on its diagonal vary with the distance between the observation and the local analysis domain. The analysis update is then computed as in the case of domain localisation, but using the weight-localised matrix $\stackrel{~}{\mathsf{R}}$. For computational savings, one would in practice also discard any observations with zero weight from the analysis computations.
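For a diagonal $\mathsf{R}$, the weighting of ${\mathsf{R}}^{-1}$ can be sketched as follows (a linear taper is used purely for brevity; in practice one would use, e.g. the Gaspari-Cohn function):

```python
import numpy as np

def weight(dist, radius):
    """Any smooth, compactly supported taper works; a linear one for brevity."""
    return np.clip(1.0 - np.abs(dist) / radius, 0.0, None)

obs_pos = np.array([0.0, 2.0, 5.0, 9.0])  # observation locations
r_var = np.full(4, 0.5)                   # diagonal of R
gamma = 3.0                               # location of the analysis grid point
radius = 6.0                              # localisation radius

wts = weight(obs_pos - gamma, radius)
keep = wts > 0.0                          # discard zero-weight observations
Rtilde_inv_diag = wts[keep] / r_var[keep] # Eq. (169) applied to R^{-1}

assert keep.tolist() == [True, True, True, False]  # obs at 9.0 is excluded
```

Dividing the inverse variances by the taper is equivalent to inflating the observation errors with distance, so remote observations lose their influence smoothly instead of abruptly.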

Both observation and covariance localisation can lead to similar assimilation results. In general, the optimal localisation radius has been found to be somewhat larger for covariance localisation than for observation localisation (Greybush et al., 2011). The reason for this difference lies in the different effects of the two localisations on the Kalman gain, as explained by Nerger et al. (2012b).

9.1.4.

Adaptive localisation

The localisation methods described above are widely used and can be applied without much additional computing cost. However, the optimal localisation radius is a priori unknown and needs to be tuned in numerical experiments. For the tuning, one performs several data assimilation experiments with different localisation radii, perhaps over shorter time periods, and selects the radius that results in the smallest estimation errors. Regarding the theoretical understanding of localisation, Kirchgessner et al. (2014) showed, for observation localisation with each grid point observed, that the optimal localisation radius should be reached when the sum over the observation weights equals the ensemble size. This finding allows for a simple form of adaptivity or a starting point for further tuning. Further, Perianez et al. (2014) showed that both the sampling error in the ensemble covariance matrix and the observation error influence the optimal localisation radius. As the sampling error has the largest influence when the true correlations are small, the dynamically generated correlations also influence the optimal localisation radius (Zhen and Zhang, 2014; Flowerdew, 2015).

To avoid the need for numerical tuning and to better adapt the localisation to the dynamically created correlation structure, several adaptive localisation methods have been developed, which we briefly mention here. A common approach is to damp the spurious correlations that are caused by sampling errors due to the small ensemble size. Anderson (2007) developed a hierarchical localisation method, in which the ensemble is partitioned into sub-ensembles. The sub-ensembles are then used to estimate the sampling errors. Bishop and Hodyss (2009) proposed an adaptive localisation method that uses a power of the correlations to damp small correlations and pronounce those correlations that are significant. This method can find correlations even at longer distances. Further, methods have been developed to find empirical localisation functions. In these methods, one attempts to find, for a single observation, the weight factor that minimises the deviation from a true solution (Anderson, 2012; Lei and Anderson, 2014; Flowerdew, 2015). These methods are typically tuned once based on observing system simulation experiments (OSSEs), in which the true state is known. When the OSSEs are configured realistically, the obtained localisation functions should be applicable for the assimilation of real observations after the tuning.

The major advantage of the methods proposed so far is that they are able to adaptively specify the localisation function or radius according to the dynamically generated covariance structure. However, these methods still need tuning, which can be even more costly than for the fixed covariance and observation localisation methods. For example, the method by Bishop and Hodyss (2009) requires the specification of an envelope function around the locations found by powering the correlations, as well as of the number of powers to compute. Lei and Anderson (2014) also showed that the localisation function can change when it is applied iteratively, so that a sufficient number of iterations has to be computed.

Apart from the adaptive localisation methods, further methods like spectral localisation (Buehner and Charron, 2007) and localisation in different variables (i.e. stream function, velocity potential, Kepert, 2006) have been developed. However, none of these methods are yet a standard for operational centres.

9.1.5.

Localisation in particle filtering

Several variants of the particle filter that exploit localisation have been developed recently, following its success in ensemble Kalman filters. An issue with directly localising $\mathsf{R}$ or using domain localisation is that the weight of each particle is a global property of the filter (van Leeuwen, 2009). That is, the same particle could have a high weight in one area and a low weight in another, making it ambiguous whether this particle should be resampled or not. Keeping parts of a number of particles that all perform well in a certain area of the domain and parts of other particles in other areas of the domain would lead to balance problems between variables and sharp gradients in the fields. In contrast, when performing parameter estimation a smooth variation of parameter values is less likely to cause imbalances in the model variables, and localisation is straightforward, as pioneered by Vossepoel and Van Leeuwen (2006).

Particle filters that use a proposal density, such as the EWPF discussed in Section 6.2.1 indirectly use localisation through the model error covariance matrix $\mathsf{Q}$. This localisation does not explicitly work on the weights but on how the states are updated, because a natural choice is to pre-multiply each update of a particle with that matrix. Since the model error covariance matrix will mainly contain short length-scale correlations related to missing or inaccurate physics at the model grid scale, each point in the state space is only influenced by observations within the radius set by that covariance matrix. In fact, as noted in Section 6.2.1, we do have the freedom to choose this matrix differently from $\mathsf{Q}$, so other choices closer to our needs are possible. This is because the effects of this choice will be taken into account in the computation of the weights of each particle. This has not been explored in any detail in the literature.

Of the full particle filters, the ETPF (Reich, 2013) can easily be localised by taking, for each grid point, only observations close to that grid point into account and making the transformation matrix space-dependent to ensure smooth transitions between regions. This can, for example, be achieved by calculating the transformation matrix at a limited number of grid points and interpolating it between them. This also reduces the number of computations, which would otherwise be prohibitive (see Section 9.4 on computational costs).

The PFGR and the NETF perform an ensemble transformation similar to the ETKF, but with a transform matrix $\mathsf{T}$ computed from particle filter weights. Accordingly, observation localisation can be applied to the NETF (Tödter and Ahrens, 2015) by smoothing the weight matrix over space. This can also be applied to the MMEF in the high-dimensional implementation. The MPF can likewise be localised by making the weights local and using a systematic resampling method such as Stochastic Universal Resampling (see Appendix 1). In practice, more might be needed, e.g. the extra averaging advocated by Penny and Miyoshi (2016) and described below.

Several localisation schemes have been proposed and discussed in the review van Leeuwen (2009) and those will not be repeated here. The most obvious thing to do is to weight and resample locally, and somehow glue the resampled particles together via averaging at the edges between resampled local particles (van Leeuwen, 2003b). Recently, Penny and Miyoshi (2016) used this idea with more extensive averaging, and their scheme runs as follows. First, for each grid point j the observations close to that grid point are found and the weight of each particle i is calculated based on the likelihood of only those observations:

((170) )
$\begin{array}{c}\hfill {w}_{i,j}=\frac{p\left({\mathsf{y}}_{j}|{\mathsf{x}}_{i,j}\right)}{{\sum }_{k=1}^{{N}_{e}}p\left({\mathsf{y}}_{j}|{\mathsf{x}}_{k,j}\right)}\end{array}$

in which ${\mathsf{y}}_{j}$ denotes the set of observations within the localisation area. This is followed by resampling via Stochastic Universal Resampling to obtain ensemble members ${x}_{i,j}^{a}$ with $i=1,\dots ,{N}_{e}$ for each grid point j. As mentioned before, the issue is that two neighbouring grid points can have different sets of particles, and smoothing is needed to ensure that the posterior ensemble consists of smooth particles. This smoothing is performed for each grid point j and each particle i by averaging over the $2{N}_{p}$ neighbouring points (${N}_{p}$ on either side) within the localisation area around grid point j:

((171) )
$\begin{array}{c}\hfill {x}_{i,j}^{a}=\frac{1}{2}{x}_{i,j}^{a}+\frac{1}{4{N}_{p}}\sum _{k=1}^{2{N}_{p}}{x}_{i,{j}_{k}}^{a}\end{array}$

in which ${j}_{k}$ for $k=1,\dots ,2{N}_{p}$ denotes the grid point indices of the points in the localisation area around grid point j. The resampling via Stochastic Universal Resampling is done such that the weights are sorted before resampling, so that high-weight particles are joined up to reduce spurious gradients.
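As an illustration, the local weighting, resampling and smoothing steps above can be sketched for a one-dimensional periodic grid as follows. All function and variable names are illustrative, a Gaussian likelihood is assumed, and the smoothing coefficients are chosen to sum to one; this is a sketch, not the authors' implementation.

```python
import numpy as np

def sur_resample(weights, rng):
    """Stochastic Universal Resampling: one random offset and Ne evenly
    spaced pointers into the cumulative weight distribution."""
    ne = len(weights)
    positions = (rng.uniform() + np.arange(ne)) / ne
    idx = np.searchsorted(np.cumsum(weights), positions)
    return np.minimum(idx, ne - 1)          # guard against rounding at 1.0

def local_pf_update(ens, y, obs_idx, obs_std, radius, rng):
    """Sketch of the local weighting (Eq. 170), resampling, and smoothing
    (Eq. 171) for a one-dimensional periodic grid. `ens` is (Ne, Nx);
    `y` holds observations located at grid indices `obs_idx`."""
    ne, nx = ens.shape
    analysis = np.empty_like(ens)
    for j in range(nx):
        # find observations within the localisation radius of grid point j
        dist = np.abs(obs_idx - j)
        dist = np.minimum(dist, nx - dist)  # periodic distance
        local = dist <= radius
        if not local.any():
            analysis[:, j] = ens[:, j]
            continue
        # Eq. (170): weights from the likelihood of the local observations
        innov = y[local][None, :] - ens[:, obs_idx[local]]
        logw = -0.5 * np.sum((innov / obs_std) ** 2, axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # sort weights before resampling so high-weight particles join up
        order = np.argsort(-w)
        analysis[:, j] = ens[order[sur_resample(w[order], rng)], j]
    # Eq. (171)-style smoothing: blend each point with its 2*radius neighbours
    smoothed = 0.5 * analysis
    for k in range(1, radius + 1):
        smoothed += (np.roll(analysis, k, axis=1)
                     + np.roll(analysis, -k, axis=1)) / (4 * radius)
    return smoothed
```

Note that the resampled indices can differ from one grid point to the next, which is exactly why the final smoothing step is needed.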

While this scheme does solve the degeneracy problem in simple one-dimensional systems it is unclear if it will work well in complex systems such as the atmosphere in which fronts can easily be smoothed out, and non-linear balances broken, see e.g. the discussion in van Leeuwen (2009).

A new scheme has recently been proposed in Poterjoy (2016a), which involves a very careful process of ensuring smooth posterior particles and retaining non-linear relations. The filter processes each observation sequentially, as follows. First, adapted weights are calculated for the first element ${\mathsf{y}}_{1}$ of the observation vector, as

((172) )
$\begin{array}{c}\hfill {\stackrel{~}{w}}_{i}=\alpha p\left({\mathsf{y}}_{1}|{\mathsf{x}}_{i}\right)+1-\alpha \end{array}$

These weights are then normalised by their sum $\stackrel{~}{W}$. Then we resample the ensemble according to these normalised weights to form particles ${\mathsf{x}}_{{k}_{i}}$.

Here, $\alpha$ is an important parameter in this scheme: $\alpha =1$ leads to standard importance weighting, and $\alpha =0$ leads to all weights being equal. Its importance lies in the fact that the weights are always larger than $1-\alpha$. Even a value close to 1, say $\alpha =0.99$, leads to a minimum weight of 0.01; that might seem small, but it means that particles more than about 1.7 observational standard deviations away from the observations have their weights cut off to something close to $1-\alpha$. This seriously limits the influence the observation can have on the ensemble. Furthermore, the influence of $\alpha$ depends on the size of the observational error, which is perhaps not what one would like. It is included to avoid losing any particle.
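A small numerical sketch of Equation (172) illustrates this floor on the weights; the particle distances (in observational standard deviations) are invented for illustration:

```python
import numpy as np

alpha = 0.99
# distances of four particles to the observation, in observational std devs
dist = np.array([0.0, 1.0, 3.0, 6.0])
likelihood = np.exp(-0.5 * dist ** 2)
# Eq. (172): even the most distant particle keeps a raw weight near 1 - alpha
w = alpha * likelihood + (1.0 - alpha)
w /= w.sum()
print(np.round(w, 3))
```

The particle six standard deviations away, which a pure particle filter would weight to essentially zero, retains a non-negligible normalised weight because of the $1-\alpha$ floor.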

Now, we do the following for each grid point j. For each member i we calculate a weight

((173) )
$\begin{array}{c}\hfill {\stackrel{~}{w}}_{i}=\alpha \rho \left({\mathsf{y}}_{1},{\mathsf{x}}_{j},r\right)p\left({\mathsf{y}}_{1}|{\mathsf{x}}_{i}\right)+1-\alpha \rho \left({\mathsf{y}}_{1},{\mathsf{x}}_{j},r\right)\end{array}$

in which $\rho \left(.\right)$ is the localisation function with localisation radius r. The normalised weights for this grid point, ${w}_{i}$, are obtained by dividing ${\stackrel{~}{w}}_{i}$ by the summed weights over all the particles. Note, again, the role played by $\alpha$. Then, the posterior mean for this observation at this grid point is calculated as

((174) )
$\begin{array}{c}\hfill {\overline{\mathsf{x}}}_{j}=\sum _{i=1}^{{N}_{e}}{w}_{i}{\mathsf{x}}_{i,j}\end{array}$

in which ${\mathsf{x}}_{i,j}$ is grid point j of particle i. Next, a number of scalars are calculated that ensure smooth posterior fields (Poterjoy, 2016a):

((175) )
$\begin{array}{cc}\hfill {\sigma }_{j}^{2}& =\sum _{i=1}^{{N}_{e}}{w}_{i}{\left({\mathsf{x}}_{i,j}-{\overline{\mathsf{x}}}_{j}\right)}^{2}\hfill \\ \hfill {c}_{j}& =\frac{{N}_{e}\left(1-\alpha \rho \left({\mathsf{y}}_{1},{\mathsf{x}}_{j},r\right)\right)}{\alpha \rho \left({\mathsf{y}}_{1},{\mathsf{x}}_{j},r\right)\stackrel{~}{W}}\hfill \\ \hfill {r}_{1,j}& =\sqrt{\frac{{\sigma }_{j}^{2}}{\frac{1}{{N}_{e}-1}\sum _{i=1}^{{N}_{e}}{\left({\mathsf{x}}_{{k}_{i},j}-{\overline{\mathsf{x}}}_{j}+{c}_{j}\left({\mathsf{x}}_{i,j}-{\overline{\mathsf{x}}}_{j}\right)\right)}^{2}}}\hfill \\ \hfill {r}_{2,j}& ={c}_{j}{r}_{1,j}\hfill \end{array}$

so that the final estimate becomes:

((176) )
$\begin{array}{c}\hfill {\mathsf{x}}_{i,j}^{a}={\overline{\mathsf{x}}}_{j}+{r}_{1,j}\left({\mathsf{x}}_{{k}_{i},j}-{\overline{\mathsf{x}}}_{j}\right)+{r}_{2,j}\left({\mathsf{x}}_{i,j}-{\overline{\mathsf{x}}}_{j}\right).\end{array}$

This procedure is followed for each grid point, so that at the end we have an updated set of particles that have incorporated the first observation. Next, the whole process is repeated for the next observation, with the small change that ${\stackrel{~}{w}}_{i}$ is multiplied by the ${\stackrel{~}{w}}_{i}$ from the previous observation, until all observations have been assimilated. In this way, the full weight of all observations is accumulated in the algorithm. Now the importance of $\alpha$ comes to full light: without it the ensemble would collapse because the $\stackrel{~}{w}$’s would become degenerate as observations are accumulated.

The final estimate shows that each particle at grid point j is the posterior mean at that point plus a contribution from the deviation of the posterior resampled particle from that mean and a contribution from the deviation of the prior particle from that mean. So each particle is a mixture of posterior and prior particles, and departures from the prior are suppressed. When $\alpha =1$, so for a full particle filter, we find for grid points at the observation locations that ${c}_{j}=0$ because $\rho \left({\mathsf{y}}_{1},{\mathsf{x}}_{j},r\right)=1$ there. Accordingly, ${r}_{2,j}=0$ and ${r}_{1,j}\approx 1$, and indeed the scheme reduces to the full particle filter.
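A sketch of the per-grid-point update of Equations (173)–(176) could look as follows; the function name and calling convention are illustrative, and the weights and resampled particles are assumed to be precomputed as described above:

```python
import numpy as np

def poterjoy_point_update(x, x_resamp, w_tilde_sum, w, alpha, loc):
    """One-grid-point update following Eqs. (173)-(176). `x` holds the
    prior values at this grid point (length Ne), `x_resamp` the values of
    the globally resampled particles, `w` the normalised localised weights
    (Eq. 173), `w_tilde_sum` the sum of the unnormalised weights, and
    `loc` the localisation function value rho(y1, x_j, r)."""
    ne = len(x)
    xbar = np.sum(w * x)                                    # Eq. (174)
    sigma2 = np.sum(w * (x - xbar) ** 2)                    # Eq. (175)
    c = ne * (1.0 - alpha * loc) / (alpha * loc * w_tilde_sum)
    denom = np.sum((x_resamp - xbar + c * (x - xbar)) ** 2) / (ne - 1)
    r1 = np.sqrt(sigma2 / denom)
    r2 = c * r1
    return xbar + r1 * (x_resamp - xbar) + r2 * (x - xbar)  # Eq. (176)
```

With $\alpha =1$ and loc$\,=1$ the coefficient c vanishes, so the prior contribution drops out and only the resampled particles remain, as discussed above.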

Between observation locations it can be shown that the particles have the correct first- and second-order moments, but higher-order moments are not conserved. To remedy this, a probabilistic correction is applied at each grid point as follows. The prior particles are dressed by Gaussians of width 1 and weighted by the likelihood weights to generate the correct posterior pdf. The posterior particles are dressed in the same way, each with weight $1/{N}_{e}$. Then the cumulative distribution functions (cdf’s) of the two densities are calculated using trapezoidal-rule integration. A cubic spline is used to find the prior cdf values at each prior particle i, denoted by cdf(i). Then a cubic spline is fitted to the other cdf, and the posterior particle i is found as the inverse of that cdf at value cdf(i). See Poterjoy (2016a) for details. The result of this procedure is that higher-order moments are brought back into the ensemble between observed points.
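One plausible reading of this correction is a cdf-matching step; the sketch below uses linear interpolation in place of the cubic splines of Poterjoy (2016a), and all names and defaults are illustrative:

```python
import numpy as np

def cdf_correction(prior, weights, posterior, width=1.0, ngrid=2000):
    """Sketch: dress the prior particles with Gaussian kernels weighted by
    the likelihood weights (target pdf), dress the posterior particles
    with equal weights, and map each posterior particle through cdf
    matching so higher-order moments of the target are recovered."""
    lo = min(prior.min(), posterior.min()) - 5 * width
    hi = max(prior.max(), posterior.max()) + 5 * width
    z = np.linspace(lo, hi, ngrid)

    def mixture_cdf(centres, w):
        # Gaussian-mixture pdf on the grid, integrated by the trapezoidal rule
        pdf = np.sum(w[:, None]
                     * np.exp(-0.5 * ((z - centres[:, None]) / width) ** 2),
                     axis=0) / (width * np.sqrt(2 * np.pi))
        cdf = np.concatenate(
            ([0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(z))))
        return cdf / cdf[-1]

    ne = len(prior)
    cdf_target = mixture_cdf(prior, weights / weights.sum())
    cdf_equal = mixture_cdf(posterior, np.full(ne, 1.0 / ne))
    u = np.interp(posterior, z, cdf_equal)   # cdf value at each posterior particle
    return np.interp(u, cdf_target, z)       # inverse of the target cdf
```

With uniform weights and identical prior and posterior ensembles the mapping reduces to the identity, which is a useful sanity check.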

This scheme, although rather complicated, is the only local particle filter scheme that has been applied to high-dimensional geophysical systems based on primitive equations in Poterjoy and Anderson (2016b). (van Leeuwen, 2003b applied a local particle filter to a high-dimensional quasi-geostrophic system, but that system is quite robust to sharp gradients as it does not allow gravity waves.)

Another interesting local particle filter is the Multivariate Rank Histogram Filter (Metref et al., 2014a). The idea is to write the posterior pdf in terms of an observed marginal multiplied by a set of conditional pdfs. For example, for a 3-dimensional system in which variable ${x}_{1}$ is observed we have:

((177) )
$\begin{array}{cc}\hfill p\left({x}_{1},{x}_{2},{x}_{3}|y\right)& =\frac{p\left(y|{x}_{1}\right)}{p\left(y\right)}p\left({x}_{1},{x}_{2},{x}_{3}\right)\hfill \\ \hfill & =\frac{p\left(y|{x}_{1}\right)}{p\left(y\right)}p\left({x}_{1},{x}_{2}\right)p\left({x}_{3}|{x}_{1},{x}_{2}\right)\hfill \\ \hfill & =\frac{p\left(y|{x}_{1}\right)}{p\left(y\right)}p\left({x}_{1}\right)p\left({x}_{2}|{x}_{1}\right)p\left({x}_{3}|{x}_{1},{x}_{2}\right).\hfill \end{array}$

The filter now uses the rank-histogram idea of Anderson (2010) on each component, resulting in a fully non-Gaussian update of each component. Localisation can be easily applied directly in this algorithm as it is a transformation algorithm and the transformation can be made local. Unfortunately, this procedure becomes too expensive when the system is high dimensional. However, via a so-called mean-field approximation we suppress the conditioning on non-observed variables, so that we find:

((178) )
$\begin{array}{c}\hfill p\left({x}_{1},{x}_{2},{x}_{3}|y\right)\approx \frac{p\left(y|{x}_{1}\right)}{p\left(y\right)}p\left({x}_{1}\right)p\left({x}_{2}|{x}_{1}\right)p\left({x}_{3}|{x}_{1}\right).\end{array}$

This will make the algorithm parallelisable and suitable for high-dimensional applications, although that has not been explored yet.

9.2.

Ensemble covariance inflation

In practice, an ensemble Kalman filter can diverge from the truth due to systematic underestimation of the error variances in the filter, possibly caused by model errors or ensemble undersampling as discussed in Section 9.1. In particular, overestimating long-range correlations reduces the estimated variance too strongly. Regardless of the cause, underestimating the uncertainty leads to a filter that is overly confident in the state estimate. Thus, the analysis step of the filter puts increasingly more weight on the ensemble background estimate than on the observations and, at some point, disregards the observations completely. Localisation is one method to reduce the undersampling. However, for high-dimensional systems, localisation alone is not sufficient to ensure a stable assimilation process, and covariance inflation is applied to further increase the sampled variance and thus stabilise the filter. In addition, the inflation can partly account for model error in case of an imperfect model (Pham et al., 1998b; Hamill, 2001; Anderson, 2001; Whitaker and Hamill, 2002; Hunt et al., 2007).

Most common is a fixed multiplicative covariance inflation (Anderson and Anderson, 1999). The method uses an inflation factor r to inflate each ensemble member ${\mathsf{x}}_{j}^{a,f}$, $j=1,\dots ,{N}_{e}$, about the ensemble mean:

((179) )
$\begin{array}{c}\hfill {\mathsf{x}}_{j}^{a,f}=r\left({\mathsf{x}}_{j}^{a,f}-{\overline{\mathsf{x}}}^{a,f}\right)+{\overline{\mathsf{x}}}^{a,f}\end{array}$

where r is usually chosen to be slightly greater than one. The specification of an optimal inflation factor may vary according to the size of the ensemble (Hamill, 2001; Whitaker and Hamill, 2002), and the choice of r will depend on various factors, such as the dynamics of the model, the type of ensemble filter used, as well as the length scale of the covariance localisation.
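Equation (179) amounts to a one-line operation on the ensemble; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def inflate_multiplicative(ens, r):
    """Eq. (179): scale the ensemble perturbations about the mean by r.
    `ens` is (Ne, Nx); the ensemble mean is preserved and every sampled
    variance is multiplied by r**2."""
    mean = ens.mean(axis=0)
    return mean + r * (ens - mean)
```

Because the mean is untouched, the inflation changes only the spread, not the state estimate itself.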

Related to covariance inflation is the so-called ’forgetting-factor’ $\rho$ introduced by Pham et al. (1998b). The forgetting factor is usually chosen to be slightly lower than one and is typically applied in the square root filters like the ETKF, SEIK and ESTKF. For example, in the ETKF it is applied to $\mathsf{T}{\mathsf{T}}^{T}$, e.g. in Equation (44) as

((180) )
$\begin{array}{c}\hfill \mathsf{T}{\mathsf{T}}^{\mathsf{T}}={\left(\rho \mathsf{I}+\frac{1}{{N}_{e}-1}{\mathsf{S}}^{\mathsf{T}}{\mathsf{R}}^{-1}\mathsf{S}\right)}^{-1}.\end{array}$

In this way, the inflation and forgetting factors are related as $\rho ={r}^{-2}$. Equation (180) allows one to apply inflation in a computationally very efficient way because $\mathsf{T}{\mathsf{T}}^{T}$ is much smaller than the ensemble states to which the inflation is applied in Equation (179).

Besides multiplicative inflation, additive inflation has been proposed (Ott et al., 2004); it can also be applied in combination with multiplicative inflation. Multiplicative inflation is relative to the variance level, so large variances are inflated much more than small variances. In additive inflation, all variances are instead increased by the same amount rather than by a relative factor. This difference can be useful if the variances vary strongly, since additive inflation then acts more strongly on the very small variances.
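A small numerical comparison may clarify the difference; the variances and inflation strengths below are invented for illustration:

```python
# Variance and inflation values are made up for illustration.
var_small, var_large = 0.01, 4.0   # variances of two state components
r, q = 1.05, 0.1                   # multiplicative factor, additive noise std

# multiplicative inflation multiplies every variance by r**2 ...
print("multiplicative:", r**2 * var_small, r**2 * var_large)
# ... while additive inflation adds the same q**2 to every variance,
# a much larger *relative* change for the small-variance component
print("additive:      ", var_small + q**2, var_large + q**2)
```

Here the additive term triples the small variance while barely touching the large one, whereas the multiplicative factor raises both by the same 10 per cent.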

The optimal strength of the inflation is usually determined by tuning experiments, i.e. running experiments with different inflation values and analysing which value results in the smallest estimation errors. Usually a single fixed value of r or $\rho$ is chosen for all grid points. This situation is mainly motivated by the fact that a manual tuning of spatially varying inflations is not feasible for high-dimensional models. To avoid the tuning, several adaptive inflation methods have been proposed. Brankart et al. (2003) proposed to use the relation

((181) )
$\begin{array}{c}\hfill \mathrm{tr}\left({\rho }^{-1}\mathsf{S}{\mathsf{S}}^{T}+\mathsf{R}\right)={\left(\mathsf{y}-\mathcal{H}\left({\overline{\mathsf{x}}}^{f}\right)\right)}^{T}\left(\mathsf{y}-\mathcal{H}\left({\overline{\mathsf{x}}}^{f}\right)\right)\end{array}$

to estimate a temporally variable forgetting factor $\rho$ for multiplicative inflation. This equation is one of the statistical consistency relations in observation space that Kalman filters should fulfil (Desroziers et al., 2005). Further, Anderson (2009) proposed a method to adaptively estimate spatially and temporally varying inflation factors. This method also aims to fulfil Equation (181) but uses Bayesian estimation to obtain the inflation values. All of these adaptive methods assume very good knowledge of the observation error covariance. Apart from adaptively inflating the ensemble spread, adaptive inflation of observation errors has been proposed by Minamide and Zhang (2017) for assimilating all-sky satellite brightness temperatures.
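Solving Equation (181) for $\rho$ gives $\rho =\mathrm{tr}\left(\mathsf{S}{\mathsf{S}}^{T}\right)/\left({\mathsf{d}}^{T}\mathsf{d}-\mathrm{tr}\left(\mathsf{R}\right)\right)$ with innovation $\mathsf{d}=\mathsf{y}-\mathcal{H}\left({\overline{\mathsf{x}}}^{f}\right)$, which can be sketched as follows; the clipping bounds are an illustrative safeguard, not part of Brankart et al. (2003):

```python
import numpy as np

def adaptive_forgetting_factor(S, R, d):
    """Estimate the forgetting factor rho by solving Eq. (181):
    rho = tr(S S^T) / (d^T d - tr(R)), with d = y - H(xbar^f) the
    innovation and S the ensemble perturbations in observation space.
    The clipping to [0.1, 1.0] is an illustrative safeguard only."""
    rho = np.trace(S @ S.T) / (d @ d - np.trace(R))
    return float(np.clip(rho, 0.1, 1.0))
```

Clipping prevents nonsensical values when the innovation happens to be smaller than the observation error alone, a situation a robust implementation must handle.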

An alternative to the inflation can be to explicitly account for the sampling error caused by the finite ensemble size as is done in the finite-size ensemble transform Kalman filter (Bocquet, 2011). This method, while still denoted ’Kalman filter’, requires the iterative minimisation of a cost functional and is hence distinct from the Ensemble Kalman filter variants in Section 5, which compute a one-step analysis update.

9.3.

Parallelisation of EnDA

The need to integrate an ensemble of model states leads to large computational costs because, instead of a single model integration as in normal modelling applications, an ensemble of $\mathcal{O}$(10–100) members has to be propagated. To reduce the time needed for these costly computations one can parallelise the data assimilation program and use high-performance computers with a large number of processors. The ensemble integrations, the most costly part of the computations, can be easily parallelised since the integration of each ensemble state is independent of the other states. Thus, this step can be parallelised by simply starting the numerical model ${N}_{e}$ times. Each model state has to be initialised from a different restart file, and one has to store the final state of each model integration to keep the information on the forecast ensemble. Subsequent to the ensemble forecasts, one starts the data assimilation program, which reads the ensemble information from the files, computes the analysis step, and writes a set of new restart files to prepare the next forecast phase. The computations of the analysis step can also be parallelised, as is outlined below. This implementation scheme of data assimilation can be termed ’offline coupling’ (Nerger and Hiller, 2013). While being flexible, the frequent writing and reading of the large files holding the ensemble states can take a significant amount of time.

A more sophisticated parallelisation of the ensemble data assimilation problem with a high-dimensional ocean model was discussed by Keppenne and Rienecker (2002) and Keppenne and Rienecker (2003). This method applied a domain-decomposition to the model and then integrated several ensemble states concurrently. The forecast ensemble was then collected by the use of the parallelisation technique SHMEM, which was also used for exchanging data in between processors during the analysis step of the EnKF applied in this study. Keeping the analysis step and the ensemble forecasts within one program reduced the overall computing time because the writing and reading of model state files is reduced.

The analysis step of the ensemble filters can also be parallelised using parallelisation methods like the Message Passing Interface (MPI, Gropp et al., 1994). The parallelisation differs depending on whether localisation is used and on which filter is used. For the filter methods that assimilate all observations at once (in contrast to the serial observation processing of the EAKF and EnSRF), using the domain decomposition of a model was found to be more efficient than distributing the ensemble over several processors, because the amount of data that has to be exchanged using MPI is smaller for domain decomposition (Nerger et al., 2005a).

For the ensemble Kalman filters with domain localisation, the local analysis update is independent for each local domain. Thus, this part is naturally parallel and can be distributed with MPI, the shared-memory standard OpenMP, or a combination of both. However, because the observation domains have a larger spatial extent they can reach into the grid domain held by neighbouring processors. The local analysis step needs the difference (innovation) between the observation and the corresponding part of the observed state vector. These differences need to be first computed by the processor that holds the sub-domain and then exchanged in between the different processors computing the analysis step. This computation of the observation innovations and their exchange using MPI is only required once before the loop over all local analysis domains can be computed in parallel (Nerger and Hiller, 2013). The cost for these operations depends on the total number of observations and on their distribution over the model grid. For many observations this can limit the parallel speedup of the analysis update as was shown for the localised SEIK filter by Nerger and Hiller (2013).

The EAKF and EnSRF are typically applied with serial observation processing and covariance localisation. In this case, the parallelisation of the analysis step has to take into account that for each assimilated observation the full model state has to be updated. Hence, the innovations between the not-yet-assimilated observations and the corresponding observed model state also change after each update. Anderson (2007) proposed to let each processor update the innovations separately so that the required parallel communication is limited. This parallelisation does not take the localisation into account. Exploiting the fact that the localisation limits the reach of the observation influence, Wang et al. (2013) proposed another parallelisation strategy.

The analysis step of the ensemble Kalman filters requires only the model states. This allows for a generic coupling between the model and the analysis step. In particular, one can implement filter algorithms such that they can be coupled in the same way with different models. This allows one to build generic frameworks for ensemble data assimilation (Nerger et al., 2005a; Nerger and Hiller, 2013; Browne and Wilson, 2015). In the generic form, the ensemble forecast can still be computed by concurrent parallel model forecasts. The transfer of the forecast state information can then be performed either directly in memory by subroutine calls (Nerger and Hiller, 2013) or by parallel communication using MPI (Nerger, 2004; Browne and Wilson, 2015). These strategies allow a tight ’online’ coupling of the model and the data assimilation code that computes the analysis updates. The coupling can be achieved with minimal changes in the model code.

For the implementation of the EWPF and IEWPF, different parallelisation schemes are applicable for the computations at each nudging step between observations (Equations (120) and (123)) and at observation time (Equations (106)–(111) for the EWPF and Equations (112)–(114) for the IEWPF). Before the observation time, the computations of the random forcing ${\stackrel{~}{\beta }}_{j}^{\left(m\right)}$ in Equation (120) are independent for each particle since a different forcing is drawn from the covariance for each of them. Similarly, the nudging term in Equation (120) and the update of the weights are independent for each particle. Thus, these operations can be performed in parallel and there is no need to gather all particles on a single process. The computation of the matrix $\mathrm{Υ}$ in Equations (108) and (117) is computationally the most expensive part. When the observation operator does not change over time, this matrix can be precomputed before beginning the assimilation. The downside of this approach is that the matrix can be huge and requires a lot of memory if the state dimension and number of observations are large. Otherwise, since the same matrix is used by all particles, it is possible to distribute the computation over all processes allocated to the particles, e.g. using a parallel matrix solver. At observation time, most of the computations are again independent for all particles. Only the maximum weight obtained from Equations (106) and (118) for the EWPF and IEWPF, respectively, must be exchanged over all processes holding particles, so that the target weight ${w}_{\phantom{\rule{0.333333em}{0ex}}\text{target}}$ can be computed. Further parallelisation, e.g. using the domain decomposition of the model, might also be possible. However, the matrix $\mathsf{Q}$ is frequently implemented in the form of operators. As the parallelisation is always dependent on the particular implementation of the matrix $\mathsf{Q}$, it cannot be generalised for all models.

9.4.

Computational cost

In Section 5, we presented the various ensemble-based Kalman filter methods in a clean mathematical way for ease of comparison and clarity. In Appendix 2, we give practical and precise pseudo-algorithms on how to implement each method. Providing detailed operation counts for all the ensemble methods presented in this paper would be too lengthy; more importantly, the actual operation count depends on many details, such as the operators $\mathcal{H}$ and $\mathsf{R}$, which are specific to the model and observations. The operation counts provided here have been obtained by counting operations in the pseudo-codes in Appendix 2.

Generally, the leading-order operation counts in the different filters are those that scale with the third order in any of the dimensions ${N}_{x}$, ${N}_{y}$, and ${N}_{e}$. For the SEIK, ETKF, ESTKF, and EAKF methods, the leading order of the operation count is $O\left({N}_{y}{N}_{e}^{2}+{N}_{e}^{3}+{N}_{x}{N}_{e}^{2}\right)$ if the observation error covariance matrix is diagonal. The main cost is the update of the ensemble by multiplying with the weight matrix in Equation (19), which has a complexity of $O\left({N}_{x}{N}_{e}^{2}\right)$. Computing the matrix $\mathbf{T}{\mathbf{T}}^{T}$, e.g. in Equation (25), involves multiplications of matrix $\mathbf{S}$ with ${\mathbf{R}}^{-1}$, which has a complexity of $O\left({N}_{y}{N}_{e}^{2}\right)$. Finally, computing the transform matrix $\mathbf{T}$ by a Cholesky decomposition or an EVD of an ${N}_{e}×{N}_{e}$ matrix has a computational complexity of $O\left({N}_{e}^{3}\right)$. While the leading order of the operation count is identical for all four filters, the SEIK and ESTKF are in general computationally faster than the ETKF or the bulk formulation of the EAKF due to details in the algorithms. For the EAKF, computing the SVDs of the matrices ${\mathbf{X}}^{\prime f}$ and $\stackrel{~}{\mathbf{S}}$, whose costs scale with $O\left({N}_{x}{N}_{e}^{2}\right)$ and $O\left({N}_{y}{N}_{e}^{2}\right)$, respectively, increases the computing time without changing the leading order of the operation count. Thus, the leading order of the operation count does not fully reflect the computing speed.

The serial observation handling that is usually applied in the EAKF and EnSRF leads to an operation count of $O\left({N}_{y}{N}_{x}{N}_{e}\right)$ in the leading order. Because only the ensemble updates are of third-order complexity in the serial update, it can be faster than the bulk updates that assimilate all observations at once. This is even the case when localisation is used. However, in combination with localisation, the stability of the serial formulations can deteriorate (Nerger, 2015).

The leading order operation count of the stochastic EnKF with perturbed forecasted observations is $O\left({N}_{y}{N}_{e}^{2}+{N}_{y}^{2}{N}_{e}+{N}_{y}^{3}+{N}_{x}{N}_{e}^{2}\right)$. Here again the ensemble update, which scales as $O\left({N}_{x}{N}_{e}^{2}\right)$, is usually the most costly operation. However, the EnKF is usually more costly than the filters mentioned before because of the inversion of the ${N}_{y}×{N}_{y}$ matrix ${\mathsf{F}}_{F}$ (Equation (69)), which has a complexity of $O\left({N}_{y}^{3}\right)$. Parallelising this inversion can help to reduce the computing time. A computing cost $O\left({N}_{y}^{3}\right)$ also occurs for the bulk formulation of the EnSRF due to the EVD computed in Equation (58).

When localisation is used, the change in the cost compared to the global formulation depends on the localisation method used. For covariance localisation (Section 9.1.2), the cost for computing the weight in matrix $\mathsf{C}$ and to apply it to ${\mathsf{P}}^{f}$ or the matrices ${\mathsf{P}}^{f}{\mathsf{H}}^{T}$ and $\mathsf{H}{\mathsf{P}}^{f}{\mathsf{H}}^{T}$ is added. For observation localisation (Section 9.1.3), the cost to compute the analysis for a local analysis domain with the bulk update methods is $O\left({N}_{y,\gamma }{N}_{e}^{2}+{N}_{e}^{3}+{N}_{x,\gamma }{N}_{e}^{2}\right)$, where ${N}_{y,\gamma }$ is the number of local observations and ${N}_{x,\gamma }$ is the size of a local state vector that is corrected. Because both ${N}_{y,\gamma }$ and ${N}_{x,\gamma }$ are usually much smaller than the global dimensions ${N}_{y}$ and ${N}_{x}$, a single local analysis update is cheaper than the global update. However, the local analysis update has to be computed for each local analysis domain. Thus, the cost for the analysis with observation localisation is usually significantly higher than the global analysis. However, the local analysis can be easily parallelised to reduce the computing time as was described in Section 9.3.

The computing cost in the ETKF can be reduced using a projection matrix $\mathsf{A}$ analogous to the SEIK and ESTKF methods. For the ETKF, this projection is a square matrix with diagonal entries $1-1/{N}_{e}$ and off-diagonal entries $-1/{N}_{e}$. The advantage of using $\mathsf{A}$ is that one can avoid the explicit computation of ${\mathsf{X}}^{\prime }$ in favour of applying $\mathsf{A}$ to smaller matrices when evaluating the analysis equations (Nerger et al., 2012a).
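A minimal sketch of this projection (the function name is illustrative): since $\mathsf{A}=\mathsf{I}-\frac{1}{{N}_{e}}\mathbf{1}{\mathbf{1}}^{T}$, applying it to the ensemble matrix from the right removes the ensemble mean from every column:

```python
import numpy as np

def projection_matrix(ne):
    """ETKF projection described above: diagonal entries 1 - 1/Ne and
    off-diagonal entries -1/Ne, i.e. A = I - (1/Ne) 11^T. For an
    (Nx, Ne) ensemble matrix X, the product X A equals the perturbation
    matrix X' without forming X' explicitly."""
    return np.eye(ne) - np.full((ne, ne), 1.0 / ne)
```

In practice one would apply $\mathsf{A}$ to the small ${N}_{e}×{N}_{e}$ matrices in the analysis equations rather than to the full ensemble, which is where the cost saving arises.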

The computational cost of the particle-based non-linear filter NETF (Section 7.1) is similar to that of the ETKF since the analysis is performed in the ${N}_{e}$-dimensional subspace spanned by the ensemble members. In addition, the NETF does not compute an inverse matrix, thus avoiding computational instabilities caused by small singular values, which are sometimes neglected in ETKF implementations for that reason (Sakov et al., 2012). If localisation is applied to the NETF, the local analysis computations are independent and can be evaluated in parallel as for the ETKF. The generation of random rotation matrices consumes additional resources; however, it is possible to resort to a collection of pre-calculated random matrices since they only depend on the ensemble size ${N}_{e}$.

9.5.

Ensemble data assimilation and non-linearity

The original EnKF was developed to overcome stability problems of the extended Kalman filter (see Jazwinski, 1970) that were discovered in ocean data assimilation applications (Evensen, 1993). Due to its use of an ensemble to propagate the state error covariance matrix, the EnKF is well suited for non-linear models in the forecast phase. However, the analysis step is based on the Kalman filter and is only optimal for Gaussian distributions. Obviously, a non-linear model forecast will transform a Gaussian distribution into a non-Gaussian distribution. Hence, the optimality of the Kalman filter is no longer preserved and the estimated analysis state and the error estimates will be suboptimal. This is a common issue for all ensemble filters whose analysis step is based on the equations of the Kalman filter. Nonetheless, the many existing data assimilation studies with non-linear models, e.g. of the ocean or atmosphere, with different formulations of the ensemble Kalman filters show that these filters are rather stable with regard to non-linearity.

Second-order accurate ensemble filters, like the NETF, MMEF and MPF in Section 7 as well as the adaptive Gaussian mixture filter described in Section 8 avoid the assumption that the forecast ensemble has a Gaussian distribution. Thus, they should be better suited for non-linear systems. When the methods are applied with localisation, they can also be applied with large systems (e.g. the NETF in Tödter et al., 2016). However, filters like the NETF are still approximations to the full non-linear analysis that is performed by particle filters.

Particle filters do not rely on any assumption on the error distribution of the state estimate. However, the observation errors are frequently assumed to be Gaussian, as is the case for the particle filters presented in Section 6. Additionally, while the EWPF and IEWPF do not require knowledge of forecast errors, they both require good knowledge of the model errors, i.e. $\mathsf{Q}$. Of course, good knowledge of model errors is always beneficial to forecasting irrespective of the data assimilation method used, but for the application of the EWPF and IEWPF model errors are essential. While standard particle filters suffer from the curse of dimensionality when applied to large systems, the EWPF and IEWPF are by construction designed to work for high-dimensional models, including those which are highly non-linear, with a small number of particles, e.g. Ades and van Leeuwen (2013) and Zhu et al. (2016). However, when applying the relaxation scheme between observations in an EWPF, one has to choose the relaxation term $\stackrel{~}{\mathrm{Υ}}$ in Equation (120) very carefully, by selecting a relaxation strength function $\rho$ and covariance matrix appropriate to the model at hand. The relaxation term can be chosen to be constant between observation times, but that would not be a good idea if the system experiences oscillations between observations. In that case, the strength can be chosen to increase linearly with the time lag to the next observation that the particles are nudged towards, or non-linearly with maximum strength close to the observations.

In all local particle filters that we discussed the posterior particles are linear combinations of the prior particles. This has the potential to break non-linear balances between variables in the model. However, the linear combinations are typically formed such that only prior particles are added that are close to each other in state space, and hence quite similar. So this is not necessarily a disadvantage.

10.

Summary and conclusion

This overview paper provides a coherent algorithmic summary of, and highlights the differences between, many currently used ensemble data assimilation methods that can be applied to high-dimensional and non-linear problems such as ocean or weather prediction, including the well-known ensemble-based Kalman filters as well as recently developed particle filter methods and the Gaussian mixture filter.

We have presented these methods in a mathematically coherent way, allowing the reader to compare the many methods easily. In particular, we have presented all ensemble-based Kalman filter methods in the form of a square root filter. In addition, we have included practical pseudo-algorithms for all methods, since for computational reasons many of them would not be implemented in the form in which they are mathematically described. For some of the particle filters and for the Gaussian mixture filter we have presented the theory along with the step-by-step algorithm.

Finally, we have discussed important issues for practical implementation of the ensemble methods including various methods of localisation, inflation, parallelisation, computational cost and ensemble applicability to non-linear problems.

Concluding, a wealth of ensemble-based data-assimilation methods has been developed, and although they seem quite different in theory, their numerical implementations are quite similar. These implementations also turn out to be similar to those of the particle filters, even those that explore a proposal density, where the state covariances that play an essential role in the Kalman filters are replaced by the covariance of the model errors. The main difference is that the state covariances evolve over time and are always of low rank, while the model error covariance is given and of full rank but sparse. This means that different numerical algorithms need to be used to solve the equations when the system of interest has a high dimension.

11.

Code availability

We note that many of the algorithms here have been efficiently implemented as part of the Sangoma project and are freely available on the project website http://sourceforge.net/p/sangoma/, along with many other tools useful for data assimilation. Table 2 provides a list of available filters. Please note that the filter implementation was done independently of this paper, so not all filters described here are available. For simplicity, these filters have been implemented without parallelisation and are hence only usable for moderately large problems with a state dimension of O(${10}^{5}$).

Further, all of these analysis methods have been implemented in at least one of the toolboxes connected to the Sangoma project: EMPIRE, OAK, SESAM, OPENDA, BELUGA/SEQUOIA, NERSC and PDAF. For example, the set of filters listed in Table 2, plus the SEIK filter (Section 5.2) with localisation, are available in a parallelised implementation for high-dimensional problems in the freely available data assimilation framework PDAF (Nerger et al., 2005a; Nerger and Hiller, 2013). Further, the EWPF (Section 6.2.1) and IEWPF (Section 6.2.2) are available in EMPIRE (Browne and Wilson, 2015).

Disclosure statement

No potential conflict of interest was reported by the authors.

2 The discussion of the rapidly growing developments in hybrid data assimilation methods is beyond the scope of this paper; instead we refer the reader to a recent review article by Bannister (2017), and to papers by Frei and Künsch (2013) and Chustagulprom et al. (2016) aiming to bridge particle and ensemble Kalman filter methods.

3 Note that $\rho =1$ if $\stackrel{^}{p}={\stackrel{~}{N}}_{e}$.

4 Many of the analysis methods discussed in this paper, including the MRHF, have been implemented in Sangoma and are available for free download from www.data-assimilation.net, as are many other data assimilation tools for diagnostics, utilities, etc.

5 Interestingly, the ECMWF is using an ensemble of 4DVars for their weather forecasting scheme, and it is relatively easy to turn this into a set of particles using 4DVar as proposal (see e.g. van Leeuwen et al., 2015).

6 The model error covariance matrices are usually assumed to be equal, i.e. $\stackrel{~}{\mathsf{Q}}=\mathsf{Q}$.

Acknowledgements

PJvL thanks the European Research Council (ERC) for funding of the CUNDA project under the European Union's Horizon 2020 research and innovation programme.

References

1. Ades, M. and van Leeuwen, P. J.2013. An exploration of the equivalent weights particle filter. Q. J. R. Meteorol. Soc.139, 820–840.

2. Anderson, J.2003. A local least squares framework for ensemble filtering. Mon. Wea. Rev.131, 634–642.

3. Anderson, J. L.2001. An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev.129, 2884–2903.

4. Anderson, J. L.2007. Exploring the need for localization in ensemble data assimilation using a hierarchical ensemble filter. Physica D230, 99–111.

5. Anderson, J. L.2009. Spatially and temporally varying adaptive covariance inflation for ensemble filters. Tellus61A, 72–83.

6. Anderson, J. L.2010. A non-Gaussian ensemble filter update for data assimilation. Mon. Wea. Rev.138(11), 4186–4198.

7. Anderson, J. L.2012. Localization and sampling error correction in ensemble Kalman filter data assimilation. Mon. Wea. Rev.140, 2359–2371.

8. Anderson, J. L. and Anderson, S. L.1999. A Monte Carlo implementation of the non-linear filtering problem to produce ensemble assimilations and forecasts. Mon. Wea. Rev.126, 2741–2758.

9. Bannister, R. N.2017. A review of operational methods of variational and ensemble-variational data assimilation. Q. J. R. Meteorol. Soc.143, 607–633.

10. Bengtsson, T., Snyder, C. and Nychka, D.2003. Toward a nonlinear ensemble filter for high-dimensional systems. J. Geophys. Res.108, 8775–8785.

11. Bengtsson, T., Bickel, P. and Li, B.2008. Curse-of-dimensionality revisited: collapse of the particle filter in very large scale systems. IMS Collections: Prob. Stat. Essays Honor David A. Freedman2, 316–334.

12. Bishop, C. H. and Hodyss, D.2009. Ensemble covariances adaptively localized with ECO-RAP. Part 2: a strategy for the atmosphere. Tellus61A, 97–111.

13. Bishop, C. H., Etherton, B. J. and Majumdar, S. J.2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: theoretical aspects. Mon. Wea. Rev.129, 420–436.

14. Blockley, E. W., Martin, M. J., McLaren, A. J., Ryan, A. G., Waters, A., and co-authors.2014. Recent development of the Met Office operational ocean forecasting system: an overview and assessment of the new global FOAM forecasts. Geosci. Mod. Dev.7, 2613–2638.

15. Bocquet, M.2011. Ensemble Kalman filtering without the intrinsic need for inflation. Nonl. Proc. Geophy.18, 735–750.

16. Bocquet, M., Pires, C. A. and Wu, L.2010. Beyond Gaussian statistical modelling in geophysical data assimilation. Mon. Wea. Rev.138, 2997–3023.

17. Bolić, M., Djurić, P. M. and Hong, S.2003. New resampling algorithms for particle filters. In: Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), Vol. 2, IEEE, pp. II-589–592.

18. Brankart, J.-M., Testut, C.-E., Brasseur, P. and Verron, J.2003. Implementation of a multivariate data assimilation scheme for isopycnic coordinate ocean models: application to a 1993–1996 hindcast of the North Atlantic ocean circulation. J. Geophys. Res.108(C3), 3074.

19. Browne, P. A. and Wilson, S.2015. A simple method for integrating a complex model into an ensemble data assimilation system using MPI. Env. Modell. Software68, 122–128.

20. Brusdal, K., Brankart, J. M., Halberstadt, G., Evensen, G., Brasseur, P., and co-authors.2003. A demonstration of ensemble based assimilation methods with a layered OGCM from the perspective of operational ocean forecasting systems. J. Mar. Syst.40–41, 253–289.

21. Buehner, M. and Charron, M.2007. Spectral and spatial localization of background-error correlations for data assimilation. Q. J. R. Meteorol. Soc.133, 615–630.

22. Burgers, G., van Leeuwen, P. J. and Evensen, G.1998. Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev.126(6), 1719–1724.

23. Campbell, W. F., Bishop, C. H. and Hodyss, D.2010. Vertical covariance localization for satellite radiances in ensemble Kalman filters. Mon. Wea. Rev.138, 282–290.

24. Chorin, A. J. and Tu, X.2009. Implicit sampling for particle filters. PNAS106, 17249–17254.

25. Chorin, A. J., Morzfeld, M. and Tu, X.2010. Interpolation and iteration for nonlinear filters. Commun. Appl. Math. Comput. Sci.5, 221–240.

26. Chustagulprom, N., Reich, S. and Reinhardt, M.2016. A hybrid ensemble transform filter for nonlinear and spatially extended dynamical systems. SIAM/ASA J. Uncert. Quant.4, 552–591.

27. Cohn, S. E., Da Silva, A., Guo, J., Sienkiewicz, M. and Lamich, D.1998. Assessing the effects of data selection with the DAO physical-space statistical analysis system. Mon. Wea. Rev.126, 2913–2926.

28. Desroziers, G., Berre, L., Chapnik, B. and Poli, P.2005. Diagnosis of observation, background and analysis-error statistics in observation space. Q. J. R. Meteorol. Soc.131, 3385–3396.

29. Doucet, A., Godsill, S. and Andrieu, C.2000. On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput.10, 197–208.

30. Doucet, A., de Freitas, N., Gordon, N.2001. Sequential Monte-Carlo Methods in Practice. Springer-Verlag, New York.

31. Evensen, G.1993. Open boundary conditions for the extended Kalman filter with a quasi-geostrophic ocean model. J. Geophys. Res.98(C9), 16529–16546.

32. Evensen, G.1994. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte-Carlo methods to forecast error statistics. J. Geophys. Res.99, 10143–10162.

33. Evensen, G.2003. The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dyn.53, 343–367.

34. Flowerdew, J.2015. Towards a theory of optimal localisation. Tellus A67, 25257.

35. Frei, M. and Künsch, H. R.2013. Bridging the ensemble Kalman and particle filters. Biometrika100, 781–800.

36. Gaspari, G. and Cohn, S.1999. Construction of correlation functions in two and three dimensions. Q. J. R. Meteorol. Soc.125, 723–757.

37. Golub, G. H. and Van Loan, C. F.1996. Matrix computations, 3rd ed.The Johns Hopkins University Press, Baltimore and London.

38. Gordon, N. J., Salmond, D. J. and Smith, A. F. M.1993. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEEE Proc.140, 107–113.

39. Greybush, S. J., Kalnay, E., Miyoshi, T., Ide, K. and Hunt, B. R.2011. Balance and ensemble Kalman filter localization techniques. Mon. Wea. Rev.139, 511–522.

40. Gropp, W., Lusk, E. and Skjellum, A.1994. Using MPI: Portable Parallel Programming with the Message-Passing Interface. The MIT Press, Cambridge, Massachusetts.

41. Hamill, T. M.2001. Interpretation of rank histograms for verifying ensemble forecasts. Mon. Wea. Rev.129, 550–560.

42. Hamill, T. M.2006. Ensemble-based atmospheric data assimilation. In: Predictability of Weather and Climate (eds. T.Palmer and R.Hagedorn). Cambridge University Press, New York, pp. 124–156. chapter 6.

43. Hamill, T. M., Whitaker, J. S. and Snyder, C.2001. Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Mon. Wea. Rev.129, 2776–2790.

44. Hoteit, I., Pham, D.-T., Triantafyllou, G. and Korres, G.2008. A new approximate solution of the optimal nonlinear filter for data assimilation in meteorology and oceanography. Mon. Wea. Rev.136, 317–334.

45. Houtekamer, P. L. and Mitchell, H. L.1998. Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev.126, 796–811.

46. Houtekamer, P. L. and Mitchell, H. L.2001. A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev.129, 123–137.

47. Houtekamer, P. L. and Zhang, F.2016. Review of the ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev.144, 4489–4532.

48. Hunt, B. R., Kostelich, E. J. and Szunyogh, I.2007. Efficient data assimilation for spatiotemporal chaos: a local ensemble transform Kalman filter. Physica D230, 112–126.

49. Ide, K., Courtier, P., Ghil, M. and Lorenc, A. C.1997. Unified notation for data assimilation: operational, sequential and variational. J. Meteor. Soc. Jpn.75, 181–189.

50. Janjić, T., Nerger, L., Albertella, A., Schroeter, J. and Skachko, S.2011. On domain localization in ensemble-based Kalman filter algorithms. Mon. Wea. Rev.139, 2046–2060.

51. Jazwinski, A. H.1970. Stochastic Processes and Filtering Theory. Academic Press, New York.

52. Kalman, R. E.1960. A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng.82(D), 35–45.

53. Kepert, J. D.2006. Localisation, balance and choice of analysis variable in an ensemble Kalman filter. Q. J. R. Meteorol. Soc.135(642), 1157–1176.

54. Kepert, J. D.2009. Covariance localisation and balance in an ensemble Kalman filter. Q. J. R. Meteorol. Soc.135, 1157–1176.

55. Keppenne, C. L. and Rienecker, M. M.2002. Initial testing of a massively parallel ensemble Kalman filter with the Poseidon isopycnal ocean circulation model. Mon. Wea. Rev.130, 2951–2965.

56. Keppenne, C. L. and Rienecker, M. M.2003. Assimilation of temperature into an isopycnal ocean general circulation model using a parallel ensemble Kalman filter. J. Mar. Syst.40–41, 363–380.

57. Kirchgessner, P., Nerger, L. and Bunse-Gerstner, A.2014. On the choice of an optimal localization radius in ensemble Kalman filter methods. Mon. Wea. Rev.142, 2165–2175.

58. Law, K. J. H. and Stuart, A. M.2012. Evaluating data assimilation algorithms. Mon. Wea. Rev.140, 3757–3782.

59. Lawson, W. G. and Hansen, J. A.2004. Implications of stochastic and deterministic filters as ensemble-based data assimilation methods in varying regimes of error growth. Mon. Wea. Rev.132, 1966–1989.

60. Lei, J. and Bickel, P.2011. A moment matching ensemble filter for nonlinear non-Gaussian data assimilation. Mon. Wea. Rev.139, 3964–3973.

61. Lei, L. and Anderson, J.2014. Empirical localization of observations for serial ensemble Kalman filter data assimilation in an atmospheric general circulation model. Mon. Wea. Rev.142, 1835–1851.

63. Lermusiaux, P. F. J. and Robinson, A. R.1999. Data assimilation via error subspaces statistical estimation, part I: theory and schemes. Mon. Wea. Rev.127, 1385–1407.

64. Lermusiaux, P. F. J., Robinson, A. R., Haley, P. J. and Leslie, W. G.2002. Advanced interdisciplinary data assimilation: filtering and smoothing via error subspace statistical estimation. Proceedings of “The OCEANS 2002 MTS/IEEE conference, Holland”, Vol. 230, pp. 795–802.

65. Livings, D.2005. Aspects of the ensemble Kalman filter [Master’s thesis]. Department of Mathematics, University of Reading, UK.

66. Livings, D., Dance, S. L. and Nichols, N. K.2008. Unbiased ensemble square root filters. Physica D237, 1021–1028.

67. Lorenc, A. C.2003. The potential of the ensemble Kalman filter for NWP - a comparison with 4D-Var. Q. J. R. Meteorol. Soc.129, 3183–3203.

68. Metref, S., Cosme, E., Snyder, C. and Brasseur, P.2014a. A non-Gaussian analysis scheme using rank histograms for ensemble data assimilation. Nonlin. Processes Geophys.21, 869–885.

69. Metref, S., Cosme, E., Snyder, C. and Brasseur, P.2014b. A non-Gaussian analysis scheme using rank histogram for ensemble data assimilation. Nonlinear Proc. Geophys.21, 869–885.

70. Minamide, M. and Zhang, F.2017. Adaptive observation error inflation for assimilating all-sky satellite radiance. Mon. Wea. Rev.145, 1063–1081.

71. Miyoshi, T. and Yamane, S.2007. Local ensemble transform Kalman filter with an AGCM at a T159/L48 resolution. Mon. Wea. Rev.135, 3841–3861.

72. Morzfeld, M., Tu, X., Atkins, E. and Chorin, A. J.2012. A random map implementation of implicit filters. J. Comput. Phys.231, 2049–2066.

73. Morzfeld, M., Hodyss, D. and Snyder, C.2017. What the collapse of the ensemble Kalman filter tells us about particle filters. Tellus A69, 1–14.

74. Nakano, S., Ueno, G. and Higuchi, T.2007. Merging particle filter for sequential data assimilation. Nonlinear Process. Geophys.14, 395–408.

75. Nerger, L.2004. Parallel Filter Algorithms for Data Assimilation in Oceanography [PhD thesis]. University of Bremen, Germany. Number 487 in Reports on Polar and Marine Research, Alfred Wegener Institute for Polar and Marine Research, Bremerhaven, Germany.

76. Nerger, L.2015. On serial observation processing on localized ensemble Kalman filters. Mon. Wea. Rev.143, 1554–1567.

77. Nerger, L. and Hiller, W.2013. Software for ensemble-based data assimilation systems - implementation strategies and scalability. Comput. Geosci.55, 110–118.

78. Nerger, L., Hiller, W. and Schröter, J.2005a. PDAF -- the parallel data assimilation framework: experiences with Kalman filtering. In: Use of High Performance Computing in Meteorology: Proceedings of the Eleventh ECMWF Workshop on the Use of High Performance Computing in Meteorology, Reading, UK, 25--29 October 2004 (eds. W.Zwieflhofer and G.Mozdzynski). World Scientific, Singapore, pp. 63–83.

79. Nerger, L., Hiller, W. and Schröter, J.2005b. A comparison of error subspace Kalman filters. Tellus57A, 715–735.

80. Nerger, L., Danilov, S., Hiller, W. and Schröter, J.2006. Using sea level data to constrain a finite-element primitive-equation ocean model with a local SEIK filter. Ocean Dyn.56, 634–649.

81. Nerger, L., Janjić, T., Schroeter, J. and Hiller, W.2012a. A unification of ensemble square root filters. Mon. Wea. Rev.140, 2335–2345.

82. Nerger, L., Janjić, T., Schröter, J. and Hiller, W.2012b. A regulated localization scheme for ensemble-based Kalman filters. Q. J. Roy. Meteor. Soc.138, 802–812.

83. Ott, E., Hunt, B. R., Szunyogh, I., Zimin, A. V., Kostelich, E. J., and co-authors.2004. A local ensemble Kalman filter for atmospheric data assimilation. Tellus56A, 415–428.

84. Patil, D. J., Hunt, B. R., Kalnay, E., Yorke, J. A. and Ott, E.2001. Local low dimensionality of atmospheric dynamics. Phys. Rev. Lett.86(26), 5878–5881.

85. Penny, S. and Miyoshi, T.2016. A local particle filter for high-dimensional geophysical systems. Nonlinear Processes Geophys.23, 391–405.

86. Perianez, A., Reich, H. and Potthast, R.2014. Optimal localization for ensemble Kalman filter systems. J. Meteorol. Soc. Jpn.92, 585–597.

87. Petrie, R. E. and Dance, S. L.2010. Ensemble-based data assimilation and the localisation problem. Weather65(3), 65–69.

88. Pham, D. T.2001. Stochastic methods for sequential data assimilation in strongly nonlinear systems. Mon. Wea. Rev.129, 1194–1207.

89. Pham, D. T., Verron, J. and Gourdeau, L.1998a. Singular evolutive Kalman filters for data assimilation in oceanography. C. R. Acad. Sci. Ser. II326(4), 255–260.

90. Pham, D. T., Verron, J. and Roubaud, M. C.1998b. A singular evolutive extended Kalman filter for data assimilation in oceanography. J. Mar. Syst.16, 323–340.

91. Poterjoy, J.2016a. A localized particle filter for high-dimensional nonlinear systems. Mon. Wea. Rev.144, 59–76.

92. Poterjoy, J. and Anderson, J. L.2016b. Efficient assimilation of simulated observations in a high-dimensional geophysical system using a localized particle filter. Mon. Wea. Rev.144, 2007–2020.

93. Reich, S.2013. A nonparametric ensemble transform method for Bayesian inference. SIAM J. Sci. Comput.35(4), 2013–2024.

94. Sakov, P. and Oke, P. R.2008. Implications of the form of the ensemble transformation in the ensemble square root filters. Mon. Wea. Rev.136, 1042–1053.

95. Sakov, P., Counillon, F., Bertino, L., Lisaeter, K. A., Oke, P. R. and co-authors.2012. TOPAZ4: an ocean-sea ice data assimilation system for the North Atlantic and Arctic. Ocean Sci.8, 633–656.

96. Silverman, B. W.1986. Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York.

97. Snyder, C., Bengtsson, T., Bickel, P. and Anderson, J.2008. Obstacles to high-dimensional particle filtering. Mon. Wea. Rev.136, 4629–4640.

98. Snyder, C., Bengtsson, T. and Morzfeld, M.2015. Performance bounds for particle filters using the optimal proposal. Mon. Wea. Rev.143(11), 4750–4761.

99. Stordal, A. S., Karlsen, H. A., Nævdal, G., Skaug, H. J. and Valles, B.2011. Bridging the ensemble Kalman filter and particle filters: the adaptive Gaussian mixture filter. Comput. Geosci.15, 293–305.

100. Testut, C.-E., Brasseur, P., Brankart, J.-M. and Verron, J.2003. Assimilation of sea-surface temperature and altimetric observations during 1992–1993 into an eddy-permitting primitive equation model of the North Atlantic ocean. J. Mar. Sys.40–41, 291–316.

101. Tippett, M. K., Anderson, J. L., Bishop, C. H., Hamill, T. M. and Whitaker, J. S.2003. Ensemble square root filters. Mon. Wea. Rev.131, 1485–1490.

102. Tödter, J. and Ahrens, B.2015. A second-order exact ensemble square root filter for nonlinear data assimilation. Mon. Wea. Rev.143(4), 1347–1367.

103. Tödter, J., Kirchgessner, P., Nerger, L. and Ahrens, B.2016. Assessment of a nonlinear ensemble transform filter for high-dimensional data assimilation. Mon. Wea. Rev.144, 409–427.

104. Tong, X. T., Majda, A. J. and Kelly, D.2016. Nonlinear stability and ergodicity of ensemble based Kalman filters. Nonlinearity29, 657–691.

105. van Leeuwen, P. J.2003a. A variance-minimizing filter for nonlinear dynamics. Mon. Wea. Rev.131, 2071–2084.

106. van Leeuwen, P. J.2003b. Nonlinear ensemble data assimilation for the ocean. Recent Developments in data assimilation for atmosphere and ocean, ECMWF Seminar 8--12 September 2003, Reading, United Kingdom, pp. 265–286.

107. van Leeuwen, P. J.2009. Particle filtering in geophysical systems. Mon. Wea. Rev.137, 4089–4114.

108. van Leeuwen, P. J.2010. Nonlinear data assimilation in Geosciences: an extremely efficient particle filter. Q. J. R. Meteorol. Soc.136, 1991–1999.

109. van Leeuwen, P. J.2011. Efficient nonlinear data-assimilation in geophysical fluid dynamics. Comput. Fluids46, 52–58.

110. van Leeuwen, P. J., Cheng, Y. and Reich, S.2015. Nonlinear Data Assimilation. Frontiers in Applied Dynamical Systems: Reviews and Tutorials 2. Springer.

111. van Leeuwen, P. J. and Evensen, G.1996. Data assimilation and inverse methods in terms of a probabilistic formulation. Mon. Wea. Rev.124, 2898–2913.

112. Verlaan, M. and Heemink, A. W.2001. Nonlinearity in data assimilation applications: a practical method for analysis. Mon. Wea. Rev.129(6), 1578–1589.

113. Vossepoel, F. C. and Van Leeuwen, P. J.2006. Parameter estimation using a particle method: inferring mixing coefficients from sea-level observations. Mon. Wea. Rev.135, 1006–1020.

114. Wang, Y., Jung, Y., Supinie, T. A. and Xue, M.2013. A hybrid MPI-OpenMP parallel algorithm and performance analysis for an ensemble square root filter designed for multiscale observations. J. Atm. and Oce. Tech.30, 1382–1397.

115. Whitaker, J. S. and Hamill, T. M.2002. Ensemble data assimilation without perturbed observations. Mon. Wea. Rev.130, 1913–1924.

116. Wikle, C. K. and Berliner, L. M.2006. A Bayesian tutorial for data assimilation. Physica D230(1), 1–16.

117. Xiong, X., Navon, I. M. and Uzunoglu, B.2006. A note on the particle filter with posterior Gaussian resampling. Tellus A58(4), 456–460.

118. Zhen, Y. and Zhang, F.2014. A probabilistic approach to adaptive covariance localization for serial ensemble square root filters. Mon. Wea. Rev.142, 4499–4518.

119. Zhu, M., van Leeuwen, P. J. and Amezcua, J.2016. Implicit equal-weights particle filter. Q. J. R. Meteorol. Soc.

120. Zupanski, M.2005. Maximum likelihood ensemble filter: Theoretical aspects. Mon. Wea. Rev.133, 1710–1726.

121. Zupanski, M., Michael, N. I. and Zupanski, D.2008. The maximum likelihood ensemble filter as a non-differentiable minimization algorithm. Q. J. R. Meteorol. Soc.134, 1039–1050.

Appendix 1

Resampling methods

In this section, we give descriptions of a number of resampling techniques that can be applied to the particle filter and Gaussian mixture filter methods to turn weighted particles into equal-weight particles. The resampling techniques included here are probabilistic resampling, stochastic universal resampling and residual resampling. They are by no means exhaustive, however, and other techniques could be used.

Probabilistic resampling (PR)

The probabilistic resampling or the basic random resampling is the most straightforward to implement as we sample directly from the density given by the weights.

Given the weights ${\left\{{w}_{j}\right\}}_{j=1}^{{N}_{e}}$ associated with the ensemble of particles, where the sum of weights is equal to one, the total number of particles ${N}_{e}$ and the number of particles to be generated ${\stackrel{~}{N}}_{e}$, we generate an index of the sampled particles using the algorithm below.

The required input for the PR is: $w\in {\mathcal{R}}^{{N}_{e}}$ a vector of particle weights, ${N}_{e}$ the total number of particles in the filter, and ${\stackrel{~}{N}}_{e}$ the number of particles to be sampled and the method returns an index $I\in {\mathcal{R}}^{{\stackrel{~}{N}}_{e}}$, which can then be used to select the sampled particles ${\mathsf{x}}_{j}^{\ast }={\mathsf{x}}_{I\left(j\right)}$ for $j=1:{\stackrel{~}{N}}_{e}$.

Note that this scheme introduces sampling noise by drawing ${\stackrel{~}{N}}_{e}$ times from a uniform distribution.
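As an illustration only (the Sangoma implementations are in Fortran, and the function name and interface here are ours), probabilistic resampling can be sketched in NumPy by inverting the cumulative distribution of the weights:

```python
import numpy as np

def probabilistic_resampling(w, N_tilde, rng=None):
    """Draw N_tilde particle indices from the discrete density given
    by the weights w (a vector of N_e normalised weights summing to one).
    The returned index array I selects the resampled particles x[:, I]."""
    rng = np.random.default_rng() if rng is None else rng
    c = np.cumsum(w)
    c[-1] = 1.0  # guard against round-off in the cumulative sum
    # N_tilde independent draws from U(0, 1), each mapped through the
    # cumulative weights: this is the sampling-noise source noted above.
    u = rng.uniform(0.0, 1.0, size=N_tilde)
    return np.searchsorted(c, u)
```

Each uniform draw selects the particle whose cumulative-weight interval contains it, so particles with large weights are replicated with high probability.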

Stochastic universal resampling (SUR)

Stochastic universal resampling is also known as systematic resampling. It performs resampling in the same way as the basic random resampling algorithm, except that instead of drawing each ${u}_{j}$ independently from $\mathcal{U}\left(0,1\right)$ for $j=1,\dots ,{N}_{e}$, it draws a single uniform random number $u\sim \mathcal{U}\left[0,1/{N}_{e}\right]$ and sets ${u}_{j}=u+\left(j-1\right)/{N}_{e}$ (Bolić et al., 2003).

Given the weights ${\left\{{w}_{j}\right\}}_{j=1}^{{N}_{e}}$ associated with the ensemble of particles, where the sum of weights is equal to one, the total number of particles ${N}_{e}$ and the number of particles to be generated ${\stackrel{~}{N}}_{e}$, we generate an index of the sampled particles using the algorithm below.

The required input for the SUR is: $w\in {\mathcal{R}}^{{N}_{e}}$ a vector of particle weights, ${N}_{e}$ the total number of particles in the filter, and ${\stackrel{~}{N}}_{e}$ the number of particles to be sampled and the method returns an index $I\in {\mathcal{R}}^{{\stackrel{~}{N}}_{e}}$ which can then be used to select the sampled particles ${\mathsf{x}}_{j}^{\ast }={\mathsf{x}}_{I\left(j\right)}$ for $j=1:{\stackrel{~}{N}}_{e}$.

Note that this method has lower sampling noise than probabilistic resampling, since only one random variable is drawn.
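A minimal NumPy sketch of SUR, again with our own function name and written for the common case ${\stackrel{~}{N}}_{e}={N}_{e}$ so that the pointer spacing $1/{N}_{e}$ in the text equals $1/{\stackrel{~}{N}}_{e}$ below:

```python
import numpy as np

def stochastic_universal_resampling(w, N_tilde, rng=None):
    """Systematic resampling: one random offset, then N_tilde equally
    spaced pointers swept through the cumulative weights."""
    rng = np.random.default_rng() if rng is None else rng
    c = np.cumsum(w)
    c[-1] = 1.0  # guard against round-off
    # single draw u ~ U[0, 1/N_tilde); pointers u_j = u + (j - 1)/N_tilde
    u = rng.uniform(0.0, 1.0 / N_tilde) + np.arange(N_tilde) / N_tilde
    return np.searchsorted(c, u)
```

Because only the offset is random, the number of copies of each particle can deviate from its expected value by at most one, which is the reduced sampling noise noted above.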

Residual resampling (RR)

The RR algorithm samples the particles in two parts. In the first part, the number of replications of each particle is calculated; since this does not guarantee that the number of resampled particles equals ${\stackrel{~}{N}}_{e}$, the residual ${N}_{r}$ is computed. The second step requires resampling, which produces the remaining ${N}_{r}$ of the final ${\stackrel{~}{N}}_{e}$ particles. In Algorithm  this is done by PR, but other resampling techniques can be used.

The required input for the RR is: $w\in {\mathcal{R}}^{{N}_{e}}$ a vector of particle weights, ${N}_{e}$ the total number of particles in the filter, and ${\stackrel{~}{N}}_{e}$ the number of particles to be sampled; the method returns an index $I\in {\mathcal{R}}^{{\stackrel{~}{N}}_{e}}$, which can then be used to select the sampled particles ${\mathsf{x}}_{j}^{\ast }={\mathsf{x}}_{I\left(j\right)}$ for $j=1:{\stackrel{~}{N}}_{e}$. Note that we use the PR method to obtain an array $IR\in {\mathcal{R}}^{{N}_{r}}$ with the indices of the additionally sampled particles, which are then stored in the remaining empty cells of the index array $I$.

Note that this method reduces the sampling noise, but not as much as the SUR method.
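The two-part structure can be sketched as follows; this is an illustrative NumPy version (function name ours), with the residual step done by probabilistic resampling as in the text:

```python
import numpy as np

def residual_resampling(w, N_tilde, rng=None):
    """Deterministic replication first, then probabilistic resampling
    of the residual weights for the remaining N_r particles."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w, dtype=float)
    # Part 1: particle j is replicated floor(N_tilde * w_j) times.
    n_copies = np.floor(N_tilde * w).astype(int)
    I = np.repeat(np.arange(w.size), n_copies)
    # Part 2: the residual N_r particles are drawn probabilistically
    # from the leftover (renormalised) weights.
    N_r = N_tilde - n_copies.sum()
    if N_r > 0:
        w_res = N_tilde * w - n_copies
        c = np.cumsum(w_res / w_res.sum())
        c[-1] = 1.0  # guard against round-off
        IR = np.searchsorted(c, rng.uniform(0.0, 1.0, size=N_r))
        I = np.concatenate([I, IR])
    return I
```

Only the residual draw is random, which is why the sampling noise lies between that of PR and SUR.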

Appendix 2

Filter algorithms for practical implementation

This Appendix contains practical pseudo-algorithms for all the ensemble filter methods presented in Sections 5–7. To discuss the computational cost of each method in Section 9.4 we used the algorithms presented in this appendix, because for some filter methods they are more computationally efficient or numerically stable than the mathematically elegant versions given in Section 5. The algorithms are written in the way one would implement them efficiently in Fortran. For compactness, the algorithms do not show that the final step of the ensemble filters can usually be written in a blocked form, so that only the allocation of one large ensemble array $\mathsf{X}$ is required. This is different for the MLEF, where two arrays of size ${N}_{x}×{N}_{e}$ are required. If indices are given for matrices, the notation follows Fortran in that the first index defines the row, while the second index specifies the column.

Note that in all the algorithms that follow in this appendix, any variable with a number subscript is a temporary variable used to reduce the computational time and storage space needed by the algorithm. Further, for ease of reading these algorithms, we use the abbreviations SVD for singular value decomposition and EVD for eigenvalue decomposition. The values in the right column of each algorithm give the dimension of the resulting array, which helps to determine the computational cost of the operations.

Below, we use $\mathcal{H}\left({\mathsf{X}}^{f}\right)$ as shorthand for applying the possibly non-linear observation operator $\mathcal{H}$ individually to each ensemble state in ${\mathsf{X}}^{f}$.
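For concreteness, one possible reading of this shorthand (the function name is ours, not part of the paper's algorithms) is a column-by-column evaluation:

```python
import numpy as np

def apply_obs_operator(H, Xf):
    """Evaluate H(Xf) column by column: each column of the N_x x N_e
    array Xf is one ensemble state, and H may be an arbitrary
    non-linear function mapping a state to observation space."""
    return np.column_stack([H(Xf[:, j]) for j in range(Xf.shape[1])])
```

The result is an ${N}_{y}×{N}_{e}$ array of observed ensemble states, as used in the analysis steps of the algorithms.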