
# Rapid update cycling with delayed observations

## Abstract

In this paper we examine the fundamental issues associated with the cycling of data assimilation and prediction in the case where observations are received after a delay, but we seek to assimilate them immediately on receipt, or within a short time of receipt. We obtain the optimal solution to this problem in the linear and non-linear cases, and explore its relation to simplified strategies which are adaptations of contemporary methods for large-scale data assimilation. We also discuss the challenges facing such cycling in large-scale numerical weather prediction.

How to Cite: Payne, T.J., 2017. Rapid update cycling with delayed observations. Tellus A: Dynamic Meteorology and Oceanography, 69(1), p.1409061. DOI: http://doi.org/10.1080/16000870.2017.1409061

Submitted on 10 Aug 2017; accepted on 14 Nov 2017; published on 01 Jan 2017.
## 1. Introduction

In the traditional cycling of forecasts and data assimilation (DA) for numerical weather prediction, the DA step for the global model has occurred every 6 or 12 hours. This was appropriate for an era when data was concentrated at the main synoptic times, and the limited area models (LAMs) for which the global model provides boundary conditions were cycled every 6 hours. However, in recent years data has become dominated by sources which are essentially continuous in time, and centres such as the Met Office will soon cycle their highest resolution LAM every hour.

By increasing the frequency of global analyses (e.g. to every hour) global forecasts can be based on more recent data, which is not only desirable in itself but provides timely lateral boundary conditions (LBCs) for high resolution LAMs. In one study of the Met Office’s 1.5 km LAM covering the British Isles (Tang et al., 2013) it was found that replacing 3-hour and 6-hour old LBCs by 3-hour and fresh LBCs improved the UK index (a basket of scores measuring forecast skill) by 1.5% (Bruce Macpherson, pers. comm.). Furthermore, by having more frequent analyses the analysis increments will be smaller, which will improve the validity of the linear approximations in DA schemes. More frequent analyses may also improve the affordability of DA methods as the computational load is distributed more evenly in time.

We will see that, because of the delay in receiving some data, to ensure that all the data which are received are also assimilated, the assimilation windows will need to overlap. We obtain the optimal solution to this problem, which involves manipulating simultaneously all the states in the window and their joint errors. We explore the relation between the optimal solution and simplified methods which are closer to current methods for large-scale DA.

We show that the current use of largely climatological prior error covariances may pose a challenge for high frequency cycling, and discuss how this may be overcome.

## 2. ‘Traditional’ vs. ‘Rapid Update’ cycling

An immediate issue is that observations are not received instantaneously. For example, by 09Z on 18 June 2015 the Met Office had received over 80 million observations valid between 09Z on 15 June and 03Z on 18 June 2015, including around 0.8 million surface, 2.1 million aircraft and sonde, 12.4 million satwind and 14 million ATOVS observations. The delay between validity time and receipt for these observation types is recorded in Fig. 1. We see that receiving 95% of the aircraft and sonde, surface, ATOVS and satwind observations took respectively 0.6, 1.5, 3.4 and 4.1 hours.

Figure 1.

Delay in receiving various observation types in the Met Office observation processing system, for observations valid between 09Z on 15 June 2015 and 03Z on 18 June 2015.

This presents a quandary for traditional cycling which aims to produce an analysis every 6 (at some centres every 12) hours. For definiteness consider 4D-Var (e.g. Li and Navon, 2001) with a 6-hour window [T-3,T+3], which generates an analysis at T-3 (in this discussion the units are hours).

One could perform the analysis at T+3 using all observations available by T+3, which would minimise the time delay to produce the analysis, but observations received after T+3 would not be assimilated. Alternatively, one could perform the analysis at T+7, by which time (in view of Fig. 1) almost all the observations valid in the window have been received, but the analysis is only available 4 hours after the end of the window and 10 hours after its beginning. To generate an estimate of the state at T+7 we could run a 10-hour forecast from the analysis, but compared with the estimate of the state at T-3 this will be degraded by model error.

Centres such as the Met Office mitigate these issues by performing each analysis twice, a ‘late cut-off’ analysis currently at about T+6, and an ‘early cut-off’ analysis at about T+3. This is illustrated somewhat schematically in Fig. 2, which shows two adjacent non-overlapping windows [3Z,9Z) and [9Z,15Z) (where [${T}_{1}$,${T}_{2}$) denotes the time interval ${T}_{1}\le t<{T}_{2}$). For example, at 8Z we receive observations valid between 4Z and 8Z. Considering the window [3Z,9Z), for the ‘early cut-off’ run we perform the data assimilation at 9Z and use the observations in the dark blue region, and for the ‘late cut-off’ run perform it at about 12Z and use virtually all observations ever valid in the window (combined light and dark blue region).

Figure 2.

Cycling with non-overlapping windows as used at the Met Office. Observations in the dark blue (dark green) region are assimilated for an ‘early cut-off’ run at 9Z (15Z). The extra observations in the lightly shaded regions are assimilated for ‘late cut-off’ runs at 12Z and 18Z.

In principle the late cut-off analysis could make use of the early cut-off one to reduce its work load, as is done in the ‘quasi-continuous’ approach of Järvinen et al. (1996) and Veersé and Thépaut (1998), but at the Met Office the late cut-off analyses start again from scratch, making no use of the work done for the early cut-off analysis.

Having both early and late cut-off analyses goes some way to mitigating the shortcomings of 6-hourly cycling. However, the analyses are still 6 hours apart, which makes them insufficiently timely for some purposes, notably (considering the comments in Section 1) the LBCs for hourly LAM analyses; the analysis increment is much larger than would be the case with an hourly update, so nonlinearity can be a significant problem, especially for the linear model in 4D-Var; and the approach is inefficient insofar as the early cut-off analyses are not used as part of a cycle.

In Fig. 3 we illustrate how we would like to deal with the same case: each hour we assimilate all observations received in the last hour, e.g. at 12Z we assimilate the observations received between 11Z and 12Z (green region); these are valid between 7Z and 12Z. In principle we do not re-assimilate the observations valid between 7Z and 12Z received at earlier times (blue, red, yellow and cyan regions in Fig. 3), as the information from these observations has been transferred to previous analyses and thereby to the background for this cycle.

Figure 3.

Rapid update cycling with overlapping windows. Observations are assimilated within one hour of receipt.

In the context of global NWP, which provides among other outputs LBCs for hourly cycling of LAMs, an hourly update cycle is natural, but in principle the updates could be as short as one model time step. For the purposes of this paper we will refer to any cycling where observations are assimilated as soon as they are received, or as in Fig. 3 within some short time of receipt, as rapid update cycling (RUC).

We should note that the term ‘Rapid Update Cycle’ has been employed in the past to denote specific rapidly cycled NWP systems, for example by the National Centers for Environmental Prediction in the USA. In that case it referred to an operational regional forecast-analysis system over North America, where data was assimilated by 3D-Var (originally optimal interpolation) using non-overlapping windows of length one hour (Benjamin et al., 2004b; Benjamin et al., 2004a).

## 3. Optimal RUC

To examine the rapid update cycling problem further we will idealise it slightly by supposing that observations are valid at exact multiples of a time increment $\mathit{\delta }t$ (as opposed to continuously in time), and become available after delays of $0,\mathit{\delta }t,2\mathit{\delta }t,\dots ,N\mathit{\delta }t$. We will suppose that observations received at time $k\mathit{\delta }t$ are

((1) )
$\begin{array}{c}\hfill {\mathbf{y}}_{k}^{\left(k\right)},{\mathbf{y}}_{k-1}^{\left(k\right)},{\mathbf{y}}_{k-2}^{\left(k\right)},\dots ,{\mathbf{y}}_{k-N}^{\left(k\right)}\end{array}$

Superscripts denote when the observations are received and subscripts their validity time, the longest delay being $N\mathit{\delta }t$.
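The indexing convention in (1) can be made concrete with a small bookkeeping sketch. This is purely illustrative: it assumes a hypothetical stream in which every validity time produces exactly one observation per possible delay $0,\dots ,N$.

```python
N = 3                     # longest delay, in units of dt
T = 12                    # number of validity times simulated

# Hypothetical stream: each validity time i produces one observation per
# delay d, which becomes available at receipt time k = i + d.
received = {k: [] for k in range(T + N + 1)}
for i in range(T):
    for d in range(N + 1):
        received[i + d].append((i, d))   # (validity time, delay)

# At receipt time k the available batch is y_{k-N}^{(k)}, ..., y_k^{(k)}:
k = 8
batch = sorted(received[k])
assert all(k - N <= i <= k for i, _ in batch)   # validity times span k-N .. k
```

Each receipt-time batch thus contains exactly one observation per validity time in the window, matching the superscript/subscript convention of (1).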

The first problem is to develop an optimal method for assimilating observations as soon as they are available. At time $k\mathit{\delta }t$ we seek to estimate ${\mathbf{x}}_{k},\dots ,{\mathbf{x}}_{k-N}$, given observations (1) and our previous estimate of ${\mathbf{x}}_{k-1},\dots ,{\mathbf{x}}_{k-N-1}$.

We will suppose that for $i=k,k-1,\dots ,k-N$ we have observation operators ${\mathbf{h}}_{i}^{\left(k\right)}$ such that

((2) )
$\begin{array}{c}\hfill {\mathbf{y}}_{i}^{\left(k\right)}={\mathbf{h}}_{i}^{\left(k\right)}\left({\mathbf{x}}_{i}\right)+{\mathbit{\nu }}_{i}^{\left(k\right)}\end{array}$

and for each i a model ${\mathbf{f}}_{i}$

((3) )
$\begin{array}{c}\hfill {\mathbf{x}}_{i+1}={\mathbf{f}}_{i}\left({\mathbf{x}}_{i}\right)+{\mathbit{\omega }}_{i}\end{array}$

where the distributions of the errors ${\mathbit{\nu }}_{i}^{\left(k\right)}$, ${\mathbit{\omega }}_{i}$ are supposed known.

### 3.1. Notation and problem formulation

The optimal method is obtained by formulating the problem in such a way that standard estimation theory can be applied.

We will use the convention that the underlined vector ${\underline{\mathbf{x}}}_{k}$ denotes the concatenation of the $N+1$ vectors $\left\{{\mathbf{x}}_{i},k-N\le i\le k\right\}$:

((4) )
$\begin{array}{c}\hfill {\underline{\mathbit{x}}}_{k}=\left(\begin{array}{c}{\mathbf{x}}_{k-N}\\ ⋮\\ {\mathbf{x}}_{k}\end{array}\right)\end{array}$

and similarly the underlined matrix ${\underline{A}}_{k}$ is formed from matrices $\left\{{A}_{i,j},k-N\le i,j\le k\right\}$ of compatible size:

${\underline{A}}_{k}=\left(\begin{array}{ccc}{A}_{k-N,k-N}& \cdots & {A}_{k-N,k}\\ ⋮& \ddots & ⋮\\ {A}_{k,k-N}& \cdots & {A}_{k,k}\end{array}\right)$

For example, if $\left\{{\mathbf{x}}_{i},k-N\le i\le k\right\}$ are vectors of length n and $\left\{{A}_{i,j},k-N\le i,j\le k\right\}$ are matrices of size $n×n$, then ${\underline{\mathbf{x}}}_{k}$, ${\underline{A}}_{k}$ are of size respectively $n\left(N+1\right)×1$ and $n\left(N+1\right)×n\left(N+1\right)$.
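As a quick check of these dimensions, a short numpy sketch (the values of n and N are illustrative):

```python
import numpy as np

n, N = 4, 2                   # state size and longest delay (illustrative)

# underline{x}_k: concatenation of the N+1 states x_{k-N}, ..., x_k
xs = [np.zeros(n) for _ in range(N + 1)]
x_blk = np.concatenate(xs)

# underline{A}_k: an (N+1) x (N+1) grid of n x n blocks
A_blk = np.block([[np.zeros((n, n)) for _ in range(N + 1)]
                  for _ in range(N + 1)])

assert x_blk.shape == (n * (N + 1),)
assert A_blk.shape == (n * (N + 1), n * (N + 1))
```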

Define ${\underline{\mathbf{y}}}_{k}$ to be the observations received at time $k\mathit{\delta }t$, so

((5) )
$\begin{array}{c}\hfill {\underline{\mathbf{y}}}_{k}=\left(\begin{array}{c}{\mathbf{y}}_{k-N}^{\left(k\right)}\\ ⋮\\ {\mathbf{y}}_{k}^{\left(k\right)}\end{array}\right)\end{array}$

We seek the conditional expectations of ${\underline{\mathbf{x}}}_{k}$ and ${\underline{\mathbf{x}}}_{k+1}$ given observations received up to time $k\mathit{\delta }t$

((6) )
$\begin{array}{c}\hfill E\left[{\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},{\underline{\mathbf{y}}}_{1},\dots ,{\underline{\mathbf{y}}}_{k}\right],\phantom{\rule{0.277778em}{0ex}}E\left[{\underline{\mathbf{x}}}_{k+1}|{\underline{\mathbf{y}}}_{0},{\underline{\mathbf{y}}}_{1},\dots ,{\underline{\mathbf{y}}}_{k}\right]\end{array}$

Note that, given ${\underline{\mathbf{x}}}_{k}$, and assuming ${\mathbit{\omega }}_{k}$ has zero mean, the best estimate of ${\underline{\mathbf{x}}}_{k+1}$ before the observations received at $\left(k+1\right)\mathit{\delta }t$ are assimilated is

((7) )
$\begin{array}{c}\hfill {\underline{\mathbf{f}}}_{k}\left({\underline{\mathbf{x}}}_{k}\right)\equiv \left(\begin{array}{c}{\mathbf{x}}_{k-N+1}\\ ⋮\\ {\mathbf{x}}_{k}\\ {\mathbf{f}}_{k}\left({\mathbf{x}}_{k}\right)\end{array}\right)\end{array}$

If we now define

((8) )
$\begin{array}{c}\hfill {\underline{\mathbit{\omega }}}_{k}=\left(\begin{array}{c}\mathbf{0}\\ ⋮\\ \mathbf{0}\\ {\mathbit{\omega }}_{k}\end{array}\right),\phantom{\rule{0.277778em}{0ex}}{\underline{\mathbit{\nu }}}_{k}=\left(\begin{array}{c}{\mathbit{\nu }}_{k-N}^{\left(k\right)}\\ ⋮\\ {\mathbit{\nu }}_{k-1}^{\left(k\right)}\\ {\mathbit{\nu }}_{k}^{\left(k\right)}\end{array}\right),\phantom{\rule{0.277778em}{0ex}}{\underline{\mathbf{h}}}_{k}\left({\underline{\mathbf{x}}}_{k}\right)=\left(\begin{array}{c}{\mathbf{h}}_{k-N}^{\left(k\right)}\left({\mathbf{x}}_{k-N}\right)\\ ⋮\\ {\mathbf{h}}_{k-1}^{\left(k\right)}\left({\mathbf{x}}_{k-1}\right)\\ {\mathbf{h}}_{k}^{\left(k\right)}\left({\mathbf{x}}_{k}\right)\end{array}\right)\end{array}$

then we may write (2) and (3) as, respectively,

((9) )
$\begin{array}{c}\hfill {\underline{\mathbf{y}}}_{k}={\underline{\mathbf{h}}}_{k}\left({\underline{\mathbf{x}}}_{k}\right)+{\underline{\mathbit{\nu }}}_{k}\\ \hfill {\underline{\mathbf{x}}}_{k+1}={\underline{\mathbf{f}}}_{k}\left({\underline{\mathbf{x}}}_{k}\right)+{\underline{\mathbit{\omega }}}_{k}\end{array}$

which are in the standard form for observation and signal map equations in estimation theory (e.g. Jazwinski, 1970).

### 3.2. Linear Gaussian case

It is illuminating to first work out the details in the simplest case, where in (2) and (3) the observation operators ${\mathbf{h}}_{i}^{\left(k\right)}$ and model ${\mathbf{f}}_{i}$ are linear and the errors ${\mathbit{\nu }}_{i}^{\left(k\right)},{\mathbit{\omega }}_{i}$ are zero-mean, Gaussian and uncorrelated.

In this case

((10) )
$\begin{array}{cc}\hfill {\mathbf{y}}_{i}^{\left(k\right)}& ={H}_{i}^{\left(k\right)}{\mathbf{x}}_{i}+{\mathbit{\nu }}_{i}^{\left(k\right)}\hfill \\ \hfill {\mathbf{x}}_{i+1}& ={M}_{i}{\mathbf{x}}_{i}+{\mathbit{\omega }}_{i}\hfill \end{array}$

where

${\mathbit{\nu }}_{i}^{\left(k\right)}\sim N\left(\mathbf{0},{R}_{i}^{\left(k\right)}\right),\phantom{\rule{0.277778em}{0ex}}{\mathbit{\omega }}_{i}\sim N\left(\mathbf{0},{Q}_{i}\right)$

for some matrices ${R}_{i}^{\left(k\right)},{Q}_{i}$ (where $\sim N\left(\mathbit{\mu },\mathrm{\Sigma }\right)$ denotes normally distributed with mean $\mathbit{\mu }$ and covariance $\mathrm{\Sigma }$), and setting

((11) )
$\begin{array}{c}\hfill {\underline{H}}_{k}=\left(\begin{array}{cccc}{H}_{k-N}^{\left(k\right)}& & & \\ & {H}_{k-N+1}^{\left(k\right)}& & \\ & & \ddots & \\ & & & {H}_{k}^{\left(k\right)}\end{array}\right),\phantom{\rule{0.277778em}{0ex}}{\underline{M}}_{k}=\left(\begin{array}{cccc}0& I& & \\ & \ddots & \ddots & \\ & & 0& I\\ & & & {M}_{k}\end{array}\right)\end{array}$

and

((12) )
$\begin{array}{c}\hfill {\underline{R}}_{k}=\left(\begin{array}{cccc}{R}_{k-N}^{\left(k\right)}& & & \\ & {R}_{k-N+1}^{\left(k\right)}& & \\ & & \ddots & \\ & & & {R}_{k}^{\left(k\right)}\end{array}\right),\phantom{\rule{0.277778em}{0ex}}{\underline{Q}}_{k}=\left(\begin{array}{cccc}0& & & \\ & \ddots & & \\ & & 0& \\ & & & {Q}_{k}\end{array}\right)\end{array}$

Equations (9) and (10) become

((13) )
$\begin{array}{cc}\hfill {\underline{\mathbf{y}}}_{k}& ={\underline{H}}_{k}{\underline{\mathbf{x}}}_{k}+{\underline{\mathbit{\nu }}}_{k}\hfill \\ \hfill {\underline{\mathbf{x}}}_{k+1}& ={\underline{M}}_{k}{\underline{\mathbf{x}}}_{k}+{\underline{\mathbit{\omega }}}_{k}\hfill \end{array}$

with

${\underline{\mathbit{\nu }}}_{k}\sim N\left(\underline{\mathbf{0}},{\underline{R}}_{k}\right),\phantom{\rule{0.277778em}{0ex}}{\underline{\mathbit{\omega }}}_{k}\sim N\left(\underline{\mathbf{0}},{\underline{Q}}_{k}\right)$

and the problem of finding the conditional expectations (6) of ${\underline{\mathbf{x}}}_{k}$ and ${\underline{\mathbf{x}}}_{k+1}$ given observations received up to time k, which may be denoted ${\underline{\stackrel{^}{\mathbf{x}}}}_{k|k}$, ${\underline{\stackrel{^}{\mathbf{x}}}}_{k+1|k}$, is solved by a standard Kalman Filter, as in Table 1.

The basic objects manipulated are whole windows of states and observations and the covariances of the errors in these objects. The symbols in Table 1 are whole-window analogues of their usual values, e.g. ${\underline{P}}_{k|k-1}$ and ${\underline{P}}_{k|k}$ are $n\left(N+1\right)×n\left(N+1\right)$ prior and posterior error covariance matrices of the estimated ${\underline{\mathbf{x}}}_{k}$. In special circumstances simplification is possible. For example, if new observations only occur in the final time slot, i.e. if the only observations which become available at k are ${\mathbf{y}}_{k}^{\left(k\right)}$ and there are no observations ${\mathbf{y}}_{k-1}^{\left(k\right)},\dots ,{\mathbf{y}}_{k-N}^{\left(k\right)}$, it is straightforward to show that (15)–(19) simplifies to a conventional Kalman smoother.

For linear ${M}_{i},{H}_{i}^{\left(k\right)}$ the algorithm (15)–(19) finds $E\left[{\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},{\underline{\mathbf{y}}}_{1},\dots ,{\underline{\mathbf{y}}}_{k}\right],\phantom{\rule{0.277778em}{0ex}}E\left[{\underline{\mathbf{x}}}_{k+1}|{\underline{\mathbf{y}}}_{0},{\underline{\mathbf{y}}}_{1},\dots ,{\underline{\mathbf{y}}}_{k}\right]$ if the errors ${\mathbit{\nu }}_{i}^{\left(k\right)},{\mathbit{\omega }}_{i}$ are zero-mean, Gaussian and uncorrelated. As noted in Anderson and Moore (1979), if we restrict attention to analysis-prediction equations of form (16) and (18) then we may drop the Gaussian assumption on ${\mathbit{\nu }}_{i}^{\left(k\right)},{\mathbit{\omega }}_{i}$, merely requiring them to be zero-mean and uncorrelated, and (15)–(19) still minimises the expected error variance.
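The whole-window filter can be sketched as a standard Kalman filter applied to the augmented system (13), using the block operators (11) and (12). The following is a minimal illustration only, not the algorithm of Table 1 as implemented operationally; the model, observation operator, covariances and the stand-in observation values are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 2, 2                  # state size, longest observation delay
nb = n * (N + 1)             # whole-window (augmented) state size

M = 0.9 * np.eye(n)          # linear model M_i (time-invariant here)
Q = 0.1 * np.eye(n)          # model-error covariance Q_i
R = 0.2 * np.eye(N + 1)      # obs-error covariance (one obs per time slot)

# Block operators of Eqs. (11)-(12): shift-plus-model, and an observation
# of every state in the window
M_blk = np.zeros((nb, nb))
M_blk[:N * n, n:] = np.eye(N * n)        # the new window keeps x_{k-N+1..k}
M_blk[N * n:, N * n:] = M                # newest state: x_{k+1} = M x_k
Q_blk = np.zeros((nb, nb))
Q_blk[N * n:, N * n:] = Q                # model error enters the last slot only
H_blk = np.zeros((N + 1, nb))
for i in range(N + 1):
    H_blk[i, i * n] = 1.0                # observe 1st component of each state

xhat = np.zeros(nb)                      # initial whole-window estimate
P = np.eye(nb)                           # and its error covariance

for k in range(1, 20):
    # prediction: whole window shifted, newest state propagated
    xhat = M_blk @ xhat
    P = M_blk @ P @ M_blk.T + Q_blk
    # analysis: assimilate the batch of observations received at time k
    y = rng.standard_normal(N + 1)       # stand-in for the received batch
    S = H_blk @ P @ H_blk.T + R
    K = P @ H_blk.T @ np.linalg.inv(S)
    xhat = xhat + K @ (y - H_blk @ xhat)
    P = (np.eye(nb) - K @ H_blk) @ P
```

The basic objects manipulated are, as in the text, whole windows of states and their $n\left(N+1\right)×n\left(N+1\right)$ error covariances.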

### 3.3. Variational equivalent of analysis step in linear Gaussian case

For large-scale data assimilation variational methods are almost universally used, so it is of interest to cast the optimal RUC analysis step (16) in variational form. Define

$\underline{\mathbit{\delta }}\equiv \left(\begin{array}{c}{\mathbit{\delta }}_{k-N}\\ {\mathbit{\delta }}_{k-N+1}\\ ⋮\\ {\mathbit{\delta }}_{k}\end{array}\right)$

((20) )
$\begin{array}{c}\hfill {J}_{b}\left(\underline{\mathbit{\delta }}\right)=\frac{1}{2}{\left(\begin{array}{c}{\mathbit{\delta }}_{k-N}\\ ⋮\\ {\mathbit{\delta }}_{k-1}\end{array}\right)}^{T}{\stackrel{^}{A}}^{-1}\left(\begin{array}{c}{\mathbit{\delta }}_{k-N}\\ ⋮\\ {\mathbit{\delta }}_{k-1}\end{array}\right)\end{array}$
((21) )
$\begin{array}{c}\hfill {J}_{o}\left(\underline{\mathbit{\delta }}\right)=\frac{1}{2}{\left({\underline{\mathbf{y}}}_{k}-{\underline{H}}_{k}\left({\underline{\stackrel{^}{\mathbf{x}}}}_{k|k-1}+\underline{\mathbit{\delta }}\right)\right)}^{T}{\underline{R}}_{k}^{-1}\left({\underline{\mathbf{y}}}_{k}-{\underline{H}}_{k}\left({\underline{\stackrel{^}{\mathbf{x}}}}_{k|k-1}+\underline{\mathbit{\delta }}\right)\right)\end{array}$
((22) )
$\begin{array}{c}\hfill {J}_{q}\left(\underline{\mathbit{\delta }}\right)=\frac{1}{2}{\left({\mathbit{\delta }}_{k}-{M}_{k-1}{\mathbit{\delta }}_{k-1}\right)}^{T}{Q}_{k-1}^{-1}\left({\mathbit{\delta }}_{k}-{M}_{k-1}{\mathbit{\delta }}_{k-1}\right)\end{array}$

where $\stackrel{^}{A}$ is the bottom right $Nn×Nn$ submatrix of ${\underline{P}}_{k-1|k-1}$. Then the analysis step (16) is equivalent to

((23) )
$\begin{array}{c}\hfill {\underline{\stackrel{^}{\mathbf{x}}}}_{k|k}={\underline{\stackrel{^}{\mathbf{x}}}}_{k|k-1}+\underline{\mathbit{\delta }}\end{array}$

where $\underline{\mathbit{\delta }}$ minimises

((24) )
$\begin{array}{c}\hfill J\left(\underline{\mathbit{\delta }}\right)={J}_{b}\left(\underline{\mathbit{\delta }}\right)+{J}_{o}\left(\underline{\mathbit{\delta }}\right)+{J}_{q}\left(\underline{\mathbit{\delta }}\right)\end{array}$

This is proved in Appendix 1. The ${J}_{b}$ term (20) constrains the N states in the intersection between the old and new windows by the inverse of the ‘background error covariance matrix’ $\stackrel{^}{A}$. This ‘big B’ of size $Nn×Nn$ is formed by taking the analysis error covariance from the previous stage and shearing off the oldest row and column.
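The equivalence between the Kalman analysis step and the minimisation of ${J}_{b}+{J}_{o}+{J}_{q}$ of (20)–(22) can be checked numerically on a small random system. This is an illustrative sketch, not the Appendix 1 proof: all matrices are randomly generated, one scalar observation of each state in the window is assumed, and since J is quadratic its minimiser is found by solving the normal equations directly.

```python
import numpy as np

rng = np.random.default_rng(42)
n, N = 2, 2                  # state size, longest observation delay
nb = n * (N + 1)             # whole-window size

def rand_spd(m):
    A = rng.standard_normal((m, m))
    return A @ A.T + m * np.eye(m)

P_prev = rand_spd(nb)                    # previous posterior covariance
xhat_prev = rng.standard_normal(nb)      # previous posterior mean
M = 0.5 * rng.standard_normal((n, n))    # model M_{k-1}
Q = rand_spd(n)                          # model-error covariance Q_{k-1}

# Block prediction to time k, using the shift-plus-model operators
M_blk = np.zeros((nb, nb))
M_blk[:N * n, n:] = np.eye(N * n)
M_blk[N * n:, N * n:] = M
Q_blk = np.zeros((nb, nb))
Q_blk[N * n:, N * n:] = Q
xb = M_blk @ xhat_prev                   # prior mean
Pb = M_blk @ P_prev @ M_blk.T + Q_blk    # prior covariance

# One scalar observation of each state in the window
H = np.zeros((N + 1, nb))
for i in range(N + 1):
    H[i, i * n:(i + 1) * n] = rng.standard_normal(n)
R = np.diag(rng.uniform(0.5, 1.5, N + 1))
y = rng.standard_normal(N + 1)
d = y - H @ xb                           # innovation

# (a) Kalman analysis step
S = H @ Pb @ H.T + R
xa_kf = xb + Pb @ H.T @ np.linalg.solve(S, d)

# (b) variational analysis: minimise J_b + J_o + J_q of Eqs. (20)-(22)
Ahat = P_prev[n:, n:]                    # bottom-right Nn x Nn of P_{k-1|k-1}
B = np.zeros((nb, nb))
B[:N * n, :N * n] = np.linalg.inv(Ahat)  # J_b acts on delta_{k-N..k-1}
D = np.zeros((n, nb))                    # D delta = delta_k - M_{k-1} delta_{k-1}
D[:, N * n:] = np.eye(n)
D[:, (N - 1) * n:N * n] = -M
Hess = B + H.T @ np.linalg.solve(R, H) + D.T @ np.linalg.solve(Q, D)
delta = np.linalg.solve(Hess, H.T @ np.linalg.solve(R, d))
xa_var = xb + delta

assert np.allclose(xa_kf, xa_var)        # the two analyses coincide
```

The check works because ${J}_{b}+{J}_{q}$ is exactly the negative log of the Gaussian prior on $\underline{\mathbit{\delta }}$ whose covariance is the predicted ${\underline{P}}_{k|k-1}$.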

### 3.4. General (nonlinear, non-Gaussian) case

We saw in Section 3.1 how the problem of assimilating data as soon as it becomes available can be cast into a standard signal model/observation model form (9) and (10), to which we can then apply well-established theory, e.g. Jazwinski (1970). In particular, given (9) and (10) we can compute (sequentially in k) the conditional pdfs $p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k}\right)$ and $p\left({\underline{\mathbf{x}}}_{k+1}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k}\right)$. The novel feature for us is that ${\underline{\mathbf{f}}}_{k}$ and ${\underline{\mathbf{h}}}_{k}$ in (9) and (10) have a special structure which leads to significant simplifications, in particular enabling us to express these pdfs in terms of the original (as opposed to block) variables.

In general, given the prior pdf $p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)$ and the conditional pdf of the observations given the state $p\left({\underline{\mathbf{y}}}_{k}|{\underline{\mathbf{x}}}_{k}\right)$, Bayes’ theorem tells us that the posterior pdf $p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k}\right)$ is

((25) )
$\begin{array}{c}\hfill p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k}\right)=\frac{p\left({\underline{\mathbf{y}}}_{k}|{\underline{\mathbf{x}}}_{k}\right)p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)}{\mathcal{N}}\end{array}$

where the normalisation $\mathcal{N}$ is

((26) )
$\begin{array}{c}\hfill \mathcal{N}={\int }_{{\underline{\mathbf{x}}}_{k}\in \mathcal{U}}\left[p\left({\underline{\mathbf{y}}}_{k}|{\underline{\mathbf{x}}}_{k}\right)p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)\right]d{\underline{\mathbf{x}}}_{k}\end{array}$

and the domain of integration $\mathcal{U}={R}^{\left(N+1\right)n}$. We will suppose the basic process satisfies the Markov property $p\left({\mathbf{x}}_{k}|{\mathbf{x}}_{k-1},{\mathbf{x}}_{k-2},\dots \right)=p\left({\mathbf{x}}_{k}|{\mathbf{x}}_{k-1}\right)$ which implies the same for underlined states $p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{x}}}_{k-1},{\underline{\mathbf{x}}}_{k-2},\dots \right)=p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{x}}}_{k-1}\right)$. Given $p\left({\underline{\mathbf{x}}}_{k-1}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)$ (the posterior pdf at $k-1$) and $p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{x}}}_{k-1}\right)$, the Chapman-Kolmogorov equation then gives us for the prior pdf $p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)$

((27) )
$\begin{array}{cc}& p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}={\int }_{{\stackrel{~}{\underline{\mathbf{x}}}}_{k-1}\in \mathcal{U}}p\left({\underline{\mathbf{x}}}_{k}|{\stackrel{~}{\underline{\mathbf{x}}}}_{k-1}\right)p\left({\stackrel{~}{\underline{\mathbf{x}}}}_{k-1}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)\mathrm{d}{\stackrel{~}{\underline{\mathbf{x}}}}_{k-1}\hfill \end{array}$

We could cycle (27) and (25) to obtain the posterior pdf for every k. However, as mentioned, there are simplifications in this case.

Since in (8) the ith sub-vector of ${\underline{\mathbf{h}}}_{k}$ depends only on ${\mathbf{x}}_{k-N+i-1}$, the conditional pdf of the observations given the state, $p\left({\underline{\mathbf{y}}}_{k}|{\underline{\mathbf{x}}}_{k}\right)$, factors into

((28) )
$\begin{array}{c}\hfill p\left({\underline{\mathbf{y}}}_{k}|{\underline{\mathbf{x}}}_{k}\right)=p\left({\mathbf{y}}_{k}^{\left(k\right)}|{\mathbf{x}}_{k}\right)×\cdots ×p\left({\mathbf{y}}_{k-N}^{\left(k\right)}|{\mathbf{x}}_{k-N}\right)\end{array}$

Additionally, the transition pdf $p\left({\underline{\mathbf{x}}}_{k}|{\stackrel{~}{\underline{\mathbf{x}}}}_{k-1}\right)$ may be written

((29) )
$\begin{array}{cc}& p\left({\underline{\mathbf{x}}}_{k}|{\stackrel{~}{\underline{\mathbf{x}}}}_{k-1}\right)\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}=p\left({\mathbf{x}}_{k}|{\stackrel{~}{\mathbf{x}}}_{k-1}\right)\mathit{\delta }\left({\mathbf{x}}_{k-1}-{\stackrel{~}{\mathbf{x}}}_{k-1}\right)\dots \mathit{\delta }\left({\mathbf{x}}_{k-N}-{\stackrel{~}{\mathbf{x}}}_{k-N}\right)\hfill \end{array}$

Combined with (27) this gives

$\begin{array}{cc}& p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)\hfill \\ \hfill & =\underset{{\stackrel{~}{\underline{\mathbf{x}}}}_{k-1}\in \mathcal{U}}{\int }p\left({\underline{\mathbf{x}}}_{k}|{\stackrel{~}{\underline{\mathbf{x}}}}_{k-1}\right)p\left(\begin{array}{c}{\stackrel{~}{\mathbf{x}}}_{k-N-1}\\ {\stackrel{~}{\mathbf{x}}}_{k-N}\\ ⋮\\ {\stackrel{~}{\mathbf{x}}}_{k-1}\end{array}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)\mathrm{d}{\stackrel{~}{\underline{\mathbf{x}}}}_{k-1}\hfill \end{array}$
((30) )
$\begin{array}{cc}& =\underset{{\stackrel{~}{\mathbf{x}}}_{k-N-1}\in {R}^{n}}{\int }p\left({\mathbf{x}}_{k}|{\mathbf{x}}_{k-1}\right)p\left(\begin{array}{c}{\stackrel{~}{\mathbf{x}}}_{k-N-1}\\ {\mathbf{x}}_{k-N}\\ ⋮\\ {\mathbf{x}}_{k-1}\end{array}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}×\mathrm{d}{\stackrel{~}{\mathbf{x}}}_{k-N-1}\hfill \\ \hfill & =p\left({\mathbf{x}}_{k}|{\mathbf{x}}_{k-1}\right)\underset{{\stackrel{~}{\mathbf{x}}}_{k-N-1}\in {R}^{n}}{\int }p\left(\begin{array}{c}{\stackrel{~}{\mathbf{x}}}_{k-N-1}\\ {\mathbf{x}}_{k-N}\\ ⋮\\ {\mathbf{x}}_{k-1}\end{array}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k-1}\right)\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}×\mathrm{d}{\stackrel{~}{\mathbf{x}}}_{k-N-1}\hfill \end{array}$

We may cycle (30) and (25), (28) to obtain the posterior pdf for every k. For example, if the observation operators ${\mathbf{h}}_{i}^{\left(k\right)}$ and model ${\mathbf{f}}_{k}$ are linear, and the errors ${\mathbit{\nu }}_{i}^{\left(k\right)}$, ${\mathbit{\omega }}_{i}$ are Gaussian, then one may check that (30), (25) and (28) imply that the posterior pdf

$p\left({\underline{\mathbf{x}}}_{k}|{\underline{\mathbf{y}}}_{0},\dots ,{\underline{\mathbf{y}}}_{k}\right)\propto {e}^{-\left({J}_{b}+{J}_{q}+{J}_{o}\right)}$

where ${J}_{b},{J}_{q},{J}_{o}$ are given by (20)–(22).
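In the general case the recursion can be realised, for example, with a particle representation of the whole-window pdf. The structural simplifications then appear directly: the prediction of (30) drops the oldest state and appends the propagated newest one, and the likelihood of (28) factorises over the window. The following toy scalar sketch is illustrative only; the model, observation operator and the stand-in observation values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
N, Np = 2, 500               # longest delay, number of particles
f = lambda x: x + 0.1 * np.sin(x)   # toy scalar nonlinear model
h = lambda x: x                      # toy observation operator
sig_q, sig_o = 0.1, 0.3              # model- and obs-error std deviations

# Each particle is a whole window (x_{k-N}, ..., x_k); here n = 1
particles = rng.standard_normal((Np, N + 1))

for k in range(10):
    # prediction, cf. Eq. (30): marginalise (drop) the oldest state and
    # append the propagated newest state plus model error
    newest = f(particles[:, -1]) + sig_q * rng.standard_normal(Np)
    particles = np.column_stack([particles[:, 1:], newest])
    # analysis: the likelihood factorises over the window, cf. Eq. (28)
    y = rng.standard_normal(N + 1)   # stand-in batch, one obs per slot
    logw = np.zeros(Np)
    for i in range(N + 1):
        logw += -0.5 * ((y[i] - h(particles[:, i])) / sig_o) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # resample from the posterior of Eq. (25)
    particles = particles[rng.choice(Np, size=Np, p=w)]
```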

## 4. Example

We illustrate some of the foregoing with a small example, in which the model is the 40-dimensional chaotic model proposed by Lorenz (1996):

((31) )
$\begin{array}{c}\hfill \frac{\mathrm{d}{x}_{i}}{\mathrm{d}t}=-{x}_{\left[i-2\right]}{x}_{\left[i-1\right]}+{x}_{\left[i-1\right]}{x}_{\left[i+1\right]}-{x}_{i}+F,\phantom{\rule{0.277778em}{0ex}}i=0,\dots ,n-1\end{array}$

with $F=8$, $n=40$, where $\left[i\right]$ denotes $i\phantom{\rule{0.277778em}{0ex}}\mathrm{mod}\phantom{\rule{0.277778em}{0ex}}n$. This system is integrated using fourth-order Runge–Kutta with a time step of 0.05/6, during which time errors grow at a rate corresponding to the order of one hour in an atmospheric system. We therefore refer to time step k as time k. The truth is obtained by integrating (31) and adding, at every time step, Gaussian model error with variance ${\mathit{\sigma }}_{q}^{2}$ to each component of x.
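The truth run described above can be sketched as follows; the initial condition (a small perturbation of the unstable equilibrium $x_i = F$) and the random seed are arbitrary choices for illustration.

```python
import numpy as np

n, F = 40, 8.0

def lorenz96(x):
    # dx_i/dt = -x_{[i-2]} x_{[i-1]} + x_{[i-1]} x_{[i+1]} - x_i + F
    return (-np.roll(x, 2) * np.roll(x, 1)
            + np.roll(x, 1) * np.roll(x, -1) - x + F)

def rk4_step(x, dt):
    # one step of classical fourth-order Runge-Kutta
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

rng = np.random.default_rng(0)
dt, sig_q = 0.05 / 6, 0.182        # one 'hour' per step; model-error sd
x = F * np.ones(n)
x[0] += 0.01                       # perturb the unstable equilibrium

truth = [x]
for _ in range(1000):
    # integrate (31) and add Gaussian model error each step
    x = rk4_step(x, dt) + sig_q * rng.standard_normal(n)
    truth.append(x)
truth = np.array(truth)
```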

We suppose that at time k eight observations have just become available:

- at points ${x}_{\left[k+1\right]}$, ${x}_{\left[k+11\right]}$, ${x}_{\left[k+21\right]}$ and ${x}_{\left[k+31\right]}$, valid at time $k-1$
- at point ${x}_{\left[k+6\right]}$, valid at time $k-2$
- at point ${x}_{\left[k+16\right]}$, valid at time $k-3$
- at point ${x}_{\left[k+26\right]}$, valid at time $k-4$
- at point ${x}_{\left[k+36\right]}$, valid at time $k-5$

In this example we suppose there are no ‘instantaneously available’ observations (i.e. available at time k and also valid at k). Each observation has Gaussian error with variance ${\mathit{\sigma }}_{o}^{2}$ where ${\mathit{\sigma }}_{o}=0.546$. Every grid point is observed every 5 time steps and the observation network repeats itself exactly every 10 time steps. The system is well-observed and for the values of ${\mathit{\sigma }}_{o},{\mathit{\sigma }}_{q}$ used here the departure from linearity is small enough that the foregoing linear theory applies well to the linearised model.
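The stated coverage property of this network can be checked directly: the eight points observed in the batch available at time k form the arithmetic sequence $k+1,k+6,\dots ,k+36$ (mod 40), so five consecutive batches visit every grid point. A small verification sketch:

```python
n = 40

def observed_points(k):
    # positions observed in the batch that becomes available at time k:
    # [k+1], [k+6], [k+11], ..., [k+36] (mod n)
    return {(k + 1 + 5 * j) % n for j in range(8)}

# every grid point is observed at least once in any 5 consecutive steps
for k in range(2 * n):
    cover = set().union(*(observed_points(k + m) for m in range(5)))
    assert cover == set(range(n))
```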

### 4.1. Non-overlapping windows with different lags

Consider first traditional non-overlapping window strategies. Suppose we have a 6-hour cycle with 6-hour windows. For the window $\left[k-6,k\right)$ we wish to assimilate observations valid in $k-6\le t<k$. For non-overlapping windows we use an optimal smoother, i.e. 4D-Var (Li and Navon, 2001) with model error correctly accounted for and correct cycling of background error covariances. For the window $\left[k-6,k\right)$ this produces analyses $\left\{{\mathbf{x}}_{k-6}^{a},\dots ,{\mathbf{x}}_{k-1}^{a}\right\}$.

As discussed in Section 2, the number of observations which are valid in the window and available for the analysis increases with the length of the interval between the end of the window and the time the analysis is performed, which we term the lag. For the present example the number of observations available at each time in the window if the analysis is performed at $k,k+2,k+4$ is shown in Table 2.

Taking for example the lag=2 case, at times k and $k+1$ the most recently available analyses are those in the window $\left[k-12,k-6\right)$ with last analysis at $k-7$, while at $k+2,\dots ,k+5$ the most recently available analysis is at $k-1$. Table 3 shows the validity time of the most recent analysis available at times $k,\dots ,k+5$ for lags of 0, 2 and 4 hours.

Fig. 4 shows the RMS forecast error (using ${\mathit{\sigma }}_{q}=0.182$ and averaged over 2000 cycles) in forecasts valid at times $t=k,k+1,\dots ,k+5$ taken from the latest available analysis using lag=0 (in black), lag=2 (in blue) and lag=4 (in green). Fig. 4 illustrates a point about non-overlapping windows made in Section 2 above: we must choose between a short lag between the observations and the time the analysis is performed, giving timely analyses but not using all the observations, and a longer lag, which uses more observations but at any given time requires longer forecasts that are more degraded by model error.

Figure 4.

RMS forecast error from most recent available analysis at times 0–5 hours into the next window following an assimilation window $\left[k-6,k\right)$, for lags 0 (black), 2 (blue) and 4 (green). Dashed lines denote RMS error at times before the analysis is performed. Also shown in red is the RMS error in the optimal RUC analysis using the data available at each time. In this example ${\mathit{\sigma }}_{q}=0.182$.

For the lag=2 and lag=4 cases we also show (dashed lines) the RMS forecast error for times between the end of the analysis window and the time the analysis is performed.

4.2.

### Optimal RUC

Since in our example the longest delay in receipt of observations is 5 h, for the optimal RUC method of Section 3 we have $N=5$. At time j this produces analyses $\left\{{\mathbf{x}}_{j-5}^{a},\dots ,{\mathbf{x}}_{j}^{a}\right\}$.

A comparison of the observation usage of non-overlapping windows and optimal RUC was illustrated in Figs. 2 and 3. In optimal RUC all observations are used, as soon as they are received.

The red curve in Fig. 4 shows the RMS error in the RUC analysis ${\mathbf{x}}_{j}^{a}$ for $j=k,\dots ,k+5$. From the foregoing we know this will always be less than the RMS error at j of any available analysis using a non-overlapping window with any lag. Note however that this error can be greater than that from lagged analyses run at a later time, e.g. in this example the lag-2 and lag-4 analyses at time k. This is because the lagged analyses use observations not available at time k.

5.

## Suboptimal methods for RUC

In Section 3 above we derived the optimal solution to the problem of assimilating data as soon as it becomes available, and saw that if the maximum delay is N and the state is described by n variables then this involves manipulating vectors of size $n\left(N+1\right)$ and their error covariances of size $n\left(N+1\right)×n\left(N+1\right)$.

If observation and model errors are uncorrelated in time then optimal data assimilation methods for non-overlapping windows only involve vectors and matrices of size n and $n×n$.

For large-scale systems manipulating vectors of size $n\left(N+1\right)$ and more particularly matrices of size $n\left(N+1\right)×n\left(N+1\right)$ may not be manageable. Furthermore, NWP centres already have methods implemented for non-overlapping windows (we will refer to these as ‘traditional methods’) and will naturally seek ways of adapting these to the RUC problem. Hence a topic of practical importance is the relationship between the optimal solution to RUC and traditional methods applied to RUC.

For simplicity we restrict attention to the case of linear forecast and observation operators where, as in Section 3.2, the errors are Gaussian and uncorrelated. We will designate the optimal solution for RUC in this case (i.e. Table 1) as Method 0. We will develop suboptimal methods for RUC based on traditional methods for non-overlapping windows, with Method 3 a ‘naive’ application of such a method to RUC, and Methods 2 and 1 adaptations of this which are progressively closer to the optimal solution. We will then examine the relation between the four methods.

5.1.

### ‘Traditional’ methods as suboptimal methods for RUC

Suppose that at time k we have prior estimates

((32) )
$\begin{array}{c}\hfill \left\{{\mathbf{x}}_{j}^{b},j=k-N,\dots ,k\right\}\end{array}$

and we have just received observations

$\left\{{\mathbf{y}}_{j}^{\left(k\right)},j=k-N,\dots ,k\right\}$

A natural extension of the 4D-Var method (e.g. Li and Navon, 2001) as used for non-overlapping windows is to form analyses at $j=k-N,\dots ,k$

((33) )
$\begin{array}{c}\hfill {\mathbf{x}}_{j}^{a}={\mathbf{x}}_{j}^{b}+{\mathbit{\delta }}_{j},j=k-N,\dots ,k\end{array}$

where

$\underline{\mathbit{\delta }}=\left(\begin{array}{c}{\mathbit{\delta }}_{k-N}\\ ⋮\\ {\mathbit{\delta }}_{k}\end{array}\right)$

minimises

((34) )
$\begin{array}{cc}\hfill J\left(\underline{\mathbit{\delta }}\right)=& \frac{1}{2}{\left({\mathbit{\delta }}_{k-N}\right)}^{T}{B}^{-1}{\mathbit{\delta }}_{k-N}\hfill \\ \hfill & +\frac{1}{2}\sum _{j=k-N}^{j=k-1}{\left({\mathbit{\delta }}_{j+1}-{M}_{j}{\mathbit{\delta }}_{j}\right)}^{T}{Q}_{j}^{-1}\left({\mathbit{\delta }}_{j+1}-{M}_{j}{\mathbit{\delta }}_{j}\right)\hfill \\ \hfill & +\frac{1}{2}\sum _{j=k-N}^{j=k}{\left({\mathbf{y}}_{j}^{\left(k\right)}-{H}_{j}^{\left(k\right)}\left({\mathbf{x}}_{j}^{b}+{\mathbit{\delta }}_{j}\right)\right)}^{T}\hfill \\ \hfill & ×{R}_{j}^{-1}\left({\mathbf{y}}_{j}^{\left(k\right)}-{H}_{j}^{\left(k\right)}\left({\mathbf{x}}_{j}^{b}+{\mathbit{\delta }}_{j}\right)\right)\hfill \end{array}$
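The minimisation of this cost is the analysis step shared by the suboptimal methods below. As a purely illustrative sketch (not an operational implementation: all operators are plain dense matrices and every name is our own), the three penalty terms can be evaluated as:

```python
import numpy as np

def cost(delta, x_b, y, H, M, B, Q, R):
    """Weak-constraint cost J(delta) over a window of N+1 times:
    background term on delta[0], model-error terms penalising
    delta[j+1] - M[j] delta[j], and observation misfit terms."""
    J = 0.5 * delta[0] @ np.linalg.solve(B, delta[0])
    for j in range(len(delta) - 1):
        r = delta[j + 1] - M[j] @ delta[j]           # model-error residual
        J += 0.5 * r @ np.linalg.solve(Q[j], r)
    for j in range(len(delta)):
        d = y[j] - H[j] @ (x_b[j] + delta[j])        # observation misfit
        J += 0.5 * d @ np.linalg.solve(R[j], d)
    return float(J)
```

In practice this quadratic would be minimised iteratively (e.g. by a conjugate-gradient method); here the function only serves to make the structure of the three terms explicit.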
B in (34) is the error covariance of ${\mathbf{x}}_{k-N}^{b}$, if this is known, or some approximation otherwise; we return to this point below. All our suboptimal methods 1–3 use (33) and (34) for the analysis. There are many different ways of forming new priors $\left\{{\mathbf{x}}_{j}^{b},j=k-N+1,\dots ,k+1\right\}$. A non-exhaustive selection of possibilities is:

Method 3: The most similar to traditional 4D-Var, in which the only state saved from the above analyses is the first one ${\mathbf{x}}_{k-N}^{a}$ (i.e. at the beginning of the window). Denoting the model evolution from $k-N$ to j by ${M}_{k-N}^{j}={M}_{j-1}\dots {M}_{k-N+1}{M}_{k-N}$ this gives us priors

((35) )
$\begin{array}{c}\hfill {\mathbf{x}}_{j}^{b}={M}_{k-N}^{j}{\mathbf{x}}_{k-N}^{a},\phantom{\rule{0.277778em}{0ex}}j=k-N+1,\dots ,k+1\end{array}$
Method 2: Slightly better is to save and use the second analysed state, i.e. at time $k-N+1$, which will be at the beginning of the next window, giving us priors
((36) )
$\begin{array}{c}\hfill {\mathbf{x}}_{j}^{b}={M}_{k-N+1}^{j}{\mathbf{x}}_{k-N+1}^{a},\phantom{\rule{0.277778em}{0ex}}j=k-N+1,\dots ,k+1\end{array}$
Method 1: Finally, we could follow the optimal solution and save and use all the analysed states, giving us priors
((37) )
$\begin{array}{c}\hfill {\mathbf{x}}_{j}^{b}=\left\{\begin{array}{cc}{\mathbf{x}}_{j}^{a}\hfill & j=k-N+1,\dots ,k\hfill \\ {M}_{k}^{k+1}{\mathbf{x}}_{k}^{a}\hfill & j=k+1\hfill \end{array}\right.\end{array}$

The number of prior states which are simply analysis states from the previous cycle is therefore 0, 1 and N for Methods 3, 2 and 1, respectively. The four RUC methods (with the optimal one of Section 3.2 labelled ‘Method 0’) are summarised in Table 4. The formation of backgrounds is shown in more detail for $N=2$ in Table 5.
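The three prior-forming rules can be sketched in a few lines of numpy (an illustration only: `x_a` holds the analyses at times $k-N,\dots ,k$, `M[j]` propagates one step from time $k-N+j$, and all names are our own):

```python
import numpy as np

def priors(method, x_a, M):
    """New priors x^b at times k-N+1,...,k+1 under Methods 1-3."""
    N = len(x_a) - 1
    if method == 3:            # keep only the first analysis, propagate it
        xb, x = [], x_a[0]
        for j in range(N + 1):
            x = M[j] @ x
            xb.append(x)
    elif method == 2:          # keep the second analysis, propagate it
        xb, x = [x_a[1]], x_a[1]
        for j in range(1, N + 1):
            x = M[j] @ x
            xb.append(x)
    elif method == 1:          # keep all analyses; propagate only the last
        xb = list(x_a[1:]) + [M[N] @ x_a[N]]
    return xb
```

For example, with scalar analyses (1, 3, 5), a doubling model and $N=2$, Method 3 returns forecasts from the first analysis only, Method 2 from the second, and Method 1 keeps the later analyses unchanged and advances only the newest.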

5.2.

### Covariance of analysis and background errors in methods 1–3

To compare the various methods we will need the covariance of their errors. We may write (33) and (34) as

((38) )
$\begin{array}{c}\hfill {\underline{\mathbf{x}}}_{k}^{a}={\underline{\mathbf{x}}}_{k}^{b}+{\underline{K}}_{k}\left({\underline{\mathbf{y}}}_{k}-{\underline{H}}_{k}{\underline{\mathbf{x}}}_{k}^{b}\right)\end{array}$

where

((39) )
$\begin{array}{cc}\hfill {\underline{H}}_{k}& =\text{diag}\left({H}_{k-N}^{\left(k\right)},\dots ,{H}_{k}^{\left(k\right)}\right)\hfill \\ \hfill {\left({\underline{U}}_{k}\right)}_{\underline{i},\underline{j}}& =\left\{\begin{array}{cc}{M}_{k-N+j-1}^{k-N+i-1}\hfill & \text{for}\phantom{\rule{0.277778em}{0ex}}1\le j\le i\le N+1\hfill \\ 0\hfill & \text{for}\phantom{\rule{0.277778em}{0ex}}j>i\hfill \end{array}\right.\hfill \end{array}$
((40) )
$\begin{array}{cc}\hfill {\underline{B}}_{k}^{v}& ={\underline{U}}_{k}\phantom{\rule{0.333333em}{0ex}}\text{diag}\left(B,{Q}_{k-N},\dots ,{Q}_{k-1}\right){\underline{U}}_{k}^{T}\hfill \end{array}$
((41) )
$\begin{array}{cc}\hfill {\underline{K}}_{k}& ={\underline{B}}_{k}^{v}{\underline{H}}_{k}^{T}{\left({\underline{H}}_{k}{\underline{B}}_{k}^{v}{\underline{H}}_{k}^{T}+\underline{R}\right)}^{-1}\hfill \end{array}$

where we use the notation ${\left({\underline{U}}_{k}\right)}_{\underline{i},\underline{j}}$ to denote the $\left(i,j\right)$-th $n×n$ submatrix of ${\underline{U}}_{k}$. Denoting the truth by ${\underline{\mathbf{x}}}_{k}^{t}$ and

${\underline{\mathbit{ϵ}}}_{k}^{b}={\underline{\mathbf{x}}}_{k}^{b}-{\underline{\mathbf{x}}}_{k}^{t},\phantom{\rule{0.277778em}{0ex}}{\underline{B}}_{k}=E\left[{\underline{\mathbit{ϵ}}}_{k}^{b}{\left({\underline{\mathbit{ϵ}}}_{k}^{b}\right)}^{T}\right],\phantom{\rule{0.277778em}{0ex}}{\underline{\mathbit{ϵ}}}_{k}^{a}={\underline{\mathbf{x}}}_{k}^{a}-{\underline{\mathbf{x}}}_{k}^{t}$

it follows that for all Methods 1–3 the analysis error covariance ${\underline{A}}_{k}$ is, from (38)

((42) )
$\begin{array}{c}\hfill {\underline{A}}_{k}\equiv E\left[{\underline{\mathbit{ϵ}}}_{k}^{a}{\left({\underline{\mathbit{ϵ}}}_{k}^{a}\right)}^{T}\right]=\left(I-{\underline{K}}_{k}{\underline{H}}_{k}\right){\underline{B}}_{k}{\left(I-{\underline{K}}_{k}{\underline{H}}_{k}\right)}^{T}+{\underline{K}}_{k}\underline{R}{\underline{K}}_{k}^{T}\end{array}$

We note this depends both on ${\underline{B}}_{k}$ and (via ${\underline{K}}_{k}$ and ${\underline{B}}_{k}^{v}$) the prescribed B.
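This dependence can be made concrete with a Joseph-form computation (a sketch with dense matrices; the function and argument names are our own): the gain is built from the prescribed covariance, while the expectation is taken with the actual background error covariance.

```python
import numpy as np

def analysis_covariance(B_true, B_v, H, R):
    """Analysis error covariance when the gain is built from a prescribed
    covariance B_v but the background errors actually have covariance
    B_true; with B_v = B_true this reduces to the optimal expression."""
    K = B_v @ H.T @ np.linalg.inv(H @ B_v @ H.T + R)   # gain uses B_v
    ImKH = np.eye(B_true.shape[0]) - K @ H
    return ImKH @ B_true @ ImKH.T + K @ R @ K.T        # Joseph form
```

For scalar $B_{\text{true}}=H=R=1$, the matched choice $B_v=1$ gives analysis variance 0.5, while any mis-specified $B_v$ gives a larger value.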

The background error covariance ${\underline{B}}_{k+1}$ depends on which method is used. For method $\ell$ we may write

((43) )
$\begin{array}{c}\hfill {\underline{\mathbf{x}}}_{k+1}^{b}={\underline{M}}_{k}^{\left(\ell \right)}{\underline{\mathbf{x}}}_{k}^{a}\end{array}$

where ${\underline{M}}_{k}^{\left(1\right)}={\underline{M}}_{k}^{\left(0\right)}={\underline{M}}_{k}$ as specified in (12) and ${\underline{M}}_{k}^{\left(2\right)},{\underline{M}}_{k}^{\left(3\right)}$ are illustrated for $N=2$ in Table 6. Since the error in ${\underline{\mathbf{x}}}_{k+1}^{b}$ using method $\ell$ is

((44) )
$\begin{array}{cc}\hfill {\underline{\mathbit{ϵ}}}_{k+1}^{b}\equiv {\underline{\mathbf{x}}}_{k+1}^{b}-{\underline{\mathbf{x}}}_{k+1}^{t}& ={\underline{M}}_{k}^{\left(\ell \right)}\left({\underline{\mathbf{x}}}_{k}^{a}-{\underline{\mathbf{x}}}_{k}^{t}\right)+{\underline{M}}_{k}^{\left(\ell \right)}{\underline{\mathbf{x}}}_{k}^{t}-{\underline{\mathbf{x}}}_{k+1}^{t}\hfill \\ \hfill & ={\underline{M}}_{k}^{\left(\ell \right)}{\underline{\mathbit{ϵ}}}_{k}^{a}+{\underline{\mathbit{ϵ}}}_{k}^{q}\hfill \end{array}$

where ${\underline{\mathbit{ϵ}}}_{k}^{q}={\underline{M}}_{k}^{\left(\ell \right)}{\underline{\mathbf{x}}}_{k}^{t}-{\underline{\mathbf{x}}}_{k+1}^{t}$, we can express ${\underline{B}}_{k+1}$ in terms of ${\underline{M}}_{k}^{\left(\ell \right)}$, ${\underline{A}}_{k}$, the model error covariance and the cross-covariance of the analysis error ${\underline{\mathbit{ϵ}}}_{k}^{a}$ and model error ${\underline{\mathbit{ϵ}}}_{k}^{q}$:

((45) )
$\begin{array}{cc}\hfill {\underline{B}}_{k+1}=& E\left[{\underline{\mathbit{ϵ}}}_{k+1}^{b}\left({\underline{\mathbit{ϵ}}}_{k+1}^{b}{\right)}^{T}\right]={\underline{M}}_{k}^{\left(\ell \right)}E\left[{\underline{\mathbit{ϵ}}}_{k}^{a}\left({\underline{\mathbit{ϵ}}}_{k}^{a}{\right)}^{T}\right]\left({\underline{M}}_{k}^{\left(\ell \right)}{\right)}^{T}\hfill \\ \hfill & +E\left[{\underline{\mathbit{ϵ}}}_{k}^{q}\left({\underline{\mathbit{ϵ}}}_{k}^{q}{\right)}^{T}\right]+{\underline{M}}_{k}^{\left(\ell \right)}E\left[{\underline{\mathbit{ϵ}}}_{k}^{a}\left({\underline{\mathbit{ϵ}}}_{k}^{q}{\right)}^{T}\right]\hfill \\ \hfill & +E\left[{\underline{\mathbit{ϵ}}}_{k}^{q}\left({\underline{\mathbit{ϵ}}}_{k}^{a}{\right)}^{T}\right]\left({\underline{M}}_{k}^{\left(\ell \right)}{\right)}^{T}\hfill \end{array}$

In order to cycle Methods 1–3 we need to specify B in (34). For the rest of this section, for the purposes of comparing the four methods, we will suppose that in Methods 1–3 we use $B={\left({\underline{B}}_{k}\right)}_{\underline{1},\underline{1}}$, which can be obtained from (45) (as above, underlined subscripts refer to submatrices, so ${\left({\underline{B}}_{k}\right)}_{\underline{1},\underline{1}}$ is the top left $n×n$ submatrix of ${\underline{B}}_{k}$). It can be shown for Methods 1 and 2 that ${\left({\underline{B}}_{k}\right)}_{\underline{1},\underline{1}}={\left({\underline{A}}_{k-1}\right)}_{\underline{2},\underline{2}}$.
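The covariance cycling of this subsection can be sketched for Methods 0 and 1, whose stacked model simply shifts the window one step and advances the newest state. We assume, as a simplification, that analysis and model errors are uncorrelated, so the cross-covariance terms vanish and model error enters only the newest block; all names below are our own.

```python
import numpy as np

def stacked_propagator(Mk, n, N):
    """n(N+1) x n(N+1) block matrix mapping the stacked state
    (x_{k-N},...,x_k) to (x_{k-N+1},...,x_k, M_k x_k)."""
    P = np.zeros((n * (N + 1), n * (N + 1)))
    for i in range(N):                         # shift blocks up by one
        P[i*n:(i+1)*n, (i+1)*n:(i+2)*n] = np.eye(n)
    P[N*n:, N*n:] = Mk                         # advance the newest state
    return P

def background_covariance(Mk, A, Q, n, N):
    """B_{k+1} = M A M^T + Q_aug, with model error covariance Q placed
    in the newest diagonal block only (uncorrelated-error assumption)."""
    P = stacked_propagator(Mk, n, N)
    Q_aug = np.zeros_like(A)
    Q_aug[N*n:, N*n:] = Q
    return P @ A @ P.T + Q_aug
```

The top left $n×n$ block of the returned matrix is then the B one would carry into the next analysis.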

5.3.

### Relation between methods 0 and 1

An important comparison is between the optimal Method 0 and suboptimal Method 1. They share the same background step (43) with the same ${\underline{M}}_{k}$. In both cases the analysis step may be written in the form (38); however, whereas in Method 0 the gain ${\underline{K}}_{k}$

${\underline{K}}_{k}={\underline{B}}_{k}{\underline{H}}_{k}^{T}{\left({\underline{H}}_{k}{\underline{B}}_{k}{\underline{H}}_{k}^{T}+{\underline{R}}_{k}\right)}^{-1}$

uses true ${\underline{B}}_{k}$, i.e. the covariance of the error in ${\underline{\mathbf{x}}}_{k}^{b}$, for Method 1 the gain ${\underline{K}}_{k}$ is

${\underline{K}}_{k}={\underline{B}}_{k}^{v}{\underline{H}}_{k}^{T}{\left({\underline{H}}_{k}{\underline{B}}_{k}^{v}{\underline{H}}_{k}^{T}+{\underline{R}}_{k}\right)}^{-1}$

where ${\underline{B}}_{k}^{v}$ is defined in (40). Because of the similarity in the structures of Methods 0 and 1 there is a simple and strong relation between their errors. If the sequences of background error covariances using Methods 0 and 1 are designated respectively ${\underline{B}}_{k}$ and ${\underline{\stackrel{~}{B}}}_{k}$, and we start from the same prior error covariance ${\underline{\stackrel{~}{B}}}_{N}={\underline{B}}_{N}$, then for all $k\ge N$ the difference ${\underline{\stackrel{~}{B}}}_{k}-{\underline{B}}_{k}$ is positive semi-definite, usually written

${\underline{B}}_{k}⪯{\underline{\stackrel{~}{B}}}_{k}$

This is proved in Appendix 2.
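The single-step mechanism behind this ordering is easy to check numerically: with $\underline{H}=I$, the Joseph-form analysis covariance is minimised, in the positive semi-definite sense, by the gain built from the true background covariance, so a gain built from any other covariance can only do worse. A small numpy check with arbitrary test matrices (the values are illustrative, not from the paper's experiments):

```python
import numpy as np

def joseph(B_true, B_gain, R):
    """Analysis error covariance with H = I when the gain is built from
    B_gain but the background errors have covariance B_true."""
    K = B_gain @ np.linalg.inv(B_gain + R)
    ImK = np.eye(len(R)) - K
    return ImK @ B_true @ ImK.T + K @ R @ K.T

rng = np.random.default_rng(0)
n = 3
S = rng.standard_normal((n, n)); B = S @ S.T + np.eye(n)    # true covariance
S = rng.standard_normal((n, n)); Bv = S @ S.T + np.eye(n)   # prescribed covariance
R = np.eye(n)

A_opt = joseph(B, B, R)     # gain from the true covariance (Method 0 analogue)
A_sub = joseph(B, Bv, R)    # gain from the prescribed one (Method 1 analogue)
# The smallest eigenvalue of A_sub - A_opt is non-negative up to round-off
gap = np.linalg.eigvalsh(A_sub - A_opt).min()
```

This is the one-step version of the cycled statement above; the appendix extends it by induction over cycles.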

5.4.

### Relation between methods in limit ${Q}_{k}\to \infty$

We have four RUC methods, the optimal and three suboptimal ones. We can cycle each as described above, for the suboptimal methods using $B={\left({\underline{B}}_{k}\right)}_{\underline{1},\underline{1}}$ (Section 5.2).

A limiting case which exhibits some of the differences between them, in particular how information is saved from previous cycles, is obtained by letting model error covariance ${Q}_{k}\to \infty$ for all k.

For simplicity suppose $\underline{H}=I$ and ${R}_{k}$ and ${Q}_{k}$ are independent of k. If $Q=\infty$ then all knowledge of the initial background state ${\mathbf{x}}_{0}^{b}$ and its error covariance is lost after N cycles for Method 0, after one cycle for Methods 1 and 2, and immediately for Method 3. In Table 7 we show the analysed state ${\underline{\mathbf{x}}}_{j+N}^{a}$ produced by the four methods for any $j\ge N$ if model error is infinite, and the corresponding background and analysis error covariances.

The optimal Method 0 retains all the observation information ever received; at time $j+N$ the estimate of state at any time between j and $j+N$ is simply the average of all the observations ever received valid at that time. At the other extreme, Method 3 ‘forgets’ all the observation information from previous cycles: at time $j+N$ the estimate of state at any time between j and $j+N$ is just the value of the observation valid at that time and received at time $j+N$. Methods 1 and 2 retain observation information from the previous cycle only at the initial time. These different behaviours are reflected in the analysis error covariances shown in Table 7.
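The limiting behaviour of Method 0 can be reproduced in a scalar sketch of our own: with a flat prior (zero precision, since $Q=\infty$ has destroyed all background information), sequential least-squares assimilation of m equally accurate observations of the same state returns their average, with error variance R/m.

```python
def assimilate_flat_prior(obs, R):
    """Sequentially assimilate scalar observations of a single state,
    each with error variance R, starting from a zero-precision prior."""
    precision, mean = 0.0, 0.0
    for y in obs:
        precision += 1.0 / R                  # each observation adds 1/R
        mean += (y - mean) / (precision * R)  # running weighted mean
    return mean, 1.0 / precision
```

For observations [1, 3, 5] with R = 2 this returns mean 3 and variance 2/3, i.e. R/m with m = 3.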

Comparing Methods 1 and 2: while in the limit $Q\to \infty$ Method 1 analyses are no better than those of Method 2, we note from Table 7 that Method 1 has better backgrounds than Method 2.

5.5.

### Relation between methods in the limit ${Q}_{k}\to 0$

Suppose that ${Q}_{k}=0$ for all k, and we are given an estimate ${\mathbf{x}}_{0}^{b}$ of ${\mathbf{x}}_{0}$ with error covariance ${B}_{0}$. Estimate the remaining states in the window $\left\{0,\dots ,N\right\}$ by

$\left\{{\mathbf{x}}_{j}^{b}={M}_{0}^{j}{\mathbf{x}}_{0}^{b},j=1,\dots ,N\right\}$

If ${Q}_{k}=0$ for all k then Methods 1–3 simplify to their ‘strong constraint’ forms, in which (34) simplifies to

((46) )
$\begin{array}{cc}& J\left({\mathbit{\delta }}_{k-N}\right)=\frac{1}{2}{\left({\mathbit{\delta }}_{k-N}\right)}^{T}{B}_{k-N}^{-1}{\mathbit{\delta }}_{k-N}\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}+\frac{1}{2}\sum _{j=k-N}^{j=k}{\left({\mathbf{y}}_{j}^{\left(k\right)}-{H}_{j}^{\left(k\right)}{M}_{k-N}^{j}\left({\mathbf{x}}_{k-N}^{b}+{\mathbit{\delta }}_{k-N}\right)\right)}^{T}\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}×{R}_{j}^{-1}\left({\mathbf{y}}_{j}^{\left(k\right)}-{H}_{j}^{\left(k\right)}{M}_{k-N}^{j}\left({\mathbf{x}}_{k-N}^{b}+{\mathbit{\delta }}_{k-N}\right)\right)\hfill \end{array}$

Crucially, in the limit $Q\to 0$ Methods 0–3 coincide, so in particular the ‘suboptimal’ Methods 1–3 all coincide with each other and are in this case optimal. This is proved in Appendix 3.

5.6.

### Comparison of methods 0–3 for the example of Section 4

We may apply linearised versions of Methods 0–3 to our non-linear chaotic example of Section 4. To mitigate the effects of linearisation error one can formulate outer-loop-style iterations for these strategies, which may be worth implementing in more non-linear systems (for the examples here they made negligible difference). Alternatively one could use the ‘best linear approximation’ (Payne, 2013).

In our example of Section 4, at time k observations have just become available which are valid at $k-1,\dots ,k-5$ so $N=5$. The optimal Method 0 and suboptimal Methods 1–3 all provide analyses ${\mathbf{x}}_{k}^{a},{\mathbf{x}}_{k-1}^{a},\dots ,{\mathbf{x}}_{k-5}^{a}$. (Since in our example no observations are instantaneously available, ${\mathbf{x}}_{k}^{a}$ is here a forecast from ${\mathbf{x}}_{k-1}^{a}$).

Figure 5.

RMS error in analysis at $k-5,\dots ,k$, various RUC strategies using all observations available at time k, ${\mathit{\sigma }}_{q}=1.82$ (upper set) and ${\mathit{\sigma }}_{q}=0.455$ (lower set).

Each strategy is cycled 10,000 times and the first hundred cycles disregarded. Figure 5 shows the RMS error in the analyses at $k-5,\dots ,k$, for the optimal Method 0 (black), and suboptimal Methods 1 (blue), 2 (green) and 3 (red), for ${\mathit{\sigma }}_{q}=1.82$ and 0.455 (upper and lower set respectively).

As expected from the foregoing, the errors $\mathcal{E}$ are ordered

$\mathcal{E}\left(\phantom{\rule{0.333333em}{0ex}}\text{Method}\phantom{\rule{0.277778em}{0ex}}0\right)<\mathcal{E}\left(\phantom{\rule{0.333333em}{0ex}}\text{Method}\phantom{\rule{0.277778em}{0ex}}1\right)<\mathcal{E}\left(\phantom{\rule{0.333333em}{0ex}}\text{Method}\phantom{\rule{0.277778em}{0ex}}2\right)<\mathcal{E}\left(\phantom{\rule{0.333333em}{0ex}}\text{Method}\phantom{\rule{0.277778em}{0ex}}3\right)$

Furthermore, the analyses and therefore their errors converge as ${\mathit{\sigma }}_{q}\to 0$.

6.

## Impact of climatological background error covariances

A significant difference between the methods of the preceding sections and those used for large scale systems is that in the latter the prior error covariance (usually denoted B) is not cycled, but is either constant or is a convex combination of a constant and an estimate of cycled B (see (47) below).

It is important to note that insofar as B is fixed it is advantageous to assimilate data as far as possible simultaneously, in larger batches, rather than to split it up and assimilate it in smaller batches. Intuitively, by assimilating many observations simultaneously the deficiencies of the fixed B are reduced.

Illustrating this point is complicated by the fact that increasing observation batch size tends to involve making other changes which themselves have an impact. If we compare cycling using non-overlapping windows with RUC then at no instant in time are the two methodologies assimilating the same observations (see Section 2). If we use 4D-Var to compare cycling with windows of length 1 (assimilating one observation every time step) with windows of length 2 (two observations every two time steps) then the latter has the advantage of covariances evolved through the window, which is a different point to the one being made.

If we compare assimilating two observations simultaneously every time step with assimilating one after the other, in the latter case we have to decide what B to use for the second observation. In Appendix 4 we show that if in a scalar system we assimilate two observations every cycle, and have a choice between assimilating them

• (a)   simultaneously using a fixed background error covariance B, or
• (b)   separately, the first with fixed background error covariance ${B}_{1}$ and the second with fixed background error covariance ${B}_{2}$,
then it is possible to choose B so that, no matter how well ${B}_{1}$ and ${B}_{2}$ are chosen, strategy (a) will always outperform strategy (b).

We may contrast this result concerning the use of fixed background covariances with the fact that, if B is chosen optimally every cycle, and ${B}_{1}$ and ${B}_{2}$ are chosen optimally every cycle, the two strategies will produce identical (and optimal) results.
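This identity is easy to verify in the scalar case (the numbers below are illustrative): two independent observations of a scalar state are assimilated (a) simultaneously, in precision (inverse-variance) form, and (b) one at a time with the background variance correctly updated between them; the analyses and their variances coincide.

```python
def kalman_update(x, B, y, R):
    """One scalar analysis step with gain K = B / (B + R)."""
    K = B / (B + R)
    return x + K * (y - x), (1.0 - K) * B

x0, B0, R = 0.0, 1.0, 0.5
y1, y2 = 1.0, 2.0

# (a) both observations at once, in precision form
prec = 1.0 / B0 + 2.0 / R
xa_sim = (x0 / B0 + (y1 + y2) / R) / prec
Aa_sim = 1.0 / prec

# (b) one at a time, cycling the variance between the two updates
x1, B1 = kalman_update(x0, B0, y1, R)
xa_seq, Aa_seq = kalman_update(x1, B1, y2, R)
```

Both give the same analysis and variance; the results can differ as soon as fixed, mis-specified variances replace the correctly cycled B0 and B1.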

This means that if B is fixed then, in this respect, RUC is at a disadvantage compared with conventional cycling, since there are now more cycles with fewer observations assimilated in each. In practice this effect may be dwarfed by the advantages of RUC discussed in Section 2. If not, the obvious remedy is to improve the cycling of the background error covariances.

As noted above, the effect is due to using a fixed B, and is removed if B is cycled properly. This is unattainable in current large-scale NWP, but centres such as the Met Office already employ a ‘hybrid’ B (Clayton et al., 2013)

((47) )
$\begin{array}{c}\hfill B={\mathit{\beta }}_{c}^{2}{B}_{c}+{\mathit{\beta }}_{e}^{2}{B}_{e}\end{array}$

where ${B}_{c}$ is fixed but ${B}_{e}$ is an estimate (from an ensemble) of the true prior error covariance, with ${\mathit{\beta }}_{c}^{2}+{\mathit{\beta }}_{e}^{2}=1$. For best performance ${\mathit{\beta }}_{e}\to 1$ as ensemble size increases, with the fixed part having no weight in the limit of an infinitely large ensemble.
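A minimal sketch of forming such a hybrid covariance (our own illustration: a raw sample covariance stands in for ${B}_{e}$, with no localisation or inflation applied, and ${\mathit{\beta }}_{c}^{2}=1-{\mathit{\beta }}_{e}^{2}$):

```python
import numpy as np

def hybrid_B(Bc, ensemble, beta_e2):
    """(1 - beta_e2) * Bc + beta_e2 * Be, where Be is the sample
    covariance of an (m, n) array of ensemble background states."""
    X = ensemble - ensemble.mean(axis=0)     # centred perturbations
    Be = X.T @ X / (len(ensemble) - 1)       # sample covariance estimate
    return (1.0 - beta_e2) * Bc + beta_e2 * Be
```

Being a convex combination of two positive semi-definite matrices, the result is itself a valid covariance.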

There are other possible ways of introducing adaptivity into B, such as the ‘ensemble-variation integrated localised’ method (Auligné et al., 2016) and the so-called variational Kalman filters (Auvinen et al., 2010). In the latter the limited-memory quasi-Newton method is used to build a low-storage approximation to the Hessian of the analysis cost function, which therefore approximates the inverse of the analysis error covariance matrix, and can also be used to evolve the covariance forward to approximate B at the next analysis time.

7.

## Concluding remarks

Rapid update cycling (RUC) is the process by which we assimilate observations into a model as soon as they become available, or more practically, within some (short) time interval $\mathit{\delta }t$. We have seen that if the greatest delay in receiving observations is $N\mathit{\delta }t$ then the optimal solution to RUC at time k involves manipulating the $\left(N+1\right)n$ vectors $\underline{\mathbf{x}}$ formed from ${\mathbf{x}}_{k-N},\dots ,{\mathbf{x}}_{k}$ and the moments of the errors in $\underline{\mathbf{x}}$, such as their $\left(N+1\right)n×\left(N+1\right)n$ error covariances.

Compared with ‘traditional’ cycling RUC makes more timely use of observations, which is particularly important for the provision of LBCs for LAMs. Another advantage of RUC is that the increments are smaller and hence linearisation error is reduced.

We have purposely concentrated on fundamental topics and avoided such practically important matters as efficiency and cost. The fact that for each analysis the observation volumes and increments are smaller, and that we always have a recent analysis available, suggests that it should be possible to reduce the cost per analysis.2 On contemporary HPCs, where increased power comes through higher numbers of processors rather than increased clock speeds, this is more important than the total cost per day.3

We may adapt ‘traditional’ methods designed for non-overlapping windows to RUC. These methods are suboptimal, but in all cases considered in this paper (Methods 1, 2 and 3 of Section 5) they coincide with the optimal solution in the limit where model error vanishes.

Assimilating observations in smaller batches can be disadvantageous if climatological background error variances are used. This potentially poses a challenge for RUC, which could perhaps be met by improved cycling of error covariances.

## Disclosure statement

No potential conflict of interest was reported by the author.

1 Optimal except that we ignore linearisation error, which is small for our example.

2 Note also that there is scope for preconditioning using the work already done for recent analyses. This preconditioning could be based on Hessian eigenvectors (Fisher and Courtier, 1995), or on the vectors approximating the Hessian in the limited memory quasi-Newton method (Courtier et al., 1998).

3 We have also noted that the window length of RUC is determined by the longest delay in receiving observations, which in the operational example in Section 1 implies a window of 4 hours compared with the current 6 hours.

## Acknowledgements

The author thanks Mike Cullen and Andrew Lorenc for useful discussions on this topic, and the referees appointed by Tellus for their comments.

## References

1. Anderson, B. and Moore, J. 1979. Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ. 357 pp.

2. Auligné, T., Ménétrier, B., Lorenc, A. C. and Buehner, M. 2016. Ensemble-variational integrated localized data assimilation. Mon. Weather Rev. 144, 3677–3696. DOI: https://doi.org/10.1175/mwr-d-15-0252.1.

3. Auvinen, H., Bardsley, J. M., Haario, H. and Kauranne, T. 2010. The variational Kalman filter and an efficient implementation using limited memory BFGS. Int. J. Numer. Methods Fluids 64, 314–335. DOI: https://doi.org/10.1002/fld.2153.

4. Benjamin, S. G., Dévényi, D., Weygandt, S. S., Brundage, K. J., Brown, J. M. and co-authors. 2004a. An hourly assimilation-forecast cycle: the RUC. Mon. Weather Rev. 132, 495–518. DOI: https://doi.org/10.1175/1520-0493.

5. Benjamin, S. G., Grell, G. A., Brown, J. M., Smirnova, T. G. and Bleck, R. 2004b. Mesoscale weather prediction with the RUC hybrid isentropic terrain-following coordinate model. Mon. Weather Rev. 132, 473–494. DOI: https://doi.org/10.1175/1520-0493.

6. Boyd, S. and Vandenberghe, L. 2004. Convex Optimization. Cambridge University Press, New York, NY. 716 pp.

7. Clayton, A. M., Lorenc, A. C. and Barker, D. M. 2013. Operational implementation of a hybrid ensemble/4D-Var global data assimilation system at the Met Office. Q. J. R. Meteorol. Soc. 139, 1445–1461. DOI: https://doi.org/10.1002/qj.2054.

8. Courtier, P., Andersson, E., Heckley, W., Vasiljevic, D., Hamrud, M. and co-authors. 1998. The ECMWF implementation of three-dimensional variational assimilation (3D-Var). I: Formulation. Q. J. R. Meteorol. Soc. 124, 1783–1807. DOI: https://doi.org/10.1002/qj.49712455002.

9. Fisher, M. and Courtier, P. 1995. Estimating the covariance matrices of analysis and forecast error in variational data assimilation. Tech. Memo. 220, 28 pp. ECMWF, Shinfield Park, Reading, UK.

10. Gallier, J. 2010. The Schur complement and symmetric positive semidefinite (and definite) matrices. Penn Engineering. Online at: www.cis.upenn.edu/~jean/schur-comp.pdf

11. Järvinen, H., Thépaut, J.-N. and Courtier, P. 1996. Quasi-continuous variational data assimilation. Q. J. R. Meteorol. Soc. 122, 515–534. DOI: https://doi.org/10.1002/qj.49712253011.

12. Jazwinski, A. H. 1970. Stochastic Processes and Filtering Theory. Academic Press, New York. 376 pp.

13. Li, Z. and Navon, I. M. 2001. Optimality of variational data assimilation and its relationship with the Kalman filter and smoother. Q. J. R. Meteorol. Soc. 127, 661–683. DOI: https://doi.org/10.1002/qj.49712757220.

14. Lorenz, E. 1996. Predictability – a problem partly solved. In: Proceedings, Seminar on Predictability, Vol. 1, ECMWF, Reading, UK, pp. 1–18.

15. Payne, T. J. 2013. The linearisation of maps in data assimilation. Tellus A: Dyn. Meteorol. Oceanogr. 65. DOI: https://doi.org/10.3402/tellusa.v65i0.18840.

16. Tang, Y., Lean, H. W. and Bornemann, J. 2013. The benefits of the Met Office variable resolution NWP model for forecasting convection. Met. Apps 20, 417–426. DOI: https://doi.org/10.1002/met.1300.

17. Veerse, F. and Thépaut, J.-N. 1998. Multiple-truncation incremental approach for four-dimensional variational data assimilation. Q. J. R. Meteorol. Soc. 124, 1889–1908. DOI: https://doi.org/10.1002/qj.49712455006.

Appendix 1

### Proof of variational form of optimal solution given in Section 3.3

Continuing with the notation of Section 3.3 we readily obtain identities (A1)–(A3) below:

• (i)
((A1) )
$\begin{array}{cc}& {J}_{b}\left(\underline{\mathbit{\delta }}\right)+{J}_{q}\left(\underline{\mathbit{\delta }}\right)\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}=\frac{1}{2}{\underline{\mathbit{\delta }}}^{T}\left[\left(\begin{array}{cc}{\stackrel{^}{A}}^{-1}& 0\\ 0& 0\end{array}\right)+\left(\begin{array}{c}-{f}^{T}\\ I\end{array}\right){Q}_{k-1}^{-1}\left(\begin{array}{cc}-f& I\end{array}\right)\right]\underline{\mathbit{\delta }}\hfill \end{array}$
where f is the $n×nN$ matrix
$f=\left(\begin{array}{cc}\underset{N-1\phantom{\rule{0.333333em}{0ex}}\text{times}}{\underbrace{\begin{array}{ccc}0& \cdots & 0\end{array}}}& {M}_{k-1}\end{array}\right)$
• (ii)   By elementary algebra
((A2) )
$\begin{array}{cc}& \left[\left(\begin{array}{c}I\\ f\end{array}\right)\stackrel{^}{A}\left(\begin{array}{cc}I& {f}^{T}\end{array}\right)+\left(\begin{array}{cc}0& 0\\ 0& Q\end{array}\right)\right]\hfill \\ & \phantom{\rule{1em}{0ex}}×\left[\left(\begin{array}{c}-{f}^{T}\\ I\end{array}\right){Q}^{-1}\left(\begin{array}{cc}-f& I\end{array}\right)+\left(\begin{array}{cc}{\stackrel{^}{A}}^{-1}& 0\\ 0& 0\end{array}\right)\right]\hfill \\ & =\left(\begin{array}{cc}I& 0\\ 0& I\end{array}\right)\hfill \end{array}$
• (iii)   From the definition of ${\underline{M}}_{k-1}$ in (12) we have
${\underline{M}}_{k-1}=\left(\begin{array}{cc}0& I\\ 0& f\end{array}\right)$
so by construction of $\stackrel{^}{A}$
((A3) )
$\begin{array}{c}\hfill \left(\begin{array}{c}I\\ f\end{array}\right)\stackrel{^}{A}\left(\begin{array}{cc}I& {f}^{T}\end{array}\right)={\underline{M}}_{k-1}{\underline{P}}_{k-1|k-1}{\underline{M}}_{k-1}^{T}\end{array}$
It follows from (A1)–(A3) and (13) that
((A4) )
$\begin{array}{cc}& {J}_{b}\left(\underline{\mathbit{\delta }}\right)+{J}_{q}\left(\underline{\mathbit{\delta }}\right)\hfill \\ \hfill & =\frac{1}{2}{\underline{\mathbit{\delta }}}^{T}\left[{\left({\underline{M}}_{k-1}{\underline{P}}_{k-1|k-1}{\underline{M}}_{k-1}^{T}+{\underline{Q}}_{k-1}\right)}^{-1}\right]\underline{\mathbit{\delta }}\hfill \end{array}$
At a minimum of (24) we have ${J}^{\prime }\left(\underline{\mathbit{\delta }}\right)=\mathbf{0}$ so
((A5) )
$\begin{array}{cc}& \left[{\left({\underline{M}}_{k-1}{\underline{P}}_{k-1|k-1}{\underline{M}}_{k-1}^{T}+{\underline{Q}}_{k-1}\right)}^{-1}+{\underline{H}}_{k}^{T}{\underline{R}}_{k}^{-1}{\underline{H}}_{k}\right]\hfill \\ \hfill & ×\underline{\mathbit{\delta }}={\underline{H}}_{k}^{T}{\underline{R}}_{k}^{-1}\left({\underline{\mathbf{y}}}_{k}-{\underline{H}}_{k}{\underline{\stackrel{^}{\mathbf{x}}}}_{k|k-1}\right)\hfill \end{array}$
and using this value of $\underline{\mathbit{\delta }}$ in (23) is equivalent to (16).

Appendix 2

### Proof of theorem of Section 5.3

If the sequences of background error covariances using Methods 0 and 1 are designated, respectively, ${\underline{B}}_{k}$ and ${\underline{\stackrel{~}{B}}}_{k}$, and we start from the same prior error covariance ${\underline{\stackrel{~}{B}}}_{N}={\underline{B}}_{N}$, then for all $k\ge N$

${\underline{B}}_{k}⪯{\underline{\stackrel{~}{B}}}_{k}$
Proof. For simplicity we shall suppose that both $\underline{R}$ and ${\underline{B}}_{k}^{v}$ are non-singular and that $\underline{H}=I$, so, denoting Method 1 quantities with tildes, we have $\underline{K}={\left({\underline{B}}^{-1}+{\underline{R}}^{-1}\right)}^{-1}{\underline{R}}^{-1}$ and $\underline{\stackrel{~}{K}}={\left({\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}\right)}^{-1}{\underline{R}}^{-1}$. Hence from (17) and (42) we have
((B1) )
$\begin{array}{cc}\hfill \underline{A}& ={\left({\underline{B}}^{-1}+{\underline{R}}^{-1}\right)}^{-1}\hfill \\ \hfill \underline{\stackrel{~}{A}}& ={\left({\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}\right)}^{-1}\left[{\left({\underline{B}}^{v}\right)}^{-1}\underline{\stackrel{~}{B}}{\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}\right]\hfill \\ \hfill & ×{\left({\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}\right)}^{-1}\hfill \end{array}$

and therefore

((B1) )
$\begin{array}{cc}& {\underline{A}}^{-1}-{\underline{\stackrel{~}{A}}}^{-1}=\left({\underline{B}}^{-1}-{\underline{\stackrel{~}{B}}}^{-1}\right)\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}+\left[{\underline{\stackrel{~}{B}}}^{-1}+{\underline{R}}^{-1}-\left({\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}\right){\left({\left({\underline{B}}^{v}\right)}^{-1}\underline{\stackrel{~}{B}}{\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}\right)}^{-1}\left({\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}\right)\right]\hfill \end{array}$

Now consider the matrix

((B2) )
$\begin{array}{cc}\hfill \underline{X}& =\left(\begin{array}{cc}{\underline{\stackrel{~}{B}}}^{-1}+{\underline{R}}^{-1}& {\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}\\ {\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}& {\left({\underline{B}}^{v}\right)}^{-1}\underline{\stackrel{~}{B}}{\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}\end{array}\right)\hfill \\ \hfill & =\left(\begin{array}{c}{\underline{\stackrel{~}{B}}}^{-1}\\ {\left({\underline{B}}^{v}\right)}^{-1}\end{array}\right)\underline{\stackrel{~}{B}}\left(\begin{array}{cc}{\underline{\stackrel{~}{B}}}^{-1}& {\left({\underline{B}}^{v}\right)}^{-1}\end{array}\right)+\left(\begin{array}{c}I\\ I\end{array}\right){\underline{R}}^{-1}\left(\begin{array}{cc}I& I\end{array}\right)\hfill \end{array}$

Since $\underline{X}$ is the sum of two positive semi-definite matrices it is positive semi-definite. Note also that, since $\underline{R}$ is positive definite, ${\left({\underline{B}}^{v}\right)}^{-1}\underline{\stackrel{~}{B}}{\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}$ is invertible, so its Moore–Penrose inverse is its usual matrix inverse.

The term in square brackets in (B1) is the Schur complement of ${\left({\underline{B}}^{v}\right)}^{-1}\underline{\stackrel{~}{B}}{\left({\underline{B}}^{v}\right)}^{-1}+{\underline{R}}^{-1}$ in $\underline{X}$, so by Theorem 4.3 of Gallier (2010) it is positive semi-definite. Therefore if ${\underline{B}}_{k}^{-1}⪰{\underline{\stackrel{~}{B}}}_{k}^{-1}$ then ${\underline{A}}_{k}^{-1}-{\underline{\stackrel{~}{A}}}_{k}^{-1}$ is the sum of two positive semi-definite matrices, so is positive semi-definite. We conclude that if ${\underline{B}}_{k}⪯{\underline{\stackrel{~}{B}}}_{k}$ then ${\underline{A}}_{k}⪯{\underline{\stackrel{~}{A}}}_{k}$. Since both methods satisfy (43) with the same ${\underline{M}}_{k}$, both satisfy the same relation

${\underline{B}}_{k+1}={\underline{M}}_{k}{\underline{A}}_{k}{\underline{M}}_{k}^{T}+\underline{Q},\phantom{\rule{0.277778em}{0ex}}{\underline{\stackrel{~}{B}}}_{k+1}={\underline{M}}_{k}{\underline{\stackrel{~}{A}}}_{k}{\underline{M}}_{k}^{T}+\underline{Q}$

Therefore ${\underline{A}}_{k}⪯{\underline{\stackrel{~}{A}}}_{k}$ implies ${\underline{B}}_{k+1}⪯{\underline{\stackrel{~}{B}}}_{k+1}$ and the claim follows.$\square$
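As an illustration of the theorem, the two covariance recursions can be cycled numerically. In this sketch (illustrative values, not from the paper; $\underline{H}=I$ as in the proof, and a fixed surrogate matrix standing in for ${\underline{B}}^{v}$) Method 0 uses the optimal analysis covariance while Method 1 uses the suboptimal-gain covariance of (A5), and the ordering ${\underline{B}}_{k}⪯{\underline{\stackrel{~}{B}}}_{k}$ is checked at every step:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def spd(k):
    """Random symmetric positive definite matrix (illustrative)."""
    a = rng.standard_normal((k, k))
    return a @ a.T + np.eye(k)

M = 0.5 * rng.standard_normal((n, n))   # model M_k (held fixed for simplicity)
Q = 0.1 * np.eye(n)                     # model error covariance
R = spd(n)                              # observation error covariance (H = I)
Bv = 2.0 * np.eye(n)                    # surrogate B^v used to build the Method-1 gain

B = spd(n)       # Method 0 background error covariance B_k
Bt = B.copy()    # Method 1 actual background error covariance, same start

Ri = np.linalg.inv(R)
Bvi = np.linalg.inv(Bv)
C = np.linalg.inv(Bvi + Ri)             # ((B^v)^{-1} + R^{-1})^{-1}

for _ in range(20):
    A = np.linalg.inv(np.linalg.inv(B) + Ri)   # optimal analysis covariance, first line of (A5)
    At = C @ (Bvi @ Bt @ Bvi + Ri) @ C         # suboptimal-gain covariance, second line of (A5)
    B = M @ A @ M.T + Q                        # common propagation B_{k+1} = M A_k M^T + Q
    Bt = M @ At @ M.T + Q
    # the theorem asserts B_k <= B~_k in the positive semi-definite ordering
    tol = 1e-9 * (1.0 + np.linalg.norm(Bt))
    assert np.linalg.eigvalsh(Bt - B).min() > -tol

print("ordering held at every step")
```

The check passes for any choice of the surrogate ${\underline{B}}^{v}$, reflecting the fact that the Schur-complement argument above does not use any special structure of ${\underline{B}}^{v}$.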

Appendix 3

### Proof of equivalence of Methods 0–3 if $Q=0$

Set ${\underline{V}}_{k}$ to be the first n columns of the $n\left(N+1\right)×n\left(N+1\right)$ matrix ${\underline{U}}_{k}$, i.e.

${\underline{V}}_{k}=\left(\begin{array}{c}{M}_{k-N}^{k-N}\\ {M}_{k-N}^{k-\left(N-1\right)}\\ ⋮\\ {M}_{k-N}^{k}\end{array}\right)$

Because $Q=0$ it follows from (40) that

((B3) )
$\begin{array}{c}\hfill {\underline{B}}_{k}^{v}={\underline{V}}_{k}B{\underline{V}}_{k}^{T}\end{array}$

We suppose inductively that for some $k\ge N$ Method 0 and Methods 1–3 have the same ${\underline{\mathbf{x}}}_{k}^{b},{\underline{B}}_{k}$ and that

((C1) )
$\begin{array}{c}\hfill {\underline{\mathbf{x}}}_{k}^{b}={\underline{V}}_{k}{\mathbf{x}}_{k-N}^{b}\\ \hfill {\underline{B}}_{k}={\underline{B}}_{k}^{v}={\underline{V}}_{k}{B}_{k-N}{\underline{V}}_{k}^{T}\end{array}$

This is true for $k=N$ by construction. For all methods we therefore have

${\underline{K}}_{k}={\underline{V}}_{k}{B}_{k-N}{\underline{V}}_{k}^{T}{\underline{H}}_{k}^{T}{\left({\underline{H}}_{k}{\underline{V}}_{k}{B}_{k-N}{\underline{V}}_{k}^{T}{\underline{H}}_{k}^{T}+{\underline{R}}_{k}\right)}^{-1}$

Using the ‘Kalman identity’

((C2) )
$\begin{array}{c}\hfill B{\underline{V}}^{T}{\underline{H}}^{T}{\left(\underline{H}\underline{V}B{\underline{V}}^{T}{\underline{H}}^{T}+\underline{R}\right)}^{-1}=\\ \hfill {\left({B}^{-1}+{\underline{V}}^{T}{\underline{H}}^{T}{\underline{R}}^{-1}\underline{H}\underline{V}\right)}^{-1}{\underline{V}}^{T}{\underline{H}}^{T}{\underline{R}}^{-1}\end{array}$

it follows from (17), (42) that for all methods

((C3) )
$\begin{array}{c}\hfill {\underline{A}}_{k}={\underline{V}}_{k}\left[{\left({B}_{k-N}^{-1}+{\underline{V}}_{k}^{T}{\underline{H}}_{k}^{T}{\underline{R}}^{-1}{\underline{H}}_{k}{\underline{V}}_{k}\right)}^{-1}\right]{\underline{V}}_{k}^{T}\\ \hfill {\underline{K}}_{k}={\underline{A}}_{k}{\underline{H}}_{k}^{T}{\underline{R}}^{-1}\end{array}$

and therefore from (16), (38) that in all cases

((C4) )
$\begin{array}{cc}\hfill {\underline{\mathbf{x}}}_{k}^{a}=& {\underline{V}}_{k}\left[{\mathbf{x}}_{k-N}^{b}+{\left({B}_{k-N}^{-1}+{\underline{V}}_{k}^{T}{\underline{H}}_{k}^{T}{\underline{R}}^{-1}{\underline{H}}_{k}{\underline{V}}_{k}\right)}^{-1}{\underline{V}}_{k}^{T}{\underline{H}}_{k}^{T}{\underline{R}}^{-1}\left({\underline{\mathbf{y}}}_{k}-{\underline{H}}_{k}{\underline{\mathbf{x}}}_{k}^{b}\right)\right]\hfill \end{array}$

Recall that the superscript $\ell$ in ${\underline{M}}^{\left(\ell \right)}$ in (43) denotes the method used, with ${\underline{M}}_{k}^{\left(1\right)}={\underline{M}}_{k}^{\left(0\right)}={\underline{M}}_{k}$ as defined for the optimal solution in (12). For all $\ell =0,1,2,3$

${\underline{M}}_{k}^{\left(\ell \right)}{\underline{V}}_{k}=\left(\begin{array}{c}{M}_{k-N}^{k+1-N}\\ {M}_{k-N}^{k+1-\left(N-1\right)}\\ ⋮\\ {M}_{k-N}^{k+1}\end{array}\right)={\underline{V}}_{k+1}{M}_{k-N}^{k+1-N}$

Let ${\stackrel{~}{\underline{\mathbf{x}}}}_{k}^{a}$ denote the vector in square brackets in (C4) and ${\stackrel{~}{\underline{A}}}_{k}$ the matrix in square brackets in (C3). It follows from (18), (19) and (43), (45) that both for Method 0 and for Methods $\ell =1,2,3$ we have

((C5) )
$\begin{array}{cc}\hfill {\underline{\mathbf{x}}}_{k+1}^{b}& ={\underline{M}}_{k}^{\left(\ell \right)}{\underline{\mathbf{x}}}_{k}^{a}={\underline{M}}_{k}^{\left(\ell \right)}{\underline{V}}_{k}{\stackrel{~}{\underline{\mathbf{x}}}}_{k}^{a}={\underline{V}}_{k+1}{M}_{k-N}^{k+1-N}{\stackrel{~}{\underline{\mathbf{x}}}}_{k}^{a}\hfill \\ \hfill {\mathbf{x}}_{k+1-N}^{b}& ={\left({\underline{\mathbf{x}}}_{k+1}^{b}\right)}_{\underline{1}}={M}_{k-N}^{k+1-N}{\stackrel{~}{\underline{\mathbf{x}}}}_{k}^{a}\hfill \\ \hfill {\underline{B}}_{k+1}& ={\underline{M}}_{k}^{\left(\ell \right)}{\underline{A}}_{k}{\left({\underline{M}}_{k}^{\left(\ell \right)}\right)}^{T}\hfill \\ \hfill & ={\underline{V}}_{k+1}{M}_{k-N}^{k+1-N}{\stackrel{~}{\underline{A}}}_{k}{\left({M}_{k-N}^{k+1-N}\right)}^{T}{\underline{V}}_{k+1}^{T}\hfill \\ \hfill {B}_{k+1-N}& ={\left({\underline{B}}_{k+1}\right)}_{\underline{1},\underline{1}}={M}_{k-N}^{k+1-N}{\stackrel{~}{\underline{A}}}_{k}{\left({M}_{k-N}^{k+1-N}\right)}^{T}\hfill \end{array}$
(where ${\left({\underline{\mathbf{x}}}_{k+1}^{b}\right)}_{\underline{1}}$ is the first $n×1$ sub-vector of ${\underline{\mathbf{x}}}_{k+1}^{b}$ and ${\left({\underline{B}}_{k+1}\right)}_{\underline{1},\underline{1}}$ the leading $n×n$ sub-matrix of ${\underline{B}}_{k+1}$). Therefore the inductive hypothesis holds for $k+1$, and therefore for all k.
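The 'Kalman identity' (C2) used in this proof is a standard Sherman–Morrison–Woodbury-type relation, and can be checked numerically for generic matrices. A minimal sketch with illustrative dimensions and random matrices (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, m = 3, 2, 5                       # state size, window length, observation count
nN = n * (N + 1)                        # augmented state size n(N+1)

a = rng.standard_normal((n, n))
B = a @ a.T + np.eye(n)                 # SPD prior covariance
V = rng.standard_normal((nN, n))        # stand-in for V_k (any n(N+1) x n matrix works here)
H = rng.standard_normal((m, nN))        # observation operator on the augmented state
r = rng.standard_normal((m, m))
R = r @ r.T + np.eye(m)                 # SPD observation error covariance

# Left-hand side of (C2): B V^T H^T (H V B V^T H^T + R)^{-1}
lhs = B @ V.T @ H.T @ np.linalg.inv(H @ V @ B @ V.T @ H.T + R)
# Right-hand side of (C2): (B^{-1} + V^T H^T R^{-1} H V)^{-1} V^T H^T R^{-1}
G = V.T @ H.T @ np.linalg.solve(R, H @ V)
rhs = np.linalg.solve(np.linalg.inv(B) + G, V.T @ H.T @ np.linalg.inv(R))

print(np.allclose(lhs, rhs))  # the identity holds for generic matrices
```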

Appendix 4

### 4D-Var with a fixed background error covariance: impact of observation batch size

Suppose we have a linear system, observations in some time interval [0, T] and all errors are Gaussian. If we assimilate the observations using an optimal method, such as 4D-Var with correctly cycled prior and posterior error covariances, using m assimilation windows

$\left[0,{t}_{1}\right],\left[{t}_{1},{t}_{2}\right],\dots ,\left[{t}_{m-1},T\right]$

then the estimate of state at time T is independent of m and of how we choose

${t}_{1}<{t}_{2}<\cdots <{t}_{m-1}$

However, if the background error covariances are held fixed rather than properly cycled, it is often advantageous to assimilate data in larger batches.

We illustrate this by considering a case where x is a scalar quantity, which evolves in time according to

$x\left(i+1\right)=\mathit{\mu }x\left(i\right)$

for some constant $\mathit{\mu }$ (which is supposed known, so there is no model error) with $|\mathit{\mu }|>1$. At each time i we wish to assimilate an observation ${y}_{1}\left(i\right)$ with error variance ${r}_{i}$ and an observation ${y}_{2}\left(i\right)$ with error variance ${s}_{i}$. To make the problem analytically tractable we will suppose that $\left\{\left({r}_{i},{s}_{i}\right),i=1,\dots ,\infty \right\}$ are drawn independently and identically over i from $\left\{\left({R}_{j},{S}_{j}\right),j=1,\dots ,k\right\}$, i.e. for each i, $\left\{{r}_{i}={R}_{j},{s}_{i}={S}_{j}\right\}$ with probability ${p}_{j},\phantom{\rule{0.277778em}{0ex}}j=1,\dots ,k$.

We compare two assimilation strategies using 4D-Var with a non-cycled background:

Simultaneous (batch size of 2): at each time i we assimilate ${y}_{1}\left(i\right)$ and ${y}_{2}\left(i\right)$ simultaneously using fixed background error variance b, i.e. ${x}_{a}={x}_{b}+\mathit{\delta }$ where $\mathit{\delta }$ minimises

$J\left(\mathit{\delta }\right)=\frac{1}{2}\frac{{\mathit{\delta }}^{2}}{b}+\frac{1}{2}\frac{{\left({y}_{1}-{x}_{b}-\mathit{\delta }\right)}^{2}}{{r}_{i}}+\frac{1}{2}\frac{{\left({y}_{2}-{x}_{b}-\mathit{\delta }\right)}^{2}}{{s}_{i}}$
Sequential (batch size of 1): at each time i we assimilate ${y}_{1}\left(i\right)$ using fixed background error variance ${b}_{1}$ to give intermediate analysis ${x}_{a1}\left(i\right)$, then assimilate observation ${y}_{2}\left(i\right)$ using fixed background error variance ${b}_{2}$ to give final analysis ${x}_{a}\left(i\right)$, i.e. ${x}_{a1}={x}_{b}+{\mathit{\delta }}_{1}$, ${x}_{a}={x}_{a1}+{\mathit{\delta }}_{2}$, where ${\mathit{\delta }}_{1},{\mathit{\delta }}_{2}$ minimise
((C6) )
$\begin{array}{c}\hfill J\left({\mathit{\delta }}_{1}\right)=\frac{1}{2}\frac{{\mathit{\delta }}_{1}^{2}}{{b}_{1}}+\frac{1}{2}\frac{{\left({y}_{1}-{x}_{b}-{\mathit{\delta }}_{1}\right)}^{2}}{{r}_{i}}\\ \hfill J\left({\mathit{\delta }}_{2}\right)=\frac{1}{2}\frac{{\mathit{\delta }}_{2}^{2}}{{b}_{2}}+\frac{1}{2}\frac{{\left({y}_{2}-{x}_{a1}-{\mathit{\delta }}_{2}\right)}^{2}}{{s}_{i}}\end{array}$

We will show the following: we can choose b so that, however ${b}_{1}$ and ${b}_{2}$ are chosen, the asymptotic mean square error using the simultaneous method is no greater than that using the sequential method (and strictly lower if $k\ge 2$ and ${R}_{1}\ne {R}_{2}$).
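Before the proof, the claim can be explored numerically via the limit formula (D3) derived below. This sketch (illustrative parameter values; crude grid search rather than exact minimisation; not code from the paper) evaluates the asymptotic mean square error of both strategies and confirms that the best simultaneous b does at least as well as the best sequential pair $\left({b}_{1},{b}_{2}\right)$:

```python
import numpy as np

# Illustrative parameters: two equally likely error regimes with R_1 != R_2
mu = 1.2
R = np.array([1.0, 4.0])
S = np.array([2.0, 2.0])
p = np.array([0.5, 0.5])

def gamma(rho, sig):
    """Asymptotic mean square error (D3); inf if the stability constraint fails."""
    num = np.sum(p * (rho**2 * R + sig**2 * S))
    den = 1.0 - mu**2 * np.sum(p * (1.0 - rho - sig)**2)
    return num / den if den > 0 else np.inf

# Simultaneous strategy: rho, sigma from (D4), one free parameter b
best_sim = np.inf
for b in np.linspace(0.01, 20.0, 2000):
    w = 1.0 / b + 1.0 / R + 1.0 / S
    best_sim = min(best_sim, gamma((1.0 / R) / w, (1.0 / S) / w))

# Sequential strategy: rho, sigma from (D5), free parameters b1, b2
best_seq = np.inf
grid = np.linspace(0.01, 20.0, 200)
for b1 in grid:
    for b2 in grid:
        rho = S * b1 / ((b1 + R) * (b2 + S))
        sig = b2 / (b2 + S)
        best_seq = min(best_seq, gamma(rho, sig))

print(best_sim <= best_seq)  # the simultaneous minimum is never larger
```

With these parameters the two minima differ by a small but strictly positive margin, consistent with the strict-inequality case ${R}_{1}\ne {R}_{2}$.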

#### Proof (summary)

• (1)   Denoting $E\left[{\left({x}_{i}^{a}-{x}_{i}^{t}\right)}^{2}\right]$ by ${A}_{i}$, it is readily shown that for both the simultaneous and sequential methods
((D1) )
$\begin{array}{c}\hfill {A}_{i}={\mathit{\beta }}_{i}^{2}{\mathit{\mu }}^{2}{A}_{i-1}+{\mathit{\rho }}_{i}^{2}{r}_{i}+{\mathit{\sigma }}_{i}^{2}{s}_{i}\end{array}$
where the parameters ${\mathit{\beta }}_{i}$, ${\mathit{\rho }}_{i}$, ${\mathit{\sigma }}_{i}$ are as listed in Table D1.
• (2)   It follows from (D1) that for any $i>1$
((D2) )
$\begin{array}{cc}\hfill {A}_{i}=& {\left({\mathit{\beta }}_{i}{\mathit{\beta }}_{i-1}\dots {\mathit{\beta }}_{2}{\mathit{\mu }}^{i-1}\right)}^{2}{A}_{1}\hfill \\ \hfill & +{\left({\mathit{\beta }}_{i}{\mathit{\beta }}_{i-1}\dots {\mathit{\beta }}_{3}{\mathit{\mu }}^{i-2}\right)}^{2}\left({\mathit{\rho }}_{2}^{2}{r}_{2}+{\mathit{\sigma }}_{2}^{2}{s}_{2}\right)\hfill \\ \hfill & +{\left({\mathit{\beta }}_{i}{\mathit{\beta }}_{i-1}\dots {\mathit{\beta }}_{4}{\mathit{\mu }}^{i-3}\right)}^{2}\left({\mathit{\rho }}_{3}^{2}{r}_{3}+{\mathit{\sigma }}_{3}^{2}{s}_{3}\right)\hfill \\ \hfill & +\cdots \hfill \\ \hfill & +{\left({\mathit{\beta }}_{i}\mathit{\mu }\right)}^{2}\left({\mathit{\rho }}_{i-1}^{2}{r}_{i-1}+{\mathit{\sigma }}_{i-1}^{2}{s}_{i-1}\right)\hfill \\ \hfill & +{\mathit{\rho }}_{i}^{2}{r}_{i}+{\mathit{\sigma }}_{i}^{2}{s}_{i}\hfill \end{array}$
So ${A}_{i}$ is a function of $\left({r}_{1},{s}_{1}\right),\left({r}_{2},{s}_{2}\right),\dots ,\left({r}_{i},{s}_{i}\right)$, which we are treating as independent random variables, so
$\begin{array}{cc}& E\left[{\left({\mathit{\beta }}_{i}{\mathit{\beta }}_{i-1}\dots {\mathit{\beta }}_{j+1}\right)}^{2}\left({\mathit{\rho }}_{j}^{2}{r}_{j}+{\mathit{\sigma }}_{j}^{2}{s}_{j}\right)\right]\hfill \\ \hfill & =E\left[{\mathit{\beta }}_{i}^{2}\right]E\left[{\mathit{\beta }}_{i-1}^{2}\right]\dots E\left[{\mathit{\beta }}_{j+1}^{2}\right]E\left[{\mathit{\rho }}_{j}^{2}{r}_{j}+{\mathit{\sigma }}_{j}^{2}{s}_{j}\right]\hfill \end{array}$
Denote expectations over $\left({r}_{1},{s}_{1}\right),\left({r}_{2},{s}_{2}\right),\dots$ by ${E}_{\left\{{r}_{j},{s}_{j}\right\}}$. If ${E}_{\left\{{r}_{j},{s}_{j}\right\}}\left[{\left({\mathit{\beta }}_{j}\mathit{\mu }\right)}^{2}\right]<1$ then, noting ${\mathit{\beta }}_{i}=1-{\mathit{\rho }}_{i}-{\mathit{\sigma }}_{i}$, we have
((D3) )
$\begin{array}{cc}& \phantom{\rule{0.333333em}{0ex}}{\text{lim}}_{i\to \infty }{E}_{\left\{{r}_{j},{s}_{j}\right\}}\left[{A}_{i}\right]=\frac{{E}_{\left\{{r}_{j},{s}_{j}\right\}}\left[{\mathit{\rho }}_{j}^{2}{r}_{j}+{\mathit{\sigma }}_{j}^{2}{s}_{j}\right]}{1-{E}_{\left\{{r}_{j},{s}_{j}\right\}}\left[{\left({\mathit{\beta }}_{j}\mathit{\mu }\right)}^{2}\right]}\hfill \\ \hfill & =\frac{{\sum }_{j=1}^{k}{p}_{j}\left[\mathit{\rho }{\left({R}_{j},{S}_{j}\right)}^{2}{R}_{j}+\mathit{\sigma }{\left({R}_{j},{S}_{j}\right)}^{2}{S}_{j}\right]}{1-{\mathit{\mu }}^{2}{\sum }_{j=1}^{k}{p}_{j}{\left(1-\mathit{\rho }\left({R}_{j},{S}_{j}\right)-\mathit{\sigma }\left({R}_{j},{S}_{j}\right)\right)}^{2}}\hfill \end{array}$
where in the simultaneous case $\mathit{\rho },\mathit{\sigma }$ are the functions
((D4) )
$\begin{array}{cc}\hfill {\mathit{\rho }}_{\mathit{sim}}\left(R,S,b\right)& =\frac{1/R}{1/b+1/R+1/S},\hfill \\ \hfill {\mathit{\sigma }}_{\mathit{sim}}\left(R,S,b\right)& =\frac{1/S}{1/b+1/R+1/S}\hfill \end{array}$
and in the sequential case $\mathit{\rho },\mathit{\sigma }$ are the functions
((D5) )
$\begin{array}{cc}\hfill {\mathit{\rho }}_{\mathit{seq}}\left(R,S,{b}_{1},{b}_{2}\right)& =\frac{S{b}_{1}}{\left({b}_{1}+R\right)\left({b}_{2}+S\right)},\hfill \\ \hfill {\mathit{\sigma }}_{\mathit{seq}}\left(R,S,{b}_{1},{b}_{2}\right)& =\frac{{b}_{2}}{{b}_{2}+S}\hfill \end{array}$
• (3)   Lemma. If
((D6) )
$\begin{array}{c}\hfill {R}_{1},{R}_{2},\dots {R}_{k}>0\\ \hfill {S}_{1},{S}_{2},\dots {S}_{k}>0\\ \hfill {p}_{1},{p}_{2},\dots {p}_{k}>0\end{array}$
with $\sum {p}_{i}=1$ and $\mathit{\mu }>1$, then
((D7) )
$\begin{array}{cc}& \underset{b>0}{min}\left\{\frac{\sum {p}_{j}\left[{\mathit{\rho }}_{\mathit{sim}}{\left({R}_{j},{S}_{j},b\right)}^{2}{R}_{j}+{\mathit{\sigma }}_{\mathit{sim}}{\left({R}_{j},{S}_{j},b\right)}^{2}{S}_{j}\right]}{1-{\mathit{\mu }}^{2}\sum {p}_{j}{\left(1-{\mathit{\rho }}_{\mathit{sim}}\left({R}_{j},{S}_{j},b\right)-{\mathit{\sigma }}_{\mathit{sim}}\left({R}_{j},{S}_{j},b\right)\right)}^{2}}\right\}\hfill \\ \hfill & \phantom{\rule{0.333333em}{0ex}}\text{subject to}\hfill \\ \hfill & {\mathit{\mu }}^{2}\sum {p}_{j}{\left(1-{\mathit{\rho }}_{\mathit{sim}}\left({R}_{j},{S}_{j},b\right)-{\mathit{\sigma }}_{\mathit{sim}}\left({R}_{j},{S}_{j},b\right)\right)}^{2}<1\hfill \end{array}$
is less than or equal to
$\begin{array}{cc}& \underset{\begin{array}{c}{b}_{1}>0\\ {b}_{2}>0\end{array}}{min}\left\{\frac{\sum {p}_{j}\left[{\mathit{\rho }}_{\mathit{seq}}{\left({R}_{j},{S}_{j},{b}_{1},{b}_{2}\right)}^{2}{R}_{j}+{\mathit{\sigma }}_{\mathit{seq}}{\left({R}_{j},{S}_{j},{b}_{1},{b}_{2}\right)}^{2}{S}_{j}\right]}{1-{\mathit{\mu }}^{2}\sum {p}_{j}{\left(1-{\mathit{\rho }}_{\mathit{seq}}\left({R}_{j},{S}_{j},{b}_{1},{b}_{2}\right)-{\mathit{\sigma }}_{\mathit{seq}}\left({R}_{j},{S}_{j},{b}_{1},{b}_{2}\right)\right)}^{2}}\right\}\hfill \end{array}$
((D8) )
$\begin{array}{cc}& \phantom{\rule{0.333333em}{0ex}}\text{subject to}\hfill \\ \hfill & {\mathit{\mu }}^{2}\sum {p}_{j}{\left(1-{\mathit{\rho }}_{\mathit{seq}}\left({R}_{j},{S}_{j},{b}_{1},{b}_{2}\right)-{\mathit{\sigma }}_{\mathit{seq}}\left({R}_{j},{S}_{j},{b}_{1},{b}_{2}\right)\right)}^{2}<1\hfill \end{array}$

where the inequality is strict if $k\ge 2$ and ${R}_{1}\ne {R}_{2}$.

This Lemma proves the claim, and is itself proved in (3a)–(3d).

• (3a)   Given constants
((D9) )
$\begin{array}{c}\hfill \left\{{R}_{j}>0,{S}_{j}>0,{p}_{j}>0\phantom{\rule{0.277778em}{0ex}};\phantom{\rule{0.277778em}{0ex}}j=1,\dots ,k\right\}\\ \hfill \mathit{\mu }>0\end{array}$
Define the function
((D10) )
$\begin{array}{cc}& \mathrm{\Gamma }\left({\mathit{\rho }}_{1},{\mathit{\sigma }}_{1},\dots ,{\mathit{\rho }}_{k},{\mathit{\sigma }}_{k}\right)\hfill \\ \hfill & \phantom{\rule{1em}{0ex}}=\frac{{\sum }_{j=1}^{k}{p}_{j}\left[{\mathit{\rho }}_{j}^{2}{R}_{j}+{\mathit{\sigma }}_{j}^{2}{S}_{j}\right]}{1-{\mathit{\mu }}^{2}{\sum }_{j=1}^{k}{p}_{j}{\left(1-{\mathit{\rho }}_{j}-{\mathit{\sigma }}_{j}\right)}^{2}}\hfill \end{array}$
on
$\begin{array}{cc}\hfill \mathcal{D}=& \left\{{\mathit{\rho }}_{1},{\mathit{\sigma }}_{1},\dots ,{\mathit{\rho }}_{k},{\mathit{\sigma }}_{k}:{\mathit{\rho }}_{j}>0,\phantom{\rule{0.277778em}{0ex}}{\mathit{\sigma }}_{j}>0,\hfill \\ \hfill & {\mathit{\mu }}^{2}\sum {p}_{j}{\left(1-{\mathit{\rho }}_{j}-{\mathit{\sigma }}_{j}\right)}^{2}<1\right\}\hfill \end{array}$
We may readily check that $\mathcal{D}$ is convex and that $\mathrm{\Gamma }$ is convex on $\mathcal{D}$. At any local extremum of $\mathrm{\Gamma }$ the following $2k$ conditions hold:
((D11) )
$\begin{array}{c}\hfill \left(\frac{\mathit{\partial }}{\mathit{\partial }{\mathit{\rho }}_{i}},\frac{\mathit{\partial }}{\mathit{\partial }{\mathit{\sigma }}_{i}}\right)\mathrm{\Gamma }=0,\phantom{\rule{0.277778em}{0ex}}i=1,\dots ,k\end{array}$
• (3b)   Consider the 1-parameter family of functions
${\mathit{\rho }}_{1}\left(b\right),{\mathit{\sigma }}_{1}\left(b\right),\dots ,{\mathit{\rho }}_{k}\left(b\right),{\mathit{\sigma }}_{k}\left(b\right)$
given by
((D12) )
$\begin{array}{cc}\hfill {\mathit{\rho }}_{i}\left(b\right)& ={\mathit{\rho }}_{\mathit{sim}}\left({R}_{i},{S}_{i},b\right)=\frac{1/{R}_{i}}{1/b+1/{R}_{i}+1/{S}_{i}}\hfill \\ \hfill {\mathit{\sigma }}_{i}\left(b\right)& ={\mathit{\sigma }}_{\mathit{sim}}\left({R}_{i},{S}_{i},b\right)=\frac{1/{S}_{i}}{1/b+1/{R}_{i}+1/{S}_{i}}\hfill \end{array}$
Define
((D13) )
$\begin{array}{cc}\hfill g\left(b\right)& =1-{\mathit{\mu }}^{2}\sum _{j=1}^{k}{p}_{j}{\left(1-{\mathit{\rho }}_{j}\left(b\right)-{\mathit{\sigma }}_{j}\left(b\right)\right)}^{2}\hfill \end{array}$
((D14) )
$\begin{array}{cc}\hfill h\left(b\right)& =\sum _{j=1}^{k}{p}_{j}\left[{\mathit{\rho }}_{j}{\left(b\right)}^{2}{R}_{j}+{\mathit{\sigma }}_{j}{\left(b\right)}^{2}{S}_{j}\right]\hfill \end{array}$
Then we may check that if $\left\{{\mathit{\rho }}_{j}\left(b\right),{\mathit{\sigma }}_{j}\left(b\right)\right\}$ are of the form (D12), and if $g\left(b\right)\ne 0$ and b satisfies the single condition
((D15) )
$\begin{array}{c}\hfill f\left(b\right)\equiv bg\left(b\right)-{\mathit{\mu }}^{2}h\left(b\right)=0\end{array}$
then the 2k conditions (D11) hold.
• (3c)   We may also check that we have
((D16) )
$\begin{array}{c}\hfill {f}^{\prime }\left(b\right)=g\left(b\right)\phantom{\rule{4pt}{0ex}}\phantom{\rule{0.333333em}{0ex}}\text{for all}\phantom{\rule{4pt}{0ex}}b\\ \hfill {g}^{\prime }\left(b\right)>0\phantom{\rule{4pt}{0ex}}\phantom{\rule{0.333333em}{0ex}}\text{if}\phantom{\rule{4pt}{0ex}}b>0\end{array}$
Noting that as $b\to 0$ we have
((D17) )
$\begin{array}{c}\hfill f\left(b\right)\to 0\\ \hfill {f}^{\prime }\left(b\right)=g\left(b\right)\to 1-{\mathit{\mu }}^{2}\end{array}$
it follows by elementary arguments that so long as $|\mathit{\mu }|>1$ there must exist ${b}_{\ast \ast }>{b}_{\ast }>0$ such that
((D18) )
$\begin{array}{c}\hfill {f}^{\prime }\left({b}_{\ast }\right)=g\left({b}_{\ast }\right)=0\\ \hfill f\left({b}_{\ast \ast }\right)=0\end{array}$
and since for all $b>0$ we have ${g}^{\prime }\left(b\right)>0$ we must have
${f}^{\prime }\left(b\right)=g\left(b\right)>0\phantom{\rule{4pt}{0ex}}\phantom{\rule{0.333333em}{0ex}}\text{for all}\phantom{\rule{4pt}{0ex}}b>{b}_{\ast }$
Furthermore, since $g\left(b\right)>0$ if and only if
${\left\{{\mathit{\rho }}_{j}\left(b\right),{\mathit{\sigma }}_{j}\left(b\right)\right\}}_{j=1,\dots ,k}\in \mathcal{D}$
it follows that
${\left\{{\mathit{\rho }}_{j}\left({b}_{\ast \ast }\right),{\mathit{\sigma }}_{j}\left({b}_{\ast \ast }\right)\right\}}_{j=1,\dots ,k}\in \mathcal{D}$
• (3d)   We have found a value ${b}_{\ast \ast }$ such that
((D19) )
$\begin{array}{c}\hfill \left\{{\mathit{\rho }}_{\mathit{sim}}\left({R}_{j},{S}_{j},{b}_{\ast \ast }\right),{\mathit{\sigma }}_{\mathit{sim}}\left({R}_{j},{S}_{j},{b}_{\ast \ast }\right)\right\},j=1,\dots ,k\end{array}$
is in $\mathcal{D}$ and such that at this point $\mathrm{\nabla }\mathrm{\Gamma }=\mathbf{0}$. Because $\mathrm{\Gamma }$ is convex on the convex set $\mathcal{D}$, any point where $\mathrm{\nabla }\mathrm{\Gamma }=\mathbf{0}$ is the global minimum of $\mathrm{\Gamma }$ on $\mathcal{D}$ (Boyd and Vandenberghe, 2004). Therefore the global minimum of $\mathrm{\Gamma }$ over $\mathcal{D}$ is achieved by (D19).
It remains to show that this minimum is not achievable by ${\mathit{\rho }}_{\mathit{seq}},{\mathit{\sigma }}_{\mathit{seq}}$, for any ${b}_{1}>0,{b}_{2}>0$, if ${R}_{1}\ne {R}_{2}$.

At any extremum of $\mathrm{\Gamma }$ Equation (D11) must hold, and in particular for every $i=1,\dots ,k$

$\frac{\mathit{\partial }\mathrm{\Gamma }}{\mathit{\partial }{\mathit{\rho }}_{i}}-\frac{\mathit{\partial }\mathrm{\Gamma }}{\mathit{\partial }{\mathit{\sigma }}_{i}}=\frac{2{p}_{i}\left({\mathit{\rho }}_{i}{R}_{i}-{\mathit{\sigma }}_{i}{S}_{i}\right)}{1-{\mathit{\mu }}^{2}{\sum }_{j=1}^{k}{p}_{j}{\left(1-{\mathit{\rho }}_{j}-{\mathit{\sigma }}_{j}\right)}^{2}}=0$

The denominator is positive for all points in $\mathcal{D}$, therefore at any extremum in $\mathcal{D}$

${\mathit{\rho }}_{j}{R}_{j}={\mathit{\sigma }}_{j}{S}_{j},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}j=1,\dots ,k$

Inserting this requirement with $j=1,2$ into the expressions for ${\mathit{\rho }}_{\mathit{seq}},{\mathit{\sigma }}_{\mathit{seq}}$ in (D5), it follows that we would need

((D20) )
$\begin{array}{c}\hfill {R}_{1}{b}_{1}^{2}={R}_{2}{b}_{1}^{2}\end{array}$

which for ${b}_{1}>0$ is only possible if ${R}_{1}={R}_{2}$.
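The construction of (3b)–(3d) can also be illustrated numerically: locate ${b}_{\ast }$ as the root of g and ${b}_{\ast \ast }$ as the root of f above it, then verify that the gradient of $\mathrm{\Gamma }$ vanishes at the corresponding point (D19). A sketch with illustrative parameters (not code from the paper):

```python
import numpy as np

mu = 1.2
R = np.array([1.0, 4.0])
S = np.array([2.0, 2.0])
p = np.array([0.5, 0.5])

def rho_sig(b):                                  # (D12)
    w = 1.0 / b + 1.0 / R + 1.0 / S
    return (1.0 / R) / w, (1.0 / S) / w

def g(b):                                        # (D13)
    rho, sig = rho_sig(b)
    return 1.0 - mu**2 * np.sum(p * (1.0 - rho - sig)**2)

def h(b):                                        # (D14)
    rho, sig = rho_sig(b)
    return np.sum(p * (rho**2 * R + sig**2 * S))

def f(b):                                        # (D15)
    return b * g(b) - mu**2 * h(b)

def bisect(fun, lo, hi, iters=200):
    """Simple bisection; assumes fun changes sign on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fun(lo) * fun(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

b_star = bisect(g, 1e-6, 100.0)                    # g(b_*) = 0: edge of the stable region
b_2star = bisect(f, b_star * (1 + 1e-9), 1000.0)   # f(b_**) = 0 with b_** > b_*

# The 2k gradient conditions (D11) at (rho(b_**), sigma(b_**))
rho, sig = rho_sig(b_2star)
den = 1.0 - mu**2 * np.sum(p * (1.0 - rho - sig)**2)
num = np.sum(p * (rho**2 * R + sig**2 * S))
grad_rho = (2 * p * rho * R * den - num * mu**2 * 2 * p * (1 - rho - sig)) / den**2
grad_sig = (2 * p * sig * S * den - num * mu**2 * 2 * p * (1 - rho - sig)) / den**2
print(np.allclose(grad_rho, 0.0, atol=1e-8), np.allclose(grad_sig, 0.0, atol=1e-8))
```

Note that at ${b}_{\ast \ast }$ one has $b\phantom{\rule{0.166667em}{0ex}}g\left(b\right)={\mathit{\mu }}^{2}h\left(b\right)$, so the minimum value of $\mathrm{\Gamma }$ is simply ${b}_{\ast \ast }/{\mathit{\mu }}^{2}$, which the sketch reproduces.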