
Original Research Papers

Rapid update cycling with delayed observations

Author:

T. J. Payne

Met Office, Exeter, GB

Abstract

In this paper we examine the fundamental issues associated with the cycling of data assimilation and prediction in the case where observations are received after a delay, but we seek to assimilate them immediately on receipt, or within a short time of receipt. We obtain the optimal solution to this problem in the linear and non-linear cases, and explore its relation to simplified strategies which are adaptations of contemporary methods for large-scale data assimilation. We also discuss the challenges facing such cycling in large-scale numerical weather prediction.

How to Cite: Payne, T.J., 2017. Rapid update cycling with delayed observations. Tellus A: Dynamic Meteorology and Oceanography, 69(1), p.1409061. DOI: http://doi.org/10.1080/16000870.2017.1409061
Submitted on 10 Aug 2017 · Accepted on 14 Nov 2017 · Published on 01 Jan 2017
1. Introduction

In the traditional cycling of forecasts and data assimilation (DA) for numerical weather prediction, the DA step for the global model has occurred every 6 or 12 hours. This was appropriate for an era when data was concentrated at the main synoptic times, and the limited area models (LAMs) for which the global model provides boundary conditions were cycled every 6 hours. However, in recent years data has become dominated by sources which are essentially continuous in time, and centres such as the Met Office will soon cycle their highest resolution LAM every hour.

By increasing the frequency of global analyses (e.g. to every hour) global forecasts can be based on more recent data, which is not only desirable in itself but provides timely lateral boundary conditions (LBCs) for high resolution LAMs. In one study of the Met Office’s 1.5 km LAM covering the British Isles (Tang et al., 2013) it was found that replacing 3-hour and 6-hour old LBCs by 3-hour and fresh LBCs improved the UK index (a basket of scores measuring forecast skill) by 1.5% (Bruce Macpherson, pers comm). Furthermore, by having more frequent analyses the analysis increments will be smaller, which will improve the validity of the linear approximations in DA schemes. More frequent analyses may also improve the affordability of DA methods as the computational load is distributed more evenly in time.

We will see that, because of the delay in receiving some data, to ensure that all the data which are received are also assimilated, the assimilation windows will need to overlap. We obtain the optimal solution to this problem, which involves manipulating simultaneously all the states in the window and their joint errors. We explore the relation between the optimal solution and simplified methods which are closer to current methods for large-scale DA.

We show that the current use of largely climatological prior error covariances may pose a challenge for high frequency cycling, and discuss how this may be overcome.

2. ‘Traditional’ vs. ‘Rapid Update’ cycling

An immediate issue is that observations are not received instantaneously. For example, by 09Z on 18 June 2015 the Met Office had received over 80 million observations valid between 09Z on 15 June and 03Z on 18 June 2015, including around 0.8 million surface, 2.1 million aircraft and sonde, 12.4 million satwind and 14 million ATOVS observations. The delay between validity time and receipt for these observation types is recorded in Fig. 1. We see that receiving 95% of the aircraft and sonde, surface, ATOVS and satwind observations took 0.6, 1.5, 3.4 and 4.1 hours respectively.

Figure 1.  

Delay in receiving various observation types in the Met Office observation processing system, for observations valid between 9Z on 15 June 2015 and 3Z on 18 June 2015.

This presents a quandary for traditional cycling which aims to produce an analysis every 6 (at some centres every 12) hours. For definiteness consider 4D-Var (e.g. Li and Navon, 2001) with a 6-hour window [T-3,T+3], which generates an analysis at T-3 (in this discussion the units are hours).

One could perform the analysis at T+3 using all observations available by T+3, which would minimise the time delay to produce the analysis, but observations received after T+3 would not be assimilated. Alternatively, one could perform the analysis at T+7, by which time (in view of Fig. 1) almost all the observations valid in the window have been received, but the analysis is only available 4 hours after the end of the window and 10 hours after its beginning. To generate an estimate of the state at T+7 we could run a 10-hour forecast from the analysis, but compared with the estimate of the state at T-3 this will be degraded by model error.

Centres such as the Met Office mitigate these issues by performing each analysis twice: a ‘late cut-off’ analysis currently at about T+6, and an ‘early cut-off’ analysis at about T+3. This is illustrated somewhat schematically in Fig. 2, which shows two adjacent non-overlapping windows [3Z,9Z) and [9Z,15Z) (where [T1,T2) denotes the time interval $T_1 \le t < T_2$). For example, at 8Z we receive observations valid between 4Z and 8Z. Considering the window [3Z,9Z), for the ‘early cut-off’ run we perform the data assimilation at 9Z and use the observations in the dark blue region, and for the ‘late cut-off’ run we perform it at about 12Z and use virtually all observations ever valid in the window (combined light and dark blue region).

Figure 2.  

Cycling with non-overlapping windows as used at the Met Office. Observations in the dark blue (dark green) region are assimilated for an ‘early cut-off’ run at 9Z (15Z). The extra observations in the lightly shaded regions are assimilated for ‘late cut-off’ runs at 12Z and 18Z.

In principle the late cut-off analysis could make use of the early cut-off one to reduce its work load, as is done in the ‘quasi-continuous’ approach of Järvinen et al. (1996) and Veerse and Thèpaut (1998), but at the Met Office the late cut-off analyses start again from scratch, making no use of the work done for the early cut-off analysis.

Having both early and late cut-off analyses goes some way to mitigating the shortcomings of 6-hourly cycling. However, the analyses are still 6 hours apart, which makes them insufficiently timely for some purposes, notably (considering the comments in Section 1) the LBCs for hourly LAM analyses; the analysis increment is much larger than would be the case with an hourly update, so nonlinearity can be a significant problem, especially for the linear model in 4D-Var; and the approach is inefficient insofar as the early cut-off analyses are not used as part of a cycle.

In Fig. 3 we illustrate how we would like to deal with the same case: each hour we assimilate all observations received in the last hour, e.g. at 12Z we assimilate the observations received between 11Z and 12Z (green region); these are valid between 7Z and 12Z. In principle we do not re-assimilate the observations valid between 7Z and 12Z received at earlier times (blue, red, yellow and cyan regions in Fig. 3), as the information from these observations has been transferred to previous analyses and thereby to the background for this cycle.

Figure 3.  

Rapid update cycling with overlapping windows. Observations are assimilated within one hour of receipt.

In the context of global NWP, which provides among other outputs LBCs for hourly cycling of LAMs, an hourly update cycle is natural, but in principle the updates could be as short as one model time step. For the purposes of this paper we will refer to any cycling where observations are assimilated as soon as they are received, or as in Fig. 3 within some short time of receipt, as rapid update cycling (RUC).

We should note that the term ‘Rapid Update Cycle’ has been employed in the past to denote specific rapidly cycled NWP systems, for example by the National Centers for Environmental Prediction in the USA. In that case it referred to an operational regional forecast-analysis system over North America, where data was assimilated by 3D-Var (originally optimal interpolation) using non-overlapping windows of length one hour (Benjamin et al., 2004b; Benjamin et al., 2004a).

3. Optimal RUC

To examine the rapid update cycling problem further we will idealise it slightly by supposing that observations are valid at exact multiples of a time increment $\delta t$ (as opposed to continuously in time), and become available after delays of $0, \delta t, 2\delta t, \ldots, N\delta t$. We will suppose that observations received at time $k\delta t$ are

$$y_k^{(k)},\ y_{k-1}^{(k)},\ y_{k-2}^{(k)},\ \ldots,\ y_{k-N}^{(k)} \tag{1}$$

Superscripts denote when the observations are received and subscripts their validity time, the longest delay being $N\delta t$.

The first problem is to develop an optimal method for assimilating observations as soon as they are available. At time $k\delta t$ we seek to estimate $x_k, \ldots, x_{k-N}$, given observations (1) and our previous estimate of $x_{k-1}, \ldots, x_{k-N-1}$.

We will suppose that for $i = k, k-1, \ldots, k-N$ we have observation operators $h_i^{(k)}$ such that

$$y_i^{(k)} = h_i^{(k)}(x_i) + \nu_i^{(k)} \tag{2}$$

and for each $i$ a model $f_i$

$$x_{i+1} = f_i(x_i) + \omega_i \tag{3}$$

where the distributions of the errors $\nu_i^{(k)}$, $\omega_i$ are supposed known.

3.1. Notation and problem formulation

The optimal method is obtained by formulating the problem in such a way that standard estimation theory can be applied.

We will use the convention that the underlined vector $\underline{x}_k$ denotes the concatenation of the $N+1$ vectors $\{x_i,\ k-N \le i \le k\}$:

$$\underline{x}_k = \begin{pmatrix} x_{k-N} \\ \vdots \\ x_k \end{pmatrix} \tag{4}$$

and similarly the underlined matrix $\underline{A}_k$ is formed from matrices $\{A_{i,j},\ k-N \le i, j \le k\}$ of compatible size:

$$\underline{A}_k = \begin{pmatrix} A_{k-N,k-N} & \cdots & A_{k-N,k} \\ \vdots & & \vdots \\ A_{k,k-N} & \cdots & A_{k,k} \end{pmatrix}$$

For example, if $\{x_i,\ k-N \le i \le k\}$ are vectors of length $n$ and $\{A_{i,j},\ k-N \le i, j \le k\}$ are matrices of size $n \times n$, then $\underline{x}_k$, $\underline{A}_k$ are of size respectively $n(N+1) \times 1$ and $n(N+1) \times n(N+1)$.

Define $\underline{y}_k$ to be the observations received at time $k\delta t$, so

$$\underline{y}_k = \begin{pmatrix} y_{k-N}^{(k)} \\ \vdots \\ y_k^{(k)} \end{pmatrix} \tag{5}$$

We seek the conditional expectations of $\underline{x}_k$ and $\underline{x}_{k+1}$ given observations received up to time $k\delta t$:

$$E[\underline{x}_k \mid \underline{y}_0, \underline{y}_1, \ldots, \underline{y}_k], \qquad E[\underline{x}_{k+1} \mid \underline{y}_0, \underline{y}_1, \ldots, \underline{y}_k] \tag{6}$$

Note that, given $\underline{x}_k$, and assuming $\omega_k$ has zero mean, the best estimate of $\underline{x}_{k+1}$ before the observations received at $(k+1)\delta t$ are assimilated is

$$\underline{f}_k(\underline{x}_k) \equiv \begin{pmatrix} x_{k-N+1} \\ \vdots \\ x_k \\ f_k(x_k) \end{pmatrix} \tag{7}$$

If we now define

$$\underline{\omega}_k = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ \omega_k \end{pmatrix}, \qquad \underline{\nu}_k = \begin{pmatrix} \nu_{k-N}^{(k)} \\ \vdots \\ \nu_{k-1}^{(k)} \\ \nu_k^{(k)} \end{pmatrix}, \qquad \underline{h}_k(\underline{x}_k) = \begin{pmatrix} h_{k-N}^{(k)}(x_{k-N}) \\ \vdots \\ h_{k-1}^{(k)}(x_{k-1}) \\ h_k^{(k)}(x_k) \end{pmatrix} \tag{8}$$

then we may write (2) and (3) respectively as

$$\underline{y}_k = \underline{h}_k(\underline{x}_k) + \underline{\nu}_k \tag{9}$$

$$\underline{x}_{k+1} = \underline{f}_k(\underline{x}_k) + \underline{\omega}_k \tag{10}$$

which are in the standard form for observation and signal map equations in estimation theory (e.g. Jazwinski, 1970).
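To make the bookkeeping concrete, here is a minimal sketch (our illustration, not code from the paper) of the block construction in (4)–(10), assuming the states are NumPy arrays and the models and observation operators are generic Python callables:

```python
import numpy as np

# Minimal sketch of the block ("underlined") formulation of Section 3.1.
# Hypothetical interface: `states` is the list [x_{k-N}, ..., x_k] of
# NumPy arrays, `f_k` is the model at time k, and `h_list` holds the
# observation operators h_i^(k) in the same order as `states`.

def stack(states):
    """Concatenate the N+1 window states into one vector, as in (4)."""
    return np.concatenate(states)

def f_underline(states, f_k):
    """Signal map (7): drop the oldest state, copy the rest forward one
    slot, and propagate the newest state with the model f_k."""
    return states[1:] + [f_k(states[-1])]

def h_underline(states, h_list):
    """Block observation operator (8): apply each h_i^(k) to the state
    at its own validity time and stack the results, as in (5)."""
    return np.concatenate([h(x) for h, x in zip(h_list, states)])
```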

3.2. Linear Gaussian case

It is illuminating to first work out the details in the simplest case, where in (2) and (3) the observation operators $h_i^{(k)}$ and model $f_i$ are linear and the errors $\nu_i^{(k)}, \omega_i$ are zero-mean, Gaussian and uncorrelated.

In this case

$$y_i^{(k)} = H_i^{(k)} x_i + \nu_i^{(k)}, \qquad x_{i+1} = M_i x_i + \omega_i \tag{11}$$

where

$$\nu_i^{(k)} \sim N\left(0, R_i^{(k)}\right), \qquad \omega_i \sim N\left(0, Q_i\right)$$

for some matrices $R_i^{(k)}, Q_i$ (where $N(\mu, \Sigma)$ denotes normally distributed with mean $\mu$ and variance $\Sigma$), and setting

$$\underline{H}_k = \mathrm{diag}\left(H_{k-N}^{(k)}, H_{k-N+1}^{(k)}, \ldots, H_k^{(k)}\right), \qquad \underline{M}_k = \begin{pmatrix} 0 & I & & \\ & \ddots & \ddots & \\ & & 0 & I \\ & & & M_k \end{pmatrix} \tag{12}$$

and

$$\underline{R}_k = \mathrm{diag}\left(R_{k-N}^{(k)}, R_{k-N+1}^{(k)}, \ldots, R_k^{(k)}\right), \qquad \underline{Q}_k = \mathrm{diag}\left(0, \ldots, 0, Q_k\right) \tag{13}$$

(9) and (10) become

$$\underline{y}_k = \underline{H}_k\underline{x}_k + \underline{\nu}_k, \qquad \underline{x}_{k+1} = \underline{M}_k\underline{x}_k + \underline{\omega}_k \tag{14}$$

with

$$\underline{\nu}_k \sim N(\underline{0}, \underline{R}_k), \qquad \underline{\omega}_k \sim N(\underline{0}, \underline{Q}_k)$$

and the problem of finding the conditional expectations (6) of $\underline{x}_k$ and $\underline{x}_{k+1}$ given observations received up to time $k$, which may be denoted $\hat{\underline{x}}_{k|k}$, $\hat{\underline{x}}_{k+1|k}$, is solved by a standard Kalman Filter, as in Table 1.

The basic objects manipulated are whole windows of states and observations and the covariances of the errors in these objects. The symbols in Table 1 are whole-window analogues of their usual values, e.g. $\underline{P}_{k|k-1}$ and $\underline{P}_{k|k}$ are the $n(N+1) \times n(N+1)$ prior and posterior error covariance matrices of the estimated $\underline{x}_k$. In special circumstances simplification is possible. For example, if new observations only occur in the final time slot, i.e. if the only observations which become available at $k$ are $y_k^{(k)}$ and there are no observations $y_{k-1}^{(k)}, \ldots, y_{k-N}^{(k)}$, it is straightforward to show that (15)–(19) simplifies to a conventional Kalman smoother.

For linear $M_i, H_i^{(k)}$ the algorithm (15)–(19) finds $E[\underline{x}_k \mid \underline{y}_0, \underline{y}_1, \ldots, \underline{y}_k]$ and $E[\underline{x}_{k+1} \mid \underline{y}_0, \underline{y}_1, \ldots, \underline{y}_k]$ if the errors $\nu_i^{(k)}, \omega_i$ are zero-mean, Gaussian and uncorrelated. As noted in Anderson and Moore (1979), if we restrict attention to analysis-prediction equations of the form (16) and (18) then we may drop the Gaussian assumption on $\nu_i^{(k)}, \omega_i$, merely requiring them to be zero-mean and uncorrelated, and (15)–(19) still minimises the expected error variance.
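Table 1 is not reproduced here, but since (14) is a standard linear Gaussian state-space model, one whole-window cycle is a textbook Kalman filter written on the block quantities. The following sketch (variable names are ours, not the paper's) shows the analysis and prediction steps corresponding to (15)–(19):

```python
import numpy as np

# One cycle of the optimal RUC filter for the block system (14): a
# standard Kalman filter analysis (cf. (16), (17)) followed by the
# window-shifting prediction (cf. (18), (19)). All arguments are the
# block (underlined) quantities of dimension n(N+1).

def ruc_kf_cycle(x_prior, P_prior, y, H, R, M, Q):
    # Analysis: assimilate the window of observations y just received.
    S = H @ P_prior @ H.T + R              # innovation covariance
    K = np.linalg.solve(S, H @ P_prior).T  # gain K = P H^T S^{-1}
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    # Prediction: M shifts the window one step and applies the model to
    # the newest state; Q adds model error in the final block only.
    return M @ x_post, M @ P_post @ M.T + Q
```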

3.3. Variational equivalent of analysis step in linear Gaussian case

For large-scale data assimilation variational methods are almost universally used, so it is of interest to cast the optimal RUC analysis step (16) in variational form. Define

$$\underline{\delta} \equiv \begin{pmatrix} \delta_{k-N} \\ \delta_{k-N+1} \\ \vdots \\ \delta_k \end{pmatrix}$$

$$J_b(\underline{\delta}) = \frac{1}{2}\begin{pmatrix} \delta_{k-N} \\ \vdots \\ \delta_{k-1} \end{pmatrix}^{\!T}\hat{A}^{-1}\begin{pmatrix} \delta_{k-N} \\ \vdots \\ \delta_{k-1} \end{pmatrix} \tag{20}$$

$$J_o(\underline{\delta}) = \frac{1}{2}\left(\underline{y}_k - \underline{H}_k(\hat{\underline{x}}_{k|k-1} + \underline{\delta})\right)^T\underline{R}_k^{-1}\left(\underline{y}_k - \underline{H}_k(\hat{\underline{x}}_{k|k-1} + \underline{\delta})\right) \tag{21}$$

$$J_q(\underline{\delta}) = \frac{1}{2}\left(\delta_k - M_{k-1}\delta_{k-1}\right)^T Q_{k-1}^{-1}\left(\delta_k - M_{k-1}\delta_{k-1}\right) \tag{22}$$

where $\hat{A}$ is the bottom right $Nn \times Nn$ submatrix of $\underline{P}_{k-1|k-1}$. Then the analysis step (16) is equivalent to

$$\hat{\underline{x}}_{k|k} = \hat{\underline{x}}_{k|k-1} + \underline{\delta} \tag{23}$$

where $\underline{\delta}$ minimises

$$J(\underline{\delta}) = J_b(\underline{\delta}) + J_o(\underline{\delta}) + J_q(\underline{\delta}) \tag{24}$$

This is proved in Appendix 1. The $J_b$ term (20) constrains the $N$ states in the intersection between the old and new windows by the inverse of the ‘background error covariance matrix’ $\hat{A}$. This ‘big B’ of size $Nn \times Nn$ is formed by taking the analysis error covariance from the previous stage and shearing off the oldest row and column.
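As a concrete illustration, (24) can be evaluated as follows (a sketch under the same block conventions; the slicing assumes each $\delta_j$ has length $n$, and all names are illustrative):

```python
import numpy as np

# Sketch of the RUC cost function J = Jb + Jo + Jq of (20)-(24).
# delta stacks the N+1 increments; A_hat is the Nn x Nn 'big B'.

def ruc_cost(delta, A_hat, y, H, R, x_prior, M_prev, Q_prev, n):
    d_overlap = delta[:-n]                 # increments at overlap times
    d_new, d_last = delta[-n:], delta[-2*n:-n]
    Jb = 0.5 * d_overlap @ np.linalg.solve(A_hat, d_overlap)   # (20)
    r = y - H @ (x_prior + delta)
    Jo = 0.5 * r @ np.linalg.solve(R, r)                       # (21)
    q = d_new - M_prev @ d_last
    Jq = 0.5 * q @ np.linalg.solve(Q_prev, q)                  # (22)
    return Jb + Jo + Jq
```

Minimising this with, e.g., a gradient method and adding the minimiser to $\hat{\underline{x}}_{k|k-1}$ reproduces the analysis step (23).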

3.4. General (nonlinear, non-Gaussian) case

We saw in Section 3.1 how the problem of assimilating data immediately it becomes available can be cast into a standard signal model/observation model form (9) and (10), to which we can then apply well-established theory, e.g. Jazwinski (1970). In particular, given (9) and (10) we can compute (sequentially in $k$) the conditional pdfs $p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_k)$ and $p(\underline{x}_{k+1} \mid \underline{y}_0, \ldots, \underline{y}_k)$. The novel feature for us is that $\underline{f}_k$ and $\underline{h}_k$ in (9) and (10) have a special structure which leads to significant simplifications, in particular enabling us to express these pdfs in terms of the original (as opposed to block) variables.

In general, given the prior pdf $p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_{k-1})$ and the conditional pdf of the observations given the state, $p(\underline{y}_k \mid \underline{x}_k)$, Bayes' theorem tells us that the posterior pdf $p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_k)$ is

$$p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_k) = \frac{p(\underline{y}_k \mid \underline{x}_k)\, p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_{k-1})}{\mathcal{N}} \tag{25}$$

where the normalisation $\mathcal{N}$ is

$$\mathcal{N} = \int_{\underline{x}_k \in U} p(\underline{y}_k \mid \underline{x}_k)\, p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_{k-1})\, d\underline{x}_k \tag{26}$$

and the domain of integration $U = \mathbb{R}^{(N+1)n}$. We will suppose the basic process satisfies the Markov property $p(x_k \mid x_{k-1}, x_{k-2}, \ldots) = p(x_k \mid x_{k-1})$, which implies the same for underlined states: $p(\underline{x}_k \mid \underline{x}_{k-1}, \underline{x}_{k-2}, \ldots) = p(\underline{x}_k \mid \underline{x}_{k-1})$. Given $p(\underline{x}_{k-1} \mid \underline{y}_0, \ldots, \underline{y}_{k-1})$ (the posterior pdf at $k-1$) and $p(\underline{x}_k \mid \underline{x}_{k-1})$, the Chapman–Kolmogorov equation then gives us for the prior pdf $p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_{k-1})$

$$p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_{k-1}) = \int_{\underline{\tilde{x}}_{k-1} \in U} p(\underline{x}_k \mid \underline{\tilde{x}}_{k-1})\, p(\underline{\tilde{x}}_{k-1} \mid \underline{y}_0, \ldots, \underline{y}_{k-1})\, d\underline{\tilde{x}}_{k-1} \tag{27}$$

We could cycle (27) and (25) to obtain the posterior pdf for every $k$. However, as mentioned, there are simplifications arising in this case.

By virtue of the fact that in (8) the $i$th sub-vector of $\underline{h}_k$ depends only on $x_{k-N+i-1}$, the conditional pdf of the observations given the state, $p(\underline{y}_k \mid \underline{x}_k)$, factors into

$$p(\underline{y}_k \mid \underline{x}_k) = p(y_k^{(k)} \mid x_k) \times \cdots \times p(y_{k-N}^{(k)} \mid x_{k-N}) \tag{28}$$

Additionally, the transition pdf $p(\underline{x}_k \mid \underline{\tilde{x}}_{k-1})$ may be written

$$p(\underline{x}_k \mid \underline{\tilde{x}}_{k-1}) = p(x_k \mid \tilde{x}_{k-1})\, \delta(x_{k-1} - \tilde{x}_{k-1}) \cdots \delta(x_{k-N} - \tilde{x}_{k-N}) \tag{29}$$

Combined with (27) this gives

$$\begin{aligned} p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_{k-1}) &= \int_{\underline{\tilde{x}}_{k-1} \in U} p(\underline{x}_k \mid \underline{\tilde{x}}_{k-1})\, p\!\left(\begin{pmatrix} \tilde{x}_{k-N-1} \\ \tilde{x}_{k-N} \\ \vdots \\ \tilde{x}_{k-1} \end{pmatrix} \,\middle|\, \underline{y}_0, \ldots, \underline{y}_{k-1}\right) d\underline{\tilde{x}}_{k-1} \\ &= \int_{\tilde{x}_{k-N-1} \in \mathbb{R}^n} p(x_k \mid x_{k-1})\, p\!\left(\begin{pmatrix} \tilde{x}_{k-N-1} \\ x_{k-N} \\ \vdots \\ x_{k-1} \end{pmatrix} \,\middle|\, \underline{y}_0, \ldots, \underline{y}_{k-1}\right) d\tilde{x}_{k-N-1} \\ &= p(x_k \mid x_{k-1}) \int_{\tilde{x}_{k-N-1} \in \mathbb{R}^n} p\!\left(\begin{pmatrix} \tilde{x}_{k-N-1} \\ x_{k-N} \\ \vdots \\ x_{k-1} \end{pmatrix} \,\middle|\, \underline{y}_0, \ldots, \underline{y}_{k-1}\right) d\tilde{x}_{k-N-1} \end{aligned} \tag{30}$$

We may cycle (30) with (25) and (28) to obtain the posterior pdf for every $k$. For example, if the observation operators $h_i^{(k)}$ and model $f_k$ are linear, and the errors $\nu_i^{(k)}$, $\omega_i$ are Gaussian, then one may check that (30), (25) and (28) imply that the posterior pdf satisfies

$$p(\underline{x}_k \mid \underline{y}_0, \ldots, \underline{y}_k) \propto e^{-(J_b + J_q + J_o)}$$

where $J_b, J_q, J_o$ are given by (20)–(22).

4. Example

We illustrate some of the foregoing with a small example, in which the model is the 40-dimensional chaotic model proposed by Lorenz (1996):

$$\frac{dx_i}{dt} = -x_{[i-2]}x_{[i-1]} + x_{[i-1]}x_{[i+1]} - x_i + F, \qquad i = 0, \ldots, n-1 \tag{31}$$

with $F = 8$, $n = 40$, where $[i]$ denotes $i \bmod n$. This system is integrated using fourth-order Runge–Kutta with a time step of $0.05/6$, during which time errors grow at a rate corresponding to order one hour in an atmospheric system. We therefore refer to time step $k$ as time $k$. The truth is obtained by integrating (31) and adding, to each component of $x$, Gaussian model error with variance $\sigma_q^2$ every time step.
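A sketch of this setup (standard Lorenz 96 with the stated parameters; the treatment of the stochastic truth is our reading of the text):

```python
import numpy as np

# Lorenz (1996) model (31) with F = 8, n = 40, integrated by
# fourth-order Runge-Kutta with time step dt = 0.05/6.

F, n, dt = 8.0, 40, 0.05 / 6

def lorenz96(x):
    """Tendency dx_i/dt = -x_{i-2} x_{i-1} + x_{i-1} x_{i+1} - x_i + F,
    with all indices taken modulo n via np.roll."""
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F

def rk4_step(x):
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def truth_step(x, sigma_q, rng):
    """One step of the 'truth': deterministic RK4 plus additive Gaussian
    model error with standard deviation sigma_q in each component."""
    return rk4_step(x) + sigma_q * rng.standard_normal(n)
```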

We suppose that at time $k$ eight observations have just become available, at:

  • points $x_{[k+1]}$, $x_{[k+11]}$, $x_{[k+21]}$ and $x_{[k+31]}$, valid at time $k-1$
  • point $x_{[k+6]}$, valid at time $k-2$
  • point $x_{[k+16]}$, valid at time $k-3$
  • point $x_{[k+26]}$, valid at time $k-4$
  • point $x_{[k+36]}$, valid at time $k-5$

In this example we suppose there are no ‘instantaneously available’ observations (i.e. available at time $k$ and also valid at $k$). Each observation has Gaussian error with variance $\sigma_o^2$, where $\sigma_o = 0.546$. Every grid point is observed every 5 time steps and the observation network repeats itself exactly every 10 time steps. The system is well observed, and for the values of $\sigma_o, \sigma_q$ used here the departure from linearity is small enough that the foregoing linear theory applies well to the linearised model.
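For reference, the network just described can be encoded as follows (a hypothetical helper that simply tabulates the list above):

```python
# Observations arriving at time k in the Section 4 example: a list of
# (grid_index, validity_time) pairs, grid indices modulo n = 40.

def obs_network(k, n=40):
    obs = [((k + j) % n, k - 1) for j in (1, 11, 21, 31)]  # delay 1
    obs += [((k + 6) % n, k - 2),                          # delay 2
            ((k + 16) % n, k - 3),                         # delay 3
            ((k + 26) % n, k - 4),                         # delay 4
            ((k + 36) % n, k - 5)]                         # delay 5
    return obs
```

Over five consecutive times the eight observed points, spaced five apart, sweep out every grid point, consistent with each point being observed every 5 steps.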

4.1. Non-overlapping windows with different lags

Consider first traditional non-overlapping window strategies. Suppose we have a 6-hour cycle with 6-hour windows. For the window $[k-6, k)$ we wish to assimilate observations valid in $k-6 \le t < k$. For non-overlapping windows we use an optimal¹ smoother, i.e. 4D-Var (Li and Navon, 2001) with model error correctly accounted for and correct cycling of background error covariances. For the window $[k-6, k)$ this produces analyses $\{x_{k-6}^a, \ldots, x_{k-1}^a\}$.

As discussed in Section 2, the number of observations which are valid in the window and available for the analysis increases with the length of the interval between the end of the window and the time the analysis is performed, which we term the lag. For the present example, the number of observations available at each time in the window if the analysis is performed at $k$, $k+2$ or $k+4$ is shown in Table 2.

Taking for example the lag = 2 case, at times $k$ and $k+1$ the most recently available analyses are those in the window $[k-12, k-6)$, with the last analysis at $k-7$, while at $k+2, \ldots, k+5$ the most recently available analysis is at $k-1$. Table 3 shows the validity time of the most recent analysis available at times $k, \ldots, k+5$ for lags of 0, 2 and 4 hours.

Fig. 4 shows the RMS forecast error (using $\sigma_q = 0.182$ and averaged over 2000 cycles) in forecasts valid at times $t = k, k+1, \ldots, k+5$ taken from the latest available analysis using lag = 0 (black), lag = 2 (blue) and lag = 4 (green). Fig. 4 illustrates a point about non-overlapping windows made in Section 2 above: we must choose between a short lag between the observations and the analysis, giving timely analyses but not using all the observations, and a longer lag using more observations, which however at any given time requires longer forecasts that are more degraded by model error.

Figure 4.  

RMS forecast error from the most recent available analysis at times 0–5 hours into the next window following an assimilation window $[k-6, k)$, for lags 0 (black), 2 (blue) and 4 (green). Dashed lines denote RMS error at times before the analysis is performed. Also shown in red is the RMS error in the optimal RUC analysis using the data available at each time. In this example $\sigma_q = 0.182$.

For the lag=2 and lag=4 cases we also show (dashed lines) the RMS forecast error for times between the end of the analysis window and the time the analysis is performed.

4.2. Optimal RUC

Since in our example the longest delay in receipt of observations is 5 h, for the optimal RUC method of Section 3 we have $N = 5$. At time $j$ this produces analyses $\{x_{j-5}^a, \ldots, x_j^a\}$.

A comparison of the observation usage of non-overlapping windows and optimal RUC was illustrated in Figs. 2 and 3. In optimal RUC all observations are used, as soon as they are received.

The red curve in Fig. 4 shows the RMS error in the RUC analysis $x_j^a$ for $j = k, \ldots, k+5$. From the foregoing we know this will always be less than the RMS error at $j$ using any available analysis from a non-overlapping window with any lag. Note however that this error can be greater than that from lagged analyses run at a later time, e.g. in this example the lag-2 and lag-4 analyses at time $k$. This is because the lagged analyses use observations not available at time $k$.

5. Suboptimal methods for RUC

In Section 3 above we derived the optimal solution to the problem of assimilating data as soon as it becomes available, and saw that, if the maximum delay is $N\delta t$ and the state is described by $n$ variables, this involves manipulating vectors of size $n(N+1)$ and their error covariances of size $n(N+1) \times n(N+1)$.

If observation and model errors are uncorrelated in time then optimal data assimilation methods for non-overlapping windows only involve vectors and matrices of size n and n×n.

For large-scale systems, manipulating vectors of size $n(N+1)$, and more particularly matrices of size $n(N+1) \times n(N+1)$, may not be manageable. Furthermore, NWP centres already have methods implemented for non-overlapping windows (we will refer to these as ‘traditional methods’) and will naturally seek ways of adapting these to the RUC problem. Hence a topic of practical importance is the relationship between the optimal solution to RUC and traditional methods applied to RUC.

For simplicity we restrict attention to the case of linear forecast and observation operators, where, as in Section 3.2, the errors are Gaussian and uncorrelated. We will designate the optimal solution for RUC in this case (i.e. Table 1) as Method 0. We will develop suboptimal methods for RUC based on traditional methods for non-overlapping windows, with Method 3 a ‘naive’ application of such a method to RUC, and Methods 2 and 1 adaptations of this which are progressively closer to the optimal solution. We will then examine the relation between the four methods.

5.1. ‘Traditional’ methods as suboptimal methods for RUC

Suppose that at time $k$ we have prior estimates

$$\{x_j^b,\ j = k-N, \ldots, k\} \tag{32}$$

and we have just received observations

$$\{y_j^{(k)},\ j = k-N, \ldots, k\}$$

A natural extension of the 4D-Var method (e.g. Li and Navon (2001)) as used for non-overlapping windows is to form analyses at $j = k-N, \ldots, k$

$$x_j^a = x_j^b + \delta_j, \quad j = k-N, \ldots, k \tag{33}$$

where

$$\underline{\delta} = \begin{pmatrix} \delta_{k-N} \\ \vdots \\ \delta_k \end{pmatrix}$$

minimises

$$J(\underline{\delta}) = \frac{1}{2}\delta_{k-N}^T B^{-1}\delta_{k-N} + \frac{1}{2}\sum_{j=k-N}^{k-1}\left(\delta_{j+1} - M_j\delta_j\right)^T Q_j^{-1}\left(\delta_{j+1} - M_j\delta_j\right) + \frac{1}{2}\sum_{j=k-N}^{k}\left(y_j^{(k)} - H_j^{(k)}(x_j^b + \delta_j)\right)^T R_j^{-1}\left(y_j^{(k)} - H_j^{(k)}(x_j^b + \delta_j)\right) \tag{34}$$

$B$ in (34) is the error covariance of $x_{k-N}^b$, if this is known, or some approximation otherwise; we return to this point below. All our suboptimal Methods 1–3 use (33) and (34) for the analysis. There are many different ways of forming new priors $\{x_j^b,\ j = k-N+1, \ldots, k+1\}$. A non-exhaustive selection of possibilities is:

Method 3: The most similar to traditional 4D-Var, in which the only state saved from the above analyses is the first one, $x_{k-N}^a$ (i.e. at the beginning of the window). Denoting the model evolution from $k-N$ to $j$ by $M_{k-N}^j = M_{j-1} \cdots M_{k-N+1} M_{k-N}$, this gives us priors

$$x_j^b = M_{k-N}^j x_{k-N}^a, \quad j = k-N+1, \ldots, k+1 \tag{35}$$

Method 2: Slightly better is to save and use the second analysed state, i.e. at time $k-N+1$, which will be at the beginning of the next window, giving us priors

$$x_j^b = M_{k-N+1}^j x_{k-N+1}^a, \quad j = k-N+1, \ldots, k+1 \tag{36}$$

Method 1: Finally, we could follow the optimal solution and save and use all the analysed states, giving us priors

$$x_j^b = \begin{cases} x_j^a & j = k-N+1, \ldots, k \\ M_k^{k+1} x_k^a & j = k+1 \end{cases} \tag{37}$$

The number of prior states which are simply analysis states from the previous cycle is therefore 0, 1 and N for Methods 3, 2 and 1, respectively. The four RUC methods (with the optimal one of Section 3.2 labelled ‘Method 0’) are summarised in Table 4. The formation of backgrounds is shown in more detail for N=2 in Table 5.
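The three background-formation rules (35)–(37) can be summarised in code (a sketch assuming, for brevity, a time-invariant model wrapped in a `step` callable; in the paper the model $M_j$ may vary with time):

```python
# x_a is the list of analysed states [x_{k-N}^a, ..., x_k^a]; each
# function returns the new priors [x_{k-N+1}^b, ..., x_{k+1}^b].

def priors_method3(x_a, step):
    """(35): keep only x_{k-N}^a and rerun the model through the window."""
    out, x = [], x_a[0]
    for _ in range(len(x_a)):        # j = k-N+1, ..., k+1
        x = step(x)
        out.append(x)
    return out

def priors_method2(x_a, step):
    """(36): keep x_{k-N+1}^a and rerun the model from there."""
    out, x = [x_a[1]], x_a[1]
    for _ in range(len(x_a) - 1):    # j = k-N+2, ..., k+1
        x = step(x)
        out.append(x)
    return out

def priors_method1(x_a, step):
    """(37): keep all analysed states; only x_{k+1}^b needs a forecast."""
    return x_a[1:] + [step(x_a[-1])]
```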

5.2. Covariance of analysis and background errors in methods 1–3

To compare the various methods we will need the covariance of their errors. We may write (33) and (34) as

$$\underline{x}_k^a = \underline{x}_k^b + \underline{K}_k\left(\underline{y}_k - \underline{H}_k\underline{x}_k^b\right) \tag{38}$$

where

$$\underline{H}_k = \mathrm{diag}\left(H_{k-N}^{(k)}, \ldots, H_k^{(k)}\right), \qquad (\underline{U}_k)_{\underline{i},\underline{j}} = \begin{cases} M_{k-N+j-1}^{k-N+i-1} & \text{for } 1 \le j \le i \le N+1 \\ 0 & \text{for } j > i \end{cases} \tag{39}$$

$$\underline{B}_k^v = \underline{U}_k\,\mathrm{diag}\left(B, Q_{k-N}, \ldots, Q_{k-1}\right)\underline{U}_k^T \tag{40}$$

$$\underline{K}_k = \underline{B}_k^v\underline{H}_k^T\left(\underline{H}_k\underline{B}_k^v\underline{H}_k^T + \underline{R}\right)^{-1} \tag{41}$$

where we use the notation $(\underline{U}_k)_{\underline{i},\underline{j}}$ to denote the $(i,j)$th $n \times n$ submatrix of $\underline{U}_k$. Denoting the truth by $\underline{x}_k^t$ and

$$\underline{\epsilon}_k^b = \underline{x}_k^b - \underline{x}_k^t, \qquad \underline{B}_k = E\left[\underline{\epsilon}_k^b(\underline{\epsilon}_k^b)^T\right], \qquad \underline{\epsilon}_k^a = \underline{x}_k^a - \underline{x}_k^t$$

it follows that for all Methods 1–3 the analysis error covariance $\underline{A}_k$ is, from (38),

$$\underline{A}_k \equiv E\left[\underline{\epsilon}_k^a(\underline{\epsilon}_k^a)^T\right] = (I - \underline{K}_k\underline{H}_k)\underline{B}_k(I - \underline{K}_k\underline{H}_k)^T + \underline{K}_k\underline{R}\,\underline{K}_k^T \tag{42}$$

We note this depends both on $\underline{B}_k$ and (via $\underline{K}_k$ and $\underline{B}_k^v$) on the prescribed $B$.

The background error covariance $\underline{B}_{k+1}$ depends on which method is used. For method $\ell$ we may write

$$\underline{x}_{k+1}^b = \underline{M}_k^{(\ell)}\underline{x}_k^a \tag{43}$$

where $\underline{M}_k^{(1)} = \underline{M}_k^{(0)} = \underline{M}_k$ as specified in (12), and $\underline{M}_k^{(2)}, \underline{M}_k^{(3)}$ are illustrated for $N = 2$ in Table 6. Since the error in $\underline{x}_{k+1}^b$ using method $\ell$ is

$$\underline{\epsilon}_{k+1}^b \equiv \underline{x}_{k+1}^b - \underline{x}_{k+1}^t = \underline{M}_k^{(\ell)}\left(\underline{x}_k^a - \underline{x}_k^t\right) + \underline{M}_k^{(\ell)}\underline{x}_k^t - \underline{x}_{k+1}^t = \underline{M}_k^{(\ell)}\underline{\epsilon}_k^a + \underline{\epsilon}_k^q \tag{44}$$

where $\underline{\epsilon}_k^q = \underline{M}_k^{(\ell)}\underline{x}_k^t - \underline{x}_{k+1}^t$, we can express $\underline{B}_{k+1}$ in terms of $\underline{M}_k^{(\ell)}$, $\underline{A}_k$, the model error covariance and the cross-covariance of the analysis error $\underline{\epsilon}_k^a$ and model error $\underline{\epsilon}_k^q$:

$$\underline{B}_{k+1} = E\left[\underline{\epsilon}_{k+1}^b(\underline{\epsilon}_{k+1}^b)^T\right] = \underline{M}_k^{(\ell)}E\left[\underline{\epsilon}_k^a(\underline{\epsilon}_k^a)^T\right](\underline{M}_k^{(\ell)})^T + E\left[\underline{\epsilon}_k^q(\underline{\epsilon}_k^q)^T\right] + \underline{M}_k^{(\ell)}E\left[\underline{\epsilon}_k^a(\underline{\epsilon}_k^q)^T\right] + E\left[\underline{\epsilon}_k^q(\underline{\epsilon}_k^a)^T\right](\underline{M}_k^{(\ell)})^T \tag{45}$$

In order to cycle Methods 1–3 we need to specify $B$ in (34). For the rest of this section, for the purposes of comparing the four methods, we will suppose that in Methods 1–3 we use $B = (\underline{B}_k)_{\underline{1},\underline{1}}$, which can be obtained from (45) (as above, underlined subscripts refer to submatrices, so $(\underline{B}_k)_{\underline{1},\underline{1}}$ is the top left $n \times n$ submatrix of $\underline{B}_k$). It can be shown for Methods 1 and 2 that $(\underline{B}_k)_{\underline{1},\underline{1}} = (\underline{A}_{k-1})_{\underline{2},\underline{2}}$.
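Note that (42) is the ‘Joseph form’, valid for an arbitrary gain, which is what allows the true $\underline{B}_k$ and the prescribed $B$ (which enters $\underline{K}_k$ through (40) and (41)) to differ. A one-function sketch:

```python
import numpy as np

# Sketch of (42): analysis error covariance for an arbitrary gain K.
# B_true is the true background error covariance; any (possibly wrong)
# covariance used to construct K does not appear explicitly here.

def analysis_error_cov(B_true, K, H, R):
    ImKH = np.eye(B_true.shape[0]) - K @ H
    return ImKH @ B_true @ ImKH.T + K @ R @ K.T
```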

5.3. Relation between methods 0 and 1

An important comparison is between the optimal Method 0 and suboptimal Method 1. They share the same background step (43) with the same $\underline{M}_k$. In both cases the analysis step may be written in the form (38); however, whereas in Method 0 the gain

$$\underline{K}_k = \underline{B}_k\underline{H}_k^T\left(\underline{H}_k\underline{B}_k\underline{H}_k^T + \underline{R}_k\right)^{-1}$$

uses the true $\underline{B}_k$, i.e. the covariance of the error in $\underline{x}_k^b$, for Method 1 the gain is

$$\underline{K}_k = \underline{B}_k^v\underline{H}_k^T\left(\underline{H}_k\underline{B}_k^v\underline{H}_k^T + \underline{R}_k\right)^{-1}$$

where $\underline{B}_k^v$ is defined in (40). Because of the similarity in the structures of Methods 0 and 1 there is a simple and strong relation between their errors. If the sequences of background error covariances using Methods 0 and 1 are designated respectively $\underline{B}_k$ and $\underline{\tilde{B}}_k$, and we start from the same prior error covariance $\underline{\tilde{B}}_N = \underline{B}_N$, then for all $k \ge N$ the difference $\underline{\tilde{B}}_k - \underline{B}_k$ is positive semi-definite, usually written

$$\underline{B}_k \preceq \underline{\tilde{B}}_k$$

This is proved in Appendix 2.

5.4. Relation between methods in the limit $Q_k \to \infty$

We have four RUC methods, the optimal one and three suboptimal ones. We can cycle each as described above, for the suboptimal methods using $B = (\underline{B}_k)_{\underline{1},\underline{1}}$ (Section 5.2).

A limiting case which exhibits some of the differences between them, in particular how information is saved from previous cycles, is obtained by letting the model error covariance $Q_k \to \infty$ for all $k$.

For simplicity suppose $\underline{H} = I$ and that $R_k$ and $Q_k$ are independent of $k$. If $Q = \infty$ then after $N$ cycles for Method 0, one cycle for Methods 1 and 2, and immediately for Method 3, all knowledge of the initial background state $x_0^b$ and its error covariance is lost. In Table 7 we show the analysed state $\underline{x}_{j+N}^a$ produced by the four methods for any $j \ge N$ if the model error is infinite, and the corresponding background error and analysis error covariances.

The optimal Method 0 retains all the observation information ever received; at time j+N the estimate of state at any time between j and j+N is simply the average of all the observations ever received valid at that time. At the other extreme, Method 3 ‘forgets’ all the observation information from previous cycles: at time j+N the estimate of state at any time between j and j+N is just the value of the observation valid at that time and received at time j+N. Methods 1 and 2 retain observation information from the previous cycle only at the initial time. These different behaviours are reflected in the analysis error covariances shown in Table 7.

Comparing Methods 1 and 2: while in the limit $Q \to \infty$ Method 1 analyses are no better than those of Method 2, we note in Table 7 that Method 1 has better backgrounds than Method 2.

5.5. Relation between methods in the limit $Q_k \to 0$

Suppose that $Q_k = 0$ for all $k$, and we are given an estimate $x_0^b$ of $x_0$ with error covariance $B_0$. Estimate the remaining states in the window $\{0, \ldots, N\}$ by

$$\{x_j^b = M_0^j x_0^b,\ j = 1, \ldots, N\}$$

If $Q_k = 0$ for all $k$ then Methods 1–3 simplify to their ‘strong constraint’ forms, in which (34) simplifies to

$$J(\delta_{k-N}) = \frac{1}{2}\delta_{k-N}^T B_{k-N}^{-1}\delta_{k-N} + \frac{1}{2}\sum_{j=k-N}^{k}\left(y_j^{(k)} - H_j^{(k)}M_{k-N}^j(x_{k-N}^b + \delta_{k-N})\right)^T R_j^{-1}\left(y_j^{(k)} - H_j^{(k)}M_{k-N}^j(x_{k-N}^b + \delta_{k-N})\right) \tag{46}$$

Crucially, in the limit $Q \to 0$ Methods 0–3 coincide, so in particular Methods 1–3 are now optimal. This is proved in Appendix 3. In the absence of model error the ‘suboptimal’ methods therefore all coincide with each other, and are in fact optimal.

5.6. Comparison of methods 0–3 for the example of Section 4

We may apply linearised versions of Methods 0–3 to our nonlinear chaotic example of Section 4. In an attempt to mitigate the effects of linearisation error one can formulate outer loop-style iterations for these strategies, which may be worth implementing in more non-linear systems (for the examples here they made negligible difference). Alternatively one could use the ‘best linear approximation’ (Payne, 2013).

In our example of Section 4, at time $k$ observations have just become available which are valid at $k-1, \ldots, k-5$, so $N = 5$. The optimal Method 0 and suboptimal Methods 1–3 all provide analyses $x_k^a, x_{k-1}^a, \ldots, x_{k-5}^a$. (Since in our example no observations are instantaneously available, $x_k^a$ is here a forecast from $x_{k-1}^a$.)

Figure 5.  

RMS error in the analysis at $k-5, \ldots, k$ for the various RUC strategies using all observations available at time $k$, with $\sigma_q = 1.82$ (upper set) and $\sigma_q = 0.455$ (lower set).

Each strategy is cycled 10,000 times and the first hundred cycles disregarded. Figure 5 shows the RMS error in the analyses at $k-5, \ldots, k$ for the optimal Method 0 (black) and suboptimal Methods 1 (blue), 2 (green) and 3 (red), for $\sigma_q = 1.82$ and $0.455$ (upper and lower sets respectively).

As expected from the foregoing, the errors $E$ are ordered

$$E(\text{Method 0}) < E(\text{Method 1}) < E(\text{Method 2}) < E(\text{Method 3})$$

Furthermore, the analyses, and therefore their errors, converge as $\sigma_q \to 0$.

6. Impact of climatological background error covariances

A significant difference between the methods of the preceding sections and those used for large scale systems is that in the latter the prior error covariance (usually denoted B) is not cycled, but is either constant or is a convex combination of a constant and an estimate of cycled B (see (47) below).

It is important to note that, insofar as $B$ is fixed, it is advantageous to assimilate data simultaneously in larger batches rather than to split it up and assimilate it in smaller units. Intuitively, by assimilating many observations simultaneously the deficiencies of the fixed $B$ are reduced.

Illustrating this point is complicated by the fact that increasing observation batch size tends to involve making other changes which themselves have an impact. If we compare cycling using non-overlapping windows with RUC then at no instant in time are the two methodologies assimilating the same observations (see Section 2). If we use 4D-Var to compare cycling with windows of length 1 (assimilating one observation every time step) with windows of length 2 (two observations every two time steps) then the latter has the advantage of covariances evolved through the window, which is a different point to the one being made.

If we compare assimilating two observations simultaneously every time step with assimilating one after the other, in the latter case we have to decide what B to use for the second observation. In Appendix 4 we show that if in a scalar system we assimilate two observations every cycle, and have a choice between assimilating them

  • (a)   simultaneously using a fixed background error covariance B, or
  • (b)   separately, the first with fixed background error covariance B1 and the second with fixed background error covariance B2,
then it is possible to choose B so that no matter how well B1 and B2 are chosen strategy (a) will always outperform strategy (b).

We may contrast this result concerning the use of fixed background covariances with the fact that, if B is chosen optimally every cycle, and B1 and B2 are chosen optimally every cycle, the two strategies will produce identical (and optimal) results.

This means that if B is fixed then, in this respect, RUC is at a disadvantage compared with conventional cycling as now there are more cycles with fewer observations used every cycle. In practice this effect may be dwarfed by the advantages of RUC as discussed in Section 2. If not, the obvious remedy is to improve the cycling of the background error covariances.

As noted above, the effect is due to using a fixed B, and is removed if B is cycled properly. This is unattainable in current large-scale NWP, but centres such as the Met Office already employ a ‘hybrid’ B (Clayton et al., 2013)

$$B = \beta_c^2 B_c + \beta_e^2 B_e \tag{47}$$

where $B_c$ is fixed but $B_e$ is an estimate (from an ensemble) of the true prior error covariance, with $\beta_c^2 + \beta_e^2 = 1$. For best performance $\beta_e \to 1$ as the ensemble size increases, with the fixed part having no weight in the limit of an infinitely large ensemble.
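In code the hybrid combination (47) is a one-liner (a sketch; operational implementations also localise the ensemble estimate, which is omitted here):

```python
import numpy as np

# Sketch of the hybrid covariance (47). X is a hypothetical n x m array
# whose columns are ensemble forecast states; beta_e_sq is beta_e^2.

def hybrid_B(B_c, X, beta_e_sq):
    B_e = np.cov(X)  # sample covariance; rows of X are the n variables
    return (1.0 - beta_e_sq) * B_c + beta_e_sq * B_e
```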

There are other possible ways of introducing adaptivity into $B$, such as the ‘ensemble-variational integrated localised’ method (Auligné et al., 2016) and the so-called variational Kalman filters (Auvinen et al., 2010). In the latter the limited-memory quasi-Newton method is used to build a low-storage approximation to the Hessian of the analysis cost function, which therefore approximates the inverse of the analysis error covariance matrix, and can also be used to evolve the covariance forward to approximate $B$ at the next analysis time.

7. Concluding remarks

Rapid update cycling (RUC) is the process by which we assimilate observations into a model as soon as they become available or, more practically, within some (short) time interval $\delta t$. We have seen that if the greatest delay in receiving observations is $N\delta t$ then the optimal solution to RUC at time $k$ involves manipulating the $(N+1)n$-vectors $\underline{x}$ formed from $x_{k-N}, \ldots, x_k$ and the moments of the errors in $\underline{x}$, such as their $(N+1)n \times (N+1)n$ error covariances.

Compared with ‘traditional’ cycling RUC makes more timely use of observations, which is particularly important for the provision of LBCs for LAMs. Another advantage of RUC is that the increments are smaller and hence linearisation error is reduced.

We have purposely concentrated on fundamental topics and avoided such practically important matters as efficiency and cost. The fact that for each analysis observation volumes and increments are smaller, and that we always have a recent analysis available, suggests that it should be possible to reduce the cost per analysis.² On contemporary HPCs, where increased power comes through higher numbers of processors rather than increased clock speeds, this is more important than the total cost per day.³

We may adapt ‘traditional’ methods designed for non-overlapping windows to RUC. These methods are suboptimal, but in all cases considered in this paper (Methods 1, 2 and 3 in Section 5) they coincide with the optimal solution in the limit where model error vanishes.

Assimilating observations in smaller batches can be disadvantageous if climatological background error variances are used. This potentially poses a challenge for RUC, which could perhaps be met by improved cycling of error covariances.

Disclosure statement

No potential conflict of interest was reported by the author.


¹ Optimal except that we ignore linearisation error, which is small for our example.

² Note also that there is scope for preconditioning using the work already done for recent analyses. This preconditioning could be based on Hessian eigenvectors (Fisher and Courtier, 1995), or on the vectors approximating the Hessian in the limited-memory quasi-Newton method (Courtier et al., 1998).

³ We have also noted that the window length of RUC is determined by the longest delay in receiving observations, which in the operational example in Section 2 implies a window of 4 hours compared with the current 6 hours.

Acknowledgements

The author thanks Mike Cullen and Andrew Lorenc for useful discussions on this topic, and the referees appointed by Tellus for their comments.

References

  1. Anderson, B. and Moore, J. 1979. Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ. 357 pp.

  2. Auligné, T., Ménétrier, B., Lorenc, A. C. and Buehner, M. 2016. Ensemble-variational integrated localized data assimilation. Mon. Weather Rev. 144, 3677–3696. DOI: https://doi.org/10.1175/mwr-d-15-0252.1.

  3. Auvinen, H., Bardsley, J. M., Haario, H. and Kauranne, T. 2010. The variational Kalman filter and an efficient implementation using limited memory BFGS. Int. J. Numer. Methods Fluids 64, 314–335. DOI: https://doi.org/10.1002/fld.2153.

  4. Benjamin, S. G., Dévényi, D., Weygandt, S. S., Brundage, K. J., Brown, J. M., and co-authors. 2004a. An hourly assimilation-forecast cycle: the RUC. Mon. Weather Rev. 132, 495–518. DOI: https://doi.org/10.1175/1520-0493.

  5. Benjamin, S. G., Grell, G. A., Brown, J. M., Smirnova, T. G. and Bleck, R. 2004b. Mesoscale weather prediction with the RUC hybrid isentropic terrain-following coordinate model. Mon. Weather Rev. 132, 473–494. DOI: https://doi.org/10.1175/1520-0493.

  6. Boyd, S. and Vandenberghe, L. 2004. Convex Optimization. Cambridge University Press, New York, NY. 716 pp.

  7. Clayton, A. M., Lorenc, A. C. and Barker, D. M. 2013. Operational implementation of a hybrid ensemble/4D-Var global data assimilation system at the Met Office. Q. J. R. Meteorol. Soc. 139, 1445–1461. DOI: https://doi.org/10.1002/qj.2054.

  8. Courtier, P., Andersson, E., Heckley, W., Vasiljevic, D., Hamrud, M., and co-authors. 1998. The ECMWF implementation of three-dimensional variational assimilation (3D-Var). I: formulation. Q. J. R. Meteorol. Soc. 124, 1783–1807. DOI: https://doi.org/10.1002/qj.49712455002.

  9. Fisher, M. and Courtier, P. 1995. Estimating the covariance matrices of analysis and forecast error in variational data assimilation. Tech. Memo. 220, 28 pp. ECMWF, Shinfield Park, Reading, UK.

  10. Gallier, J. 2010. The Schur complement and symmetric positive semidefinite (and definite) matrices. Penn Engineering. Online at: www.cis.upenn.edu/~jean/schur-comp.pdf

  11. Järvinen, H., Thèpaut, J.-N. and Courtier, P. 1996. Quasi-continuous variational data assimilation. Q. J. R. Meteorol. Soc. 122, 515–534. DOI: https://doi.org/10.1002/qj.49712253011.

  12. Jazwinski, A. H. 1970. Stochastic Processes and Filtering Theory. Academic Press, New York. 376 pp.

  13. Li, Z. and Navon, I. M. 2001. Optimality of variational data assimilation and its relationship with the Kalman filter and smoother. Q. J. R. Meteorol. Soc. 127, 661–683. DOI: https://doi.org/10.1002/qj.49712757220.

  14. Lorenz, E. 1996. Predictability – a problem partly solved. In: Proceedings, Seminar on Predictability, Vol. 1, ECMWF, Reading, UK, pp. 1–18.

  15. Payne, T. J. 2013. The linearisation of maps in data assimilation. Tellus A: Dyn. Meteorol. Oceanogr. 65. DOI: https://doi.org/10.3402/tellusa.v65i0.18840.

  16. Tang, Y., Lean, H. W. and Bornemann, J. 2013. The benefits of the Met Office variable resolution NWP model for forecasting convection. Met. Apps 20, 417–426. DOI: https://doi.org/10.1002/met.1300.

  17. Veerse, F. and Thèpaut, J.-N. 1998. Multiple-truncation incremental approach for four-dimensional variational data assimilation. Q. J. R. Meteorol. Soc. 124, 1889–1908. DOI: https://doi.org/10.1002/qj.49712455006.

Appendix 1  

Proof of variational form of optimal solution given in Section 3.3

Continuing with the notation of Section 3.3 we readily obtain identities (A1)–(A3) below:

  • (i)
    $$J_b(\underline{\delta}) + J_q(\underline{\delta}) = \frac{1}{2}\underline{\delta}^T\left[\begin{pmatrix} \hat{A}^{-1} & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} -f^T \\ I \end{pmatrix} Q_{k-1}^{-1}\begin{pmatrix} -f & I \end{pmatrix}\right]\underline{\delta} \tag{A1}$$
    where $f$ is the $n \times nN$ matrix
    $$f = (\underbrace{0 \ \cdots\ 0}_{N-1\ \text{times}}\ \ M_{k-1})$$
  • (ii) By elementary algebra
    $$\left[\begin{pmatrix} I \\ f \end{pmatrix}\hat{A}\begin{pmatrix} I & f^T \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & Q \end{pmatrix}\right]\times\left[\begin{pmatrix} -f^T \\ I \end{pmatrix} Q^{-1}\begin{pmatrix} -f & I \end{pmatrix} + \begin{pmatrix} \hat{A}^{-1} & 0 \\ 0 & 0 \end{pmatrix}\right] = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix} \tag{A2}$$
  • (iii) From the definition of $\underline{M}_{k-1}$ in (12) we have
    $$\underline{M}_{k-1} = \begin{pmatrix} 0 & I \\ 0 & f \end{pmatrix}$$
    so by construction of $\hat{A}$
    $$\begin{pmatrix} I \\ f \end{pmatrix}\hat{A}\begin{pmatrix} I & f^T \end{pmatrix} = \underline{M}_{k-1}\underline{P}_{k-1|k-1}\underline{M}_{k-1}^T \tag{A3}$$

It follows from (A1)–(A3) and (13) that

$$J_b(\underline{\delta}) + J_q(\underline{\delta}) = \frac{1}{2}\underline{\delta}^T\left[\underline{M}_{k-1}\underline{P}_{k-1|k-1}\underline{M}_{k-1}^T + \underline{Q}_{k-1}\right]^{-1}\underline{\delta} \tag{A4}$$

At a minimum of (24) we have $\nabla J(\underline{\delta}) = 0$, so

$$\left[\left(\underline{M}_{k-1}\underline{P}_{k-1|k-1}\underline{M}_{k-1}^T + \underline{Q}_{k-1}\right)^{-1} + \underline{H}_k^T\underline{R}_k^{-1}\underline{H}_k\right]\underline{\delta} = \underline{H}_k^T\underline{R}_k^{-1}\left(\underline{y}_k - \underline{H}_k\hat{\underline{x}}_{k|k-1}\right) \tag{A5}$$

and using this value of $\underline{\delta}$ in (23) is equivalent to (16).


Appendix 2  

Proof of theorem of Section 5.3

If the sequences of background error covariances using Methods 0 and 1 are designated, respectively, $\underline{B}_k$ and $\underline{\tilde{B}}_k$, and we start from the same prior error covariance $\underline{\tilde{B}}_N = \underline{B}_N$, then for all $k \ge N$

$$\underline{B}_k \preceq \underline{\tilde{B}}_k$$

Proof. For simplicity we shall suppose that both $\underline{R}$ and $\underline{B}_k^v$ are non-singular and that $\underline{H} = I$, so, denoting Method 1 quantities with tildes, we have $\underline{K} = (\underline{B}^{-1} + \underline{R}^{-1})^{-1}\underline{R}^{-1}$ and $\underline{\tilde{K}} = ((\underline{B}^v)^{-1} + \underline{R}^{-1})^{-1}\underline{R}^{-1}$. Hence from (17) and (42) we have

$$\underline{A} = (\underline{B}^{-1} + \underline{R}^{-1})^{-1}, \qquad \underline{\tilde{A}} = \left((\underline{B}^v)^{-1} + \underline{R}^{-1}\right)^{-1}\left[(\underline{B}^v)^{-1}\underline{\tilde{B}}\,(\underline{B}^v)^{-1} + \underline{R}^{-1}\right]\left((\underline{B}^v)^{-1} + \underline{R}^{-1}\right)^{-1} \tag{B1}$$

and therefore

$$\underline{A}^{-1} - \underline{\tilde{A}}^{-1} = \underline{B}^{-1} - \underline{\tilde{B}}^{-1} + \left[\left(\underline{\tilde{B}}^{-1} + \underline{R}^{-1}\right) - \left((\underline{B}^v)^{-1} + \underline{R}^{-1}\right)\left((\underline{B}^v)^{-1}\underline{\tilde{B}}(\underline{B}^v)^{-1} + \underline{R}^{-1}\right)^{-1}\left((\underline{B}^v)^{-1} + \underline{R}^{-1}\right)\right] \tag{B2}$$

Now consider the matrix

$$\underline{X} = \begin{pmatrix} \underline{\tilde{B}}^{-1} + \underline{R}^{-1} & (\underline{B}^v)^{-1} + \underline{R}^{-1} \\ (\underline{B}^v)^{-1} + \underline{R}^{-1} & (\underline{B}^v)^{-1}\underline{\tilde{B}}(\underline{B}^v)^{-1} + \underline{R}^{-1} \end{pmatrix} = \begin{pmatrix} \underline{\tilde{B}}^{-1} \\ (\underline{B}^v)^{-1} \end{pmatrix}\underline{\tilde{B}}\begin{pmatrix} \underline{\tilde{B}}^{-1} & (\underline{B}^v)^{-1} \end{pmatrix} + \begin{pmatrix} I \\ I \end{pmatrix}\underline{R}^{-1}\begin{pmatrix} I & I \end{pmatrix} \tag{B3}$$

Since $\underline{X}$ is the sum of two positive semi-definite matrices it is positive semi-definite. Note also that, since $\underline{R}$ is positive definite, $(\underline{B}^v)^{-1}\underline{\tilde{B}}(\underline{B}^v)^{-1} + \underline{R}^{-1}$ is invertible, so its Moore–Penrose inverse is its usual matrix inverse.

The term in square brackets in (B2) is the Schur complement of $(\underline{B}^v)^{-1}\underline{\tilde{B}}(\underline{B}^v)^{-1} + \underline{R}^{-1}$ in $\underline{X}$, so by Theorem 4.3 of Gallier (2010) it is positive semi-definite. Therefore if $\underline{B}_k^{-1} - \underline{\tilde{B}}_k^{-1}$ is positive semi-definite then $\underline{A}_k^{-1} - \underline{\tilde{A}}_k^{-1}$ is the sum of two positive semi-definite matrices, so is positive semi-definite. We conclude that if $\underline{B}_k \preceq \underline{\tilde{B}}_k$ then $\underline{A}_k \preceq \underline{\tilde{A}}_k$. Since both methods satisfy (43) with the same $\underline{M}_k$, both satisfy the same relation

$$\underline{B}_{k+1} = \underline{M}_k\underline{A}_k\underline{M}_k^T + \underline{Q}, \qquad \underline{\tilde{B}}_{k+1} = \underline{M}_k\underline{\tilde{A}}_k\underline{M}_k^T + \underline{Q}$$

Therefore $\underline{A}_k \preceq \underline{\tilde{A}}_k$ implies $\underline{B}_{k+1} \preceq \underline{\tilde{B}}_{k+1}$, and the claim follows by induction.


Appendix 3  

Proof of equivalence of methods 0–3 if $Q = 0$

Set $\underline{V}_k$ to be the first $n$ columns of the $n(N+1) \times n(N+1)$ matrix $\underline{U}_k$, i.e.

$$\underline{V}_k = \begin{pmatrix} M_{k-N}^{k-N} \\ M_{k-N}^{k-(N-1)} \\ \vdots \\ M_{k-N}^{k} \end{pmatrix}$$

Because $Q = 0$ it follows from (40) that

$$\underline{B}_k^v = \underline{V}_k B \underline{V}_k^T \tag{C1}$$

We suppose inductively that for some $k \ge N$ Method 0 and Methods 1–3 have the same $\underline{x}_k^b, \underline{B}_k$ and that

$$\underline{x}_k^b = \underline{V}_k x_{k-N}^b, \qquad \underline{B}_k = \underline{B}_k^v = \underline{V}_k B_{k-N}\underline{V}_k^T \tag{C2}$$

This is true for $k = N$ by construction. For all methods we therefore have

$$\underline{K}_k = \underline{V}_k B_{k-N}\underline{V}_k^T\underline{H}_k^T\left(\underline{H}_k\underline{V}_k B_{k-N}\underline{V}_k^T\underline{H}_k^T + \underline{R}_k\right)^{-1}$$

Using the ‘Kalman identity’

$$B\underline{V}^T\underline{H}^T\left(\underline{H}\,\underline{V}B\underline{V}^T\underline{H}^T + \underline{R}\right)^{-1} = \left(B^{-1} + \underline{V}^T\underline{H}^T\underline{R}^{-1}\underline{H}\,\underline{V}\right)^{-1}\underline{V}^T\underline{H}^T\underline{R}^{-1} \tag{C3}$$

it follows from (17), (42) that for all methods

$$\underline{A}_k = \underline{V}_k\left[\left(B_{k-N}^{-1} + \underline{V}_k^T\underline{H}_k^T\underline{R}^{-1}\underline{H}_k\underline{V}_k\right)^{-1}\right]\underline{V}_k^T, \qquad \underline{K}_k = \underline{A}_k\underline{H}_k^T\underline{R}^{-1} \tag{C4}$$

and therefore from (16), (38) that in all cases

$$\underline{x}_k^a = \underline{V}_k\left[x_{k-N}^b + \left(B_{k-N}^{-1} + \underline{V}_k^T\underline{H}_k^T\underline{R}^{-1}\underline{H}_k\underline{V}_k\right)^{-1}\underline{V}_k^T\underline{H}_k^T\underline{R}^{-1}\left(\underline{y}_k - \underline{H}_k\underline{x}_k^b\right)\right] \tag{C5}$$

Recall that the superscript $\ell$ in $\underline{M}^{(\ell)}$ in (43) denotes the method used, with $\underline{M}_k^{(1)} = \underline{M}_k^{(0)} = \underline{M}_k$ as defined for the optimal solution in (12). For all $\ell = 0, 1, 2, 3$

$$\underline{M}_k^{(\ell)}\underline{V}_k = \begin{pmatrix} M_{k-N}^{k+1-N} \\ M_{k-N}^{k+1-(N-1)} \\ \vdots \\ M_{k-N}^{k+1} \end{pmatrix} = \underline{V}_{k+1} M_{k-N}^{k+1-N}$$

Let $\underline{\tilde{x}}_k^a$ denote the vector in square brackets in (C5) and $\underline{\tilde{A}}_k$ the matrix in square brackets in (C4). It follows from (18), (19) and (43), (45) that both for Method 0 and for Methods $\ell = 1, 2, 3$ we have

$$\begin{aligned} \underline{x}_{k+1}^b &= \underline{M}_k^{(\ell)}\underline{x}_k^a = \underline{M}_k^{(\ell)}\underline{V}_k\underline{\tilde{x}}_k^a = \underline{V}_{k+1} M_{k-N}^{k+1-N}\underline{\tilde{x}}_k^a, & x_{k+1-N}^b &= (\underline{x}_{k+1}^b)_{\underline{1}} = M_{k-N}^{k+1-N}\underline{\tilde{x}}_k^a \\ \underline{B}_{k+1} &= \underline{M}_k^{(\ell)}\underline{A}_k(\underline{M}_k^{(\ell)})^T = \underline{V}_{k+1} M_{k-N}^{k+1-N}\underline{\tilde{A}}_k(M_{k-N}^{k+1-N})^T\underline{V}_{k+1}^T, & B_{k+1-N} &= (\underline{B}_{k+1})_{\underline{1},\underline{1}} = M_{k-N}^{k+1-N}\underline{\tilde{A}}_k(M_{k-N}^{k+1-N})^T \end{aligned} \tag{C6}$$

(where $(\underline{x}_{k+1}^b)_{\underline{1}}$ is the first $n \times 1$ sub-vector of $\underline{x}_{k+1}^b$). Therefore the inductive hypothesis holds for $k+1$, and therefore for all $k$.


Appendix 4  

4D-Var with a fixed background error covariance: impact of observation batch size

Suppose we have a linear system, observations in some time interval $[0, T]$, and all errors are Gaussian. If we assimilate the observations using an optimal method, such as 4D-Var with correctly cycled prior and posterior error covariances, using $m$ assimilation windows

$$[0, t_1], [t_1, t_2], \ldots, [t_{m-1}, T]$$

then the estimate of state at time $T$ is independent of $m$ and of how we choose

$$t_1 < t_2 < \cdots < t_{m-1}$$

However, if instead of properly cycling the error covariances the background error covariances are fixed, it is often advantageous to assimilate data in larger batches.

We illustrate this by considering a case where $x$ is a scalar quantity, which evolves in time according to

$$x(i+1) = \mu x(i)$$

for some constant $\mu$ (which is supposed known, so there is no model error) with $|\mu| > 1$. At each time $i$ we wish to assimilate an observation $y_1(i)$ with error variance $r_i$ and an observation $y_2(i)$ with error variance $s_i$. To make the problem analytically tractable we will suppose that $\{(r_i, s_i),\ i = 1, \ldots\}$ are drawn (independently of $i$) from $\{(R_j, S_j),\ j = 1, \ldots, k\}$, where for any $i$, $\{r_i = R_j,\ s_i = S_j\}$ with probability $p_j$, $j = 1, \ldots, k$.

We compare two assimilation strategies using 4D-Var with a non-cycled background:

Simultaneous (batch size of 2): at each time $i$ we assimilate $y_1(i)$ and $y_2(i)$ simultaneously using fixed background error variance $b$, i.e. $x^a = x^b + \delta$ where $\delta$ minimises

$$J(\delta) = \frac{1}{2}\frac{\delta^2}{b} + \frac{1}{2}\frac{(y_1 - x^b - \delta)^2}{r_i} + \frac{1}{2}\frac{(y_2 - x^b - \delta)^2}{s_i}$$

Sequential (batch size of 1): at each time $i$ we assimilate $y_1(i)$ using fixed background error variance $b_1$ to give an intermediate analysis $x^{a_1}(i)$, then assimilate observation $y_2(i)$ using fixed background error variance $b_2$ to give the final analysis $x^a(i)$, i.e. $x^{a_1} = x^b + \delta_1$, $x^a = x^{a_1} + \delta_2$, where $\delta_1, \delta_2$ minimise

$$J(\delta_1) = \frac{1}{2}\frac{\delta_1^2}{b_1} + \frac{1}{2}\frac{(y_1 - x^b - \delta_1)^2}{r_i}, \qquad J(\delta_2) = \frac{1}{2}\frac{\delta_2^2}{b_2} + \frac{1}{2}\frac{(y_2 - x^{a_1} - \delta_2)^2}{s_i}$$

We will show the following: we can choose $b$ so that, however $b_1$ and $b_2$ are chosen, the mean square error using the simultaneous method is lower than that using the sequential method (and strictly lower if $k \ge 2$ and $R_1 \ne R_2$).
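Before sketching the proof, the claim is easy to probe numerically. The following Monte Carlo sketch (our construction, with arbitrary illustrative values of $\mu$, $(R_j, S_j)$ and $p_j$) estimates the mean square analysis error of the two strategies:

```python
import numpy as np

# Monte Carlo check of the simultaneous-vs-sequential claim. The truth
# is held at zero (the error statistics do not depend on the truth in
# this linear problem); the estimate starts with an error of 1.

rng = np.random.default_rng(0)
mu, RS, p = 1.05, [(0.5, 1.0), (2.0, 1.0)], [0.5, 0.5]

def run(b=None, b12=None, n_cycles=200000):
    x_t, x, err2 = 0.0, 1.0, 0.0
    for _ in range(n_cycles):
        x_t, x = mu * x_t, mu * x
        r, s = RS[rng.choice(2, p=p)]
        y1 = x_t + rng.normal(0.0, np.sqrt(r))
        y2 = x_t + rng.normal(0.0, np.sqrt(s))
        if b is not None:   # simultaneous, fixed background variance b
            x = (x / b + y1 / r + y2 / s) / (1 / b + 1 / r + 1 / s)
        else:               # sequential, fixed variances b1 then b2
            b1, b2 = b12
            x = x + b1 / (b1 + r) * (y1 - x)
            x = x + b2 / (b2 + s) * (y2 - x)
        err2 += (x - x_t) ** 2
    return err2 / n_cycles

# Compare, e.g., run(b=0.5) against run(b12=(b1, b2)) over a grid of
# (b1, b2); with R1 != R2 no choice of (b1, b2) should beat the best b.
```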

Proof (summary)

  • (1) Denoting $E[(x_i^a - x_i^t)^2]$ by $A_i$, it is readily shown that for both the simultaneous and sequential methods
    $$A_i = \beta_i^2\mu^2 A_{i-1} + \rho_i^2 r_i + \sigma_i^2 s_i \tag{D1}$$
    where the parameters $\beta_i$, $\rho_i$, $\sigma_i$ are as listed in Table D1.
  • (2) It follows from (D1) that for any $i > 1$
    $$A_i = (\beta_i\beta_{i-1}\cdots\beta_2\,\mu^{i-1})^2 A_1 + (\beta_i\beta_{i-1}\cdots\beta_3\,\mu^{i-2})^2(\rho_2^2 r_2 + \sigma_2^2 s_2) + (\beta_i\beta_{i-1}\cdots\beta_4\,\mu^{i-3})^2(\rho_3^2 r_3 + \sigma_3^2 s_3) + \cdots + (\beta_i\mu)^2(\rho_{i-1}^2 r_{i-1} + \sigma_{i-1}^2 s_{i-1}) + \rho_i^2 r_i + \sigma_i^2 s_i \tag{D2}$$
    So $A_i$ is a function of $(r_1, s_1), (r_2, s_2), \ldots, (r_i, s_i)$, which we are considering as independent random variables, so
    $$E\left[(\beta_i\beta_{i-1}\cdots\beta_{j+1})^2(\rho_j^2 r_j + \sigma_j^2 s_j)\right] = E[\beta_i^2]\,E[\beta_{i-1}^2]\cdots E[\beta_{j+1}^2]\,E[\rho_j^2 r_j + \sigma_j^2 s_j]$$
    Denote expectations over $(r_1, s_1), (r_2, s_2), \ldots$ by $E_{\{r_j, s_j\}}$. If $E_{\{r_j, s_j\}}[(\beta_j\mu)^2] < 1$, and noting $\beta_i = 1 - \sigma_i - \rho_i$, we have
    $$\lim_{i\to\infty} E_{\{r_j, s_j\}}[A_i] = \frac{E_{\{r_j, s_j\}}\left[\rho_j^2 r_j + \sigma_j^2 s_j\right]}{1 - E_{\{r_j, s_j\}}\left[(\beta_j\mu)^2\right]} = \frac{\sum_{j=1}^k p_j\left[\rho(R_j, S_j)^2 R_j + \sigma(R_j, S_j)^2 S_j\right]}{1 - \mu^2\sum_{j=1}^k p_j\left(1 - \rho(R_j, S_j) - \sigma(R_j, S_j)\right)^2} \tag{D3}$$
    where in the simultaneous case $\rho, \sigma$ are the functions
    $$\rho_{\rm sim}(R, S, b) = \frac{1/R}{1/b + 1/R + 1/S}, \qquad \sigma_{\rm sim}(R, S, b) = \frac{1/S}{1/b + 1/R + 1/S} \tag{D4}$$
    and in the sequential case $\rho, \sigma$ are the functions
    $$\rho_{\rm seq}(R, S, b_1, b_2) = \frac{S b_1}{(b_1 + R)(b_2 + S)}, \qquad \sigma_{\rm seq}(R, S, b_1, b_2) = \frac{b_2}{b_2 + S} \tag{D5}$$
  • (3) Lemma. If
    $$R_1, R_2, \ldots, R_k > 0, \quad S_1, S_2, \ldots, S_k > 0, \quad p_1, p_2, \ldots, p_k > 0 \tag{D6}$$
    with $\sum p_i = 1$ and $|\mu| > 1$, then
    $$\min_{b>0}\ \frac{\sum p_j\left[\rho_{\rm sim}(R_j, S_j, b)^2 R_j + \sigma_{\rm sim}(R_j, S_j, b)^2 S_j\right]}{1 - \mu^2\sum p_j\left(1 - \rho_{\rm sim}(R_j, S_j, b) - \sigma_{\rm sim}(R_j, S_j, b)\right)^2} \quad \text{subject to} \quad \mu^2\sum_j p_j\left(1 - \rho_{\rm sim}(R_j, S_j, b) - \sigma_{\rm sim}(R_j, S_j, b)\right)^2 < 1 \tag{D7}$$
    is less than or equal to
    $$\min_{b_1>0,\,b_2>0}\ \frac{\sum p_j\left[\rho_{\rm seq}(R_j, S_j, b_1, b_2)^2 R_j + \sigma_{\rm seq}(R_j, S_j, b_1, b_2)^2 S_j\right]}{1 - \mu^2\sum p_j\left(1 - \rho_{\rm seq}(R_j, S_j, b_1, b_2) - \sigma_{\rm seq}(R_j, S_j, b_1, b_2)\right)^2} \quad \text{subject to} \quad \mu^2\sum_j p_j\left(1 - \rho_{\rm seq}(R_j, S_j, b_1, b_2) - \sigma_{\rm seq}(R_j, S_j, b_1, b_2)\right)^2 < 1 \tag{D8}$$

where the inequality is strict if $k \ge 2$ and $R_1 \ne R_2$.

This Lemma proves the claim, and is itself proved in (3a)–(3d).

  • (3a) Given constants
    $$\{R_j > 0,\ S_j > 0,\ p_j > 0;\ j = 1, \ldots, k\}, \quad \mu > 0 \tag{D9}$$
    define the function
    $$\Gamma(\rho_1, \sigma_1, \ldots, \rho_k, \sigma_k) = \frac{\sum_{j=1}^k p_j\left[\rho_j^2 R_j + \sigma_j^2 S_j\right]}{1 - \mu^2\sum_{j=1}^k p_j(1 - \rho_j - \sigma_j)^2} \tag{D10}$$
    on
    $$D = \left\{\rho_1, \sigma_1, \ldots, \rho_k, \sigma_k : \rho_j > 0,\ \sigma_j > 0,\ \mu^2\textstyle\sum_j p_j(1 - \rho_j - \sigma_j)^2 < 1\right\}$$
    We may readily check that $D$ is convex and that $\Gamma$ is convex on $D$. At any local extremum of $\Gamma$ the $2k$ conditions hold
    $$\frac{\partial\Gamma}{\partial\rho_i} = \frac{\partial\Gamma}{\partial\sigma_i} = 0, \quad i = 1, \ldots, k \tag{D11}$$
  • (3b) Consider the 1-parameter family of functions
    $$\rho_1(b), \sigma_1(b), \ldots, \rho_k(b), \sigma_k(b)$$
    given by
    $$\rho_i(b) = \rho_{\rm sim}(R_i, S_i, b) = \frac{1/R_i}{1/b + 1/R_i + 1/S_i}, \qquad \sigma_i(b) = \sigma_{\rm sim}(R_i, S_i, b) = \frac{1/S_i}{1/b + 1/R_i + 1/S_i} \tag{D12}$$
    Define
    $$g(b) = 1 - \mu^2\sum_{j=1}^k p_j\left(1 - \rho_j(b) - \sigma_j(b)\right)^2 \tag{D13}$$
    $$h(b) = \sum_{j=1}^k p_j\left[\rho_j(b)^2 R_j + \sigma_j(b)^2 S_j\right] \tag{D14}$$
    Then we may check that if $\{\rho_j(b), \sigma_j(b)\}$ are of the form (D12), if $g(b) \ne 0$ and $b$ satisfies the single condition
    $$f(b) \equiv b\,g(b) - \mu^2 h(b) = 0 \tag{D15}$$
    then the $2k$ conditions (D11) hold.
  • (3c) We may also check that we have
    $$f'(b) = g(b)\ \ \text{for all}\ b, \qquad g'(b) > 0\ \ \text{if}\ b > 0 \tag{D16}$$
    Noting that as $b \to 0$ we have
    $$f(b) \to 0, \qquad f'(b) = g(b) \to 1 - \mu^2 \tag{D17}$$
    it follows by elementary arguments that, so long as $|\mu| > 1$, there must exist $b^* > b' > 0$ such that
    $$f'(b') = g(b') = 0, \qquad f(b^*) = 0 \tag{D18}$$
    and since for all $b > 0$ we have $g'(b) > 0$ we must have
    $$f'(b) = g(b) > 0, \quad \text{for all}\ b > b'$$
    Furthermore, since $g(b) > 0$ if and only if
    $$\{\rho_j(b), \sigma_j(b)\}_{j=1,\ldots,k} \in D$$
    it follows that
    $$\{\rho_j(b^*), \sigma_j(b^*)\}_{j=1,\ldots,k} \in D$$
  • (3d) We have found a value $b^*$ such that
    $$\{\rho_{\rm sim}(R_j, S_j, b^*),\ \sigma_{\rm sim}(R_j, S_j, b^*)\}, \quad j = 1, \ldots, k \tag{D19}$$
    is in $D$ and such that at this point $\nabla\Gamma = 0$. Because $\Gamma$ is convex on the convex set $D$, any point where $\nabla\Gamma = 0$ is the global minimum of $\Gamma$ on $D$ (Boyd and Vandenberghe, 2004). Therefore the global minimum of $\Gamma$ over $D$ is achieved by (D19).
It remains to show that this minimum is not achievable by $\rho_{\rm seq}, \sigma_{\rm seq}$ for any $b_1 > 0, b_2 > 0$, if $R_1 \ne R_2$.

At any extremum of $\Gamma$ Equation (D11) must hold, and in particular for every $i = 1, \ldots, k$

$$\frac{\partial\Gamma}{\partial\rho_i} - \frac{\partial\Gamma}{\partial\sigma_i} = \frac{2p_i\left(\rho_i R_i - \sigma_i S_i\right)}{1 - \mu^2\sum_{j=1}^k p_j(1 - \rho_j - \sigma_j)^2} = 0$$

The denominator is positive for all points in $D$, therefore at any extremum in $D$

$$\rho_j R_j = \sigma_j S_j, \quad j = 1, \ldots, k$$

Inserting this requirement with $j = 1, 2$ into the expressions for $\rho_{\rm seq}, \sigma_{\rm seq}$ in (D5), it follows we would need

$$\frac{R_1 b_1}{b_1 + R_1} = \frac{R_2 b_1}{b_1 + R_2} \tag{D20}$$

which for $b_1 > 0$ is only possible if $R_1 = R_2$.
