
Original Research Papers

A study of enhancive parameter correction with coupled data assimilation for climate estimation and prediction using a simple coupled model

Authors:

S. Zhang ,

GFDL/NOAA, Princeton University, Princeton, NJ 08542, US

Z. Liu,

Center for Climate Research and Department of Atmospheric and Oceanic Sciences, University of Wisconsin-Madison, Madison, WI 53706, US; Laboratory of Ocean-Atmospheric Studies, Peking University, Beijing, CN

A. Rosati,

GFDL/NOAA, Princeton University, Princeton, NJ 08542, US

T. Delworth

GFDL/NOAA, Princeton University, Princeton, NJ 08542, US

Abstract

Uncertainties in physical parameters of coupled models are an important source of model bias and adversely impact initialisation for climate prediction. Data assimilation using error covariances derived from model dynamics to extract observational information provides a promising approach to optimise parameter values so as to reduce such bias. However, effective parameter estimation in a coupled model is usually difficult because the error covariance between a parameter and the model state tends to be noisy due to multiple sources of model uncertainties. Using a simple coupled model consisting of the 3-variable Lorenz model and a slowly varying slab ‘ocean’, this study first investigated how to enhance the signal-to-noise ratio in covariances between model states and parameters, and then designed a data assimilation scheme for enhancive parameter correction (DAEPC). In DAEPC, parameter estimation is facilitated after state estimation reaches a ‘quasi-equilibrium’ where the uncertainty of coupled model states is sufficiently constrained by observations so that the covariance between a parameter and the model state is signal dominant. The observation-updated parameters are applied to improving the next cycle of state estimation and the refined covariance of parameter and model state further improves parameter correction. Performing dynamically adaptive state and parameter estimations with speedy convergence, DAEPC provides a systematic way to estimate the whole array of coupled model parameters using observations, and produces more accurate state estimates. Forecast experiments show that the DAEPC initialisation with observation-estimated parameters greatly improves the model predictability – while valid ‘atmospheric’ forecasts are extended two times longer, the ‘oceanic’ predictability is almost tripled. The simple model results here provide some insights for improving climate estimation and prediction with a coupled general circulation model.

How to Cite: Zhang, S., Liu, Z., Rosati, A. and Delworth, T., 2012. A study of enhancive parameter correction with coupled data assimilation for climate estimation and prediction using a simple coupled model. Tellus A: Dynamic Meteorology and Oceanography, 64(1), p.10963. DOI: http://doi.org/10.3402/tellusa.v64i0.10963
  Published on 01 Dec 2012
 Submitted on 16 Mar 2011

1. Introduction

A coupled climate model is biased due to two sources of errors. The first source is the imperfect model structure such as equations, numerical schemes and physical parameterisation schemes. The second source is the errors in the values of parameters employed in the model. Here, we are concerned with the second source of model bias. We shall investigate how to use observations to optimise model parameters and thereby constrain the associated model bias.

Data assimilation incorporates observations into a climate model through background error covariances derived from model dynamics and produces a continuous time series of climate states (e.g. Ghil et al., 1981; Daley, 1991; Kalnay, 2003). The reconstructed time series with a 3-dimensional structure in space is an important data source for advancing the understanding of climate change and variability. The observation-estimated states including all climate components such as the atmosphere, land, ocean and sea-ice also serve as initial conditions for coupled model predictions. Usually, traditional data assimilation only assimilates observations into model states to estimate the atmospheric and oceanic states, referred to here as state estimation only (SEO). Due to the existence of model bias, SEO maintains a systematic error in the estimated states (Dee and Silva, 1998; Dee, 2005) and the SEO-initialised model predictions tend to drift away. The methodology of parameter estimation based on data assimilation theory has long existed (e.g. Jazwinski, 1970), and many efforts have been made to include model parameters among the data assimilation control variables so that they can be estimated using observations (e.g. Banks, 1992a, 1992b; Zhu and Navon, 1999; Anderson, 2001; Annan and Hargreaves, 2004; Annan et al., 2005; Aksoy et al., 2006a, 2006b; Evensen, 2007; Hansen and Penland, 2007; Kondrashov et al., 2008; Tong and Xue, 2008a, 2008b). However, when the methodology is applied to a coupled climate model, in which the interaction of multiple time scales plays an important role in the development and propagation of climate signals, fundamental issues such as how to estimate a signal-dominant state-parameter covariance and how observations in different system components influence the result of parameter estimation remain unclear. Although theoretically promising, the idea of parameter optimisation with observations has not been successfully applied to a coupled general circulation model (CGCM) for improving climate estimation and prediction.

For example, given that model parameters are non-observable and do not have any dynamically supported internal variability, the covariance between a parameter and the model state serves as a critical quantity to project observational information of model states onto the parameter being estimated. Climate signals in a coupled system are associated with the interaction of different system components that have different time scales. Thus, how to obtain a signal-dominant covariance between parameters and coupled model states is key to the success of coupled model parameter estimation. As the first step in studying coupled parameter estimation issues, here we use a simple coupled model consisting of the 3-variable Lorenz model (Lorenz, 1963) and a slowly varying slab ‘ocean’ that simulates the interaction of atmosphere and ocean with various parameters. We first investigate the impact of estimated covariances on parameter estimation and how to derive a signal-dominant parameter-state covariance in a coupled data assimilation system. Then, we design a data assimilation scheme for enhancive parameter correction (DAEPC). DAEPC is a modification of standard data assimilation with adaptive parameter estimation (e.g. Kulhavy, 1993; Borkar and Mundra, 1999; Tao, 2003), tailored for enhancive parameter correction of a coupled system. Activated only after state estimation reaches a ‘quasi-equilibrium’ where the uncertainty of coupled model states has been sufficiently constrained by observations and, therefore, becomes more controlled by parameter errors, the DAEPC parameter correction can effectively make use of updated system signals to produce updated parameter corrections online (Tao, 2003, Chapter 3). The observation-updated parameters reduce the error of model states in the next cycle of assimilation, and the improved estimate of model states enhances the signal-to-noise ratio in covariances between parameters and model states and thus further refines parameter correction. We will show that this effective adaptive approach provides a systematic way to estimate the whole array of coupled model parameters and produces more accurate estimates of climate states. Furthermore, by conducting a series of forecast experiments, we also show that DAEPC initialisation with observation-estimated parameters applied to the prediction model greatly improves the model predictability.

The general methodology is presented in Section 2, including the construction of the conceptual coupled model and the description of the ‘twin’ experiments that will be used throughout this study. Section 3 analyses the time scale over which a model ensemble develops reliable state-parameter covariances and the impact of the accuracy of state estimates on the quality of parameter estimation. The DAEPC scheme is designed in Section 4, in which Section 4.1 outlines the idea, Sections 4.2 and 4.3 describe the detailed implementation and Section 4.4 gives various proof-of-concept case studies. Section 5 examines the influence of a biased ‘dynamical core’ and ‘physical scheme’ on DAEPC. The impact of DAEPC on ‘weather’ forecast and ‘climate’ prediction is examined in Section 6, including the relative importance of the accuracy of initial conditions and parameters for improving ‘weather’ forecasts and ‘climate’ predictions. Finally, the summary and conclusions are given in Section 7.

2. Methodology in general

2.1. Ensemble filtering parameter estimation

A coupled model is biased and the model climate tends to drift away from the real world (Delworth et al., 2005; Collins et al., 2006). Model bias may be described as two types:

  1. dynamical core and physical scheme biases, i.e. the misfits arising from an imperfect dynamical framework (e.g. an insufficient resolution) and the incomplete understanding of physical processes in the climate system; this type of bias comes from the model structure and is ‘built-in’ to the model;
  2. biases derived from erroneous values of the parameters employed in the model, given the limitations of the model structure.

This study addresses the second type of bias and how to mitigate it using observations. In climate modelling, many physical processes (e.g. subgrid cumulus in the atmosphere and vertical mixing in the ocean) can only be approximated by parameterisations in which one or more parameters play important roles. However, the values of these model parameters are usually set heuristically by a trial-and-error tuning procedure that only provides a guess, not an optimal estimate, for the whole array of parameters.

Introducing a vector β to represent a collection of model parameters, the stochastic differential equation that describes the evolution of the model climate as a stochastic dynamical process (Jazwinski, 1970) is dxt/dt = f(xt, β, t) + G(xt, β, t), where f(xt, β, t) is a vector function expressing the contributions of all dynamical and physical processes with the parameters β to the budget of local fluid on momentum and heat, etc. The term G(xt, β, t) represents the contributions of uncertainties resulting from erroneous initial conditions and model parameters. Based on Bayes’ rule, the corresponding analysis equation including both state variables and model parameters is

(1)  p(xt, βt | yt, Yt–1) ∝ p(yt | xt, βt) p(xt, βt–1 | Yt–1)

Here βt–1 represents the estimated parameter vector given all previous observations (Yt–1), while βt represents the adjusted parameter vector after a new observation (yt) is assimilated. Equation (1) expresses how the probability distributions of model states and parameters are updated by observations. Each of the parameter estimation studies cited above represents a particular application of eq. (1).

The ensemble adjustment Kalman filter (EAKF, Anderson, 2001) is chosen for the parameter estimation study here. The EAKF is a sequential implementation (e.g. Evensen, 1994; Houtekamer and Mitchell, 2001) of the Kalman filter (Kalman, 1960; Kalman and Bucy, 1961) under an ‘adjustment’ idea. While the sequential implementation provides much computational convenience for data assimilation, the EAKF maintains the non-linearity of background flows as much as possible (Anderson, 2001, 2003; Zhang and Anderson, 2003). Based on the two-step EAKF (Anderson, 2003; Zhang and Anderson, 2003; Zhang et al., 2007), here we describe an implementation of ensemble filtering parameter estimation. Given that parameters are usually non-observable, the first step, which computes the ensemble observational increment, is identical to state estimation [see Fig. 2 and eqs. (2–5) in Zhang et al., 2007]. This is analogous to the process by which a non-observable state variable is adjusted in a multivariate adjustment scheme (for instance, salinity is adjusted by observed temperature in ocean data assimilation). The second step, which projects the observational increment onto the relevant parameters, is key to understanding the special perspective of parameter estimation. The knowledge of a modeller about the physical process being parameterised provides the range of possible values of a parameter. A default value used in a model represents the expectation of these possible values. The errors associated with the default parameter values are transferred into the model states through the model integration. Due to the model non-linearity, the errors of model states caused by the uncertainties of initial conditions and model parameters grow rapidly. A parameter can then be estimated by applying the observational increments to the error covariance between the prior parameter ensemble and the state ensemble through a local least square fitting (Anderson, 2001, 2003). Similar to the multivariate analysis of a non-observable variable in state estimation, this process can be formulated as

(2)  Δβl,i = [c(Δβpl, Δyak)/(σak)²] Δyok,i

Here Δyok,i is the observational increment at the kth observational location for the ith ensemble member, c(Δβpl, Δyak) is the error covariance between the prior ensemble of the lth parameter and the model-estimated ensemble at location k, and σak is the standard deviation of the model ensemble at location k. The numerical computation can be carried out in two steps:

Step 1: Before assimilation starts, draw M Gaussian random numbers for each (say, the lth) parameter to be estimated, with the mean (default value) and the guessed standard deviation (denoted as σl,0) from the knowledge of model sensitivities, where M is the ensemble size. The M random values are the new default values of the parameter in the model ensemble that is already set for state estimation, and serve as the prior ensemble of the parameter for the first step parameter estimation.

Step 2: Compute the observational increment and the covariance between a model state ensemble at the observational location and the prior ensemble of the parameter when an observation is available. Sequentially apply each observational increment with the covariance to eq. (2) to update the parameter ensemble until all (independent) observations are applied to all estimated parameters, and produce an updated prior ensemble of parameter for the next cycle of parameter correction. An inflation may be applied to the parameter ensemble (will be discussed in Section 4 for more details). Loop this step as the model ensemble is forwarded.
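The two-step computation above can be condensed into a short numerical sketch. The following Python fragment is a minimal, hedged illustration of eq. (2), assuming that a scalar observational increment, a prior parameter ensemble and a model-estimated observation ensemble are already available; the function name and the synthetic numbers in the usage example are illustrative assumptions, not part of the GFDL implementation.

import numpy as np

def parameter_increment(obs_increment, beta_prior, y_model):
    """Project observational increments onto a parameter ensemble, as in eq. (2).

    obs_increment : (M,) observational increments at one location, one per member
    beta_prior    : (M,) prior ensemble of one parameter
    y_model       : (M,) model-estimated observation ensemble at location k
    """
    cov = np.cov(beta_prior, y_model)[0, 1]      # c(Δβpl, Δyak)
    var = np.var(y_model, ddof=1)                # (σak)²
    return (cov / var) * obs_increment           # parameter increments Δβl,i

# Illustrative usage with synthetic numbers (assumed, not from the paper):
M = 20
rng = np.random.default_rng(0)
beta_prior = 29.0 + 1.45 * rng.standard_normal(M)   # kappa guessed at 29, 5% spread
y_model = 2.0 + 5.0 * rng.standard_normal(M)        # stand-in for an x1 ensemble
obs_increment = 0.5 * rng.standard_normal(M)        # stand-in observational increment
beta_updated = beta_prior + parameter_increment(obs_increment, beta_prior, y_model)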

2.2. A conceptual coupled ‘climate’ model

To clearly demonstrate the role of a signal-dominant covariance between the model state and a parameter in parameter estimation, and to avoid the complexity of a CGCM, we will use a conceptual coupled model in this study. The conceptual model must share certain fundamental properties of a CGCM but be simple enough that any concern in parameter estimation can be examined quantitatively. To model the concept of tropics-midlatitude interactions in an atmospheric GCM, Molteni et al. (1993) coupled two linear oscillatory variables with Lorenz's 3-variable chaotic model (Lorenz, 1963). Inspired by this idea, we start the model construction by adding an equation representing a slowly varying slab ocean to the Lorenz model to simulate the interaction of the fast ‘atmosphere’ and slow ‘ocean’. A prototype of the conceptual coupled model, consisting of a slab ocean that only has a damping term and an external forcing term, takes the form

(3)  ẋ1 = −σx1 + σx2,
     ẋ2 = −x1x3 + (1 + c1w)κx1 − x2,
     ẋ3 = x1x2 − bx3,
     Omẇ = c2x2 − Odw + S(t),
where an over dot denotes time tendency; x1, x2 and x3 are the high-frequency variables of the ‘atmosphere’ with the original σ, κ and b parameters and the standard values 9.95, 28 and 8/3, which sustain the chaotic nature of the ‘atmosphere’. With these standard parameter values, the atmospheric time scale is defined as 1 non-dimensional time unit (TU) [∼O(1)], over which the ‘atmosphere’ approximately goes through a lobe of the attractor. The new variable w is introduced as a slowly varying ‘ocean’. Except for a forcing term from the ‘atmosphere’ (c2x2), the simplest slab ocean only consists of a linear damping term −Odw and an imposed external forcing S(t). An important feature of w is that it must have a much slower time scale than the ‘atmosphere’, which can be obtained by requiring a much larger heat capacity than damping rate, i.e. Om ≫ Od. For example, the values of (10, 1) for (Om, Od) define the oceanic time scale as ∼O(10). Also, we simply set S(t)=Sm+Ss cos(2πt/Spd) to simulate the constant and seasonal forcings of the ‘climate’ system so that the ‘ocean’ is recharged as it is damped. Spd is chosen as 10 so that the period of the forcing is comparable with the oceanic time scale. The parameters Sm and Ss define the magnitudes of the annual mean and seasonal cycle of the forcing; the model construction is not sensitive to them, and they are set to 10 and 1, respectively.

The coupling between the fast ‘atmosphere’ and slow ‘ocean’ is realised by choosing the values of the coupling coefficients c1 and c2, with c1 representing the oceanic forcing on the ‘atmosphere’ and c2 representing the atmospheric forcing on the ‘ocean’. Note that the ‘coupling’ here is not derived from physical principles and only represents a mathematical way to provide interaction between the fast and slow variables. The multiple atmospheric equations allow the ‘ocean’ to force the ‘atmosphere’ in many different ways. First, similar to Molteni et al. (1993), we tried the slow-variable forcing as an additive term on the right-hand side of multiple fast-variable equations. However, this overly simple additive forcing only changes the mean values of the Lorenz ‘atmosphere’ without modulating the amplitude of the chaotic variations. We therefore allow the ‘ocean’ to force the ‘atmosphere’ in a multiplicative way so that the coupling forcing produces some modulation of the atmospheric attractor. Eventually, for simplicity, we only keep one oceanic forcing, as a multiplicative forcing in the x2 equation. Sensitivity experiments show that the coupled system is robust with respect to c2 but becomes unstable when c1 is much larger than 0.1. Thus, we choose 0.1 and 1 as the values of c1 and c2, respectively.

Using a leap-frog time stepping (Δt=0.01) and a Robert–Asselin time filter (Robert, 1969; Asselin, 1972) with a 0.25 filtering coefficient, by which the computational modes are damped out, the model is configured with the standard parameter values (σ, κ, b, c1, c2, Om, Od, Sm, Ss, Spd)=(9.95, 28, 8/3, 0.1, 1, 10, 1, 10, 1, 10). Starting from the initial conditions (x1, x2, x3, w)=(0, 1, 0, 0), the model is integrated for 10⁴ TUs (10⁶ steps) for spin-up. Some interesting characteristics of the simple model are summarised in Fig. 1 (see also the integration sketch following the list below), namely,

  1. low-frequency variability dominates the ‘oceanic’ component while the high-frequency spectrum is the feature of the ‘atmosphere’; the ‘atmosphere’ has a similar ‘strange’ attractor behaviour as in the original Lorenz model and the ‘ocean’ has ‘seasonal’-cycle variations;
  2. the fast ‘atmosphere’ and the slow ‘ocean’ are coupled together effectively; the coupling mechanism can modulate the variability of both the ‘atmosphere’ and the ‘ocean’;
  3. due to the coupling mechanism, a perturbation on any parameter of the ‘atmosphere’ or ‘ocean’ changes the model solution for both the ‘atmosphere’ and ‘ocean’.
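As a concrete illustration, the following Python sketch integrates the prototype model of eq. (3) with the leap-frog stepping and Robert–Asselin filter described above. It is a minimal sketch, assuming the right-hand-side form reconstructed in eq. (3) (in particular the multiplicative coupling term), and should not be read as the authors' actual code.

import numpy as np

# Standard parameter values (sigma, kappa, b, c1, c2, Om, Od, Sm, Ss, Spd)
P = dict(sigma=9.95, kappa=28.0, b=8/3, c1=0.1, c2=1.0,
         Om=10.0, Od=1.0, Sm=10.0, Ss=1.0, Spd=10.0)

def tendency(state, t, p):
    """Right-hand side of eq. (3) as reconstructed above (assumed form)."""
    x1, x2, x3, w = state
    dx1 = -p['sigma'] * x1 + p['sigma'] * x2
    dx2 = -x1 * x3 + (1.0 + p['c1'] * w) * p['kappa'] * x1 - x2
    dx3 = x1 * x2 - p['b'] * x3
    S = p['Sm'] + p['Ss'] * np.cos(2.0 * np.pi * t / p['Spd'])
    dw = (p['c2'] * x2 - p['Od'] * w + S) / p['Om']
    return np.array([dx1, dx2, dx3, dw])

def integrate(state0, nsteps, p, dt=0.01, gamma=0.25):
    """Leap-frog time stepping with a Robert-Asselin filter (coefficient gamma)."""
    prev = np.asarray(state0, dtype=float).copy()
    curr = prev + dt * tendency(prev, 0.0, p)   # forward Euler start-up step
    for n in range(1, nsteps):
        t = n * dt
        new = prev + 2.0 * dt * tendency(curr, t, p)
        # Robert-Asselin filter damps the leap-frog computational mode
        prev = curr + gamma * (prev - 2.0 * curr + new)
        curr = new
    return curr

final_state = integrate([0.0, 1.0, 0.0, 0.0], nsteps=10_000, p=P)  # 100 TUs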
Next, we will use the simple model within a ‘twin’ experiment framework to thoroughly investigate the impact of the estimated state-parameter covariance on parameter estimation. It is worth mentioning that, among the 10 model parameters, Om, Sm, Ss and Spd define the basic features of the model ‘ocean’, and c1 is rather sensitive for the system to sustain the coupling of the chaotic ‘atmosphere’ and the low-frequency ‘ocean’. We therefore treat Om, Sm, Ss and Spd as part of the ‘dynamical core’ and c1 as a basic ‘physical scheme’ of this simple system; these will not be adjusted by observations in the parameter estimation experiments that follow.

Fig. 1.   

Time series of (a) the fast ‘atmospheric’ variable (x2) and (b) the slow ‘oceanic’ variable (w) in the spinup run of the simple coupled model with the standard values of parameters described in Section 2.2, starting from (x1, x2, x3, w)=(0,1,0,0). The dotted-black lines are results of a de-coupled version of the model by setting c1=c2=0 in eq. (3), by which the ‘atmospheric’ model is degraded to the original Lorenz model (Lorenz, 1963). The dotted-dashed lines are the ensemble standard deviation of the corresponding variable in the coupled model by adding a Gaussian noise on c2 with a standard deviation of 0.05.

2.3. ‘Twin’ experiment setup

With the standard parameter values, after spin-up, the model is integrated for another 10⁴ TUs to establish the ‘true’ model states for observations; the standard value of a parameter is then the ‘true’ solution when that parameter is estimated using the observations. The ‘observations’ are produced by sampling the ‘true’ states, i.e. taking the values of certain variable(s) at a given observational frequency and superimposing a white noise that simulates observational errors. The standard deviations of the observational errors are 2 for x1,2,3 and 0.5 for w. We will examine the impact of observational frequencies and observational types on parameter estimation by applying the ‘atmosphere’ or/and ‘ocean’ ‘observations’ at an interval of 0.1, 0.2, 0.5 or 1 TU in the assimilation procedure. The asterisks in Fig. 2 show an example of observations available every 0.1 TU.

Fig. 2.   

Time series of (a) x1,2,3 (solid, dotted and dotted-dashed) and (b) w observations (*, + and x) at an interval of 0.1 time unit (TU) and the control model integration (lines) with κ=29, after the spinup of 10⁴ TUs described in Section 2.2. The observations are taken from the ‘truth’ integration with the standard values of parameters listed in Section 2.2 where κ=28, superimposed by a white noise with a standard deviation of 2 (0.5) for x1,2,3 (w).

To produce independent initial conditions for assimilations that attempt to recover the ‘truth’ under the constraint of ‘observations’, another model spinup of 10⁴ TUs starting from (x1, x2, x3, w)=(0, 1, 0, 0) is run with κ=29 and standard values for the other nine parameters. The ensemble initial conditions are formed by superimposing a white noise on the model state at the end of this spinup. Also, we simply set the initial standard deviation of a parameter being estimated to 5% of its initial guess in this simple model case, although the initial spread could depend on the model's sensitivity with respect to the parameter in a CGCM. In addition, a free assimilation model run (with no observational constraint) starting from this state serves as a reference for evaluating any assimilation with an observational constraint; it is called the model control, or CTL. The solid lines in Fig. 2 present the model control with κ=29 while the other nine parameters remain at their standard values, showing entirely different variability from the ‘truth’ for both the ‘atmosphere’ and ‘ocean’. This assimilation setting more or less simulates the situation of using a coupled climate model to assimilate instrumental data for climate assessment and prediction initialisation, in which the assimilation model is biased and the assimilation usually starts from a state of a model simulation. Note that, as described in Section 2.1, when real instrumental data are assimilated into a CGCM, multiple bias sources exist, whereas the observing/assimilation system simulation study here limits the bias source to errors in some parameter values.
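A minimal sketch of this twin-experiment setup is given below; the ‘truth’ trajectory is assumed to be available as an array (for example from an integration like the earlier model sketch), the observation-error standard deviations (2 for x1,2,3 and 0.5 for w) and the 5% parameter spread follow the text, while the random seed and the white-noise amplitude used to perturb the initial state are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def make_observations(truth_traj, times, obs_interval=0.1, dt=0.01):
    """Sample the 'truth' trajectory (T, 4) and add white observational noise."""
    stride = int(round(obs_interval / dt))
    obs_std = np.array([2.0, 2.0, 2.0, 0.5])     # error std for x1,2,3 and w
    samples = truth_traj[::stride]
    noise = rng.standard_normal(samples.shape) * obs_std
    return times[::stride], samples + noise

def make_ensemble_ics(spinup_state, M=20, state_noise_std=0.1):
    """Perturb the end state of the biased (kappa=29) spinup to form M members.
    The perturbation amplitude is an assumed value; the text does not specify it."""
    return spinup_state + state_noise_std * rng.standard_normal((M, spinup_state.size))

def make_parameter_ensemble(default=29.0, frac_spread=0.05, M=20):
    """Prior kappa ensemble: Gaussian around the guessed default with 5% spread."""
    return default + frac_spread * default * rng.standard_normal(M)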

As in Zhang and Anderson (2003), a series of sensitivity tests with ensemble sizes of 10, 20, 50 and 100 shows no significant difference in assimilation quality once the ensemble size is greater than 20. Based on this trade-off between cost and assimilation quality, a practical ensemble size of 20, which is also applicable to a CGCM, is chosen for this study. In addition, the leap-frog time stepping requires a multiple time level adjustment scheme (Zhang et al., 2004), which is applied in all the data assimilation experiments.

3. Critical importance of a signal-enhanced background model ensemble in parameter estimation

Unlike model states, model parameters are non-observable and do not have any dynamically supported internal variability. The covariance between a parameter and the model state serves as a critical quantity to project observational information of model states onto the parameter being estimated. The state-parameter covariance is evaluated through the model state ensemble and the prior ensemble of a parameter. The prior ensemble of a parameter consists of a set of numbers produced by the filtering process at the previous analysis, which remain unchanged during the model integration. Therefore, the accuracy of the ensemble-evaluated covariance is directly determined by the signal-to-noise ratio of the estimated background model ensemble.

In this section, we use the simplest experiment to demonstrate how a signal-enhanced background model ensemble impacts parameter estimation. The experiment only uses the observations of x1 (available every 0.1 TU) to estimate the parameter κ, which is erroneously guessed as 29, while the other nine parameters retain their true values. As described in Section 2.3, starting from a set of Gaussian-perturbed κ values centred at 29 with a standard deviation of 1.45 (5% of 29), the traditional parameter estimation (TPE) described in Section 2.1 is conducted. We denote the experiment as TPE with a superscript O(x1), representing that only the x1 observations are used, and a subscript P(κ), representing that only the parameter κ is estimated. The results show that the observation-estimated ensemble of κ (green lines in Fig. 3a) does not converge to the truth (black-dotted line) even in this simple case. Next, with diagnostics on this simple TPE case, we will analyse the reasons for the failure of the parameter estimation.

3.1. The time scale to develop a reliable model ensemble starting from perturbed parameters

Using ensemble member 1 as an example, we retrace the process by which TPE fails to correct the parameter κ. In this case, the filtering computation [eq. (2)] is a product of three terms: Δκi = ρ(κ,x1) · (σκ/σx1) · Δxo1,i. Here, ρ(κ,x1) and σκ/σx1 are the correlation coefficient and the ratio of the standard deviations of the uncertainties of κ and x1 in the dynamical system, respectively. Δxo1 is the observational increment [see eqs. (2–5) of Zhang et al., 2007] resulting from the observations of x1, which basically reflects the difference between an observation and the model guess of the observation (scaled by the observational and model ensemble error variances) (see the red line, blue stars and the solid-black line in Fig. 3b). These error statistics are evaluated by the prior ensemble of κ and the observation-estimated ensemble of x1. Fig. 3c,d shows that, starting from a perturbed κ, a saturated ensemble spread for x1 (denoted as σx1) requires about 2.5 TUs of model ensemble integration, while a reliable correlation between κ and x1, ρ(κ,x1), requires at least 1 TU of model ensemble integration (the same time scale holds for the other ‘atmospheric’ variables). The time scale for the corresponding ‘oceanic’ variable is about 20 TUs.

Fig. 3.   

(a) Time series of the κ adjustments (red) and estimated κ values (green) in the first 10 ensemble members of the traditional parameter estimation using only x1 observations to estimate κ (traditional parameter estimation, TPE; see the detailed description in Section 3). The dotted- (dashed-) black line marks the truth (erroneously guessed initial) value of the κ being estimated. For the convenience of visualisation, the values of adjustment increments in TPE are imposed on the erroneously guessed initial value. (b) Time series of observational increments of x1 of member 1 (red), where the observations (asterisks) and the truth (blue-dashed) as well as the model estimate (solid-black) of x1 in member 1 are plotted as the reference. (c) Time series of the ensemble spread (standard deviation) of x1 (denoted by σx1) in TPE (red). To detect the time scale of developing a reliable ensemble spread when a model parameter is perturbed in the model system, the σx1 and σw produced by the free model ensemble integrations starting from a κ ensemble perturbed by a Gaussian noise described in Section 2.3 are also plotted by dotted-black and dotted-blue lines. (d) Time series of the correlation coefficient between κ and x1 (denoted by ρ(κ,x1)) evaluated by the prior ensemble of κ and the observation-estimated ensemble of x1 in TPE (red). The corresponding ρ(κ,w) is also plotted by the blue line to show the different time scale of developing a reliable correlation between the parameter and the model state when κ is perturbed. The transient values of ρ(κ,x1) between 1 and –1 reflect the transient nature of the estimated atmosphere ensemble between attractor lobes.

In particular, we note that within 0.5 TU of model ensemble integration, ρ(κ,x1) is especially noisy. This results in a persistent negative κ increment projected from Δxo1 during the early period of the assimilation, producing an accumulated κ adjustment of up to –12 over the first five analysis steps (see red lines in Fig. 3a). This large adjustment seriously overshoots the true value of κ, and the incorrectly estimated parameter causes the assimilation model to misfit the observations dramatically, eventually leading to divergence of the parameter ensemble (green lines in Fig. 3a) and to the model blowing up after 300 TUs.

The analyses above show that, starting from perturbed parameter values, the coupled ensemble system requires a certain time scale to develop a reliable model ensemble. Before a reliable coupled model ensemble has been established in state estimation, the filtering equation [eq. (2)] cannot correctly project observational information of state variables for parameter estimation. It is worth mentioning that in traditional data assimilation for SEO, due to the use of a very high autocorrelation (ρx1,x1 ≡ 1 in this simple system, for instance), the stability of the assimilation spinup is not as sensitive to the reliability of the developed ensemble as it is in parameter estimation.

3.2. Impact of the accuracy of estimated background model ensemble on parameter estimation

This section further investigates the impact of the accuracy of the observation-estimated model ensemble on parameter estimation, including: (1) the estimate of state-parameter covariances and (2) the representativeness of the estimated ensemble mean with respect to the observed state.

To demonstrate the role of the observation-constrained model ensemble in the estimate of state-parameter covariances, we conduct a parameter estimation test (PET) experiment, called PET0. Like TPE, PET0 uses x1 observations to estimate the parameter κ, but (1) it does not perform state estimation, only parameter estimation, and (2) parameter estimation starts at the 2000th step of model ensemble integration, by which time the model ensemble has fully developed beyond the time scale described in Section 3.1. Results show that the adjusted κ ensemble tends to approach the truth but overshoots it and eventually settles around the value of 26.5 (Fig. 4a).

Fig. 4.   

(a) Time series of the estimated κ values in the first 10 ensemble members of PET0 (green) and PET1 (red). The PET0 [PET1] experiment is the same as traditional parameter estimation (TPE) except that parameter estimation starts at the 2000th-step model ensemble integration without (with) state estimation using the x1 observations (see the detailed description in Section 3.2). (b) Time series of the values of x1 of the first 10 ensemble members in PET0 (dotted-green) and PET1 (dotted-red), and their ensemble means (solid-blue for PET0, solid-red for PET1) as well as the truth (dotted-black). (c) Time series of the three factors of the signal-to-noise ratio (rs2n,1,2,3) (see the detailed description in Section 3.2) of the estimated ensemble of x1 in PET0 (green) and PET1 (red). (d) Time series of the signal-to-noise ratio (rs2n) of the estimated ensemble of x1 (solid) and w (dotted-dashed) in PET0 (green), PET1 (red) and DAEPC (see the detailed description in Section 4.4).

To understand the results of this simple experiment, let us analyse the adequacy of eq. (2) for parameter estimation. If a numerical model is symbolically denoted as ∂xt/∂t=f(xt,β,t), a Taylor expansion of model perturbations as a function of parameter errors and initial state errors is

(4)  ∂δx/∂t ≈ [∂f(xt,β,t)/∂β]δβ + [∂²f(xt,β,t)/∂x∂β]δxδβ + higher-order terms

Equation (4) shows that any uncertainty in the model parameters (say, δβ) will be transformed into errors of the model states (∂δx/∂t) through the distortion of parameterised processes to the momentum and heat budget of a local fluid unit. At leading order, these contributions are represented by the linear operator ∂f(xt,β,t)/∂β and by the non-linear interaction with the initial state errors at each integration step. Given an observational increment on a certain model state variable (i.e. a perturbation on the model state, δx), a filtering parameter estimation attempts to retrieve the associated parameter contributions, using eq. (2) to regress the observational increment onto the model parameters through the covariance between the model state and the parameter. The success of the parameter estimation process depends entirely on how accurately the state-parameter covariance represents the terms ∂f(xt,β,t)/∂β and ∂²f(xt,β,t)/∂x∂β. The fitting of eq. (2) assumes that: (1) the model state perturbation is mainly contributed by the parameter error, i.e. the second term on the right-hand side of eq. (4) must be small, and (2) the first term on the right-hand side of eq. (4) plays a leading-order role in the relationship between Δx and Δβ. In an ensemble-based filtering algorithm, the state-parameter covariance is evaluated through the model state ensemble and the prior ensemble of the parameter being estimated. Again, because model parameters do not have any dynamically supported internal variability and the prior ensemble of a parameter does not change during the model integration, the accuracy of the ensemble-evaluated covariance is determined only by the accuracy of the model ensemble in simulating the intrinsic uncertainty of the states that the observations sample.

To analyse the adequacy of a model ensemble to represent the intrinsic uncertainty of the observation-sampled states, based on an a priori defined ‘truth’, we define an ad hoc quantity called rs2n to measure the signal-to-noise ratio of a model ensemble. rs2n consists of three factors, described below:

  1. rs2n,1: use σm/σt to measure the representation of a model ensemble for the variation of the truth states in a local time window, where σm and σt, respectively, measure the variability represented by the model ensemble and by the truth states, with σt being the standard deviation of the truth variations in the time window. σm is defined to take into account both the time mean of the ensemble spread and the fluctuations of the ensemble mean (σm0). Finally, rs2n,1 is defined so that the best (worst) representation is characterised by a value of 1 (0).
  2. rs2n,2: defined as exp(−εm0) where εm0 is the root mean square error of the ensemble mean in the time window, with a value of 1 (0) representing a perfect (worst) estimate from the ensemble mean for the truth state.
  3. rs2n,3: defined as the correlation coefficient between the ensemble mean and the truth state to measure the accuracy of the phase of the ensemble mean variation.

We use a 5-TU time window to evaluate rs2n,1,2,3 for PET0; an example of the ensemble of x1 is shown in Fig. 4b, and the computed time series of rs2n,1,2,3 for x1 are shown in Fig. 4c. Note that, to prevent a zero value of any one of rs2n,1,2,3 from destroying the representation of rs2n by the other factors, we set 0.1 as the minimum value for each factor. While the spread of the free model ensemble represents the magnitude of the fluctuations of the truth well (solid-green in Fig. 4c), the accuracy of the ensemble mean in representing the mean state and variation phase of the truth is close to 0 (dotted and dotted-dashed green lines), resulting in a very low rs2n (solid-green in Fig. 4d) for the model state ensemble in PET0. This says that the x1-κ covariance evaluated by the unconstrained model state ensemble is not adequate to represent ∂f(x,κ,t)/∂κ and ∂²f(x,κ,t)/∂x∂κ, so the resulting parameter adjustment is unreliable.
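The following Python sketch computes the three factors for one variable over a single time window. The exact formula for rs2n,1 and the combination of the three factors into rs2n (taken here as their product) are not fully specified in the text, so this should be treated as an illustrative assumption rather than the authors' exact diagnostic.

import numpy as np

def rs2n_factors(ens, truth, floor=0.1):
    """Signal-to-noise factors of a model ensemble within one time window.

    ens   : (M, T) ensemble of one variable over the window
    truth : (T,)   truth trajectory over the same window
    """
    ens_mean = ens.mean(axis=0)
    sigma_t = truth.std()                               # variability of the truth
    # sigma_m combines the time-mean ensemble spread and ensemble-mean fluctuations
    sigma_m = np.hypot(ens.std(axis=0).mean(), ens_mean.std())
    r1 = min(sigma_m / sigma_t, sigma_t / sigma_m)      # assumed form; best value = 1
    r2 = np.exp(-np.sqrt(np.mean((ens_mean - truth) ** 2)))   # exp(-rmse of the mean)
    r3 = np.corrcoef(ens_mean, truth)[0, 1]             # phase accuracy of the mean
    r1, r2, r3 = (max(v, floor) for v in (r1, r2, r3))  # 0.1 minimum per factor
    return r1, r2, r3, r1 * r2 * r3                     # rs2n as the product (assumed)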

As a proof-of-concept study case, we examine the rs2n,1,2,3 and rs2n values for the state estimation with the x1 observations and find that, as the model ensemble is constrained by the observations (dotted-red lines in Fig. 4b), the signal-to-noise ratio of the x1-κ covariance is enhanced dramatically (see red lines in Fig. 4c,d). Then, we rerun the parameter estimation test case as in PET0 but after 20 TUs of state estimation (200 steps of data assimilation), denoted as PET1. Unlike the diverged κ estimation in PET0, the estimated κ ensemble in PET1 gradually converges to the truth after about 50 steps of parameter correction (dotted-red lines in Fig. 4a). We also compare PET1 to an experiment in which, after 20 TUs of model ensemble spinup (without observational constraint), the x1 observations are used to conduct state estimation and parameter correction simultaneously (denoted as PET2). We find that, although the estimated κ ensemble tends to approach the truth after a series of big adjustments, PET2 produces larger errors for both state estimation and parameter correction, due to the large adjustment shocks at the beginning arising from the pure ensemble spinup without assimilating data. Furthermore, if all ‘atmospheric’ and ‘oceanic’ observations from the observing system described in Section 2.3 are used, the signal-to-noise ratio of state-parameter covariances evaluated from the observation-constrained model ensemble is enhanced further (compare blue lines to red lines in Fig. 4d), and the resulting parameter estimation converges more efficiently (see the DAEPC case in Section 4.4).

The analyses above indicate that a sufficiently observation-constrained model state ensemble is of critical importance for enhancing the signal-to-noise ratio of parameter estimation, by providing a signal-dominant state-parameter covariance and an ensemble mean that represents the observed state well. These results suggest that when a CGCM is combined with instrumental data for climate assessment and prediction initialisation, choosing an assimilation initial state closer to observations (a climatological state, for instance) may help shorten the assimilation spinup, because a model simulation may already have drifted away from the climatology due to the model bias.

Next, we will design a DAEPC by delaying the parameter estimation until the model ensemble has been sufficiently constrained by observations.

4. A data assimilation scheme for enhancive parameter correction

4.1. Scheme design

Based on the analyses of Section 3, as a particular application of the adaptive signal processing technique (e.g. Kulhavy, 1993; Tao, 2003) with Bayes' theorem [eq. (1)], we design DAEPC in this section. Different from standard ensemble filtering parameter estimation, DAEPC facilitates parameter estimation using the covariances between model states and parameters only after state estimation reaches a ‘quasi-equilibrium’. The ‘quasi-equilibrium’ of state estimation refers to a state estimation period in which the model ensemble with perturbed parameters has been sufficiently constrained by observations and has therefore become more controlled by parameter errors, as discussed in Sections 3.1 and 3.2, so that the background model ensemble is signal dominant for effective parameter estimation.

DAEPC can be numerically implemented in the following four steps:

Step 1: Same as Step 1 of TPE described in Section 2.1.

Step 2: Sequentially apply the observations in each component to the coupled model system through an ensemble filter for state estimation until the state estimation reaches a ‘quasi-equilibrium’ (see Section 4.2). Denote the ‘quasi-equilibrium’ time as τ and the analysed model state as xa.

Step 3: After the state estimation is finished at time t, compute the covariance between the initial parameter ensemble (set in Step 1) and the model-estimated observation ensemble derived from xa; apply the observational increment with this covariance to update the parameter ensemble using eq. (2). Denote the updated parameter ensemble as βu, which is the prior ensemble of the parameter for the next cycle of parameter correction.

Step 4: Repeat Step 3 and check the spread of the updated ensemble for each parameter. Without internal variability, after a few steps of adjustment, the ensemble of a parameter may rapidly lose its spread. If the prior ensemble spread is too small, an inflation coefficient α (see Section 4.3) is applied to enlarge the prior ensemble of a parameter as β̃i = β̄ + α(βi − β̄), so that the parameter ensemble can still be adjusted by new observations, where an over bar represents the ensemble mean and a tilde represents the inflated ensemble. Loop Step 4 until the adaptive data assimilation has used all available observations.
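A skeletal Python version of this four-step procedure is sketched below. The callables state_estimation_step, observation_increment, parameter_increment and quasi_equilibrium_reached are placeholders for the ensemble filter components described in Sections 2.1 and 4.2; only the delayed activation of parameter correction and the spread inflation of Step 4 are spelled out, and the structure is an assumption rather than the authors' code.

import numpy as np

def inflate(beta, alpha):
    """Step 4 inflation: widen the prior parameter ensemble about its mean."""
    return beta.mean() + alpha * (beta - beta.mean())

def daepc_cycle(ensemble, beta, observations, alpha, min_spread,
                state_estimation_step, observation_increment, parameter_increment,
                quasi_equilibrium_reached):
    """One pass through the DAEPC loop (filter components supplied by the caller)."""
    estimating_parameters = False
    for obs in observations:
        # Step 2: always update the state estimate with the new observation.
        ensemble = state_estimation_step(ensemble, obs)
        # Activate parameter correction only after the 'quasi-equilibrium' is reached.
        if not estimating_parameters and quasi_equilibrium_reached():
            estimating_parameters = True
        if estimating_parameters:
            # Step 3: project the observational increment onto the parameter ensemble.
            dy = observation_increment(ensemble, obs)
            beta = beta + parameter_increment(dy, beta, ensemble)
            # Step 4: inflate a collapsed parameter ensemble so it stays adjustable.
            if beta.std() < min_spread:
                beta = inflate(beta, alpha)
    return ensemble, beta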

4.2. Detection of state estimation ‘quasi-equilibrium’ in a coupled system with multiple time scale variability

In practice, no pre-defined truth exists, so the diagnostic quantities discussed in Section 3.2 are not accessible. Instead, we estimate the time scale for state estimation to reach a ‘quasi-equilibrium’ by examining changes in the amplitude of the analysis adjustments. Figure 5 shows an example of the time series of the norm of the model state ensemble adjustments for each coupled model variable when the observations of x1 are assimilated into the model ensemble. Consistent with the relatively long (short) time scale of ‘oceanic’ (‘atmospheric’) variability, a dramatic reduction of the adjustments of the ‘oceanic’ (‘atmospheric’) variables is observed during the first five (two) TUs of assimilation, representing the direct observational constraint in the assimilation. However, the ‘atmosphere-ocean’ interaction through exchanged fluxes between the ‘atmosphere’ and ‘ocean’ refines the state estimation and continuously reduces the adjustment amount, especially during the transition periods between attractor lobes (shown as spikes of the adjustment norms in Fig. 5).

Fig. 5.   

Time series of the root mean square (Rms) of ensemble analysis increments of (a) x1,2,3 and (b) w, normalised by the ensemble spread and climatological standard deviation (see the context of Section 4.4) produced by the ensemble filtering state estimation using x1 observations at every 0.1 time unit (TU). The thick dashed line marks an estimated time scale for state estimation to reach a ‘quasi-equilibrium’.

In general, to ensure a high signal-to-noise ratio for the ensemble-evaluated state-parameter covariances in a coupled system with multiple time scale variability, we may use the time scale by which the norm of analysis increments of the slowly varying component no longer systematically decreases to represent the time of state estimation ‘quasi-equilibrium’. For example, in the simple case shown in Fig. 5, we may start to use the x1 observations to estimate the model parameters after 20 TUs of state estimation (marked as ‘State Estimation Quasi-Equilibrium’), by which time the direct observational constraint has mostly been accomplished and the associated dynamical adjustments no longer dramatically improve the state estimation. It is worth mentioning that the time scale of state estimation ‘quasi-equilibrium’ has some dependence on the observing system used in the assimilation. For example, if the full observing system described in Section 2.3 is used, the direct observational constraint time scales for both the ‘atmosphere’ and ‘ocean’ become shorter, and the corresponding time scale of state estimation ‘quasi-equilibrium’ is shorter too. However, test experiments show that parameter estimation is robust as long as the state estimation has gone through the direct observational constraint, although the convergence is a little faster if parameter estimation is activated a little later. For simplicity, for all DAEPC experiments in this study, we use 20 TUs as the time scale of state estimation ‘quasi-equilibrium’.
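A simple way to automate this detection is to monitor a running mean of the slow-component analysis-increment norm and declare ‘quasi-equilibrium’ once it stops decreasing systematically. The sketch below is one such heuristic, with the window length and tolerance chosen arbitrarily for illustration; the study itself simply fixes the threshold at 20 TUs.

import numpy as np

def quasi_equilibrium_step(increment_norms, window=50, tol=0.01):
    """Return the first analysis step after which the windowed mean of the
    slow-component increment norm no longer decreases by more than `tol`."""
    norms = np.asarray(increment_norms, dtype=float)
    means = np.convolve(norms, np.ones(window) / window, mode='valid')
    for i in range(1, len(means)):
        if means[i - 1] - means[i] < tol:   # no further systematic decrease
            return i + window - 1
    return len(norms) - 1                   # fall back to the last step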

The results above suggest that it is necessary to leave at least 5 yr for coupled assimilation spinup (Zhang et al., 2007, 2009) before the global climate observing system is used to estimate coupled model parameters with a CGCM.

4.3. Inflation of prior parameter ensemble based on model sensitivities to parameters

A model sensitivity study with respect to parameters is an important step towards understanding the role of each parameter in the model and implementing parameter estimation (e.g. Tong and Xue, 2008a, 2008b). In particular, the sensitivity of the model to a parameter is key to determining the inflation of the prior ensemble of that parameter, which is an indispensable part of ensemble-based parameter estimation (e.g. Annan and Hargreaves, 2004; Annan et al., 2005; Aksoy et al., 2006a, 2006b; Evensen, 2007). Here, the model sensitivity to each parameter is assessed by examining the temporal evolution of the ensemble spread of the model states when the examined parameter is perturbed with a Gaussian white noise about its default value at the initial time. To do so, we iteratively integrate the model ensemble for each parameter that is perturbed by Gaussian random noise superimposed on its default value (described in Section 2.3). Five percent of the default value is used as the standard deviation of the Gaussian noise. The model ensemble for each parameter is integrated for 2000 TUs starting from the initial conditions created by the spinup described in Section 2.3, and the resulting ensemble spread for the two model components is shown in Fig. 6. Generally, due to the different time scales in the ‘atmosphere’ and ‘ocean’, the model shows a fast (slow) response to ‘atmospheric’ (‘oceanic’) parameters, but the ensemble spread resulting from different parameter perturbations is indistinguishable after 5 (20) TUs of integration for the ‘atmosphere’ (‘ocean’).

Fig. 6.   

Time series of (a) the averaged ensemble standard deviations of x1,2,3 and (b) the ensemble standard deviation of w when each of five model parameters is perturbed by a Gaussian noise with a standard deviation of 5% of its default value.

At each step of parameter estimation in DAEPC, the prior ensemble of a parameter is inflated by the following sensitivity-based scheme. Usually, the inflation amplitude of a parameter ensemble is inversely proportional to the model's sensitivity with respect to the parameter. We first set the inflation coefficient αl in β̃l = β̄l + αl(βl − β̄l) as αl = 1 + (α0σl,0)/σl, where βl represents the prior ensemble of the lth parameter, β̄l is its mean and β̃l is the inflated version. Here, σl represents an atmospheric (oceanic) sensitivity if only atmospheric (oceanic) parameters are estimated. If all atmospheric and oceanic parameters are estimated simultaneously, σl is the average of the atmospheric and oceanic sensitivities. We then conduct a few trial-and-error tests and find α0=0.4 when the observational frequency is 0.1 TU (the value has some dependence on the observational frequency). This means that the prior ensemble of a parameter is enlarged to 0.4/σl times its initially guessed standard deviation σl,0 if the prior ensemble spread is smaller than this amount. This inflation scheme based on model sensitivities to parameters has been applied to a simple decadal prediction model in which all 16 model parameters are optimised using ‘atmospheric’ and ‘oceanic’ observations for improving decadal predictions (Zhang, 2011a, 2011b).
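A compact sketch of this sensitivity-based inflation is given below; σl is assumed to be a pre-computed sensitivity (the ensemble spread of model states from Fig. 6) for the lth parameter, while the 5% initial spread and α0=0.4 follow the text. The numerical value of σl in the usage example is an arbitrary assumption.

import numpy as np

def sensitivity_inflation(beta_l, sigma_l0, sigma_l, alpha0=0.4):
    """Inflate the prior ensemble of one parameter with alpha_l = 1 + alpha0*sigma_l0/sigma_l.

    beta_l   : (M,) prior ensemble of the l-th parameter
    sigma_l0 : initially guessed spread of the parameter (5% of its default here)
    sigma_l  : model sensitivity to the parameter (atmospheric, oceanic or averaged)
    """
    alpha_l = 1.0 + alpha0 * sigma_l0 / sigma_l
    mean = beta_l.mean()
    return mean + alpha_l * (beta_l - mean)

# Illustrative usage for kappa (default 28, 5% spread), assuming sigma_l = 2.0:
beta_kappa = 28.0 + 1.4 * np.random.default_rng(2).standard_normal(20)
beta_kappa_inflated = sensitivity_inflation(beta_kappa, sigma_l0=1.4, sigma_l=2.0)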

4.4. Proof-of-concept case studies

In this section, we conduct a series of DAEPC experiments and compare the results to the corresponding TPE results where applicable, as summarised in Table 1. Table 1 shows that all DAEPC experiments successfully retrieve the truth value of the parameter being estimated from various different observations, with different convergence time scales. Generally, the ‘atmospheric’ parameter converges faster than the ‘oceanic’ parameter, and an ‘atmospheric’ or ‘oceanic’ parameter converges faster when observations taken from the same model component are used. Note that, of the six examined cases, there is only one in which TPE converges, and it does so much more slowly than DAEPC: the case in which all observations are used to estimate the parameter κ.

To gain more insight into the performance of DAEPC relative to TPE, here we analyse DAEPC and TPE in more detail. While the DAEPC parameter correction converges almost immediately (within five TUs) once it starts after the ‘quasi-equilibrium’ of state estimation, TPE does not converge within 400 TUs (Fig. 7a). After 40 TUs of parameter correction, the error of the DAEPC-estimated κ is reduced by 99.5%. With the promptly corrected parameter, DAEPC further reduces the Rms error of the estimated ‘atmospheric’ (‘oceanic’) states by 63% (88%) over the period of 100–200 TUs compared to the error of SEO, while the corresponding TPE error is larger than that of SEO due to the diverged parameter in the assimilation model (see Fig. 7b,c). In addition, unlike the errors of state variables in SEO and TPE that wiggle with time, the errors in DAEPC decrease consistently towards zero (Fig. 7b,c) with the corrected parameter. This means that, while DAEPC parameter estimation corrects the errors of parameters, it also helps constrain the computational variability of the state estimation.

Fig. 7.   

Time series of (a) the ensemble mean of the κ values estimated by traditional parameter estimation (TPE) (green) and the data assimilation scheme for enhancive parameter correction (DAEPC) (red) and (b) the errors of the ensemble mean of x2 and (c) the errors of the ensemble mean of w produced by state estimation only (SEO) (blue), TPE (green) and DAEPC (red) using all x1,2,3 and w observations at every 0.1 time unit (TU).

We also examine the importance of DAEPC in correctly estimating the attractor pattern (the probability distribution of weather events) when a serious bias arising from a large parameter error exists in the assimilation model. Fig. 8 gives an example in which an assimilation model set with an erroneously guessed σ (11.95) is used to recover the ‘truth’ (with σ=9.95) by assimilating all x1,2,3 and w observations (every 0.1 TU) into the model. It is clear that, without parameter correction, SEO fails to correct the attractor over some periods (green), although it greatly reduces the Rms error of the model states (by 30% in this case), while DAEPC reconstructs the ‘true’ attractor (dotted-black) very precisely with the effective parameter correction (red).

Fig. 8.   

Variation of x3 in x1 space over the period of 110–115 time units (TUs) in the assimilations of data assimilation scheme for enhancive parameter correction (DAEPC) (red) and state estimation only (SEO) (green) when the default value of σ is set to be 11.95 in the assimilation model. The assimilations try to recover the ‘true’ attractor produced by the model integration with σ=9.95 (dotted-black), using the observations that sample the ‘truth’ every 0.1 TU. The blue line is the free model run with σ=11.95 using the same initial conditions as in the assimilations of DAEPC and SEO.

We also study the sensitivity of DAEPC to the observational frequency by repeating the DAEPC experiment with observations at intervals of 0.2, 0.5 and 1 TU. Generally, for a reduced observational frequency, the convergence of parameter estimation requires a larger inflation coefficient applied to the prior parameter ensemble. When the observational interval changes from 0.1 to 1 TU, the correction of the ensemble mean of the parameter κ decreases from 99% (the error is reduced from 0.78 to 0.005) to 90% (the error is reduced from 0.78 to 0.07). In all cases, DAEPC further reduces the error of the estimated states relative to SEO, but as the observational frequency decreases, the errors of both the estimated model states and the parameters increase.

5. DAEPC with biased ‘dynamical core’ and ‘physical scheme’

As discussed in Section 2.1, the model bias in a CGCM also includes the misfits of the dynamical core and physical schemes. With this simple model, we simulate the misfits of the ‘dynamical core’ and ‘physical scheme’ by setting values for Om, Sm, Ss, Spd (representing the ‘dynamical core’) and c1 (representing the ‘physical scheme’) in the assimilation model that differ from the values used in the truth model that produces the ‘observations’. We can then examine the impact of the ‘dynamical core’ and ‘physical scheme’ biases on DAEPC, which may provide some insights into the application to CGCMs.

5.1. Perfect ‘dynamical core’ and ‘physical scheme’

We first assume that the assimilation model bias is caused only by the erroneously set parameters σ, κ, b, Od and c2 (see Table 2), i.e. the ‘dynamical core’ and ‘physical scheme’ in the assimilation model are perfect (the values of Om, Sm, Ss, Spd and c1 are set to the ‘truth’).

With the initial setting for the ensemble of each parameter given in columns 4 and 5 of Table 2, DAEPC activates its parameter estimation at the 100th analysis step, when state estimation reaches a ‘quasi-equilibrium’; from then on, the entire set of adjustable model parameters is corrected simultaneously using observations at every 0.2 TU. Generally, all parameters converge to the ‘truth’, albeit with different time scales (Fig. 9). In this example, somewhat unexpectedly, the convergence of the ‘atmospheric’ parameter κ is much slower than that of the ‘oceanic’ parameter Od (Fig. 9). This implies that the inflation used in the parameter correction may need to be further tuned according to the different sensitivities of the ‘atmosphere’ and ‘ocean’. With the observation-updated parameters, after 1500 steps of adjustment (300 TUs of model integration), the accuracy of the estimated state (dashed-red line in Fig. 10) increases dramatically, with a diminishing assimilation error. The wiggling behaviour of the SEO errors (dashed-green) is eliminated by DAEPC almost completely, i.e. the compensation of state variability for the parameter errors is removed through the parameter correction. While SEO reduces the averaged Rms error of the ‘atmosphere’ and ‘ocean’ states from 7.76 to 4.17 (by 46%) relative to the model control, DAEPC further reduces the Rms error to 0.23 (by 95% from SEO). In this experiment with a perfect model ‘dynamical core’ and ‘physical scheme’, DAEPC almost completely eliminates the mean error of the model states (from 0.15 of SEO to 0.008) (Table 3) through the parameter optimisation. Also, with the parameter optimisation, DAEPC significantly improves the ensemble assimilation consistency, measured by the ratio of the Rms error of the ensemble mean to the ensemble spread (denoted by σe in Table 3) compared to its expected value √[(M+1)/M], where M is the ensemble size (20 in this case) (see Murphy, 1988; Snyder and Zhang, 2003). Compared to the DAEPC ratio of 1.12, the SEO ratio of 15.7 underlines the fact that, for a coupled model biased by the use of erroneous parameter values, it is difficult for a filter that only performs state estimation to converge without parameter optimisation using observations.
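The consistency diagnostic can be computed as below; the expected value of the ratio for a statistically consistent M-member ensemble, √[(M+1)/M], follows the reconstruction given in the text above, while the time-averaging choice for the spread is an assumption.

import numpy as np

def consistency_ratio(ens, truth):
    """Ratio of the Rms error of the ensemble mean to the ensemble spread.

    ens   : (M, T) ensemble of one variable over time
    truth : (T,)   verifying truth
    """
    rmse = np.sqrt(np.mean((ens.mean(axis=0) - truth) ** 2))
    spread = np.mean(ens.std(axis=0, ddof=1))
    return rmse / spread

M = 20
expected = np.sqrt((M + 1) / M)   # about 1.02 for a 20-member ensemble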

Fig. 9.   

Time series of the errors of the ensemble mean of parameters σ, κ, b, c2 and Od that are simultaneously estimated using observations of all x1,2,3, and w at every 0.2 time unit (TU).

Fig. 10.   

Time series of the errors of the ensemble mean of w in the (a) first and (b) second thousand time units (TUs) produced by data assimilation scheme for enhancive parameter correction (DAEPC) (red) and state estimation only (SEO) (green) using a perfect/biased (dashed/solid) model, with observations at every 0.2 TU. In DAEPC, the whole set of five model parameters (see Table 2) are simultaneously estimated using the observations. The dots denote the initial conditions from which 20 forecasts start in Figs. 11 and 12.

5.2. Biased ‘dynamical core’ and ‘physical scheme’

Resetting (Om, Sm, Ss, Spd)=(11, 10.1, 1.01, 10.1), we assume that the ocean ‘dynamical core’ in the assimilation model is biased with respect to the ‘observations’, with a heat capacity, external forcings and seasonal cycle different from the ‘truth’; in particular, the ‘ocean’ heat capacity is significantly biased (10% stronger). Also, to impose a systematic misfitting of the ‘physical’ coupling scheme in the assimilation model, we use 0.101 as the value of c1 in this biased case. The experiment in Section 5.1 is rerun with this biased ‘dynamical core’ and ‘physical scheme’ setting.
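As a concrete illustration of this experimental setting, a minimal sketch of the perfect versus biased configuration is given below. Only the biased values are quoted from the text; the ‘truth’ values and the interpretation attached to each symbol are illustrative assumptions (Table 2 and the model description hold the actual definitions).

```python
# Minimal sketch of the perfect vs. biased assimilation-model configuration.
# The "truth" values and the meaning of each symbol below are assumptions for
# illustration (only the biased values are quoted from Section 5.2).

truth_config = {         # assumed truth-model values
    "Om": 10.0,          # 'ocean' heat capacity (11 is ~10% stronger)
    "Sm": 10.0,          # external forcing, annual mean (assumed)
    "Ss": 1.0,           # external forcing, seasonal amplitude (assumed)
    "Spd": 10.0,         # seasonal-cycle period (assumed)
    "c1": 0.1,           # 'physical' coupling coefficient (assumed)
}

biased_config = dict(truth_config)
biased_config.update({   # misfit quoted in Section 5.2
    "Om": 11.0,
    "Sm": 10.1,
    "Ss": 1.01,
    "Spd": 10.1,
    "c1": 0.101,         # systematic misfit of the coupling scheme
})

if __name__ == "__main__":
    for key in truth_config:
        print(f"{key:>4}: truth={truth_config[key]:<6} biased={biased_config[key]}")
```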

The bias of the ‘dynamical core’ and ‘physical scheme’ has a small impact on DAEPC but a large impact on SEO, especially for the ‘atmospheric’ state estimation. For example, while the Rms error of the ‘atmospheric’ states increases by only 10% (from 0.46 to 0.51) from the perfect DAEPC to the biased DAEPC, it increases by 30% from the perfect SEO to the biased SEO. Interestingly, while SEO blends the model bias into the estimated model variability, DAEPC dramatically reduces the magnitude of such computational modes caused by the misfitting of the ocean ‘dynamical core’ (compare the solid-red line to the solid-green line in Fig. 10). Also, owing to the compensation of parameter errors for the model bias, the final values of the estimated parameters are slightly less accurate in the biased model case (column 7 of Table 2) than in the perfect model case (column 6 of Table 2). Again, with the parameter correction, DAEPC significantly improves the ensemble assimilation consistency, although the consistency is degraded slightly by the biased ‘dynamical core’ and ‘physical scheme’ setting compared to the perfect setting (see the lower panel of column 5 of Table 3).

6. Impact of DAEPC on ‘climate’ prediction

We now examine the impact of parameter correction on model prediction. We launch 20 forecasts (each integrated forward for up to 50 TUs) from initial conditions selected 50 TUs apart during 1000–2000 TUs (see the red and green dots in Fig. 10b). In this twin-experiment framework, we evaluate forecast skill using the anomaly correlation coefficient (ACC) and root mean square (Rms) error of the forecasts verified against the ‘truth’. In the real world with a CGCM and instrumental data, however, the variance of the innovation of forecasts with respect to observations may be a more appropriate quantity for comparing the forecast skill of different data assimilation schemes (see e.g. Fukumori et al., 1999).
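As a minimal sketch (not the authors' code) of how these two verification metrics are computed in a twin experiment, where the truth is known at every lead time:

```python
import numpy as np

def rms_error(forecast, truth):
    """Root-mean-square error of the ensemble-mean forecast over all cases."""
    return np.sqrt(np.mean((forecast - truth) ** 2))

def anomaly_correlation(forecast, truth, climatology):
    """Anomaly correlation coefficient: correlation of forecast and truth
    anomalies about a common climatology, computed over the forecast cases."""
    fa = forecast - climatology
    ta = truth - climatology
    return np.sum(fa * ta) / np.sqrt(np.sum(fa ** 2) * np.sum(ta ** 2))

# Example usage: 20 forecast cases verified at a single lead time.
rng = np.random.default_rng(0)
truth = rng.normal(size=20)
forecast = truth + 0.3 * rng.normal(size=20)   # an imperfect forecast
clim = 0.0                                     # assumed climatological mean
print(rms_error(forecast, truth), anomaly_correlation(forecast, truth, clim))
```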

6.1. ‘Weather’ forecast skill

With the improved initial conditions, DAEPC improves the ‘weather’ forecast (within two TUs for x1,2,3) dramatically, as evidenced by a much higher ACC and lower Rms error than the SEO forecast (Fig. 11a,b). If an ad hoc ACC value of 0.6 is used to characterise the time scale of a valid weather forecast (Hollingsworth et al., 1980), DAEPC extends the valid ‘atmospheric’ forecast to twice the length. The forecast skills produced by DAEPC and SEO become indistinguishable after a lead time of three TUs, which is roughly the predictability time scale of the ‘atmospheric’ signals in this simple model. A comparison of the forecast skills between the perfect and biased settings of the model ‘dynamical core’ and ‘physical scheme’ shows that the biased setting has a greater impact on the SEO ‘weather’ forecast than on that of DAEPC.
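The ‘valid forecast’ length quoted here is simply the longest lead time for which the ACC stays at or above the ad hoc 0.6 threshold. A small illustrative helper makes this explicit; the ACC curves below are synthetic placeholders, not values from Fig. 11.

```python
import numpy as np

def valid_lead_time(acc_by_lead, lead_times, threshold=0.6):
    """Return the longest lead time whose ACC is still >= threshold,
    scanning outward from the shortest lead (illustrative helper)."""
    valid = 0.0
    for lead, acc in zip(lead_times, acc_by_lead):
        if acc >= threshold:
            valid = lead
        else:
            break
    return valid

# Hypothetical ACC curves versus lead time (in TUs), for illustration only.
leads = np.arange(0.2, 3.2, 0.2)
acc_seo = np.exp(-leads / 0.8)     # skill decays quickly
acc_daepc = np.exp(-leads / 1.6)   # skill decays about half as fast
print(valid_lead_time(acc_seo, leads), valid_lead_time(acc_daepc, leads))
```

With these synthetic curves the DAEPC-like forecast stays valid twice as long as the SEO-like one, mirroring the behaviour described above.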

Fig. 11.   

Variations of (a,c) the Rms errors and (b,d) anomaly correlation coefficients (ACCs) of the forecasted ensemble mean of x1 (panels a, b) and w (panels c, d) with the forecast lead time based on 20 forecast cases initialised from state estimation only (SEO) and DAEPC using perfect/biased ‘dynamical core’ and ‘physical scheme’ shown in Fig. 10.

The phenomenon above can be understood by the compensation effect between state errors and parameter errors. In SEO, the compensation of state errors for erroneous parameters creates extra errors in initial conditions. The incorrect parameters and biased ‘dynamical core’ and ‘physical scheme’ acting on larger initial errors significantly restrict the forecast capability of the model. In DAEPC, however, the corrected parameters in the assimilation model create more accurate model states and reduce the bias of the initial conditions. Furthermore, the corrected parameters in the prediction model acting on the improved initial conditions greatly limit the adverse impact of the model bias on the forecast.

6.2. ‘Climate’ prediction skill

We first examine the prediction of the ‘climate’ variable w in the experiments using the perfect ‘dynamical core’ and ‘physical scheme’. With very small initial condition errors (red dots on the dashed-red line of Fig. 10b) and observation-estimated parameters, all the DAEPC predictions of w are nearly perfect within two TUs (Fig. 11c,d). The improved parameters applied to the prediction model, starting from more accurate initial conditions, extend the period of higher ACC and lower Rms error in DAEPC from about 5 TUs for SEO to about 15 TUs (see the dotted-dashed and dashed lines in Fig. 11c,d).

Compared to the SEO forecasts, the mean and Rms errors of the DAEPC forecasts are, overall, reduced over the entire 50-TU lead time (from 0.57 to 0.35 for the mean error and from 1.46 to 1.29 for the Rms error) (see also Fig. 11c). The forecast Rms error is reduced most dramatically in the first 15 TUs (from 1.4 to 1), over which the DAEPC forecasts maintain an ACC higher than 0.6, whereas the SEO forecasts exceed this value only in the first four TUs (see the dashed line in Fig. 11d). For short-term forecasts (within four TUs), DAEPC raises the averaged forecast ACC from 0.73 for SEO to 0.91. For long-term predictions (up to 15 TUs), DAEPC raises the averaged forecast ACC by 53% (from 0.47 to 0.72). DAEPC thus roughly triples the ‘climate’ predictability in this case. Fig. 11d also shows that after 15 TUs the SEO (dashed) and DAEPC (dotted-dashed) ACCs both fall below 0.6 and become indistinguishable, suggesting the limit of the potential predictability of the ‘climate’ in this simple model.

Again, the biased ‘dynamical core’ and ‘physical scheme’ have a much smaller impact on the ‘climate’ predictions made with the DAEPC initialisation and estimated parameters than on those made with the SEO initialisation (compare the difference between the solid and dotted-dashed lines with the difference between the dotted and dashed lines in Fig. 11c).

6.3. Relative importance of initial conditions and parameters for ‘weather’ forecast and ‘climate’ prediction

To evaluate the relative importance of initial conditions and parameters for improving ‘weather’ forecasts and ‘climate’ predictions, we conduct forecast experiments with different combinations of initial conditions and model parameters. The forecast experiments described at the beginning of Section 6 are repeated using the DAEPC-produced (SEO-produced) initial conditions but applying uncorrected (DAEPC-corrected) parameters to the biased prediction model described in Section 5.2 (the four combinations are sketched below). Results show that the more accurate initial conditions produced by DAEPC have a strong impact on both the ‘weather’ forecast (within two TUs for x1,2,3) and the ‘climate’ prediction (up to 15 TUs for w) (compare the dotted-dashed to the dashed lines in Fig. 12). Applying the observation-estimated parameters in the prediction model is particularly important for producing better climate predictions (compare the solid to the dotted-dashed lines in Fig. 12b). If the DAEPC-produced initial conditions are used, the corrected parameters have a strong impact on the ‘weather’ forecasts, while this impact weakens when the SEO-produced initial conditions are used (compare the difference between the solid and dotted-dashed lines with the difference between the dashed and dotted lines in panel a). In contrast, the impact of the parameters on the ‘climate’ predictions is only weakly sensitive to the accuracy of the initial conditions (panel b).
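The forecast configurations referred to above form a simple 2 × 2 cross of initial-condition source and parameter set. A schematic enumeration is given below; the forecast function is a placeholder, not an actual forecast driver.

```python
from itertools import product

# The four forecast configurations compared in Section 6.3 and Fig. 12:
# initial conditions from SEO or DAEPC, crossed with uncorrected or
# DAEPC-corrected parameters applied to the (biased) prediction model.
initial_conditions = ["SEO", "DAEPC"]
parameters = ["uncorrected", "DAEPC-corrected"]

def run_forecasts(ic_source, param_source):
    """Placeholder for launching the 20 forecasts of Section 6 with the
    chosen initial conditions and parameter set (not implemented here)."""
    return f"forecast ensemble ({ic_source} ICs, {param_source} parameters)"

for ic, par in product(initial_conditions, parameters):
    print(run_forecasts(ic, par))
```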

Fig. 12.   

Variations of anomaly correlation coefficients (ACCs) of the forecasted ensemble mean of (a) x1 and (b) w with the forecast lead time based on 20 forecast cases initialised from state estimation only (SEO)-produced initial conditions with data assimilation scheme for enhancive parameter correction (DAEPC)-corrected parameters applied to the prediction model (dashed) and DAEPC-produced initial conditions with uncorrected parameters applied to the prediction model (dotted-dashed). The prediction model is set with a biased ‘dynamical core and physical scheme’ described in Section 5.2. The original ACCs produced by SEO and DAEPC shown by the dotted and solid lines in Fig. 11 are also marked here as references. The thin dotted black lines mark a 0.6 ACC level in both panels.

7. Summary and discussions

A data assimilation scheme for enhancive parameter correction (DAEPC), modified from standard data assimilation with adaptive parameter estimation, has been designed to improve parameter estimation using observations. DAEPC performs parameter correction only after state estimation reaches a ‘quasi-equilibrium’, so that the uncertainty of model states is sufficiently constrained by observations and the covariances between parameters and model states are signal dominant, thereby facilitating effective parameter estimation. The observation-updated parameters improve the state estimation of the next cycle, and the improved estimate of model states further enhances the signal-to-noise ratio of the state-parameter covariances for the next parameter correction.
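Algorithmically, the cycle just described can be summarised by the hedged sketch below, which reflects our reading of the scheme; the filter components are stubbed out and the 100-step spin-up threshold is the one used in the simple-model experiments, not a general prescription.

```python
# Schematic DAEPC cycle (a sketch, not the authors' implementation).

def forecast(ensemble, params):                 # placeholder model integration
    return ensemble

def update_state(ensemble, obs):                # placeholder ensemble-filter analysis
    return ensemble

def update_parameters(ensemble, params, obs):   # placeholder parameter analysis
    return params

def daepc_cycle(ensemble, params, observations, n_spinup=100):
    """State estimation only until a 'quasi-equilibrium' is reached (here
    simply the first n_spinup analysis steps); afterwards the whole parameter
    array is also corrected, and the updated parameters are carried into the
    next forecast so the state-parameter covariances keep improving."""
    for step, obs in enumerate(observations):
        ensemble = forecast(ensemble, params)
        ensemble = update_state(ensemble, obs)
        if step >= n_spinup:
            params = update_parameters(ensemble, params, obs)
    return ensemble, params
```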

The data assimilation scheme for enhancive parameter correction has been implemented in a conceptual ‘atmosphere-ocean’ coupled model with an ensemble filtering algorithm (Anderson, 2001, 2003; Zhang and Anderson, 2003; Zhang et al., 2007). Numerical results show that DAEPC provides a systematic approach to estimating the whole array of coupled model parameters from observations, and produces more accurate state estimates. In particular, based on perfect and biased settings of the ‘dynamical core’ and ‘physical scheme’ of this simple model, the impacts on ‘climate’ estimation and prediction of the model bias arising from the misfitting of the dynamical core and physical schemes are tested and discussed. Through effective parameter correction, DAEPC mitigates the computational variability of the estimated states induced by model bias. With improved initial conditions and observation-estimated parameters, the model forecast skill is greatly improved and the drift of model predictions is substantially constrained. Additional forecast experiments using the DAEPC-improved initial conditions with uncorrected parameters, or the initial conditions produced by traditional state estimation with the DAEPC-corrected parameters, show that the use of observation-estimated parameters in the prediction model is particularly important for producing better long-term climate predictions.

Despite the many promising results obtained with the simple model, the low-order model has many limitations, and serious challenges remain in applying DAEPC to CGCMs. First, in a CGCM, the misfitting of complex physical processes introduces complicated model biases; how such biases affect DAEPC needs to be examined further. Second, the success of parameter correction relies on the representation of observations and on the errors of the state estimates. In a CGCM, a parameter usually takes a globally uniform value. Given that the representation of real observations and the model sensitivities with respect to parameters always have a geographic dependence, the use of sparse observations in a non-sensitive region is bound to degrade the signal-to-noise ratio of parameter estimation. Based on the results of model sensitivity studies, we may allow model parameters to vary geographically where observations are reliable and the impact of the parameter is substantial, while retaining the default value elsewhere (a schematic sketch is given below). A CGCM-like model with geographically dependent circulations would then be required to examine the impact of the geographic dependence of model sensitivities and of the observing system on parameter optimisation. Third, for this simple model, a trial-and-error procedure was used to obtain an inflation coefficient. For a CGCM, building on model sensitivity studies (see e.g. Tong and Xue, 2008a), we may need to implement an adaptive inflation correction algorithm (Anderson, 2007). In addition, we found that even in this low-order simple model some parameters require a long time to converge. Given that the observational records of the atmosphere and ocean extend backwards in time by at most one century (the sea surface temperature records may be longer), how to accelerate the convergence of parameter estimation remains an important research topic.
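A schematic sketch of the geographically dependent parameter correction suggested above is shown below; the fields, thresholds and function name are illustrative assumptions only, not part of the scheme described in this paper.

```python
import numpy as np

def masked_parameter_update(default_value, analysed_field,
                            obs_density, sensitivity,
                            obs_threshold=0.5, sens_threshold=0.1):
    """Keep the observation-estimated parameter only where observations are
    dense/reliable and the model sensitivity is substantial; retain the
    default value elsewhere (illustrative thresholds)."""
    mask = (obs_density > obs_threshold) & (np.abs(sensitivity) > sens_threshold)
    return np.where(mask, analysed_field, default_value)

# Toy 1-D "globe" with 10 grid points (all numbers hypothetical).
default = 0.1
analysed = np.full(10, 0.12)                 # observation-estimated values
obs_density = np.linspace(0.0, 1.0, 10)      # observations sparse at one end
sensitivity = np.linspace(0.3, 0.0, 10)      # sensitivity strong at the other
print(masked_parameter_update(default, analysed, obs_density, sensitivity))
```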

Acknowledgements

The authors would like to thank Drs. Tony Gordon, Andrew Wittenberg and Rym Msadek for their thorough examination and comments on an earlier version of this manuscript. Thanks also go to Dr. Isaac Held for discussions that stimulated further thoughts on this topic. The authors thank two anonymous reviewers for their thorough examination and comments, which were very useful for improving the manuscript. This research is supported by NSF Grant 0968383.

References

1. Aksoy, A., Zhang, F. and Nielsen-Gammon, J. W. 2006a. Ensemble-based simultaneous state and parameter estimation. Geophys. Res. Lett. 33, L12801.

2. Aksoy, A., Zhang, F. and Nielsen-Gammon, J. W. 2006b. Ensemble-based simultaneous state and parameter estimation in a two-dimensional sea-breeze model. Mon. Wea. Rev. 134, 2951–2970.

3. Anderson, J. L. 2001. An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev. 129, 2884–2903.

4. Anderson, J. L. 2003. A local least squares framework for ensemble filtering. Mon. Wea. Rev. 131, 634–642.

5. Anderson, J. L. 2007. An adaptive covariance inflation error correction algorithm for ensemble filters. Tellus 59A, 210–224.

6. Annan, J. D. and Hargreaves, J. C. 2004. Efficient parameter estimation for a highly chaotic system. Tellus 56A, 520–526.

7. Annan, J. D., Hargreaves, J. C., Edwards, N. R. and Marsh, R. 2005. Parameter estimation in an intermediate complexity Earth System Model using an ensemble Kalman filter. Ocean Modelling 8, 135–154.

8. Asselin, R. 1972. Frequency filter for time integrations. Mon. Wea. Rev. 100, 487–490.

9. Banks, H. T. 1992a. Control and Estimation in Distributed Parameter Systems. Frontiers in Applied Mathematics, Vol. 11. SIAM, Philadelphia, 227 pp.

10. Banks, H. T. 1992b. Computational issues in parameter estimation and feedback control problems for partial differential equation systems. Physica D 60, 226–238.

11. Borkar, V. S. and Mundra, S. M. 1999. Bayesian parameter estimation and adaptive control of Markov processes with time-averaged cost. Applicationes Mathematicae 25(4), 339–358.

12. Collins, W. D., Bitz, C. M., Blackmon, M. L., Bonan, G. B., Bretherton, C. S. and co-authors. 2006. The Community Climate System Model Version 3 (CCSM3). J. Clim. 19, 2122–2143.

13. Daley, R. 1991. Atmospheric Data Analysis. Cambridge University Press, New York, 457 pp.

14. Dee, D. P. 2005. Bias and data assimilation. Quart. J. Roy. Meteor. Soc. 131, 3323–3343.

15. Dee, D. P. and da Silva, A. M. 1998. Data assimilation in the presence of forecast bias. Quart. J. Roy. Meteor. Soc. 124, 269–295.

16. Delworth, T. L., Broccoli, A. J., Rosati, A., Stouffer, R. J., Balaji, V., Beesley, J. A. and co-authors. 2005. GFDL's CM2 global coupled climate models, Part I: Formulation and simulation characteristics. J. Clim. 19(5), 643–674.

17. Evensen, G. 1994. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res. 99, 10143–10162.

18. Evensen, G. 2007. Data Assimilation: The Ensemble Kalman Filter. Springer, Berlin, 187 pp.

19. Fukumori, I., Raghunath, R., Fu, L. and Chao, Y. 1999. Assimilation of TOPEX/POSEIDON data into a global ocean circulation model: how good are the results? J. Geophys. Res. 104, 647–665.

20. Ghil, M., Cohn, S., Tavantzis, J., Bube, K. and Isaacson, E. 1981. Applications of estimation theory to numerical weather prediction. In: Dynamic Meteorology: Data Assimilation Methods (eds. Bengtsson et al.). Springer-Verlag, New York, 139–224.

21. Hansen, J. and Penland, C. 2007. On stochastic parameter estimation using data assimilation. Physica D 230, 88–98.

22. Hollingsworth, A., Arpe, K., Tiedtke, M., Capaldo, M. and Savijarvi, H. 1980. The performance of a medium range forecast model in winter – impact of physical parameterizations. Mon. Wea. Rev. 108, 1736–1773.

23. Houtekamer, P. L. and Mitchell, H. L. 2001. A sequential ensemble Kalman filter for atmospheric data assimilation. Mon. Wea. Rev. 129, 123–137.

24. Jazwinski, A. H. 1970. Stochastic Processes and Filtering Theory. Academic Press, New York, 376 pp.

25. Kalman, R. 1960. A new approach to linear filtering and prediction problems. Trans. ASME, Ser. D, J. Basic Eng. 82, 35–45.

26. Kalman, R. and Bucy, R. 1961. New results in linear filtering and prediction theory. Trans. ASME, Ser. D, J. Basic Eng. 83, 95–109.

27. Kalnay, E. 2003. Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press, New York, 341 pp.

28. Kondrashov, D., Sun, C. and Ghil, M. 2008. Data assimilation for a coupled ocean-atmosphere model, Part II: Parameter estimation. Mon. Wea. Rev. 136, 5062–5076.

29. Kulhavy, R. 1993. Implementation of Bayesian parameter estimation in adaptive control and signal processing. The Statistician 42(4), 471–482.

30. Lorenz, E. N. 1963. Deterministic non-periodic flow. J. Atmos. Sci. 20, 130–141.

31. Molteni, F., Ferranti, L., Palmer, T. N. and Viterbo, P. 1993. A dynamical interpretation of the global response to equatorial Pacific SST anomalies. J. Clim. 6, 777–795.

32. Murphy, J. M. 1988. The impact of ensemble forecasts on predictability. Quart. J. Roy. Meteor. Soc. 114, 463–493.

33. Robert, A. 1969. The integration of a spectral model of the atmosphere by the implicit method. In: Proceedings of the WMO/IUGG Symposium on NWP. Japan Meteorological Society, Tokyo, Japan, 19–24.

34. Snyder, C. and Zhang, F. 2003. Assimilation of simulated Doppler radar observations with an ensemble Kalman filter. Mon. Wea. Rev. 131, 1663–1677.

35. Tao, G. 2003. Adaptive Control Design and Analysis. John Wiley & Sons, Hoboken, NJ, 640 pp.

36. Tong, M. and Xue, M. 2008a. Simultaneous estimation of microphysical parameters and atmospheric state with simulated radar data and ensemble square root Kalman filter. Part I: Sensitivity analysis and parameter identifiability. Mon. Wea. Rev. 136, 1630–1648.

37. Tong, M. and Xue, M. 2008b. Simultaneous estimation of microphysical parameters and atmospheric state with simulated radar data and ensemble square root Kalman filter. Part II: Parameter estimation experiments. Mon. Wea. Rev. 136, 1649–1668.

38. Zhang, S. 2011a. Impact of observation-optimized model parameters on decadal predictions: simulation with a simple pycnocline prediction model. Geophys. Res. Lett. 38, L02702.

39. Zhang, S. 2011b. A study of impacts of coupled model initial shocks and state-parameter optimization with observations on climate predictions using a simple pycnocline prediction model. J. Clim. 24, 6210–6226.

40. Zhang, S. and Anderson, J. L. 2003. Impact of spatially and temporally varying estimates of error covariance on assimilation in a simple atmospheric model. Tellus 55A, 126–147.

41. Zhang, S., Anderson, J. L., Rosati, A., Harrison, M. J., Khare, S. P. and co-authors. 2004. Multiple time level adjustment for data assimilation. Tellus 56A, 2–15.

42. Zhang, S., Harrison, M. J., Rosati, A. and Wittenberg, A. T. 2007. System design and evaluation of coupled ensemble data assimilation for global oceanic climate studies. Mon. Wea. Rev. 135, 3541–3564.

43. Zhang, S., Rosati, A. and Harrison, M. J. 2009. Detection of multi-decadal oceanic variability by ocean data assimilation in the context of a "perfect" coupled GCM. J. Geophys. Res. 114, C12018.

44. Zhu, Y. and Navon, I. M. 1999. Impact of parameter estimation on the performance of the FSU Global Spectral Model using its full physics adjoint. Mon. Wea. Rev. 127, 1497–1517.
