In recent decades, data assimilation (DA) methods for complex systems, such as variational DA, have been developed to assimilate surface and atmospheric observations. The cost function of variational DA is composed mainly of four components: the background, the observations, the background error covariance (B), and the observation error covariance (R). The analysis provides an estimate of the atmospheric state with minimal discrepancy from the background and observations.
Nowadays, a variety of observations, including millions of satellite observations, are assimilated with a numerical model in the DA system. The contribution of each observation to forecasts (i.e. the observation impact) can be quantitatively estimated within a very short computation time using the adjoint-based forecast sensitivity to observation (FSO) method (Baker and Daley, 2000). The observation impact estimated with the FSO has been used semi-operationally to determine observation types, locations, and variables that are beneficial or detrimental to forecasts (Langland and Baker, 2004; Cardinali, 2009; Gelaro and Zhu, 2009; Gelaro et al., 2010; Joo et al., 2013; Jung et al., 2013; Kim et al., 2013; Kim and Kim, 2013; Kim and Kim, 2014; Lorenc and Marriott, 2014; Kim et al., 2017; Kim and Kim, 2017).
The performance of numerical weather prediction (NWP) is also associated with the error covariances in DA. The forecast sensitivity to the error covariance parameters indicates whether the error covariances should be inflated or deflated to obtain more precise forecasts (Daescu and Todling, 2010; Daescu and Langland, 2013; Jung et al., 2013; Kim et al., 2014). Daescu and Todling (2010) and Jung et al. (2013) demonstrated that an inflated B and deflated Rs helped reduce forecast errors in the DA systems they used, which implies that, according to the guidance obtained from the adjoint-based forecast sensitivity, the Rs were too large and needed to be decreased to improve forecasts. Weston et al. (2014) and Bormann et al. (2016) showed that the satellite observation error variances are too large and need to be decreased, with inter-channel error correlations taken into account, to improve forecasts. In addition, recent studies (e.g. Bormann and Bauer, 2010; Bormann et al., 2010; Weston et al., 2014; Bormann et al., 2016; Waller et al., 2016a, 2016b; Cordoba et al., 2017) showed that the error covariances of the Advanced Microwave Sounding Unit-A (AMSU-A), the Infrared Atmospheric Sounding Interferometer (IASI), and the Atmospheric Infrared Sounder (AIRS) calculated from diagnostic analyses are smaller than the error covariances currently used in most operational centres. Therefore, the Rs used in most operational centres could be modified appropriately to obtain better forecasts. Lupu et al. (2015) suggested an approximately 60% deflation of the observation error variances for 33 IASI channels using the diagnosed observation-error standard deviation (Desroziers et al., 2005) together with guidance obtained from the adjoint-based forecast sensitivity in a recent version of the European Centre for Medium-Range Weather Forecasts (ECMWF) four-dimensional variational (4DVAR) system. However, Lupu et al. (2015) did not provide the specific inflation and deflation of the Rs implied by the forecast sensitivity to observation error covariance parameters; such specific adjustment magnitudes have not previously been suggested.
As a way to improve operational NWP, this study provides a method for adjusting the error covariances using the forecast sensitivity to error covariance parameters and investigates the effect of observation error variance adjustment on operational NWP. In the Korea Meteorological Administration (KMA), various satellite data and other observations are assimilated in DA to form the initial conditions (i.e. the analysis) of an operational forecast. Kim et al. (2013) diagnosed the characteristics of the FSO for high-impact weather cases in summer and winter over the Korean peninsula. Kim and Kim (2014) showed that the uncertainty (i.e. sampling error) associated with the observation impact statistics should account for lagged correlations in the observation impact (i.e. total observation impact) data because the observation impact data at different times are correlated. The Unified Model (UM) 4DVAR system at the KMA (hereafter the KMA UM 4DVAR system) is used to investigate the effect of observation error variance adjustment on operational NWP. The adjusted observation error variance parameters were calculated based on a multiple linear regression method for NWP in July and August 2012. The adjusted observation error variance was then applied to the operational KMA UM 4DVAR system for NWP in August 2012 to verify its performance over a one-month period. Sections 2–4 provide the methodology, results, and conclusions, respectively.
In the KMA UM 4DVAR system, the 27-hour forecast error ($e$) with respect to the true state ${\mathbf{x}}_{t}$, measured by the total energy norm (Lorenc and Marriott, 2014), is expressed as
The nonlinear forecast error reduction (FER; $\delta e$) is defined as the difference between the error of the forecast integrated from the analysis (${\mathbf{x}}_{a}^{f}$) and the error of the forecast integrated from the background (${\mathbf{x}}_{b}^{f}$) (Jung et al., 2013; Kim and Kim, 2014):
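The nonlinear FER can be sketched numerically as follows. This is an illustrative sketch only: the flat arrays, the reference values (`Tr`, `pr`), and the constants are assumptions, and the operational norm additionally involves vertical and horizontal integration over the model grid.

```python
import numpy as np

def total_energy_norm(u, v, T, ps, Tr=280.0, pr=1.0e5,
                      cp=1004.0, Ra=287.0):
    """Dry total-energy measure of a (forecast - truth) perturbation.
    Illustrative weights; not the operational discretisation."""
    return 0.5 * np.sum(u**2 + v**2 + (cp / Tr) * T**2
                        + Ra * Tr * (ps / pr)**2)

def forecast_error_reduction(e_from_analysis, e_from_background):
    """delta_e = e(x_a^f) - e(x_b^f); negative values mean the
    assimilation reduced the forecast error."""
    return e_from_analysis - e_from_background
```

A negative return value of `forecast_error_reduction` corresponds to the beneficial DA effect discussed later in Section 3.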
The ${\mathbf{x}}_{a}$ is determined from the optimal linear analysis equation as
Then the FSO, the gradient of $\delta e$ with respect to $\mathbf{y}$, is represented as
The observation impact corresponding to the ith observation type is expressed using FSO as
In the KMA UM, the adjoint perturbation forecast (APF) model ${\mathbf{M}}^{T}$ is linearised along an average trajectory between the two forecast trajectories initialised at the analysis and the background (Lorenc and Marriott, 2014), and the adjoint of the Kalman gain is implemented as an iterative linear solver that estimates the minimum of a cost function, similar to the algorithm of Cardinali (2009). Lorenc and Marriott (2014) showed that the linear and nonlinear perturbation growths based on this average trajectory are closer to each other than those based on either the analysis or the background forecast trajectory alone. Furthermore, Lorenc and Marriott (2014) demonstrated that correlations between nonlinear FERs for various runs are higher for FERs based on the average trajectory than for those based on either single trajectory. They attributed these results to the fact that the perturbation forecast (PF) model of the 4DVAR systems of the United Kingdom Met Office (UKMO) and the KMA does not use a Taylor expansion, which assumes infinitesimal perturbations, but instead uses a linearised function of the analysis increment due to the batch of observations, which allows finite perturbations.
In a DA system, the analysis is obtained by combining background and observations considering the associated error covariances $\mathbf{B}$ and $\mathbf{R}$. $\mathbf{B}$ and $\mathbf{R}$ can be adjusted using parametric variables (i.e. proper weighting) in parametric space (Desroziers et al., 2009):
Daescu and Todling (2010) derived the forecast sensitivity to error covariance parameters in parametric space. The forecast sensitivity to error covariance parameters has been globally calculated in the Naval Research Laboratory (NRL) atmospheric variational DA system (Daescu and Langland, 2013) and regionally calculated in the Advanced Research version of the Weather Research and Forecasting (WRF)-ARW system (Jung et al., 2013; Kim et al., 2017).
Following Daescu and Todling (2010) and Daescu and Langland (2013), the forecast sensitivity to error covariance parameters is represented as:
Unlike Daescu and Todling (2010) and Daescu and Langland (2013), in which $\delta e$ and the forecast sensitivity to error covariance parameters (i.e. Equation (8a) and (8b)) are calculated based on an analysis trajectory, those in this study are calculated based on an average trajectory between the two forecast trajectories initialised at the analysis and the background, as in Equation (4). Although the forecast sensitivity to error covariance parameters is more directly associated with the analysis trajectory, the average trajectory is assumed to be a substitute for the analysis trajectory in this study. The average trajectory is used to reduce the computational cost and the storage space for data backup in the operational KMA UM 4DVAR system: by using the FSO calculated routinely in the operational system, additional adjoint integrations to evaluate the FSO based on the analysis trajectory are not necessary. Because the methodology suggested in this study is aimed at application to the operational KMA NWP system, the computational cost and storage space for adjoint integration are important issues. In addition, the difference between the analysis trajectory and the average trajectory may not be large for short-term forecasts in the UKMO and KMA UM 4DVAR systems, as shown by Lorenc and Marriott (2014).
Using matrix representation, the error covariance impact in Equation (9) can be expressed as
If the first matrix on the right-hand side of Equation (10) has a full rank or is over-determined, then the error covariance adjustment parameters (i.e. the vector on the right-hand side of Equation (10)) can be calculated using multiple linear regression. The estimated error covariance adjustment parameters are used to calculate the error covariance parameters as:
Then, Equation (11a) and (11b), along with Equation (7a) and (7b), are used to obtain $\mathbf{B}$ and $\mathbf{R}$.
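The composition of Equations (7) and (11) amounts to a simple rescaling, which can be sketched as follows; the interpretation $s = 1 + \delta s$ is consistent with the ATOVS example later in the text, where $\delta {s}_{i}^{o} = -0.7214$ corresponds to a 72.14% deflation.

```python
def adjusted_variance(current_var, delta_s):
    """Adjusted error variance: s = 1 + delta_s (Eq. (11)), then the
    variance is scaled by s (Eq. (7)). E.g. delta_s = -0.7214 deflates
    the variance by 72.14%, as quoted for ATOVS in the text."""
    s = 1.0 + delta_s
    return s * current_var
```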
However, $\delta {s}^{b}$ and $\delta {s}_{i}^{o}$ cannot be uniquely determined from Equation (10) because the first matrix on the right-hand side of Equation (10) is rank deficient owing to the relationship ${\sum}_{i=1}^{P}\frac{\delta e}{\delta {s}_{i}^{o}}+\frac{\delta e}{\delta {s}^{b}}=0$. To calculate $\delta {s}_{i}^{o}$ from Equation (10), an appropriate value needs to be assigned to $\delta {s}^{b}$, which renders the first matrix on the right-hand side of Equation (10) full rank. After assigning the pre-determined $\delta {s}^{b}$, Equation (10) becomes
Because the first matrix on the right-hand side of Equation (12) has full rank, the observation error covariance adjustment parameters $\delta {s}_{i}^{o}$ (i.e. the vector on the right-hand side of Equation (12)) can be calculated using multiple linear regression.
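The procedure can be sketched with an ordinary least-squares solve: the fixed $\delta {s}^{b}$ contribution is moved to the left-hand side, and the remaining overdetermined system for the $\delta {s}_{i}^{o}$ is solved. The array shapes and synthetic inputs are assumptions for illustration; the paper uses the regression method of Chatterjee and Hadi (1986) with a preconditioned conjugate gradient algorithm rather than a direct `lstsq` call.

```python
import numpy as np

def solve_obs_cov_params(sens_o, sens_b, impact, delta_s_b=0.3):
    """Least-squares estimate of delta_s_i^o from Equation (12).

    sens_o : (n_times, n_types) s_i^o-sensitivities
    sens_b : (n_times,)         s^b-sensitivities
    impact : (n_times,)         error covariance impact (approximated
                                by the observation impact in the text)
    """
    # Fix delta_s^b and move its contribution to the left-hand side.
    rhs = impact - delta_s_b * sens_b
    sol, *_ = np.linalg.lstsq(sens_o, rhs, rcond=None)
    return sol
```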
The KMA UM version 7.7 with 4DVAR DA system version 27.2 (Courtier et al., 1994; Clayton, 2004) was used for this study. The model domain covers 769 × 1024 horizontal grid points with a resolution of approximately 25 km; 70 vertical eta-height hybrid layers extend from the surface to 80 km. The eta-height hybrid layers follow the terrain near the surface but evolve to constant-height surfaces aloft. To make the minimisation more affordable, the KMA UM 4DVAR system uses a simplification operator, which truncates the forecast trajectory to a reduced resolution and dynamically balances the truncated trajectories (Lorenc and Payne, 2007). The model domain of the 4DVAR DA system covers 217 × 288 horizontal grid points with a resolution of approximately 80 km.
The physical parameterisations used in the KMA UM include Edwards–Slingo radiation (Edwards and Slingo, 1996), mixed-phase precipitation (Wilson and Ballard, 1999), the UKMO surface exchange scheme (Essery et al., 2001), the non-local boundary layer (Lock et al., 2000), the new gravity wave drag scheme (Webster et al., 2003), and the mass flux convection scheme (Kershaw and Gregory, 1997; Gregory et al., 1997). The same physical parameterisations are employed in the PF and APF models of the DA system, with the exception that a fixed boundary layer scheme is used instead of the non-local boundary layer scheme and that the simplified moisture physics is used instead of the mixed-phase precipitation and mass flux convection schemes. That is, the DA system uses simple physics parameterisations instead of complex physics parameterisations in the KMA UM to achieve numerical stability. The FSO tool is version 27.2 developed by the UKMO (Lorenc and Marriott, 2014). The assimilated observations include all operational observations from the KMA (Table 1).
The error covariance parameters were estimated for July and August 2012. Then, the adjusted observation error variance using the error covariance parameters was applied to the operational KMA UM 4DVAR system for August 2012 to verify the suitability of the adjusted observation error variance for operational forecasts. The analyses and forecasts using the current operational observation error variances and the analyses and forecasts with the adjusted observation error variances were compared to verify the effect of the newly adjusted observation error variances. The experiment using the operational (adjusted) observation error variances is called the CTL (ADJ_COV) experiment. For both ADJ_COV and CTL experiments, the operational B matrix of the KMA UM 4DVAR system was used and not inflated.
As in most operational DA systems, the B and R of the KMA UM 4DVAR system contain only block-diagonal and diagonal elements, respectively. Therefore, the background and observation error covariances reduce to background and observation error variances in this study.
Figure 1 shows the time-averaged forecast sensitivity to error covariance parameters and its standard deviation in August 2012. The error bars in Fig. 1 were calculated based on first-order autoregression, as discussed in Kim and Kim (2014). On the vertical axis, the B matrix entry indicates the ${s}^{b}$-sensitivity, and the other entries correspond to the ${s}_{i}^{o}$-sensitivity for each type of observation. The ${s}^{b}$-sensitivity is a linear combination of all FSOs (Equation (8a)) and can be projected onto the observation space (Daescu and Todling, 2010; Daescu and Langland, 2013; Jung et al., 2013). The ${s}^{b}$-sensitivity is negative, which implies that $\delta {s}^{b}$ should be positive and B should be inflated to obtain a negative $\delta e$ (i.e. a reduction of the forecast error), as indicated in Equation (9). All average ${s}_{i}^{o}$-sensitivity values are positive except those associated with SONDE_q and SURFACE_q, which implies that all Rs except those of SONDE_q and SURFACE_q need to be deflated to reduce the forecast error. The ${s}_{i}^{o}$-sensitivity of ATOVS is the largest, followed by those of IASI, Geo_AMV, SONDE winds (SONDE_uv), and aircraft winds (AIRCRAFT_uv). A large ${s}_{i}^{o}$-sensitivity indicates that the variation in $\delta e$ may be large when the associated error covariance is adjusted. The ${s}_{i}^{o}$-sensitivities associated with SONDE and surface specific humidity (i.e. SONDE_q and SURFACE_q) are relatively small, which may be attributable to the dry energy norm used to calculate $\delta e$.
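The autoregression-based error bars can be sketched as follows. This is a hedged sketch of the standard approach: the standard error of a time mean with the sample size reduced to an effective size $n_{\mathrm{eff}} = n(1-r)/(1+r)$, where $r$ is the lag-1 autocorrelation (cf. Kim and Kim, 2014); the exact formula used operationally may differ.

```python
import numpy as np

def ar1_standard_error(series):
    """Standard error of the time mean of `series`, with the effective
    sample size deflated by first-order autocorrelation:
    n_eff = n * (1 - r) / (1 + r)."""
    x = np.asarray(series, dtype=float)
    n = x.size
    a = x[:-1] - x[:-1].mean()
    b = x[1:] - x[1:].mean()
    r = np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))  # lag-1 corr
    n_eff = n * (1.0 - r) / (1.0 + r)
    return x.std(ddof=1) / np.sqrt(max(n_eff, 1.0))
```

For strongly autocorrelated impact time series, $n_{\mathrm{eff}} \ll n$, so the error bars are wider than a naive $\sigma/\sqrt{n}$ estimate.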
The $\delta {s}^{b}$ needs to be determined to solve for $\delta {s}_{i}^{o}$ in Equation (12). As shown in Fig. 1, the negative ${s}^{b}$-sensitivity implies that B should be inflated to reduce the forecast error, similar to the results of Daescu and Todling (2010) and Jung et al. (2013). To inflate B, $\delta {s}^{b}$ should be positive, from Equation (7a) and (11a). The magnitude of the inflation may depend on the DA system; the question, then, is how much inflation is appropriate in the KMA UM 4DVAR system. The B of the hybrid ensemble/4DVAR DA system of the KMA is a linear combination of the statistical error covariance and the ensemble-based error covariance. Clayton et al. (2013) showed that a combination of a 100% weighted statistical B and a 30% weighted ensemble-based B resulted in a better analysis in the hybrid ensemble/4DVAR DA system at the UKMO, which is similar to that of the KMA. Although in theory the weights of the statistical and ensemble-based B should sum to 100%, the operational performance of the UM was improved by adding the 30% weighted ensemble-based B to the 100% weighted statistical B. Therefore, a 30% inflation of the statistical B may be appropriate for the KMA UM 4DVAR system to yield better forecasts. The validity of specifying $\delta {s}^{b}$ as 0.3 is discussed in detail at the end of this subsection.
Once $\delta {s}^{b}$ was pre-determined as 0.3, the $\delta {s}_{i}^{o}$ in Equation (12) were calculated using the multiple linear regression method of Chatterjee and Hadi (1986). The ${s}^{b}$-sensitivity, ${s}_{i}^{o}$-sensitivity, and error covariance impact $\delta e$ in the parametric space must be known to calculate $\delta {s}_{i}^{o}$. Because the error covariance impact cannot be diagnosed without $\delta {s}^{b}$ and $\delta {s}_{i}^{o}$, the observation impact in Equation (4) was used as a substitute for the error covariance impact. The similarity between the observation impact and the error covariance impact is discussed in detail in Section 3.4. The ${s}^{b}$-sensitivity (Equation (8a)), ${s}_{i}^{o}$-sensitivity (Equation (8b)), and observation impact (Equation (4)) for July and August 2012 were used to solve Equation (12) with a preconditioned conjugate gradient algorithm within the multiple linear regression method. Values of the ${s}^{b}$-sensitivity and ${s}_{i}^{o}$-sensitivity more than three standard deviations from the time-averaged values were considered outliers and excluded when solving Equation (12). Solving Equation (12) in this way yields the $\delta {s}_{i}^{o}$ that correspond to the chosen $\delta {s}^{b}$.
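The three-standard-deviation screening can be sketched as a boolean mask over the time samples; the array layout (times by variables) is an assumption for illustration.

```python
import numpy as np

def within_three_sigma(x, n_sigma=3.0):
    """Boolean mask of time samples whose values all lie within
    n_sigma standard deviations of the time mean; samples flagged
    False are excluded before solving Equation (12)."""
    x = np.atleast_2d(np.asarray(x, dtype=float).T).T  # (n_times, n_vars)
    mu = x.mean(axis=0)
    sd = x.std(axis=0)
    return np.all(np.abs(x - mu) <= n_sigma * sd, axis=1)
```

The mask would be applied jointly to the sensitivity series and the impact series so that the regression uses a consistent set of times.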
Table 2 shows the $\delta {s}_{i}^{o}$ obtained by solving Equation (12) based on data for July and August 2012. Because the $\delta {s}_{i}^{o}$ are independent variables of the multiple linear regression equation, as shown in Wilks (2006), they can be used to adjust the error covariances for August 2012. Because the ${s}_{i}^{o}$-sensitivity of ATOVS was the largest (Fig. 1), reducing the R of ATOVS used in the KMA UM 4DVAR system by 72.14% (Table 2) may yield a large reduction in forecast error. The R of ATOVS for July and August, diagnosed by one analysis-forecast iteration using the method suggested by Desroziers et al. (2005), indicates that the current R of ATOVS should be deflated by 62.2%, which supports the appropriateness of the 72.14% reduction. ATOVS observations include AMSU-A, AMSU-B, and HIRS (Table 1); the Rs of AMSU-A, AMSU-B, and HIRS used in the KMA UM 4DVAR system are shown in Table 3. In addition, Lupu et al. (2015) suggested an average 60% deflation of the R for the 33 IASI channels assimilated in the ECMWF 4DVAR system. The Rs of AMSU-A and IASI used in the KMA UM 4DVAR system are similar to those in the UKMO 4DVAR system but are 25–50% greater than those of the ECMWF 4DVAR system. Considering the overinflation of the Rs of AMSU-A and IASI in the KMA UM 4DVAR system, the 72.14% reduction seems appropriate for the ATOVS R. The overinflation of observation error variances in most operational centres is associated with compensation for the incorrect assumption of uncorrelated observation errors.
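The Desroziers et al. (2005) consistency diagnostic used for the cross-check above estimates the observation error variance as the expectation of the product of the analysis residual (O − A) and the innovation (O − B). A minimal sketch, with a synthetic scalar analysis as an assumed test setup:

```python
import numpy as np

def desroziers_variance(omb, oma):
    """Diagnosed observation-error variance per observation type:
    sigma_o^2 ~ E[(O - A)(O - B)] (Desroziers et al., 2005)."""
    return np.mean(np.asarray(oma) * np.asarray(omb), axis=0)
```

When the diagnosed value falls well below the prescribed variance, the prescribed R is overinflated, which is the situation reported here for ATOVS.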
To check the validity of the 30% inflation of the statistical B, the dependence of $\delta {s}_{i}^{o}$ on the choice of $\delta {s}^{b}$ was investigated. The $\delta {s}^{b}$ was set to 0.1, 0.3, and 0.5, and the corresponding $\delta {s}_{i}^{o}$ were calculated. The $\delta {s}_{i}^{o}$ of ATOVS corresponding to $\delta {s}^{b}$ of 0.1, 0.3, and 0.5 are −0.8714, −0.7214, and −0.4714, respectively. As $\delta {s}^{b}$ increases, the absolute values of $\delta {s}_{i}^{o}$ for most observation types decrease towards 0, which results in an additional reduction in forecast error (not shown). Figure 2 shows the $\delta {s}_{i}^{o}$ of each observation type corresponding to $\delta {s}^{b}$ of 0.1, 0.3, 0.5, 0.7, and 0.9. When $\delta {s}^{b}$ is 0.1, the $\delta {s}_{i}^{o}$ of Geo_AMV, AIRCRAFT_uv, SONDE_q, and SURFACE_t are less than −1. When $\delta {s}^{b}$ is 0.9, the $\delta {s}_{i}^{o}$ of SSMIS is above 1. When $\delta {s}^{b}$ is 0.3, 0.5, or 0.7, the $\delta {s}_{i}^{o}$ of all observation types take physically meaningful values between −1 and 1. Because the ${s}_{i}^{o}$-sensitivities of SURFACE_q and SONDE_q are negative (Fig. 1), the $\delta {s}_{i}^{o}$ of SURFACE_q and SONDE_q need to be positive. Only $\delta {s}^{b}$ of 0.3 provides positive $\delta {s}_{i}^{o}$ for SURFACE_q and SONDE_q, consistent with the sensitivities shown in Fig. 1. Therefore, $\delta {s}^{b}$ of 0.3 is the most appropriate choice among the values tested.
Figure 3a shows the time series of the nonlinear FER, the approximated FER in the observation space (i.e. the observation impact), and the approximated FER in the parametric space (i.e. the error covariance impact) for July and August 2012. The time series is generally negative because the forecast error integrated from the analysis is smaller than that integrated from the background owing to the DA effect (Langland and Baker, 2004; Gelaro et al., 2007; Jung et al., 2013; Lorenc and Marriott, 2014). Because the approximated FERs corresponding to 06, 12, and 18 UTC 6, 00 UTC 7, 12 UTC 21, 12 UTC 22, and 06 UTC 30 July 2012 and 06, 12, and 18 UTC 6, 00 UTC 7, and 12 UTC 17 August 2012 were zero or positive owing to numerical instability, the approximated FERs at those times were not used in determining $\delta {s}_{i}^{o}$. The numerical instability originates from problems with the computational stability of the adjoint model's time-integration scheme and with the temporal resolution of the adjoint model, which may be associated with static instability of the linearisation trajectory, small grid lengths near the poles, and steep orography, as mentioned in Joo et al. (2012).
The error covariance impact in July and August 2012 was estimated using the $\delta {s}^{b}$ and $\delta {s}_{i}^{o}$ in Table 2. The correlation coefficient between the error covariance impact and the observation impact for July and August 2012 is 0.98 (Fig. 3a), as expected, because the observation impact is used as a substitute for the error covariance impact in calculating the $\delta {s}_{i}^{o}$. Once the 16 $\delta {s}_{i}^{o}$ for July and August 2012 listed in Table 2 were calculated, they were applied to calculate the error covariance impact for August 2012 through the inner product with the ${s}^{b}$-sensitivity and ${s}_{i}^{o}$-sensitivity of August 2012 and a $\delta {s}^{b}$ of 0.3. The resulting error covariance impact is similar to the observation impact of August 2012 (Fig. 3b). Therefore, the number of predictors used in the multiple linear regression is appropriate for statistically estimating the error covariance impact of August 2012, as indicated by Wilks (2006). In an additional experiment, the error covariance impact calculated from the error covariance adjustment parameters for July 2012 and the forecast sensitivity to error covariance parameters for August 2012 was similar to the observation impact for August 2012 (not shown), which implies that the number of predictors was also appropriate for statistically forecasting the error covariance impact of August 2012, although some of the error covariance adjustment parameters for July 2012 are not physically meaningful owing to the small number of data in the one-month period used for the multiple linear regression.
To verify the impact of the adjusted observation error variance using $\delta {s}_{i}^{o}$, the adjusted observation error variance was applied to the operational KMA UM 4DVAR system for NWP in August 2012. Because the ${s}_{i}^{o}$-sensitivity of ATOVS is the largest (Fig. 1), the ATOVS R was adjusted by applying the $\delta {s}_{i}^{o}$ in Table 2 to Equation (11b). Then, 24-h forecasts were generated from the updated analyses (${\mathbf{x}}_{a}(\hat{\sigma})$) using the deflated ATOVS R in the cycling DA system for August 2012 (ADJ_COV). These forecasts were compared with the operational (i.e. control) forecasts generated from the analyses (${\mathbf{x}}_{a}(\sigma)$) using the operational ATOVS R in the cycling DA system (CTL) for the same period. For both the ADJ_COV and CTL experiments, the operational B matrix of the KMA UM 4DVAR system was used and not inflated; thus, the difference between ADJ_COV and CTL is caused by the reduced R of ATOVS. The green shading in Fig. 3b represents the additional error covariance impact obtained by applying the $\delta {s}_{i}^{o}$ of ATOVS in ADJ_COV compared with CTL, which suggests that additional approximated FER is expected when a new analysis generated with the adjusted R of ATOVS is used for forecasts. The approximated FERs corresponding to 06, 12, and 18 UTC 5, 00 UTC 6, 12 UTC 17, and 12 UTC 21 August 2012 were zero or positive owing to numerical instability (Fig. 3b).
The updated analysis began at 03 UTC 24 July 2012 for spin-up. The residuals within the assimilation window (i.e. O-A, the difference between the observations and the analysis trajectory within the assimilation window) of CTL and ADJ_COV were compared over the corresponding time range (i.e. 0–5 h from T−3, which is 03 UTC in the KMA UM) (Fig. 4). Similarly, the residuals in the 24–29 h forecast range (i.e. O-F, the difference between the observations and the forecast trajectory) of CTL and ADJ_COV were compared (Fig. 4). The O-A (O-F) was verified over a 5-h period because the observation window of the KMA UM 4DVAR extends from T−3 (the forecast time) to 5 h after T−3. The observations used for the verification were quality-controlled over the globe and assimilated in the KMA UM 4DVAR system.
The ratio between the time-integrated O-A of ADJ_COV and that of CTL in August 2012 was calculated as
Similarly, the ratio between the time-integrated O-F of ADJ_COV and that of CTL in August 2012 was calculated as
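The displayed ratio equations are not reproduced here; the following sketch assumes a simple form, with the time-integrated ADJ_COV residual divided by the time-integrated CTL residual and expressed in percent, so that values below 100% indicate that the adjusted R reduced the residuals.

```python
import numpy as np

def residual_ratio_percent(residual_adj, residual_ctl):
    """Assumed form of the O-A (or O-F) ratio: time-integrated
    absolute residual of ADJ_COV over that of CTL, in percent."""
    num = np.sum(np.abs(np.asarray(residual_adj, dtype=float)))
    den = np.sum(np.abs(np.asarray(residual_ctl, dtype=float)))
    return 100.0 * num / den
```

Under this convention, the 10.62% O-A reduction reported later would correspond to a ratio of 89.38%.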
Vertical profiles of the root mean square (RMS) of the average O-F of ADJ_COV and CTL verified against SONDE observations are shown in Fig. 8. The O-F of ADJ_COV verified against the SONDE variables (i.e. zonal wind, meridional wind, temperature, and specific humidity) was smaller than that of CTL. The difference between the O-F of ADJ_COV and CTL increased from the surface towards the mid-troposphere but decreased near the upper troposphere. The small differences in the upper troposphere may reflect the small number of reliable SONDE observations there. Therefore, the O-F of ADJ_COV and CTL should be verified with another reliable observation type in the upper troposphere.
Figure 9 shows the RMS of the average O-F of ADJ_COV and CTL compared with AMSU-A observations for the Meteorological Operational Satellite-A (Metop-A) and the National Oceanic and Atmospheric Administration (NOAA) 19 satellite. The reduction rate for ADJ_COV is calculated as
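The displayed reduction-rate equation is not reproduced here; a plausible form, consistent with the percentages quoted below, is the relative decrease of the RMS O-F of ADJ_COV with respect to that of CTL.

```python
import numpy as np

def rms(x):
    """Root mean square of a residual series."""
    return np.sqrt(np.mean(np.asarray(x, dtype=float)**2))

def reduction_rate_percent(omf_adj, omf_ctl):
    """Assumed form of the reduction rate: percent decrease of the
    RMS O-F of ADJ_COV relative to CTL (positive = improvement)."""
    return 100.0 * (rms(omf_ctl) - rms(omf_adj)) / rms(omf_ctl)
```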
Channels 3, 7, and 15 of Metop-A (channels 3, 8, and 15 of NOAA 19) were excluded from Fig. 9 because they are not used in the KMA UM. The O-F of ADJ_COV was smaller than that of CTL for most channels, with the exception of channel 13 of Metop-A AMSU-A. The O-F verified by Metop-A (NOAA 19) AMSU-A was reduced by an average of 2.34% (3.53%) for ADJ_COV compared with CTL (Fig. 9). The O-F for channels 11–14 (with the exception of channel 13 of Metop-A AMSU-A), which are sensitive to the upper troposphere, decreased for both satellites. In addition, the O-F of ADJ_COV verified by the hyperspectral IASI sensor was reduced compared with that of CTL, with a 21.7% reduction rate for ADJ_COV (Fig. 10). The ozone and window channels of IASI were not used. The O-F for ADJ_COV in the upper troposphere was similarly reduced when verified by the AIRS sensor (not shown).
In this study, the error covariance parameters for July and August 2012 were estimated by applying multiple linear regression to the observation impact data and the forecast sensitivity to error covariance parameters. The adjusted observation error variances were applied to one month (August 2012) of forecasts using the KMA UM 4DVAR system. In the multiple linear regression analysis, a total of 17 error covariance parameters were used as predictors, and a sufficiently large number of data over the two-month period is necessary to avoid physically meaningless error covariance adjustment parameters. The error covariance impact calculated from the error covariance adjustment parameters for July and August 2012 and the forecast sensitivity to error covariance parameters for August 2012 was similar to the observation impact for August 2012; therefore, the number of predictors was appropriate for statistically estimating the error covariance impact of August 2012.
The statistics of the forecast sensitivity to error covariance parameters showed that the observation (background) error covariance should be deflated (inflated) to reduce the forecast error. The ${s}_{i}^{o}$-sensitivity is highest for ATOVS, followed by IASI, Geo_AMV, SONDE wind (SONDE_uv), and aircraft wind (AIRCRAFT_uv). A high ${s}_{i}^{o}$-sensitivity indicates that the forecast error is greatly reduced when the corresponding observation error covariance is adjusted. Based on sensitivity tests, the B was inflated by 30%, and the observation error covariance adjustment parameters were calculated by the multiple linear regression method. Because the ${s}_{i}^{o}$-sensitivity of ATOVS was highest, the observation error variance of ATOVS was adjusted. The multiple linear regression indicates that the observation error variance of ATOVS should be deflated by 72.14%. The observation error variances of ATOVS calculated by Desroziers' method are roughly similar to those calculated by the multiple linear regression. Compared with Desroziers' method, which in practice calculates the adjustment values of the observation error variances sequentially using a single analysis-forecast cycle, the method in this study provides the adjustment values for all observation types simultaneously. In addition, the adjustment values of the observation error variances based on the FSO method can be updated easily using two months of data up to the analysis time, and the process can be advanced day by day because the FSO statistics are calculated semi-operationally in the KMA UM 4DVAR system.
The ADJ_COV experiment, which uses the adjusted ATOVS error variance, was compared with the CTL experiment, which employs the ATOVS error variance currently used operationally in the KMA UM 4DVAR system. In both experiments, the background error covariance was the operational one of the KMA UM 4DVAR system. The analyses and forecasts of ADJ_COV and CTL were verified against SONDE observations for August 2012. Both the residuals within the assimilation window (O-A) and those in the 24–29 h forecast range (O-F) decreased for ADJ_COV compared with CTL, which implies that the adjusted ATOVS observation error variance reduced the residuals in both the assimilation window and the forecast range. The new analysis of ADJ_COV was most similar to the ATOVS observations. The time-integrated O-A of ADJ_COV was reduced by 10.62% (3.83% when ATOVS is excluded) compared with that of CTL for all observation types. Because ATOVS satellite observations provide vertical profiles of temperature and specific humidity in the atmosphere, the adjusted ATOVS observation error variance reduced the O-F verified by the satellite observations as well as those verified by the SONDE and surface observations. The time-integrated O-F of ADJ_COV was reduced by 5.4% compared with that of CTL for all observation types.
The RMS of the average O-F of ADJ_COV was smaller than that of CTL throughout the vertical when verified against SONDE observations. The difference between the O-F of ADJ_COV and that of CTL increased from the surface towards the mid-troposphere. Because the number of SONDE observations is very small in the upper troposphere, it was difficult to verify the O-F there with SONDE observations. Instead, the O-F in the upper troposphere was verified by AMSU-A channels 11–14 (with the exception of channel 13 of Metop-A AMSU-A) and the IASI and AIRS sensors. In the upper troposphere, the O-F of ADJ_COV was also smaller than that of CTL, which implies that the adjusted ATOVS error variance helps to reduce the O-A and O-F compared with the ATOVS error variance currently used in the KMA UM 4DVAR system.
This study suggests a method for calculating error covariance adjustment parameters and demonstrates that the method can be used to adjust the error covariances in the KMA NWP and DA systems. In future studies, the effect of adjusting a specific observation error covariance will be investigated in a hybrid system whose error covariance combines a static and an ensemble-based background error covariance. In addition, the error covariance adjustment methodology will be applied to NWP in other seasons, and other observation error variances will be adjusted, to identify more optimised observation error variances in the KMA UM 4DVAR system. Furthermore, the sensitivity of the adjustment results to the use of a moist energy norm instead of a dry energy norm needs to be addressed when adjusting the error variances of observation types that are sensitive to humidity (e.g. AMSU-B).
The authors thank the reviewers for their valuable comments. The authors also thank the Numerical Modeling Center of the Korea Meteorological Administration and the Met Office for providing the computing facilities and resources for this study.
No potential conflict of interest was reported by the authors.
Baker, N. L. and Daley, R. 2000. Observation and background adjoint sensitivity in the adaptive observation-targeting problem. Q. J. R. Meteorol. Soc. 126, 1431–1454. doi: https://doi.org/10.1002/qj.49712656511.
Bormann, N. and Bauer, P. 2010. Estimates of spatial and interchannel observation-error characteristics for numerical weather prediction. I: Methods and application to ATOVS data. Q. J. R. Meteorol. Soc. 136, 1036–1050. doi: https://doi.org/10.1002/qj.616.
Bormann, N., Collard, A. and Bauer, P. 2010. Estimates of spatial and interchannel observation-error characteristics for numerical weather prediction. II: Application to AIRS and IASI data. Q. J. R. Meteorol. Soc. 136, 1051–1063. doi: https://doi.org/10.1002/qj.615.
Bormann, N., Bonavita, M., Dragani, R., Eresmaa, R., Matricardi, M. and co-authors. 2016. Enhancing the impact of IASI observations through an updated observation-error covariance matrix. Q. J. R. Meteorol. Soc. 142, 1767–1780. doi: https://doi.org/10.1002/qj.2774.
Cardinali, C. 2009. Monitoring the observation impact on the short-range forecast. Q. J. R. Meteorol. Soc. 135, 239–250. doi: https://doi.org/10.1002/qj.366.
Chatterjee, S. and Hadi, A. S. 1986. Influential observations, high leverage points, and outliers in linear regression. Stat. Sci. 1, 379–393. doi: https://doi.org/10.1214/ss/1177013622.
Clayton, A. M. 2004. Var Scientific Documentation 60. UK Met Office, Exeter, UK, pp. 1–12.
Clayton, A. M., Lorenc, A. C. and Barker, D. M. 2013. Operational implementation of a hybrid ensemble/4D-Var global data assimilation system at the Met Office. Q. J. R. Meteorol. Soc. 139, 1445–1461. doi: https://doi.org/10.1002/qj.2054.
Cordoba, M., Dance, S. L., Kelly, G. A., Nichols, N. K. and Waller, J. A. 2017. Diagnosing atmospheric motion vector observation errors for an operational high-resolution data assimilation system. Q. J. R. Meteorol. Soc. 143, 333–341. doi: https://doi.org/10.1002/qj.2925.
Courtier, P., Thépaut, J.-N. and Hollingsworth, A. 1994. A strategy for operational implementation of 4D-Var, using an incremental approach. Q. J. R. Meteorol. Soc. 120, 1367–1387. doi: https://doi.org/10.1002/qj.49712051912.
Daescu, D. N. and Todling, R. 2010. Adjoint sensitivity of the model forecast to data assimilation system error covariance parameters. Q. J. R. Meteorol. Soc. 136, 2000–2012. doi: https://doi.org/10.1002/qj.693.
Daescu, D. N. and Langland, R. H. 2013. Error covariance sensitivity and impact estimation with adjoint 4D-Var: theoretical aspects and first applications to NAVDAS-AR. Q. J. R. Meteorol. Soc. 139, 226–241. doi: https://doi.org/10.1002/qj.1943.
Desroziers, G., Berre, L., Chapnik, B. and Poli, P. 2005. Diagnosis of observation, background and analysis-error statistics in observation space. Q. J. R. Meteorol. Soc. 131, 3385–3396. doi: https://doi.org/10.1256/qj.05.108.
Desroziers, G., Berre, L., Chabot, V. and Chapnik, B. 2009. A posteriori diagnostics in an ensemble of perturbed analyses. Mon. Wea. Rev. 137, 3420–3436. doi: https://doi.org/10.1175/2009MWR2778.1.
Edwards, J. M. and Slingo, A. 1996. Studies with a flexible new radiation code. I: Choosing a configuration for a large-scale model. Q. J. R. Meteorol. Soc. 122, 689–719. doi: https://doi.org/10.1002/qj.49712253107.
Essery, R., Best, M. and Cox, P. 2001. MOSES 2.2 Technical Documentation. Hadley Centre Technical Note 30. UK Met Office, Exeter, UK, pp. 1–30.
Gelaro, R., Zhu, Y. and Errico, R. M. 2007. Examination of various-order adjoint-based approximations of observation impact. Meteorol. Z. 16, 685–692. doi: https://doi.org/10.1127/0941-2948/2007/0248.
Gelaro, R. and Zhu, Y. 2009. Examination of observation impacts derived from observing system experiments (OSEs) and adjoint models. Tellus A. 61, 179–193. doi: https://doi.org/10.1111/j.1600-0870.2008.00388.x.
Gelaro, R., Langland, R. H., Pellerin, S. and Todling, R. 2010. The THORPEX observation impact intercomparison experiment. Mon. Wea. Rev. 138, 4009–4025. doi: https://doi.org/10.1175/2010MWR3393.1.
Gregory, D., Kershaw, R. and Inness, P. M. 1997. Parametrization of momentum transport by convection. II: Tests in single-column and general circulation models. Q. J. R. Meteorol. Soc. 123, 1153–1183. doi: https://doi.org/10.1002/qj.49712354103.
Joo, S., Lorenc, A. C. and Marriott, R. 2012. Diagnosis of exaggerated impacts on adjoint-based sensitivity studies. Met Office Technical Report 564. UK Met Office, Exeter, UK, pp. 1–18.
Joo, S., Eyre, J. and Marriott, R. 2013. The impact of Metop and other satellite data within the Met Office global NWP system using an adjoint-based sensitivity method. Mon. Wea. Rev. 141, 3331–3342. doi: https://doi.org/10.1175/MWR-D-12-00232.1.
Jung, B. J., Kim, H. M., Auligné, T., Zhang, X., Zhang, X. and co-authors. 2013. Adjoint-derived observation impact using WRF in the western North Pacific. Mon. Wea. Rev. 141, 4080–4097. doi: https://doi.org/10.1175/MWR-D-12-00197.1.
Kershaw, R. and Gregory, D. 1997. Parametrization of momentum transport by convection. I: Theory and cloud modeling results. Q. J. R. Meteorol. Soc. 123, 1133–1151. doi: https://doi.org/10.1002/qj.49712354102.
Kim, H. M., Kim, S.-M., Joo, S. and Kim, E.-J. 2014. Estimation of satellite observation impact on numerical weather forecast using adjoint-based method. In: 19th International TOVS Study Conference (ITSC-19), Jeju, Republic of Korea, 26 March – 1 April 2014.
Kim, M., Kim, H. M., Kim, J., Kim, S.-M., Velden, C. and Hoover, B. 2017. Effect of enhanced satellite-derived atmospheric motion vectors on numerical weather prediction in East Asia using an adjoint-based observation impact method. Wea. Forecasting. 32, 579–594. doi: https://doi.org/10.1175/WAF-D-16-0061.1.
Kim, S., Kim, H. M., Kim, E.-J. and Shin, H.-C. 2013. Forecast sensitivity to observations for high-impact weather events in the Korean Peninsula. Atmosphere. 23, 171–186. doi: https://doi.org/10.14191/Atmos.2013.23.2.171 (in Korean with English abstract).
Kim, S.-M. and Kim, H. M. 2013. Observation impact estimation using a Forecast Sensitivity to Observation (FSO) method in the Global and East Asia Regions. In: EGU General Assembly 2013, Vienna, Austria, 7–12 April 2013.
Kim, S.-M. and Kim, H. M. 2014. Sampling error of observation impact statistics. Tellus A. 66, 25435. doi: https://doi.org/10.3402/tellusa.v66.25435.
Kim, S.-M. and Kim, H. M. 2017. Adjoint-based observation impact of Advanced Microwave Sounding Unit-A (AMSU-A) on the short-range forecast in East Asia. Atmosphere. 27, 93–104. doi: https://doi.org/10.14191/Atmos.2017.27.1.093 (in Korean with English abstract).
Langland, R. H. and Baker, N. L. 2004. Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus A. 56, 189–201. doi: https://doi.org/10.1111/j.1600-0870.2004.00056.x.
Lock, A. P., Brown, A. R., Bush, M. R., Martin, G. M. and Smith, R. N. B. 2000. A new boundary-layer mixing scheme. Part 1: Scheme description and single-column model tests. Mon. Wea. Rev. 128, 3187–3199. doi: https://doi.org/10.1175/1520-0493(2000)128<3187:ANBLMS>2.0.CO;2.
Lorenc, A. C. and Marriott, R. 2014. Forecast sensitivity to observations in the Met Office Global numerical weather prediction system. Q. J. R. Meteorol. Soc. 140, 209–224. doi: https://doi.org/10.1002/qj.2122.
Lorenc, A. C. and Payne, T. 2007. 4D-Var and the butterfly effect: statistical four-dimensional data assimilation for a wide range of scales. Q. J. R. Meteorol. Soc. 133, 607–614. doi: https://doi.org/10.1002/qj.36.
Lupu, C., Cardinali, C. and McNally, A. P. 2015. Adjoint-based forecast sensitivity applied to observation-error variance tuning. Q. J. R. Meteorol. Soc. 141, 3157–3165. doi: https://doi.org/10.1002/qj.2599.
Waller, J. A., Simonin, D., Dance, S. L., Nichols, N. K. and Ballard, S. P. 2016a. Diagnosing observation error correlations for Doppler radar radial winds in the Met Office UKV model using observation-minus-background and observation-minus-analysis statistics. Mon. Wea. Rev. 144, 3533–3551. doi: https://doi.org/10.1175/MWR-D-15-0340.1.
Waller, J. A., Ballard, S. P., Dance, S. L., Kelly, G. A., and Nichols, N. K. 2016b. Diagnosing horizontal and inter-channel observation error correlation for SEVIRI observations using observation-minus-background and observation-minus-analysis statistics. Remote Sens. 8, 581. doi: https://doi.org/10.3390/rs8070581.
Webster, S. A., Brown, A. R., Cameron, D. R. and Jones, C. P. 2003. Improvements to the representation of orography in the Met Office Unified Model. Q. J. R. Meteorol. Soc. 129, 1989–2010. doi: https://doi.org/10.1256/qj.02.133.
Weston, P. P., Bell, W. and Eyre, J. R. 2014. Accounting for correlated error in the assimilation of high resolution sounder data. Q. J. R. Meteorol. Soc. 140, 2420–2429. doi: https://doi.org/10.1002/qj.2306.
Wilks, D. S. 2006. Statistical Methods in the Atmospheric Sciences. 2nd ed. Academic Press, Burlington, MA, 627 pp.
Wilson, D. R. and Ballard, S. P. 1999. A microphysically based precipitation scheme for the UK Meteorological Office unified model. Q. J. R. Meteorol. Soc. 125, 1607–1636. doi: https://doi.org/10.1002/qj.49712555707.