
# On the relationship between climate sensitivity and modelling uncertainty

## Abstract

Climate model projections are used to investigate the potential impacts of climate change on future weather, agriculture, water resources, human health, the global economy, etc. However, climate projections have a broad range of associated uncertainties, and it is a challenge to take account of these uncertainties in impact studies and risk assessments. Knowing which uncertainties matter and which may be reduced via scientific research or political decisions can help policy-makers in making informed decisions, scientists in focusing their resources, and businesses in building resilience to uncertainties that cannot be avoided. On the global scale, uncertainty about the political will and ability to move from agreements to significant action provides the largest uncertainty in climate projections, followed by the uncertainty associated with climate modelling itself. Here, we show that climate sensitivity is a very important source of model uncertainty over large parts of the globe not only for temperature, but also for precipitation and wind projections. Because ‘climate sensitivity’ is a collective term that encompasses a wide range of feedback mechanisms in the climate system, we may not know for a long time whether models with high or low climate sensitivities are more relevant for twenty-first-century projections. Nevertheless, investigations of climate impacts cannot wait. Here we argue that it is physically and statistically unsound to mix climate models with high and low climate sensitivities, and that the subset chosen for any impact study should depend on the question one is trying to answer.

How to Cite: Mauritzen, C., Zivkovic, T. and Veldore, V., 2017. On the relationship between climate sensitivity and modelling uncertainty. Tellus A: Dynamic Meteorology and Oceanography, 69(1), p.1327765. DOI: http://doi.org/10.1080/16000870.2017.1327765
Published on 01 Jan 2017. Submitted on 2 Feb 2017; accepted on 26 Apr 2017.
## 1. Introduction

Climate model projections are extensively used to investigate the potential impacts of climate change (IPCC, 2014). However, climate projections are uncertain for a variety of reasons. We do not know what the future will bring in terms of political decisions, technological breakthroughs, conflicts and agreements that will affect man-made climate change. We also do not know what volcanic and cosmic activities may affect the climate in the future. In addition to these ‘scenario’ uncertainties, there are inherent uncertainties in climate modelling, as well as uncertainties associated with the internal variability of the climate system. Usually, we think of the latter as year-to-year variations in average weather, i.e. things that happen too quickly to be regarded as climate. It is a technical as well as philosophical challenge to take stock of these uncertainties in impact studies and risk assessments. Questions about the political capacity or willingness to follow up on international agreements such as the 2015 Paris accord represent the greatest uncertainties regarding climate projections, at least for the latter part of the century. Closer in time, the uncertainty associated with modelling climate is huge. The aim of this paper is to make sense of model uncertainty. Though model uncertainty involves uncertainty about all the processes and feedbacks that make up the climate system, as well as mathematical inaccuracies due to the numerical methods used, it turns out that much of the model uncertainty can be pinned on one key player: ‘climate sensitivity’.

‘Climate sensitivity’ is intended as a measure of how fast Earth responds to changes in atmospheric CO2 concentration. The estimate has remained fairly constant for 40 years: In 1979 a committee on anthropogenic global warming convened by the National Academy of Sciences in the United States estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. At that time, only two sets of models were available; one exhibited a climate sensitivity of 2 °C, the other exhibited a climate sensitivity of 4 °C. Thirty-five years later, the IPCC’s Fifth Assessment Report came to the same conclusion. Specifically, it stated: ‘Equilibrium climate sensitivity is likely in the range 1.5 °C to 4.5 °C (high confidence), extremely unlikely less than 1 °C (high confidence), and very unlikely greater than 6 °C (medium confidence)’.

There is, in our opinion, no reason to think that this value should be constant over time: complex systems do not respond linearly to forcing (case in point: the response of human bodies to external stress). But it is a handy scaling mechanism. And decades of research have taught us that it is even more: we now know much more about which physical processes and feedback mechanisms are important for determining a model’s climate sensitivity. According to the latest climate assessment (IPCC, 2014), the water vapour/lapse rate, albedo and cloud feedbacks are the principal determinants of Equilibrium Climate Sensitivity (ECS).

We may not know for a long time whether models with high or low climate sensitivities are more relevant for the twenty-first century projections. However, to be prepared for the key risks of climate change – recognised by Oppenheimer et al. (2014) to be the breakdown of infrastructure due to extreme weather, ill-health, disturbed livelihoods due to inland flooding, mortality due to storm surges, flooding, heat-waves, breakdown of food systems due to extreme weather, droughts, flooding, loss of rural livelihoods due to insufficient access to drinking and irrigation water, loss of marine ecosystems and the loss of terrestrial ecosystems – we must provide advice to decision-makers despite these very large uncertainties.

It is our hope that the insights into the relationship between climate sensitivity and model uncertainty provided here will be of help to investigators who need to provide advice in the light of large uncertainties. The paper is organised as follows: the methods for calculating projection uncertainty, and the data used, are presented in Section 2. In Section 3, model uncertainty for the three variables surface air temperature change [°C], precipitation change [%] and wind speed change [m/s] is estimated both on global and regional scales. And finally, in Section 4, we discuss the results.

## 2. Methods

### 2.1. Models and scenarios

Coordinated climate modelling experiments at a variety of modelling centres around the world performed using common future-forcing scenarios have become the standard for producing climate projections (Knutti and Sedláček, 2013). These multi-model ensembles provide our best, though imperfect, basis for estimating projection uncertainties. In this study, we have used the CMIP5 data-set (Taylor et al., 2012). The specific models used are listed in Table 1. We use uniform model weights; i.e. all models are equally important for the uncertainty estimates, and we only use one simulation for each model version. In the analyses we use temperature, precipitation and wind speed anomalies, calculated as the difference from the 1971 to 1999 mean of each model centre’s historical (‘twentieth century’) climate simulation.
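As a concrete illustration of the anomaly calculation, the following minimal numpy sketch computes anomalies relative to a 1971–1999 baseline (the function name and array layout are our own, not part of the CMIP5 tooling):

```python
import numpy as np

def anomaly(series, years, ref_start=1971, ref_end=1999, percent=False):
    """Anomaly of an annual-mean series relative to a reference period.

    series  -- 1-D array of annual means from one model simulation
    years   -- matching array of calendar years
    percent -- if True, return the percentage change (as used here for
               precipitation); otherwise the plain difference (as used
               for temperature and wind speed)
    """
    series = np.asarray(series, dtype=float)
    years = np.asarray(years)
    ref = (years >= ref_start) & (years <= ref_end)
    baseline = series[ref].mean()
    if percent:
        return 100.0 * (series - baseline) / baseline
    return series - baseline
```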

We use three future scenarios (Representative Concentration Pathways; RCPs) assessed in IPCC AR5: the ‘2-degree’ scenario (RCP2.6), the ‘business-as-usual’ scenario (RCP8.5) and an intermediate scenario (RCP4.5) (Moss et al., 2010).

The estimates of ECS for each CMIP5 model member in Table 1 column 2 are from Flato et al. (2013) and Sherwood et al. (2014).

In Fig. 1, we show the difference in projected surface air temperature anomaly obtained from the 23 CMIP5 climate models and for the three different emissions scenarios presented in Table 1.

Fig. 1.

Global surface temperature anomaly relative to the 1971–1999 period. One line for each model simulation in Table 1, using a 10-year moving average filter. Each scenario is shown in a different colour: RCP2.6 in blue, RCP4.5 in orange and RCP8.5 in red.

### 2.2. Calculating climate projection uncertainties

Uncertainty assessments in climate science (on global and regional scales) have been a topic of interest for many decades. Since the advent of the IPCC in 1988 and CMIP in 1995, the ability to compare models developed at different centres has provided an opportunity to understand the differences between these models and to quantify the associated uncertainty (Giorgi and Mearns, 2002; Murphy et al., 2004; Tebaldi et al., 2005; Stephenson et al., 2012). Advances in climate modelling approaches and the inclusion of additional feedbacks have provided updated data-sets, methods and approaches to quantifying uncertainty (see, e.g. Knutti et al., 2008; Hawkins and Sutton, 2009, 2011; Yip et al., 2011; Northrop and Chandler, 2014; Ylhäisi et al., 2015).

The literature generally recommends that the uncertainty in climate projections be assessed through the estimation of internal, I(t); model, M(t); and scenario, S(t), uncertainties, where the total projection uncertainty, T(t), is the sum of these three: T(t) = I(t) + M(t) + S(t).

The internal uncertainty, I(t), is typically associated with the short-term variability of the Earth’s climate (where ‘short’ is undefined, but certainly shorter than 30 years, which is the meteorological definition of climate). El Niño is a much-used example of a phenomenon that strongly contributes to internal variability. By definition, internal uncertainty cannot be reduced because it represents variability unrelated to the phenomenon under study.

The model uncertainty, M(t), is associated with the model parameterizations, the set of equations used, and other model-specific issues. Model uncertainty can be reduced through the better representation of the physical processes in the model, but it can never be eliminated. Model uncertainty is typically estimated using the spread of climate model outputs obtained from different model centres and their ensemble members.

Finally, the scenario uncertainty, S(t), is the uncertainty associated with not knowing what the emissions of climate-affecting gases and aerosols will actually be in the coming decades. That uncertainty is so large that we need to allow for a large range of possibilities. Thus, we have chosen to keep the full range of Representative Concentration Pathways, from RCP2.6 to RCP8.5 (Table 1). Typically, the scenario uncertainty is estimated by using the spread of climate projections obtained from various scenarios. In Fig. 1, we see that until around 2040, the spread between models is as large as the spread between scenarios. After that, the spread between models within one scenario is less than the spread between scenarios.

The method for estimating the various components of climate uncertainty that we have chosen follows closely that of Hawkins and Sutton (2009). Assume $x(t)$ is the annual average model output, and $\tilde{x}(t)$ is a smoothed version of $x$. The model uncertainty $M(t)$ is calculated as the variance over all smoothed climate model outputs, averaged over all scenarios:

$$M(t) = \left\langle \frac{1}{MD-1}\sum_{m=1}^{MD}\left(\tilde{x}(m,s,t) - \left\langle \tilde{x}(m,s,t)\right\rangle_{m}\right)^{2}\right\rangle_{s} \qquad (1)$$

where $\tilde{x}(m,s,t)$ is the low-pass-filtered (20-year moving average) climate model output, $m$ is the model (of a total of $MD$ model runs), $s$ is the scenario (of a total of $SC$ scenarios) and $t$ is time. $\langle \cdots \rangle_{m}$ and $\langle \cdots \rangle_{s}$ denote the means over all models and over all scenarios, respectively. Generally, $\tilde{x}(m,s,t)$ is defined for every point on the grid. However, when we compute $M(t)$ for the global mean, we first spatially average the quantity of interest and then proceed with Equation (1); i.e. $\tilde{x}(m,s,t)$ is, in that case, spatially averaged over the entire globe.
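Equation (1) is straightforward to evaluate once the smoothed outputs are arranged in a (model, scenario, time) array; a possible numpy sketch (the array layout and function name are our assumptions):

```python
import numpy as np

def model_uncertainty(x_smooth):
    """Model uncertainty M(t), Equation (1).

    x_smooth -- array of shape (MD, SC, T): the low-pass-filtered output
                x~(m, s, t) for MD models, SC scenarios, T time steps.
    Returns a length-T array: the unbiased variance across models,
    averaged over scenarios.
    """
    x_smooth = np.asarray(x_smooth, dtype=float)
    # 1/(MD-1) * sum_m (x~ - <x~>_m)^2, for each scenario and time
    var_over_models = x_smooth.var(axis=0, ddof=1)   # shape (SC, T)
    # <...>_s : average over scenarios
    return var_over_models.mean(axis=0)
```

The scenario uncertainty $S(t)$ of Equation (2) follows the same pattern with the roles of the model and scenario axes swapped.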

Similarly, the scenario uncertainty, S(t), is defined as the variance over all scenarios, SC, averaged over all existing models, MD:

$$S(t) = \left\langle \frac{1}{SC-1}\sum_{s=1}^{SC}\left(\tilde{x}(m,s,t) - \left\langle \tilde{x}(m,s,t)\right\rangle_{s}\right)^{2}\right\rangle_{m} \qquad (2)$$

The internal variability ε is, per construction, the high-pass-filtered part of the model output signal:

$$\epsilon(m,s,t) = x(m,s,t) - \tilde{x}(m,s,t) \qquad (3)$$

The internal uncertainty is then defined as the internal variability’s variance over time:

$$I = \left\langle \left\langle \frac{1}{TM-1}\sum_{t=1}^{TM}\left(\epsilon(m,s,t) - \left\langle \epsilon(m,s,t)\right\rangle_{t}\right)^{2}\right\rangle_{m}\right\rangle_{s} \qquad (4)$$

where TM is the total timespan of the climate model output, i.e. data in the interval from 2005 to 2099 for future projections and from 1900 to 2005 for the ‘twentieth century’ simulations.

As is well known, the year-to-year variability of a climate variable like temperature is large. To prevent this variability from completely overwhelming any climate signal in the time series, it is common to smooth the time series. In terms of the uncertainty calculations, the effect of substituting the annual data $x$ with 2-year ($x_{2}$) and 10-year ($x_{10}$) moving averages in Equation (3) is to lower the relative importance of this uncertainty, so that the two other uncertainties (model and scenario) become important at an earlier stage (Fig. 2). Throughout this paper we have used $x_{10}$ in Equation (3) (identical to the choice made by Hawkins and Sutton (2009)).

Fig. 2.

Climate projections uncertainty in temperature, using all models and all scenarios in Table 1. (a) Using x in Equation (3), i.e. a 1-year average; (b) as in (a), but using a two-year average x2; (c) as in (a) but using x10.

We therefore rewrite Equations (3) and (4) as:

$$\epsilon_{10}(m,s,t) = x_{10}(m,s,t) - \tilde{x}(m,s,t) \qquad (5)$$

$$I_{10} = \left\langle \left\langle \frac{1}{TM-1}\sum_{t=1}^{TM}\left(\epsilon_{10}(m,s,t) - \left\langle \epsilon_{10}(m,s,t)\right\rangle_{t}\right)^{2}\right\rangle_{m}\right\rangle_{s} \qquad (6)$$
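Equations (5) and (6) can be sketched for a single model/scenario time series as follows (the edge handling of the moving averages is our choice; the paper does not specify it):

```python
import numpy as np

def moving_average(x, window):
    """Centred moving average; windows shrink near the ends of the series."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out[i] = x[lo:hi].mean()
    return out

def internal_uncertainty(x, hi_pass_window=10, low_pass_window=20):
    """I_10 for one model run: the temporal variance of
    eps_10 = x_10 - x~ (Equations (5) and (6)); averaging over models
    and scenarios is then a plain mean over runs."""
    eps10 = moving_average(x, hi_pass_window) - moving_average(x, low_pass_window)
    return eps10.var(ddof=1)
```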

Note that we have assumed (Equations (4) and (6)) the internal uncertainty to be time-independent; i.e. we assume that it is not influenced by global warming.

Scenario uncertainty is also shown in Fig. 2. In the global average it grows exponentially with time and completely dominates the climate projection uncertainty for the latter half of the twenty-first century. The only way to reduce scenario uncertainty is to remove one or more of the future scenarios. Despite the Paris Agreement, which would favour future scenario RCP2.6, we believe that all three scenarios should still be considered possible futures, and thus we include all three in the uncertainty calculations.

## 3. Model uncertainty and climate sensitivity

The model uncertainty is, by construction (Equation 1), a measure of the model spread. Since the model spread increases with time (Fig. 1), so does the model uncertainty (Fig. 2). In order to investigate the source of the time dependency we have coloured, in Fig. 3, the temperature projections for the individual models according to their ECS (Table 1; see also Flato et al., 2013). We find that models with high ECS (coloured pink) systematically warm faster through the twenty-first century, whereas models with low ECS (coloured green) warm slower. Though the models have not reached equilibrium at the end of the twenty-first century, there is obviously a strong relationship between each model’s speed of temperature increase throughout the twenty-first century (and therefore the magnitude of the model spread at the end of the twenty-first century) and its ECS.

Fig. 3.

Global average surface temperature anomaly [°C] relative to the 1971–1999 period. Models with high ECS (HS; see Table 1, column 5) are coloured pink, low-sensitivity models (LS) coloured green. (a) Scenario RCP2.6; (b) RCP4.5; (c) RCP8.5.

We therefore separate the models with high ECS from those with low ECS and calculate the modelling uncertainties for the two sets separately (Fig. 4b and c). The time dependence of the model uncertainty is now practically removed, and the model uncertainty is, in both cases, much less than when we calculated model uncertainty based on all the models (Fig. 4a), supporting the hypothesis that most of the model uncertainty in Fig. 4a is connected to climate sensitivity-related mechanisms.

Fig. 4.

Climate projection uncertainty [(°C)2] for global mean surface air temperature: green (stippled) is scenario uncertainty, blue (dotted) is model uncertainty and red (solid) is internal uncertainty, following the method of Hawkins and Sutton (2009). The model runs used are listed in Table 1. The scenarios used are RCP 8.5, 4.5 and 2.6 (Moss et al., 2010). (a) Using all models in Table 1 (identical to Fig. 2c); (b) using only models with high ECS (see Table 1, models listed as H in Column 5); (c) using only models with low ECS (see Table 1, models listed as L in Column 5).

It might be argued that the decrease in model spread and model uncertainty in Fig. 4b and c is simply due to the reduction in the number of models in those subsets. To test this, we randomly selected 20 subsets, of the same size as the high-ECS and low-ECS sets, from the full list of models in Table 1 and calculated the model uncertainty for each subset (Fig. 5). The figure shows that the model uncertainties for the high-ECS and low-ECS subsets are smaller than for any of the randomly selected subsets; it is thus very unlikely that the reduction in modelling uncertainty in Fig. 4b and c happened by chance. It also shows that model uncertainty is strongly related to a model’s physics and feedbacks.
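The random-subset test can be sketched by evaluating Equation (1) on subsets of the model axis (subset size, seed and array layout are our assumptions):

```python
import numpy as np

def subset_model_uncertainty(x_smooth, idx):
    """Model uncertainty (Equation (1)) restricted to the models in idx.
    x_smooth has shape (MD, SC, T)."""
    sub = np.asarray(x_smooth, dtype=float)[list(idx)]
    return sub.var(axis=0, ddof=1).mean(axis=0)

def random_subset_uncertainties(x_smooth, subset_size, n_draws=20, seed=0):
    """Model uncertainty for n_draws randomly chosen model subsets, the
    baseline against which the HS/LS subsets are compared."""
    rng = np.random.default_rng(seed)
    n_models = x_smooth.shape[0]
    return np.array([
        subset_model_uncertainty(
            x_smooth, rng.choice(n_models, subset_size, replace=False))
        for _ in range(n_draws)
    ])
```

If the high-ECS and low-ECS subsets produce smaller values than every random draw, the reduction in model uncertainty is unlikely to be an artefact of subset size alone.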

Fig. 5.

Model uncertainty for global average temperature change, calculated for various sets of model simulations from Table 1. In solid purple: using ALL models; in stippled green and dotted blue: the low-ECS (LS) and high-ECS (HS) model subsets (see column 5 of Table 1); in dash-dot purple: 20 randomly selected subsets of models from Table 1.

### 3.1. Geographical distribution of model uncertainty

Maps of temperature projections, and the square root of the model uncertainties, are shown in Fig. 6 (we use square root so that temperature change can be directly compared, using the same unit, to the magnitude of the model spread). The structure of warming is the same in all three scenarios and in all three groupings of models (All, HS and LS): more warming over land than over ocean and more warming over high northern latitudes. These regional differences in global warming have well-understood explanations related to reinforcing feedback mechanisms on land and near the ice edge. However, note how much clearer the features become in the HS and LS cases (Fig. 6, middle and right panels) compared to the (average) ‘ALL’ case (Fig. 6, left panel), as well as how much smaller the model uncertainty becomes in the HS and LS cases (Fig. 6, bottom panel), in particular over the continents. There are only two areas that retain high model uncertainty in the HS and LS cases, namely the Southern Ocean and the Arctic Ocean. By implication, the model spread in these regions is not dominated by climate-sensitivity-related processes. In the case of the Arctic, the uncertainty is still much smaller than the temperature change itself, whereas for the Southern Ocean, the uncertainty is as large as the signal (compare directly in Fig. 6). The latter is a complex region of dense water formation, seasonal ice cover and large air-sea fluxes, probably pushing the climate models to the limit of what they can deal with at the present time.

Fig. 6.

Multi-model ensemble mean projections of surface temperature anomalies for the 2080–2099 period relative to the 1971–1999 period [°C] and the square root of associated model uncertainties ([°C]; bottom row). Upper three rows: scenarios RCP 8.5, 4.5 and 2.6, respectively. Left column: using all models in Table 1; middle column: using only high-sensitivity temperature projection models (see Table 1, models listed as H in Column 5); right column: using only low-sensitivity temperature projection models (see Table 1, models listed as L in Column 5).

### 3.2. Precipitation and climate sensitivity

As it turns out, models with high ECS exhibit not only a faster global temperature rise, but also a higher global percentage change in precipitation, in the twenty-first century compared to models with low ECS (Fig. 7; compare the pink to the green lines). We should not be surprised by this finding: many of the feedback mechanisms associated with climate sensitivity – such as the water vapour, cloud and lapse rate feedbacks – relate directly to precipitation as well. Nevertheless, we find Fig. 7 to provide an unusually clear signal of this relationship, especially for the highest emission scenario, RCP8.5.

Fig. 7.

Global average precipitation anomaly [%] relative to the 1971–1999 period. High-ECS models (Table 1, Column 7) coloured pink, low-ECS models are coloured green. Only for one model, MRI-CGCM3, do we not use the ECS definition of sensitivity in precipitation; see note in Table 1. (a) Scenario RCP2.6; (b) RCP4.5; (c) RCP8.5.

The dependence upon ECS is reflected in the model uncertainty as well (Fig. 8). As in the case of temperature (Fig. 5), the model uncertainty is much smaller when the high-ECS and low-ECS models are considered separately than when model uncertainty is calculated for all the models (purple solid line in Fig. 8).

Fig. 8.

Model uncertainty for global average precipitation change, calculated for various sets of model simulations from Table 1. In solid purple: using ALL models; in stippled green and dotted blue: the LS and HS model sets (see column 7 of Table 1); in dash-dot purple: 20 randomly selected subsets of models from Table 1, with similar size as the HS and LS model sets.

Figure 8 also shows that, as in the case of temperature, it is unlikely that one would by chance pick a set of models with as little spread (model uncertainty) as the high-ECS and low-ECS model subsets. The high-ECS subset in Fig. 8 shows the smallest model uncertainty overall. The low-ECS subset is surpassed by a couple of random sets, but even this subset has an unusually low model uncertainty.

### 3.3. Geographical distribution of precipitation model uncertainty

The geographical distribution of precipitation change (Fig. 9) is much more complex than that of temperature (Fig. 6) due to the fact that some regions exhibit large reductions in precipitation, others enormous increases (coloured blue and red, respectively, in Fig. 9). The structure is generally the same in all cases and it follows the present climatological mean: the reduction in precipitation is projected to take place primarily over the world oceans and in subtropical bands, whereas the main increases in precipitation are over the subpolar and polar latitudes, as well as in the equatorial band. The model uncertainty is generally the least in the mid-latitude bands and highest over the equatorial band/tropics (Fig. 9, bottom panel).

Fig. 9.

Same as Fig. 6 but for percentage precipitation change [%].

As in the case of the global average (Fig. 8), the model uncertainty for precipitation change drops for both the high-ECS and low-ECS subsets compared to the full model set (Fig. 9, bottom panel), although not as much and as clearly as was the case for temperature (compare to Fig. 6, bottom panel). The geographical distribution of the model uncertainty also differs from that of temperature: in the case of precipitation, the model uncertainty remains high in the tropical zone even after separating the high- and low-sensitivity models. This indicates that there remains significant model spread in the tropical zone which is not dominated by climate sensitivity-related processes. That there is high uncertainty regarding precipitation in the tropical zone is of course expected: most of the CMIP5 models tend to simulate a stronger, wider and slightly northward-shifted ITCZ compared to observations (Stanfield et al., 2016), and the tropical zone was indeed pointed to by Flato et al. (2013) as the region where the CMIP5 models collectively have the largest systematic errors in precipitation. But what, if any, are the differences in the projections themselves?

The two subsets of precipitation model projections (columns 2 and 3 of Fig. 9) differ in that the former exhibits much larger contrasts in precipitation across the globe, regardless of scenario: regions of increase in precipitation (coloured red) generally exhibit a larger increase in the high-ECS subset than in the low-ECS subset or the full model set. And vice versa: regions of projected reduction in precipitation (coloured blue) exhibit larger reductions in the high-ECS subset of models.

### 3.4. Wind speed and climate sensitivity

Surface temperature, precipitation and wind speed are inherently linked through the equations governing atmospheric motion, namely the conservation of mass, momentum and energy. It is therefore to be expected that if temperature and precipitation projections are sensitive to CO2 emissions, wind speed should be as well. In Fig. 10, we show the global average wind speed change for the three scenarios RCP2.6, 4.5 and 8.5, and again, the high-ECS models are coloured pink and the low-ECS models are coloured green. For none of the scenarios is climate sensitivity an indicator of how fast the global average wind speed changes through the twenty-first century.

Fig. 10.

Global average wind speed anomaly [m/s] relative to the 1971–1999 period. High-ECS models (Table 1, Column 5, though note that we do not have the wind fields for all models – see Table 1, column 8) coloured pink, low-ECS models coloured green. (a) Scenario RCP2.6; (b) RCP4.5; (c) RCP8.5.

This finding is reflected in the model uncertainty as well (Fig. 11), which shows that the high-ECS and low-ECS model subsets do not stand out as exhibiting particularly low model uncertainty in the global average, neither compared to the full set of models (solid purple line) nor to the 20 random subsets.

Fig. 11.

Model uncertainty for global average wind speed change, calculated for random sets of model simulations from Table 1. In solid purple: the average of all the models; in stippled green and dotted blue: the LS and HS model sets (as in Fig. 10); in dash-dot purple: 20 randomly selected subsets of models from Table 1, with similar size as the HS and LS model sets.

So could it be that the uncertainty in the projection of wind speed is dominated by some other mechanisms, some sort of ‘wind speed sensitivity’? To resolve that issue we make a new division of the model projections, depending on how fast the wind speed increases in the twenty-first century (Table 1, column 9 and Fig. 12).
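A division by ‘wind speed sensitivity’ amounts to ranking the runs by their late-century change; a minimal sketch (the median split and the 20-step averaging window are our simplifications of the Table 1, column 9 classification):

```python
import numpy as np

def split_by_change(anomalies, tail=20):
    """Split model runs into high- and low-change groups.

    anomalies -- array of shape (MD, T): global-mean anomaly per run
    tail      -- number of final time steps over which the change is measured
    Returns (high_idx, low_idx): indices above and at-or-below the median.
    """
    anomalies = np.asarray(anomalies, dtype=float)
    change = anomalies[:, -tail:].mean(axis=1)
    cut = np.median(change)
    return np.where(change > cut)[0], np.where(change <= cut)[0]
```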

Fig. 12.

Global average wind speed anomaly [m/s] relative to the 1971–1999 period. High-sensitivity models (determined by each model’s global average wind speed change in the twenty-first century; see Table 1, Column 9) coloured pink, low-sensitivity models colored green. (a) Scenario RCP2.6; (b) RCP4.5; (c) RCP8.5.

Unfortunately, this latter division does not give a clearer picture of a physical difference. For instance, what is a ‘sensitive’ model for RCP8.5 is not so for RCP4.5 or RCP2.6 (Fig. 12). The finding is supported by Fig. 13: even for this latter division into high-sensitivity and low-sensitivity subsets of wind models, the model uncertainty can easily be undercut by random subsets.

Fig. 13.

Model uncertainty for global average wind speed change, calculated for various sets of model simulations from Table 1. In solid purple: ALL models; in stippled green and dotted blue: The L and H models sets based on wind-speed change (see Fig. 12 and column 9 of Table 1); in dash-dot purple: 20 randomly selected subsets of models from Table 1, with similar size as the H and L model sets.

In the following, we therefore go back to the original division of models based on their ECS (as used in Figs. 10 and 11) and investigate if there is any hint of difference in projections when looking at the geographical distribution of wind speed anomalies at the end of the twenty-first century.

### 3.5. Geographical distribution of wind speed model uncertainty

As for precipitation (Fig. 9), the wind speed is projected to increase over large regions of the world by the end of the century whilst decreasing over others (contrast red and blue regions in Fig. 14). The largest model uncertainties are found over the Southern Ocean and over the oceans in general. In the case of wind speed, the model uncertainty is in many places of the same order of magnitude as the change itself. We have masked out those areas in the maps in Fig. 14.
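The significance mask can be built grid point by grid point from baseline and future samples. Below is a numpy-only sketch of the two-sided rank-sum test using the normal approximation (scipy.stats.ranksums would be the standard tool; our version ignores tie corrections):

```python
import math
import numpy as np

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value, normal approximation,
    no tie correction (a stand-in for scipy.stats.ranksums)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n1, n2 = len(a), len(b)
    ranks = np.concatenate([a, b]).argsort().argsort() + 1.0
    r1 = ranks[:n1].sum()                       # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0               # its mean under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def significant_change(baseline, future, alpha=0.05):
    """True where the projected change is significant at the (1 - alpha)
    level; such grid points would be left unhatched in a map like Fig. 14."""
    return ranksum_p(baseline, future) < alpha
```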

Fig. 14.

Same as Fig. 6 but for wind speed change [m/s]. Since the model uncertainties are of the same order of magnitude as the projected change in the case of wind speed, we have overlaid a mask indicating change that is not significantly larger than the uncertainty. Specifically, the hatching shows areas where a two-sided rank sum test does not reject the null hypothesis of no change for the 2071–2099 period relative to the baseline period (1971–1999) at the 95% level.

Despite the lack of a convincing signal in the global average (Figs. 10 and 11), we find in the maps of Fig. 14 systematic differences between the wind speed projections from the high-ECS and low-ECS model subsets (contrast columns 2 and 3 in Fig. 14): The regions of projected increases in wind speed, such as the Southern Ocean, exhibit much larger increases in the subset of high-ECS models than in the subset of low-ECS. Vice versa for the regions of projected reduction in wind speed (the blue regions are ‘bluer’ in the middle column). We take this finding to support our expectation, namely that the wind speed is sensitive to CO2 emissions and that this sensitivity is related to the sensitivity of precipitation and temperature.

## 4. Discussion

Investigating the impacts of climate change typically involves three steps: (1) global climate projection, (2) regional downscaling to much higher horizontal resolution and (3) regional impact modelling (for example hydrology, agriculture, ecology or economy models). The first step is necessary because the emission problem is a global one, so to make a future projection one needs to consider the entire globe. The second step, which involves using the global projection as a boundary condition for the regional model, is necessary because the resolution of the former is too coarse to resolve what goes on in the region of interest, and the third step gives the information one is actually after, for instance to provide an impact assessment.

Although there are more than 20 Global Climate Models (GCMs) available (Table 1), the computational resources needed for running a Regional Climate Model (RCM) are so large that a downscaling experiment typically involves one RCM and a maximum of three to five GCMs (Mearns et al., 2012; Katzfey et al., 2016). This is inconsistent with the GCM perspective, where, as mentioned in the beginning, multi-model ensembles are thought to be the best basis for estimating projection uncertainties. According to that perspective one should use all the GCMs as input to the RCMs. But computing power sets a limit.

The subset of GCMs to be used as boundary conditions for the RCMs is normally selected by ranking the models’ performance in terms of their ability to simulate important climate phenomena (such as El Niño and the North Atlantic Oscillation) and to reproduce historical observations, such as temperature, precipitation and sea level pressure records (Watterson et al., 2013a, 2013b; Grose et al., 2014).

Here we argue that there is another way to select a subset of GCMs. We argue that the model uncertainty of the full set of GCMs is artificially high, because the full set encompasses two families of models that represent the physics and feedbacks in two different ways, namely those with high and those with low sensitivity to CO2 emissions. We have seen this for temperature (Fig. 5) and for precipitation (Fig. 8). In both cases, the model uncertainty for each of the two subsets drops well below the uncertainty of the full set, and also below what could be obtained by randomly picking subsets. The case of wind speed was not as clear (Fig. 11), but even there we find systematic differences between the wind speed projections from the high-ECS and low-ECS model subsets (Fig. 14), just as we did for temperature (Fig. 6) and precipitation (Fig. 9).
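
The statistical point can be illustrated with a toy calculation (synthetic projection values chosen for illustration, not the actual CMIP5 data analysed in this paper): when an ensemble clusters into two families, the spread within each family falls well below both the full-ensemble spread and the average spread of randomly drawn subsets of the same size.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic end-of-century warming projections [deg C]: two model 'families'
low_ecs = rng.normal(2.0, 0.3, size=10)   # low-sensitivity models
high_ecs = rng.normal(4.0, 0.3, size=10)  # high-sensitivity models
full_ensemble = np.concatenate([low_ecs, high_ecs])

spread_full = np.std(full_ensemble)
spread_split = 0.5 * (np.std(low_ecs) + np.std(high_ecs))

# Average spread of random size-10 subsets, for comparison
random_spreads = [np.std(rng.choice(full_ensemble, size=10, replace=False))
                  for _ in range(1000)]
spread_random = np.mean(random_spreads)

print(f"full ensemble spread:     {spread_full:.2f}")
print(f"ECS-based subset spread:  {spread_split:.2f}")
print(f"random subset spread:     {spread_random:.2f}")
```

With these numbers the ECS-based subsets show roughly a third of the full-ensemble spread, whereas random subsets show essentially no reduction, mirroring the behaviour described above for temperature and precipitation.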

So what is the difference in physics between the high-sensitivity and low-sensitivity climate models? A recent study of climate sensitivity in 43 CMIP5 climate models (Sherwood et al., 2014) shows that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half the variance in climate sensitivity. This makes sense: we know, for instance, that operational forecast models are quite sensitive to the choice of turbulent mixing scheme within the lower troposphere, and that a realistic representation of turbulent mixing is needed to accurately portray the vertical thermodynamic and kinematic profiles of the atmosphere (Cohen et al., 2015). Furthermore, Sherwood et al. (2014) find that observations are consistent with strong, not weak, convective mixing and thereby with high-sensitivity, not low-sensitivity, models. The mechanism they propose relates mixing to cloud feedback through the dehydration of the low-cloud layer at a rate that increases as the climate warms.

We therefore suggest that it is physically and statistically unsound to mix models from the two climate model families (high- and low-sensitivity models). But how to pick the family? One could pick a set of high-sensitivity climate models based on a belief that those models are more physically reliable, a belief that finds support, for instance, in the findings of Sherwood et al. (2014). Or one could pick a set of high-sensitivity climate models if one’s aim is to provide advice guided by the precautionary principle. The precautionary principle states that if an action or policy has a suspected risk of causing harm to the public or to the environment, then, in the absence of a scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking the action. This principle was included in the 1992 Rio Declaration on Environment and Development and in the United Nations Framework Convention on Climate Change, and has later been incorporated into many international agreements. In order to offer advice within the framework of the precautionary principle, one needs to make impact assessments based on the high-sensitivity projections (i.e. the worst case).

On the other hand, one could pick a set of low-sensitivity climate models if one wishes to address the questions: what is the least that can happen? What must we prepare for?

To summarise, we argue that it is physically and statistically unsound to mix climate models with high and low ECS, and that the subset chosen for any impact study should depend on the question one is trying to answer.

## Disclosure statement

No potential conflict of interest was reported by the authors.

## Notes

b. Is the model used to show temperature projections?

c. Is the model used to show precipitation projections?

d. Is the model used to show wind speed projections?

e. The CNRM-CM5 wind data we downloaded from the CMIP5 databank appear erroneous, so they are excluded from the present analysis.

f. MRI-CGCM3 is the only model for which precipitation and temperature differ strongly in terms of sensitivity. Therefore, in the precipitation analysis, we have lumped this model together with the high-sensitivity models.

1. ‘Equilibrium Climate Sensitivity’ is formally defined as the equilibrium change in temperature [°C] associated with a doubling of the concentration of carbon dioxide in the Earth's atmosphere.
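
As a worked illustration of this definition (a toy calculation assuming the standard logarithmic dependence of radiative forcing on CO2 concentration; the concentration values are illustrative only), the equilibrium warming implied by a given ECS can be estimated as:

```python
import math

def equilibrium_warming(ecs, c_final, c_initial=280.0):
    """Equilibrium warming [deg C] implied by an ECS value, assuming
    forcing scales with the logarithm of CO2 concentration [ppm]."""
    return ecs * math.log(c_final / c_initial) / math.log(2.0)

# A doubling of CO2 recovers the ECS itself, by definition
print(equilibrium_warming(3.0, 560.0))  # -> 3.0

# Illustrative contrast between a low-ECS (2 C) and a high-ECS (4.5 C)
# model at a hypothetical 450 ppm concentration
print(equilibrium_warming(2.0, 450.0), equilibrium_warming(4.5, 450.0))
```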

## Acknowledgements

The authors wish to thank Edward Hawkins and Michael Wehner for advice at the outset of the project and Arnaud Le Breton, Rene Castberg, Akos Buzinkay and Jonathan Niesel for support regarding the data analytics aspects of the study. We acknowledge the World Climate Research Programme’s Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modelling groups (listed in Table 1 of this paper) for producing and making available their model output. For CMIP, the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and leads the development of software infrastructure, in partnership with the Global Organization for Earth System Science Portals. Finally, we would like to acknowledge the generous support for this project provided by DNV GL and NIVA.

## References

1. Cohen, A. E., Cavallo, S. M., Coniglio, M. C. and Brooks, H. E. 2015. A review of planetary boundary layer parameterization schemes and their sensitivity in simulating southeastern U.S. cold season severe weather environments. Wea. Forecast. 30, 591–612. https://doi.org/10.1175/WAF-D-14-00105.1

2. Flato, G., Marotzke, J., Abiodun, B., Braconnot, P., Chou, S. C. et al. 2013. Evaluation of climate models. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Vol. 5 (eds. T. F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S. K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P. M. Midgley). Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 741–866.

3. Giorgi, F. and Mearns, L. O. 2002. Calculation of average, uncertainty range, and reliability of regional climate changes from AOGCM simulations via the ‘reliability ensemble averaging’ (REA) method. J. Clim. 15, 1141–1158. https://doi.org/10.1175/1520-0442(2002)015<1141:COAURA>2.0.CO;2

4. Grose, M. R., Brown, J. N., Narsey, S., Brown, J. R., Murphy, B. F. et al. 2014. Assessment of the CMIP5 global climate model simulations of the western tropical Pacific climate system and comparison to CMIP3. Int. J. Climatol. 34, 3382–3399. https://doi.org/10.1002/joc.3916

5. Hawkins, E. and Sutton, R. 2009. The potential to narrow uncertainty in regional climate predictions. Bull. Am. Meteorol. Soc. 90, 1095–1107. https://doi.org/10.1175/2009BAMS2607.1

6. Hawkins, E. and Sutton, R. 2011. The potential to narrow uncertainty in projections of regional precipitation change. Clim. Dyn. 37, 407–418. https://doi.org/10.1007/s00382-010-0810-6

7. IPCC. 2014. Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK.

8. Katzfey, J., Nguyen, K., McGregor, J., Hoffmann, P., Ramasamy, S. et al. 2016. High-resolution simulations for Vietnam – methodology and evaluation of current climate. J. Atmos. Sci. 52(2), 1–16.

9. Knutti, R., Allen, M. R., Friedlingstein, P., Gregory, J. M., Hegerl, G. C. et al. 2008. A review of uncertainties in global temperature projections over the twenty-first century. J. Clim. 21, 2651–2663. https://doi.org/10.1175/2007JCLI2119.1

10. Knutti, R. and Sedláček, J. 2013. Robustness and uncertainties in the new CMIP5 climate model projections. Nat. Clim. Change 3, 369–373.

11. Mearns, L. O., Arritt, R., Biner, S., Bukovsky, M. S., McGinnis, S. et al. 2012. The North American regional climate change assessment program: overview of phase I results. Bull. Am. Meteorol. Soc. 93, 1337–1362. https://doi.org/10.1175/BAMS-D-11-00223.1

12. Moss, R. H., Edmonds, J. A., Hibbard, K. A., Manning, M. R., Rose, S. K. et al. 2010. The next generation of scenarios for climate change research and assessment. Nature 463, 747–756. https://doi.org/10.1038/nature08823

13. Murphy, J. M., Sexton, D. M., Barnett, D. N., Jones, G. S., Webb, M. J. et al. 2004. Quantification of modelling uncertainties in a large ensemble of climate change simulations. Nature 430, 768–772. https://doi.org/10.1038/nature02771

14. Northrop, P. J. and Chandler, R. E. 2014. Quantifying sources of uncertainty in projections of future climate. J. Clim. 27, 8793–8808. https://doi.org/10.1175/JCLI-D-14-00265.1

15. Oppenheimer, M., Campos, M., Warren, R., Birkmann, J., Luber, G. et al. 2014. Emergent risks and key vulnerabilities. In: Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds. C. B. Field, V. R. Barros, D. J. Dokken, K. J. Mach, M. D. Mastrandrea, T. E. Bilir, M. Chatterjee, K. L. Ebi, Y. O. Estrada, R. C. Genova, B. Girma, E. S. Kissel, A. N. Levy, S. MacCracken, P. R. Mastrandrea and L. L. White). Cambridge University Press, Cambridge, UK, pp. 1039–1099.

16. Sherwood, S. C., Bony, S. and Dufresne, J.-L. 2014. Spread in model climate sensitivity traced to atmospheric convective mixing. Nature 505, 37–42. https://doi.org/10.1038/nature12829

17. Stanfield, R. E., Jiang, J. H., Dong, X., Xi, B., Su, H. et al. 2016. A quantitative assessment of precipitation associated with the ITCZ in the CMIP5 GCM simulations. Clim. Dyn. 47, 1863–1880.

18. Stephenson, D. B., Collins, M., Rougier, J. C. and Chandler, R. E. 2012. Statistical problems in the probabilistic prediction of climate change. Environmetrics 23, 364–372. https://doi.org/10.1002/env.2153

19. Taylor, K. E., Stouffer, R. J. and Meehl, G. A. 2012. An overview of CMIP5 and the experiment design. Bull. Am. Meteorol. Soc. 93, 485. https://doi.org/10.1175/BAMS-D-11-00094.1

20. Tebaldi, C., Smith, R. L., Nychka, D. and Mearns, L. O. 2005. Quantifying uncertainty in projections of regional climate change: a Bayesian approach to the analysis of multimodel ensembles. J. Clim. 18, 1524–1540. https://doi.org/10.1175/JCLI3363.1

21. Watterson, I. G., Bathols, J. and Heady, C. 2013a. What influences the skill of climate models over the continents? Bull. Am. Meteorol. Soc. 95(5), 689–700. https://doi.org/10.1175/BAMS-D-12-00136.1

22. Watterson, I. G., Hirst, A. C. and Rotstayn, L. D. 2013b. A skill-score based evaluation of simulated Australian climate. Aust. Meteor. Oceanogr. J. 63, 181–190. https://doi.org/10.22499/2.00000

23. Yip, S., Ferro, C. A., Stephenson, D. B. and Hawkins, E. 2011. A simple, coherent framework for partitioning uncertainty in climate predictions. J. Clim. 24, 4634–4643. https://doi.org/10.1175/2011JCLI4085.1

24. Ylhäisi, J. S., Garrè, L., Daron, J. and Räisänen, J. 2015. Quantifying sources of climate uncertainty to inform risk analysis for climate change decision-making. Local Environ. 20, 811–835. https://doi.org/10.1080/13549839.2013.874987