Climate model projections are extensively used to investigate the potential impacts of climate change (IPCC, 2014). However, climate projections are uncertain for a variety of reasons. We do not know what the future will bring in terms of political decisions, technological breakthroughs, conflicts and agreements that will affect manmade climate change. We also do not know what volcanic and cosmic activities may affect the climate in the future. In addition to these ‘scenario’ uncertainties, there are inherent uncertainties in climate modelling, as well as uncertainties associated with the internal variability of the climate system. Usually, we think of the latter as year-to-year variations in average weather, i.e. things that happen too quickly to be regarded as climate. It is a technical as well as philosophical challenge to take stock of these uncertainties in impact studies and risk assessments. Questions about the political implementation capacity or willingness to follow up international agreements such as the 2015 Paris accord represent the greatest uncertainties regarding climate projections, at least for the latter part of the century. Closer in time, the uncertainty associated with modelling climate is huge. The aim of this paper is to make sense of model uncertainty. Though model uncertainty involves uncertainty about all the processes and feedbacks that make up the climate system, as well as mathematical inaccuracies due to the numerical methods used, it turns out that much of the model uncertainty can be pinned on one key player: ‘climate sensitivity’.
‘Climate sensitivity’ is intended as a measure of how fast Earth responds to changes in atmospheric CO2 concentration. The estimate has remained fairly constant for 40 years: In 1979 a committee on anthropogenic global warming convened by the National Academy of Sciences in the United States estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. At that time, only two sets of models were available; one exhibited a climate sensitivity of 2 °C, the other exhibited a climate sensitivity of 4 °C. Thirty-five years later, the IPCC’s Fifth Assessment Report came to the same conclusion. Specifically, it stated: ‘Equilibrium climate sensitivity is likely in the range 1.5 °C to 4.5 °C (high confidence), extremely unlikely less than 1 °C (high confidence), and very unlikely greater than 6 °C (medium confidence)’.
There is, in our opinion, no reason to think that this value should be constant over time: complex systems do not respond linearly to forcing (case in point: the response of human bodies to external stress). But it is a handy scaling mechanism. And decades of research have taught us that it is even more: we now know much more about which physical processes and feedback mechanisms are important for determining a model’s climate sensitivity. According to the latest climate assessment (IPCC, 2014), the water vapour/lapse rate, albedo and cloud feedbacks are the principal determinants of Equilibrium Climate Sensitivity (ECS).1
We may not know for a long time whether models with high or low climate sensitivities are more relevant for the twenty-first century projections. However, to be prepared for the key risks of climate change – recognised by Oppenheimer et al. (2014) to include the breakdown of infrastructure due to extreme weather; ill-health and disturbed livelihoods due to inland flooding; mortality due to storm surges, flooding and heat-waves; breakdown of food systems due to extreme weather, droughts and flooding; loss of rural livelihoods due to insufficient access to drinking and irrigation water; and the loss of marine and terrestrial ecosystems – we must provide advice to decision-makers despite these very large uncertainties.
It is our hope that the insights into the relationship between climate sensitivity and model uncertainty provided here will be of help to investigators who need to provide advice in the light of large uncertainties. The paper is organised as follows: the methods for calculating projection uncertainty, and the data used, are presented in Section 2. In Section 3, model uncertainty for the three variables surface air temperature change [°C], precipitation change [%] and wind speed change [m/s] is estimated both on global and regional scales. And finally, in Section 4, we discuss the results.
Coordinated climate modelling experiments at a variety of modelling centres around the world performed using common future-forcing scenarios have become the standard for producing climate projections (Knutti and Sedláček, 2013). These multi-model ensembles provide our best, though imperfect, basis for estimating projection uncertainties. In this study, we have used the CMIP5 data-set (Taylor et al., 2012). The specific models used are listed in Table 1. We use uniform model weights; i.e. all models are equally important for the uncertainty estimates, and we only use one simulation for each model version. In the analyses we use temperature, precipitation and wind speed anomalies, calculated as the difference from the 1971 to 1999 mean of each model centre’s historical (‘twentieth century’) climate simulation.
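The anomaly calculation described above can be sketched as follows. This is a toy illustration on a synthetic annual-mean series; the variable names and numbers are our own, not taken from the CMIP5 archive:

```python
import numpy as np

# Synthetic stand-in for one model's annual-mean temperature, 1900-2099.
years = np.arange(1900, 2100)
rng = np.random.default_rng(0)
temperature = 14.0 + 0.01 * (years - 1900) + rng.normal(0.0, 0.2, years.size)

# Reference period: the 1971-1999 mean of the historical simulation.
reference = temperature[(years >= 1971) & (years <= 1999)].mean()

# Anomalies are expressed as differences from that reference mean.
anomaly = temperature - reference
```

By construction, the anomalies average to zero over the reference period, so models with different absolute climatologies can be compared on a common footing.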
We use three future scenarios (Representative Concentration Pathways; RCPs) assessed in IPCC AR5: the ‘2-degree’ scenario (RCP2.6), the ‘business-as-usual’ scenario (RCP8.5) and an intermediate scenario (RCP4.5) (Moss et al., 2010).
Uncertainty assessments in climate science (on global and regional scales) have been a topic of interest for many decades. From the advent of the IPCC in 1988 and CMIP in 1995, the ability to compare models developed at different centres has provided an opportunity to understand the differences between these models and to quantify the associated uncertainty (Giorgi and Mearns, 2002; Murphy et al., 2004; Tebaldi et al., 2005; Stephenson et al., 2012). Advances in climate modelling approaches and the inclusion of additional feedbacks have provided updated data-sets, methods and approaches to quantifying uncertainty (see, e.g. Knutti et al., 2008; Hawkins and Sutton, 2009, 2011; Yip et al., 2011; Northrop and Chandler, 2014; Ylhäisi et al., 2015).
The literature generally recommends that the uncertainty in climate projections be assessed through the estimation of internal, I(t); model, M(t); and scenario, S(t), uncertainties, where the total projection uncertainty, T(t), is the sum of these three: T(t) = I(t) + M(t) + S(t).
The internal uncertainty, I(t), is typically associated with the short-term variability of the Earth’s climate (where ‘short’ is not precisely defined, but is certainly shorter than the 30-year averaging period conventionally used to define climate). El Niño is a much-used example of a phenomenon that strongly contributes to internal variability. By definition, internal uncertainty cannot be reduced, because it represents variability unrelated to the phenomenon under study.
The model uncertainty, M(t), is associated with the model parameterizations, the set of equations used, and other model-specific issues. Model uncertainty can be reduced through the better representation of the physical processes in the model, but it can never be eliminated. Model uncertainty is typically estimated using the spread of climate model outputs obtained from different model centres and their ensemble members.
Finally, the scenario uncertainty, S(t), is the uncertainty associated with not knowing what the emissions of climate-affecting gases and aerosols will actually be in the coming decades. That uncertainty is so large that we need to allow for a large range of possibilities. Thus, we have chosen to keep the full range of Representative Concentration Pathways, from RCP2.6 to RCP8.5 (Table 1). Typically, the scenario uncertainty is estimated by using the spread of climate projections obtained from various scenarios. In Fig. 1, we see that until around 2040, the spread between models is as large as the spread between scenarios. After that, the spread between models within one scenario is less than the spread between scenarios.
The method for estimating the various components of climate uncertainty that we have chosen follows closely that of Hawkins and Sutton (2009). Assume $x(t)$ is the annual average model output, and $\tilde{x}(t)$ is a smoothed version of $x$. The model uncertainty $M(t)$ is calculated as the variance over all smoothed climate model outputs, averaged over all scenarios:

$$M(t) = \frac{1}{SC}\sum_{s=1}^{SC}\frac{1}{MD}\sum_{m=1}^{MD}\left(\tilde{x}_{m,s}(t)-\overline{\tilde{x}}^{\,m}_{s}(t)\right)^{2},\qquad(1)$$

where $\tilde{x}_{m,s}(t)$ is the low-pass-filtered (20-year moving average) climate model output, m is the model (of a total of MD model runs), s is the scenario (of a total of SC scenarios) and t is time. $\overline{\tilde{x}}^{\,m}$ and $\overline{\tilde{x}}^{\,s}$ denote the means over all models and over all scenarios, respectively. Generally, $\tilde{x}$ is defined for every point on the grid. However, when we compute M(t) for the global mean, we first spatially average the quantity of interest and then proceed with Equation (1); i.e. $\tilde{x}$ is, in that case, spatially averaged over the entire globe.
Similarly, the scenario uncertainty, S(t), is defined as the variance over all scenarios, SC, averaged over all existing models, MD:

$$S(t)=\frac{1}{MD}\sum_{m=1}^{MD}\frac{1}{SC}\sum_{s=1}^{SC}\left(\tilde{x}_{m,s}(t)-\overline{\tilde{x}}^{\,s}_{m}(t)\right)^{2}.\qquad(2)$$
The internal variability $\varepsilon$ is, per construction, the high-pass-filtered part of the model output signal:

$$\varepsilon_{m,s}(t)=x_{m,s}(t)-\tilde{x}_{m,s}(t).$$
The internal uncertainty is then defined as the internal variability’s variance over time:

$$I=\frac{1}{MD\cdot SC}\sum_{m=1}^{MD}\sum_{s=1}^{SC}\frac{1}{TM}\sum_{t}\left(\varepsilon_{m,s}(t)-\overline{\varepsilon}_{m,s}\right)^{2},\qquad(3)$$

where TM is the total timespan of the climate model output, i.e. data in the interval from 2005 to 2099 for future projections and from 1900 to 2005 for the ‘twentieth century’ simulations, and $\overline{\varepsilon}_{m,s}$ is the time mean of $\varepsilon_{m,s}(t)$.
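As an illustration, the partition of the total uncertainty into its model, scenario and internal components can be sketched on a small synthetic ensemble. The array shapes, toy trends and noise levels below are our own assumptions, not CMIP5 data:

```python
import numpy as np

rng = np.random.default_rng(42)
MD, SC, TM = 8, 3, 95          # models, scenarios, years (2005-2099)
t = np.arange(TM)

# Toy ensemble: a scenario-dependent trend, a model-dependent slope, plus noise.
trend = (0.01 + 0.02 * np.arange(SC))[None, :, None] * t   # scenario signal
slope = rng.normal(1.0, 0.3, (MD, 1, 1))                   # model spread
x = slope * trend + rng.normal(0.0, 0.15, (MD, SC, TM))    # 'annual output'

def smooth(series, window=10):
    """Moving average along the last (time) axis, same length as input."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), -1, series)

xs = smooth(x)                         # low-pass-filtered model output
M = xs.var(axis=0).mean(axis=0)        # variance over models, mean over scenarios
S = xs.var(axis=1).mean(axis=0)        # variance over scenarios, mean over models
eps = x - xs                           # high-pass residual (internal variability)
I = eps.var(axis=-1).mean()            # variance over time, mean over all runs
T = I + M + S                          # total projection uncertainty
```

M and S are time series on the projection period, while I is a single number; in the toy data, as in Fig. 1, the model and scenario terms grow with time while the internal term stays flat.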
As is well known, the year-to-year variability of a climate variable like temperature is large. To prevent this variability from completely overwhelming any climate signal in the time series, it is common to smooth the time series. In terms of the uncertainty calculations, the effect of substituting the annual data x with 2-year (x2) and 10-year (x10) moving averages in Equation (3) is to lower the relative importance of this uncertainty, so that the two other uncertainties (model and scenario) become important at an earlier stage (Fig. 2). Throughout this paper we have used x10 in Equation (3) (identical to the choice made by Hawkins and Sutton (2009)).
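The effect of widening the averaging window can be illustrated on a toy series (our own synthetic example; the trend and noise values are arbitrary): the variance of the residual with respect to the 20-year low-pass filter, and hence the internal uncertainty, shrinks as the annual data are replaced by 2-year and 10-year means:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy series: a linear 'climate' trend plus year-to-year 'weather' noise.
x = 0.01 * np.arange(200) + rng.normal(0.0, 0.2, 200)

def ma(v, w):
    """w-point moving average, keeping only fully overlapping windows."""
    return np.convolve(v, np.ones(w) / w, mode="valid")

def residual_variance(series, window, lowpass=20):
    """Variance of the residual between a short and a long moving average,
    compared on their common core to avoid edge effects."""
    hi, lo = ma(series, window), ma(series, lowpass)
    off = (lowpass - window) // 2          # align window centres
    hi = hi[off:off + lo.size]
    return (hi - lo).var()

# Internal variability shrinks as the window widens (x, x2, x10).
v1, v2, v10 = (residual_variance(x, w) for w in (1, 2, 10))
```

With the 10-year mean, the residual variance is a small fraction of that of the annual data, which is why the model and scenario terms dominate earlier in Fig. 2.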
Scenario uncertainty is also shown in Fig. 2. In the global average it grows exponentially with time and completely dominates the climate projection uncertainty in the latter half of the twenty-first century. The only way to reduce scenario uncertainty is to remove one or more of the future scenarios. Despite the Paris Agreement, which would favour future scenario RCP2.6, we believe that all three scenarios should still be considered possible futures, and thus we include all three in the uncertainty calculations.
The model uncertainty is, by construction (Equation 1), a measure of the model spread. Since the model spread increases with time (Fig. 1), so does the model uncertainty (Fig. 2). In order to investigate the source of the time dependency we have coloured, in Fig. 3, the temperature projections for the individual models according to their ECS (Table 1; see also Flato et al., 2013). We find that models with high ECS (coloured pink) systematically warm faster through the twenty-first century, whereas models with low ECS (coloured green) warm slower. Though the models have not reached equilibrium at the end of the twenty-first century, there is obviously a strong relationship between each model’s speed of temperature increase throughout the twenty-first century (and therefore the magnitude of the model spread at the end of the twenty-first century) and its ECS.
We therefore separate the models with high ECS from those with low ECS and calculate the modelling uncertainties for the two sets separately (Fig. 4b and c). The time dependence of the model uncertainty is now practically removed, and the model uncertainty is, in both cases, much less than when we calculated model uncertainty based on all the models (Fig. 4a), supporting the hypothesis that most of the model uncertainty in Fig. 4a is connected to climate sensitivity-related mechanisms.
It might be argued that the decrease in model spread and model uncertainty in Fig. 4b and c is simply due to the reduction in the number of models in those subsets. To test this, we randomly selected 20 subsets of the same size as the high-ECS and low-ECS sets from the full list of models in Table 1 and calculated the model uncertainty for each subset (Fig. 5). The figure shows that the model uncertainties for the high-ECS and low-ECS subsets are smaller than for any of the randomly selected subsets; it is thus very unlikely that the reduction in modelling uncertainty in Fig. 4b and c happened by chance. It also shows that model uncertainty is strongly related to a model’s physics and feedbacks.
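The random-subset comparison can be sketched as follows. The warming values below are hypothetical stand-ins (not the projections of Table 1), constructed so that the response scales with ECS:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical end-of-century warming for 20 models, scaling with ECS.
ecs = rng.uniform(2.0, 4.7, 20)
warming = 1.1 * ecs + rng.normal(0.0, 0.1, 20)

# Split at the median ECS into low- and high-sensitivity halves.
order = np.argsort(ecs)
low, high = order[:10], order[10:]
sorted_spread = max(warming[low].var(), warming[high].var())

# Compare with 20 randomly drawn subsets of the same size.
random_spreads = [warming[rng.choice(20, 10, replace=False)].var()
                  for _ in range(20)]
# If ECS drives the spread, the ECS-sorted halves should show less
# variance than typical random subsets.
```

In this toy setting the sorted halves have far less spread than the random draws, mirroring the behaviour of Fig. 5.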
Maps of temperature projections, and the square root of the model uncertainties, are shown in Fig. 6 (we use square root so that temperature change can be directly compared, using the same unit, to the magnitude of the model spread). The structure of warming is the same in all three scenarios and in all three groupings of models (All, HS and LS): more warming over land than over ocean and more warming over high northern latitudes. These regional differences in global warming have well-understood explanations related to reinforcing feedback mechanisms on land and near the ice edge. However, note how much clearer the features become in the HS and LS cases (Fig. 6, middle and right panels) compared to the (average) ‘ALL’ case (Fig. 6, left panel), as well as how much smaller the model uncertainty becomes in the HS and LS cases (Fig. 6, bottom panel), in particular over the continents. There are only two areas that retain high model uncertainty in the HS and LS cases, namely the Southern Ocean and the Arctic Ocean. By implication, the model spread in these regions is not dominated by climate-sensitivity-related processes. In the case of the Arctic, the uncertainty is still much smaller than the temperature change itself, whereas for the Southern Ocean, the uncertainty is as large as the signal (compare directly in Fig. 6). The latter is a complex region of dense water formation, seasonal ice cover and large air-sea fluxes, probably pushing the climate models to the limit of what they can deal with at the present time.
As it turns out, models with high ECS exhibit not only a faster global temperature rise, but also a higher global precipitation percentage change, in the twenty-first century compared to models with low ECS (Fig. 7; compare the pink to the green lines). We should not be surprised by this finding: many of the feedback mechanisms associated with climate sensitivity – such as the water vapour, cloud and lapse rate feedbacks – relate directly to precipitation as well. Nevertheless, we find Fig. 7 to provide an unusually clear signal of this relationship, especially for the highest emission scenario, RCP8.5.
The dependence upon ECS is reflected in the model uncertainty as well (Fig. 8). As in the case of temperature (Fig. 5), the model uncertainty is much smaller when the high-ECS and low-ECS models are considered separately than when it is calculated for all the models together (purple solid line in Fig. 8).
Figure 8 also shows that, as in the case of temperature, for precipitation it is unlikely that one would by chance pick a set of models with as little spread (model uncertainty) as the high-ECS and low-ECS model subsets. The high-ECS subset in Fig. 8 shows the smallest model uncertainty overall. The low-ECS subset is undercut by a couple of the random sets, but even this subset has an unusually low model uncertainty.
The geographical distribution of precipitation change (Fig. 9) is much more complex than that of temperature (Fig. 6), because some regions exhibit large reductions in precipitation while others exhibit enormous increases (coloured blue and red, respectively, in Fig. 9). The structure is generally the same in all cases and it follows the present climatological mean: the reduction in precipitation is projected to take place primarily over the world oceans and in subtropical bands, whereas the main increases in precipitation are over the subpolar and polar latitudes, as well as in the equatorial band. The model uncertainty is generally lowest in the mid-latitude bands and highest over the equatorial band/tropics (Fig. 9, bottom panel).
As in the case of the global average (Fig. 8), the model uncertainty for precipitation change drops for both the high-ECS and low-ECS subsets compared to the full model set (Fig. 9, bottom panel), although not as much and as clearly as was the case for temperature (compare to Fig. 6, bottom panel). The geographical distribution of the model uncertainty also differs from that of temperature: in the case of precipitation, the model uncertainty remains high in the tropical zone even after separating the high- and low-sensitivity models. This indicates that there remains significant model spread in the tropical zone which is not dominated by climate sensitivity-related processes. That there is high uncertainty regarding precipitation in the tropical zone is of course expected: most of the CMIP5 models tend to simulate a stronger, wider, and slightly northward-shifted ITCZ compared to observations (Stanfield et al., 2016), and the tropical zone was indeed pointed to by Flato et al. (2013) as the region where the CMIP5 models collectively have the largest systematic errors in precipitation. But what, if any, are the differences in the projections themselves?
The two subsets of precipitation model projections (columns 2 and 3 of Fig. 9) differ in that the former exhibits much larger contrasts in precipitation across the globe, regardless of scenario: regions of increase in precipitation (coloured red) generally exhibit a larger increase in the high-ECS subset than in the low-ECS subset or the full model set. And vice versa: regions of projected reduction in precipitation (coloured blue) exhibit larger reductions in the high-ECS subset of models.
Surface temperature, precipitation and wind speed are inherently linked through the equations governing atmospheric motion, namely the conservation of mass, momentum and energy. It is therefore to be expected that if temperature and precipitation projections are sensitive to CO2 emissions, wind speed should be as well. In Fig. 10, we show the global average wind speed change for the three scenarios RCP2.6, 4.5 and 8.5, and again, the high-ECS models are coloured pink and the low-ECS models are coloured green. For none of the scenarios, however, is climate sensitivity an indicator of the speed of change of global average wind speed through the twenty-first century.
This finding is reflected in the model uncertainty as well (Fig. 11), which shows that the high-ECS and low-ECS model subsets do not stand out as exhibiting particularly low model uncertainty in the global average, compared either to the full set of models (solid purple line) or to the 20 random subsets.
So could it be that the uncertainty in the projection of wind speed is dominated by some other mechanisms, some sort of ‘wind speed sensitivity’? To resolve that issue we make a new division of the model projections, depending on how fast the wind speed increases in the twenty-first century (Table 1, column 9 and Fig. 12).
Unfortunately, this latter division does not give a clearer picture of a physical difference. For instance: what is a ‘sensitive’ model for RCP8.5 is not so for RCP4.5 or RCP2.6 (Fig. 12). The finding is supported in Fig. 13: even for this latter division of high-sensitive and low-sensitive subsets of wind models, the model uncertainty can be easily undercut by random subsets.
In the following, we therefore go back to the original division of models based on their ECS (as used in Figs. 10 and 11) and investigate if there is any hint of difference in projections when looking at the geographical distribution of wind speed anomalies at the end of the twenty-first century.
As for precipitation (Fig. 9), the wind speed is projected to increase over large regions of the world by the end of the century whilst decreasing over others (contrast red and blue regions in Fig. 14). The largest model uncertainties are found over the Southern Ocean and over the oceans in general. In the case of wind speed, the model uncertainty is in many places of the same order of magnitude as the change itself. We have masked out those areas in the maps in Fig. 14.
Despite the lack of a convincing signal in the global average (Figs. 10 and 11), we find in the maps of Fig. 14 systematic differences between the wind speed projections from the high-ECS and low-ECS model subsets (contrast columns 2 and 3 in Fig. 14): the regions of projected increases in wind speed, such as the Southern Ocean, exhibit much larger increases in the subset of high-ECS models than in the subset of low-ECS models, and vice versa for the regions of projected reduction in wind speed (the blue regions are ‘bluer’ in the middle column). We take this finding to support our expectation, namely that the wind speed is sensitive to CO2 emissions and that this sensitivity is related to the sensitivity of precipitation and temperature.
Investigating the impacts of climate change typically involves three steps: (1) global climate projection, (2) regional downscaling to much higher horizontal resolution and (3) regional impact modelling (for example hydrology, agriculture, ecology or economy models). The first step is necessary because the emission problem is a global one, so to make a future projection one needs to consider the entire globe. The second step, which involves using the global projection as a boundary condition for the regional model, is necessary because the resolution of the former is too crude to capture what goes on in the region of interest, and the third step gives the information one is actually after, for instance to provide an impact assessment.
Although there are more than 20 Global Climate Models (GCMs) available (Table 1), the computational resources needed for running a Regional Climate Model (RCM) are so large that a downscaling experiment typically involves one RCM and a maximum of three to five GCMs (Mearns et al., 2012; Katzfey et al., 2016). This is inconsistent with the GCM perspective, where, as mentioned in the beginning, multi-model ensembles are thought to be the best basis for estimating projection uncertainties. According to that perspective, one should use all the GCMs as input to the RCMs. But computing power sets a limit.
The subset of GCMs to be used as boundary conditions for the RCMs is normally selected based on rankings of model performance in terms of the ability to simulate important climate phenomena (such as El Niño and the North Atlantic Oscillation) and to reproduce historical observations, such as temperature, precipitation and sea-level-pressure records (Watterson et al., 2013a, 2013b; Grose et al., 2014).
Here we argue that there is another way to select a subset of GCMs. We argue that the model uncertainty of the full set of GCMs is artificially high, because it encompasses two sets of models which represent the physics and feedbacks in two different ways, namely those with high and those with low sensitivity to CO2 emissions. We have seen this for temperature (Fig. 5) and for precipitation (Fig. 8). In both these cases, the model uncertainty for the two subsets drops much below the uncertainty of the full set, and also below what could be obtained by randomly picking subsets. The case of wind speed was not as clear (Fig. 11), but even there we find systematic differences between the wind speed projections from the high-ECS and low-ECS model subsets (Fig. 14), just as we did for temperature (Fig. 6) and precipitation (Fig. 9).
So what is the difference in physics between the high-sensitivity and low-sensitivity climate models? A recent study of climate sensitivity in 43 CMIP5 climate models (Sherwood et al., 2014) shows that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half the variance in climate sensitivity. This makes sense: we know, for instance, that operational forecast models are quite sensitive to the choice of turbulent mixing scheme within the lower troposphere, and that a realistic representation of the turbulent mixing is needed to accurately portray the vertical thermodynamic and kinematic profiles of the atmosphere (Cohen et al., 2015). Furthermore, Sherwood et al. (2014) find that observations are consistent with strong, not weak, convective mixing and are thereby consistent with high-sensitivity, not low-sensitivity, models. The mechanism they propose relates mixing to cloud feedback through the dehydration of the low-cloud layer at a rate that increases as the climate warms.
We therefore suggest that it is physically and statistically unsound to mix models from the two climate model families (high- and low-sensitivity models). But how to pick the family? One could pick a set of high-sensitivity climate models based on a belief that those models are more physically reliable, a belief that finds support, for instance, in the findings of Sherwood et al. (2014). Or one could pick a set of high-sensitivity climate models if one’s aim is to provide advice guided by the precautionary principle. The precautionary principle states that if an action or policy has a suspected risk of causing harm to the public or to the environment, then, in the absence of a scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking the action. This principle was included in the 1992 Rio Declaration on Environment and Development and in the United Nations Framework Convention on Climate Change, and has later been incorporated into many international agreements. In order to offer advice within the framework of the precautionary principle, one needs to make impact assessments based on the high-sensitivity projections (i.e. the worst case).
On the other hand, one could pick a set of low-sensitivity climate models if one wishes to address the questions: what is the least that can happen? What must we prepare for?
To summarise, we argue that it is physically and statistically unsound to mix climate models with high and low ECS, and that the subset chosen for any impact study should depend on the question one is trying to answer.
No potential conflict of interest was reported by the authors.
1. MRI-CGCM3 is the only model for which precipitation and temperature differ strongly in terms of sensitivity. Therefore we have, in the precipitation analysis, lumped this model together with all the high-sensitivity models.
The authors wish to thank Edward Hawkins and Michael Wehner for advice at the outset of the project and Arnaud Le Breton, Rene Castberg, Akos Buzinkay and Jonathan Niesel for support regarding the data analytics aspects of the study. We acknowledge the World Climate Research Programme’s Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modelling groups (listed in Table 1 of this paper) for producing and making available their model output. For CMIP, the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and leads the development of software infrastructure, in partnership with the Global Organization for Earth System Science Portals. Finally, we would like to acknowledge the generous support for this project provided by DNV GL and NIVA.
Cohen, A. E., Cavallo, S. M., Coniglio, M. C. and Brooks, H. E. 2015. A review of planetary boundary layer parameterization schemes and their sensitivity in simulating southeastern U.S. cold season severe weather environments. Wea. Forecast. 30, 591–612. https://doi.org/10.1175/WAF-D-14-00105.1
Flato, G., Marotzke, J., Abiodun, B., Braconnot, P., Chou, S. C. et al. 2013. Evaluation of climate models. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds. T. F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S. K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P. M. Midgley). Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 741–866.
Giorgi, F. and Mearns, L. O. 2002. Calculation of average, uncertainty range, and reliability of regional climate changes from AOGCM simulations via the ‘reliability ensemble averaging’ (REA) method. J. Clim. 15, 1141–1158. https://doi.org/10.1175/1520-0442(2002)015<1141:COAURA>2.0.CO;2
Grose, M. R., Brown, J. N., Narsey, S., Brown, J. R., Murphy, B. F. et al. 2014. Assessment of the CMIP5 global climate model simulations of the western tropical Pacific climate system and comparison to CMIP3. Int. J. Climatol. 34, 3382–3399. https://doi.org/10.1002/joc.3916
Hawkins, E. and Sutton, R. 2009. The potential to narrow uncertainty in regional climate predictions. Bull. Am. Meteorol. Soc. 90, 1095–1107. https://doi.org/10.1175/2009BAMS2607.1
Hawkins, E. and Sutton, R. 2011. The potential to narrow uncertainty in projections of regional precipitation change. Clim. Dyn. 37, 407–418. https://doi.org/10.1007/s00382-010-0810-6
IPCC. 2014. Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK.
Katzfey, J., Nguyen, K., McGregor, J., Hoffmann, P., Ramasamy, S. et al. 2016. High-resolution simulations for Vietnam – methodology and evaluation of current climate. J. Atmos. Sci. 52(2), 1–16.
Knutti, R., Allen, M. R., Friedlingstein, P., Gregory, J. M., Hegerl, G. C. et al. 2008. A review of uncertainties in global temperature projections over the twenty-first century. J. Clim. 21, 2651–2663. https://doi.org/10.1175/2007JCLI2119.1
Mearns, L. O., Arritt, R., Biner, S., Bukovsky, M. S., McGinnis, S. et al. 2012. The North American regional climate change assessment program: overview of phase I results. Bull. Am. Meteorol. Soc. 93, 1337–1362. https://doi.org/10.1175/BAMS-D-11-00223.1
Moss, R. H., Edmonds, J. A., Hibbard, K. A., Manning, M. R., Rose, S. K. et al. 2010. The next generation of scenarios for climate change research and assessment. Nature 463, 747–756. https://doi.org/10.1038/nature08823
Murphy, J. M., Sexton, D. M., Barnett, D. N., Jones, G. S., Webb, M. J. et al. 2004. Quantification of modelling uncertainties in a large ensemble of climate change simulations. Nature 430, 768–772. https://doi.org/10.1038/nature02771
Northrop, P. J. and Chandler, R. E. 2014. Quantifying sources of uncertainty in projections of future climate. J. Clim. 27, 8793–8808. https://doi.org/10.1175/JCLI-D-14-00265.1
Oppenheimer, M., Campos, M., Warren, R., Birkmann, J., Luber, G. et al. 2014. Emergent risks and key vulnerabilities. In: Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds. C. B. Field, V. R. Barros, D. J. Dokken, K. J. Mach, M. D. Mastrandrea, T. E. Bilir, M. Chatterjee, K. L. Ebi, Y. O. Estrada, R. C. Genova, B. Girma, E. S. Kissel, A. N. Levy, S. MacCracken, P. R. Mastrandrea and L. L. White). Cambridge University Press, Cambridge, UK, pp. 1039–1099.
Sherwood, S. C., Bony, S. and Dufresne, J.-L. 2014. Spread in model climate sensitivity traced to atmospheric convective mixing. Nature 505, 37–42. https://doi.org/10.1038/nature12829
Stanfield, R. E., Jiang, J. H., Dong, X., Xi, B., Su, H. et al. 2016. A quantitative assessment of precipitation associated with the ITCZ in the CMIP5 GCM simulations. Clim. Dyn. 47, 1863–1880.
Stephenson, D. B., Collins, M., Rougier, J. C. and Chandler, R. E. 2012. Statistical problems in the probabilistic prediction of climate change. Environmetrics 23, 364–372. https://doi.org/10.1002/env.2153
Taylor, K. E., Stouffer, R. J. and Meehl, G. A. 2012. An overview of CMIP5 and the experiment design. Bull. Am. Meteorol. Soc. 93, 485. https://doi.org/10.1175/BAMS-D-11-00094.1
Tebaldi, C., Smith, R. L., Nychka, D. and Mearns, L. O. 2005. Quantifying uncertainty in projections of regional climate change: a Bayesian approach to the analysis of multimodel ensembles. J. Clim. 18, 1524–1540. https://doi.org/10.1175/JCLI3363.1
Watterson, I. G., Bathols, J. and Heady, C. 2013a. What influences the skill of climate models over the continents? Bull. Am. Meteorol. Soc. 95(5), 689–700. https://doi.org/10.1175/BAMS-D-12-00136.1
Watterson, I. G., Hirst, A. C. and Rotstayn, L. D. 2013b. A skill-score based evaluation of simulated Australian climate. Aust. Meteor. Oceanogr. J. 63, 181–190. https://doi.org/10.22499/2.00000
Yip, S., Ferro, C. A., Stephenson, D. B. and Hawkins, E. 2011. A simple, coherent framework for partitioning uncertainty in climate predictions. J. Clim. 24, 4634–4643. https://doi.org/10.1175/2011JCLI4085.1
Ylhäisi, J. S., Garrè, L., Daron, J. and Räisänen, J. 2015. Quantifying sources of climate uncertainty to inform risk analysis for climate change decision-making. Local Environ. 20, 811–835. https://doi.org/10.1080/13549839.2013.874987