The latest climate models shouldn't raise future warming projections




A noteworthy storyline in climate science over the past year or two has been the effort to make sense of the latest generation of climate models. In the run-up to the upcoming Intergovernmental Panel on Climate Change (IPCC) report, climate modeling groups submitted their simulations to the latest model database, known as CMIP6. These submissions showed that updates to a number of models had made them more sensitive to greenhouse gases, meaning they project greater amounts of future warming.

In addition to diagnosing the behavior responsible for that change, climate scientists have also been grappling with the implications. Should we be alarmed by the results, or are they outliers? Climate models are just one of many tools for estimating Earth's true "climate sensitivity," so their behavior must be weighed in the full context of all the other evidence.

For a variety of reasons, research is converging on the idea that the highest projections are outliers; these hottest models seem to be running too hot. This poses a challenge for the scientists working on the next IPCC report: how much influence should these outliers have on projections of future warming?

Weighting game

One way to represent the uncertainty range in projections is to simply average all available model simulations, with error bars spanning the highest and lowest simulations. This is an agnostic solution that does not attempt to judge the quality of each model. Another approach is to use a weighted average, scoring each model in some way to generate what is hopefully a more realistic projection. That way, for example, adding several versions of one model that gets drastically different results wouldn't unduly skew the overall result.
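The effect of down-weighting near-duplicate models can be sketched with a toy example (the sensitivities and weights here are made up for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical climate sensitivities (°C) from five models;
# the last two are variants of the same underlying model.
sensitivities = np.array([1.8, 2.0, 2.9, 3.0, 3.0])

# A simple average lets the duplicated model count twice.
simple_mean = sensitivities.mean()

# Halving the weight of the two variants makes them count roughly as one model.
weights = np.array([1.0, 1.0, 1.0, 0.5, 0.5])
weighted_mean = np.average(sensitivities, weights=weights)

print(simple_mean, weighted_mean)  # the weighted mean is pulled down
```

The duplicated high-sensitivity model drags the simple average upward; the weighted average removes that double counting.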

A new study led by Lukas Brunner at ETH Zurich uses an established method to weight the new models' simulations based on how accurately they match observations from recent decades, as well as on how closely each model correlates with the others.

While having many different climate models provides useful independent checks, the models are not entirely independent of one another. Some models are descended from others, some share components, and some share methods. Untangling this isn't as simple as checking the history of a GitHub fork. Instead, the researchers analyze spatial patterns of atmospheric temperature and pressure to calculate the similarity between models. The more similar two models are, the more methods or code they are assumed to share, so each gets a little less influence on the overall average. This minimizes the effect of double-counting models that are not truly independent of each other.

In general, the more important factor in the weighting was the models' ability to recreate the past. Models must, of course, demonstrate skill at matching real-world data before you can trust their projections of the future. All models are tested this way during their development, but they don't end up with identical fits to the past, not least because the past is complicated. So models can be compared based on regional temperatures, rainfall, atmospheric pressure, and so on.

The researchers used five key properties from global datasets covering 1980 to 2014. Each model was evaluated based on how accurately it matched temperatures, temperature trends, temperature variability, atmospheric pressures, and atmospheric pressure variability.
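A performance score aggregated across diagnostics like these could be computed along the following lines (a minimal sketch; the normalization choice here is illustrative and may differ from the study's):

```python
import numpy as np

def performance_distance(model_fields, obs_fields):
    """Overall distance of one model from observations, aggregated across
    several diagnostics (e.g. temperature, temperature trend, pressure).
    Each diagnostic's RMSE is normalized by the observed field's spread so
    quantities with different units can be combined."""
    dists = []
    for model, obs in zip(model_fields, obs_fields):
        model, obs = np.asarray(model), np.asarray(obs)
        rmse = np.sqrt(np.mean((model - obs) ** 2))
        dists.append(rmse / np.std(obs))
    return float(np.mean(dists))

# A model that reproduces observations exactly has distance zero;
# adding a bias to one field increases the distance.
obs_temp = np.array([14.0, 15.1, 13.8, 14.6])
obs_pres = np.array([1012.0, 1009.5, 1013.2, 1011.1])
perfect = performance_distance([obs_temp, obs_pres], [obs_temp, obs_pres])
biased = performance_distance([obs_temp + 0.5, obs_pres], [obs_temp, obs_pres])
print(perfect, biased)
```

Distances like this can then feed into the performance term of the weighting.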

[Figure: How the models were weighted. A simple average would have given each model the same weight (dashed line).]

Don’t be so sensitive

This process ends up reducing the weight of the models with the highest climate sensitivity, since they don't match past observations as well. For example, using a sensitivity metric called "transient climate response," the unweighted average is 2.0°C, with error bars spanning 1.6-2.5°C. After weighting the models, the average drops slightly to 1.9°C and the error bars narrow to 1.6-2.2°C (a range that aligns quite well with recent estimates of the true value).

[Figure: The gray line/shading shows the unweighted average projection of the models; weighting slightly reduces the average and the upper bound (colored lines/shading).]

Apply the weights to future warming projections and something similar happens. In a high-emissions scenario, projected 21st-century warming goes from 4.1°C (3.1-4.9°C) to 3.7°C (3.1-4.6°C). In the low-emissions scenario, the expected average warming of 1.1°C (0.7-1.6°C) drops to 1.0°C (0.7-1.4°C) after weighting.

The upshot is that there is no indication that the state of scientific knowledge has changed. As a result of ongoing development, particularly attempts to improve the realism of simulated clouds, some models have become more sensitive to increases in greenhouse gases, so they project stronger future warming. But this doesn't seem to have made them better representations of Earth's climate as a whole. Rather than worrying that the physics of climate change is even worse than we thought, we can focus on the urgent need to eliminate greenhouse gas emissions.

Earth System Dynamics, 2020. DOI: 10.5194/esd-11-995-2020 (About DOIs).


