In principle, yes. Predictions in climate science are mostly conditional predictions, that is, scenarios. The emission of greenhouse gases, for example, is something you cannot predict, but it is a crucial parameter for the evolution of the climate. You can make an assumption about the level of emissions and then calculate how the climate evolves under that assumption.
But taking this into account, all the models yield quite similar results. There might be small differences: maybe the equilibrium temperature for a doubling of the CO2 concentration in the atmosphere is 3°C for one model and 4°C for another. But the general trends are reproduced by all models. Of course, that is something one could also be critical about. The scientists making these models all know each other more or less. And if somebody finds very unusual results, he might become shaky and say: well, maybe my model is not as good as the other 17 models that are around. Then he tries to adjust his model to agree with the other 17. So there is also a social process that contributes to the agreement between all the different climate models.
But in general, the question about the quality of these models is not easy to answer. There is a huge difference compared to weather prediction, where you can continuously validate and adjust your models, since within about three days you see whether your prediction was good or not. You can experiment much more; that is something you unfortunately cannot do with climate models.
At the Helmholtz Zentrum Geesthacht you are also investigating regional aspects of climate change. Is this not even more difficult? I could imagine that you can average much less when looking at smaller scales, compared to global scenarios.
I do not agree. Maybe it is a mathematically less well defined problem compared to the global climate. But what we do in practice is to prescribe the global-scale climate: the temperature, wind direction, etc. And then we determine what happens on the regional scale; that works quite well.
How large are regional fluctuations for a given global evolution of the climate? Is it possible that the climate in northern Europe develops very differently from that in the Mediterranean?
Temperatures are rising everywhere, and there are no huge differences between neighboring regions. But if you look at precipitation, regional differences can be significant, and one has to check carefully whether different models give the same results. We have recently conducted a survey for the Baltic Sea region, and we find that it gets more humid in the entire area, especially in the north and during winter. Around the Mediterranean Sea, however, we expect it to get drier. You see, here we have very different regional manifestations of the same emission scenario.
Is this a general feature—that predictions about precipitation are much harder to make than about temperature?
Absolutely. Precipitation is a quantity that you can only represent by parametrization. There are no raindrops falling in your climate model. It is rather the warming that would result from precipitation that you specify in the models. The model does not care about rain; it cares about the release of heat, which happens through condensation. Afterwards, you can convert this into precipitation, but inside the model there is no explicit rain. There are also models that determine the amount of water stored in clouds; this is a bit more explicit, but still not what we would call "rain" in our daily life.
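The conversion from condensation heating to an equivalent rain rate can be illustrated with a back-of-the-envelope calculation. The sketch below is not taken from any particular climate model; the constants and the example heating rate of 80 W/m² are illustrative assumptions:

```python
# Convert a column-integrated latent-heat release into an equivalent rain rate.
# Illustrative physical constants:
L_V = 2.5e6        # latent heat of vaporization of water, J/kg
RHO_W = 1000.0     # density of liquid water, kg/m^3
SECONDS_PER_DAY = 86400.0

def heating_to_rain(q_wm2):
    """Rain rate (mm/day) equivalent to a column latent heating q (W/m^2)."""
    condensed_flux = q_wm2 / L_V                 # kg of water per m^2 per s
    depth_per_s = condensed_flux / RHO_W         # m of liquid water per s
    return depth_per_s * 1000.0 * SECONDS_PER_DAY  # mm/day

# A heating of 80 W/m^2 (a plausible tropical value, assumed here)
# corresponds to roughly 2.76 mm of rain per day.
print(heating_to_rain(80.0))
```

So the model can carry only the heating term, and the rain rate follows by this fixed conversion afterwards.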
Extreme weather events have been widely discussed during the past years. I think of the heat waves in Russia in 2010 and in central Europe in 2003, or the severe winter of 2009/2010. Are these statistical fluctuations, or do you have to take complicated mechanisms into account to explain these events?
You can view the weather system as a random generator, and the climate as the statistics of this random process. If you average a certain variable, let us say temperature, over a certain period, then you will get one number for the last decade and a different number for the decade before. But the emergence of a difference does not imply that there is a reason for this change; it might just be a result of the randomness, smoke without fire. The weather is a complex, nonlinear system that also has some intrinsic inertia. It is quite possible that the weather system remembers: I was warm last summer, now I will be warm this summer again. But this does not mean that the long-term statistics are changing.
The question ''Is the climate changing?'' should rather read ''Are the long-term statistics different?''. Only if you need different statistics to explain the changes can you talk about climate change. One example: we have measured global temperatures for the past 126 years. The probability that the 13 hottest years of this period all occurred within the last 16 years (these are real numbers, by the way) is very small, about 1/1000. This is something we have calculated, taking into account the memory effect I mentioned earlier. That means that, with very high probability, the statistics have changed: the climate is getting warmer. From just two warm summers, you cannot draw such a conclusion.
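This kind of significance test can be sketched with a Monte Carlo simulation under a "no climate change" null hypothesis. The sketch below uses a simple AR(1) process to represent the memory effect; the actual published calculation used a more refined memory model, and the autocorrelation value here is only an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

N_YEARS, N_RECENT, N_HOT = 126, 16, 13  # the numbers from the interview
PHI = 0.7                               # assumed year-to-year memory (AR(1))
N_TRIALS = 20000

# Simulate many stationary AR(1) temperature records under the null
# hypothesis that the long-term statistics are not changing.
eps = rng.standard_normal((N_TRIALS, N_YEARS))
t = np.empty_like(eps)
t[:, 0] = eps[:, 0]
for year in range(1, N_YEARS):
    t[:, year] = PHI * t[:, year - 1] + eps[:, year]

# For each record, find the N_HOT hottest years and check whether
# they all fall within the last N_RECENT years.
hottest = np.argsort(t, axis=1)[:, -N_HOT:]
hits = np.all(hottest >= N_YEARS - N_RECENT, axis=1)
p = hits.mean()
print(f"estimated probability under the null hypothesis: {p:.5f}")
```

With this toy memory model the event is so improbable that most runs find no hits at all; the point is only that such a clustering of record years is extremely unlikely without a change in the statistics.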
A few years ago, there was a controversy concerning the so-called hockeystick graph, which illustrates the strong global warming of the last century compared to the past 1000 years. You have criticized the underlying mathematical methods that lead to this hockeystick. Could you summarize what the controversy was about?
The hockeystick graph has the following background: if you want to reconstruct the temperature record of the past 1000 years, you face the problem that there are no measured temperature records for most of this period. Therefore, people have analyzed data that is believed to contain climate information (so-called proxy data, e.g., data from tree rings) and tried to reconstruct temperatures from it with statistical methods. This is an appealing idea, and it resulted in the hockeystick.
We then tested the underlying statistical method by using data from a regular climate model, which was also run over a period of 1000 years. We used a simulation that gave us some significant changes in temperature, because we varied more than just the CO2 content of the atmosphere. As a result, we got a temperature record comparable to the real one. Then we created artificial proxy data by taking simulated local temperatures and adding some noise; at first we used white noise. We took this artificial climate data and applied the hockeystick method to it. Now you would expect to recover something comparable to the original data from our simulation. But what we got was a temperature record in which the low-frequency components were heavily damped, that is, the slow changes were not reproduced correctly.
We then did some additional checks to make sure we really investigated a situation equivalent to the original hockeystick problem, with the same result. So we concluded that the low-frequency components of the temperature change were not represented correctly. That is something you do not want when investigating this kind of problem, since it means the shaft of the hockeystick is not reproduced correctly.
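The pseudo-proxy experiment described above can be sketched in a few lines. This is a minimal illustration of the damping effect, not the published analysis: the "true" millennium temperature is a toy signal, the proxy is that signal plus white noise, and the reconstruction is a simple linear calibration on the last 100 "instrumental" years (all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1000-year "true" temperature: a slow (low-frequency) oscillation
# plus year-to-year weather noise.
years = np.arange(1000)
slow = 0.5 * np.sin(2 * np.pi * years / 500)       # slow climate variation
true_temp = slow + 0.3 * rng.standard_normal(1000)

# Artificial proxy: the local temperature plus white noise.
proxy = true_temp + 0.6 * rng.standard_normal(1000)

# Calibrate a linear regression on the last 100 years, then
# reconstruct the full millennium from the proxy alone.
cal = slice(900, 1000)
a, b = np.polyfit(proxy[cal], true_temp[cal], 1)
recon = a * proxy + b

# Compare the amplitude of the slow component by smoothing both
# series with a 100-year running mean.
kernel = np.ones(100) / 100
smooth_true = np.convolve(true_temp, kernel, mode="valid")
smooth_recon = np.convolve(recon, kernel, mode="valid")

damping = smooth_recon.std() / smooth_true.std()
print(f"low-frequency amplitude ratio (reconstruction / truth): {damping:.2f}")
```

The ratio comes out well below one: the regression transfers only part of the proxy variance back to temperature, so the slow variations, the shaft of the hockeystick, are systematically underestimated.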
In my eyes, the controversy about the hockeystick was not that significant from a scientific point of view. But the topic has great political potential, because the hockeystick has become an icon. Its purpose is to show, in a simple way and even to the most stubborn skeptics, that climate change is really taking place and is a threat. However, you certainly score an own goal if it turns out that the scientific method you used has serious methodological issues. But in science, I would say, this was one controversy among many others.