May 28, 2008

In a previous post (here) I reported that 150 of the world’s leading climate modellers had gathered in Reading to discuss their need for computers one thousand times more powerful than the present generation of supercomputers. Now I want to look at another couple of aspects of this conference.
Here is an extract from the conference's official website, under the heading Background:

The reality of climate change has been accepted by the world. Thanks to the sustained, comprehensive and objective assessments by the Intergovernmental Panel on Climate Change (IPCC), a consensus, with a high degree of confidence, has emerged in the scientific community that human activities are contributing to climate change. A systematic program of numerical experimentation with climate models during the past 40 years has played a crucial role in creating this scientific consensus, and in its acceptance by the world.

Now is it just me, or does this sound like someone whistling in the dark to keep their spirits up? Surely if what this paragraph says is true and incontrovertible, then there is no need to restate it to the very people who have brought about this situation? Presumably it is something that they are very well aware of; if there is a confident consensus, then these scientists, of all people, must be part of it. Or is it possible that some of them have not noticed and need reminding?

The Background statement continues:

The nations of the world have therefore begun, with great urgency, discussion about mitigation and adaptation to climate change, the inevitability of which is now beyond doubt. The climate models will, as in the past, play an important, and perhaps central, role in guiding the trillion dollar decisions that the peoples, governments and industries of the world will be making to cope with the consequences of changing climate.

The climate modeling community is therefore faced with a major new challenge: Is the current generation of climate models adequate to provide societies with accurate and reliable predictions of regional climate change, including the statistics of extreme events and high impact weather, which are required for global and local adaptation strategies? It is in this context that the World Climate Research Program (WCRP) and the World Weather Research Programme (WWRP) asked the WCRP Modelling Panel (WMP) and a small group of scientists to review the current state of modelling, and to suggest a strategy for seamless prediction of weather and climate from days to centuries for the benefit of and value to society.

This leaves us in no doubt about the importance of the issues being discussed at Reading: no less than the scientific basis for research that will shape the thinking of politicians and policy makers, thereby influencing ‘trillion dollar decisions’. It would seem reasonable to expect that, at this point, there might be some discussion of the confidence that can be attached to the predictions that the models provide. After all, if they influence ‘trillion dollar decisions’, it would be wise to consider whether they are likely to be right. But the modellers seem to be concerned with something else: creating models that provide higher definition, not greater reliability.

What then is the distinction between the predictive skill of models and the definition of the predictions that they make?

So far, climate models can be described as having low definition. They make predictions about global temperatures, and associated weather patterns such as increased precipitation or drought, over a long period, typically one hundred years ahead. It is no good asking them what will be happening in Manchester during the late 2060s, or even what average rainfall will be in south-east Africa in ten years' time. The present generation of models cannot do this because they do not deliver this level of detail. But from a practical point of view, this is precisely the kind of information that policy makers and business now want. The modellers say that without more powerful computers, they cannot provide it.

In part 1 of this post (here), we saw that the uncertainties surrounding the predictions made by climate models have not been reduced during the last two decades. This has a great deal to do with our less than complete knowledge and understanding of the immensely complex processes that drive the earth's climate. Are the modellers really talking about massive investment in a new generation of supercomputers so that they can provide more detailed predictions, but without reducing their level of uncertainty? It would seem that they are.

Here is what Prof. Julia Slingo, Director of Climate Research, Centre for Atmospheric Science at Reading University told the BBC:

“We’ve reached the end of the road of being able to improve models significantly so we can provide the sort of information that policymakers and business require,” she told BBC News.

“In terms of computing power, it’s proving totally inadequate. With climate models we know how to make them much better to provide much more information at the local level… we know how to do that, but we don’t have the computing power to deliver it.”

Professor Slingo said several hundred million pounds of investment were needed.

[My emphasis]

But what about those fundamental uncertainties? Here is a little more from the same BBC report:

Knowing the unknowns

One trouble is that as some climate uncertainties are resolved, new uncertainties are uncovered.

Some modellers are now warning that feedback mechanisms in the natural environment which either accelerate or mitigate warming may be even more difficult to predict than previously assumed.

Research suggests the feedbacks may be very different on different timescales and in response to different drivers of climate change.

Just last week, preliminary research at the Leibniz Institute of Marine Sciences in Kiel, Germany, suggested that natural variations in sea temperatures will cancel out the decade’s 0.3C global average rise predicted by the IPCC, before emissions start to warm the Earth again after 2015.

IPCC authors said this was not incompatible with their models; but the German research provoked some sceptics to ask whether models could be believed at all.

Of course, if the uncertainties surrounding the models are large enough, then nothing will be incompatible with them. Here is another modeller’s somewhat gnomic response to this problem:

“If we ask models the questions they are capable of answering, they answer them reliably,” counters Professor Jim Kinter from the Center for Ocean-Land-Atmosphere Studies near Washington DC, who is attending the Reading meeting.

“If we ask the questions they’re not capable of answering, we get unreliable answers.”

What does all this mean in practical terms? This is something else that Prof. Slingo told the BBC:

Climate models have been enormously successful in alerting us to the dangers of our activities in emitting greenhouse gases and so forth.

Listen Again: BBC, Today, 06/05/2008, Item at 08:51

The use of the word ‘successful’, coming from a scientist, is rather strange in this context. Climate models make predictions about what will be happening to our planet far in the future. Therefore it will be many decades before we know whether they are ‘successful’. All we can be certain of at the moment is that the terrifying output from these models has been very successful in persuading people that anthropogenic global warming is a problem.

This conference might be seen as an esoteric brainstorming exercise that is relevant only to those directly involved in this highly specialised branch of research, but looking at the list of participants we find that this is not the case. Some of the very biggest names in climate science have come to Reading, including Rajendra Pachauri, Chairman of the IPCC, Michel Jarraud, Secretary-General of the World Meteorological Organization, and Kevin Trenberth, one of the coordinating lead authors on the IPCC's Fourth Assessment Report, which was published last year. The term summit is not a misnomer.

What conclusions should we draw from what we know about this summit? Probably only that there is very considerable confusion in the world of climate modelling at the moment, and this calls into question the relevance of model predictions when considering ‘trillion dollar decisions’. Climate modelling may not need vastly expensive new hardware so much as a period of reflection during which confidence in predictions about future climate can be vigorously re-assessed.

2 Responses to “World Climate Modelling Summit at Reading: Part 2”

  1. The Met Office brings doom to a place near you « Watts Up With That? Says:

    [...] this is rather unexpected. In May last year, I posted here and here about a world summit of climate modellers that took place at Reading University. On the agenda was [...]

  2. Bob_FJ Says:

    Tony N,
    I’ve recently linked your “Reading Summit” parts 1) and 2) over at Chris Colose’s site.
    It’s an interesting visit there because although Chris is clearly in the alarmist camp, he allows sceptical posts that in my case have not been permitted at RC.

    The correspondent Patrick 027 (below) appears to be some kind of physicist that writes great reams of argumentative academic stuff at his favourite place; RC, and elsewhere, but this time it was goofy, concerning the Trenberth cartoon.

    Here we go:
    Patrick 027, Reur Dec. 30, 09 @ 1:25 pm
    I’m trying to catch-up during my summer holiday season in Oz:
    You wrote in part :

    Actually, I don’t know offhand how they figured it out, but
    1. A lot of computer models, and much else, boils down to math. Do you believe that 1 + 1 = 2, or do you think it might be 1 + 1 = 2.01334523 ?
    2. If you see two people standing next to each other, you might be able to accurately gauge how much taller one is than the other even if you don’t know how tall either one is.

    You [Patrick] did not answer my ask of you to clarify what you were trying to convey, but anyway, I agree in your 1) that computers (etc) do accurately predict that 1 + 1 = 2, but so what?.
    Computer models depend on many parameter inputs, and when those inputs are of poor understanding, as has been openly declared by the IPCC etc, then the outputs cannot be sensibly accepted as a substitute for ABSENT empirical data.
    As for your 2), I have no idea how to respond….. What do you mean?
    BTW; Gavin Schmidt wrote this over at RC; in part, not long ago, my emphasis added:

    “Alert readers will have noticed the fewer-than-normal postings over the last couple of weeks. This is related mostly to pressures associated with real work (remember that we do have day jobs). In my case, it is because of the preparations for the next IPCC assessment and the need for our group to have a functioning and reasonably realistic climate model with which to start the new round of simulations. These all need to be up and running very quickly if we are going to make the early 2010 deadlines…”

    May I also refer you to part 1 and 2 of a commentary on a talkfest on improved climate modelling in Reading (England)
    So, if the earlier models are sufficient, in the absence of supporting empirical data, to make massive policy and financial decisions etc, why is it recently necessary to “improve” those models????????????????????????????????????????
    Is that a complicated question?
    Feel free to Google on what I’ve written!

    The site has been closed down for about a week, so a response is not given.
    Max has been having fun there too, more than me!
