(Sublime: producing an overwhelming sense of awe or other high emotion through being vast or grand)

Commenters on this blog have posed the question: why are social scientists not taking more interest in the climate science community and the mechanisms by which its activities over the last decade have come to influence public opinion and public policy? What follows may give a clue as to why they might be inclined to steer well clear of this area of research.

Myanna Lahsen is an anthropologist who has studied a new tribe that has emerged as part of the wider community of climate scientists: climate modellers. Over a period of 6 years (1994-2000), while she was based at NCAR (National Center for Atmospheric Research, Boulder, Colorado), a major base for modellers, she travelled widely to conduct over 100 interviews with atmospheric scientists, 15 of whom were climate modellers. Her findings were published in Social Studies of Science in 2005 as ‘Seductive Simulations? Uncertainty Distribution Around Climate Models’.

The purpose of Lahsen’s paper was to consider the distribution of certainty around General Circulation Models (GCMs), with particular reference to Donald MacKenzie’s concept of the ‘certainty trough’, and to propose a more multidimensional and dynamic conceptualisation of how certainty is distributed around technology. That, thankfully, is not the subject of this post, interesting though it is once you work out what she is talking about.

At the heart of her research is the question of whether modellers are just too close to what they are doing to assess the accuracy of the simulations of Earth’s climate that they create. She suggests that atmospheric scientists who are at some distance from this field of research may be better able to do so, which is not so surprising; but she also reveals a darker side of the culture that climate modellers are part of, which is much more disturbing.

It is the ethnographic observations that emerged from her extensive fieldwork that I want to concentrate on here, and I’m going to quote from her paper without adding much by way of comment. In light of the Climategate emails, these extracts speak for themselves.

At the end of the introductory section of the paper, under the heading ‘The Epistemology1 of Models’, we find two quotations that give a hint of what is in store:

The biggest problem with models is the fact that they are made by humans who tend to shape or use their models in ways that mirror their own notion of what a desirable outcome would be. (John Firor [1998], Senior Research Associate and former Director of NCAR, Boulder, CO, USA)

In climate modeling, nearly everybody cheats a little. (Kerr, 1994) [Writing in Science]

Page 898

Lahsen goes on to highlight concerns that modellers are too close to their work, and have too much invested in it professionally and socially, to feel able to speak freely to those outside their field about uncertainties. Furthermore, she presents evidence that the nature of their work makes it difficult for them to distinguish between their simulations and the real world as represented by scientific observations. She does not suggest that modellers are systemically dishonest, but she does present a very compelling case that the pressures on them make it very difficult to be entirely truthful when assessing their work or describing their findings to others. A model may take decades to develop, meaning that if its output is perceived to be flawed, the developers’ entire professional reputations and careers may be on the line. Here is an example of the way in which external influence can be applied:

… modelers – keen to preserve the authority of their models – deliberately present and encourage interpretations of models as ‘truth machines’ when speaking to external audiences.

Like scientists in other fields, modelers might ‘oversell’ their products (as acknowledged in quotes presented below), because of funding considerations. In a highly competitive funding environment they have an interest in presenting the models in a positive light. The centrality of climate models in politics can also shape how modelers and others who promote concern about climate change present them. GCMs figure centrally in heated political controversies about the reality of climate change, the impact of human activities, and competing policy options. In this context, caveats, qualifications, and other acknowledgements of model limitations can become fodder for the antienvironmental movement

Speaking to a full room of NCAR scientists in 1994, a prominent scientist and frequent governmental advisor on global change warned an audience mostly made up of atmospheric scientists to be cautious about public expressions of reservations about the models. ‘Choose carefully your adjectives to describe the models’, he said, ‘Confidence or lack of confidence in the models is the deciding factor in whether or not there will be policy response on behalf of climate change.’ While such explicit and public references to the political impact of the science are rare (I only encountered this one instance during my fieldwork), a similar lesson is communicated in more informal and subtle ways. It is also impressed on many who witness fellow atmospheric scientists being subjected to what they perceive as unfair attacks in media-driven public relations campaigns …

[The speaker was Dan Albritton, although for some reason Lahsen fails to identify him]

Page 905

Albritton’s assumption that his audience will have a commitment to political action as well as scientific research is chilling. Elsewhere, in a comment on Roger Pielke Jr’s blog, Lahsen says:

To my knowledge, there are no studies of climate modeller’s preferences related to climate change. On the basis of my observations, I would say that climate modellers, as a whole, are environmentally concerned. However, few of them involve themselves in any active way with policy issues. As a whole they are much more interested in the science than in the associated policy consequences.

This would seem rather naive. If your field of research is a driving force behind public policy, what need is there to involve yourself in political activism? And the link between public policy initiatives and the funding that fuels the science which underpins them is obvious.

But it is the confusion that Lahsen documents between the simulated worlds of the modellers’ ‘truth machines’ and the real world of scientific observations that is most troubling.

During modelers’ presentations to fellow atmospheric scientists that I attended during my years at NCAR, I regularly saw confusion arise in the audience because it was unclear whether overhead charts and figures were based on observations or simulations.

I realized that I was not alone in my confusion when scientists in the audience stopped the presenter to ask for clarification as to whether the overhead figures were based on observations or model extrapolations. The presenter specified that the figures were based on models, and then continued his presentation.

… modelers may have been strategic when alternating between speaking of their models as heuristics and presenting them as ‘truth machines’. However, the oscillation also may reflect how some modelers think and feel about their models at particular moments when they fail to maintain sufficient critical distance. In interviews, modelers indicated that they have to be continually mindful to maintain critical distance from their own models. For example:

Interviewer: Do modelers come to think of their models as reality?

Modeler A: Yes! Yes. You have to constantly be careful about that [laughs].

He described how it happens that modelers can come to forget known and potential errors:

You spend a lot of time working on something, and you are really trying to do the best job you can of simulating what happens in the real world. It is easy to get caught up in it; you start to believe that what happens in your model must be what happens in the real world. And often that is not true . . . The danger is that you begin to lose some objectivity on the response of the model [and] begin to believe that the model really works like the real world . . . then you begin to take too seriously how it responds to a change in forcing. Going back to trace gases, CO2 models – or an ozone change in the stratosphere: if you really believe your model is so wonderful, then the danger is that it’s very tempting to believe that the way it responds to a change in forcing must be right. [Emphasis added]

This modeler articulates that the persuasive power of the simulations can affect the very process of creating them: modelers are at times tempted to ‘get caught up in’ their own creations and to ‘start to believe’ them, to the point of losing awareness about potential inaccuracies. Erroneous assumptions and questionable interpretations of model accuracy can, in turn, be sustained by the difficulty of validating the models in the absence of consistent and independent data sets.

Page 908

The highly specialised nature of modellers’ work can cut them off from the rest of the research community.

Critical distance is also difficult to maintain when scientists spend the vast majority of their time producing and studying simulations, rather than less mediated empirical representations. Noting that he and fellow modelers spend 90% of their time studying simulations rather than empirical evidence, a modeler explained the difficulty of distinguishing a model from nature:

Modeler B: Well, just in the words that you use. You start referring to your simulated ocean as ‘the ocean’ – you know, ‘the ocean gets warm’, ‘the ocean gets salty’. And you don’t really mean the ocean, you mean your modeled ocean. Yeah! If you step away from your model you realize ‘this is just my model’. But [because we spend 90% of our time studying our models] there is a tendency to forget that just because your model says x, y, or z doesn’t mean that that’s going to happen in the real world.

This modeler suggests that modelers may talk about their models in ways they don’t really mean (‘you don’t really mean the ocean, you mean your modeled ocean . . . ‘). However, in the sentence that immediately follows, he implies that modelers sometimes actually come to think about their models as truth-machines (they ‘forget to step away from their models to realize that it is just a model’; they have a ‘tendency to forget’).

Page 909

And again:

The following interview extract arguably reflects such an instance of forgetting. This modeler had sought to model the effects of the possible ‘surprise’ event of a change in the ocean’s climate-maintaining thermohaline circulation. On the basis of his simulation he concluded that the widely theorized change in the ocean’s circulation due to warmer global temperatures is not likely to be catastrophic:

Modeler C: One of the surprises that people have been worrying about is whether the thermohaline circulation of the oceans [the big pump that could change the Gulf Stream] shuts off . . . . If the models are correct, the effect even of something like that is not as catastrophic as what most people think. You have to do something really nasty to [seriously perturb the system] . . . The reality is, it really is an ocean thing, it is basically an ocean phenomenon; it really doesn’t touch land very much.

Interviewer: But wouldn’t it change the Gulf Stream and therefore . . . ?

Modeler C: Yes, look right here [shows me the model output, which looks like a map]. If the model is right. [Slight pause] I put that caveat in at the beginning [laughs]. But right there is the picture.

Modeler C struggles to not speak of his model as a ‘truth machine’, but lapses before catching himself when presented with a question. Though he starts off indicating that the models could be wrong (‘if the models are correct’), he soon treats the model as a truth machine, referring to the modeled phenomena as reliable predictions of future reality (‘The reality is, it really is an ocean thing’). Catching himself, he then refers back to the caveat, followed by a little laugh.

Page 909

As simulations become more detailed, they may become even more seductive for their creators, but the likelihood of model output becoming more distorted increases too:

The increasingly realistic appearance of ever-more comprehensive simulations may increase the temptation to think of them as ‘truthmachines’. As Shackley et al. (1998) have noted, there is a tendency among modelers to give greater credence to models the more comprehensive and detailed they are, a tendency they identify as cultural in nature because of a common trade-off between comprehensiveness and error range. As GCMs incorporate ever more details – even things such as dust and vegetation – the models increasingly appear like the real world, but the addition of each variable increases the error range (Syukuro Manabe, quoted in Revkin, 2001).

Page 910

Then there are the usual pressures of ‘office politics’ and conflict with those who are not members of the modelling tribe:

Modelers’ professional and emotional investment in their own models reduces their inclination and ability to maintain critical awareness about the uncertainties and inaccuracies of their own simulations. Shackley and Wynne suggest that modelers talk freely about their models’ shortcomings among themselves. However, the following researcher identified a general reluctance on the part of modelers to discuss their models’ weaknesses, even among themselves:

Modeler E: What I try to do [when presenting my model results to other modelers] . . . is that I say ‘this is what is wrong in my model, and I think this is the same in all models, and I think it is because of the way we’re resolving the equations, that we have these systematic problems’. And it often gets you in trouble with the other people doing the modeling. But it rarely gets you in trouble with people who are interested in the real world. They are much more receptive to that, typically, than they are if you say ‘here, this is my result, doesn’t this look like the real world?’ And ‘this looks like the real world, and everything is wonderful’.

Interviewer: Why do you get in trouble with modelers with that?

Modeler E: Because . . . when I present it, I say ‘this model is at least as good as everyone else’s, and these problems are there and they are in everybody else’s models too.’ They often don’t like that, even if I am not singling out a particular model, which I have done on occasion [smiles] – not necessarily as being worse than mine but as having the same flaws. Not when they are trying to sell some point of view and I go in there saying ‘Hey, this is where I go wrong [in my model], and you are doing the same thing! And you can’t be doing any better than that because I know that this isn’t a coding error problem’ [laughs].

This modeler confirmed statements about modelers I encountered in other contexts, who also identified a disinclination on the part of modelers to highlight, discuss, and sometimes even perceive problems in their model output.

Page 911

And:

Modeler E noted that theoreticians and empiricists often criticize modelers for claiming unwarranted levels of accuracy, to the point of conflating their models with reality. My fieldwork revealed that such criticisms circulate widely among atmospheric scientists. Sometimes such criticisms portray modelers as motivated by a need to secure funding for their research, but they also suggest that modelers have genuine difficulty with gaining critical distance from their models’ strengths and weaknesses. Moreover, they criticize modelers for lacking empirical understanding of how the atmosphere works (‘Modelers don’t know anything about the atmosphere’).

Page 913

And finally:

Empiricists complain that model developers often freeze others out and tend to be resistant to critical input. At least at the time of my fieldwork, close users and potential close users at NCAR (mostly synoptically trained meteorologists who would like to have a chance to validate the models) complained that modelers had a ‘fortress mentality’. In the words of one such user I interviewed, the model developers had ‘built themselves into a shell into which external ideas do not enter’. His criticism suggests that users who were more removed from the sites of GCM development sometimes have knowledge of model limitations that modelers themselves are unwilling, and perhaps unable, to countenance. A model developer acknowledged this tendency and explained it as follows:

Modeler F: There will always be a tension there. Look at it this way: I spent ten years building a model and then somebody will come in and say ‘well, that’s wrong and that’s wrong and that’s wrong’. Well, fine! And then they say, ‘well, fix it!’ [And my response to them is:] ‘you fix it! [laughs] I mean, if I knew how to fix it, I would have done it right in the first place!!! [Laughs] And what is more, I don’t like you anymore – all you do is you come in and tell me what is wrong with my model! Go away!’ [laughter]. I mean, this is the field.

Modeler F’s acknowledgement of inaccuracies in his model is implied in his comment that he would have improved the model if he knew how.

Page 916

Lahsen’s use of the term ‘fortress mentality’ is worryingly reminiscent of Judith Curry’s initial response to the Climategate emails, which was posted at Climate Audit soon after they appeared on the net. She refers to the politicisation of climate science, the circling of wagons in the face of criticism, professional egos out of control, and issues surrounding scientific integrity, all as features of tribalism in the climate community.

Lahsen’s paper appeared five years ago and received some attention in the blogosphere before becoming a back number. Had it been published at the time of Climategate, it is very likely that her findings would have become part of the story. That they did not makes the spotlight she has shone on an apparently rather murky aspect of climate research no less important. Predictions about future climate have been one of the major factors in promoting climate alarmism, and it is important that there should be a general understanding of the professional environment and culture in which this research originates. In this context, Lahsen’s findings are very disturbing indeed.

This post started with the question: why are social scientists not taking more interest in the climate science community and the mechanisms by which its activities over the last decade have come to influence public opinion and public policy? Given the peer pressure that exists in academia to conform to the orthodoxies surrounding global warming and not to ask awkward questions, I can well understand why most social scientists would run a mile from embarking on such research. Professionally, it would be a very dumb move.

Geoff Chambers has (jokingly, I think) accused Harmless Sky of becoming the Antiques Roadshow of bloggery, with a succession of posts based on trawling through old files. I make no apology for this. As a wise old historian once said:

The further backward you can look, the further forward you can see.

Blogging about the climate debate tends to be very much focused on what is happening now, but it would seem to me that there is good reason to look back occasionally and consider how we have arrived at the present situation. Lahsen’s paper takes on a whole new significance when re-considered in the light of Climategate.

Next week the InterAcademy Council report on the procedures used by the IPCC when compiling its assessment reports will be published. I wonder whether it will have considered whether it is a good idea to have modellers as lead authors, and as the review editor, on the chapter of the next report dealing with climate models.


1 Epistemology: the theory of knowledge, especially with regard to the distinction between justified belief and opinion.

34 Responses to “Sublime Simulations: climate models, uncertainty, and the ‘fortress mentality’”

  1. Decline and Fall

    OT, but you’ve reminded me of Chris Mullin’s recently published work of the same name. His description of Tony Blair as the sort of person who tells several women that he loves them, thus flattering them hugely until they meet and compare notes, amused me greatly.

    I’m all for a trip to Bali. Cancun would do, though.. :-)

  2. geoffchambers

    Thanks for lead co-author offer (along with TonyB).

    Your point:

    Joe Public was often better at predicting the future than the “experts”

    is confirmed in Nassim Taleb’s “The Black Swan”.

    Under the heading “epistemic arrogance” Taleb points out the rather obvious observation that in making predictions it is not so important what an expert knows, it is much more important what he does not know. Epistemic arrogance causes the “expert” to overestimate what he knows and underestimate uncertainty. In effect, his problem is “that he does not know what he does not know”.

    The “non-expert” may only be able to rely on his common sense, but he knows full well “what he does not know” and therefore has the advantage that he is not blinded by “epistemic arrogance”. As a result, his “prediction” will more often than not be closer to the mark than that of the “expert”.

    Max

  3. Off to Cancun! Ai Chihuahua! I can already taste those (taxpayer funded) Margaritas!

  4. A little serendipity – I was checking the “six impossible things before breakfast” quotation from Alice in Wonderland, as it seemed mildly apposite, and stumbled on this lesser-known one from the same source. Substitute AGW for Arithmetic and it seems very prescient:

    “The different branches of Arithmetic — Ambition, Distraction, Uglification, and Derision.”

    :-)

  5. JamesP #29
    Alice in Warmerland! That’s the way to go! I’ve got George (“It’s already too late!”) Monbiot down for the White Rabbit and Lovelock as the Old man sitting on a Gate. Auditioning for the rest of the cast tomorrow.
    When you think – the cards painting the red roses white (hide the decline), the Walrus lamenting species loss as he eats the last oyster – it hardly needs rewriting.

  6. “Alice in Warmerland”

    I think that might have legs, Geoff! I must re-read the original properly – I’m not sure I really appreciated it before.

  7. ‘There’s Mann’s Nature trick for you!’

    ‘I don’t know what you mean by “Mann’s Nature trick”,’ Alice said.

    Humpty Dumpty smiled contemptuously. ‘Of course you don’t – till I tell you. I meant “there’s a nice knock-down argument for you!”‘

    ‘But “Mann’s Nature trick” doesn’t mean “a nice knock-down argument”,’ Alice objected.

    ‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’

    ‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’

    ‘The question is,’ said Humpty Dumpty, ‘which is to be master – that’s all.’

    Alice was too much puzzled to say anything.

  8. Climate Science through the looking glass – GCMs, alternative energy and the precautionary principle:

    ‘I see you’re admiring my little box,’ the Knight said in a friendly tone. ‘It’s my own invention— to keep clothes and sandwiches in. You see I carry it upside-down, so that the rain can’t get in.’

    ‘But the things can get out,’ Alice gently remarked. ‘Do you know the lid’s open?’

    ‘I didn’t know it,’ the Knight said, a shade of vexation passing over his face. ‘Then all the things must have fallen out! And the box is no use without them.’ He unfastened it as he spoke, and was just going to throw it into the bushes, when a sudden thought seemed to strike him, and he hung it carefully on a tree. ‘Can you guess why I did that?’ he said to Alice.

    Alice shook her head.

    ‘In hopes some bees may make a nest in it— then I should get the honey.’

    ‘But you’ve got a bee-hive— or something like one— fastened to the saddle,’ said Alice.

    ‘Yes, it’s a very good bee-hive,’ the Knight said in a discontented tone, ‘one of the best kind. But not a single bee has come near it yet. And the other thing is a mouse-trap. I suppose the mice keep the bees out— or the bees keep the mice out, I don’t know which.’

    ‘I was wondering what the mouse-trap was for,’ said Alice. ‘It isn’t very likely there would be any mice on the horse’s back.’

    ‘Not very likely, perhaps,’ said the Knight: ‘but if they do come, I don’t choose to have them running all about.’

    ‘You see,’ he went on after a pause, ‘it’s as well to be provided for everything.’

    [Excellent! In a way it would be nice to have a thread here headed ‘Rajendra (or Al) in Wonderland’, but really they are playing the part of the Knight not Alice. Who might Alice be? Surely not Steve McIntyre. TonyN]

  9. As Alex and David demonstrate above, there’s no need to rewrite the text at all – simply add explanatory footnotes to the original. (Lewis Carroll was a lecturer in mathematical logic after all, and an expert in paradoxes, so it’s not surprising that he should have already foreseen the warmist arguments).
    “Algore in Warmerland” sounds nice, but, as TonyN suggests, the whole point is that Alice is the natural sceptic, surrounded by clever adults who know better.
