(Sublime: producing an overwhelming sense of awe or other high emotion through being vast or grand)
Commenters on this blog have posed the question: why are social scientists not taking more interest in the climate science community and the mechanisms by which its activities over the last decade have come to influence public opinion and public policy? What follows may give a clue as to why they would be inclined to steer well clear of this area of research.
Myanna Lahsen is an anthropologist who has studied a new tribe that has emerged as part of the wider community of climate scientists: climate modellers. Over a period of six years (1994-2000), while she was based at NCAR (National Center for Atmospheric Research, Boulder, Colorado), a major base for modellers, she travelled widely to conduct over 100 interviews with atmospheric scientists, 15 of whom were climate modellers. Her findings were published in Social Studies of Science in 2005 as ‘Seductive Simulations? Uncertainty Distribution Around Climate Models’.
The purpose of Lahsen’s paper was to consider the distribution of certainty around General Circulation Models (GCMs), with particular reference to Donald MacKenzie’s concept of the ‘certainty trough’, and to propose a more multidimensional and dynamic conceptualisation of how certainty is distributed around technology. That, thankfully, is not the subject of this post, interesting though it is once you work out what she is talking about.
At the heart of her research is the question of whether modellers are simply too close to what they are doing to assess the accuracy of the simulations of the Earth’s climate that they create. She suggests that atmospheric scientists who are at some distance from this field of research may be better able to do so, which is not so surprising, but she also reveals a darker side of the culture that climate modellers are part of, which is much more disturbing.
It is the ethnographic observations that emerged from her extensive fieldwork that I want to concentrate on here, and I’m going to quote from her paper without adding much by way of comment. In light of the Climategate emails, these extracts speak for themselves.
At the end of the introductory section of the paper, and under the heading The Epistemology [1] of Models, we find two quotations that give a hint of what is in store:
The biggest problem with models is the fact that they are made by humans who tend to shape or use their models in ways that mirror their own notion of what a desirable outcome would be. (John Firor [1998], Senior Research Associate and former Director of NCAR, Boulder, CO, USA)
In climate modeling, nearly everybody cheats a little. (Kerr, 1994) [Writing in Science]
Page 898
Lahsen goes on to highlight concerns that modellers are too close to their work, and have too much invested in it professionally and socially, to feel able to speak freely to those outside their field about uncertainties. Furthermore, she presents evidence that the nature of their work makes it difficult for them to distinguish between their simulations and the real world as represented by scientific observations. She does not suggest that modellers are systemically dishonest, but she does present a very compelling case that the pressures on them make it very difficult to be entirely truthful when assessing their work or describing their findings to others. A model may take decades to develop, meaning that, if its output is perceived to be flawed, the developers’ entire professional reputations and careers may be on the line. Here is an example of the way in which external influence can be applied:
… modelers – keen to preserve the authority of their models – deliberately present and encourage interpretations of models as ‘truth machines’ when speaking to external audiences.
…
Like scientists in other fields, modelers might ‘oversell’ their products (as acknowledged in quotes presented below), because of funding considerations. In a highly competitive funding environment they have an interest in presenting the models in a positive light. The centrality of climate models in politics can also shape how modelers and others who promote concern about climate change present them. GCMs figure centrally in heated political controversies about the reality of climate change, the impact of human activities, and competing policy options. In this context, caveats, qualifications, and other acknowledgements of model limitations can become fodder for the antienvironmental movement
…
Speaking to a full room of NCAR scientists in 1994, a prominent scientist and frequent governmental advisor on global change warned an audience mostly made up of atmospheric scientists to be cautious about public expressions of reservations about the models. ‘Choose carefully your adjectives to describe the models’, he said, ‘Confidence or lack of confidence in the models is the deciding factor in whether or not there will be policy response on behalf of climate change.’ While such explicit and public references to the political impact of the science are rare (I only encountered this one instance during my fieldwork), a similar lesson is communicated in more informal and subtle ways. It is also impressed on many who witness fellow atmospheric scientists being subjected to what they perceive as unfair attacks in media-driven public relations campaigns …
[The speaker was Dan Albritton, although for some reason Lahsen fails to identify him]
Page 905
Albritton’s assumption that his audience will have a commitment to political action as well as scientific research is chilling. Elsewhere, in a comment on Roger Pielke Jr’s blog, Lahsen says:
To my knowledge, there are no studies of climate modeller’s preferences related to climate change. On the basis of my observations, I would say that climate modellers, as a whole, are environmentally concerned. However, few of them involve themselves in any active way with policy issues. As a whole they are much more interested in the science than in the associated policy consequences.
This would seem rather naive. If your field of research is a driving force of public policy, then what need is there to involve yourself in political activism? And the link between public policy initiatives and the funding which fuels the science that underpins them is obvious.
But it is the confusion that Lahsen documents between the simulated worlds of the modellers’ ‘truth machines’ and the real world of scientific observations that is most troubling.
During modelers’ presentations to fellow atmospheric scientists that I attended during my years at NCAR, I regularly saw confusion arise in the audience because it was unclear whether overhead charts and figures were based on observations or simulations.
…
I realized that I was not alone in my confusion when scientists in the audience stopped the presenter to ask for clarification as to whether the overhead figures were based on observations or model extrapolations. The presenter specified that the figures were based on models, and then continued his presentation.
…
… modelers may have been strategic when alternating between speaking of their models as heuristics and presenting them as ‘truth machines’. However, the oscillation also may reflect how some modelers think and feel about their models at particular moments when they fail to maintain sufficient critical distance. In interviews, modelers indicated that they have to be continually mindful to maintain critical distance from their own models. For example:
Interviewer: Do modelers come to think of their models as reality?
Modeler A: Yes! Yes. You have to constantly be careful about that [laughs].
He described how it happens that modelers can come to forget known and potential errors:
You spend a lot of time working on something, and you are really trying to do the best job you can of simulating what happens in the real world. It is easy to get caught up in it; you start to believe that what happens in your model must be what happens in the real world. And often that is not true . . . The danger is that you begin to lose some objectivity on the response of the model [and] begin to believe that the model really works like the real world . . . then you begin to take too seriously how it responds to a change in forcing. Going back to trace gases, CO2 models – or an ozone change in the stratosphere: if you really believe your model is so wonderful, then the danger is that it’s very tempting to believe that the way it responds to a change in forcing must be right. [Emphasis added]
This modeler articulates that the persuasive power of the simulations can affect the very process of creating them: modelers are at times tempted to ‘get caught up in’ their own creations and to ‘start to believe’ them, to the point of losing awareness about potential inaccuracies. Erroneous assumptions and questionable interpretations of model accuracy can, in turn, be sustained by the difficulty of validating the models in the absence of consistent and independent data sets.
Page 908
The highly specialised nature of modellers’ work can cut them off from the rest of the research community.
Critical distance is also difficult to maintain when scientists spend the vast majority of their time producing and studying simulations, rather than less mediated empirical representations. Noting that he and fellow modelers spend 90% of their time studying simulations rather than empirical evidence, a modeler explained the difficulty of distinguishing a model from nature:
Modeler B: Well, just in the words that you use. You start referring to your simulated ocean as ‘the ocean’ – you know, ‘the ocean gets warm’, ‘the ocean gets salty’. And you don’t really mean the ocean, you mean your modeled ocean. Yeah! If you step away from your model you realize ‘this is just my model’. But [because we spend 90% of our time studying our models] there is a tendency to forget that just because your model says x, y, or z doesn’t mean that that’s going to happen in the real world.
This modeler suggests that modelers may talk about their models in ways they don’t really mean (‘you don’t really mean the ocean, you mean your modeled ocean . . . ‘). However, in the sentence that immediately follows, he implies that modelers sometimes actually come to think about their models as truth-machines (they ‘forget to step away from their models to realize that it is just a model’; they have a ‘tendency to forget’).
Page 909
And again:
The following interview extract arguably reflects such an instance of forgetting. This modeler had sought to model the effects of the possible ‘surprise’ event of a change in the ocean’s climate-maintaining thermohaline circulation. On the basis of his simulation he concluded that the widely theorized change in the ocean’s circulation due to warmer global temperatures is not likely to be catastrophic:
Modeler C: One of the surprises that people have been worrying about is whether the thermohaline circulation of the oceans [the big pump that could change the Gulf Stream] shuts off . . . . If the models are correct, the effect even of something like that is not as catastrophic as what most people think. You have to do something really nasty to [seriously perturb the system] . . . The reality is, it really is an ocean thing, it is basically an ocean phenomenon; it really doesn’t touch land very much.
Interviewer: But wouldn’t it change the Gulf Stream and therefore . . . ?
Modeler C: Yes, look right here [shows me the model output, which looks like a map]. If the model is right. [Slight pause] I put that caveat in at the beginning [laughs]. But right there is the picture.
Modeler C struggles to not speak of his model as a ‘truth machine’, but lapses before catching himself when presented with a question. Though he starts off indicating that the models could be wrong (‘if the models are correct’), he soon treats the model as a truth machine, referring to the modeled phenomena as reliable predictions of future reality (‘The reality is, it really is an ocean thing’). Catching himself, he then refers back to the caveat, followed by a little laugh.
Page 909
As simulations become more detailed, they may become even more seductive for their creators, but the likelihood of model output becoming more distorted increases too:
The increasingly realistic appearance of ever-more comprehensive simulations may increase the temptation to think of them as ‘truth machines’. As Shackley et al. (1998) have noted, there is a tendency among modelers to give greater credence to models the more comprehensive and detailed they are, a tendency they identify as cultural in nature because of a common trade-off between comprehensiveness and error range. As GCMs incorporate ever more details – even things such as dust and vegetation – the models increasingly appear like the real world, but the addition of each variable increases the error range (Syukuro Manabe, quoted in Revkin, 2001).
Page 910
Then there are the usual pressures of ‘office politics’ and conflict with those who are not members of the modelling tribe:
Modelers’ professional and emotional investment in their own models reduces their inclination and ability to maintain critical awareness about the uncertainties and inaccuracies of their own simulations. Shackley and Wynne suggest that modelers talk freely about their models’ shortcomings among themselves. However, the following researcher identified a general reluctance on the part of modelers to discuss their models’ weaknesses, even among themselves:
Modeler E: What I try to do [when presenting my model results to other modelers] . . . is that I say ‘this is what is wrong in my model, and I think this is the same in all models, and I think it is because of the way we’re resolving the equations, that we have these systematic problems’. And it often gets you in trouble with the other people doing the modeling. But it rarely gets you in trouble with people who are interested in the real world. They are much more receptive to that, typically, than they are if you say ‘here, this is my result, doesn’t this look like the real world?’ And ‘this looks like the real world, and everything is wonderful’.
Interviewer: Why do you get in trouble with modelers with that?
Modeler E: Because . . . when I present it, I say ‘this model is at least as good as everyone else’s, and these problems are there and they are in everybody else’s models too.’ They often don’t like that, even if I am not singling out a particular model, which I have done on occasion [smiles] – not necessarily as being worse than mine but as having the same flaws. Not when they are trying to sell some point of view and I go in there saying ‘Hey, this is where I go wrong [in my model], and you are doing the same thing! And you can’t be doing any better than that because I know that this isn’t a coding error problem’ [laughs].
This modeler confirmed statements about modelers I encountered in other contexts, who also identified a disinclination on the part of modelers to highlight, discuss, and sometimes even perceive problems in their model output.
Page 911
And:
Modeler E noted that theoreticians and empiricists often criticize modelers for claiming unwarranted levels of accuracy, to the point of conflating their models with reality. My fieldwork revealed that such criticisms circulate widely among atmospheric scientists. Sometimes such criticisms portray modelers as motivated by a need to secure funding for their research, but they also suggest that modelers have genuine difficulty with gaining critical distance from their models’ strengths and weaknesses. Moreover, they criticize modelers for lacking empirical understanding of how the atmosphere works (‘Modelers don’t know anything about the atmosphere’).
Page 913
And finally:
Empiricists complain that model developers often freeze others out and tend to be resistant to critical input. At least at the time of my fieldwork, close users and potential close users at NCAR (mostly synoptically trained meteorologists who would like to have a chance to validate the models) complained that modelers had a ‘fortress mentality’. In the words of one such user I interviewed, the model developers had ‘built themselves into a shell into which external ideas do not enter’. His criticism suggests that users who were more removed from the sites of GCM development sometimes have knowledge of model limitations that modelers themselves are unwilling, and perhaps unable, to countenance. A model developer acknowledged this tendency and explained it as follows:
Modeler F: There will always be a tension there. Look at it this way: I spent ten years building a model and then somebody will come in and say ‘well, that’s wrong and that’s wrong and that’s wrong’. Well, fine! And then they say, ‘well, fix it!’ [And my response to them is:] ‘you fix it! [laughs] I mean, if I knew how to fix it, I would have done it right in the first place!!! [Laughs] And what is more, I don’t like you anymore – all you do is you come in and tell me what is wrong with my model! Go away!’ [laughter]. I mean, this is the field.
Modeler F’s acknowledgement of inaccuracies in his model is implied in his comment that he would have improved the model if he knew how.
Page 916
Lahsen’s use of the term ‘fortress mentality’ is worryingly reminiscent of Judith Curry’s initial response to the Climategate emails, which was posted at Climate Audit soon after they appeared on the net. She refers to the politicisation of climate science, the circling of wagons in the face of criticism, professional egos out of control, and issues surrounding scientific integrity, all as features of tribalism in the climate community.
Lahsen’s paper appeared five years ago and received some attention in the blogosphere before becoming a back number. Had it been published at the time of Climategate, it is very likely that her findings would have become part of the story. That they did not makes the spotlight she has shone on an apparently rather murky aspect of climate research no less important. Predictions about future climate have been one of the major factors in promoting climate alarmism, and it is important that there should be a general understanding of the professional environment and culture in which this research originates. In this context, Lahsen’s findings are very disturbing indeed.
This post started with the question: why are social scientists not taking more interest in the climate science community and the mechanisms by which its activities over the last decade have come to influence public opinion and public policy? Given the peer pressure that exists in academia to conform to the orthodoxies surrounding global warming, and not to ask awkward questions, I can well understand why most social scientists would run a mile from embarking on such research. Professionally, it would be a very dumb move.
Geoff Chambers has (jokingly, I think) accused Harmless Sky of becoming the Antiques Roadshow of bloggery, with a succession of posts based on trawling through old files. I make no apology for this. As a wise old historian once said:
The further backward you can look, the further forward you can see.
Blogging about the climate debate tends to be very much focused on what is happening now, but it would seem to me that there is good reason to look back occasionally and consider how we have arrived at the present situation. Lahsen’s paper takes on a whole new significance when re-considered in the light of Climategate.
Next week the InterAcademy Council’s report on the procedures used by the IPCC when compiling its assessment reports will be published. I wonder if it will have considered whether it is a good idea to have modellers as lead authors, and as the review editor, on the chapter in the next report dealing with climate models?
[1] Epistemology: the theory of knowledge, especially with regard to the distinction between justified belief and opinion.
This looks like an honest, though hardly world-shattering, piece of work. Many thanks for digging it out. (My comment about the Antiques Roadshow was indeed said in jest, but also in admiration; these nuggets, like TonyB’s work on the historical temperature records, bring to climate research a depth and seriousness missing from the work of the Climategate crowd).
One comment: you say:
Lead author Kevin Trenberth is on record as saying there are no predictions in the IPCC reports, only “projections”. If the models can’t produce predictions, then someone should be asking hard questions about what the taxpayer is getting for his money. If the politicians won’t ask these questions (because they’ve effectively handed over responsibility to the “experts”) then it’s up to ordinary citizens.
Trenberth’s remark reveals the double language of climate scientists, and the fact that they are at some level aware of the dangers revealed in Myanna Lahsen’s paper. On the one hand, professional standards require that they play down the certainty of the output of their models. On the other hand, these same models are the sole source of the alarmism which is propelling the political programme of CO2 reduction. This is the logical contradiction at the heart of the global warming mess, a contradiction which is independent of the quality of the science.
Geoff:
I only used one thread of argument from what is really a pretty big paper. It is well worth having a look at the rest, particularly the section near the beginning on the limitations that models have as predictive tools. Bear in mind that although Lahsen’s paper was published five years ago, submissions for AR4 closed in the summer of 2006, soon afterwards and ahead of publication in 2007, so the situation could not have changed significantly before the IPCC report appeared.
Somewhere, I think I have a printout of the IPCC’s instruction to authors banning the use of the term “prediction” in reports and requiring that all references should in future be to “projections”. This was in response to pressure from Dennis Grey(?) who was at that time a reviewer and had raised questions about confidence in the predictive skill of models.
Surely that makes Lahsen’s paper very important indeed. I forgot to work Crichton’s famous remark to a congressional committee into the post: “predictions are not facts”.
TonyN
Thanks for digging up and presenting another “nugget” from a few years ago.
Myanna Lahsen’s study of climate modelers at NCAR is as pertinent today as it was when it was written.
IPCC has made the major error of underplaying “uncertainty”, in particular in its more widely read “Summary for Policymakers” report. There is much talk of “higher confidence levels”, “progress in understanding of human and natural drivers of climate change”, “improvements in understanding of processes and their simulation in models” and “more extensive exploration of uncertainty ranges”, all intended to show that there is very little “uncertainty” left in the conclusions reached.
It is clear that one must consider the IPCC report designated for “policymakers” as a political “sales pitch” for the premise that AGW has caused a significant portion of the recently observed warming and, if left unchecked, could well represent a serious potential threat to our climate and our society.
We all know that “sales pitches” do not accentuate uncertainties; they rather underplay (or even ignore) them.
We have also seen recently how IPCC “sales pitches” have been distorted and exaggerated to get the main message across (Himalayan glaciers, African crop losses, Amazon forest disappearance, etc.).
Lahsen’s study shows that it is not only the political “sales pitch” problem which has caused the gross underplaying of uncertainty in the more alarming IPCC conclusions and predictions for the future (or “projections”, as is the preferred designation).
The problem has also emerged from the climate modelers themselves, i.e. from the “science” supporting the “sales pitch”, for the several underlying reasons outlined by Lahsen.
It is much more disturbing when the “science” understates the uncertainties or exaggerates the consequences than when this is done in a political “summary report”, which is specifically designated for “policymakers”, where one would expect such distortions.
As the late Stephen Schneider, a scientist deeply involved in the IPCC assessments of uncertainties, has been quoted as saying:

So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have… Each of us has to decide what the right balance is between being effective and being honest.
The two quotations at the very top of your post carry essentially this same message:

The biggest problem with models is the fact that they are made by humans who tend to shape or use their models in ways that mirror their own notion of what a desirable outcome would be.

and

In climate modeling, nearly everybody cheats a little.
Thanks for another incisive post, Tony.
Max
TonyN
A sideline.
Crichton’s “predictions are not facts” is a very concise statement.
Yogi Berra also had two good ones:

It’s tough to make predictions, especially about the future.

and (when things did not turn out quite as expected):

The future ain’t what it used to be.
The word “projection” is apparently favored by IPCC.
This is defined by the on-line dictionary:
Other words (synonyms) listed:
I prefer short words of Anglo-Saxon or Scandinavian origin, such as “guess”, over the more nuanced and complicated Norman imports.
guess:
Or another one that might fit well for IPCC is “hunch”:
But I can see why IPCC prefers “projection” (even though it means exactly the same thing as “prediction”): it sounds sooooo much more “scientific” and leaves the prognosticator completely off the hook if it turns out to be wrong!
Max
… and right on cue, as an example of what manacker is talking about in #4, Delingpole reports this exchange from the BBC’s “Uncertain Climate”:
Myanna Lahsen’s study of degrees of uncertainty among climate modellers is, frankly, disappointing. Her main interest is not in climate models, but in a particular sociological model of professional uncertainty developed by Donald MacKenzie in a study of uncertainty among producers and users of anti-ballistic missile technology.
Not surprisingly, MacKenzie found that “users” (presumably government procurement officials) tended to have more blind faith in the product, the developers of the product were more critical, but most critical of all were developers of rival systems. You could probably produce a similar result in an examination of the sweets industry, with Mars food technologists casting a critical but generally favourable eye on the Bounty bar, end consumers (children) being uncritically favourable, and Cadbury’s experts being the most critical of all. Whether it would be thought worthwhile to dignify this finding with the title of the “certainty trough” is doubtful.
Not surprisingly, climate modelling is different, since modellers, far from being involved in cutthroat competition to produce the “best” prediction of the weather a hundred years hence, tend to tweak their models in order to remain within the same band of predictions as their rivals. While Lahsen is gently critical of the tendency towards confirmation bias to be found within the modelling world, as indicated by TonyN’s quotes, the real surprise is in the “graph” she produces at the end, as her alternative to the MacKenzie model. In it, she shows those “alienated from institutions/committed to different technology” (that’s us sceptics) as having “high uncertainty levels” about the models (too true); the users as having medium uncertainty; and the producers as having ZERO uncertainty.
As I said, it’s not a real graph, since there is no continuity between the groups, but the finding that the modellers have complete faith in their product is oddly extreme, given the quotes in the paper.
bibliography:
Lahsen, Myanna (2005), ‘Seductive Simulations? Uncertainty Distribution Around Climate Models’, Social Studies of Science.
http://en.wikipedia.org/wiki/List_of_chocolate_bar_brands
Geoff, I’m wondering whether it might be possible for climate models to be assessed in such a way that objectivity is built in, so to speak. Your chocolate analogy is a good one, but could an easier analogy be something like cola, which is more a “samey” kind of product? When organisations like the Consumers’ Association (Which?) want to carry out a taste comparison test between various brands of cola, I think it’s done as a double blind trial, so diehard Pepsi fans, for example, won’t be influenced by bias.
Given that few human observers will not have some degree of bias (and it’s still only the early 21st century, so we cannot yet ask benevolent aliens or advanced Artificial Intelligences to help us out), could there be some way of deliberately making it possible for climate models (plus the assumptions that go into them, and the code out of which they are made) to be assessed in such a way that the assessors have no idea whose model this is, or even what output it is meant to be providing?
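To make the idea concrete, here is a minimal sketch in Python of how such a blinded assessment might work (the model names, the numbers and the RMSE scoring are all invented for illustration, not anyone’s actual procedure): the outputs are stripped of identifying information, scored against observations under anonymous labels, and only then is the key opened.

import random

# Hypothetical blind comparison of model output against observations.
# All names and numbers below are made up for illustration.

observations = [14.1, 14.3, 14.2, 14.5]   # e.g. observed global mean temperatures

model_runs = {
    "Model A": [14.0, 14.4, 14.3, 14.6],
    "Model B": [13.8, 14.0, 14.9, 15.2],
    "Model C": [14.2, 14.2, 14.1, 14.4],
}

def rmse(pred, obs):
    # Root-mean-square error between a model run and the observations.
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

# Step 1: strip identities, so assessors see only anonymous labels.
blinded = list(model_runs.items())
random.shuffle(blinded)
key = {f"run-{i}": name for i, (name, _) in enumerate(blinded)}

# Step 2: score each anonymous run against the observations.
scores = {f"run-{i}": rmse(series, observations)
          for i, (_, series) in enumerate(blinded)}

# Step 3: only after scoring is complete is the key opened.
for label, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{label} ({key[label]}): RMSE = {score:.3f}")

A real exercise would obviously need agreed observational datasets and metrics fixed in advance, but the principle is the same as the cola test: score first, unblind afterwards.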
Great link, by the way – I hadn’t realised there were so many kinds of choc bars in the world. (Interesting names too, e.g., the “Moody” bar from Syria.) This is a bit of a distraction actually, as I’m trying to reduce my “chocolate footprint” (not very successfully.)
Alex,
It would be nice if you could do double-blind tests on climate models, wouldn’t it? Since everything from chocolate bars to subatomic particles comes in different flavours, why not climate models?
My disappointment with the Lahsen study is that her conclusions really add nothing to what you can glean from the comments of the modellers which TonyN quotes. They’re a bunch of bright and basically honest guys who may sometimes get a bit carried away with enthusiasm for what they’re doing. Then in her final graphic summary, there’s this half-hidden conclusion – that they have zero uncertainty about what they’re doing – in other words, no self-critical faculty.
Her failure to spell out this conclusion seems to me to be a typical case of an anthropologist “going native”. She wasn’t going to abuse the hospitality of these intelligent and important people whom she’d been privileged to associate with by criticising them too overtly. Hence her accusation of what is effectively massive confirmation bias is couched in terms of revision of a competing anthropologist’s model. Interesting though.
IMHO the value of Lahsen’s paper, for those outside her very specialised field of anthropology, lies in the glimpses it provides of the mentality and modus operandi of a very influential cadre of climate scientists. The Climategate emails served a similar purpose where the CRU and the IPCC process are concerned. It is tantalising to wonder what might be revealed in other branches of climate science and its interface with politics if similar material were available.
Lahsen’s paper and the Climategate emails may do no more than confirm what sceptics have long suspected, but that confirmation – coming from sources uncontaminated by partisan views – is very important.
I listened to Harrabin’s second instalment on R4 this morning, but I can’t decide whether he’s had a real change of heart or is just covering his backside. He mentioned that his editor didn’t want him to include an item, which sounded like an attempt to appeal to listening sceptics, but it didn’t work for me, I’m afraid.
TonyN #9 Agreed, the value of Lahsen’s study lies in the attitudes of modellers as revealed in the quotes. But the study itself adds nothing of interest that wasn’t in the bits you quoted, since the author is careful not to interpret the quotes in a way which might be critical of the quality of the “science”.
You say:

It is tantalising to wonder what might be revealed in other branches of climate science and its interface with politics if similar material were available.
Very true. And thanks to the furore over the CRU emails, you can be sure it never will be. Think of the quotes we all know and love: Phil Jones’ “Why should I give you the data…?” Sir John Houghton’s quote about disasters, and the anonymous “We have to get rid of the Mediaeval Warm Period”. No-one will ever say such things in public again. And we sceptics will sound more and more like a scratched record.
I’ve done this kind of micro-sociology in the distant past, interviewing Army Officers and Prison Officers among others. Of course you cherish the quotes that lift the curtain a little on the official reality, but I don’t see it as a way of understanding the big picture which you sketched in your feedback diagram. That would require the kind of old-fashioned sociologist or historian who could sum up an entire society; or else the kind of investigative journalist prepared to sift through old cuttings for years on end. Remember Paul Foot, or – dare I say it – George Monbiot?
JamesP
Your post #10 intrigued me.
I listened to Harrabin’s second installment (a couple of times), as well.
He is very cleverly trying to play the “objective observer and reporter” role, but (if one listens closely) it is obvious that he has not changed his views at all.
He starts by telling us that 3 inquiries have shown that Climategate was only a “storm in a teacup” and that the leaks have not in any way detracted from the validity of the “science” behind the IPCC hypothesis that humans are changing the climate and that this could have serious consequences.
When a “skeptic” questions whether or not IPCC “defines the mainstream in climate science”, Harrabin takes the stand that IPCC does do so.
Both Lindzen and McIntyre are allowed their short “sound bites”, as is Judith Curry, who makes an interesting observation that there is “too much expert judgment and not enough actual observations” in climate science today.
To Harrabin’s question whether the developed world should carry some responsibility for problems occurring in the undeveloped world due to climate change, Monckton makes the astute comment that if there is a problem (of which there is no evidence) we should help Bangladesh build their sea defenses to respond to the problem.
Tony Blair tells us that it is hard to say “this will be the exact warming” but that the judgment is that this is a serious problem that needs action because the risks are too big.
Hoskins is allowed to tell us that climate models are not perfect, but they are the best we have, and they should be improved and tested rather than simply thrown out.
In his interview with Vicky Pope the point is made that the “uncertainties” do not detract from the threat, but instead that they actually mean that things could be even worse than predicted.
Then Harrabin concludes that the “science” may not be conclusive and the uncertainties may still be high, but that the “risks” are too great not to start taking preventative action now.
This all sounds like pretty much the same story Harrabin has been trumpeting all along, with a minor concession that the climate science may not be all that well defined or certain as once thought, but the political conclusions are right nevertheless, due to the high risks and potential impacts.
James, I do not think leopards change their spots, and Harrabin remains a leopard.
He is still “selling” the “dangerous AGW” pitch.
Max
Following on from my #11:
Since no world-renowned social historian has come forward to write the definitive history of the global warming cult, I see I shall have to do it myself. Here’s my Short History of the Next 500 Years. There’s something in it for Brute, and something for Peter. Please feel free to join in.
The European powers, having failed in their attempt to conquer the underdeveloped world, handed Africa and Asia back to their inhabitants, while continuing to exploit the material wealth of the “third world” in order to improve the living conditions of their own citizens.
A mixture of guilt for past crimes and worry about the future then caused the ruling classes to adopt a curious belief system which – like Christianity in the Roman Empire – originated among the poorest and least educated members of society, but soon spread to the upper classes. This cult held that production of an almost undetectable gas was leading to the destruction of the planet. The cult spread swiftly among the semi-educated, and was quickly adopted by rulers who saw the possibilities for self-enrichment and control of the economy. Though the idea never caught on among the masses (since it was in contradiction with all empirical evidence), its adoption by all the main political parties and media outlets ensured its success in the English-speaking world, where it still holds sway today, five centuries later.
Meanwhile, China and India, whose undemocratic, or imperfectly democratic, systems ensured the creation of a self-selecting élite of intelligent and far-seeing leaders, swiftly took the reins of leadership of the third world, reducing Europe and North America to the role of providers of entertainment and mass culture for their new masters, and a new era of universal peace and happiness began.
MacKenzie has got his graph upside down.
The Y-axis is “uncertainty”, which is an ill-defined negative measure. Like “darkness” or “lightweightness”, it’s going to be difficult to build a scale for it. It’s more normal to measure something that goes up as you move up the paper on a graph – even if it’s a bad thing you are measuring, like crime or divorce.
And the X-axis: as Geoff says, it’s not a real axis with any continuity. As you move right across the paper you are not becoming “more of something”; instead you are jumping from one category of person to another. Discrete segments.
At the end of the paper Lahsen suggests a “graph” showing that the modellers are too close to the models to be objective and that groups less involved in the models put less trust in them. She contrasts this with MacKenzie’s graph. This is a meaningless comparison – he was looking at a different situation with different groups looking at a different question at a different time.
The only useful part of the study is the interviews – the modellers condemn themselves out of their own mouths.
Maybe the whole paper is a recursive exercise – describing theorists that are too close to their own theories to see that they have zero value?
Jack Hughes #14
I agree absolutely that her graph is nonsense. I didn’t want to put it so cruelly because she has performed a useful service by actually talking (and listening critically) to these Doctor Strangeloves who are shaping our future – something no journalist has bothered to do.
Geoff:
Try thinking of what you refer to as ‘the big picture which you sketched in your feedback diagram’ as a mosaic in which Lahsen’s revelations about the culture in the modelling community are but one tessera.
My problem is that I have the kind of records, going back to 2004, that you are referring to. They cover several tables, some shelf space, and are beginning to colonise the floor. And that’s just the stuff that I’ve printed out. My hard disk might be described as elephant country. At the moment I’m trying to work out what to do about it, hence the recent posts based on old documents.
Max:
You say:

He is very cleverly trying to play the “objective observer and reporter” role, but (if one listens closely) it is obvious that he has not changed his views at all.
If you read the BBC guidelines, they make it very clear that the audience should not be aware of a reporter’s views. If you have pursued a complaint all the way to the BBC Trust’s Editorial Standards Committee – a yearlong task – only to find that the considerable resources of that very wealthy organisation are entirely directed towards defending the indefensible, then you begin to realise that the culture within the BBC makes any objective self-criticism impossible.
Rod Liddle, one-time producer of the Today programme, had a very good piece about this in the Sunday Times this week, headlined ‘They’re not biased at the BBC – they just know better than us’. Unfortunately it’s tucked away behind Rupert Murdoch’s pay-wall. I wouldn’t dream of infringing his copyright by publishing such a thing on my blog, but I see no harm in sharing a press cutting with an acquaintance.
TonyN
Haven’t read the BBC guidelines, but Harrabin’s views appear fairly transparent if one listens closely to this broadcast.
But I will admit that, in contrast to pre-Climategate reports, he tries very cleverly to present the image of impartial observer and reporter of the facts.
He just doesn’t pull it off very well, as far as I am concerned.
Max
Jack Hughes
Believe the French call this symptom “la déformation professionnelle”, but I prefer the German expression, “Berufskrankheit”.
Max
geoffchambers
Your definitive history of the global warming cult as seen over the next 500 years (13) is brilliant.
But I would add a “Case B” scenario:
Though the idea never caught on among the masses (since it was in contradiction with all empirical evidence), its adoption by all the main political parties and media outlets ensured its success in the English-speaking world, where it held sway until the onset of the climatic period now known as the Early Millennial Deep Freeze, which began in the late 21st century, but only caused major global cooling and upheaval in the 22nd and 23rd centuries. Meanwhile, throughout both the cooling and earlier warming phases, China and India, whose undemocratic, or imperfectly democratic, systems ensured the creation of a self-selecting élite of intelligent and far-seeing leaders,…etc.
Max
geoff
A second minor suggestion:
In your definitive history you wrote:

Though the idea never caught on among the masses (since it was in contradiction with all empirical evidence)…

I would suggest that the “masses” may have been less aware of “empirical evidence” than they were of plain “common sense”, so that this could be lightly modified to:

Though the idea never caught on among the masses (since it was in contradiction with plain common sense)…
Sorry for niggling…
Max
Geoff
The main trouble with your definitive history is that it is much too short to have any real credibility.
You need to get a grant and pad it out to at least 750 pages. I would suggest you also need at least 30 graphs of varying degrees of incomprehensibility. Also, you have not used the terms ‘plausible’ or ‘robust’, which is a major failing.
One last thing: the audience who will read this will have no idea of the meaning of ‘empirical’. I suggest you change this for something else. Perhaps ‘plausible and robust computer-generated evidence’ might hit all the targets at one go.
I put myself at your disposal to write a fifty page executive summary of your improved and expanded definitive history.
tonyb
Max (12)
I agree. It’s unfortunate for him that, thanks to his job, his former lack of curiosity is all on the record, so I can sympathise with his predicament to an extent, but I would have a lot more respect for him if he simply announced that he had been misled (and had misled us) over the years, instead of this backing slowly towards the door.
I wish more journalists, doctors and politicians felt able to announce a change of heart when circumstances dictated, without the ‘loss of face’ that seems to accompany it.
As JM Keynes put it: “When the facts change, I change my mind. What do you do, sir?”
TonyB, Max,
You’re on as Lead Authors for the next chapters. Last one to get a $million grant is a sissy.
On empirical evidence v common sense: – one of the things which tipped me into incorrigible scepticism was a study conducted by the Institute of Forecasting, which found that Joe Public was often better at predicting the future than the “experts”, since he tended to assume that things would carry on much as before, whereas the experts liked to follow any trend line which caught their fancy into the realms of NeverNeverland.
Geoff – does your history relate what happened to Pachauri?
I wonder how that piece might be received at RC or Joe Romm’s? A bit too much for their ironometers, I suspect!
JamesP
I haven’t even decided what style to adopt yet, let alone what happens to the minor characters. Should this be “The Decline and Fall of the Tropospheric Temperature” or something more Monty Python, like “What has Carbon Dioxide ever done for us?”
I suggest a meeting of Lead Authors in Bali as soon as possible. Drinks on the taxpayer.