Posts tagged ‘KNMI’

Using Computer Models To Launder Certainty

For a while, I have criticized the practice, both in climate science and in economics, of using computer models to increase our apparent certainty about natural phenomena.  We take shaky assumptions and guesstimates of certain constants and natural variables and plug them into computer models that produce projections with triple-decimal precision.  We then treat the output with a reverence that does not match the quality of the inputs.

I have had trouble finding precisely the right words to explain this sort of knowledge laundering.  But this week I was presented with an excellent example from climate science, courtesy of Roger Pielke, Sr.  This is an excerpt from a recent study trying to figure out whether a high climate sensitivity to CO2 can be reconciled with the lack of ocean warming over the last ten years (bold added):

“Observations of the sea water temperature show that the upper ocean has not warmed since 2003. This is remarkable, as it is expected the ocean would store the lion’s share of the extra heat retained by the Earth due to the increased concentrations of greenhouse gases. The observation that the upper 700 meters of the world ocean have not warmed for the last eight years gives rise to two fundamental questions:

  1. What is the probability that the upper ocean does not warm for eight years as greenhouse gas concentrations continue to rise?
  2. As the heat has not been stored in the upper ocean over the last eight years, where did it go instead?

These questions cannot be answered using observations alone, as the available time series are too short and the data not accurate enough. We therefore used climate model output generated in the ESSENCE project, a collaboration of KNMI and Utrecht University that generated 17 simulations of the climate with the ECHAM5/MPI-OM model to sample the natural variability of the climate system. When compared to the available observations, the model describes the ocean temperature rise and variability well.”

Pielke goes on to deconstruct the study, but just compare the two bolded statements.  First, that there is not sufficiently extensive and accurate observational data to test the hypothesis.  BUT then a model is created, and this model is validated against that same observational data.  Then the model is used to draw all kinds of conclusions about the problem being studied.

This is the clearest, simplest example of certainty laundering I have ever seen.  If there is not sufficient data to draw conclusions about how a system operates, then how can there be enough data to validate a computer model which, in code, just embodies a series of hypotheses about how a system operates?

A model is no different than a hypothesis embodied in code.   If I have a hypothesis that the average width of neckties in this year's Armani collection drives stock market prices, creating a computer program that predicts stock market prices falling as ties get thinner does nothing to increase my certainty of this hypothesis  (though it may be enough to get me media attention).  The model is merely a software implementation of my original hypothesis.  In fact, the model likely has to embody even more unproven assumptions than my hypothesis, because in addition to assuming a causal relationship, it also has to be programmed with specific values for this correlation.
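
To make the point concrete, here is a minimal sketch of what the necktie "model" amounts to in code.  Every name and number in it is invented for illustration; the point is that the "prediction" is nothing but the original assumption echoed back with extra decimal places.

```python
# A minimal sketch of the necktie "model" -- the causal claim and the specific
# coefficient are assumptions typed directly into the code, not measured facts.

def predict_market_move(tie_width_change_mm: float,
                        assumed_sensitivity: float = 2.5) -> float:
    """Predicted % move in the stock market per mm change in average tie width.

    `assumed_sensitivity` is a made-up constant: the model can never be more
    certain than this guess, however many decimals the output carries.
    """
    return assumed_sensitivity * tie_width_change_mm

# Thinner ties this season (say, -3 mm) "predict" a market decline, but the
# output is just the hypothesis restated with spurious precision.
print(f"Predicted market move: {predict_market_move(-3.0):.3f}%")
```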

This is not just a climate problem.  The White House studies on the effects of the stimulus were absolutely identical.  They had a hypothesis that government deficit spending would increase total economic activity.  After they spent the money, how did they claim success?  Did they measure changes to economic activity through observational data?  No, they had a model that was programmed with the hypothesis that government spending increased job creation, ran the model, and pulled a number out that said, surprise, the stimulus created millions of jobs (despite falling employment).  And the press reported it like it was a real number.

Signal to Noise Ratio

There is a burgeoning grass-roots movement (described here, in part) to better document key temperature measurement stations, both to correct past measurements and to better understand the quality of the measurements we are getting.

Steve McIntyre has had some back-and-forth with Eli Rabbett about temperature measurement points, each accusing the other of cherry-picking their examples of bad and good installations.  McIntyre therefore digs into one of the measurement points Rabbett offers as a cherry-picked example of a good installation.  For this supposedly good example of a historical temperature measurement point, here are the adjustments made to the site's measurements before they are crunched into the official historic global warming numbers:

Corrections have been made for:
- relocation combined with a transition from a large open hut to a wooden Stevenson screen (September 1950) [ed:  This correction was about 1°C]
- relocation of the Stevenson screen (August 1951).
- lowering of Stevenson screen from 2.2 m to 1.5 m (June 1961).
- transition from an artificially ventilated Stevenson screen to the current KNMI round-plated screen (June 1993).
- warming trend of 0.11°C per century caused by urban warming.

Note that these corrections, which are by their nature guesstimates, add up to well over 1°C, and are therefore larger in magnitude than the global warming that scientists are trying to measure.  In other words, the noise is larger than the signal.
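
As a rough back-of-the-envelope illustration of that claim: only the ~1°C relocation correction and the 0.11°C-per-century urban adjustment are quantified above, so in the sketch below the magnitudes of the other corrections, and of the warming signal itself, are assumed placeholders rather than values from the station record.

```python
# Back-of-the-envelope comparison of adjustment "noise" to the warming "signal".
# Only the ~1.0 C relocation correction and the 0.11 C/century urban adjustment
# come from the post; every other magnitude here is an assumed placeholder.

corrections_c = {
    "1950 relocation + hut-to-Stevenson-screen": 1.0,   # from the post
    "1951 screen relocation": 0.1,                      # assumed
    "1961 screen lowered from 2.2 m to 1.5 m": 0.1,     # assumed
    "1993 switch to KNMI round-plated screen": 0.1,     # assumed
    "urban warming (0.11 C/century over ~1 century)": 0.11,
}

total_adjustment = sum(abs(v) for v in corrections_c.values())
assumed_signal = 0.7  # assumed 20th-century warming signal, in C

print(f"Cumulative adjustments: ~{total_adjustment:.2f} C")
print(f"Warming signal sought:  ~{assumed_signal:.2f} C")
print(f"Adjustments exceed signal: {total_adjustment > assumed_signal}")
```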

Postscript:  0.11°C per century is arguably way too low an estimate for urban warming.