Posts tagged ‘Sir Brian Hoskins’

Computer Modeling as "Evidence"

The BBC has decided never to talk to climate skeptics again, based in part on the "evidence" of computer modeling

Climate change skeptics are being banned from BBC News, according to a new report, for fear of misinforming people and to create more of a "balance" when discussing man-made climate change.

The latest casualty is Nigel Lawson, former Chancellor of the Exchequer and climate change skeptic, who has just recently been barred from appearing on the BBC. Lord Lawson, who has written about climate change, said the corporation is silencing the debate on global warming since he discussed the topic on its Radio 4 Today program in February.

This skeptic accuses "Stalinist" BBC of succumbing to pressure from those with renewable energy interests, like the Green Party, in an editorial for the Daily Mail.

He appeared on February 13 debating with scientist Sir Brian Hoskins, chairman of the Grantham Institute for Climate Change at Imperial College, London, to discuss recent flooding that supposedly was linked to man-made climate change.

Despite the fact that the two intellectuals had a "thoroughly civilized discussion," BBC was "overwhelmed by a well-organized deluge of complaints" following the program. Naysayers harped on the fact that Lawson was not a scientist and said he had no business voicing his opinion on the subject.

...

Among the objections, including one from Green Party politician Chit Chong, was that Lawson's views were not supported by evidence from computer modeling.

I see this all the time.  A lot of things astound me in the climate debate, but perhaps the most astounding has been to be accused of being "anti-science" by people who have such a poor grasp of the scientific process.

Computer models and their output are not evidence of anything.  Computer models are extremely useful when we have hypotheses about complex, multi-variable systems.  It may not be immediately obvious how to test these hypotheses, so computer models can take these hypothesized formulas and generate predicted values of measurable variables that can then be compared to actual physical observations.

This is no different (except in speed and scale) from a person in the 18th century sitting down with Newton's gravitational equations and grinding out five years of predicted positions for Venus (in fact, the original meaning of the word "computer" was a human being who ground out numbers in just his way).  That person and his calculations are the exact equivalent of today's computer models.  We wouldn't say that those lists of predictions for Venus were "evidence" that Newton was correct.  We would use these predictions and compare them to actual measurements of Venus's position over the next five years.  If they matched, we would consider that match to be the real evidence that Newton may be correct.
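The comparison loop described above can be sketched in a few lines of Python. The orbital numbers here are purely illustrative stand-ins (a circular orbit with a roughly Venus-like 225-day period), not real ephemeris data; the point is where the evidence lives, not the astronomy.

```python
import math

def predicted_position(day: int) -> float:
    """Toy 'model': predicted ecliptic longitude in degrees.
    Assumes a circular orbit with an illustrative 225-day period --
    stand-in numbers, not a real ephemeris."""
    return (360.0 / 225.0 * day) % 360.0

# Hypothetical 'observations', synthesized here with small errors,
# since the point is the comparison step, not the data itself.
observations = [(d, predicted_position(d) + 0.05 * math.sin(d))
                for d in range(0, 1825, 90)]  # five years, every 90 days

# The evidence is the match (or mismatch), not the predictions themselves.
residuals = [abs(predicted_position(d) - obs) for d, obs in observations]
max_error = max(residuals)
print(f"largest prediction error: {max_error:.3f} degrees")
```

If the largest residual stays within observational uncertainty, the theory has survived a test; a printout of the predictions alone proves nothing.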

So it is not the existence of the models or their output that is evidence that catastrophic man-made global warming theory is correct.  The evidence would be the output of these predictive models actually matching what plays out in reality.  Which is why skeptics think the divergence between climate model temperature forecasts and actual temperatures is important, but we will leave that topic for other days.

The other problem with models

The other problem with computer models, besides the fact that they do not and cannot constitute evidence in and of themselves, is that their results are often sensitive to small changes in the tuning or setting of variables, and that these tuning decisions are often totally opaque to outsiders.

I did computer modelling for years, though of markets and economics rather than climate.  But the techniques are substantially the same.  And the pitfalls.

Confession time.  In my very early days as a consultant, I did something I am not proud of.  I was responsible for a complex market model based on a lot of market research and customer service data.  Less than a day before the big presentation, and with all the charts and conclusions made, I found a mistake that skewed the results.  In later years I would have the moral courage and confidence to cry foul and halt the process, but at the time I ended up tweaking a few key variables to make the model continue to spit out results consistent with our conclusion.  It is embarrassing enough that I have trouble writing this for public consumption 25 years later.

But it was so easy.  A few tweaks to assumptions and I could get the answer I wanted.  And no one would ever know.  Someone could stare at the model for an hour and not recognize the tuning.
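How easy it is can be shown with a toy projection model. Everything here is invented for illustration (the function, the growth and discount rates, the ten-year horizon): two runs differ only in half-point tweaks to two assumptions, each individually defensible, and the output moves materially in the direction the modeler wants.

```python
def projected_benefit(base_demand: float, growth: float, discount: float,
                      years: int = 10) -> float:
    """Toy projection: discounted cumulative benefit over a horizon.
    All names and numbers are illustrative, not from any real study."""
    total = 0.0
    demand = base_demand
    for year in range(1, years + 1):
        demand *= (1 + growth)                   # assumed demand growth
        total += demand / (1 + discount) ** year  # discount back to today
    return total

# Two runs that differ only in 'defensible' half-point tweaks.
honest = projected_benefit(100.0, growth=0.020, discount=0.070)
tuned  = projected_benefit(100.0, growth=0.025, discount=0.065)

print(f"honest run: {honest:.1f}   tuned run: {tuned:.1f}")
# A reviewer staring at the code sees two plausible parameterizations;
# nothing flags which one was chosen to reach a predetermined answer.
```

Both parameter sets would pass a casual review, yet the tuned run reports a benefit several percent higher, and nothing in the code records why those values were chosen.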

Robert Caprara has similar thoughts in the WSJ (probably behind a paywall).  Hat tip to a reader:

The computer model was huge—it analyzed every river, sewer treatment plant and drinking-water intake (the places in rivers where municipalities draw their water) in the country. I'll spare you the details, but the model showed huge gains from the program as water quality improved dramatically. By the late 1980s, however, any gains from upgrading sewer treatments would be offset by the additional pollution load coming from people who moved from on-site septic tanks to public sewers, which dump the waste into rivers. Basically the model said we had hit the point of diminishing returns.

When I presented the results to the EPA official in charge, he said that I should go back and "sharpen my pencil." I did. I reviewed assumptions, tweaked coefficients and recalibrated data. But when I reran everything the numbers didn't change much. At our next meeting he told me to run the numbers again.

After three iterations I finally blurted out, "What number are you looking for?" He didn't miss a beat: He told me that he needed to show $2 billion of benefits to get the program renewed. I finally turned enough knobs to get the answer he wanted, and everyone was happy...

I realized that my work for the EPA wasn't that of a scientist, at least in the popular imagination of what a scientist does. It was more like that of a lawyer. My job, as a modeler, was to build the best case for my client's position. The opposition will build its best case for the counter argument and ultimately the truth should prevail.

If opponents don't like what I did with the coefficients, then they should challenge them. And during my decade as an environmental consultant, I was often hired to do just that to someone else's model. But there is no denying that anyone who makes a living building computer models likely does so for the cause of advocacy, not the search for truth.