
Environmentalists and the Third World

While we can argue about the projected impacts of man-made global warming (my skeptics site here), it is almost certain that any solution that puts a real dent in CO2 production will bar from the middle class about a billion people who are just climbing out of subsistence poverty.  TJIC notes a particularly odious proposal by environmental groups to encourage human power over industrialization in the third world:

See, first world Volvo-driving environmentalists!  We can help the Third World!  All we need to do is build them human hamster wheels, so that they can set their children to work pumping water, instead of using nasty diesel pumps (like we do here in the First World, while our children attend soccer practice or piano lessons).

Don't miss the really awful animation from the environmentalists' site.

Good News: Hansen Releases the Temperature Code

Good news this week:  James Hansen and NASA have now deigned to release for scrutiny their taxpayer-funded temperature aggregation and adjustment code.  I go in more detail and explain why this matters over at Climate Skeptic.

By the way, if you are wondering why I have calmed down a bit on climate of late here at Coyote Blog, it is because I have decided that my climate work really was diluting what I want to do here at Coyote Blog, and it really deserved its own home and audience.  I have begun archiving old posts over at Climate Skeptic, and I will do most of my new posting on climate there.  Those interested in the climate issues are encouraged to bookmark the new site and/or subscribe to its feed.

For a little while, I will still mirror the headlines over here at Coyote Blog (after all, the paint is still so wet over at Climate Skeptic that I don't think Google has found me yet -- a few blogrolls wouldn't hurt, hint, hint.)

Also, in the next few weeks I plan to release my own video on issues with catastrophic anthropogenic (man-made) global warming theory.  The core of this video will be based on this skeptics summary post and my 60-second climate overview, as well as my free 80-page skeptics primer, of course.

The Perfect Political Bludgeon

I am often asked, "why, if the threat of global warming is really so overstated, does the issue have such legs with politicians, media, and activists?"

Answer:  Because it is the perfect political bludgeon.  One of the reasons I felt like high school debate really was broken (I don't know if it has been fixed since) was that every single debate eventually devolved into which side was more likely to cause a nuclear war.  It didn't matter if you were arguing about energy policy or the presidential primary system; no good debate case stopped short of blaming the other side for nuclear war.

Today, with nuclear weapons mostly forgotten (unfortunately not gone), global warming is the new nuclear war. It doesn't matter what you are arguing about:

There was Clare Short, a member of the British Parliament and Secretary for International Development under Prime Minister Tony Blair until she resigned in 2003 over the Iraq war. Claiming that Israel is actually "much worse than the original apartheid state" and accusing it of "killing (Palestinian) political leaders," Ms. Short charged the Jewish state with the ultimate crime: Israel "undermines the international community's reaction to global warming." According to Ms. Short, the Middle East conflict distracts the world from the real problem: man-made climate change. If extreme weather will lead to the "end of the human race," as Ms. Short warned it could, add this to the list of the crimes of Israel.

Contributing to Science, Follow-up

My photo survey of the Tucson USHCN climate station is still generating a lot of discussion -- for example, here, here, and here.

And you too can have the satisfaction of contributing to science.  All you need is a camera (a GPS of some sort is also helpful).  I wrote a post with instructions on how to find temperature stations near you and how to document them for science here.  Believe it or not, for all the work and money spent on global warming, this is something that no one had done -- actually go document these sites to check their quality and potential biases.

Problems With Catastrophic Global Warming Shown in Two Charts

OK, I understand that perhaps my worst flaw in trying to make a point is being too loquacious.  In a previous post, I showed why estimates of climate catastrophe were overblown by using the earth's experience over the last 100 years as an empirical guide.  Today, I shall try to make the same point with fewer words and just two charts.

Scientists have a concept called climate sensitivity, which refers to the amount of global warming in degrees Celsius we might expect from a doubling of CO2 concentrations from a pre-industrial 280ppm to 560ppm (we are currently at about 380ppm and will reach 560ppm between 2065 and 2100, depending on how aggressive a forecast you want to adopt).

A simple way to estimate sensitivity is from experience over the past century.  While CO2 has gone up by 100ppm, global temperatures have gone up by at most 0.6ºC (from the 4th IPCC report).  I actually believe this number is over-stated due to uncorrected urban effects and other surface temperature measurement issues, but let's assume 0.6ºC.  Only a part of that 0.6ºC is due to man -- some is likely due to natural cyclical effects -- but again, to avoid argument, let's assume man's CO2 has heated the earth 0.6ºC.  From these data points, we can project forward:

[Chart: projected temperature increase vs. CO2 concentration, extrapolated from historic warming (blue line)]

As you can see, the projection is actually a diminishing curve.  For reasons I will not go into again (you can read much more in my original post), this relationship HAS to be a diminishing curve.  It's a fact accepted by everyone.  True climate consensus.  We can argue about the slope and exact shape, but I have chosen midpoint values from a reasonable range.  The answer is not that sensitive to different assumptions anyway.  Even a linear extrapolation, which is clearly wrong scientifically, would only yield a sensitivity projection a few tenths of a degree higher.

What we arrive at is a sensitivity of about 1.2 degrees Celsius for a CO2 doubling (where the blue line crosses 560ppm).  In other words, we can expect another 0.6ºC increase over the next century, about the same amount we experienced (and most of us failed to notice) over the last century.
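For those who want to check this arithmetic themselves, here is a minimal sketch in Python.  It assumes the standard logarithmic relationship between CO2 concentration and warming; that is just one reasonable curve shape from the range discussed above (the chart uses midpoint values, so the pure-log result lands a bit above the 1.2ºC quoted):

    import math

    # Minimal sketch, assuming warming = S * ln(C/C0) / ln(2), the standard
    # logarithmic CO2 response.  Concentrations and the 0.6C figure are from
    # the post; the pure-log curve shape is my illustrative assumption.
    C0, C_NOW, C_DOUBLE = 280.0, 380.0, 560.0
    OBSERVED_WARMING = 0.6  # degrees C, generously attributed entirely to CO2

    # Fraction of a full doubling's warming already realized at 380 ppm
    fraction_realized = math.log(C_NOW / C0) / math.log(C_DOUBLE / C0)
    sensitivity = OBSERVED_WARMING / fraction_realized

    print(f"fraction realized: {fraction_realized:.2f}")          # ~0.44
    print(f"implied sensitivity: {sensitivity:.1f} C/doubling")   # ~1.4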

But, you are saying, global warming catastrophists get much higher numbers.  Yes they do, with warming as high as 9-10ºC in the next century.  In fact, most global warming catastrophists believe the climate sensitivity is at least 3ºC per doubling, and many use estimates as high as 5ºC or 6ºC.  Do these numbers make sense?  Well, let's draw the same curve for a sensitivity of 3ºC, the low end of the catastrophists' estimates, this time in red:

[Chart: the same extrapolation with a 3ºC-sensitivity curve added in red]

To get a sensitivity of 3.0ºC, one has to assume that global warming due solely to man's CO2 (nothing else) would have to be 1.5ºC to date (where the red line intersects the current concentration of 380ppm).  But no one, not the IPCC or anyone else, believes measured past warming has been anywhere near this high.  So to believe the catastrophic man-made global warming case, you have to accept a sensitivity three or more times higher than historical empirical data would support.  Rather than fighting against climate consensus, which is how we are so often portrayed, skeptics in fact have history and empirical data on our side.  For me, this second chart is the smoking gun of climate skepticism.  We have a lot of other issues -- measurement biases, problems with historical reconstructions, role of the sun, etc. -- but this chart highlights the central problem:  catastrophic warming forecasts make no sense based on the last 100+ years of actual data.
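The same arithmetic can be run in reverse to check the red curve.  A small sketch, under the same pure-log assumption as before (the chart's midpoint curve gives the ~1.5ºC cited above; the pure log gives ~1.3ºC, and either is far above the measured 0.6ºC):

    import math

    # Reverse check: how much past warming would a 3C sensitivity imply at
    # today's 380 ppm?  Pure-log curve assumed, as in the previous sketch.
    C0, C_NOW, C_DOUBLE = 280.0, 380.0, 560.0
    CLAIMED_SENSITIVITY = 3.0  # degrees C per doubling

    implied_past = CLAIMED_SENSITIVITY * math.log(C_NOW / C0) / math.log(C_DOUBLE / C0)
    print(f"implied warming to date: {implied_past:.1f} C")  # ~1.3, vs. 0.6 measured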

Global warming catastrophists in fact have to argue against historical data, and say it is flawed in two ways:  First, they argue there are positive feedbacks in climate that will take hold in the future and accelerate warming; and second, they argue there are other anthropogenic effects, specifically sulphate aerosols, that are masking man-made warming.  Rather than just repeat myself (and in the interest of proving I can actually be succinct), I will point you to my original post, the second half of which deals in depth with these two issues.

As always, you can find my Layman's Guide to Skepticism about Man-made Global Warming here.  It is available for free in HTML or pdf download, or you can order the printed book that I sell at cost.  My other recent posts about climate are here.

Must...Not...Make...Ad...Hominem...Attack

A couple of weeks ago, Newsweek published a front-page article demonizing ExxonMobil for giving $10,000 honorariums to researchers likely to publish work skeptical of catastrophic man-made global warming.  If $10,000 is corrupting and justifies such an ad hominem attack, what are we to make of $100 million (pronounced in Dr. Evil voice with pinkie to lips) a year in pro-catastrophe spending:

That's right, $100 million per year.  Al Gore, who seems to think it is sinister for other people to spend money in order to communicate their ideas about sound public policy, is going to outspend the entire mass of climate policy critics tenfold in order to spread his message of environmental catastrophism to the public.

Speech:  OK for me, but not for thee.

Postscript:  By the way, I fully support Mr. Gore and his donors' efforts to let their viewpoint be heard.  I just wonder why they don't extend me the same courtesy.

Reality Checking Global Warming Forecasts

I know I have deluged you with a lot of climate change posts of late.  I think this particular post is important, as it is the clearest single argument I can make as to why I am skeptical that man-made global warming will rise to catastrophic levels.  It is not comprehensive (it took me 80 pages to do that), but it should get anyone thinking.

It turns out to be quite easy to do a simple but fairly robust reality check of global warming forecasts, even without knowing what a "Watt" or a "forcing" is.  Our approach will be entirely empirical, based on the last 100 years of climate history.  I am sensitive that we skeptics not fall into the 9/11 Truther syndrome of arguing against a coherent theory from isolated anomalies.  To this end, my approach here is holistic and not anomaly driven.  What we will find is that, extrapolating from history, it is almost impossible to get warming numbers as high as those quoted by global warming alarmists.

Climate Sensitivity

The one simple concept you need to understand is "climate sensitivity."  As used in most global warming literature, climate sensitivity is the amount of global warming that results from a doubling in atmospheric CO2 concentrations.   Usually, when this number is presented, it refers to the warming from a doubling of CO2 concentrations since the beginning of the industrial revolution.  The pre-industrial concentration is generally accepted as 280ppm (0.028% of the atmosphere) and the number today is about 380ppm, so a doubling would be to 560ppm.

As a useful, though not required, first step before we begin, I encourage you to read the RealClimate simple "proof" for laymen that the climate sensitivity is 3ºC, meaning the world will warm 3 degrees C with a doubling of CO2 concentrations from their pre-industrial level.  Don't worry if you don't understand the whole description; we are going to do it a different, and I think more compelling, way (climate scientists are a bit like the Wizard of Oz -- they are afraid if they make things too simple someone might doubt they are a real wizard).  3ºC is a common number for sensitivity used by global warming hawks, though it is actually at the low end of the range that the UN IPCC arrived at in their fourth report.  The IPCC (4th report, page 798) said that the expected value is between 3ºC and 4ºC and that there was a greater chance the sensitivity was larger than 6ºC than that it was 1.5ºC or less.  I will show you why I think it is extraordinarily unlikely that the number is even as high as 1.5ºC.

Our Approach

We are going to derive the sensitivity (actually a reasonable range for sensitivity) for ourselves in three steps.  First, we will do it a simple way.  Then, we will do it a slightly harder but more accurate way.  And third, we will see what we would have to assume to get a number anywhere near 3ºC.  Our approach will be entirely empirical, using past changes in CO2 and temperature to estimate sensitivity.  After all, we have measured CO2 going up by about 100 ppm.  That is about 36% of the way towards a doubling from 280 to 560.  And, we have measured temperatures -- and though there are a lot of biases in these temperature measurements, these measurements certainly are better than our guesses, say, of temperatures in the last ice age.  Did you notice something odd, by the way, in the RealClimate derivation?  They never mentioned measured sensitivities in the last 100 years -- they jumped all the way back to the last ice age.  I wonder if there is a reason for that?

A First Approximation

OK, let's do the obvious.  If we have experienced 36% of a doubling, then we should be able to take the historic temperature rise from CO2 for the same period and multiply it by 2.8 (that's just the reciprocal of 36%) to derive the temperature increase we would expect for a full doubling.

The problem is that we don't know the historic temperature rise solely from CO2.  But we do know how to bound it.  The IPCC and most global warming hawks place the warming since 1900 at about 0.6ºC.  Since no one attributes warming before 1900 to man-made CO2 (it did warm, but this is attributed to natural cyclical recovery from the little ice age), the maximum historic man-made warming is 0.6ºC.  In fact, all of that warming is probably not from CO2.  Some probably is from continued cyclical warming out of the little ice age.  Some, I believe strongly, is due to still-uncorrected biases, particularly urban heat islands, in surface temperature data.

But let's for a moment attribute, unrealistically, all of this 0.6ºC to man-made CO2 (this is in fact what the IPCC does in their report).   This should place an upper bound on the sensitivity number.  Taking 0.6ºC times 2.8 yields an estimated  climate sensitivity of  1.7ºC.  Oops.  This is about half of the RealClimate number or the IPCC number! And if we take a more realistic number for man-made historic warming as 0.4ºC, then we get a sensitivity of 1.1ºC.  Wow, that's a lot lower! We must be missing something important!  It turns out that we are, in this simple analysis, missing something important.  But taking it into account is going to push our sensitivity number even lower.
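Here is that first approximation as a few lines of Python, a sketch using only the numbers already given above:

    # First approximation: straight linear scaling of past warming up to a
    # full doubling.  All numbers are from the text above.
    ppm_so_far = 380 - 280        # 100 ppm increase to date
    ppm_doubling = 560 - 280      # 280 ppm for a full doubling

    multiplier = ppm_doubling / ppm_so_far   # 2.8, the reciprocal of 36%

    for past_warming in (0.6, 0.4):          # upper bound vs. more realistic
        print(f"{past_warming}C past -> {past_warming * multiplier:.1f}C sensitivity")
    # 0.6C -> 1.7C ; 0.4C -> 1.1C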

A Better Approximation

What we are missing is that the relation between CO2 concentration and warming is not linear, as implied in our first approximation.  It is a diminishing return.  This means that the first 50 ppm rise in CO2 concentrations causes more warming than the next 50 ppm, etc.  This effect has often been compared to painting a window.  The first coat of paint blocks out a lot of light, but the window is still translucent.  The next coat blocks out more light, but not as much as the first.  Eventually, subsequent coats have no effect because all the light is already blocked.  CO2 has a similar effect on warming.  It only absorbs certain wavelengths of radiation returning to space from earth.  Once the absorption of those wavelengths is saturated, extra CO2 will do almost nothing. (update:  By the way, this is not some skeptic's fantasy -- everyone in climate accepts this fact).

So what does this mean in English?  Well, in our first approximation, we assumed that 36% of a CO2 doubling would yield 36% of the temperature we would get in a doubling.  But in reality, since the relationship is a diminishing return, the first 36% of a CO2 doubling will yield MORE than 36% of the temperature increase you get for a doubling.  The temperature increase is front-loaded, and diminishes going forward.   An illustration is below, with the linear extrapolation in red and the more realistic decreasing exponential extrapolation in blue.

[Chart: linear extrapolation (red) vs. diminishing-return extrapolation (blue)]

The exact shape and equation of this curve is not really known, but we can establish a reasonable range of potential values.  For any reasonable shape of this curve, 36% of a CO2 doubling (where we are today) equates to between 43% and 63% of the final temperature increase over a doubling.  This would imply a multiplier between 2.3 and 1.6 for temperature extrapolation (vs. the 2.8 derived above for the straight linear extrapolation), or a climate sensitivity of 1.4ºC to 1.0ºC if man-made historic warming was 0.6ºC, and a range of 0.9ºC to 0.6ºC for a man-made historic warming of 0.4ºC.  I tend to use the middle of this range, with a multiplier of about 1.9 and a man-made historic warming of 0.5ºC, to give an expected sensitivity of 0.95ºC, which we can round to 1ºC.
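As a check on these figures, here is a small sketch sweeping the stated ranges (the multiplier bounds are just the reciprocals of 63% and 43%):

    # Sweep the ranges given above: extrapolation multipliers of roughly
    # 1.6 (if 63% of doubling warming is already realized) to 2.3 (if 43%),
    # against 0.4C to 0.6C of past man-made warming.
    for multiplier in (1.6, 1.9, 2.3):            # low, midpoint, high
        for past_warming in (0.4, 0.5, 0.6):      # degrees C attributed to man
            print(f"multiplier {multiplier}, past {past_warming}C"
                  f" -> sensitivity {multiplier * past_warming:.2f}C")
    # Midpoint case: 1.9 * 0.5C = 0.95C, rounded to 1C in the text.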

This is why you will often hear skeptics cite numbers closer to 1ºC rather than 3ºC for the climate sensitivity.  Any reasonable analysis of actual climate experience over the last 100 years yields a sensitivity much closer to 1ºC than 3ºC.  Most studies conducted before the current infatuation with cataclysmic warming forecasts came up with this same 1ºC, and peer-reviewed work is still coming up with this same number.

So what does this mean for the future?  Well, to predict actual temperature increases from this sensitivity, we would have to first create a CO2 production forecast and, you guessed it, global warming hawks have exaggerated that as well.  The IPCC says we will hit the full doubling to 560ppm around 2065 (Al Gore, incredibly, says we will hit it in the next two decades).  This means that with about 0.5ºC behind us, and a sensitivity of 3ºC, we can expect 2.5ºC more warming in the next 60 years.  Multiply that times exaggerated negative effects of warming, and you get instant crisis.

However, since actual CO2 production is already below IPCC forecasts, we might take a more reasonable date of 2080-2100 for a doubling to 560ppm.  And, combining this with our derived sensitivity of 1ºC (rather than RealClimate's 3ºC), we will get 0.5ºC more warming in the next 75-100 years.  This is about the magnitude of warming we experienced in the last century, and most of us did not even notice.

I know you are scratching your head and wondering what trick I pulled to get numbers so much less than the scientific "consensus."  But there is no trick; all my numbers are empirical and right out of the IPCC reports.  In fact, due to measurement biases and other climate effects that drive warming, I actually think the historic warming from CO2, and thus the sensitivity, is even lower, but I didn't want to confuse the message.

So what are climate change hawks assuming that I have not included?  Well, it turns out they add on two things, neither of which has much empirical evidence behind it.  It is in fact the climate hawks, not the skeptics, that need to argue for a couple of anomalies to try to make their case.

Is Climate Dominated by Positive Feedback?

Many climate scientists argue that there are positive feedbacks in the climate system that tend to magnify and amplify the warming from CO2.  For example, a positive feedback might be that a hotter climate melts sea ice and glaciers, which reduces the reflectiveness of the earth's surface, which causes more sunlight to be absorbed, which warms things further.  A negative feedback might be that a warmer climate vaporizes more water, which forms more clouds, which block sunlight and cool the earth.

Climate scientists who are strong proponents of catastrophic man-made warming theory assume that the climate is dominated by positive feedbacks.  In fact, my reading of the IPCC report says that the climate "consensus" is that net feedback in the climate system is positive and tends to add 2 more degrees of temperature for every one added from CO2.  You might be thinking -- aha -- I see how they got a sensitivity of 3ºC: your 1ºC plus 2ºC in feedback equals 3ºC.  (A quick sketch of this arithmetic follows the list below.)

But there is a problem with that.  In fact, there are three problems.  Here they are:

  1. We came up with our 1ºC sensitivity empirically.  In other words, we observed a 100ppm past CO2 increase leading to a 0.5ºC measured temperature increase, which implies 1ºC sensitivity.  But since this is empirical, rather than developed from some set of forcings and computer models, it should already be net of all feedbacks.  If there are positive feedbacks in the system, then they have been operating and should be part of that 1ºC.
  2. There is no good scientific evidence that there is a large net positive feedback loop in climate, or even that the feedback is net positive at all.  There are various studies, hypotheses, models, etc., but no proof at all.  In fact, you can guess this from our empirical data.  History implies that there can't be any large positive feedbacks in the system, or else we would have observed higher temperatures historically.  In fact, we can go back into the distant historical record (in fact, Al Gore showed the chart I am thinking of in An Inconvenient Truth) and find that temperatures have never run away or exhibited any sort of tipping point effect.
  3. The notion that a system like climate, which has been reasonably stable for millions of years, is dominated by positive feedback should offend the intuition of any scientist.  Nature is dominated in large part by negative feedback processes.  Positive feedback processes are highly unstable, and tend to run away to a distant endpoint.  Nuclear fission, for example, is a positive feedback process.
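As promised above, here is a sketch of the feedback arithmetic.  The gain formula is textbook feedback theory; reading the IPCC's "2 extra degrees per degree of CO2 warming" as a total multiplier of 3 (i.e., a feedback fraction f = 2/3) is my own illustrative framing, not a number from any model:

    # Equilibrium warming with feedback fraction f: total = base / (1 - f).
    # f = 0 means no feedback; f -> 1 means runaway; f < 0 means damping.
    def total_warming(base: float, f: float) -> float:
        if f >= 1:
            raise ValueError("f >= 1: runaway system, no equilibrium")
        return base / (1 - f)

    BASE = 1.0  # degrees C from CO2 alone, the empirical estimate above
    print(f"{total_warming(BASE, 0.0):.1f}C")   # 1.0C -- no net feedback
    print(f"{total_warming(BASE, 2/3):.1f}C")   # 3.0C -- the catastrophist case
    print(f"{total_warming(BASE, -0.5):.1f}C")  # 0.7C -- negative feedback damps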

Do aerosols and dimming imply a higher sensitivity?

Finally, the last argument that climate hawks would employ is that anthropogenic effects, specifically emission of SO2 aerosols and carbon black, have been reflecting sunlight and offsetting the global warming effect.  But, they caution, once we eliminate these pollutants, as we have done in the West (only to be offset by new emissions in China and the rest of Asia), temperatures will no longer be suppressed and we will see the full extent of warming.

First, again, no one really has any clue as to the magnitude of this effect, or even if it is an effect at all.  Second, its reach will tend to be localized over industrial areas (since these particles' presence in the atmosphere is relatively short-lived), whereas CO2 acts worldwide.  If these aerosols and carbon black are concentrated, say, over 20% of the land surface of the world, this means they are only affecting the temperature over 5% of the total earth's surface.  So it's hard to argue they are that significant.

However, let's say for a moment this effect does exist.  How large would it have to be for a 3.0ºC climate sensitivity to be justified by historical data?  Well, taking 3.0ºC and dividing by our derived extrapolation multiplier of 1.9, we get a required historic warming due to man's efforts of 1.6ºC.  This means that even if all past 0.6ºC of warming is due to man (a stretch), then aerosols must be suppressing a full 1ºC of warming.  I can't say this is impossible, but it is highly unlikely, and certainly no empirical evidence exists to support any number like this.  Particularly since dimming effects probably are localized, you would need as much as 20ºC of suppression in these local areas to get a 1ºC global effect.  Not very likely.
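To make the arithmetic in that paragraph explicit, here is a quick sketch (all figures are from the text above; the 5%-of-surface localization is the rough guess made earlier):

    # How much warming would aerosols have to hide for a 3.0C sensitivity
    # to square with history?  Figures are from the surrounding text.
    CLAIMED_SENSITIVITY = 3.0
    MULTIPLIER = 1.9                 # midpoint extrapolation multiplier
    MEASURED_PAST = 0.6              # degrees C, all generously blamed on man

    required_past = CLAIMED_SENSITIVITY / MULTIPLIER    # ~1.6C
    suppressed = required_past - MEASURED_PAST          # ~1.0C hidden by aerosols

    AFFECTED_FRACTION = 0.05         # ~5% of earth's surface, per the text
    local_suppression = suppressed / AFFECTED_FRACTION  # ~20C locally

    print(f"global suppression needed: {suppressed:.1f}C, "
          f"local: ~{local_suppression:.0f}C")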

Why the number might even be less

Remember that when we calculated sensitivity, we needed the historical warming due to man's CO2.  A simple equation for arriving at this number is:

Warming due to Man's CO2 = Total Historic Measured Warming - Measurement Biases - Warming from other Sources + Warming suppressed by Aerosols

This is why most skeptics care whether surface temperature measurements are biased upwards or whether the sun is increasing in intensity.  Global warming advocates scoff and say that these effects don't undermine greenhouse gas theory.  And they don't.  I accept that greenhouse gases cause some warming.  BUT the more surface temperature measurements are biased upwards, and the more warming is being driven by non-anthropogenic sources, the less is being caused by man.  And, as you have seen in this post, less warming caused by man historically means less that we will see in the future.  And while global warming hawks want to paint skeptics as "deniers," we skeptics want to argue the much more interesting question:  "Yes, but how much is the world warming, and does this amount of warming really justify the costs of abatement, which are enormous?"
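To see how the identity plays out, here is a sketch with purely hypothetical numbers (none of these figures are measurements; they only show how each term moves the bottom line):

    # Hypothetical attribution exercise using the identity above.  Change
    # any term and watch the man-made number -- and hence the forecast -- move.
    total_measured = 0.6        # C, measured 20th-century warming
    measurement_bias = 0.1      # C, e.g. uncorrected urban heat islands (a guess)
    other_sources = 0.1         # C, e.g. solar / little ice age recovery (a guess)
    aerosol_suppression = 0.0   # C, assumed zero absent good evidence

    man_made = (total_measured - measurement_bias
                - other_sources + aerosol_suppression)
    print(f"warming attributable to man's CO2: {man_made:.1f}C")  # 0.4C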


As always, you can find my Layman's Guide to Skepticism about Man-made Global Warming here.  It is available for free in HTML or pdf download, or you can order the printed book that I sell at cost.  My other recent posts about climate are here.

Um, Whatever

James Hansen, NASA climate scientist and lead singer in the climate apocalypse choir, responded a week ago to the revisions to his temperature data:

What we have here is a case of dogged contrarians who present results in ways intended to deceive the public into believing that the changes have greater significance than reality. They aim to make a mountain out of a mole hill. I believe that these people are not stupid, instead they seek to create a brouhaha and muddy the waters in the climate change story. They seem to know exactly what they are doing and believe they can get away with it, because the public does not have the time, inclination, and training to discern what is a significant change with regard to the global warming issue.

The proclamations of the contrarians are a deceit

Um, whatever.  Remember, this is the man whose data set, used by nearly every climate scientist in the world, contained large errors for years -- errors only recently discovered by Steven McIntyre (whom Hansen refuses to even name in his letter).  These errors persisted for years because Mr. Hansen refuses to allow the software and algorithms he uses to "correct" and adjust the data to be scrutinized by anyone else.  He keeps critical methodologies that are paid for by us taxpayers a secret.  But it is his critics who are deceitful?

In particular, he is bent out of shape that critics first presented the new data as a revised ranking of the hottest years rather than as a revised line graph.  But it was Hansen and his folks who made a big deal in the press that 1998 was the hottest year in history.  It was he who originally went for this sound bite rather than the more meaningful and data-rich graph when communicating with the press.  But then he calls foul when his critics mimic his actions?  (Oh, and by the way, I showed it both ways.)

Hansen has completely ignored the important lessons from this experience, while focusing like a laser on the trivial.  I explained in detail why this event mattered, and it was not mainly because of the new numbers.  In short, finding this mistake was pure accident -- it was a bit like inferring that the furniture in a house is uncomfortable solely by watching the posture of visitors leaving the house.  That's quite a deductive achievement, but how much more would you learn if the homeowners would actually let you in the house to inspect the furniture?  Maybe it's ugly, too.

So why does Hansen feel he should be able to shield himself from scrutiny and keep the details of his database adjustments and aggregation methodology a secret?  Because he thinks he is the king.    Just read his letter:

The contrarians will be remembered as court jesters. There is no point to joust with court jesters. ... Court jesters serve as a distraction, a distraction from usufruct. Usufruct is the matter that the captains wish to deny, the matter that they do not want their children to know about.

Why do we allow this kind of secrecy and spurning of scrutiny in science?  Is it tolerated in any other discipline?

Steve McIntyre has his response here.  McIntyre still has my favorite comment ever about Hansen and his gang:

While acolytes may call these guys "professionals", the process of data adjustment is really a matter of statistics and even accounting. In these fields, Hansen and Mann are not "professionals" - Mann admitted this to the NAS panel explaining that he was "not a statistician". As someone who has read their works closely, I do not regard any of these people as "professional". Much of their reluctance to provide source code for their methodology arises, in my opinion, because the methods are essentially trivial and they derive a certain satisfaction out of making things appear more complicated than they are, a little like the Wizard of Oz. And like the Wizard of Oz, they are not necessarily bad men, just not very good wizards.

Update:  If you have a minute, read Hansen's letter, and then ask yourself:  Does this sound like what I would expect of scientific discourse?  Does he sound more like a politician or a scientist?

The Next State AG Boondoggle

Chris Horner reports that the next mass-state-AG tort, modeled after their fairly successful efforts against tobacco companies, will be against oil companies over global warming:

A little birdie recently chirped about some usual-suspect state attorneys general preparing a litigation strategy document for/with environmental pressure groups, providing a roadmap for cooperatively replicating the tobacco litigation of a decade ago in the "global warming" context, substituting that projected catastrophe for cancer and "big energy" for tobacco companies.

The point of such exercise would not be to litigate the matter to conclusion -- ever more challenging what with forced corrections of the temperature record, recent exposure of the woeful reliability of our own world's most reliable surface measuring network, and of course no global warming in a decade (or, we now know, since 1900 for that matter) -- but to extract massive settlements from the energy industry to further fund the trial lawyers, greens and the greens' pet projects. Just imagine the anti-energy campaign that this model would yield! And at no cost, really, except to anyone who uses energy and/or invests in these sleepy "granny stocks". Oh, and the economy.

He goes on to include a copy of the memo making the rounds of the AG offices.   This will certainly be a circus, and generally an expensive time-waster that will just serve to line the pockets of tort lawyers and the politically connected.  If things turn out like the tobacco settlement, the oil companies may jump on board early, since the tobacco settlement has turned into a state-enforced oligopoly for the major tobacco companies.  On the bright side, this might be an opportunity to subpoena the details of a bunch of climate work that is currently kept secret.

Balanced on the Knife Edge

OK, obviously I am not going to be able to stop posting on climate.  TigerHawk has a nice article on the global cooling panic from the April 28, 1975 issue of Newsweek.  However, rather than highlight the fact that climatologists have reversed themselves on cooling vs. warming, because that sometimes happens in science, I want to highlight what they described as the effects of global cooling:

They begin by noting the slight drop in over-all temperature that produces large numbers of pressure centers in the upper atmosphere. These break up the smooth flow of westerly winds over temperate areas. The stagnant air produced in this way causes an increase in extremes of local weather such as droughts, floods, extended dry spells, long freezes, delayed monsoons and even local temperature increases -- all of which have a direct impact on food supplies.

So cooling will cause more droughts, floods, extreme weather, and even local temperature increases.  And we have been told constantly that warming will cause more droughts, floods, extreme weather, and even local temperature decreases.  So does this mean that we are currently balanced on the knife edge of the perfect climate, and any change cooler or warmer will make it worse?  Or could it be that the weather-disaster-hype-machine has a defined playbook and these are its elements?

Cities and Global Warming

OK, I lied.  I have one more post I want to make on global warming now that Steve McIntyre's site is back up.  I suspect I tend to bury the lede in my warming posts, because I try to be really careful to set up the conclusion in a fact-based way.  However, for this post, I will try a different approach.  Steven McIntyre has reshuffled the data in a study on urbanization and temperature that is relied on by the last IPCC report to get this chart for US Temperature data.
[Chart: 20th-century US temperature trend, urban vs. rural stations, from Peterson's data]

Conclusion?  For this particular set of US temperature data, all the 20th century warming was observed in urban areas, and none was observed in rural areas less affected by urban heat islands, asphalt, cars, air conditioning, etc.

If it can be generalized, this is an amazing conclusion -- it would imply that the sum of US measured warming over the last century could be almost 100% attributed to urban heat islands (a different and more localized effect than CO2 greenhouse gas warming).  Perhaps more importantly, outside of the US nearly all of the historical temperature measurement is in urban areas -- no one has 100-year temperature records for the Chinese countryside.  However much this effect might be over-stating US temperature increases, it would probably be even more pronounced in measurements in other parts of the world.

OK, so how did he get this chart?  Did he cherry-pick the data?  First, a bit of background.

The 2003 Peterson study on urban effects on temperature was adopted as a key study for the last IPCC climate report.  In that report, Peterson concluded:

Contrary to generally accepted wisdom, no statistically significant impact of urbanization could be found in annual temperatures.

This study (which runs counter to both common sense and the preponderance of past studies) was latched onto by the IPCC to allow them to ignore urban heat island effects on historical temperatures and claim that almost all past warming in the last half-century was due to CO2.  Peterson's methodology was to take a list of several hundred US temperature stations (how he picked these is unclear; they are a mix of USHCN and non-USHCN sites) and divide them between "urban" and "rural" using various inputs, including satellite photos of night lights.  Then he compared the temperature changes over the last century for the two groups, and declared them substantially identical.

However, McIntyre found a number of problems with this analysis.  First, looking at Peterson's data set, he saw that the raw temperature measurements did show an urbanization effect of about 0.7ºC over the last century, a very large number.  It turns out that Peterson never showed these raw numbers in his study, only the numbers after he applied layers of "corrections" to them, many of which appear to McIntyre to be statistically dubious.  I discussed the weakness of this whole "adjustment" issue here.

Further, though, McIntyre found obviously rural sites lurking in the urban data, and vice versa, such that Peterson was really comparing a mixed bag with a mixed bag.  For example, Snoqualmie Falls showed as urban -- I have been to Snoqualmie Falls several times, and while it is fairly close to Seattle, it is not urban.  So McIntyre did a simple sort.  He took from Peterson's urban data set only large cities, which he defined as having a major league sports franchise  (yes, a bit arbitrary, but not bad).  He then compared this narrower urban data set from Peterson against Peterson's rural set and got the chart above.  The chart is entirely from Peterson's data set, with no cherry-picking except to clean up the urban list.

Postscript:  Please don't get carried away.  Satellite measurements of the troposphere, which are fairly immune to these urbanization effects, show the world has been warming, though far less than the amount shown in surface temperature databases.

Update: To reinforce the point about global sites, Brazil apparently has only six (6) sites in the worldwide database.  That is about 1/200 of the number of sites in the continental US, which has about the same land area.  And of those six, McIntyre compares urban vs. rural sites.  Guess what he finds?  And, as a follow-up from the postscript, while satellites show the Northern Hemisphere is warming, they show that the Southern Hemisphere is not.

Denier vs. Skeptic

We all know why Newsweek and many others (like Kevin Drum) choose to use the term "denier" for those of us who are skeptical of catastrophic anthropogenic global warming:  These media folks, who are hesitant to use the word "terrorist" because of its emotional content, want to imply that we skeptics are somehow similar to Holocaust deniers.

But beyond just the issues of false emotional content, the word denier is incorrect as applied to most skeptics, including myself, and helps man-made warming hawks avoid a difficult argument.  I try to be careful to say that I am a skeptic of "catastrophic man-made (or anthropogenic) global warming theory." 

  • So, does that mean I think the world is not warming?  In fact, the evidence is pretty clear that it is warming (though perhaps not by as much as shown in current surface temperature databases).
  • So does this mean that I think that human activities are not causing some warming?  In fact, I do think man-made CO2 is causing some, but not all the current 20th century warming trend.  I also think that man's land use  (urbanization, irrigated agriculture, etc) has effects on climate.

Where I really get skeptical is the next proposition -- that man's burning of fossil fuels is going to cause warming in the next century that will carry catastrophic impacts, and that these negative effects will justify massive current spending and government interventions (that will have their own negative consequences in terms of lost economic growth, increased poverty, and reduction in freedoms). 

Strong supporters of catastrophic man-made global warming theory do not usually want to argue this last point.  It is much easier to argue points 1 and 2, because the science is pretty good that the earth has warmed (though the magnitude is in question) and that CO2 greenhouse effect does cause warming (though the magnitude is in question).  That is why skeptics are called deniers.  It is in effect a straw man that allows greenhouse supporters to stay on 1 and 2 without getting into the real meat of the question.

Here is a quick example to prove my point.  Follow me for three paragraphs, then ask yourself if you have ever heard any of this in the media or on any RealClimate-type site's FAQ.

Anthropogenic global warming hawks admit that the warming solely from the CO2 greenhouse effect will likely NOT rise to catastrophic levels.  So how do they get such big, scary forecasts?  The answer is positive feedback.

Almost every process you can think of in nature operates by negative feedback, meaning that an input to a system is damped.  Roll a ball, and eventually friction and wind resistance bring it to a stop.  Positive feedback means that an input to the system is multiplied and increased.  Negative feedback is a ball in the bottom of a bowl, always returning to the center; positive feedback is a ball perched precariously at the top of a mountain that will run faster and faster downhill with a tiny push.  Positive feedback breeds instability, and processes that operate by positive feedback are dangerous, and usually end up in extreme states -- these processes tend to "run away" like the ball rolling down the hill.  Nuclear fission, for example, is a positive feedback process.

Current catastrophic man-made global warming theory asserts that our climate is dominated by positive feedback.  The last UN IPCC report posits that a small increase in temperature from CO2 is multiplied 2, 3, 4 times or more by positive feedbacks like humidity and ice albedo.  So a modest degree or degree and a half of warming from the greenhouse effect becomes a scary five or eight degrees of warming in the next century once any number of hypothesized positive feedbacks are applied.  Add to this exaggerated, sometimes over-the-top visions of possible negative consequences, and that is how global warming hawks justify massive government action.
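To see why the assumed feedback fraction drives everything, here is a small simulation sketch.  The feedback fractions and the 1.2ºC base are illustrative assumptions of mine for this example, not numbers from any climate model:

    # Each increment of warming triggers f times more warming, so the total
    # is base * (1 + f + f^2 + ...), which converges to base / (1 - f) for
    # f < 1 and runs away as f approaches 1.
    def feedback_total(base: float, f: float, rounds: int = 60) -> float:
        total, increment = 0.0, base
        for _ in range(rounds):
            total += increment
            increment *= f        # each round spawns f times more warming
        return total

    BASE = 1.2  # degrees C from CO2 alone over the next century (assumed)
    for f in (-0.3, 0.0, 0.55, 0.75):
        print(f"feedback f = {f:+.2f} -> ~{feedback_total(BASE, f):.1f}C total")
    # Negative feedback damps (~0.9C); f near 1 yields the scary 5C forecasts.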

OK, that is a very brief description of what I consider a sophisticated reason to be skeptical:  Most catastrophic warming forecasts depend on positive feedback loops, feedbacks for which we have little or no evidence and which don't tend to dominate in other stable systems.  So how many times have you seen this issue discussed?  Zero?  Yeah, it's so much easier just to call us deniers.

If you are interested, here is slightly longer version of my skeptic's point of view.  Here is my much longer version.  Here is the specific chapter that discusses feedback loops.  Here is Roy Spencer discussing problems with studies trying to measure these feedbacks.

Postscript:  By the way, it is in this context that the discussions about restating temperatures and problems with historical surface temperature measurements are important.  Exaggerated historical warming numbers leave more room to posit positive feedback loops.  Lower historical numbers, or evidence past warming is driven by non-man-made sources (e.g. solar activity), leave less room to justify positive feedback loops.

Update:  RealClimate has posted their six steps to explain catastrophic warming from CO2.  They seem to have buried the feedback issue.  Note that the forcings mentioned here include feedbacks; they are not from CO2 alone but from CO2 plus positive feedback.  Strange they didn't mention this.

A Temperature Adjustment Example

I won't go back into all the details, but I have posted before about just how large the manual adjustments to temperature numbers are (the "noise") as compared to the magnitude of measured warming (the "signal").  This issue of manual temperature corrections is the real reason the NASA temperature restatements are important (not the absolute value of the restatement).

Here is a quick visual example.  Both charts below are from James Hansen and the GISS and are for the US only.  Both use basically the same temperature measurement network (the USHCN).  The one on the left was Hansen's version of US temperatures in 1999.  The one on the right he published in 2001.
[Charts: Hansen's 1999 (left) vs. 2001 (right) versions of US temperature history]

The picture on the right is substantially different from the one on the left.  Just look at 1932 and 1998.  Between the first and second chart, none of the underlying temperature measurements changed.  What changed were the adjustments to the underlying measurements applied by the NOAA and by the GISS.  For some reason, temperatures after 1980 have been raised and temperatures in the middle of the century were lowered.

For scientists to apply a negative temperature adjustment to measurements, as they did for the early 1930's, means they think there was some warming bias in 1932 that does not exist today.  When scientists raise current temperatures, they are saying there is some kind of cooling bias that exists today that did not exist in the 1930's.  Both of these adjustments are basically implying the same thing:  that temperature measurement was more biased upwards, say by asphalt and urbanization and poor sitings, in 1932 than it is today.  Does this make any freaking sense at all?

Of course, there may be some other bias at work here that I don't know about.  But I and everyone else in the world are forced to guess because the NOAA and the GISS insist on keeping their adjustment software and details a secret, and continue to resist outside review.

Read much more about this from Steve McIntyre.

Some Final Thoughts on The NASA Temperature Restatement

I got a lot of traffic this weekend from folks interested in the US historical temperature restatement at NASA-GISS.  I wanted to share two final thoughts and also respond to a post at RealClimate.org (the #1 web cheerleader for catastrophic man-made global warming theory).

  1. This restatement does not mean that the folks at GISS are necessarily wrong when they say the world has been warming over the last 20 years.  We know from the independent source of satellite measurements that the Northern Hemisphere has been warming (though not so much in the Southern Hemisphere).  However, surface temperature measurements, particularly as "corrected" and aggregated at the GISS, have always been much higher than the satellite readings.  (GISS vs Satellite)  This incident may start to give us an insight into how to bring those two sources into agreement. 
  2. For years, Hansen's group at GISS, as well as other leading climate scientists such as Mann and Briffa (creators of historical temperature reconstructions), have flouted the rules of science by holding the details of their methodologies and algorithms secret, making full scrutiny impossible.  The best possible outcome of this incident will be if new pressure is brought to bear on these scientists to stop saying "trust me" and open their work to their peers for review.  This is particularly important for activities such as Hansen's temperature database at GISS.  While measurement of temperature would seem straightforward, in actual fact the signal to noise ratio is really low.  Upward "adjustments" and fudge factors added by Hansen to the actual readings dwarf measured temperature increases, such that, for example, most reported warming in the US is actually from these adjustments, not measured increases.
  3. In a week when Newsweek chose to argue that climate skeptics need to shut up, this incident actually proves why two sides are needed for a quality scientific debate.  Hansen and his folks missed this Y2K bug because, as a man-made global warming cheerleader, he expected to see temperatures going up rapidly, so he did not think to question the data.  Mr. Hansen is world-famous, is a friend of luminaries like Al Gore, and gets grants in quarter-million-dollar chunks from various global warming believers.  All his outlook and his incentives made him want the higher temperatures to be true.  It took other people with different hypotheses about climate to see the recent temperature jump for what it was:  an error.

The general response at RealClimate.org has been:  Nothing to see here, move along.

Among other incorrect stories going around are that the mistake was due to a Y2K bug or that this had something to do with photographing weather stations. Again, simply false.

I really, really don't think it matters exactly how the bug was found, except to the extent that RealClimate.org would like to rewrite history and convince everyone this was just a normal adjustment made by the GISS themselves rather than a mistake found by an outsider.  However, just for the record, the GISS, at least for now until they clean up history a bit, admits the bug was spotted by Steven McIntyre.  Whatever the bug turned out to be, McIntyre initially spotted it as a discontinuity that seemed to exist in GISS data around the year 2000.  He therefore hypothesized it was a Y2K bug, but he didn't know for sure because Hansen and the GISS keep all their code as a state secret.  And McIntyre himself says he became aware of the discontinuity during a series of posts that started from a picture of a weather station at Anthony Watts blog.  I know because I was part of the discussion, talking to these folks online in real time.  Here is McIntyre explaining it himself.

In sum, the post on RealClimate says:

Sum total of this change? A couple of hundredths of degrees in the US rankings and no change in anything that could be considered climatically important (specifically long term trends).

A bit of background - surface temperature readings have read higher than satellite readings of the troposphere, when the science of greenhouse gases says the opposite should be true.  Global warming hawks like Hansen and the GISS have pounded on the satellite numbers, investigating them 8 ways to Sunday, and have on a number of occasions trumpeted upward corrections to satellite numbers that are far smaller than these downward corrections to surface numbers. 

But yes, IF this is the only mistake in the data, then this is a mostly correct statement from RealClimate.org.  However, here is my perspective:

  • If a mistake of this magnitude can be found by outsiders without access to Hansen's algorithms or computer code, just by inspection of the resulting data, then what would we find if we could actually inspect the code?  And this Y2K bug is by no means the only problem.  I have pointed out several myself, including adjustments for urbanization and station siting that make no sense, and averaging in rather than dropping bad measurement locations.
  • If we know significant problems exist in the US temperature monitoring network, what would we find looking at China?  Or Africa?  Or South America?  In the US and a few parts of Europe, we actually have a few temperature measurement points that were rural in 1900 and rural today.  But not one was measuring rural temps in these other continents 100 years ago.  All we have are temperature measurements in urban locations where we can only guess at how to adjust for the urbanization.  The problem in these locations, and why I say this is a low signal to noise ratio measurement, is that small percentage changes in our guesses for how much the urbanization correction should be make enormous changes (even to changing the sign) in historic temperature change measurements, as the sketch below illustrates.
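Here is that sign-flip illustrated with hypothetical numbers (nothing here is a real measurement; the point is only how sensitive the result is to the guess):

    # A raw urban trend minus an uncertain urbanization correction: small
    # changes in the correction guess flip the sign of the inferred trend.
    MEASURED_URBAN_TREND = 0.8   # C/century, hypothetical raw urban reading

    for correction_guess in (0.5, 0.7, 0.8, 0.9):   # C/century, guesses
        inferred = MEASURED_URBAN_TREND - correction_guess
        print(f"correction {correction_guess}C -> inferred trend {inferred:+.1f}C")
    # 0.5 -> +0.3C warming; 0.9 -> -0.1C cooling.  The sign turns on a guess.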

Here are my recommendations:

  1. NOAA and GISS both need to release their detailed algorithms and computer software code for adjusting and aggregating USHCN and global temperature data.  Period.  There can be no argument.  Folks at RealClimate.org who believe that all is well should be begging for this to happen to shut up the skeptics.  The only possible reason for not releasing this scientific information that was created by government employees with taxpayer money is if there is something to hide.
  2. The NOAA and GISS need to acknowledge that their assumptions of station quality in the USHCN network are too high, and that they need to incorporate actual documented station condition (as done at SurfaceStations.org) in their temperature aggregations and corrections.  In some cases, stations like Tucson need to just be thrown out of the USHCN.  Once the US is done, a similar effort needs to be undertaken on a global scale, and the effort needs to include people whose incentives and outlook are not driven by making temperatures read as high as possible.
  3. This is the easiest of all.  Someone needs to do empirical work (not simulated, not on the computer, but with real instruments) understanding how various temperature station placements affect measurements.  For example, how do the readings of an instrument in an open rural field compare to an identical instrument surrounded by asphalt a few miles away?  These results can be used for step #2 above.  This is cheap, simple research a couple of graduate students could do, but climatologists all seem focused on building computer models rather than actually doing science.
  4. Similar to #3, someone needs to do a definitive urban heat island study, to find out how much temperature readings are affected by urban heat, again to help correct in #2.  Again, I want real research here, with identical instruments placed at various locations and various radii from an urban center (not goofy proxies like temperature vs. wind speed -- that's some scientist who wants to get a result without ever leaving his computer terminal).  Most studies have shown the number to be large, but a couple of recent studies show smaller effects, though now these studies are under attack not just for sloppiness but outright fabrication.  This can't be that hard to study, if people were willing to actually go into the field and take measurements.  The problem is everyone is trying to do this study with available data rather than by gathering new data.

Postscript:  The RealClimate post says:

However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake.

If catastrophic man-made global warming theory is correct, then man faces a tremendous lose-lose.  Either shut down growth, send us back to the 19th century, making us all substantially poorer and locking a billion people in Asia into poverty they are on the verge of escaping, or face catastrophic and devastating changes in the planet's weather.

Now take two people.  One in his heart really wants this theory not to be true, and hopes we don't have to face this horrible lose-lose tradeoff.  The other has a deeply felt wish that this theory is true, and hopes man does face this horrible future.  Which person do you like better?  And recognize, RealClimate is holding up the latter as the only moral man. 

Update:  Don't miss Steven McIntyre's take on the whole thing.  And McIntyre responds to Hansen here.

Letter to Newsweek

Editors-

Oh, the delicious irony.

As a skeptic of catastrophic man-made global warming, I was disturbed to see that Newsweek in its August 13, 2007 issue (The Truth About Denial) had equated me with a Holocaust denier.  There are so many interesting scientific issues involved in climate change that it was flabbergasting to me that Newsweek would waste time on an extended ad hominem attack against one side in a scientific debate.  I was particularly amazed that Newsweek would accuse the side of the debate that is outspent 1000:1 of being tainted by money.  This is roughly equivalent to arguing that Mike Gravel's spending is corrupting the 2008 presidential election.

However, fate does indeed have a sense of humor.  Skeptics' efforts of the sort Newsweek derided just this week forced NASA-Goddard (GISS) to revise downward recent US temperature numbers due to a programming mistake that went unidentified for years, in part because NASA's taxpayer-paid researchers refuse to release their temperature adjustment and aggregation methodology to the public for scrutiny.  The problem was found by a chain of events that began with amateur volunteers and led ultimately to Steven McIntyre (he of the Michael Mann hockey stick debunking) calling foul.

The particular irony is that the person who is in charge of this database, and is responsible for the decision not to allow scientific scrutiny of his methodologies, is none other than James Hansen, whom Newsweek held up as the shining example of scientific objectivity in its article.  Newsweek should have been demanding that taxpayer-funded institutions like NASA open their research to full review, but instead Newsweek chose to argue that Mr. Hansen should be shielded from scrutiny.

Warren Meyer

Breaking News: Recent US Temperature Numbers Revised Downwards Today

This is really big news, and a fabulous example of why two-way scientific discourse is still valuable, in the same week that both Newsweek and Al Gore tried to make the case that climate skeptics were counter-productive and evil. 

Climate scientist Michael Mann (famous for the hockey stick chart) once made the statement that the 1990's were the warmest decade in a millennium and that "there is a 95 to 99% certainty that 1998 was the hottest year in the last one thousand years."  (By the way, Mann now denies he ever made this claim, though you can watch him say these exact words in the CBC documentary Global Warming: Doomsday Called Off.)

Well, it turns out, according to the NASA GISS database, that 1998 was not even the hottest year of the last century.  This is because many temperatures from recent decades that appeared to show substantial warming have been revised downwards.  Here is how that happened (if you want to skip the story, make sure to look at the numbers at the bottom).

One of the most cited and used historical surface temperature databases is that of NASA/Goddard's GISS.  This is not some weird skeptics site.  It is considered one of the premier world temperature data bases, and it is maintained by anthropogenic global warming true believers.  It has consistently shown more warming than any other data base, and is thus a favorite source for folks like Al Gore.  These GISS readings in the US rely mainly on the US Historical Climate Network (USHCN) which is a network of about 1000 weather stations taking temperatures, a number of which have been in place for over 100 years.

Frequent readers will know that I have been a participant in an effort led by Anthony Watts at SurfaceStations.org to photo-document these temperature stations as an aid to scientists in evaluating the measurement quality of each station.  The effort has been eye-opening, as it has uncovered many very poor instrument sitings that would bias temperature measurements upwards, as I found in Tucson and Watts has documented numerous times on his blog.

One photo on Watts's blog got people talking - a station in MN with a huge jump in temperature at about the same time some air conditioning units were installed nearby.  Some blamed the AC units; others disagreed, and argued that such a jump could not be from the air conditioners, since a lot of the jump happened with winter temperatures when the AC was dormant.  Steve McIntyre, the Canadian statistician who helped to expose massive holes in Michael Mann's hockey stick methodology, looked into it.  After some poking around, he began to suspect that the GISS data base had a year 2000 bug in one of their data adjustments.

One of the interesting aspects of these temperature data bases is that they do not just use the raw temperature measurements from each station.  Both the NOAA (which maintains the USHCN stations) and the GISS apply many layers of adjustments, which I discussed here.  One of the purposes of Watts's project is to help educate climate scientists that many of the adjustments they make to the data back in the office do not necessarily represent the true condition of the temperature stations.  In particular, GISS adjustments imply instrument sitings are in more natural settings today than they were in, say, 1905 - an outrageous assumption on its face that is totally in conflict with the condition of the stations in Watts's data base.  Basically, surface temperature measurements have a low signal-to-noise ratio, and climate scientists have been overly casual about how they try to tease out the signal.

Anyway, McIntyre suspected that one of these adjustments had a bug, and had had this bug for years.  Unfortunately, it was hard to prove.  Why?  Well, that highlights one of the great travesties of climate science.  Government scientists using taxpayer money to develop the GISS temperature data base refuse to publicly release their temperature adjustment algorithms or software (in much the same way Michael Mann refused to release the details of his hockey stick methodology for scrutiny).  Using the data alone, though, McIntyre made a compelling case that the GISS data base had systematic discontinuities that bore all the hallmarks of a software bug.
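
To make the detection idea concrete, here is a minimal sketch, under my own assumptions (the station series and the +0.15C offset are invented, and this is not McIntyre's actual method), of how one might flag a processing discontinuity at a suspected date by comparing two versions of the same station record:

    # Hypothetical sketch: flag a step discontinuity at a candidate year by
    # comparing the mean offset between two versions of a station series
    # before and after that year.  All data below is invented.

    def step_at(candidate_year, years, series_a, series_b):
        """Change in mean offset (series_a - series_b) across candidate_year.
        A large jump suggests a discontinuity introduced in processing."""
        diffs = [a - b for a, b in zip(series_a, series_b)]
        before = [d for y, d in zip(years, diffs) if y < candidate_year]
        after = [d for y, d in zip(years, diffs) if y >= candidate_year]
        mean = lambda xs: sum(xs) / len(xs)
        return mean(after) - mean(before)

    # Toy data: the "adjusted" series picks up a spurious +0.15C after 2000.
    years = list(range(1990, 2007))
    raw = [0.005 * (y - 1990) for y in years]
    adjusted = [r + (0.15 if y >= 2000 else 0.0) for y, r in zip(years, raw)]

    print("step across 2000: %+.2f C" % step_at(2000, years, adjusted, raw))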

Today, the GISS admitted that McIntyre was correct, and has started to republish its data with the bug fixed.  And the numbers are changing a lot.  Before today, GISS would have said 1998 was the hottest year on record (Mann, remember, said with up to 99% certainty it was the hottest year in 1000 years) and that 2006 was the second hottest.  Well, no more.  Here are the new rankings for the 10 hottest years in the US, starting with #1:

1934, 1998, 1921, 2006, 1931, 1999, 1953, 1990, 1938, 1939

Three of the top 10 are in the last decade.  Four of the top 10 are in the 1930's, before either the IPCC or the GISS believes man had any discernible impact on temperatures.  Here is the chart for all the years in the data base:
[Chart: revised GISS US temperature history, all years]

There are a number of things we need to remember:

  • This is not the end but the beginning of the total reexamination that needs to occur of the USHCN and GISS data bases.  Poor corrections for site location and urbanization are still huge issues that bias recent numbers upwards.  The GISS also has issues with how it aggregates multiple stations, apparently averaging known good stations with bad stations, a process that by no means eliminates biases.  As a first step, we must demand that NOAA and GISS release their methodology and computer algorithms to the general public for detailed scrutiny by other scientists.
  • The GISS today makes it clear that these adjustments only affect US data and do not change any of their conclusions about worldwide data.  But consider this:  For all of its faults, the US has the most robust historical climate network in the world.  If we have these problems, what would we find in the data from, say, China?  And the US and parts of Europe are the only major parts of the world that actually have 100 years of data at rural locations.  No one was measuring temperature reliably in rural China or Paraguay or the Congo in 1900.  That means much of the world is relying on urban temperature measurement points that have substantial biases from urban heat.
  • All of these necessary revisions to surface temperatures will likely not make warming trends go away completely.  What they may do is bring the warming down to match the much lower satellite-measured warming numbers we have, and make current warming look more like past natural warming trends (e.g. the warming early in the 20th century) rather than a catastrophe created by man.  In my global warming book, I argue that future man-made warming probably will exist, but will be more like a half to one degree over the coming decades than the media-hyped numbers that are ten times higher.

So how is this possible?  How can the global warming numbers used in critical policy decisions and scientific models be so wrong, with so basic an error?  And how can this error have gone undetected for the better part of a decade?  The answer to the latter question is that the global warming and climate community resists scrutiny.  This week's Newsweek article and statements by Al Gore are basically aimed at suppressing any scientific criticism or challenge to global warming research.  That is why NASA can keep its temperature algorithms secret, with no outside complaint, something that would cause howls of protest in any other area of scientific inquiry.

As to the first question, I will leave the explanation to Mr. McIntyre:

While acolytes may call these guys "professionals", the process of data adjustment is really a matter of statistics and even accounting. In these fields, Hansen and Mann are not "professionals" - Mann admitted this to the NAS panel explaining that he was "not a statistician". As someone who has read their works closely, I do not regard any of these people as "professional". Much of their reluctance to provide source code for their methodology arises, in my opinion, because the methods are essentially trivial and they derive a certain satisfaction out of making things appear more complicated than they are, a little like the Wizard of Oz. And like the Wizard of Oz, they are not necessarily bad men, just not very good wizards.

For more, please see my Guide to Anthropogenic Global Warming or, if you have less time, my 60-second argument for why one should be skeptical of catastrophic man-made global warming theory.

Update:
Nothing new - just thinking about this more, I cannot get over the irony that in the same week Newsweek made the case that climate science is settled and there is no room for skepticism, skeptics discovered a gaping error in the global warming numbers.

Update #2:  I know people get upset when we criticize scientists.  I get a lot of "they are not biased, they just made a mistake."  Fine.  But I have zero sympathy for a group of scientists who refuse to let other scientists review their methodology, and then find that they have been making a dumb methodology mistake for years that has corrupted the data of nearly every climate study in the last decade.

Update #3:  I labeled this "breaking news," but don't expect to see it in the NY Times anytime soon.  We all know this is one of those asymmetric story lines: if the opposite had occurred (i.e. things found to be even worse/warmer than thought) it would be on the front page immediately, but a lowered threat will never make the news.

Oh, and by the way: this is GOOD news, though many won't treat it that way.  I understand this point fairly well because, in a somewhat parallel situation, I seem to be the last anti-war guy who treats progress in Iraq as good news.

Update #4: I should have mentioned that the hero of the Newsweek story is catastrophic man-made global warming cheerleader James Hansen, who runs the GISS and is most responsible for the database in question as well as the GISS policy not to release its temperature aggregation and adjustment methodologies.  From IBD, via CNN Money:

Newsweek portrays James Hansen, director of NASA's Goddard Institute for Space Studies, as untainted by corporate bribery.

Hansen was once profiled on CBS' "60 Minutes" as the "world's leading researcher on global warming." Not mentioned by Newsweek was that Hansen had acted as a consultant to Al Gore's slide-show presentations on global warming, that he had endorsed John Kerry for president, and had received a $250,000 grant from the foundation headed by Teresa Heinz Kerry.

Update #5: My letter to the editor at Newsweek.  For those worried that this is some weird skeptic's fevered dream, Hansen and company kind of sort of recognize the error in the first paragraph under background here.  Their US temperature chart with what appears to be the revised data is here.

Update #6: Several posts are calling this a "scandal."  It is not a scandal.  It is a mistake from which we should draw two lessons:

  1. We always need to have people of opposing opinions looking at a problem.  Man-made global warming hawks expected to see a lot of warming after the year 2000, so they never questioned the numbers.  It took folks with different hypotheses about climate to see the jump in the numbers for what it was - a programming error.
  2. Climate scientists are going to have to get over their need to hold their adjustments, formulas, algorithms and software secret.  It's just not how science is done.  James Hansen saying "trust me, the numbers are right, I don't need to tell you how I got them" reminds me of the mathematician Fermat saying he had a proof of his last theorem, but it wouldn't fit in the margin.  How many man-hours of genius mathematicians were wasted because Fermat refused to show his proof (which was most likely wrong, given how the theorem was eventually proved)?

Final Update:  Some parting thoughts, and recommendations, here.

Food Miles Stupidity

Via the New York Times:

THE term "food miles" -- how far food has traveled before you buy it -- has entered the enlightened lexicon.

Which should tell you all you need to know about the "enlightened."

There are many good reasons for eating local -- freshness, purity, taste, community cohesion and preserving open space -- but none of these benefits compares to the much-touted claim that eating local reduces fossil fuel consumption. In this respect eating local joins recycling, biking to work and driving a hybrid as a realistic way that we can, as individuals, shrink our carbon footprint and be good stewards of the environment.

Actually, most recycling, with the exception of aluminum (which takes tons of electricity to manufacture in the first place), does nothing to reduce our carbon footprint.  And I must say that I often enjoy buying from farmers markets and such.  But does "food miles" mean anything?  And should we really care?  Well, here is an early hint:  The ultimate reduction in food miles, the big winner on this enlightened metric, is subsistence farming.  Anyone ready to go there yet?  These are the economics Gandhi promoted in India, and they set that country back generations.

Well, let's go back to economics 101.  The reason we do not all grow our own food, make our own clothes, etc. is that the global division of labor allows food and clothing and everything else to be produced more efficiently by people who specialize and invest in those activities than by all of us alone in our homes.  So instead of each of us growing our own corn, in whatever quality soil we happen to have around our house, some guy in Iowa grows it for thousands of us, and because he specializes and grows a lot, he invests in equipment and knowledge to do it better every year.  The cost of fuel to move the corn or corn products from Iowa to Phoenix is trivial compared to the efficiency advantage that guy in Iowa has over me trying to grow corn in my back yard.  Back to the New York Times:

On its face, the connection between lowering food miles and decreasing greenhouse gas emissions is a no-brainer.

Sure, if you look at complex systems as single-variable linear equations.  Those of us who don't look at them that way immediately treated the food-miles concept as suspect.  It turns out, for good reason:

It all depends on how you wield the carbon calculator. Instead of measuring a product's carbon footprint through food miles alone, the Lincoln University scientists expanded their equations to include other energy-consuming aspects of production -- what economists call "factor inputs and externalities" -- like water use, harvesting techniques, fertilizer outlays, renewable energy applications, means of transportation (and the kind of fuel used), the amount of carbon dioxide absorbed during photosynthesis, disposal of packaging, storage procedures and dozens of other cultivation inputs.

Incorporating these measurements into their assessments, scientists reached surprising conclusions. Most notably, they found that lamb raised on New Zealand's clover-choked pastures and shipped 11,000 miles by boat to Britain produced 1,520 pounds of carbon dioxide emissions per ton while British lamb produced 6,280 pounds of carbon dioxide per ton, in part because poorer British pastures force farmers to use feed. In other words, it is four times more energy-efficient for Londoners to buy lamb imported from the other side of the world than to buy it from a producer in their backyard. Similar figures were found for dairy products and fruit.

All I can say is just how frightening it is that the paper of record could find this result "surprising."  The price mechanism does a pretty good job of sorting this stuff out.  If fuel prices rise a lot, then agriculture might move more local, but probably not by much.  The economies of scale and location just dwarf the price of fuel.

By the way, one reason this food-mile thing is not going away, no matter how stupid it is, has to do with the history of the global warming movement.  Remember all those anti-globalization folks who rampaged in Seattle?  Where did they all go?  Well, they did not get sensible all of a sudden.  They joined the environmental movement.  Anti-corporate and anti-globalization activists jumped into the global warming movement, seeing in it a vehicle to achieve their aims of rolling back economic growth, global trade, and capitalism in general -- which is one reason a core group of folks in the catastrophic man-made global warming camp react so poorly to any criticism of the science: they need and want it to be true.  Food miles appeals to their disdain for world trade, and global warming and carbon footprints are just a convenient excuse for trying to sell the concept to other people.

A little while back, I posted a similar finding in regards to packaging, that is worth repeating here for comparison.

Contrary to current wisdom, packaging can reduce total rubbish produced. The average household in the United States generates one third less trash each year than does the average household in Mexico, partly because packaging reduces breakage and food waste. Turning a live chicken into a meal creates food waste. When chickens are processed commercially, the waste goes into marketable products (such as pet food), instead of into a landfill. Commercial processing of 1,000 chickens requires about 17 pounds of packaging, but it also recycles at least 2,000 pounds of by-products.

More victories for the worldwide division of labor.  So has the NY Times seen the light and accepted the benefits of capitalism?  Of course not.  With the New Zealand example in hand, the writer ... suggests we need more state action to compel similar outcomes.

Given these problems, wouldn't it make more sense to stop obsessing over food miles and work to strengthen comparative geographical advantages? And what if we did this while streamlining transportation services according to fuel-efficient standards? Shouldn't we create development incentives for regional nodes of food production that can provide sustainable produce for the less sustainable parts of the nation and the world as a whole? Might it be more logical to conceptualize a hub-and-spoke system of food production and distribution, with the hubs in a food system's naturally fertile hot spots and the spokes, which travel through the arid zones, connecting them while using hybrid engines and alternative sources of energy?

Does anyone even know what this crap means?  You gotta love technocratic statists -- they just never give up.  Every one of them thinks he is smarter than the sum of billions of individual minds working together of their own free will to create our current world production patterns.

Postscript: There is one thing the government could do tomorrow to promote even more worldwide agricultural efficiency:  Drop subsidies and protections on agriculture.   You would immediately get more of this kind of activity, for example with Latin America and the Caribbean supplying more/all of the US's sugar and other parts of Asia providing more/all of Japan's rice.

Storm Frequency

I already discussed Newsweek's happy little ad hominem attack on climate skeptics here.  However, as promised, I wanted to talk about the actual, you know, science for a bit, starting from the Newsweek author's throwaway statement, which she felt required no proof, that "The frequency of Atlantic hurricanes has already doubled in the last century."

This is really a very interesting topic, much more interesting than following $10,000 of skeptics' money around in a global warming industry spending billions on research.  One would think the answer to this hurricane question is simple.  Can we just look up the numbers?  Well, let's start there.  Total number of Atlantic hurricanes from the HURDAT data base, for the first and last halves of the last century:

1905-1955 = 366
1956-2006 = 458

First, you can see nothing like a doubling.  This is an increase of 25%.  So already, we see that in an effort to discredit skeptics for fooling America about the facts, Newsweek threw out a whopper that absolutely no one in climate science, warming skeptic or true believer, would agree with.
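
For what it's worth, the arithmetic on those two totals is easy to check -- a trivial sketch using only the HURDAT figures quoted above:

    # Checking the increase implied by the HURDAT totals quoted above.
    first_half, second_half = 366, 458      # 1905-1955 vs. 1956-2006
    print("increase: %.0f%%" % (100.0 * (second_half - first_half) / first_half))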

But let's go further, because there is much more to the story.  After all, 25% is a lot, and could be damning in and of itself.  But there are problems with this data.  If you think about storm tracking technology in 1905 vs. 2005, you might see the problem.  To make it really clear, I want to talk about tornadoes for a moment.

In An Inconvenient Truth, Al Gore and company said that global warming was increasing the number of tornadoes in the US.  He claimed 2004 was the highest year ever for tornadoes in the US.  In his PowerPoint slide deck (on which the movie was based) he sometimes uses this chart (from the NOAA):

Whoa, that's scary.  Any moron can see there is a trend there.  It's like a silver bullet against skeptics or something.  But wait.  Hasn't tornado detection technology changed over the last 50 years?  Today, we have Doppler radar, so we can detect even small F1 tornadoes, even if no one on the ground actually spots them (which happens fairly often).  But how did they measure smaller tornadoes in 1955 if no one spotted them?  Answer:  They didn't.  In effect, this graph is comparing apples and oranges.  It is comparing all the tornadoes spotted by human eye in 1955 with all the tornadoes detected by Doppler radar in 2000.  The NOAA tries to make this problem clear on their web site.

With increased national Doppler radar coverage, increasing population, and greater attention to tornado reporting, there has been an increase in the number of tornado reports over the past several decades. This can create a misleading appearance of an increasing trend in tornado frequency. To better understand the true variability and trend in tornado frequency in the US, the total number of strong to violent tornadoes (F3 to F5 category on the Fujita scale) can be analyzed. These are the tornadoes that would have likely been reported even during the decades before Doppler radar use became widespread and practices resulted in increasing tornado reports. The bar chart below indicates there has been little trend in the strongest tornadoes over the past 55 years.

So it turns out there is a decent way to correct for this.  We don't think that folks in 1955 were missing many of the larger F3 to F5 tornadoes, so comparing 1955 and 2000 data for these larger tornadoes should be more apples to apples (via NOAA).

Well, that certainly is different (note 2004 in particular, given the movie claim).  No upward trend at all when you get the data right.  I wonder if Al Gore knows this?  I am sure he is anxious to set the record straight.
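
Mechanically, the NOAA-style correction is nothing more than a filter applied before counting.  A minimal sketch, with invented records (these are not real NOAA data):

    # Sketch of the NOAA-style correction: restrict tornado reports to strong
    # storms (F3 and up), which would likely have been spotted even before
    # Doppler radar, then count by year.  The records below are invented.

    from collections import Counter

    reports = [
        # (year, F-scale) -- made-up sample records
        (1955, 1), (1955, 4), (1956, 3), (2000, 0),
        (2000, 1), (2000, 5), (2001, 0), (2001, 3),
    ]

    strong_by_year = Counter(year for year, f in reports if f >= 3)
    for year in sorted(strong_by_year):
        print(year, strong_by_year[year])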

OK, back to hurricanes.  Generally, whether in 1905 or 2005, we know if a hurricane hits land in the US.  However, what about all the hurricanes that don't hit land or hit land in some undeveloped area?  Might it be that we can detect these better in 2006 with satellites than we could in 1905?  Just like the tornadoes?

Well, one metric we have is US landfall.  Here is that graph (data from the National Weather Service -- I have just extrapolated the current decade based on the first several years).

Not much of a trend there, though the current decade is high, in part because the extrapolation does not incorporate the light 2006 season or the light-so-far 2007 season.  The second half of the 20th century is actually lower than the first half, and certainly not "twice as large."  But again, this is only a proxy.  There may be reasons more storms are formed but don't make landfall (though I would argue most Americans only care about the latter).
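
To be clear about what that extrapolation does, it is the naive linear one -- a sketch with a made-up count, not the actual NWS figures:

    # Naive partial-decade extrapolation: scale the count so far by the
    # fraction of the decade elapsed.  The landfall count is hypothetical.

    def extrapolate_decade(count_so_far, seasons_elapsed, decade_length=10):
        return count_so_far * decade_length / float(seasons_elapsed)

    # e.g. a hypothetical 14 US landfalls in the first 7 seasons of a decade
    print(extrapolate_decade(14, 7))   # -> 20.0

This is exactly why a light 2006 and a light 2007 would pull the extrapolated decade total back down.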

But what about hurricane damages?  Everyone knows that the dollar damages from hurricanes are way up.  Well, yes.  But the amount of valuable real estate on the United States' coast is also way up.  Roger Pielke and Chris Landsea (you gotta love a guy studying hurricane strikes named Landsea) took a shot at correcting hurricane damages for inflation and the increased real estate value on the coasts.  This is what they got:
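
The mechanics of a normalization in their spirit are simple to sketch.  The multipliers below are placeholders of my own, not Pielke and Landsea's published factors:

    # Sketch of damage normalization: scale a historical loss by inflation,
    # real wealth per capita, and coastal population growth since the storm.
    # All three factors below are invented placeholders.

    def normalize_damage(damage, inflation, wealth_per_capita, population):
        """Express a historical storm loss in present-day terms."""
        return damage * inflation * wealth_per_capita * population

    # Hypothetical: a $100M storm long ago, prices up 10x, real wealth per
    # capita up 2.5x, and coastal population up 6x since then.
    print("$%.1fB" % (normalize_damage(100e6, 10, 2.5, 6) / 1e9))   # -> $15.0B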

Anyway, back to our very first data.  Several scientists are trying to correct the data for missing storms, particularly in earlier periods.  There is an active debate about these corrections that I won't get into, but suffice it to say the difference between the first half of the 20th century and the latter half in terms of Atlantic hurricane formations is probably either nothing or a percentage increase in the single digits -- nowhere near the 100% increase reported by Newsweek.

Debate continues, because there was a spike in hurricanes from 1995-2005 over the previous 20 years.  Is this anomalous, or is it similar to the spike that occurred in the thirties and forties?  No one is sure, but isn't this a lot more interesting than figuring out how the least funded side of a debate gets their money?  And by the way, congratulations again to MSM fact-checkers.

My layman's guide to skepticism of catastrophic man-made global warming is here.  A shorter, 60-second version of the best climate skeptic's arguments is here.

Update:  If the author bothered to have a source for her statement, it would probably be Holland and Webster, a recent study that pretty much everyone disagrees with and many think was sloppy.  And even they didn't say activity had doubled.  Note that the only way to get a doubling is to cherry-pick a low decade in the first half of the century and a high decade in the last half and compare just those two decades -- you can see this in the third paragraph of the Scientific American article.  This study bears all the hallmarks -- cherry-picking data, ignoring scientific consensus, massaging results to fit an agenda -- that the Newsweek authors were accusing skeptics of.

Update #2:  The best metric for hurricane activity is not strikes or counts but accumulated cyclone energy (ACE).  Here is the ACE trend, as measured by Florida State.  As you can see, there is no upward trend.

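ACE has a simple standard definition: one ten-thousandth of the sum of the squares of a storm's 6-hourly maximum sustained winds (in knots) while it is at tropical-storm strength or better.  A minimal sketch with an invented wind track:

    # Accumulated cyclone energy for a single storm: 1e-4 times the sum of
    # squared 6-hourly max sustained winds (knots) while at or above
    # tropical-storm strength (34 kt).  The track below is invented.

    def ace(six_hourly_winds_kt):
        return 1e-4 * sum(v ** 2 for v in six_hourly_winds_kt if v >= 34)

    track = [30, 35, 45, 60, 80, 95, 90, 70, 50, 40, 30]   # hypothetical storm
    print("ACE: %.1f" % ace(track))   # -> 3.9

A season's ACE is just this sum over all its storms, which is why it captures intensity and duration rather than a raw storm count.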

I Was a Teenage Warming-Denying Werewolf

Update:  My post on breaking news about downward revisions to US temperature numbers is here.

Well, I finally read Newsweek's long ad hominem attack on climate skeptics in the recent issue.  It is basically yet another take on the global-warming-skeptics-are-all-funded-by-Exxon meme.  The authors breathlessly "follow the money" to show how certain scientists have taken as much as $10,000 (gasp) from fossil-fuel related companies to publish skeptical work.  Further, despite years of hand-wringing about using emotionally charged words like "terrorist" in their news articles, Newsweek happily latches onto "denier" as a label for skeptics, a word chosen to parallel the term "Holocaust denier" -- nope, no emotional content there.

I'm not even going to get into it again, except to make the same observation I have made in the past:  Arguing that the global warming debate is "tainted" by money from skeptics is like saying the 2008 presidential election is tainted by Mike Gravel's spending.  Money from skeptics is so trivial, by orders of magnitude, compared to spending by catastrophic warming believers that it is absolutely amazing folks like Newsweek could feel so threatened by it.  In my Layman's Guide To Man-Made Global Warming Skepticism, I estimated skeptics were being outspent 1000:1.  Senator Inhofe's office estimated skeptics were being outspent $50 billion to $19 million; I have no way to check his figures, but that is about the same order of magnitude as my estimate.

Given this skew in spending, and the fact that most of the major media accept catastrophic man-made global warming as a given, this was incredible:

Look for the next round of debate to center on what Americans are willing to pay and do to stave off the worst of global warming. So far the answer seems to be, not much. The NEWSWEEK Poll finds less than half in favor of requiring high-mileage cars or energy-efficient appliances and buildings....

A new NEWSWEEK Poll finds that the influence of the denial machine remains strong. Although the figure is less than in earlier polls, 39 percent of those asked say there is "a lot of disagreement among climate scientists" on the basic question of whether the planet is warming; 42 percent say there is a lot of disagreement that human activities are a major cause of global warming. Only 46 percent say the greenhouse effect is being felt today.

It has to be the "denial machine" at fault, right?  It can't possibly be because Americans think for themselves, or because they tend to reject micro-managing government regulations.  The author sounds so much like an exasperated parent:  "I kept telling my kids what's good for them and they just don't listen."

Yes, I could easily turn the tables here, and talk about the financial incentives in academia for producing headline-grabbing results, or discuss the political motivations behind Marxist groups who have latched onto man-made global warming for their own ends.  But this does not really solve the interesting science questions, and it ignores the fact that many catastrophic climate change believers are well meaning and thoughtful, just as many skeptics are.  The article did not even take the opportunity to thoughtfully discuss the range of skeptics' positions.  Some reject warming entirely, while others, like myself, recognize the impact man can have on climate, but see man's impact being well below catastrophic levels (explained here in 60 seconds).  Anyway, I don't have the energy to fisk it piece by piece, but Noel Sheppard does.

For those of you who are interested, I have a follow-up post on the science itself, which is so much more interesting than this garbage.  I use as a starting point the Newsweek author's throwaway statement that she felt required no proof, "The frequency of Atlantic hurricanes has already doubled in the last century."  (Hint:  the answer turns out to be closer to +5% than +100%.)

Adjusting Data to Get the "Right" Answer

On several occasions, I have discussed how much of the reported temperature increases worldwide in the last century are actually the results of adjustments to the actual gauge measurements.  These upward adjustments in the numbers by climate scientists actually dwarf measured increases.

Thanks to reader Scott Brooks, here is another such example except this time with measurement of sea level increases.  Dr. Nils-Axel Morner is the head of the Paleogeophysics and Geodynamics department at Stockholm University in Sweden.  He has studied sea-level changes for 35 years (emphasis added).

Another way of looking at what is going on is the tide gauge. Tide gauging is very complicated, because it gives different answers for wherever you are in the world. But we have to rely on geology when we interpret it. So, for example, those people in the IPCC [Intergovernmental Panel on Climate Change], choose Hong Kong, which has six tide gauges, and they choose the record of one, which gives 2.3 mm per year rise of sea level. Every geologist knows that that is a subsiding area. It's the compaction of sediment; it is the only record which you shouldn't use. And if that figure [for sea level rise] is correct, then Holland would not be subsiding, it would be uplifting.

And that is just ridiculous. Not even ignorance could be responsible for a thing like that. So tide gauges, you have to treat very, very carefully. Now, back to satellite altimetry, which shows the water, not just the coasts, but in the whole of the ocean. And you measure it by satellite. From 1992 to 2002, [the graph of the sea level] was a straight line, variability along a straight line, but absolutely no trend whatsoever. We could see those spikes: a very rapid rise, but then in half a year, they fall back again. But absolutely no trend, and to have a sea-level rise, you need a trend.

Then, in 2003, the same data set, which in their [IPCC's] publications, in their website, was a straight line -- suddenly it changed, and showed a very strong line of uplift, 2.3 mm per year, the same as from the tide gauge. And that didn't look so nice. It looked as though they had recorded something; but they hadn't recorded anything. It was the original one which they had suddenly twisted up, because they entered a correction factor, which they took from the tide gauge. So it was not a measured thing, but a figure introduced from outside. I accused them of this at the Academy of Sciences in Moscow -- I said you have introduced factors from outside; it's not a measurement. It looks like it is measured from the satellite, but you don't say what really happened. And they answered, that we had to do it, because otherwise we would not have gotten any trend!

That is terrible! As a matter of fact, it is a falsification of the data set. Why? Because they know the answer. And there you come to the point: They know the answer; the rest of us, we are searching for the answer. Because we are field geologists; they are computer scientists. So all this talk that sea level is rising, this stems from the computer modeling, not from observations. The observations don't find it!

I have been the expert reviewer for the IPCC, both in 2000 and last year. The first time I read it, I was exceptionally surprised. First of all, it had 22 authors, but none of them -- none -- were sea-level specialists. They were given this mission, because they promised to answer the right thing. Again, it was a computer issue. This is the typical thing: The meteorological community works with computers, simple computers.

Geologists don't do that! We go out in the field and observe, and then we can try to make a model with computerization; but it's not the first thing.

I am working on the next version of my layman's guide to skeptics' arguments against catastrophic man-made global warming, which you can find here.

Ethanol Gets Slammed

Finally, the blinders are coming off and the media is starting to wake up to the absolute travesty that is Congress's promotion of ethanol.  From Rolling Stone (!), emphasis added.

This is not just hype -- it's dangerous, delusional bullshit.  Ethanol doesn't burn cleaner than gasoline, nor is it cheaper. Our current ethanol production represents only 3.5 percent of our gasoline consumption -- yet it consumes twenty percent of the entire U.S. corn crop, causing the price of corn to double in the last two years and raising the threat of hunger in the Third World. And the increasing acreage devoted to corn for ethanol means less land for other staple crops, giving farmers in South America an incentive to carve fields out of tropical forests that help to cool the planet and stave off global warming.

So why bother? Because the whole point of corn ethanol is not to solve America's energy crisis, but to generate one of the great political boondoggles of our time. Corn is already the most subsidized crop in America, raking in a total of $51 billion in federal handouts between 1995 and 2005 -- twice as much as wheat subsidies and four times as much as soybeans. Ethanol itself is propped up by hefty subsidies, including a fifty-one-cent-per-gallon tax allowance for refiners. And a study by the International Institute for Sustainable Development found that ethanol subsidies amount to as much as $1.38 per gallon -- about half of ethanol's wholesale market price.

Hurrah!  Unfortunately, I fear we may be waking up too late.  Already, billions of dollars are being invested by politically connected companies on the promise of subsidies and promotion of ethanol extending out to the end of the universe.  At this point, ethanol may be as entrenched as agriculture subsidies, the education department, and Depression-era alcohol regulation.  The government has no problem reneging on contracts with oil companies, but God forbid anyone deny Archer Daniels Midland the right to infinite subsidies.

More Thoughts on Historic Temperature Adjustments

A few posts back, I showed how nearly 85% of the reported warming in the US over the last century is actually due to adjustments and added fudge-factors by scientists rather than actual measured higher temperatures.  I want to discuss some further analysis Steve McIntyre has done on these adjustments, but first I want to offer a brief analogy.

Let's say you had two compasses to help you find north, but the compasses are reading incorrectly.  After some investigation, you find that one of the compasses is located next to a strong magnet, which you have good reason to believe is strongly biasing that compass's readings.  In response, would you

  1. Average the results of the two compasses and use this mean to guide you, or
  2. Ignore the output of the poorly sited compass and rely solely on the other unbiased compass?

Most of us would quite rationally choose #2.  However, Steve McIntyre shows us a situation involving two temperature stations in the USHCN network in which government researchers apparently have gone with solution #1.  Here is the situation:

He compares the USHCN station at the Grand Canyon (which appears to be a good rural setting) with the Tucson USHCN station I documented here, located in a parking lot in the center of a rapidly growing million-person city.  Unsurprisingly, the Tucson data shows lots of warming and the Grand Canyon data shows none.  So how might you correct the Tucson and Grand Canyon data, assuming they should be seeing about the same amount of warming?  Would you average them, effectively adjusting the two temperature readings towards each other, or would you assume the Grand Canyon data is cleaner with fewer biases and adjust Tucson only?  Is there anyone who would not choose the second option, as with the compasses?

The GISS data set, created by NASA's Goddard Institute for Space Studies, takes the USHCN data set and somehow uses nearby stations to correct for anomalous stations.  I say somehow because, incredibly, these government scientists, whose research is funded by taxpayers and is being used to make major policy decisions, refuse to release their algorithms or methodology details publicly.  They keep it all secret!  Their adjustments are a big black box that none of us are allowed to look into (and remember, these adjustments account for the vast majority of reported warming in the last century).

We can, however, reverse engineer some of these adjustments, and McIntyre does.  What he finds is that the GISS appears to be averaging the good and bad compass, rather than throwing out or adjusting only the biased reading.  You can see this below.  First, here are the USHCN data for these two stations with only the Time of Observation adjustment made (more on what these adjustments are in this article).
[Chart: Grand Canyon vs. Tucson USHCN temperatures, time-of-observation adjustment only]

As I said above, no real surprise - little warming out in undeveloped nature, lots of warming in a large and rapidly growing modern city.  Now, here is the same data after the GISS has adjusted it:

[Chart: the same two stations after GISS adjustments]

You can see that Tucson has been adjusted down a degree or two, but Grand Canyon has been adjusted up a degree or two (with the earlier mid-century spike adjusted down).  OK, so it makes sense that Tucson has been adjusted down, though there is a very good argument to be made that it should have been adjusted down more, say by at least 3 degrees**.  But why does the Grand Canyon need to be adjusted up by about a degree and a half?  What is biasing it colder by 1.5 degrees, which is a lot?  The answer:  Nothing.  The explanation:  Obviously, the GISS is doing some sort of averaging, which is bringing the Grand Canyon and Tucson from each end closer to a mean.

This is clearly wrong, like averaging the two compasses.  You don't average a measurement known to be of good quality with one known to be biased.  The Grand Canyon should be held about the same, and Tucson adjusted down even more toward it, or else thrown out.  Let's look at two cases.  In one, we will use the GISS approach to combine these two stations -- this adds 1.5 degrees to Grand Canyon and subtracts 1.5 degrees from Tucson.  In the second, we will take an approach that applies all of the adjustment to just the biased station, Tucson -- this adds 0 degrees to Grand Canyon and subtracts 3 degrees from Tucson.  The first approach, used by the GISS, results in a mean warming across these two stations that is 1.5 degrees higher than the more logical second approach.  No wonder the GISS produces the highest historical global warming estimates of any source!  Steve McIntyre has much more.
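
To make the two cases concrete, here is a minimal sketch using illustrative round numbers (0C of trend for Grand Canyon and +3C for Tucson, assumed to be all siting bias -- my assumptions, not the actual station figures):

    # Two correction strategies for a clean station and a biased station.
    # Trends are illustrative round numbers, not actual station data.

    grand_canyon = 0.0   # clean rural station: no warming
    tucson = 3.0         # urban station: +3C, assumed to be all siting bias

    # GISS-style blending (as read from the charts): pull both toward a mean.
    blended_mean = ((grand_canyon + 1.5) + (tucson - 1.5)) / 2

    # Bias-only correction: hold the clean station, adjust only Tucson.
    corrected_mean = (grand_canyon + (tucson - 3.0)) / 2

    print("blended mean trend:        %+.1f C" % blended_mean)    # +1.5
    print("bias-corrected mean trend: %+.1f C" % corrected_mean)  # +0.0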

** I got to three degrees by applying all of the adjustments for Grand Canyon and Tucson to Tucson alone.  Here is another way to get to about this amount.  We know from studies that urban heat islands can add 8-10 degrees to nighttime urban temperatures over surrounding undeveloped land.  Assuming no daytime effect, which is conservative, we might conclude that 8-10 degrees at night adds about 3 degrees to the entire 24-hour average (roughly 9 degrees over the eight coolest hours works out to 9 x 8 / 24 = 3 degrees on the daily mean).

Postscript: Steve McIntyre comments (bold added):

These adjustments are supposed to adjust for station moves - the procedure is described in Karl and Williams 1988 [check], but, like so many climate recipes, is a complicated statistical procedure that is not based on statistical procedures known off the island. (That's not to say that the procedures are necessarily wrong, just that the properties of the procedure are not known to statistical civilization.) When I see this particular outcome of the Karl methodology, my impression is that, net of the pea moving under the thimble, the Grand Canyon values are being blended up and the Tucson values are being blended down. So that while the methodology purports to adjust for station moves, I'm not convinced that the methodology can successfully estimate ex post the impact of numerous station moves and my guess is that it ends up constructing a kind of blended average.

LOL.  McIntyre, by the way, is the same gentleman who helped call foul on the Mann hockey stick for bad statistical procedure.

An Interesting Source of Man-Made Global Warming

The US Historical Climate Network (USHCN) reports about a 0.6C temperature increase in the lower 48 states since about 1940.  There are two steps to reporting these historic temperature numbers.  First, actual measurements are taken.  Second, adjustments are made after the fact by scientists to the data.  Would you like to guess how much of the 0.6C temperature rise is from actual measured temperature increases and how much is due to adjustments of various levels of arbitrariness?  Here it is, for the period from 1940 to present in the US:

Actual Measured Temperature Increase: 0.1C
Adjustments and Fudge Factors: 0.5C
Total Reported Warming: 0.6C

Yes, that is correct.  Nearly all the reported warming in the USHCN data base, which is used for nearly all global warming studies and models, is from human-added fudge factors, guesstimates, and corrections.

I know what you are thinking - this is some weird skeptic's urban legend.  Well, actually it comes right from the NOAA web page which describes how they maintain the USHCN data set.  Below is the key chart from that site showing the sum of all the plug factors and corrections they add to the raw USHCN measurements:
[Chart: net USHCN adjustments over time, from the NOAA]
I hope you can see the significance.  Before we get into whether these measurements are right or wrong or accurate or guesses, it is very useful to understand that almost all the reported warming in the US over the last 70 years is attributable to the plug figures and corrections a few government scientists add to the data in the back room.  It kind of reduces one's confidence, does it not, in the basic conclusion about catastrophic warming?

Anyway, let's look at the specific adjustments.  The lines in the chart below should add up to the overall adjustment line in the chart above.
[Chart: individual USHCN adjustment components, from the NOAA]

  • Black line is a time of observation adjustment, adding about 0.3C since 1940
  • Light Blue line is a missing data adjustment that does not affect the data much since 1940
  • Red line is an adjustment for measurement technologies, adding about 0.05C since 1940
  • Yellow line is station location quality adjustment, adding about 0.2C since 1940
  • Purple line is an urban heat island adjustment, subtracting about 0.05C since 1950.

Let's take each of these in turn.  The time of observation adjustment is defined as follows:

The Time of Observation Bias (TOB) arises when the 24-hour daily summary period at a station begins and ends at an hour other than local midnight. When the summary period ends at an hour other than midnight, monthly mean temperatures exhibit a systematic bias relative to the local midnight standard

0.3C seems absurdly high for this adjustment, but I can't prove it.  However, if I understand the problem, a month might be picking up a few extra hours from the next month and losing a few hours to the previous month.  How does a few-hour time shift really bias a 720+ hour month by so large a number?  I will look to see if I can find a study digging into this.
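
One mechanism commonly offered for TOB (my gloss, not the NOAA text quoted above) is that a max/min thermometer read and reset in the late afternoon cuts the day right near the diurnal peak, so one hot afternoon can inflate two successive daily summaries.  A toy simulation, with every parameter invented:

    # Toy simulation of time-of-observation bias.  A 24-hour summary window
    # ending in the late afternoon splits the day near the diurnal peak, so
    # a hot afternoon can dominate two adjacent daily maxima.  The diurnal
    # cycle and weather noise below are invented for illustration.

    import math
    import random

    random.seed(1)
    hours = 24 * 31
    day_weather = [random.gauss(0, 4) for _ in range(32)]  # day-to-day swings
    temp = [15 + 10 * math.sin(2 * math.pi * ((h % 24) - 9) / 24)
            + day_weather[h // 24] for h in range(hours)]

    def mean_daily_max(reset_hour):
        """Mean of daily maxima when each 24h summary ends at reset_hour."""
        maxima, start = [], reset_hour
        while start + 24 <= hours:
            maxima.append(max(temp[start:start + 24]))
            start += 24
        return sum(maxima) / len(maxima)

    print("midnight reset: %.2f C" % mean_daily_max(0))
    print("5 pm reset:     %.2f C" % mean_daily_max(17))  # typically warmer

Even in this crude toy, moving the reset from midnight to late afternoon inflates the mean of the daily maxima.  That is the flavor of bias the TOB adjustment is meant to remove; whether 0.3C is the right magnitude is the open question above.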

I will skip over the missing data and measurement technology adjustments, since they are small.

The other two adjustments are fascinating.  The yellow line says that siting has improved on USHCN sites such that, since 1900, their locations average 0.2C cooler due to being near more grass and less asphalt today than in 1900. 

During this time, many sites were relocated from city locations to airports and from roof tops to grassy areas. This often resulted in cooler readings than were observed at the previous sites.

OK, without a bit of data, does that make a lick of sense?  Siting today in our modern world has GOT to be worse than it was in 1900 or even 1940.  In particular, the very short cable length of the newer MMTS sensors that are standard for USHCN temperature measurement guarantees that readings today are going to be close to buildings and paving.  Now, go to SurfaceStations.org and look at pictures of actual installations, or look at the couple of installations in the Phoenix area I have taken pictures of here.  Do these look more grassy and natural than measurement sites were likely to be in 1900?  Or go to Anthony Watts's blog and scroll down his posts on horrible USHCN sites.

The fact is that not only is the NOAA getting this correction wrong, but it probably has the SIGN wrong.  The NOAA has never conducted the site-by-site survey that we discussed above.  Their statement that locations are improving is basically a leap of faith, rather than a fact-based conclusion.  In fact, NOAA scientists who believe that global warming is a problem tend to overlay this bias on the correction process.  Note the quote above -- temperatures that don't increase as they expect are treated as an error to be corrected, rather than as a measurement that disputes their hypothesis.

Finally, let's look at the urban heat island adjustment.  The NOAA is claiming that the sum total of urban heat island effects on its network since 1900 is just 0.1C, and less than 0.05C since 1940.  We are talking about the difference between a rural America with horses and dirt roads and a modern urban society with asphalt and air conditioning and cars.  This ridiculously small adjustment reflects two biases among anthropogenic global warming advocates:  1) that urban heat island effects are negligible, and 2) that the USHCN network is all rural.  Both are absurd.  Study after study has shown urban heat island effects as high as 6-10 degrees.  Just watch your local news if you live in a city -- you will see actual temperatures and forecasts lower by several degrees in the outlying areas than in the center of town.  As to the locations all being rural, you just have to go to surfacestations.org and see where these stations are.  Many of these sites might have been rural in 1940, but they have been engulfed by cities and towns since.

To illustrate both these points, let's take the case of the Tucson site I visited.  In 1900, Tucson was a dusty one-horse town (Arizona was not even a state yet).  In 1940, it was still pretty small.  Today, it is a city of over one million people, and the USHCN station is dead in the center of town, located right on an asphalt parking lot.  The adjustment NOAA makes for all these changes?  Less than one degree.  I don't think this is fraud, but it is willful blindness.

So, let's play around with numbers.  Let's say that instead of a +0.2C site quality adjustment we instead used a -0.1C adjustment, which is still probably generous.  Let's assume that instead of a -0.05C urban adjustment we instead used -0.2C.  The resulting total adjustment from 1940 to date would be +0.05C, and the total reported temperature increase in the US would fall from 0.6C to 0.15C.  And this is without even changing the very large time of observation adjustment, and using some pretty conservative assumptions on my part.  Wow!  This would put US warming more in the range of what satellite data would imply, and would make it virtually negligible.  It means that the full amount of reported US warming may well be within the error bars for the measurement network and the correction factors.
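
The arithmetic behind that scenario, spelled out (using the approximate adjustment sizes read off the NOAA charts above):

    # The scenario above: NOAA's approximate adjustments since 1940 vs. the
    # alternative siting and urban-heat-island assumptions proposed here.

    measured = 0.10   # actual measured increase since 1940, per this post

    noaa = {"time_of_obs": +0.30, "instrument_tech": +0.05,
            "site_quality": +0.20, "urban_heat_island": -0.05}
    alternative = dict(noaa, site_quality=-0.10, urban_heat_island=-0.20)

    for name, adj in (("NOAA", noaa), ("alternative", alternative)):
        total_adj = sum(adj.values())
        print("%-11s adjustments %+.2fC -> reported warming %.2fC"
              % (name, total_adj, measured + total_adj))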

While anthropogenic global warming enthusiasts are quick to analyze the reliability of any temperature measurement that shows lower global warming numbers (e.g. satellite), they have historically resisted calls to face up to the poor quality of surface temperature measurement and the arbitrariness of current surface temperature correction factors.  As the NOAA tellingly states:

The U.S. Historical Climatology Network (USHCN, Karl et al. 1990) is a high-quality moderate sized data set of monthly averaged maximum, minimum, and mean temperature and total monthly precipitation developed to assist in the detection of regional climate change. The USHCN is comprised of 1221 high-quality stations from the U.S. Cooperative Observing Network within the 48 contiguous United States.

Does it sound defensive to anyone else when they use "high-quality" in both of the first two sentences?  Does anyone think this is high quality?  Or this?  Or this?  It's time to better understand this network as well as its limitations.

My 60-second climate skepticism argument is here.  The much longer paper explaining the breadth of skeptics' issues with catastrophic man-made global warming is available for free here.

PS- This analysis focuses only on the US.  However, is there anyone out there who thinks that measurement in China and India and Russia and Africa is less bad?

Update: This pdf has an overview of urban heat islands, including this analysis showing the magnitude of the Phoenix nighttime UHI, as well as the fact that this UHI has grown substantially over the last 30 years.

[Chart: Phoenix nighttime urban heat island magnitude over the last 30 years]

Update #2: Steve McIntyre looks at temperature adjustments for a couple of California stations.  In one case he finds a station that has not moved for over one hundred years getting an adjustment that implies an urban heat island reduction over the past 100 years.

Air Conditioning Is Causing Global Warming

Yep, I admit it, air conditioning may indeed be causing us to measure higher temperatures.  Here is the historic temperature plot of Detroit Lakes, MN, one of the thousand or so measurement points in the data base that is used to compute historical warming in the US.
[Chart: GISS historic temperature plot for Detroit Lakes, MN]

Look at that jump in the last 10 years.  It must be global warming!  Can't possibly be due to these air conditioning units installed around 2000 and venting hot gas on the temperature instrument (in that round louvered thing on the post).
[Photo: the Detroit Lakes station, with air conditioning units venting near the sensor]

More from Anthony Watts, who is leading the effort to document all these stations.  You too can help.  The odds are you live less than an hour from one of these stations -- take your camera and add it to the data base.  It's fun!

Incredibly, the global warming community still argues that documenting the quality of the installations used in the official global warming numbers is unnecessary.  More air conditioners blowing on official temperature measurements here.  Worst temperature installation found to date here, "coincidentally" at the site with the highest measured 20th century warming.