Archive for August 2007

Some Final Thoughts on The NASA Temperature Restatement

I got a lot of traffic this weekend from folks interested in the US historical temperature restatement at NASA-GISS.  I wanted to share two final thoughts and also respond to a post at RealClimate.org (the #1 web cheerleader for catastrophic man-made global warming theory).

  1. This restatement does not mean that the folks at GISS are necessarily wrong when they say the world has been warming over the last 20 years.  We know from the independent source of satellite measurements that the Northern Hemisphere has been warming (though not so much in the Southern Hemisphere).  However, surface temperature measurements, particularly as "corrected" and aggregated at the GISS, have always been much higher than the satellite readings.  (GISS vs Satellite)  This incident may start to give us an insight into how to bring those two sources into agreement. 
  2. For years, Hansen's group at GISS, as well as other leading climate scientists such as Mann and Briffa (creators of historical temperature reconstructions), have flouted the rules of science by holding the details of their methodologies and algorithms secret, making full scrutiny impossible.  The best possible outcome of this incident will be if new pressure is brought to bear on these scientists to stop saying "trust me" and open their work to their peers for review.  This is particularly important for activities such as Hansen's temperature database at GISS.  While measurement of temperature would seem straightforward, in actual fact the signal-to-noise ratio is really low.  Upward "adjustments" and fudge factors added by Hansen to the actual readings dwarf measured temperature increases, such that, for example, most reported warming in the US is actually from these adjustments, not measured increases.
  3. In a week when Newsweek chose to argue that climate skeptics need to shut up, this incident actually proves why two sides are needed for a quality scientific debate.  Hansen and his folks missed this Y2K bug because, as a man-made global warming cheerleader, he expected to see temperatures going up rapidly, so he did not think to question the data.  Mr. Hansen is world-famous, is a friend of luminaries like Al Gore, and gets grants in quarter-million-dollar chunks from various global warming believers.  His whole outlook and all his incentives made him want the higher temperatures to be true.  It took other people with different hypotheses about climate to see the recent temperature jump for what it was: an error.

The general response at RealClimate.org has been:  Nothing to see here, move along.

Among other incorrect stories going around are that the mistake was due
to a Y2K bug or that this had something to do with photographing
weather stations. Again, simply false.

I really, really don't think it matters exactly how the bug was found, except to the extent that RealClimate.org would like to rewrite history and convince everyone this was just a normal adjustment made by the GISS themselves rather than a mistake found by an outsider.  However, just for the record, the GISS, at least for now until they clean up history a bit, admits the bug was spotted by Steven McIntyre.  Whatever the bug turned out to be, McIntyre initially spotted it as a discontinuity that seemed to exist in GISS data around the year 2000.  He therefore hypothesized it was a Y2K bug, but he didn't know for sure because Hansen and the GISS keep all their code a state secret.  And McIntyre himself says he became aware of the discontinuity during a series of posts that started from a picture of a weather station at Anthony Watts's blog.  I know because I was part of the discussion, talking to these folks online in real time.  Here is McIntyre explaining it himself.

In sum, the post on RealClimate says:

Sum total of this change? A couple of hundredths of degrees in the US
rankings and no change in anything that could be considered
climatically important (specifically long term trends).

A bit of background - surface temperature readings have read higher than satellite readings of the troposphere, when the science of greenhouse gases says the opposite should be true.  Global warming hawks like Hansen and the GISS have pounded on the satellite numbers, investigating them 8 ways to Sunday, and have on a number of occasions trumpeted upward corrections to satellite numbers that are far smaller than these downward corrections to surface numbers. 

But yes, IF this is the only mistake in the data, then this is a mostly correct statement from RealClimate.org.  However, here is my perspective:

  • If a mistake of this magnitude can be found by outsiders without access to Hansen's algorithms or computer code, just by inspection of the resulting data, then what would we find if we could actually inspect the code?  And this Y2K bug is by no means the only problem.  I have pointed out several myself, including adjustments for urbanization and station siting that make no sense, and averaging in rather than dropping bad measurement locations.
  • If we know significant problems exist in the US temperature monitoring network, what would we find looking at China?  Or Africa?  Or South America?  In the US and a few parts of Europe, we actually have a few temperature measurement points that were rural in 1900 and are rural today.  But not one station was measuring rural temps in these other continents 100 years ago.  All we have are temperature measurements in urban locations where we can only guess at how to adjust for the urbanization.  The problem in these locations, and why I say this is a low signal-to-noise-ratio measurement, is that small percentage changes in our guesses for how much the urbanization correction should be make enormous changes (even to the point of changing the sign) in historic temperature change measurements (a small worked example follows this list).
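To make that sensitivity concrete, here is a minimal sketch with entirely made-up numbers -- the 0.7C of raw warming at a hypothetical urban station and the range of urban heat island guesses are assumptions for illustration, not real station data:

```python
# Illustrative sketch with made-up numbers: how sensitive a century-long trend at an
# urban station is to the assumed urban heat island (UHI) correction.

raw_warming_1900_to_2000 = 0.7             # deg C measured at a hypothetical urban site
assumed_uhi_corrections = [0.4, 0.6, 0.8]  # deg C attributed to urbanization (guesses)

for uhi in assumed_uhi_corrections:
    climate_signal = raw_warming_1900_to_2000 - uhi
    print(f"UHI guess = {uhi:.1f} C  ->  'climate' warming = {climate_signal:+.1f} C")

# Prints +0.3, +0.1, -0.1: a 0.2 C change in the guess swings the answer from warming
# to cooling, which is exactly what a low signal-to-noise measurement looks like.
```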

Here are my recommendations:

  1. NOAA and GISS both need to release their detailed algorithms and computer software code for adjusting and aggregating USHCN and global temperature data.  Period.  There can be no argument.  Folks at RealClimate.org who believe that all is well should be begging for this to happen to shut up the skeptics.  The only possible reason for not releasing this scientific information that was created by government employees with taxpayer money is if there is something to hide.
  2. The NOAA and GISS need to acknowledge that their assumptions about station quality in the USHCN network are too optimistic, and that they need to incorporate actual documented station conditions (as done at SurfaceStations.org) in their temperature aggregations and corrections.  In some cases, stations like Tucson need to just be thrown out of the USHCN.  Once the US is done, a similar effort needs to be undertaken on a global scale, and the effort needs to include people whose incentives and outlook are not driven by making temperatures read as high as possible.
  3. This is the easiest of all.  Someone needs to do empirical work (not simulated, not on the computer, but with real instruments) understanding how various temperature station placements affect measurements.  For example, how do the readings of an instrument in an open rural field compare to an identical instrument surrounded by asphalt a few miles away?  These results can be used for step #2 above.  This is cheap, simple research a couple of graduate students could do, but climatologists all seem focused on building computer models rather than actually doing science.
  4. Similar to #3, someone needs to do a definitive urban heat island study, to find out how much temperature readings are affected by urban heat, again to help correct in #2.  Again, I want real research here, with identical instruments placed in various locations and at various radii from an urban center (not goofy proxies like temperature vs. wind speed -- that's some scientist who wants to get a result without ever leaving his computer terminal).  Most studies have shown the number to be large, but a couple of recent studies show smaller effects, though now these studies are under attack not just for sloppiness but outright fabrication.  This can't be that hard to study, if people were willing to actually go into the field and take measurements.  The problem is everyone is trying to do this study with available data rather than by gathering new data.

Postscript:  The RealClimate post says:

However, there is clearly a latent and deeply felt wish in some sectors for the whole problem of global warming to be reduced to a statistical quirk or a mistake.

If catastrophic man-made global warming theory is correct, then man faces a tremendous lose-lose: either shut down growth, sending us back to the 19th century, making us all substantially poorer, and locking a billion people in Asia into poverty they are on the verge of escaping; or face catastrophic and devastating changes in the planet's weather.

Now take two people.  One in his heart really wants this theory not to be true, and hopes we don't have to face this horrible lose-lose tradeoff.  The other has a deeply felt wish that this theory is true, and hopes man does face this horrible future.  Which person do you like better?  And recognize, RealClimate is holding up the latter as the only moral man. 

Update:  Don't miss Steven McIntyre's take on the whole thing.  And McIntyre responds to Hansen here.

Another Arizona Water Article With No Mention of Price

Well, the Arizona Republic has done it again.  It has published yet another first-section front page water article (this makes about 50 in a row) discussing ways to make demand match supply without once discussing price.  This time, the reporting centers on a new online water supply and demand simulation model (here) introduced by Arizona State University.  With the model, the public gets to play dictator, implementing all kinds of policies and restrictions on individual consumers to see what effect these command and control steps have on water supply and demand.  And it is almost anti-climactic when I tell you that price does not enter in any way into the model. 

I probably don't have to remind readers that Phoenix has some of the cheapest water in the country, with prices less than half what they are in, say, water-logged Seattle.  Don't you think that might have a little to do with why supply and demand don't match?
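As a rough illustration of how much price could matter, here is a toy calculation.  The elasticity, prices, and consumption figure are all hypothetical assumptions, not actual Phoenix or Seattle data:

```python
# Toy constant-elasticity demand curve: what consumption difference would a price
# at half of Seattle's level imply?  All numbers below are assumptions.

elasticity = -0.4       # assumed long-run price elasticity of residential water demand
seattle_price = 1.0     # normalized reference price
phoenix_price = 0.5     # roughly half of Seattle's, per the post
seattle_use = 100.0     # gallons per person per day at the reference price (made up)

phoenix_use = seattle_use * (phoenix_price / seattle_price) ** elasticity
print(f"Implied Phoenix use: {phoenix_use:.0f} gal/person/day vs {seattle_use:.0f} in Seattle")
# About 32% more consumption at half the price -- price belongs in any supply/demand model.
```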

Let's say there are about 1,000 key raw materials we use in modern society -- oil, natural gas, iron ore, uranium, bauxite, titanium, gold, silver, etc.  Of these, how do we match supply and demand?  Well, for 999 of the 1,000, we use this thingie called the price mechanism.  The exception is water.  And it is incredible to me that not one but dozens of articles could be written by our newspaper about matching water supply and demand and not one of them could mention price, the mechanism we use to match supply and demand for 99.9% of commodities.  Remember when Hillary suggested a while back that we need a special academy for government workers?  This is what they would teach -- that all problems can only be solved by government command and control.  As I wrote before:

In their general pandering and populism, politicians are afraid to
raise water prices, fearing the decision would be criticized.  So, they
keep prices artificially low, knowing that this low price is causing
reservoirs and aquifers to be pumped faster than their replacement
rate.  Then, as the reservoirs go dry, the politicians blame us, the
consumers, for being too profligate with water and call for ... wait
for it ... more power for themselves, the ones whose spinelessness is
the root cause of the problem, to allocate and ration water and
development.

The Most Important College Football Poll of the Year

The most important college football poll of the year is out, and the top five are as follows (rank, team, #1 votes, record so far, total points):

1. USC (45)  0-0 1,481
2. LSU (4)  0-0 1,372
3. Florida (9)  0-0 1,278
4. Texas 0-0 1,231
5. Michigan (2)  0-0 1,218

The rest of the list is here.

Many of you might notice that all of these teams have a record of 0-0.  So you might ask, "Coyote, are you crazy?  Why did you call this the most important poll of the year?"  Well, since I answered that last year, I will just go back and quote myself:

In theory, voters in the college football polls each week come up
with their current ranking of teams, which in theory could be very
different from how they ranked things the previous week.  In practice,
however, voters start with their rankings of the previous week and then
make adjustments up and down for individual teams based on that week's
game results....

In effect, the college football rankings are a bit like a tennis ladder. Each
week, losers drop down 3-8 spots and all the winners and no-plays move up to
fill in the vacated spots. Sometimes a team will leapfrog another, but that is
rare and it is extremely rare to leapfrog more than 1 or 2 spots. In this sense, the
initial football poll is the most critical, since only those in the top 10-15
have any chance of moving up the ladder to #1.

In
effect, the pre-season poll is the baseline off which all future polls
start.  I haven't done the research, but you could probably refine my
statement in the previous paragraph to a set of rules such as:

  • A three-loss team can never win the championship
  • A two-loss team can win but only if they start in the top 5 of the pre-season poll
  • A one-loss team can win but only if they start in the top 15
  • An undefeated team can win even if they were left out of the
    initial top 25, but only if they play in a major conference.  A minor
    conference team, even undefeated, will not ever end up #1 unless they
    started the season in the top 25.

Again, the numbers in these rules may not be exactly right, but I
think they are directionally correct.  This is what I call my theory of
College Football Calvinism (the religion, not the cartoon character)
since one's ultimate fate is in large part pre-ordained by the polls
even before the season is born.  So, if your alma mater has any shot at
the title, you should hope your AD is out there in the summer lobbying
the writers like hell to up their pre-season poll standings. Every spot
you gain in the pre-season poll is one you don't have to win on the
playing field.

Computer Models In Complex Systems

Apparently, there are some dangers in getting too confident about your computer modeling of complex systems:

Computers don't always work.

That was the lesson so far this month for many so-called quant hedge
funds, whose trading is dictated by complex computer programs.

The markets' volatility of the past few weeks has taken a toll on
many widely known funds for sophisticated investors, notably a
once-highflying hedge fund at Wall Street's Goldman Sachs Group Inc.

Global Alpha, Goldman's widely known internal hedge fund, is now
down about 16% for the year after a choppy July, when its performance
fell about 8%, according to people briefed on the matter.

This kind of reminds me of another kind of computer modeling of complex systems.

Why I Don't Host This Blog on My Own Servers...

...Because there might come a slow news day in August when Tigerhawk, Hot Air, Pajamas Media, Reddit, the Free Republic, Ace of Spades, and many others all link to the same post at the same time.  In which case my servers here in the office and the poor hamster who powers them by running on his little wheel would be a smoking hole in the ground.

A Thought on Blogging Relevance

You know you are not one of the blogging big boys when you dig into your referral logs to find the source of a bump in traffic and discover it is from a link buried inside a comment thread at a larger blogger.  No one here but us minnows.

Letter to Newsweek

Editors-

Oh, the delicious irony.

As a skeptic of catastrophic man-made global warming, I was disturbed to see that Newsweek in its August 13, 2007 issue (The Truth About Denial)
had equated me with a Holocaust denier.  There are so many interesting
scientific issues involved in climate change that it was flabbergasting
to me that Newsweek would waste time on an extended ad hominem
attack against one side in a scientific debate.  I was particularly
amazed that Newsweek would accuse the side of the debate that is
outspent 1000:1 of being tainted by money.  This is roughly
equivalent to arguing that Mike Gravel's spending is corrupting the
2008 presidential election.

However, fate does indeed have a sense of humor.  Skeptics' efforts of the sort Newsweek derided just this week
forced NASA-Goddard (GISS) to revise downward recent US temperature
numbers due to a programming mistake that went unidentified for
years, in part because NASA's taxpayer-paid researchers refuse to
release their temperature adjustment and aggregation methodology to the
public for scrutiny.  The problem was found by a chain of events that
began with amateur volunteers and led ultimately to Steven McIntyre (he
of the Michael Mann hockey stick debunking) calling foul.

The particular irony is that the person who is in charge of this
database, and is responsible for the decision not to allow scientific
scrutiny of his methodologies, is none other than James Hansen, who
Newsweek held up as the shining example of scientific objectivity in
its article.  Newsweek should have been demanding that taxpayer-funded
institutions like NASA should be opening their research to full review,
but instead Newsweek chose to argue that Mr. Hansen should be shielded
from scrutiny.

Warren Meyer

The Silicon Valley of Begging

Stephen Dubner's roundtable on the Economics of Street Charity got me thinking about a recent experience visiting Boulder, Colorado, an odd but lovely town in which I used to live.

Here in Phoenix, most of our panhandlers show little or no innovation.  They are still using the "will work for food" or "Vietnam vet" cardboard signs that were an innovation years ago, but are now tired and hard to believe.  All the signs are generic.  None of them seems tailored to the local audience.

So where is the innovation in begging occurring?  Someone must have first thought of the "will work for food" come-on, which I presume was initially quite successful, since everyone copied it, just as any successful innovation gets copied in the marketplace.

My vote for the Silicon Valley of Begging is Boulder, Colorado, and specifically the Pearl Street Mall.  I have recently visited the homeless capital of Santa Monica, as well as San Francisco, New York, and Boston, and none of their beggars holds a candle to those in Boulder.  Here is why:

  • Their come-ons were unique -- I never saw the same one twice
  • Their come-ons were well tailored to the local audience.  "Need Money for Pot" is not going to get one anywhere in Oklahoma, but it is very likely to elicit a chuckle and a buck from a UC college student or sixties-survivor Boulder resident.  Given that President Bush has about a 0.01% approval rating in Boulder, many of the come-ons led one to believe that giving the beggar a buck would show one's disdain for GWB.

Breaking News: Recent US Temperature Numbers Revised Downwards Today

This is really big news, and a fabulous example of why two-way scientific discourse is still valuable, in the same week that both Newsweek and Al Gore tried to make the case that climate skeptics were counter-productive and evil. 

Climate scientist Michael Mann (famous for the hockey stick chart) once made the statement that the 1990's were the warmest decade in a millennium and that "there is a 95 to 99% certainty that 1998 was the hottest year in the last one thousand years."  (By the way, Mann now denies he ever made this claim, though you can watch him say these exact words in the CBC documentary Global Warming: Doomsday Called Off.)

Well, it turns out, according to the NASA GISS database, that 1998 was not even the hottest year of the last century.  This is because many temperatures from recent decades that appeared to show substantial warming have been revised downwards.  Here is how that happened (if you want to skip the story, make sure to look at the numbers at the bottom).

One of the most cited and used historical surface temperature databases is that of NASA/Goddard's GISS.  This is not some weird skeptics site.  It is considered one of the premier world temperature data bases, and it is maintained by anthropogenic global warming true believers.  It has consistently shown more warming than any other data base, and is thus a favorite source for folks like Al Gore.  These GISS readings in the US rely mainly on the US Historical Climate Network (USHCN) which is a network of about 1000 weather stations taking temperatures, a number of which have been in place for over 100 years.

Frequent readers will know that I have been a participant in an effort led by Anthony Watts at SurfaceStations.org to photo-document these temperature stations as an aid to scientists in evaluating the measurement quality of each station.  The effort has been eye-opening, as it has uncovered many very poor instrument sitings that would bias temperature measurements upwards, as I found in Tucson and Watts has documented numerous times on his blog.

One photo on Watts's blog got people talking -- a station in MN with a huge jump in temperature about the same time some air conditioning units were installed nearby.  Others disagreed, and argued that such a jump could not be from the air conditioners, since a lot of the jump happened in winter temperatures when the AC was dormant.  Steve McIntyre, the Canadian statistician who helped to expose massive holes in Michael Mann's hockey stick methodology, looked into it.  After some poking around, he began to suspect that the GISS data base had a year 2000 bug in one of its data adjustments.

One of the interesting aspects of these temperature data bases is that they do not just use the raw temperature measurements from each station.  Both the NOAA (which maintains the USHCN stations) and the GISS apply many layers of adjustments, which I discussed here.  One of the purposes of Watts's project is to help educate climate scientists that many of the adjustments they make to the data back in the office do not necessarily represent the true condition of the temperature stations.  In particular, GISS adjustments imply instrument sitings are in more natural settings than they were in, say, 1905, an outrageous assumption on its face that is totally in conflict with the condition of the stations in Watts's database.  Basically, surface temperature measurements have a low signal-to-noise ratio, and climate scientists have been overly casual about how they try to tease out the signal.

Anyway, McIntyre suspected that one of these adjustments had a bug, and had had this bug for years.  Unfortunately, it was hard to prove.  Why?  Well, that highlights one of the great travesties of climate science.  Government scientists who developed the GISS temperature data base at taxpayer expense refuse to publicly release their temperature adjustment algorithms or software (in much the same way Michael Mann refused to release the details of his methodology behind the hockey stick for scrutiny).  Using the published data alone, though, McIntyre made a compelling case that the GISS data base had systematic discontinuities that bore all the hallmarks of a software bug.
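For what it's worth, the kind of check an outsider can run without the code is not complicated.  Here is a minimal sketch (my own illustration with invented numbers, not McIntyre's actual analysis and not GISS's algorithm): compare the adjusted-minus-raw differences for a station before and after the suspected breakpoint and see whether they jump.

```python
# A generic step-change check an outsider could run on published station data.
# The numbers below are invented for illustration; this is not GISS's code.

def mean(xs):
    return sum(xs) / len(xs)

def step_at(years, diffs, breakpoint=2000):
    """Change in the mean of (adjusted - raw) before vs. after the breakpoint year."""
    before = [d for y, d in zip(years, diffs) if y < breakpoint]
    after = [d for y, d in zip(years, diffs) if y >= breakpoint]
    return mean(after) - mean(before)

years = list(range(1995, 2006))
diffs = [0.10, 0.11, 0.10, 0.09, 0.10,        # 1995-1999: adjustments hover near 0.10 C
         0.25, 0.26, 0.24, 0.25, 0.26, 0.25]  # 2000-2005: they jump to about 0.25 C

print(f"Mean shift at 2000: {step_at(years, diffs):+.2f} C")  # about +0.15 C discontinuity
```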

Today, the GISS admitted that McIntyre was correct, and has started to republish its data with the bug fixed.  And the numbers are changing a lot.  Before today, GISS would have said 1998 was the hottest year on record (Mann, remember, said with up to 99% certainty it was the hottest year in 1000 years) and that 2006 was the second hottest.  Well, no more.  Here are the new rankings for the 10 hottest years in the US, starting with #1:

1934, 1998, 1921, 2006, 1931, 1999, 1953, 1990, 1938, 1939

Three of the top 10 are in the last decade.  Four of the top ten are in the 1930's, before either the IPCC or the GISS really thinks man had any discernible impact on temperatures.  Here is the chart for all the years in the data base:
[Chart: New_giss]

There are a number of things we need to remember:

  • This is not the end but the beginning of the total reexamination that needs to occur of the USHCN and GISS data bases.  The poor corrections for site location and urbanization are still huge issues that bias recent numbers upwards.  The GISS also has issues with how it aggregates multiple stations, apparently averaging known good stations with bad stations, a process that by no means eliminates biases (see the short illustration just after this list).  As a first step, we must demand that NOAA and GISS release their methodology and computer algorithms to the general public for detailed scrutiny by other scientists.
  • The GISS today makes it clear that these adjustments only affect US data and do not change any of their conclusions about worldwide data.  But consider this:  For all of its faults, the US has the most robust historical climate network in the world.  If we have these problems, what would we find in the data from, say, China?  And the US and parts of Europe are the only major parts of the world that actually have 100 years of data at rural locations.  No one was measuring temperature reliably in rural China or Paraguay or the Congo in 1900.  That means much of the world is relying on urban temperature measurement points that have substantial biases from urban heat.
  • All of these necessary revisions to surface temperatures will likely not make warming trends go away completely.  What they may do is bring the warming down to match the much lower satellite-measured warming numbers we have, and make current warming look more like past natural warming trends (e.g. early in the 20th century) rather than a catastrophe created by man.  In my global warming book, I argue that future man-made warming probably will exist, but will be more like a half to one degree over the coming decades than the media-hyped numbers that are ten times higher.
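On the station-averaging point in the first bullet, a two-line example (with invented trend numbers, not real stations) shows why blending a well-sited station with a badly-sited one dilutes the bias rather than removing it:

```python
# Invented numbers: averaging a good station with a badly-sited one keeps half the bias.

good_station_trend = 0.3   # deg C/century, assumed true local trend at a well-sited station
bad_station_trend = 0.9    # deg C/century, inflated by asphalt/AC siting (hypothetical)

averaged = (good_station_trend + bad_station_trend) / 2
print(f"Blended trend: {averaged:.1f} C/century, "
      f"of which {averaged - good_station_trend:+.1f} C is leftover siting bias")
```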

So how is this possible?  How can the global warming numbers used in critical policy decisions and scientific models be so wrong, with so basic an error?  And how can this error have gone undetected for the better part of a decade?  The answer to the latter question is that the global warming and climate community resists scrutiny.  This week's Newsweek article and statements by Al Gore are basically aimed at suppressing any scientific criticism or challenge to global warming research.  That is why NASA can keep its temperature algorithms secret, with no outside complaint, something that would cause howls of protest in any other area of scientific inquiry.

As to the first question, I will leave the explanation to Mr. McIntyre:

While acolytes may call these guys "professionals", the process of
data adjustment is really a matter of statistics and even accounting.
In these fields, Hansen and Mann are not "professionals" - Mann
admitted this to the NAS panel explaining that he was "not a
statistician". As someone who has read their works closely, I do not
regard any of these people as "professional". Much of their reluctance
to provide source code for their methodology arises, in my opinion,
because the methods are essentially trivial and they derive a certain
satisfaction out of making things appear more complicated than they
are, a little like the Wizard of Oz. And like the Wizard of Oz, they
are not necessarily bad men, just not very good wizards.

For more, please see my Guide to Anthropogenic Global Warming or, if you have less time, my 60-second argument for why one should be skeptical of catastrophic man-made global warming theory.

Update:
Nothing new here; just thinking about this more, I cannot get over the irony that in the same week Newsweek makes the case that climate science is settled and there is no room for skepticism, skeptics discover a gaping hole and error in the global warming numbers.

Update #2:  I know people get upset when we criticize scientists.  I get a lot of "they are not biased, they just made a mistake."  Fine.  But I have zero sympathy for a group of scientists who refuse to let other scientists review their methodology, and then find that they have been making a dumb methodology mistake for years that has corrupted the data of nearly every climate study in the last decade.

Update #3:  I labeled this "breaking news," but don't expect to see it in the NY Times anytime soon.  We all know this is one of those asymmetric story lines, where if the opposite had occurred (i.e., things found to be even worse/warmer than thought) it would be on the front page immediately, but a lowered threat will never make the news.

Oh, and by the way.  This is GOOD news.  Though many won't treat it that way.  I understand this point fairly well because, in a somewhat parallel situation, I seem to be the last anti-war guy who treats progress in Iraq as good news.

Update #4: I should have mentioned that the hero of the Newsweek story is catastrophic man-made global warming cheerleader James Hansen, who runs the GISS and is most responsible for the database in question as well as the GISS policy not to release its temperature aggregation and adjustment methodologies.  From IBD, via CNN Money:

Newsweek portrays James Hansen, director of NASA's Goddard Institute for Space Studies, as untainted by corporate bribery.

Hansen
was once profiled on CBS' "60 Minutes" as the "world's leading
researcher on global warming." Not mentioned by Newsweek was that
Hansen had acted as a consultant to Al Gore's slide-show presentations
on global warming, that he had endorsed John Kerry for president, and
had received a $250,000 grant from the foundation headed by Teresa
Heinz Kerry.

Update #5: My letter to the editor at Newsweek.  For those worried that this is some weird skeptic's fevered dream, Hansen and company kind of sort of recognize the error in the first paragraph under "Background" here.  Their US temperature chart with what appears to be the revised data is here.

Update #6: Several posts are calling this a "scandal."  It is not a scandal.  It is a mistake from which we should draw two lessons:

  1. We always need to have people of opposing opinions looking at a problem.  Man-made global warming hawks expected to see a lot of warming after the year 2000, so they never questioned the numbers.  It took folks with different hypotheses about climate to see the jump in the numbers for what it was - a programming error.
  2. Climate scientists are going to have to get over their need to hold their adjustments, formulas, algorithms and software secret.  It's just not how science is done.  James Hansen saying "trust me, the numbers are right, I don't need to tell you how I got them" reminds me of the mathematician Fermat saying he had a proof of his last theorem, but it wouldn't fit in the margin.  How many man-hours of genius mathematicians were wasted because Fermat refused to show his proof (which was most likely wrong, given how the theorem was eventually proved)?

Final Update:  Some parting thoughts, and recommendations, here.

Food Miles Stupidity

Via the New York Times:

THE term "food miles" "” how far food has traveled before you buy it "” has entered the enlightened lexicon.

Which should tell you all you need to know about the "enlightened."

There are many good reasons for eating local -- freshness, purity, taste, community cohesion and preserving open space -- but none of these benefits compares to the much-touted claim that eating local reduces fossil fuel consumption. In this respect eating local joins recycling, biking to work and driving a hybrid as a realistic way that we can, as individuals, shrink our carbon footprint and be good stewards of the environment.

Actually, most recycling, with the exception of aluminum, which takes tons of electricity to manufacture in the first place, does nothing to reduce our carbon footprint.  And I must say that I often enjoy buying from farmers markets and such.  But does "food miles" mean anything?  And should we really care?  Well, here is an early hint:  the ultimate reduction in food miles, the big winner on this enlightened metric, is subsistence farming.  Anyone ready to go there yet?  These are the economics Gandhi promoted in India, and they set that country back generations.

Well, let's go back to economics 101.  The reason we do not all grow our own food, make our own clothes, etc. is because the global division of labor allows food and clothing and everything else to be produced more efficiently by people who specialize and invest in those activities than by all of us alone in our homes.  So instead of each of us growing our own corn, in whatever quality soil we happen to have around our house, some guy in Iowa grows it for thousands of us, and because he specializes and grows a lot, he invests in equipment and knowledge to do it better every year.  The cost of fuel to move the corn or corn products to Phoenix from Iowa is trivial compared to the difference in efficiency that guy in Iowa has over me trying to grow corn in my back yard.  Back to the New York Times:

On its face, the connection between lowering food miles and decreasing greenhouse gas emissions is a no-brainer.

Sure, if you look at complex systems as single-variable linear equations.  Those of us who don't look at them that way immediately treated the food-mile concept as suspect.  It turns out, for good reason:

It all depends on how you wield the carbon calculator. Instead of measuring a product's carbon footprint through food miles alone, the Lincoln University scientists expanded their equations to include other energy-consuming aspects of production -- what economists call "factor inputs and externalities" -- like water use, harvesting techniques, fertilizer outlays, renewable energy applications, means of transportation (and the kind of fuel used), the amount of carbon dioxide absorbed during photosynthesis, disposal of packaging, storage procedures and dozens of other cultivation inputs.

Incorporating
these measurements into their assessments, scientists reached
surprising conclusions. Most notably, they found that lamb raised on
New Zealand's clover-choked pastures and shipped 11,000 miles by boat
to Britain produced 1,520 pounds of carbon dioxide emissions per ton
while British lamb produced 6,280 pounds of carbon dioxide per ton, in
part because poorer British pastures force farmers to use feed. In
other words, it is four times more energy-efficient for Londoners to
buy lamb imported from the other side of the world than to buy it from
a producer in their backyard. Similar figures were found for dairy
products and fruit.

All I can say is just how frightening it is that the paper of record could find this result "surprising."  The price mechanism does a pretty good job of sorting this stuff out.  If fuel prices rise a lot, then agriculture might move more local, but probably not by much.  The economies of scale and location just dwarf the price of fuel.

By the way, one reason this food-mile thing is not going away, no matter how stupid it is, has to do with the history of the global warming movement.  Remember all those anti-globalization folks who rampaged in Seattle?  Where did they all go?  Well, they did not get sensible all of a sudden.  They joined the environmental movement.  One reason a core group of folks in the catastrophic man-made global warming camp react so poorly to any criticism of the science is that they need and want it to be true that man is causing catastrophic warming -- anti-corporate and anti-globalization activists jumped into the global warming environmental movement, seeing in it a vehicle to achieve their aims of rolling back economic growth, global trade, and capitalism in general.  Food miles appeals to their disdain for world trade, and global warming and carbon footprints are just a convenient excuse for trying to sell the concept to other people.

A little while back, I posted a similar finding in regards to packaging, that is worth repeating here for comparison.

Contrary to current wisdom, packaging can reduce total rubbish
produced. The average household in the United States generates one
third
less trash each year than does the average household in Mexico,
partly because packaging reduces breakage and food waste. Turning a
live chicken into a meal creates food waste. When chickens are
processed commercially, the waste goes into marketable products
(such as pet food), instead of into a landfill. Commercial processing
of 1,000 chickens requires about 17 pounds of packaging, but it also
recycles at least 2,000 pounds of by-products.

More victories for the worldwide division of labor.  So has the NY Times seen the light and accepted the benefits of capitalism?  Of course not.  With the New Zealand example in hand, the writer ... suggests we need more state action to compel similar situations.

Given these problems, wouldn't it make more sense to stop obsessing
over food miles and work to strengthen comparative geographical
advantages? And what if we did this while streamlining transportation
services according to fuel-efficient standards? Shouldn't we create
development incentives for regional nodes of food production that can
provide sustainable produce for the less sustainable parts of the
nation and the world as a whole? Might it be more logical to
conceptualize a hub-and-spoke system of food production and
distribution, with the hubs in a food system's naturally fertile hot
spots and the spokes, which travel through the arid zones, connecting
them while using hybrid engines and alternative sources of energy?

Does anyone even know what this crap means?  You gotta love technocratic statists -- they just never give up.  Every one of them thinks they are smarter than the sum of billions of individual minds working together of their own free will to create our current world production patterns.

Postscript: There is one thing the government could do tomorrow to promote even more worldwide agricultural efficiency:  Drop subsidies and protections on agriculture.   You would immediately get more of this kind of activity, for example with Latin America and the Caribbean supplying more/all of the US's sugar and other parts of Asia providing more/all of Japan's rice.

Competitor or Enemy

I heard Obama get asked a question at the AFL-CIO yesterday whether he thought China was a competitor or an enemy.  I am not very good at parsing politician-speak, but he seemed to answer "both." 

How about neither?  Let's try "partner in the worldwide division of labor" or maybe "home of a billion people who would like to trade with us 300 million individuals to our mutual self interest."   Or maybe "One reason we have full employment AND low prices."

Our trade with Canada is 60% higher than with China.  Does that make them an enemy?  Yes, for some of the Democratic candidates.

Wherein I Answer Lou Dobbs and Suspect He Is A Chinese Agent

It is always dangerous to argue with the insane, but I am actually willing to answer Lou Dobbs' question:

And what I can't quite figure out amongst these geniuses who are
so-called free traders is, why do they think that about a 35 percent to
40 percent undervaluation of the Chinese yuan to the dollar is free
trade? Why do they think 25 percent duties in tariffs on American
products entering China is free trade?

I will leave aside the question of how he or anybody else knows the yuan is undervalued by this much.  I will accept his premise on the basis that we know the Chinese government spends money to keep the yuan lower than it might be otherwise.  Here is my answer:

Yes, it is not perfectly free trade.  But we let it continue because the freaking Chinese government, its consumers, and its taxpayers are subsidizing Americans.  The Chinese government is making all of its consumers pay higher prices and higher taxes just so American consumers can have lower prices.  Napoleon advised that one never should interrupt an enemy when he is making a mistake -- after all, this same strategy managed to earn Japan a decade and a half long recession.  Our correct response is not tariffs, it is to say, "gee, thanks."  This is for the Chinese people to stop, not our government. 

Why is China doing this?  Because its government is using monetary policy to help out a few favored exporters who have political influence, at the expense of all of its consumers and taxpayers.  And Lou Dobbs wants the US to respond in exactly the same way, to punish our consumers in order to favor some of our politically-connected exporters so that Chinese consumers can have lower prices.  Great plan.  Is Lou Dobbs a Chinese agent?

Storm Frequency

I already discussed Newsweek's happy little ad hominem attack on climate skeptics here.  However, as promised, I wanted to talk about the actual, you know, science for a bit, starting from the Newsweek author's throwaway statement that she felt required no
proof, "The frequency of Atlantic hurricanes has already doubled in the
last century."

This is really a very interesting topic, much more interesting than following $10,000 of skeptics' money around in a global warming industry spending billions on research.  One would think the answer to this hurricane question is simple.  Can we just look up the numbers?  Well, let's start there.  Total number of Atlantic hurricanes from the HURDAT data base, first and last half of the last century:

1905-1955 = 366
1956-2006 = 458

First, you can see nothing like a doubling.  This is an increase of 25%.  So already, we see that in an effort to discredit skeptics for fooling America about the facts, Newsweek threw out a whopper that absolutely no one in climate science, warming skeptic or true believer, would agree with.
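For anyone who wants to check that arithmetic on the counts quoted above:

```python
# Percentage change implied by the HURDAT counts quoted above.
first_half, second_half = 366, 458
print(f"Increase: {(second_half - first_half) / first_half:.1%}")  # 25.1% -- not a doubling
```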

But let's go further, because there is much more to the story.  Because 25% is a lot, and could be damning in and of itself.  But there are problems with this data.  If you think about storm tracking technology in 1905 vs. 2005, you might see the problem.  To make it really clear, I want to talk about tornadoes for a moment.

In An Inconvenient Truth, Al Gore and company said that global warming was increasing the number of tornadoes in the US.  He claimed 2004 was the highest year ever for tornadoes in the US.  In his PowerPoint slide deck (on which the movie was based) he sometimes uses this chart (from the NOAA):

Whoa, that's scary.  Any moron can see there is a trend there.  It's like a silver bullet against skeptics or something.  But wait.  Hasn't tornado detection technology changed over the last 50 years?  Today, we have doppler radar, so we can detect even small F1 tornadoes, even if no one on the ground actually spots them (which happens fairly often).  But how did they count smaller tornadoes in 1955 if no one spotted them?  Answer:  They didn't.  In effect, this graph is measuring apples and oranges.  It is comparing all the tornadoes we spotted by human eye in 1955 with all the tornadoes we spotted with doppler radar in 2000.  The NOAA tries to make this problem clear on its web site.

With increased national doppler
radar coverage, increasing population, and greater attention to tornado
reporting, there has been an increase in the number of tornado reports over the
past several decades. This can create a misleading appearance of an increasing
trend in tornado frequency. To better understand the true variability and trend
in tornado frequency in the US, the total number of strong to violent tornadoes
(F3 to F5 category on the Fujita scale) can be analyzed. These are the
tornadoes that would have likely been reported even during the decades before
Doppler radar use became widespread and practices resulted in increasing
tornado reports. The bar chart below indicates there has been little trend in
the strongest tornadoes over the past 55 years.

So it turns out there is a decent way to correct for this.  We don't think that folks in 1955 were missing many of the larger class 3-5 tornadoes, so comparing 1955 and 2000 data for these larger tornadoes should be more apples to apples (via NOAA).

Well, that certainly is different (note 2004 in particular, given the movie claim).  No upward trend at all when you get the data right.  I wonder if Al Gore knows this?  I am sure he is anxious to set the record straight.

OK, back to hurricanes.  Generally, whether in 1905 or 2005, we know if a hurricane hits land in the US.  However, what about all the hurricanes that don't hit land or hit land in some undeveloped area?  Might it be that we can detect these better in 2006 with satellites than we could in 1905?  Just like the tornadoes?

Well, one metric we have is US landfall.  Here is that graph (data from the National Weather Service -- I have just extrapolated the current decade based on the first several years).
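That extrapolation is nothing fancy; here is a sketch of it, with the landfall count below being a placeholder rather than the real figure:

```python
# Simple linear scaling of a partial decade to a full ten years.
landfalls_so_far = 7    # hypothetical US hurricane landfalls counted for 2001-2007
years_elapsed = 7       # seasons observed so far in the decade
projected_decade_total = landfalls_so_far * 10 / years_elapsed
print(f"Projected 2001-2010 landfalls: {projected_decade_total:.0f}")
```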

Not much of a trend there, though the current decade is high, in part because it does not incorporate the light 2006 season or the light-so-far 2007 season.  The second half of the 20th century is actually lower than the first half, and certainly not "twice as large."  But again, this is only a proxy.  There may be reasons more storms are formed but don't make landfall (though I would argue most Americans only care about the latter).

But what about hurricane damages?  Everyone knows that the dollar damages from hurricanes are way up.  Well, yes.  But the amount of valuable real estate on the United States' coasts is also way up.  Roger Pielke and Chris Landsea (you gotta love a guy studying hurricane strikes named Landsea) took a shot at correcting hurricane damages for inflation and the increased real estate value on the coasts.  This is what they got:
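Roughly speaking, their method scales each old storm's nominal damage up by how much prices, coastal population, and wealth have grown since the storm hit.  Here is a sketch of that kind of normalization, with made-up factors rather than Pielke and Landsea's actual ones:

```python
# Sketch of damage normalization; the storm and growth factors below are made up.

def normalized_damage(nominal_damage, inflation_factor, population_factor, wealth_factor):
    """Express a historical storm's damage in today's dollars and today's coastal exposure."""
    return nominal_damage * inflation_factor * population_factor * wealth_factor

# Hypothetical 1930s storm: $100M nominal damage; prices up 12x, coastal population up 4x,
# and real wealth per capita up 3x since then.
total = normalized_damage(100e6, 12, 4, 3)
print(f"Normalized damage: ${total / 1e9:.1f} billion")  # $14.4 billion in today's terms
```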

Anyway, back to our very first data:  several scientists are trying to correct the data for missing storms, particularly in earlier periods.  There is an active debate here about corrections I won't get into, but suffice it to say the difference between the first half of the 20th century and the latter half in terms of Atlantic hurricane formations is probably either nothing or perhaps a percentage increase in the single digits (but nowhere near the 100% increase reported by Newsweek).

Debate continues, because there was a spike in hurricanes from 1995-2005 over the previous 20 years.  Is this anomalous, or is it similar to the spike that occurred in the thirties and forties?  No one is sure, but isn't this a lot more interesting than figuring out how the least funded side of a debate gets their money?  And by the way, congratulations again to MSM fact-checkers.

My layman's guide to skepticism of catastrophic man-made global warming is here.  A shorter, 60-second version of the best climate skeptic's arguments is here.

Update:  If the author bothered to have a source for her statement, it would probably be Holland and Webster, a recent study that pretty much everyone disagrees with and many think was sloppy.  And even they didn't say activity had doubled.  Note the only way to get a doubling is to cherry-pick a low decade in the first half of the century and a high decade in the last half of the century and compare just those two decades -- you can see this in the third paragraph of the Scientific American article.  This study bears all the hallmarks -- cherry-picking data, ignoring scientific consensus, massaging results to fit an agenda -- that the Newsweek authors were accusing skeptics of.

Update #2:  The best metric for hurricane activity is not strikes or numbers but accumulated cyclonic energy.  Here is the ACE trend, as measured by Florida State.  As you can see, no upward trend.


I Was a Teenage Warming-Denying Werewolf

Update:  My post on breaking news about downward revisions to US temperature numbers is here.

Well, I finally read Newsweek's long ad hominem attack on climate skeptics in the recent issue.  It is basically yet another take on the global-warming-skeptics-are-all-funded-by-Exxon meme.  The authors breathlessly "follow the money" to show how certain scientists have taken as much as $10,000 (gasp) from fossil-fuel-related companies to publish skeptical work.  Further, despite years of hand-wringing about using emotionally charged words like "terrorist" in their news articles, Newsweek happily latches onto "denier" as a label for skeptics, a word chosen to parallel the term "Holocaust denier" -- nope, no emotional content there.

I'm not even going to get into it again, except to make the same observation I have made in the past:  Arguing that the global warming debate is "tainted" by money from skeptics is like saying the 2008 presidential election is tainted by Mike Gravel's spending.  Money from skeptics is so trivial, by orders of magnitude, compared to spending by catastrophic warming believers that it is absolutely amazing folks like Newsweek could feel so threatened by it.  In my Layman's Guide To Man-Made Global Warming Skepticism, I estimated skeptics were being outspent 1000:1.  I have no way to check the figures, but Senator Inhofe's office estimated skeptics were being outspent $50 billion to $19 million, which is about the same order of magnitude as my estimate.

Given this skew in spending, and the fact that most of the major media accepts catastrophic man-made global warming as a given, this was incredible:

Look for the next round of debate to center on what Americans are
willing to pay and do to stave off the worst of global warming. So far
the answer seems to be, not much. The NEWSWEEK Poll finds less than half in favor of requiring high-mileage cars or energy-efficient appliances and buildings....

A new NEWSWEEK Poll finds that the influence of the denial machine remains strong.  Although the figure is less than in earlier polls, 39 percent of those asked say there is "a lot of disagreement among climate scientists" on the basic question of whether the planet is warming; 42 percent say there is a lot of disagreement that human activities are a major cause of global warming. Only 46 percent say the greenhouse effect is being felt today.

It has to be the "denial machine" at fault, right?  It can't possibly be because Americans think for themselves, or that they tend to reject micro-managing government regulations.  The author sounds so much like an exasperated parent:  "I kept telling my kids what's good for them and they just don't listen."

Yes, I could easily turn the tables here, and talk about the financial incentives in academia for producing headlines-grabbing results, or discuss the political motivations behind Marxist groups who have latched onto man-made global warming for their own ends.  But this does not really solve the interesting science questions, and ignores the fact that many catastrophic climate change believers are well meaning and thoughtful, just as many skeptics are.  The article did not even take the opportunity to thoughtfully discuss the range of skeptic's positions.  Some reject warming entirely, while others, like myself, recognize the impact man can have on climate, but see man's impact being well below catastrophic levels (explained here in 60 seconds).  Anyway, I don't have the energy to fisk it piece by piece, but Noel Sheppard does.

For those of you who are interested, I have a follow-up post on the science itself, which is so much more interesting than this garbage.  I use as a starting point the Newsweek author's throwaway statement that she felt required no proof, "The frequency of Atlantic hurricanes has already doubled in the last century."  (Hint:  the answer turns out to be closer to +5% than +100%.)

Bloggers Union

A number of folks are getting a good chuckle out of the suggestion from YearlyKos that bloggers form a union.  Many, like John Scalzi, have asked, why?

I think folks are missing the point.  At its heart, those making this suggestion are not bloggers who want to be in a union; these are people who want to run a union of bloggers.  They want the power and prestige that comes from being able to say "I represent the International Brotherhood of Bloggers."  They are trying to channel the dispersed power of bloggers and the trendiness of blogging (such as it is) and aggregate it to themselves.

I Too Want A Big Picture Job

TJIC has a great link to an article about a guy who doesn't want to grub around in the details, but wants a job to help a company see the big picture and move forward.  LOL.  I can't tell you how many times I get a request for that job.  People are always saying they want a job doing "business development**" or "coordination" or "performance reviews."  The common denominator when I ask people to explain to me what these jobs actually would do is that they involve driving around a lot to different recreation sites I run or might run and "checking things out."

I tell people there is no such job.  I tell them I don't have that job, and I own the company.   It's a TV-inspired view of business, like Dynasty or Dallas, where the protagonists run around and do all kinds of stuff that doesn't look like real work.

Yeah, I get to enjoy some perks now and do some cool stuff running my company.  But how did I get here?   Well, the whole story is too boring to tell, but here is one vignette:  In March of 2003 I spent about 6 straight 90-hour weeks trying to get my new company registered on the fly in 12 states and about 30 counties for tax withholding, sales tax, occupancy licenses, unemployment taxes, workers compensation, and even egg licenses just so I could use the assets I just purchased.  This was at the same time I was programming some add-ons to Quickbooks so the finances could be tracked and setting up some of our first web sites.  All while I tried to keep an unfamiliar company running.  And, oh yeah, while I was thinking all that big picture stuff.  Yes, I think about the big picture - and in fact, I have radically reshaped the positioning of this company over the past five years.  But that is what you do in the shower or on the stationary bike.

I don't explain all of this, of course, I just tell people that I don't have a big picture job to offer them.   TJIC, as usual, is a bit more direct:

Or, phrased another way: you're a useless drama queen who - instead of
compromising your principles and taking a job that doesn't match the
job title you want, and then growing the job position around your
abilities - you'd rather stay home and live off your wife's salary.

** The world's one great moment for such jobs was in the late 90's Internet craze, when every soon-to-be-on-FuckedCompany.com startup employed hordes of business development guys who ran around making grand press-release inducing deals that generated absolutely no money.  "Let's trade our proprietary online merchant services framework no one wants to buy for your proprietary online price management algorithm no one wants to buy.  OK, cool."  When I came into the waning stages of several such companies, the first thing I did was blow all these guys away, followed by a quick inventory of our soft and hard assets to see if we actually had anything anyone wanted to, you know, pay money for.  I still think the whole IT world is tainted by the memory of these glory days for produce-nothings.  Everyone wants to be Steve Jobs without having to actually first produce a salable new technology with their own hands in their garage.

Let's Make Sure Politicians Have NO Real World Experience

Via Cato and IBD:

Sen. Hillary Clinton says she wants to establish a national academy that will train public servants. Why do re-education camps come to mind? ... Somehow we doubt there will be many lectures in making government smaller, deregulating business, cutting taxes or increasing individual freedom. Is there a chance that this "new generation" attending the academy will hear a single voice that isn't hailing the glories of the nanny state? Will students being groomed for public service ever hear the names Hayek, von Mises or Friedman during their studies? ... Government at all levels is already overflowing with bureaucrats who suck up taxpayers' money and produce little, if anything, of economic value. More often, the bureaucracy actually gets in the way of economic progress.

This way, government employees can know absolutely nothing about the real world or productive enterprise, and never have to be burdened with listening to anyone in school who doesn't think government is the be-all end-all, kind of like, uh, Hillary Clinton.

Democrats and Republicans United In Grabbing Power

This weekend, the Democrats in Congress passed legislation legalizing the Administration's previous grab for new wiretapping powers, further proving that the minority party in the US government does not really object to power grabs; they just get in a huff that the other party thought of it first.  Other examples of such behavior include the Patriot Act, currently supported by Republicans and opposed by many Democrats, but most of whose provisions were originally proposed by Bill Clinton and opposed by a Republican Congress (opposition led by John Ashcroft!).

I really don't want the president, of either party, listening to my phone calls without a warrant, and that answer does not change if I am talking  to my friends in Arizona or my friends in London.

John Scalzi has a great post reacting to the line in the article above where Democrats vow to, at some time in the future, "fix" the flaws in the law they just passed.

They wouldn't have to "fix" it if they hadn't passed it.
Once again I am entirely flummoxed how it is that the Democrats, faced
with a president more chronically unpopular than Nixon, and so
politically weakened that the GOP candidates for president can barely
bring themselves to acknowledge that he exists, manage to get played by the man again and again.

If the Democrats honestly did not feel this version of the bill
should have been passed, they shouldn't have passed it. I don't see why
this is terribly complicated. And don't tell me that at least it has a
six-month "sunset" clause; all it means at this point is that in six
months, the Democrats are going to allow themselves to get played once
more, and this time they'll have given Bush the talking point of "well,
they passed it before."

My only objection to this statement is the implication that this is just a matter of the Democrats getting played.  I actually think it's exactly what the Democrats want -- they want to retain a reputation for caring about government intrusiveness without actually reducing government powers (just like Republicans wanted a reputation for reducing economic regulation without actually doing so when they were in power).  After all, the Dems expect to control the administration in two years, and they really don't want to take away any of the President's toys before that time.

Oil Trading Conspiracy -- To Reduce Prices?

A while back, I talked about a conversation I had with a friend of mine who claimed that oil prices were set $20 or more above the natural clearing price because a few oil traders controlled the market.  I argued in a long post that this was absurd -- it might be possible for a few minutes in the trading day, but over multiple years it would be impossible either to store the extra oil supply or to hide the effort to suppress supply from thousands of sources.

Well, another argument I made is that the buyers in the oil markets are big boys too, and would not tolerate paying $20 a barrel too much for years or even hours.  After all, it was silver buyers and the exchange owners who stopped the Hunt brothers' famous attempt to corner the silver market.

Anyway, one proof of this latter proposition is this:

The alleged manipulation occurred during the so-called "Platts window,"
a 30 minute interval at the end of the trading day when the energy
publishing firm Platts pulls data used to set prices for other foreign
and domestic crudes. CFTC said Marathon tried to sell oil below market
prices during the window in order to get a lower price set for oil it
intended to purchase.

Again note the timing -- trying to influence the market for minutes, not years.  If companies like Marathon are willing to risk criminal prosecution to get a lower price on the oil they purchase, they certainly are not going to sit back and tolerate a multi-year manipulation that raises prices by $20 a barrel.
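
To make the mechanics concrete, here is a minimal sketch of how this kind of window manipulation pays off.  The prices and volumes below are made up for illustration, not figures from the CFTC case: a few below-market sales drag a volume-weighted benchmark down by pennies per barrel, which is worth far more on the much larger volume the seller then buys at the index.

    def volume_weighted_price(trades):
        """trades: a list of (price_per_barrel, barrels) tuples."""
        total_value = sum(p * v for p, v in trades)
        total_volume = sum(v for _, v in trades)
        return total_value / total_volume

    # Hypothetical window: trades clustered around a true market price near $60/bbl.
    window = [(60.10, 50_000), (59.95, 40_000), (60.05, 60_000)]
    clean = volume_weighted_price(window)

    # Same window plus one seller dumping 30,000 bbl at $58.50 to pull the index down.
    manipulated = volume_weighted_price(window + [(58.50, 30_000)])

    loss_on_dump = (clean - 58.50) * 30_000               # cost of the below-market sales
    savings_on_buys = (clean - manipulated) * 2_000_000   # saved on barrels bought at the index

    print(f"Benchmark without the dump: ${clean:.2f}")
    print(f"Benchmark with the dump:    ${manipulated:.2f}")
    print(f"Lost on the dump: ${loss_on_dump:,.0f}; saved on indexed purchases: ${savings_on_buys:,.0f}")

With these assumed numbers, the manipulator gives up roughly $46,000 on the window sales to save about half a million dollars on its indexed purchases -- a bet that only works over minutes, not years.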

Adjusting Data to Get the "Right" Answer

On several occasions, I have discussed how much of the reported temperature increase worldwide over the last century is actually the result of adjustments to the raw gauge measurements.  These upward adjustments by climate scientists dwarf the measured increases.

Thanks to reader Scott Brooks, here is another such example, except this time with measurements of sea level.  Dr. Nils-Axel Morner is the head of the Paleogeophysics and Geodynamics department at Stockholm University in Sweden.  He has studied sea-level changes for 35 years (emphasis added).

Another
way of looking at what is going on is the tide gauge. Tide gauging is
very complicated, because it gives different answers for wherever you
are in the world. But we have to rely on geology when we interpret it.
So, for example, those people in the IPCC [Intergovernmental Panel on
Climate Change], choose Hong Kong, which has six tide gauges, and they
choose the record of one, which gives 2.3 mm per year rise of sea
level. Every geologist knows that that is a subsiding area. It's the
compaction of sediment; it is the only record which you shouldn't use.
And if that figure [for sea level rise] is correct, then Holland would not be subsiding, it
would be uplifting.

And
that is just ridiculous. Not even ignorance could be responsible for a
thing like that. So tide gauges, you have to treat very, very
carefully. Now, back to satellite altimetry, which shows the water, not
just the coasts, but in the whole of the ocean. And you measure it by
satellite. From 1992 to 2002, [the graph of the sea level] was a
straight line, variability along a straight line, but absolutely no
trend whatsoever. We could see those spikes: a very rapid rise, but
then in half a year, they fall back again. But absolutely no trend, and
to have a sea-level rise, you need a trend.

Then, in 2003, the same data set, which in their [IPCC's] publications, on their website, was a straight line -- suddenly it changed, and showed a very strong line of uplift, 2.3 mm per year, the same as from the tide gauge. And that didn't look so nice. It looked as though they had recorded something; but they hadn't recorded anything. It was the original one which they had suddenly twisted up, because they entered a correction factor, which they took from the tide gauge. So it was not a measured thing, but a figure introduced from outside. I accused them of this at the Academy of Sciences in Moscow. I said, you have introduced factors from outside; it's not a measurement. It looks like it is measured from the satellite, but you don't say what really happened. And they answered that we had to do it, because otherwise we would not have gotten any trend!

That
is terrible! As a matter of fact, it is a falsification of the data
set. Why? Because they know the answer. And there you come to the
point: They know the answer; the rest of us, we are searching for the
answer. Because we are field geologists; they are computer scientists.
So all this talk that sea level is rising, this stems from the computer
modeling, not from observations. The observations don't find it!

I have been the expert reviewer for the IPCC, both in 2000 and last year. The first time I read it, I was exceptionally surprised. First of all, it had 22 authors, but none of them -- none -- were sea-level specialists. They were given this mission because they promised to answer the right thing. Again, it was a computer issue. This is the typical thing: the meteorological community works with computers, simple computers.

Geologists
don't do that! We go out in the field and observe, and then we can try
to make a model with computerization; but it's not the first thing.
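
The mechanism Morner describes is easy to illustrate with a toy calculation.  The sketch below uses made-up numbers, not the actual altimetry data: a trendless satellite series acquires, once a constant 2.3 mm/yr "correction" is added, exactly the trend of the correction.

    import random

    random.seed(0)
    years = list(range(1992, 2003))

    # Trendless raw series: noise around a constant mean sea level (in mm).
    raw = [random.uniform(-5, 5) for _ in years]

    # Apply an external "correction" of 2.3 mm per year, as in the account above.
    corrected = [x + 2.3 * i for i, x in enumerate(raw)]

    def ols_slope(xs, ys):
        """Ordinary least-squares slope of ys on xs."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    print(f"Trend of raw series:       {ols_slope(years, raw):+.2f} mm/yr")
    print(f"Trend of corrected series: {ols_slope(years, corrected):+.2f} mm/yr")

The "trend" in the corrected series is simply the adjustment echoed back; nothing about the underlying measurements changed.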

I am working on the next version of my layman's guide to skeptics' arguments against catastrophic man-made global warming, which you can find here.

Arizona: Saving Northerners' Lives Since 1912

New study results, via Tyler Cowen:

We estimate the effect of extreme weather on life expectancy in the US. ... However, the
increase in mortality following extreme heat appears entirely driven by
temporal displacement, while the increase in mortality following
extreme cold is long lasting. The aggregate effect of cold on mortality
is quantitatively large. We estimate that the number of annual deaths
attributable to cold temperature is 27,940 or 1.3% of total deaths in
the US. This effect is even larger in low income areas. Because the
U.S. population has been moving from cold Northeastern states to the
warmer Southwestern states, our findings have implications for
understanding the causes of long-term increases in life expectancy. We
calculate that every year, 5,400 deaths are delayed by changes in
exposure to cold temperature induced by mobility.
These longevity gains associated with long-term trends in geographical mobility account for 8%-15% of the total gains in life expectancy experienced by the US population over the past 30 years. Thus mobility is an important but
previously overlooked determinant of increased longevity in the United
States. We also find that the probability of moving to a state that has
fewer days of extreme cold is higher for the age groups that are
predicted to benefit more in terms of lower mortality compared to the
age groups that are predicted to benefit less.

You're welcome, America. 
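
As a quick sanity check on the quoted figures (the arithmetic here is mine, using only the numbers in the excerpt above):

    cold_deaths = 27_940          # annual US deaths attributed to cold, per the study
    cold_share = 0.013            # stated as 1.3% of total US deaths

    implied_total_deaths = cold_deaths / cold_share
    print(f"Implied total annual US deaths: {implied_total_deaths:,.0f}")   # roughly 2.1 million

    delayed_by_migration = 5_400  # deaths delayed each year by moves to warmer states
    print(f"Share of cold deaths offset by migration: {delayed_by_migration / cold_deaths:.0%}")

The implied total of about 2.1 million deaths a year is in the right ballpark for the US, and migration to warmer states offsets roughly a fifth of the cold-related deaths.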

Time to Switch From Meese's to Gipper's

From Daniel Griswold at Cato:

One sure sign of a hyperinflation is that the central bank must
issue new currency notes in ever higher denominations so that people
won't have to carry bags or wheelbarrows of money around to make
everyday purchases. Sure enough, the government of Zimbabwe is now
wrestling with that very question. According to the FT story:

The launch yesterday of a new large-denomination bank note of Z$200,000 -- worth [US$13] at the official exchange rate and [US$1.30] at the more realistic parallel rate -- underlines the disarray.
The central bank had wanted to issue a Z$500,000 note, but a bank
official said this was vetoed by the finance ministry because senior
staff thought such a large denomination would have reinforced an
impression that inflation was out of control.

At a 13,000 percent rate, that cat is probably already out of the bag.

What a mess.  Explanation of the post title here.
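
Just how big a mess is clear from a little back-of-the-envelope arithmetic on the numbers quoted above (my calculation, not Griswold's or the FT's):

    note_zwd = 200_000       # face value of the new note
    official_usd = 13.0      # value at the official exchange rate
    parallel_usd = 1.30      # value at the parallel-market rate
    annual_inflation = 130.0 # 13,000 percent, expressed as a multiple added per year

    print(f"Implied official rate: Z${note_zwd / official_usd:,.0f} per US$")
    print(f"Implied parallel rate: Z${note_zwd / parallel_usd:,.0f} per US$")

    # At 13,000% a year, prices multiply by 131x annually; the note's real value
    # after six months is its parallel-rate value divided by sqrt(131).
    growth_factor = 1 + annual_inflation
    after_six_months = parallel_usd / growth_factor ** 0.5
    print(f"Real value of the note after six months: about US${after_six_months:.2f}")

In other words, the biggest note in circulation buys a dollar and change today and roughly a dime's worth six months from now, even if inflation gets no worse.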

Steve McIntyre Comments on Historical Temperature Adjustments

Steve McIntyre, the statistician who called into question much of the methodology behind the Mann Hockey Stick chart, has some observations on the adjustments to US temperature records that I discussed here and here.

Eli Rabett and Tamino have both advocated faith-based climate
science in respect to USHCN and GISS adjustments. They say that the
climate "professionals" know what they're doing; yes, there are
problems with siting and many sites do not meet even minimal compliance
standards, but, just as Mann's "professional" software was able to
extract a climate signal from the North American tree ring data, so
Hansen's software is able to "fix" the defects in the surface sites.
"Faith-based" because they do not believe that Hansen has any
obligation to provide anything other than a cursory description of his
software or, for that matter, the software itself. But if they are
working with data that includes known bad data, then critical
examination of the adjustment software becomes integral to the
integrity of the record - as there is obviously little integrity in
much of the raw data.

While acolytes may call these guys "professionals", the process of
data adjustment is really a matter of statistics and even accounting.
In these fields, Hansen and Mann are not "professionals" - Mann
admitted this to the NAS panel explaining that he was "not a
statistician". As someone who has read their works closely, I do not
regard any of these people as "professional". Much of their reluctance
to provide source code for their methodology arises, in my opinion,
because the methods are essentially trivial and they derive a certain
satisfaction out of making things appear more complicated than they
are, a little like the Wizard of Oz. And like the Wizard of Oz, they
are not necessarily bad men, just not very good wizards.

He goes on to investigate a specific case the "professionals" hold up as a positive example, demonstrating that they appear to have a Y2K error in their algorithm.  This is difficult to do because, like Mann, the government scientists maintaining a government temperature database built from government sites paid for with taxpayer funds refuse to release their methodology or algorithms for inspection.

In the case cited, the "professionals" also make adjustments that imply the site has experienced decreasing urbanization over the last 100 years, something I am not sure one can say about any site in the US except perhaps a few Colorado ghost towns.  The "experts" also fail to take the basic step of actually analyzing the site itself, which, if visited, would reveal recently installed air conditioning units venting hot air onto the temperature instrument.

A rebuttal, arguing that poor siting of temperature instruments is OK and does not affect the results, is here.  I find rebuttals of this sort really distressing.  I studied physics for a while before switching to engineering, and really small procedural mistakes in measurement could easily invalidate one's results.  I find it amazing that climate scientists seek to excuse massive mistakes in measurement.  I'm sorry, but in no other branch of science are results considered "settled" when the experimental noise is greater than the signal.  I would really, really, just for once, love to see an anthropogenic global warming promoter say "well, I don't think the siting will change the results, but you are right, we really need to go back and take another pass at correcting historical temperatures based on more detailed analysis of the individual sites."
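
To see why siting matters, here is a toy illustration with made-up numbers (not real station data): a station with no underlying warming at all acquires a large spurious trend when a nearby air-conditioning exhaust, hypothetically installed in 1985, adds a 0.5 C bias to every subsequent reading.

    def ols_slope(xs, ys):
        """Ordinary least-squares slope of ys on xs."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    years = list(range(1950, 2008))
    true_anomaly = [0.0 for _ in years]                   # assume no real warming at this site
    ac_bias = [0.5 if y >= 1985 else 0.0 for y in years]  # A/C exhaust installed nearby in 1985
    measured = [t + b for t, b in zip(true_anomaly, ac_bias)]

    print(f"True trend:     {ols_slope(years, true_anomaly) * 100:+.2f} C per century")
    print(f"Measured trend: {ols_slope(years, measured) * 100:+.2f} C per century")

With these assumptions, a single half-degree step from bad siting shows up as more than a degree per century of "warming" -- larger than the signal everyone is arguing about, which is exactly the noise-versus-signal problem.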

State Run Medicine: Bureaucrat Salaries Trump Patients

Italian Daniele Capezzone writes in the WSJ($):

This situation is especially dire in Italy. The
government has capped spending on pharmaceuticals at 13% of total
health-care expenditures while letting expenses for infrastructure and
staff skyrocket. From 2001 to 2005, general health expenses in Italy
grew by 31% while expenditure on medicines increased a mere 1.7%.
Italian patients might well have been better off if the reverse was the
case, but the state bureaucrats who make these decisions refuse to
acknowledge the benefits of advanced drugs....

Part of the problem is that regional authorities
manage most of Italy's health-care spending. A strike by health-care
personnel has an immediate impact on the region, but the consequences
of cutting the budget for medicines are only felt in the long term and
distributed across the nation. Hence, local authorities continue to
focus on personnel and infrastructure in an age when medical research
has become the most efficient way to improve public health.

Gee, government officials more concerned about raising government salaries than about performance?  Couldn't possibly happen in the US, could it?  This is classic government management -- freeze or reduce the expenses that actually provide customer service, and raise administrative costs and salaries many times faster than inflation.  This is exactly what has happened in public schools, as infrastructure and teaching-aid investments have been deferred in favor of raising salaries and adding untold numbers of vice-principals and administrators to every school.

But the government is focused on the long-term while greedy old for-profits are short-term focused.  Right?

Unfortunately, most of today's cutting-edge research is conducted
outside Europe, which was once a pioneer in this field. About 78% of
global biotechnology research funds are spent in the U.S., compared to
just 16% in Europe. Americans therefore have better access to modern
drugs. One result is that in the U.S., the annual death rate from
cancer is 196 per 100,000 people, compared to 235 in Britain, 244 in
France, 270 in Italy and 273 in Germany.

Update:  Ronald Bailey points out that drug re-importation is just a back-door way to impose drug price controls in the US, effectively applying the most aggressive price-control regime for each drug worldwide to US prices.  Right now, drug companies tolerate price controls set 2/3 or more below US prices because they can still make money at the margin -- the marginal cost of producing a drug is far lower than the total cost once R&D and the like are included.  However, they cannot survive with those prices applied to US demand.  Remember, drug companies have profit margins averaging in the 18-20% range.  Perhaps you might argue they should only be making 10%, but that only gives you room for an imposed 10% price cut, not the huge cuts politicians would like.  And you would get even that only at tremendous cost in terms of lost freedoms and demolished incentives for new drug development.
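
The marginal-cost point can be sketched with stylized numbers (mine, not Bailey's, and not actual industry figures): a foreign price above marginal cost still contributes something toward fixed R&D costs, but the same price applied to all demand, most of which is in the US, no longer covers them.

    fixed_costs = 800_000_000               # assumed R&D, trials, and overhead for one drug ($)
    marginal_cost = 2.00                    # assumed cost to produce one more pill ($)
    us_price, foreign_price = 10.00, 3.50   # foreign price roughly 2/3 below the US price
    us_units, foreign_units = 150_000_000, 50_000_000

    def annual_profit(price_in_us, price_abroad):
        revenue = price_in_us * us_units + price_abroad * foreign_units
        variable_cost = marginal_cost * (us_units + foreign_units)
        return revenue - variable_cost - fixed_costs

    print(f"With price controls abroad only:  ${annual_profit(us_price, foreign_price):,.0f}")
    print(f"Foreign price applied everywhere: ${annual_profit(foreign_price, foreign_price):,.0f}")

Under these assumptions the company earns a healthy profit while tolerating controlled prices abroad, but swings to a large loss if the controlled price is imposed on US sales as well -- which is what re-importation would effectively do.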