Archive for the ‘Climate’ Category.

It is Important to Call Foul on One's Allies as Well, and This From Christopher Monckton is Bad Stuff

Christopher Monckton has been a very public supporter of the climate skeptic position.  I think he sometimes gets his science wrong, but he is glib and entertaining, and by his position as a peer of the realm he gets media space not available to many of us.

But this is bad, bad, bad.  He is calling for using British libel law against an alarmist who merely disagrees.  I have no problem with most of his factual defenses of Richard Lindzen.  But the points he labels "lies" are more accurately described as areas where people disagree on data sources and interpretation.  Turning this into libel, under the egregiously onerous British libel laws, is a terrible precedent.

The climate debate is already over-full with vilification and ad hominem attacks.  The last thing we need is to throw British courts into the equation.

Congratulations to Nature Magazine for Catching up to Bloggers

The journal Nature has finally caught up to the fact that ocean cycles may influence global surface temperature trends.  Climate alarmists refused to acknowledge this when temperatures were rising and the cycles were in their warm phase, but now are grasping at these cycles to explain the 15+ year hiatus in warming, as a way to avoid abandoning high climate sensitivity assumptions (ie the sensitivity of global temperatures to CO2 concentrations, which IMO is exaggerated by implausible assumptions of positive feedback).

Here is the chart from Nature:

click to enlarge


I cannot find my first use of this chart, but here is a version I was using over 5 years ago.  I know I was using it long before that.

click to enlarge


It will be interesting to see if they find a way to blame cycles for cooling in the last 10-15 years but not for the warming in the 80's and 90's.

Next step -- alarmists have the same epiphany about the sun, and blame non-warming on a low solar cycle without simultaneously giving previous high solar cycles any credit for warming.  For Nature's benefit, here is another chart they might use (from the same 2008 blog post).  The number 50 below is selected arbitrarily, but does a good job of highlighting solar activity in the second half of the 20th century vs. the first half.

click to enlarge


Global Warming: The Unfalsifiable Hypothesis

This is hilarious.  Apparently the polar vortex proves whatever hypothesis you are trying to prove, either cooling or warming:

Steven Goddard of the Real Science blog has the goods on Time magazine.  From the 1974 Time article “Another Ice Age?”:

Scientists have found other indications of global cooling. For one thing there has been a noticeable expansion of the great belt of dry, high-altitude polar winds —the so-called circumpolar vortex—that sweep from west to east around the top and bottom of the world.

And guess what Time is saying this week?  Yup:

But not only does the cold spell not disprove climate change, it may well be that global warming could be making the occasional bout of extreme cold weather in the U.S. even more likely. Right now much of the U.S. is in the grip of a polar vortex, which is pretty much what it sounds like: a whirlwind of extremely cold, extremely dense air that forms near the poles. Usually the fast winds in the vortex—which can top 100 mph (161 k/h)—keep that cold air locked up in the Arctic. But when the winds weaken, the vortex can begin to wobble like a drunk on his fourth martini, and the Arctic air can escape and spill southward, bringing Arctic weather with it. In this case, nearly the entire polar vortex has tumbled southward, leading to record-breaking cold.

Wow, Thomas Friedman is A Total Joke

I missed this editorial from back in April, but it is a classic.  If you want one of the greatest illustrations of the phrase "if all you have is a hammer, everything looks like a nail", here it is.

UNTIL we fully understand what turned two brothers who allegedly perpetrated the Boston Marathon bombings into murderers, it is hard to make any policy recommendation other than this: We need to redouble our efforts to make America stronger and healthier so it remains a vibrant counterexample to whatever bigoted ideology may have gripped these young men. With all our warts, we have built a unique society — a country where a black man, whose middle name is Hussein, whose grandfather was a Muslim, can run for president and first defeat a woman in his own party and then four years later a Mormon from the opposition, and no one thinks twice about it. With so many societies around the world being torn apart, especially in the Middle East, it is vital that America survives and flourishes as a beacon of pluralism....

So what to do?  We need a more “radical center” — one much more willing to suggest radically new ideas to raise revenues, not the “split-the-difference-between-the-same-old-options center.” And the best place to start is with a carbon tax.

Triangle Trade and Physics

You have heard of the Atlantic triangle trade in school.  It is always discussed in terms of its economic logic (e.g. English rum to African slaves to New World sugar).  But the trade has a physical logic as well in the sailing ship era.  Current wind patterns:

Earth-wind-map


Real time version here.  Via Flowing Data.

Seriously, click on the real time link.  Even if you are jaded, probably the coolest thing you will see today.  One interesting thing to look at -- there is a low point in the spine of the mountains of Mexico west of Yucatan.  Look at the wind pour through it like air out of a balloon.

Explaining the Flaw in Kevin Drum's (and Apparently Science Magazine's) Climate Chart

I won't repeat the analysis, you need to see it here.  Here is the chart in question:

la-sci-climate-warming

My argument is that the smoothing and relatively low sampling intervals in the early data very likely mask variations similar to what we are seeing in the last 100 years -- ie they greatly exaggerate the smoothness of history and create a false impression that recent temperature changes are unprecedented (also the grey range bands are self-evidently garbage, but that is another story).

Drum's response was that "it was published in Science."  Apparently, this sort of appeal to authority is what passes for data analysis in the climate world.

Well, maybe I did not explain the issue well.  So I found a political analysis that may help Kevin Drum see the problem.  This is from an actual blog post by Dave Manuel (this seems to be such a common data analysis fallacy that I found an example on the first page of my first Google search).  It is an analysis of average GDP growth by President.  I don't know this Dave Manuel guy and can't comment on the data quality, but let's assume the data is correct for a moment.  Quoting from his post:

Here are the individual performances of each president since 1948:

1948-1952 (Harry S. Truman, Democrat), +4.82%

1953-1960 (Dwight D. Eisenhower, Republican), +3%

1961-1964 (John F. Kennedy / Lyndon B. Johnson, Democrat), +4.65%

1965-1968 (Lyndon B. Johnson, Democrat), +5.05%

1969-1972 (Richard Nixon, Republican), +3%

1973-1976 (Richard Nixon / Gerald Ford, Republican), +2.6%

1977-1980 (Jimmy Carter, Democrat), +3.25%

1981-1988 (Ronald Reagan, Republican), 3.4%

1989-1992 (George H. W. Bush, Republican), 2.17%

1993-2000 (Bill Clinton, Democrat), 3.88%

2001-2008 (George W. Bush, Republican), +2.09%

2009 (Barack Obama, Democrat), -2.6%

Let's put this data in a chart:

click to enlarge


Look, a hockey stick, right?   Obama is the worst, right?

In fact there is a big problem with this analysis, even if the data is correct.  And I bet Kevin Drum can get it right away, even though it is the exact same problem as on his climate chart.

The problem is that a single year of Obama's is compared to four or eight years for other presidents.  These earlier presidents may well have had individual down economic years - in fact, Reagan's first year was almost certainly a down year for GDP.  But that kind of volatility is masked because the data points for the other presidents represent much more time, effectively smoothing variability.

Now, this chart has a difference in sampling frequency of 4-8x between the previous presidents and Obama.  This made a huge difference here, but it is a trivial difference compared to the roughly 1 million times greater sampling frequency of modern temperature data vs. historical data obtained by looking at proxies (such as ice cores and tree rings).  And, unlike this chart, the method of sampling is very different across time with temperature - thermometers today are far more reliable and linear measurement devices than trees or ice.  In our GDP example, this problem roughly equates to trying to compare the GDP under Obama (with all the economic data we collate today) to, say, the economic growth rate under Henry VIII.  Or perhaps under Ramses II.   If I showed that GDP growth in a single month under Obama was less than the average over 66 years under Ramses II, and tried to draw some conclusion from that, I think someone might challenge my analysis.  Unless of course it appears in Science, in which case it must be beyond question.
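The averaging trap above is easy to show with a few lines of code.  This is a toy sketch with invented annual growth numbers (not the real GDP figures quoted earlier): a full eight-year term containing a down year still averages out positive, while a one-year "term" exposes its down year directly.

```python
# Toy illustration of the sampling problem: a full-term *average* hides an
# individual down year that a single-year sample exposes. Annual numbers
# below are invented for illustration, not real GDP data.

def avg(xs):
    return sum(xs) / len(xs)

full_term = [2.5, -1.9, 4.5, 7.2, 4.2, 3.5, 3.5, 4.2]  # 8 annual rates, one negative
single_year = [-2.6]                                   # a one-year "term"

term_avg = avg(full_term)     # the -1.9% year vanishes into a positive average
print(round(term_avg, 2))     # 3.46
print(avg(single_year))       # -2.6
```

Comparing the two outputs point-for-point makes the one-year term look uniquely bad, even though the longer series contains a similar down year that the averaging erased.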

If You Don't Like People Saying That Climate Science is Absurd, Stop Publishing Absurd Un-Scientific Charts

Kevin Drum can't believe the folks at the National Review are still calling global warming science a "myth".  As is usual for global warming supporters, he wraps himself in the mantle of science while implying that those who don't toe the line on the declared consensus are somehow anti-science.

Readers will know that as a lukewarmer, I have as little patience with outright CO2 warming deniers as I do with those declaring a catastrophe  (for my views read this and this).  But if you are going to simply be thunderstruck that some people don't trust climate scientists, then don't post a chart that is a great example of why people think that a lot of global warming science is garbage.  Here is Drum's chart:

la-sci-climate-warming


The problem is that his chart is a splice of multiple data series with very different time resolutions.  The series up to about 1850 has data points taken at best every 50 years, and likely at 100-200 year or longer intervals.  It is smoothed so that temperature shifts shorter than 200 years or so simply won't show up.

In contrast, the data series after 1850 has data sampled every day or even every hour -- a sampling interval 6 orders of magnitude (over a million times) finer.  It is, by definition, smoothed on a time scale substantially shorter than the rest of the data.

In addition, these two data sets use entirely different measurement techniques.  The modern data comes from thermometers and satellites, measurement approaches that we understand fairly well.  The earlier data comes from some sort of proxy analysis (ice cores, tree rings, sediments, etc.).  While we know these proxies generally change with temperature, there are still a lot of questions as to their accuracy and, perhaps more importantly for us here, whether they vary linearly or have any sort of attenuation of the peaks.  For example, recent warming has not shown up as strongly in tree ring proxies, raising the question of whether they may also be missing rapid temperature changes or peaks in earlier data for which we don't have thermometers to back-check them (this is an oft-discussed problem called proxy divergence).

The problem is not the accuracy of the data for the last 100 years, though we could quibble that it is perhaps exaggerated by a few tenths of a degree.  The problem is with the historic data and using it as a valid comparison to recent data.  Even a 100-year increase of about a degree would, in the data series before 1850, be at most a single data point.  If the sampling is on 200-year intervals, there is a 50-50 chance a 100-year spike would be missed entirely in the historic data.  And even if it were in the data as a single data point, it would be smoothed out at this data scale.

Do you really think that there was never a 100-year period in those last 10,000 years where temperatures varied by more than 0.1F, as implied by this chart?  This chart has a data set that is smoothed to signals no finer than about 200 years and compares it to recent data with no such filter.  It is like comparing the annualized GDP increase for the last quarter to the average annual GDP increase for the entire 19th century.   It is easy to demonstrate how silly this is.  If you cut the chart off at, say, 1950, before much anthropogenic effect would have occurred, it would still look like this, with an anomalous spike at the right (just a bit shorter).  If you believe this analysis, you have to believe that there is an unprecedented spike at the end even without anthropogenic effects.
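The coarse-sampling-plus-smoothing argument can be demonstrated with a toy simulation.  All the numbers here (spike size, record length, sampling interval, smoothing window) are illustrative assumptions, not a reconstruction of any real proxy series: a 1-degree, 100-year spike in a 10,000-year record sampled every 200 years is either missed entirely or reduced to one lone point, which a modest moving average then flattens to a fraction of its true size.

```python
# Toy simulation: a 1.0-degree spike lasting 100 years inside a flat
# 10,000-year record, sampled every 200 years. Depending on where the
# sampling grid falls, the spike is missed entirely or caught by exactly
# one point; smoothing then shrinks that one point dramatically.

def temp(y):
    # flat baseline with a 1.0-degree spike from year 5000 to 5100
    return 1.0 if 5000 <= y < 5100 else 0.0

# grid offset so no sample lands inside the 100-year spike window
samples_miss = [temp(y) for y in range(100, 10_000, 200)]
# grid that catches the spike with exactly one sample
samples_hit = [temp(y) for y in range(0, 10_000, 200)]

def moving_avg(xs, w=5):
    # simple centered moving average over w points
    half = w // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

print(max(samples_miss))             # 0.0 -- the spike is invisible
print(samples_hit.count(1.0))        # 1   -- at best one lone data point
print(max(moving_avg(samples_hit)))  # 0.2 -- smoothed to a fifth of its size
```

Run against data sampled daily, the same spike would of course appear at full height, which is the apples-to-oranges comparison the spliced chart makes.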

There are several other issues with this chart that make it laughably bad for someone to use in the context of arguing that he is the true defender of scientific integrity:

  • The grey range band is if anything an even bigger scientific absurdity than the main data line.  Are they really trying to argue that no year, decade, or even whole century deviated from a 0.7F baseline anomaly by more than 0.3F over the entire 4,000-year period from 7,500 years ago to 3,500 years ago?  I will bet just about anything that the error bars on this analysis should be more than 0.3F, much less the range of variability around the mean.  Any natural scientist worth his or her salt would laugh this out of the room.  It is absurd.  But here it is presented as climate science in the exact same article in which the author expresses dismay that anyone would distrust climate science.
  • A more minor point, but one that disguises the sampling frequency problem a bit: the last dark brown shaded area on the right, labelled "the last 100 years", is actually at least 300 years wide.  Based on the scale, a hundred years should be about one dot on the x axis.  This means that 100 years is less than the width of the red line, and the last 60 years, the real anthropogenic period, is less than half the width of the red line.  We are talking about a temperature change whose duration is half the width of the red line, which hopefully gives you some idea why I say the data sampling and smoothing processes would disguise any past periods similar to the most recent one.

Update:  Kevin Drum posted a defense of this chart on Twitter.  Here it is:  "It was published in Science."   Well folks, there is the climate debate in a nutshell.   A 1,000-word dissection of what appears to be wrong with a particular analysis, rebutted by a five-word appeal to authority.

Update #2:  I have explained the issue with a parallel flawed analysis from politics where Drum is more likely to see the flaws.

Want to Save The Ice in the Arctic?

I wrote below about Chinese pollution, but here is one other thought.  Shifting Chinese focus from reducing CO2 with unproven 21st century technology to reducing particulates with 1970s technology would be a great boon for its citizens.  But it could well have one other effect:

It might reverse the warming in the Arctic.

The reduction of Arctic ice sheet size in the summer, and the warming of the Arctic over the last several decades, is generally attributed to greenhouse warming.  But there are reasons to doubt that CO2 is the whole story.   One is that the sea ice extent in Antarctica has actually been growing at the same time the Arctic sea ice cover has been shrinking.  Maybe there is another explanation, one that affects only the northern hemisphere and not the southern?

I don't know if you have snow right now or even ever get snow.  If you do, find some black dust, like coal dust or dark dirt, and sprinkle it on a patch of snow.  Then come back tomorrow.  What will you find?  The patch of snow you sprinkled with dark dust melted a lot in comparison to the rest of the snow.  This is an albedo effect.  Snow takes a while to melt because it reflects rather than absorbs solar radiation.  Putting black dust on it changes that equation, and suddenly solar radiation is absorbed as heat, and the snow melts.  Fast.  I know this because I run a sledding hill in the wintertime, where snow falls on a black cinder hill.  The snow will last until even the smallest patch of black cinders is exposed.  Once exposed, that small hole will grow like a cancer, as it absorbs solar energy and pumps it into the surrounding ground.
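The arithmetic behind this is simple.  As a back-of-the-envelope sketch (the albedo values are rough, commonly cited ballpark figures, and the 500 W/m^2 insolation is an arbitrary assumption, not a measurement):

```python
# Back-of-the-envelope albedo arithmetic: the fraction (1 - albedo) of
# incoming solar radiation is absorbed as heat. Albedo values are rough
# ballpark figures (fresh snow ~0.85, soot-darkened snow ~0.5); the
# 500 W/m^2 insolation is an arbitrary assumed midday flux.

def absorbed_flux(albedo, insolation=500.0):
    return (1.0 - albedo) * insolation  # W/m^2 absorbed as heat

clean_snow = absorbed_flux(0.85)  # ~75 W/m^2
sooty_snow = absorbed_flux(0.50)  # ~250 W/m^2
print(sooty_snow / clean_snow)    # ~3.3x more energy absorbed
```

Roughly tripling the absorbed energy is why the darkened patch melts so much faster, and why even a thin film of soot on sea ice matters.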

By the way, if you have no snow, Accuweather.com did the experiment for you.  See here.  Very nice pictures that make the story really clear.

So consider this mess:

china_pollution_ap971430398958_620x350

Eventually that mess blows away.  Where does it end up?  Well, a lot of it ends up deposited in the Arctic, on top of the sea ice and Greenland ice sheet.

There is a growing hypothesis that this black carbon deposited on the ice from China is causing much of the sea ice to melt faster.  And as the ice sheet melts faster, this lowers the albedo of the arctic, and creates warming.  In this hypothesis, warming follows from ice melting, rather than vice versa.

How do we test this?  Well, the best way would be to go out and actually measure the deposits and calculate the albedo changes from them.  My sense is that this work is starting to be done (example), but it has been slow, because everyone interested in Arctic ice of late is a strong global warming proponent with an incentive not to find an alternative explanation for melting ice.

But here are two quick mental experiments we can do:

  1. We already mentioned one test.  Wind patterns cause most pollution to remain within the hemisphere (northern or southern) where it was generated.  So we would expect black carbon ice melting to be limited to the Arctic and not be seen in the Antarctic.  This fits observations.
  2. In the winter, as the sea ice is growing, we would expect new ice to be free of particulate deposits and any new deposits to be quickly covered in snow.  This would mean that winter ice extents should be about the same as they were historically, with most of the ice extent reduction coming in the summer.  Again, this is exactly what we see.

This is by no means a proof -- there are other explanations for the same data.  But I am convinced we would see at least a partial sea ice recovery in the Arctic if China could get their particulate emissions under control.

Update:  Melt ponds in Greenland are black with coal dust


Irony

It turns out that the US is one of the few industrialized nations to meet the terms of the Kyoto Protocol (reduce CO2 emissions to 1997 levels) despite the fact that we never signed it or did anything to try to meet the goals.

Thank the recession and probably more importantly the natural gas and fracking revolution.  Fracking will do more to reduce CO2 than the entire sum of government and renewable energy projects (since a BTU from natural gas produces about half the CO2 of a BTU from coal).  Of course, environmentalists oppose fracking.  They would rather carpet the desert with taxpayer-funded solar panels and windmills than allow the private sector to solve the problem using 50-year-old technology.

Climate Humor from the New York Times

Though this is hilarious, I am pretty sure Thomas Lovejoy is serious when he writes:

But the complete candor and transparency of the [IPCC] panel’s findings should be recognized and applauded. This is science sticking with the facts. It does not mean that global warming is not a problem; indeed it is a really big problem.

This is a howler.  Two quick examples.  First, every past IPCC report summary has had estimates for climate sensitivity, ie the amount of temperature increase they expect for a doubling of CO2 levels.  Coming into this IPCC report, emerging evidence from recent studies has been that the climate sensitivity is much lower than previous estimates.  So what did the "transparent" IPCC do?  They, for the first time, just left out the estimate rather than be forced to publish one that was lower than the last report.

The second example relates to the fact that temperatures have been flat over the last 15-17 years and as a result, every single climate model has overestimated temperatures.  By a lot. In a draft version, the IPCC created this chart (the red dots were added by Steve McIntyre after the chart was made as the new data came in).

figure-1-4-models-vs-observations-annotated (1)


This chart was consistent with a number of peer-reviewed studies that assessed the performance of climate models.  Well, this chart was a little too much "candor" for the transparent IPCC, so they replaced it with this chart in the final draft:

figure-1-4-final-models-vs-observations


What a mess!  They have made the area we want to look at between 1990 and the present really tiny, and then they have somehow shifted the forecast envelopes down on several of the past reports so that suddenly current measurements are within the bands.   They also hide the bottom of the fourth assessment band (orange FAR) so you can't see that observations are out of the envelope of the last report.  No one so far can figure out how they got the numbers in this chart, and it does not match any peer-reviewed work.  Steve McIntyre is trying to figure it out.

OK, so now that we are on the subject of climate models, here is the second hilarious thing Lovejoy said:

Does the leveling-off of temperatures mean that the climate models used to track them are seriously flawed? Not really. It is important to remember that models are used so that we can understand where the Earth system is headed.

Does this make any sense at all?  Try it in a different context:  The Fed said the fact that their economic models failed to predict what actually happened over the last 15 years is irrelevant because the models are only used to see where the economy is headed.

The consistent theme of this report is declining certainty and declining chances of catastrophe, two facts that the IPCC works as hard as possible to obfuscate but which still come out pretty clearly as one reads the report.

The Key Disconnect in the Climate Debate

Much of the climate debate turns on a single logical fallacy.  This fallacy is clearly on display in some comments by UK Prime Minister David Cameron:

It’s worth looking at what this report this week says – that [there is a] 95 per cent certainty that human activity is altering the climate. I think I said this almost 10 years ago: if someone came to you and said there is a 95 per cent chance that your house might burn down, even if you are in the 5 per cent that doesn’t agree with it, you still take out the insurance, just in case.”

"Human activity altering climate" is not the same thing as an environmental catastrophe (or one's house burning down).  The statement that he is 95% certain that human activity is altering climate is one that most skeptics (including myself) are 100% sure is true.  There is evidence that human activity has been altering the climate since the dawn of agriculture.  Man's changing land uses have been demonstrated to alter climate, and certainly man's incremental CO2 is raising temperatures somewhat.

The key question is -- by how much?  This is a totally different question, and, as I have written before, is largely dependent on climate theories unrelated to greenhouse gas theory, specifically that the Earth's climate system is dominated by large positive feedbacks.  (Roy Spencer has a good summary of the issue here.)

The catastrophe is so uncertain that, for the first time, the IPCC left estimates of climate sensitivity to CO2 out of its recently released summary for policy makers, mainly because it was not ready to (or did not want to) deal with a number of recent studies yielding sensitivity numbers well below catastrophic levels.  Further, the IPCC nearly entirely punted on the key question of how it can reconcile its past high-sensitivity, high-feedback temperature forecasts with relatively modest measured warming rates, including a 15+ year pause in warming which none of its models predicted.

The overall tone of the new IPCC report is one of declining certainty -- they are less confident of their sensitivity numbers and less confident of their models which have all been a total failure over the last 15 years. They have also backed off of other statements, for example saying they are far less confident that warming is leading to severe weather.

Most skeptics are sure mankind is affecting climate somewhat, but believe that this effect will not be catastrophic.  On both fronts, the IPCC is slowly catching up to us.

Hearing What You Want to Hear from the Climate Report

After over 15 years of no warming, which the IPCC still cannot explain, and with climate sensitivity numbers dropping so much in recent studies that the IPCC left climate sensitivity estimates out of their summary report rather than address the drop, the Weather Channel is running this headline on their site:

weatherch


The IPCC does claim more confidence, from 90% to 95%, that warming over the past 60 years is partly or mostly due to man (I have not yet seen the exact wording they landed on).  But this is odd given that the warming all came from 1978 to 1998 (see for yourself in the temperature data about halfway through this post).  Temperatures were flat or cooling for the other 40 years of the period.  The IPCC cannot explain these 40 years of no warming in the context of high temperature sensitivities to CO2.  And they can't explain why they can be 95% confident of what drove temperatures in the 20-year period from 1978 to 1998 but simultaneously have no clue what drove temperatures in the other years.

At some point I will read the thing and comment further.


Appeals to Authority

A reader sends me a story of a global warming activist who clearly doesn't know even the most basic facts about global warming.  Since this article is about avoiding appeals to authority, I hate to ask you to take my word for it, but it is simply impossible to immerse oneself in the science of global warming for any amount of time without being able to immediately rattle off the four major global temperature databases (or at least one of them!)

I don't typically find it very compelling to knock a particular point of view just because one of its defenders is a moron, unless that defender has been set up as a quasi-official representative of that point of view (e.g. Al Gore).  After all, there are plenty of folks on my side of issues, including those who are voicing opinions skeptical of catastrophic global warming, who are making screwed up arguments.

However, I have found this over time to be an absolutely typical situation in the global warming advocacy world.  Every single time I have publicly debated this issue, I have understood the opposing argument, ie the argument for catastrophic global warming, better than my opponent.   In fact, I finally had to write a first chapter to my usual presentation.  In this preamble, I outline the case and evidence for manmade global warming so the audience can understand it before I then set out to refute it.

The problem is that the global warming alarm movement has come to rely very heavily on appeals to authority and ad hominem attacks in making its case.  What headlines do you see? 97% of scientists agree, the IPCC is 95% sure, etc.  These "studies", which Lord Monckton (with whom I often disagree but who can be very clever) calls "no better than a show of hands", dominate the news.  When have you ever seen a story in the media about the core issue of global warming, which is diagnosing whether positive feedbacks truly multiply small bits of manmade warming to catastrophic levels?  The answer is never.

Global warming advocates thus have failed to learn how to really argue the science of their theory.  In their echo chambers, they have all agreed that saying "the science is settled" over and over and then responding to criticism by saying "skeptics are just like tobacco lawyers and holocaust deniers and are paid off by oil companies" represents a sufficient argument.**  Which means that in an actual debate, they can be surprisingly easy to rip to pieces.  Which may be why most, taking Al Gore's lead, refuse to debate.

All of this is particularly ironic since it is the global warming alarmists who try to wrap themselves in the mantle of the defenders of science.  Ironic because the scientific revolution began only when men and women were willing to reject appeals to authority and try to understand things for themselves.


** Another very typical tactic:  They will present whole presentations without a single citation.   But make one statement in your rebuttal as a skeptic that is not backed with a named, peer-reviewed study, and they will call you out on it.  I remember in one presentation, I was presenting some material that was based on my own analysis.  "But this is not peer-reviewed," said one participant, implying that it should therefore be ignored.  I retorted that it was basic math, that the data sources were all cited, and that they were my peers -- review it.  Use your brains.  Does it make sense?  Is there a flaw?  But they don't want to do that.  Increasingly, oddly, science is about having officially licensed scientists deliver findings to them on a platter.

IPCC: We Count on Lazy Reporters

We will see the final version of the IPCC's Fifth climate assessment soon.  But here is something interesting from the last draft circulated.  First, here is their chart comparing actual temperatures to model forecasts.  As you can see, all the actuals fall outside the published ranges from all previous reports (with a couple of the most recent data points added by Steve McIntyre in red).

click to enlarge


A problem, though not necessarily a fatal problem if the divergence can be explained.  And the IPCC is throwing out a lot of last minute explanations, though none of them are backed with any actual science.  I discussed one of these explanations here.  Anyway, you see their data above.  This is what they actually write in the text:

"the globally-averaged surface temperatures are well within the uncertainty range of all previous IPCC projections, and generally are in the middle of the scenario ranges".

This is completely absurd, of course, given their own data, but it has lasted through several drafts, so we will see if it makes it into the final draft.  My guess is that they will leave this issue out entirely in the summary for policy makers (the only part the media reads).  Steve McIntyre discusses the whole history of this divergence issue, along with a series of studies highlighting this divergence that have been consistently kept out of publication by climate gatekeepers.

The frustrating part is that the IPCC is running around saying they can't have a complete answer on this critical issue because it is so new.  By "new" they mean a criticism skeptics have been making of climate models for over a decade, which the IPCC has only recently, under duress, been forced to finally consider.

Great Moments in Predictions -- Al Gore's Ice Forecast

Via Icecap (I still don't think they have permalinks that work)

In his Dec. 10, 2007 “Earth has a fever” speech, Gore referred to a prediction by U.S. climate scientist Wieslaw Maslowski that the Arctic’s summer ice could “completely disappear” by 2013 due to global warming caused by carbon emissions.

Gore said that on Sept. 21, 2007, “scientists reported with unprecedented alarm that the North Polar icecap is, in their words, ‘falling off a cliff.’ One study estimated that it could be completely gone during summer in less than 22 years. Another new study to be presented by U.S. Navy researchers later this week warns that it could happen in as little as seven years, seven years from now.”

Maslowski told members of the American Geophysical Union in 2007 that the Arctic’s summer ice could completely disappear within the decade. “If anything,” he said, “our projection of 2013 for the removal of ice in summer...is already too conservative.”

The former vice president also warned that rising temperatures were “a planetary emergency and a threat to the survival of our civilization.”

However, instead of completely melting away, the polar icecap is now at its highest level for this time of year since 2006.

Some Responsible Press Coverage of Record Temperatures

The Phoenix New Times blog had a fairly remarkable story on a record-hot Phoenix summer.  The core of the article is a chart from NOAA.  There are three things to notice in it:

  • The article actually acknowledges that the higher temperatures were due to higher nighttime lows rather than higher daytime highs.  Any mention of this is exceedingly rare in media stories on temperatures, perhaps because the idea of a higher low is confusing to communicate
  • It actually attributes urban warming to the urban heat island effect
  • It makes no mention of global warming

Here is the graphic:

hottest-summer

 

This puts me in the odd role of switching sides, so to speak, and observing that greenhouse warming could very likely manifest itself as rising nighttime lows (rather than rising daytime highs).  I can only assume the surrounding area of Arizona did not see the same sort of records, which would support the theory that this is a UHI effect.

Phoenix has a huge urban heat island effect, which my son actually measured.  At 9-10 in the evening, we measured a temperature differential of 8-12F from city center to rural areas outside the city.  By the way, this is a fabulous science fair project if you know a junior high or high school student trying to do something different than growing bean plants under different color lights.

Update On My Climate Model (Spoiler: It's Doing a Lot Better than the Pros)

In this post, I want to discuss my just-for-fun model of global temperatures I developed 6 years ago.  But more importantly, I am going to come back to some lessons about natural climate drivers and historic temperature trends that should have great relevance to the upcoming IPCC report.

In 2007, for my first climate video, I created an admittedly simplistic model of global temperatures.  I did not try to model any details within the climate system.  Instead, I attempted to tease out a very few (it ended up being three) trends from the historic temperature data and simply projected them forward.  Each of these trends has a logic grounded in physical processes, but the values I used were pure regression rather than any bottom up calculation from physics.  Here they are:

  • A long term trend of 0.4C warming per century.  This can be thought of as a sort of base natural rate for the post-little ice age era.
  • An additional linear trend beginning in 1945 of an additional 0.35C per century.  This represents the combined effects of CO2 (whose effects should largely appear after mid-century) and higher solar activity in the second half of the 20th century.  (Note that this is way, way below the mainstream estimates in the IPCC of the historic contribution of CO2, as it implies the maximum historic contribution is less than 0.2C.)
  • A cyclic trend that looks like a sine wave centered on zero (such that over time it adds nothing to the long term trend) with a period of about 63 years.  Think of this as representing the net effect of cyclical climate processes such as the PDO and AMO.
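For concreteness, the three drivers above can be sketched as a few lines of code.  This is a minimal sketch, not the author's actual spreadsheet: the two trend slopes and the 63-year period are the values stated in the post, while the sine amplitude, phase year, and re-centering offset are placeholder assumptions, since the post tuned those by eye against Hadley CRUT4.

```python
import math

def modeled_anomaly(year,
                    base_trend=0.4,      # C per century, post-Little Ice Age
                    extra_trend=0.35,    # C per century, beginning in 1945
                    cycle_amp=0.2,       # sine amplitude in C (assumed value)
                    cycle_period=63.0,   # years, per the post
                    cycle_phase=1975.0,  # zero-crossing year (assumed value)
                    offset=-0.4):        # re-centering offset (assumed value)
    """Temperature anomaly: two linear trends plus a zero-centered sine wave."""
    t = base_trend * (year - 1900) / 100.0
    if year > 1945:
        t += extra_trend * (year - 1945) / 100.0
    t += cycle_amp * math.sin(2 * math.pi * (year - cycle_phase) / cycle_period)
    return t + offset
```

One handy property: evaluating two years exactly one cycle (63 years) apart cancels the sine term, leaving only the combined 0.75C-per-century linear trend between them.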

Put in graphical form, here are these three drivers (the left axis in both is degrees C, re-centered to match the centering of Hadley CRUT4 temperature anomalies).  First, the two linear trends (click on any image in this post to enlarge it):

click to enlarge

 

And the cyclic trend:

click to enlarge

These two charts are simply added together, and the result can be compared to actual temperatures.  This is the way the comparison looked in 2007 when I first created this "model":

click to enlarge

The historic match is no great feat.  The model was admittedly tuned to match history (yes, unlike the pros who all tune their models, I admit it).  The linear trends as well as the sine wave period and amplitude were adjusted to make the fit work.

However, it is instructive to note that a simple model of a linear trend plus sine wave matches history so well, particularly since it assumes such a small contribution from CO2 (yet matches history well) and since in prior IPCC reports, the IPCC and most modelers simply refused to include cyclic functions like AMO and PDO in their models.  You will note that the Coyote Climate Model was projecting a flattening, even a decrease in temperatures when everyone else in the climate community was projecting that blue temperature line heading up and to the right.

So, how are we doing?  I never really meant the model to have predictive power.  I built it just to make some points about the potential role of cyclic functions in the historic temperature trend.  But based on updated Hadley CRUT4 data through July, 2013, this is how we are doing:

click to enlarge

 

Not too shabby.  Anyway, I do not insist on the model, but I do want to come back to a few points about temperature modeling and cyclic climate processes in light of the new IPCC report coming soon.

The decisions of climate modelers do not always make sense or seem consistent.  The best framework I can find for explaining their choices is to hypothesize that every choice is driven by trying to make the forecast future temperature increase as large as possible.  In past IPCC reports, modelers refused to acknowledge any natural or cyclic effects on global temperatures, and actually made statements that a) variations in the sun's output were too small to change temperatures in any measurable way and b) it was not necessary to include cyclic processes like the PDO and AMO in their climate models.

I do not know why these decisions were made, but they had the effect of maximizing the amount of past warming that could be attributed to CO2, thus maximizing potential climate sensitivity numbers and future warming forecasts.  The reason for this was that the IPCC based nearly the totality of their conclusions about past warming rates and CO2 on the period 1978-1998.  They may talk about "since 1950", but you can see from the chart above that all of the warming since 1950 actually happened in that narrow 20-year window.  During that 20-year window, though, solar activity, the PDO and the AMO were also all peaking or in their warm phases.  So if the IPCC were to acknowledge that any of those natural effects had any influence on temperatures, they would have to reduce the amount of warming attributed to CO2 between 1978 and 1998, and thus their large future warming forecasts would have become even harder to justify.

Now, fast forward to today.  Global temperatures have been flat since about 1998, or for about 15 years.  This is difficult for the IPCC to explain, since essentially none of the 60+ models in their ensembles predicted this kind of pause in warming.  In fact, temperature trends over the last 15 years have fallen below the 95% confidence band of nearly every climate model used by the IPCC.  So scientists must either change their models (eek!) or else explain why the models are still correct despite missing the last 15 years of flat temperatures.

The IPCC is likely to take the latter course.  Rumor has it that they will attribute the warming pause to... ocean cycles and the sun (those things the IPCC said last time were irrelevant).  As you can see from my model above, this is entirely plausible.  My model has an underlying 0.75C per century trend after 1945, but even with this trend the modeled temperatures hit a roughly 30-year flat spot after the year 2000.  So it is entirely possible for an underlying trend to be temporarily masked by cyclical factors.

BUT.  And this is a big but.  You can also see from my model that you can't assume these factors caused the current "pause" in warming without also acknowledging that they contributed to the warming from 1978-1998, something the IPCC seems loath to do.  I do not know how the IPCC is going to deal with this.  I hate to think the worst of people, but I do not think it is beyond them to say that these factors offset greenhouse warming for the last 15 years but did not add to warming in the 20 years before that.

We shall see.  To be continued....

Update:  Seriously, on a relative basis, I am kicking ass

click to enlarge

Trend That is Not A Trend: Rolling Stone Wildfire Article

Rolling Stone brings us an absolutely great example of an article that claims a trend without actually showing the trend data, and where the actual data point to a trend in the opposite direction as the one claimed.

I won't go into the conclusions of the article.  Suffice it to say it is as polemical as anything I have read of late and could be subtitled "the Tea Party and Republicans suck."  Apparently Republicans are wrong to criticize government wildfire management and do so only because they suck, and the government should not spend any effort to fight wildfires that threaten private property but does so only because Republicans, who suck, make them.  Or something.

What I want to delve into is the claim by the author that wildfires are increasing due to global warming, and only evil Republicans (who suck) could possibly deny this obvious trend (numbers in parentheses added so I can reference passages below):

 But the United States is facing an even more basic question: How should we manage fire, given the fact that, thanks to climate change, the destruction potential for wildfires across the nation has never been greater? In the past decade alone, at least 10 states – from Alaska to Florida – have been hit by the largest or most destructive wildfires in their respective histories (1). Nationally, the cost of fighting fires has increased from $1.1 billion in 1994 to $2.7 billion in 2011.(2)

The line separating "fire season" from the rest of the year is becoming blurry. A wildfire that began in Colorado in early October continued smoldering into May of this year. Arizona's first wildfire of 2013 began in February, months ahead of the traditional firefighting season(3). A year-round fire season may be the new normal. The danger is particularly acute in the Intermountain West, but with drought and record-high temperatures in the Northwest, Midwest, South and Southeast over the past several years, the threat is spreading to the point that few regions can be considered safe....

For wildland firefighters, the debate about global warming was over years ago. "On the fire lines, it is clear," fire geographer Michael Medler told a House committee in 2007. "Global warming is changing fire behavior, creating longer fire seasons and causing more frequent, large-scale, high-severity wildfires."...(4)

Scientists have cited climate change as a major contributor in some of the biggest wildfires in recent years, including the massive Siberian fires during a record heat wave in 2010 and the bushfires that killed 173 people in Australia in 2009.(5)...

The problem is especially acute in Arizona, where average annual temperatures have risen nearly three-quarters of a degree Fahrenheit each decade since 1970, making it the fastest­-warming state in the nation. Over the same period, the average annual number of Arizona wildfires on more than 1,000 acres has nearly quadrupled, a record unsurpassed by any other state and matched only by Idaho. One-quarter of Arizona's signature ponderosa pine and mixed-conifer forests have burned in just the past decade. (6)...

At a Senate hearing in June, United States Forest Service Chief Thomas Tidwell testified that the average wildfire today burns twice as many acres as it did 40 years ago(7). "In 2012, over 9.3 million acres burned in the United States," he said – an area larger than New Jersey, Connecticut and Delaware combined. Tidwell warned that the outlook for this year's fire season was particularly grave, with nearly 400 million acres – more than double the size of Texas – at a moderate-to-high risk of burning.(8)

These are the 8 statements I can find to support an upward trend in fires.  And you will note, I hope, that none of them include the most obvious data - what has the actual trend been in number of US wildfires and acres burned.  Each of these is either a statement of opinion or a data point related to fire severity in a particular year, but none actually address the point at hand:  are we getting more and larger fires?

Maybe the data does not exist.  But in fact it does, and I will say there is absolutely no way, no way, the author has not seen the data.  The reason it is not in this article is that it does not fit the "reporter's" point of view, so it was left out.  Here is where the US government tracks fires by year, at the National Interagency Fire Center.  To save you clicking through, here is the data as of this moment:

click to enlarge fires 2013 to date

 

Well what do you know?  The number of fires and the acres burned in 2013 are not some sort of record high -- in fact they are, respectively, the lowest and second-lowest numbers of the last 10 years.  Both the number of fires and the total acres burned are running a third below average.

The one thing this does not address is the size of fires.  The author implies that there are more fires burning more acres, which we see is clearly wrong, but perhaps the fires are getting larger?  Well, 2012 was indeed an outlier year in that fires were larger than average, but 2013 has returned to the trend, which has actually been flat to down -- again, exactly the opposite of the author's contention (data below is just math from the chart above).

Click to enlarge

 

In the rest of the post, I will briefly walk through his 8 statements highlighted above and show why they exhibit many of the classic fallacies in trying to assert a trend where none exists.  In the postscript, I will address one other inconsistency from the article as to the cause of these fires, which is a pretty hilarious example of how to turn any data into support for your hypothesis, even if it is unrelated.  Now to his 8 statements:

(1) Again, no trend here; this is simply a collection of single data points.  He says that in one year or another over the last decade, 10 states have set a record for one of two fire-related variables.  With 50 states and 2 variables, we have 100 measurements that can potentially hit a record in any one year.  So if we have measured fires and fire damage for about 100 years (about the age of the US Forest Service), then we would expect on average 10 new records every decade -- exactly what the author found.  Further, at least one of these variables -- costliness of the fires -- should be increasing over time due to higher property valuations and inflation, factors I am betting the author did not adjust for.
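The expected-records arithmetic in point (1) is easy to sanity-check.  This is my back-of-the-envelope sketch of the reasoning, not the article's: for a series with roughly n years of history and no trend, any given year has about a 1-in-n chance of being the record.

```python
# Expected number of new state fire records per decade, absent any trend.
series = 50 * 2            # 50 states x 2 fire-related variables
years_of_history = 100     # roughly the age of the US Forest Service
records_per_year = series * (1 / years_of_history)
records_per_decade = records_per_year * 10
print(records_per_decade)  # -> 10.0
```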

(2)  This cost increase over 17 years represents about 5.4% per year compound growth.  It is very possible this is entirely due to changes in firefighting unit costs and methods rather than any change in underlying fire counts.
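As a quick check on the growth rate in point (2) (my arithmetic, not the article's):

```python
# Implied compound annual growth in national firefighting costs.
cost_1994 = 1.1       # $ billions, 1994
cost_2011 = 2.7       # $ billions, 2011
years = 2011 - 1994   # 17 years
cagr = (cost_2011 / cost_1994) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 5.4%
```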

(3) This is idiotic, a desperate reach by an author with an axe to grind.  Wildfires in Arizona often occur out of fire season.   Having a single fire in the winter means nothing.

(4) Again, we know the data does not support the point.  If the data does not support your point, find some "authority" that will say it is true.  There is always someone somewhere who will say anything is true.

(5) It is true that there are scientists who have blamed global warming for these fires.  Left unmentioned is that there are also scientists who think that it is impossible to parse the effect of a 0.5C increase in global temperatures from all the other potential causes of individual weather events and disasters.  If there is no data to support a trend in the mean, it is absolutely irresponsible to claim causality from isolated data points in the tails of the distribution.

(6) The idea that temperatures in Arizona have risen nearly three-quarters of a degree F per decade for four decades is madness.  Not even close.  This would be about 3F in total, and there is simply no basis in any reputable database I have seen to support it.  It is potentially possible for a few AZ urban thermometers to show temperature increases of this magnitude, but they would be measuring mostly urban heat island effects, not the rural temperatures that drive wildfires (more discussion here).  The statement that "the average annual number of Arizona wildfires on more than 1,000 acres has nearly quadrupled" is so awkwardly worded we have to suspect the author is reaching here.  In fact, since wildfires average about 100 acres, a 1,000-acre fire is going to be rare.  My bet is that this is volatility in small numbers (e.g. going from 1 to 4) rather than a real trend.  His final statement that "One-quarter of Arizona's signature ponderosa pine and mixed-conifer forests have burned in just the past decade" is extremely disingenuous.  The reader will be forgiven for thinking that a quarter of the trees in Arizona have burned.  But in fact this only means there have been fires in a quarter of the forests -- a single tree burning in one forest would likely count that whole forest as "burned" for this metric.

(7) This may well be true, but means nothing really.  It is more likely, particularly given the evidence of the rest of the article, to be due to forest management processes than global warming.

(8)  This is a data point, not a trend.  Is this a lot or a little?  And no matter how much he says is at risk (remember, this man is testifying to get more budget money out of Congress, so he is going to exaggerate), the actual acreage burning is flat to down.

Postscript:  The article contains one of the most blatant data bait and switches I have ever seen.  The following quote is taken as-is in the article and has no breaks or editing and nothing left out.   Here is what you are going to see.  All the way up to the last paragraph, the author tells a compelling story that the fires are due to a series of USFS firefighting and fuel-management policies.  Fair enough.   His last paragraph says that Republicans are the big problem for opposing... opposing what?  Changes to the USFS fire management practices?  No, for opposing the Obama climate change plan. What??  He just spent paragraphs building a case that this is a fire and fuel management issue, but suddenly Republicans suck for opposing the climate change bill?

Like most land in the West, Yarnell is part of an ecosystem that evolved with fire. "The area has become unhealthy and unnatural," Hawes says, "because fires have been suppressed." Yarnell is in chaparral, a mix of small juniper, oak and manzanita trees, brush and grasses. For centuries, fires swept across the chaparral periodically, clearing out and resetting the "fuel load." But beginning in the early 1900s, U.S. wildfire policy was dominated by fire suppression, formalized in 1936 as "the 10 a.m. rule" – fires were to be extinguished by the morning after they were spotted; no exceptions. Back in the day, the logic behind the rule appeared sound: If you stop a fire when it's small, it won't become big. But wildland ecosystems need fire as much as they need rain, and it had been some 45 years since a large fire burned around Yarnell. Hawes estimates that there could have been up to five times more fuel to feed the Yarnell Hill fire than was natural.

The speed and intensity of a fire in overgrown chaparral is a wildland firefighter's nightmare, according to Rick Heron, part of another Arizona crew that worked on the Yarnell Hill fire. Volatile resins and waxy leaves make manzanita "gasoline in plant form," says Heron. He's worked chaparral fires where five-foot-tall manzanitas produced 25-foot-high flames. Then there are the decades of dried-up grasses, easily ignitable, and the quick-burning material known as "fine" or "flash" fuels. "That's the stuff that gets you," says Heron. "The fine, flashy fuels are just insane. It doesn't look like it's going to be a problem. But when the fire turns on you, man, you can't outdrive it. Let alone outrun it."

Beginning with the Forest Service in 1978, the 10 a.m. rule was gradually replaced by a plan that gave federal agencies the discretion to allow fires to burn where appropriate. But putting fire back in the landscape has proved harder to do in practice, where political pressures often trump science and best-management practices. That was the case last year when the Forest Service once again made fire suppression its default position. Fire managers were ordered to wage an "aggressive initial attack" on fires, and had to seek permission to deviate from this practice. The change was made for financial reasons. Faced with skyrocketing costs of battling major blazes and simultaneous cuts to the Forest Service firefighting budget, earlier suppression would, it was hoped, keep wildfires small and thus reduce the cost of battling big fires.

Some critics think election-year politics may have played a role in the decision. "The political liability of a house burning down is greater than the political liability of having a firefighter die," says Kierán Suckling, head of the Tucson-based Center for Biological Diversity. "If they die, you just hope that the public narrative is that they were American heroes."

The problem will only get worse as extremist Republicans and conservative Democrats foster a climate of malign neglect. Even before President Obama unveiled a new climate-change initiative days before the fire, House Speaker John Boehner dismissed the reported proposal as "absolutely crazy." Before he was elected to the Senate last November, Jeff Flake, then an Arizona congressman, fought to prohibit the National Science Foundation from funding research on developing a new model for international climate-change analysis, part of a program he called "meritless." The biggest contributor to Flake's Senate campaign was the Club for Growth, whose founder, Stephen Moore, called global warming "the biggest myth of the last one hundred years."

By the way, the Yarnell firefighters did not die due to global warming or even the 10am rule.  They died due to stupidity.  Whether the stupidity was their own or their leaders' may never be clear, but I have yet to meet a single firefighter who thought they had any business being where they were, and as out of communication as they were.

 

About That "Thousand Year" Storm in Colorado....

Last week I expressed my doubts that the storm in Colorado was really, as described breathlessly at the Weather Underground, a once in a thousand year storm (the logic of the article, and many others, being that one in a thousand is the same as "zero" and thus the storm could not have occurred naturally and therefore Global Warming).

Turns out it is not even close.  From the Colorado Climate Center at Colorado State University:

How much rain fell on Colorado this week? And where? Colorado residents can help the weather experts at Colorado State University answer these questions.

In response to the incredible recent rains and flooding in parts of the state, the Colorado Climate Center will be mapping rainfall totals and graphing hourly intensities for the entire state for the period beginning Sunday, Sept. 8 (as storms first developed over southern Colorado) through the end of the storm later this weekend

"As is typical of Colorado storms, some parts of the state were hard hit and others were untouched. Still, this storm is ranking in the top ten extreme flooding events since Colorado statehood," said Nolan Doesken, State Climatologist at CSU. "It isn't yet as extreme or widespread as the June 1965 floods or as dramatic as the 1935 floods but it ranks right up there among some of the worst.”

Among the worst, according to Climate Center data, occurred in May 1904, October 1911, June 1921, May 1935, September 1938, May 1955, June 1965, May 1969, October 1970, July 1976, July 1981, and, of course, the Spring Creek Flood of July 1997 that ravaged Fort Collins and the CSU campus.

"Every flood event in Colorado has its own unique characteristics," said Doesken. "But the topography of the Colorado Front Range makes this area particularly vulnerable when the necessary meteorological conditions come together as they did this week."

So it is perhaps a one in fifteen year flood.  Note that (by the math in my previous article linked above) a one in fifteen year flood covering an area half the size of Colorado should occur on average 60+ times a year around the world.  Our intuition about the frequency of tail-of-the-distribution events is not very good, which is just another reason they make a poor proxy for drawing conclusions about trends in the mean of some phenomenon.
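The "60+ times a year" figure follows from a simple tiling argument; here is the arithmetic as a sketch, using the same rough area figures as in my earlier post:

```python
# Tile the world's land area with regions half the size of Colorado;
# each region independently sees a 1-in-15-year flood once per 15 years.
half_colorado_km2 = 270_000 / 2                # ~135,000 km^2
world_land_km2 = 150_000_000
regions = world_land_km2 / half_colorado_km2   # ~1,111 such regions
floods_per_year = regions / 15                 # one 1-in-15-year event each
print(round(floods_per_year))  # -> 74
```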

 

The Magic Theory

Catastrophic Anthropogenic Climate Change is the magic theory -- every bit of evidence proves it.  More rain, less rain, harder rain, drought, floods, more tornadoes, fewer tornadoes, hotter weather, colder weather, more hurricanes, fewer hurricanes -- they all prove the theory.  It is the theory that is impossible not to confirm.  An example:

It will take climate scientists many months to complete studies into whether manmade global warming made the Boulder flood more likely to occur, but the amount by which this event has exceeded past events suggests that manmade warming may have played some role by making the event worse than it otherwise would have been...

An increase in the frequency and intensity of extreme precipitation events is expected to take place even though annual precipitation amounts are projected to decrease in the Southwest. Colorado sits right along the dividing line between the areas where average annual precipitation is expected to increase, and the region that is expected to become drier as a result of climate change.

That may translate into more frequent, sharp swings between drought and flood, as has recently been the case. Last year, after all, was Colorado's second-driest on record, with the warmest spring and warmest summer on record, leading to an intense drought that is only just easing.

Generally one wants to point to a data trend to prove a theory, but look at that last paragraph.  Global warming is truly unique because it can be verified by there being no trend.

I hate to make this point for the five millionth time, but here goes:  It is virtually impossible (and takes far more data, by orders of magnitude, than we possess) to prove a shift in the mean of any phenomenon simply by highlighting occasional tail-of-the-distribution events.  The best way to prove a mean shift is to actually, you know, track the mean.  The problem is that the trend lines for all these phenomena -- droughts, wet weather, tornadoes, hurricanes -- show no trend, so the only tool supporters of the theory have at their disposal is to scream "global warming" as loud as they can every time there is a tail-of-the-distribution event.

Let's do some math:  They claim this flood was a one in one thousand year event.  That strikes me as false precision, because we have only been observing this phenomenon with any reliability for about 100 years, but I will accept their figure for now.  Let's say this was indeed a one in 1000 year flood and that it occurred over, say, half the area of Colorado (again a generous assumption; it was actually less than that).

Colorado is about 270,000 km^2, so half would be 135,000 km^2.  The land area of the world (we really should include oceans for this, but we will give these folks every break) is about 150,000,000 km^2.  That means that half of Colorado is a bit less than 1/1000 of the world's land area.

Our intuition tells us that a 1 in 1000 year storm is so rare that to have one means something weird or unusual or even unnatural must be going on.  But by the math above, since this storm covered 1/1000 of the land surface of the Earth, we should see one such storm on average every year somewhere in the world.  This is not some "biblical" unprecedented event - it is freaking expected, somewhere, every year.  Over the same area we should also see a 1 in 1000 year drought, a 1 in 1000 year temperature high, and a one in one thousand year temperature low -- every single damn year.  Good news if you are a newspaper and feed off of this stuff, but bad news for anyone trying to draw conclusions about the shifts in means and averages from such events.
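The arithmetic in the paragraphs above can be written out as a short sketch (same rough area figures as in the post):

```python
# Expected worldwide count of 1-in-1000-year storms of this size, per year.
storm_area_km2 = 270_000 / 2     # half of Colorado, ~135,000 km^2
world_land_km2 = 150_000_000
regions = world_land_km2 / storm_area_km2   # ~1,111 storm-sized regions
storms_per_year = regions / 1000            # each has a 1/1000 annual chance
print(round(storms_per_year, 2))  # -> 1.11
```

So roughly one such storm somewhere on land every year, which is the point: rare locally, routine globally.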

Climate Theory vs. Climate Data

This is a pretty amazing statement by Justin Gillis in the New York Times.

This month, the world will get a new report from a United Nations panel about the science of climate change. Scientists will soon meet in Stockholm to put the finishing touches on the document, and behind the scenes, two big fights are brewing....

In the second case, we have mainstream science that says if the amount of carbon dioxide in the atmosphere doubles, which is well on its way to happening, the long-term rise in the temperature of the earth will be at least 3.6 degrees Fahrenheit, but more likely above 5 degrees. We have outlier science that says the rise could come in well below 3 degrees.

In this case, the drafters of the report lowered the bottom end in a range of temperatures for how much the earth could warm, treating the outlier science as credible.

The interesting part is that "mainstream science" is based mainly on theory and climate models that over the last 20 years have not made accurate predictions (overestimating warming significantly).  "Outlier science" is in a lot of cases based on actual observations of temperatures along with other variables like infrared radiation returning to space.  The author, through his nomenclature, is essentially disparaging observational data that is petulantly refusing to match up to model predictions.  But of course skeptics are anti-science.

Climate Groundhog Day

I discuss in a bit more detail at my climate blog why I feel like climate blogging has become boring and repetitious.  To prove it, I predict in advance the stories that skeptics will run about the upcoming IPCC report.

I had a reader write to ask how I could be bored when there were still hilarious stories out there of climate alarmists trying to row through the Arctic and finding to their surprise it is full of ice.  But even this story repeats itself.  There have been such stories almost every year in the past five.

We Are 95% Confident in a Meaningless Statement

Apparently the IPCC is set to write:

Drafts seen by Reuters of the study by the U.N. panel of experts, due to be published next month, say it is at least 95 percent likely that human activities - chiefly the burning of fossil fuels - are the main cause of warming since the 1950s.

That is up from at least 90 percent in the last report in 2007, 66 percent in 2001, and just over 50 in 1995, steadily squeezing out the arguments by a small minority of scientists that natural variations in the climate might be to blame.

I have three quick reactions to this:

  • The IPCC has always adopted words like "main cause" or "substantial cause."  They have not even had enough certainty to use the phrase "majority cause" -- they want to keep it looser than that.  If man causes 30% and every other cause is at 10% or less, is man the main cause?  No one knows.  So that is how we get to the absurd situation where folks are trumpeting being 95% confident in a statement that is purposely vaguely worded -- so vague that the vast majority of people who sign it would likely disagree with one another on exactly what they have agreed to.
  • The entirety of the post-1950 temperature rise occurred between 1978 and 1998 (see below a chart based on the Hadley CRUT4 database, the same one used by the IPCC):

[Chart: Hadley CRUT4 global surface temperatures, 1945-2013]

Note that temperatures fell from 1945 to about 1975, and have been flat from about 1998 to 2013.  This is not some hidden fact -- it was the very fact that the warming slope was so steep in the short period from 1978-1998 that contributed to the alarm.  The current 15 years with no warming was not predicted and remains unexplained (at least in the context of the assumption of high temperature sensitivities to CO2).  The IPCC is in a quandary here: they can't just say that natural variation counteracted warming for 15 years, because this would imply a magnitude of natural variability that might explain the 20-year rise from 1978-1998 as easily as it explains the warming hiatus of the last 15 years (or the cooling in the 30 years preceding 1978).

  • This lead statement by the IPCC continues to be one of the great bait and switches of all time.  Most leading skeptics (excluding those of the talk show host or politician variety) accept that CO2 is a greenhouse gas and is contributing to some warming of the Earth.  This statement by the IPCC says nothing about the real issue, which is the future sensitivity of the Earth's temperatures to rising CO2: is it high, driven by large positive feedbacks, or more modest, driven by zero to negative feedbacks?  Skeptics don't disagree that man has caused some warming, but believe that future warming forecasts are overstated and that the negative effects of warming (e.g. tornadoes, fires, hurricanes) are grossly exaggerated.

It's OK not to know something -- in fact, that is an important part of scientific detachment, to admit what one does not know.  But what the hell does being 95% confident in a vague statement mean?  Choose which of these is science:

  • Masses are attracted to each other in proportion to the product of their masses and inversely proportional to the square of their distance of separation.
  • We are 95% certain that gravity is the main cause of my papers remaining on my desk.

Summer of the (Flaming) Shark

Give me a quick answer - are forest fires above average this year?  Is this an unusually bad fire season?

You could be forgiven for saying "yes".  In fact, it is an unusually quiet fire season.  Via Real Science

[Chart: year-to-date U.S. wildfire statistics]

source:  National Interagency Fire Center

It is such a disconnect with news reporting that you may have to click the source link yourself just to make sure I am not having you on, but 2013 is an unusually quiet fire season (2012 was worse, but still under the 10-year average).  This tendency to judge trends by the frequency of media coverage rather than the frequency of the underlying phenomenon is one I have written about before.

Let’s take a step back to 2001 and the “Summer of the Shark.”  The media hysteria began in early July, when a young boy was bitten by a shark on a beach in Florida.  Subsequent attacks received breathless media coverage, up to and including near-nightly footage from TV helicopters of swimming sharks.  Until the 9/11 attacks, sharks were the third biggest story of the year as measured by the time dedicated to them on the three major broadcast networks’ news shows.

Through this coverage, Americans were left with a strong impression that something unusual was happening — that an unprecedented number of shark attacks were occurring in that year, and the media dedicated endless coverage to speculation by various “experts” as to the cause of this sharp increase in attacks.

Except there was one problem — there was no sharp increase in attacks.  In the year 2001, five people died in 76 shark attacks.  However, just a year earlier, 12 people had died in 85 attacks.  The data showed that 2001 actually was a down year for shark attacks.

[Chart: annual shark attack data]

This Is How We Get In Pointless Climate Flame Wars

The other day I posted a graph from Roy Spencer comparing climate model predictions to actual measurements in the tropical mid-troposphere (the zone on Earth where climate models predict the most warming due to large assumed water vapor positive feedbacks).  The graph is a powerful indictment of the accuracy of climate models.

Spencer has an article (or perhaps a blog post) in the Financial Post with the same results, and includes a graph that does a pretty good job of simplifying the messy spaghetti graph in the original version.  Except for one problem: nowhere is it correctly labelled.  One would assume looking at it that it is a graph of global surface temperatures, which is what most folks are used to seeing in global warming articles.  But in fact it is a graph of temperatures in the mid-troposphere, between 20 degrees North and 20 degrees South latitude.  He mentions that it is for the tropical troposphere in the text of the article, but it is not labelled as such on the graph.  There is a very good reason for that narrow focus, but now the graph will end up on Google image search, and people will start crying "bullsh*t" because they will compare the numbers to global surface temperature data and it won't match.

I respect Spencer's work, but he did not do a good job with this one.