Archive for the ‘Climate’ Category.

Denying the Climate Catastrophe: 8. The Lukewarmer Middle Ground

This is Chapter 8 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground (this article)
  9. A Low-Cost Insurance Policy

In this chapter we are going to try to sum up where we are and return to our very first chapter, when I said that we would find something odd once we returned to the supposed global warming "consensus".

First, let's return to our framework one last time and try to summarize what has been said:

Slide75

I believe that this is a pretty fair representation of the median lukewarmer position.  Summarized, it would be:

  • Manmade CO2 warms the Earth, though by much less than most climate models claim, because those models assume unrealistic levels of positive feedback that overstate future warming.  One degree C of warming, rather than four or five, is a more realistic projection of man-made warming over the next century
  • The world has certainly warmed over the last century, though by perhaps a bit less than the 0.8C in the surface temperature record due to uncorrected flaws in that record
  • Perhaps half of this past warming is due to man, the rest due to natural variability
  • There is little evidence that weather patterns are "already changing" in any measurable way from man-made warming

The statements I just wrote above, no matter how reasonable, are enough to get me and many others vilified as "deniers".  You might think that I am exaggerating -- that the denier tag is saved for folks who absolutely deny any warming effect of CO2.  But that is not the case, I can assure you from long personal experience.

The Climate Bait and Switch

Of course, the very act of attempting to shut people up who disagree with one's position on a scientific issue is, I would have thought, obviously anti-science.   The history of science is strewn with examples of the majority being totally wrong.   Even into the 1960's, for example, the 97% consensus in geology was that the continents don't move and that the few scientists who advocated for plate tectonics theory were crackpots.

But that is not how things work today.  Climate action advocates routinely look for ways to silence climate skeptics, up to and including seeking to prosecute these climate heretics and try to throw them in jail.

The reason that alarmists say they feel confident in vilifying and attempting to silence folks like myself is because they claim that the science is settled, that 97% of climate scientists believe in the consensus, and so everyone who is not on board with the consensus needs to shut up.  But what exactly is this consensus?

The 97% number first appeared in a "study" by several academics who sent out a survey to scientists with some climate change questions.  They received 3,146 responses, but they decided that only 77 of these respondents "counted" as climate scientists, and 75 of those 77 (97%) answered two questions about climate change in the affirmative.

Slide82

We will get to the two questions in a second, but note already the odd study methodology.  If the other 10,000-plus people who were sent the survey were not its intended targets, why were they sent a survey in the first place?  It makes one suspicious that the methodology was changed mid-stream to get the answer the authors wanted.

Anyway, what is even more fascinating is the two questions asked in the survey.  Here they are:

  1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?
  2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

The 97% in this survey answered the questions "risen" and "yes".

Do you see the irony here?  If you have been following along with this series, you should be able to say how I would have answered the two questions.  I would certainly have said "risen" to #1.  The answer to question 2 is a bit harder because "significant" is not defined, but in a complex system with literally thousands of variables, I would call any single variable contributing more than about 10% a significant factor.  Since I estimated man's effect on past warming at around 40-50%, I would have answered "yes" to #2!  In fact, nearly every prominent science-based skeptic I can think of would likely have answered the same.

So you heard it right -- I and many prominent skeptics are part of the 97% consensus.  Effectively, I am being told to shut up and not continue to say what I think, in the name of a 97% consensus that represents exactly what I am saying.  This is so weird as to be almost Kafka-esque.

This is what I call the climate bait and switch.  Shaky propositions, such as the assumption of high positive feedbacks, are defended with the near-certainty that surrounds unrelated propositions, such as the basic operation of the greenhouse gas effect.

In fact, merely arguing about whether man-made warming exists or is "significant" falls well short of what we really need in the public policy arena.  What we really should be discussing is a proposition like this:

Is manmade CO2 causing catastrophic increases in warming and warming-driven weather effects whose costs exceed those of reducing CO2 production enough to avoid these effects?

It is about at this point when I usually have people bring up the precautionary principle.  So that I am not unfair to proponents of that principle, I will use the Wikipedia definition:

if an action or policy has a suspected risk of causing harm to the public, or to the environment, in the absence of scientific consensus (that the action or policy is not harmful), the burden of proof that it is not harmful falls on those taking an action that may or may not be a risk.

The principle is used by policy makers to justify discretionary decisions in situations where there is the possibility of harm from making a certain decision (e.g. taking a particular course of action) when extensive scientific knowledge on the matter is lacking. The principle implies that there is a social responsibility to protect the public from exposure to harm, when scientific investigation has found a plausible risk. These protections can be relaxed only if further scientific findings emerge that provide sound evidence that no harm will result.

I believe that, as stated, this is utter madness.  I will give you an example.   Consider a vaccine that saves thousands of lives a year.  Let's say, as is typical of almost every vaccine, that it also hurts a few people, such that it may kill 1 person for every thousand it saves.  By the precautionary principle as stated, we would never have approved any vaccine, because the principle puts no weight on the harms avoided by taking the action.

So take fossil fuel burning.   Proponents of drastic action to curb fossil fuel use in the name of global warming prevention will argue that, until there is an absolute consensus that burning fossil fuels is not harmful to the climate, such burning should be banned.  But this ignores the substantial, staggering, unbelievably positive effects we have gained from fossil fuels and the technology and economy they support.

Just remember back to that corn yield chart.

Slide123

Bill McKibben wants us to stop using fossil fuels because they may cause warmer temperatures that might reduce corn yields.  But there is a near absolute certainty that dismantling the fossil fuel economy would take us back to the horrendous yields of the yellow years on this chart.  Proponents of climate action point to the possibility of warming-based problems, but miss the near certainty of problems from eliminating fossil fuels.

Over the last 30 years, something unprecedented in the history of human civilization has occurred -- an astounding number of people have exited absolute poverty.

Slide124

Folks like McKibben act as if there is no downside to drastically cutting back on fossil fuel use and switching to substantially more expensive and less convenient fuels, as if protecting Exxon's profits were the only reason anyone would possibly oppose such a measure.  But the billion or so people who have exited poverty of late have done so by burning every bit of fossil fuel they can obtain, and they never could have done so in such numbers had such an inexpensive fuel option not been available.  We in the West could likely afford to pay $50 a month more for fuel, but what of the poor of the world?

Perhaps this will give one an idea of how central inexpensive fossil fuels are to well-being.  This is a chart from World Bank data plotting each country based on its per capita CO2 production and its average life expectancy.

Slide79

As you can see, there is a real, meaningful relationship between CO2 production and life expectancy.  In fact, each 10x increase in per capita CO2 production is correlated with about 10 years of additional life expectancy.  Of course, this relationship is not direct -- CO2 itself does not have health benefits (unless one is a plant).  But emitting more CO2 is a byproduct of a growing technological economy, which leads to greater wealth and life expectancy.
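To make that claim concrete, here is a minimal sketch of how one could reproduce this sort of fit from World Bank data.  The file name and column names are hypothetical placeholders rather than the actual source of the chart above; the point is simply that regressing life expectancy against the logarithm of per capita CO2 gives a slope that reads as "years of life expectancy gained per 10x increase in emissions."

```python
import numpy as np
import pandas as pd

# Hypothetical export of World Bank indicators by country -- column names are assumed,
# not the ones behind the original chart.
df = pd.read_csv("worldbank_co2_life.csv")        # assumed columns: country, co2_per_capita, life_expectancy
df = df[df["co2_per_capita"] > 0]                 # a log scale requires positive values

x = np.log10(df["co2_per_capita"])                # one unit of x = a 10x increase in emissions
y = df["life_expectancy"]

slope, intercept = np.polyfit(x, y, 1)
print(f"~{slope:.1f} extra years of life expectancy per 10x increase in per capita CO2")
```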

The problem, then, is not that we shouldn't consider the future potential costs and risks of climate change, but that we shouldn't consider them in a vacuum, without also considering the costs of placing severe artificial limits on inexpensive fossil fuels.

Slide78

People often say to me that climate action is an insurance policy -- and they ask me, "you buy insurance, don't you?"   My answer invariably is, "yes, I buy insurance, but not when the cost of the policy is greater than the risk being insured against."

As it turns out, there is an approach we can take in this country to creating a low-cost insurance policy against the risks that temperature sensitivity to CO2 is higher than I have estimated in this series.  I will outline that plan in my final chapter.

Here is Chapter 9:  A Low-Cost Insurance Policy

Denying the Climate Catastrophe: 7. Are We Already Seeing Climate Change?

This is Chapter 7 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change (this article)
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

Note:  This is by far the longest chapter, and could have been 10x longer without a lot of aggressive editing.  I have chosen not to break it into two pieces.  Sorry for the length.  TL;DR:  The vast majority of claims of current climate impacts from CO2 are grossly exaggerated or even wholly unsupported by the actual data.  The average quality of published studies in this area is very low compared to other parts of climate science.

Having discussed the theory and reality of man-made warming, we move in this chapter to what is often called "climate change" -- is manmade warming already causing adverse changes in the climate?

click to enlarge

This is a peculiarly frustrating topic for a number of reasons.

First, everyone who discusses climate change automatically assumes the changes will be for the worse.  But are they?  The Medieval Warm Period, likely warmer than today, was a period of agricultural plenty and demographic expansion (at least in Europe) -- it was only the end of the warm period that brought catastrophe, in the form of famine and disease.  As the world warms, are longer growing seasons in the colder parts of the northern hemisphere really so bad, and why is it no one ever mentions such positive offsets?

The second frustrating issue is that folks increasingly talk about climate change as if it were a direct result of CO2, e.g. that CO2 is somehow directly worsening hurricanes.  This is in part just media sloppiness, but it has also been an explicit strategy, re-branding global warming as climate change during the last 20 years when global temperatures were mostly flat.  So it is important to make this point:  There is absolutely no mechanism that has been suggested by anyone wherein CO2 can cause climate change except through the intermediate step of warming.  CO2 causes warming, which then potentially leads to changes in weather.  If CO2 is only causing incremental warming, then it likely is only causing incremental changes to other aspects of the climate.   (I will note as an aside that man certainly has changed the climate through mechanisms other than CO2, but we will not discuss these.  A great example is land use.  Al Gore claimed the snows of Kilimanjaro are melting because of global warming, but in fact it is far more likely they are receding due to precipitation changes resulting from deforestation of Kilimanjaro's slopes.)

Finally, and perhaps most frustrating, is that handling claims of various purported man-made changes to the climate has become an endless game of "whack-a-mole".  It is almost impossible to keep up with the myriad claims of things that are changing (always for the worse) due to CO2.  One reason that has been suggested for this endless proliferation of dire predictions is that if one wants to study the mating habits of the ocelot, one may have trouble getting funding, but funding is available in large quantities if you re-brand your study as the effect of climate change on the mating habits of the ocelot.  It is the rare unusual weather event or natural phenomenon (Zika virus!) that is not blamed by someone somewhere on man-made climate change.

As a result, this section could be near-infinitely long.  To avoid that, and to avoid a quickly tedious series of charts labelled "hurricanes not up", "tornadoes not up", etc., I want to focus more on the systematic errors that lead to the false impression that we are seeing man-made climate changes all around us.

click to enlarge

We will start with publication bias, which I would define here as mistaking a trend in the reporting of a type of event for a trend in the underlying events themselves.  Let's begin with a classic example from outside climate, the "summer of the shark".

click to enlarge

The media hysteria began in early July, 2001, when a young boy was bitten by a shark on a beach in Florida.  Subsequent attacks received breathless media coverage, up to and including near-nightly footage from TV helicopters of swimming sharks.  Until the 9/11 attacks, sharks were the third biggest story of the year as measured by the time dedicated to it on the three major broadcast networks’ news shows.

Through this coverage, Americans were left with a strong impression that something unusual was happening — that an unprecedented number of shark attacks were occurring in that year, and the media dedicated endless coverage to speculation by various “experts” as to the cause of this sharp increase in attacks.

click to enlarge

Except there was one problem — there was no sharp increase in attacks. In the year 2001, five people died in 76 shark attacks. However, just a year earlier, 12 people had died in 85 attacks. The data showed that 2001 actually was a down year for shark attacks.  The increased media coverage of shark attacks was mistaken for an increase in shark attacks themselves.

Hopefully the parallel with climate reporting is obvious.  Whereas a heat wave in Moscow was likely local news only 30 years ago, now it is an international story that is tied, in every broadcast, to climate change.  Every single tail-of-the-distribution weather event from around the world is breathlessly reported, leaving the impression among viewers that more such events are occurring, even when there is in fact no such trend.   Further, since weather events can drive media ratings, there is  an incentive to make them seem scarier:

click to enlarge

When I grew up, winter storms were never named.  It was just more snow in Buffalo, or wherever.  Now, though, we get "Winter Storm Saturn: East Coast Beast."  Is the weather really getting scarier, or just the reporting?

click to enlarge

The second systematic error is not limited to climate, and is so common I actually have a category on my blog called "trend that is not a trend".   There is a certain chutzpah involved in claiming a trend that does not actually exist in the data, but such claims occur all the time.  In climate, a frequent variation on this failure is claiming a trend from a single data point -- specifically, a tail-of-the-distribution weather event will be put forward as "proof" that the climate is changing, i.e., that there is somehow a trend for the worse in the Earth's climate.

The classic example was probably just after Hurricane Katrina.  In a speech in September of 2005 in San Francisco, Al Gore told his Sierra Club audience that not only was Katrina undoubtedly caused by man-made global warming, but that it was the harbinger of a catastrophic onslaught of future such hurricanes.     In fact, though, there is no upward trend in hurricane activity.   2005 was a high but not unprecedented year for hurricanes, and Katrina was soon followed by a long and historic lull in North American hurricane activity.

Counting hurricane landfalls is a poor way to look at hurricanes.  A better way is to look at the total energy of hurricanes and cyclones globally.  And as you can see, the numbers are cyclical (as every long-time hurricane observer could have told Mr. Gore) but without any trend:

click to enlarge

In fact, death rates from severe weather have been dropping throughout the last century, at the same time CO2 levels have been rising.

click to enlarge

Of course, it is likely that increasing wealth and better technology are responsible for much of this mitigation, rather than changes in underlying weather patterns, but this is still relevant to the debate -- many proposed CO2 abatement plans would have the effect of slowing growth in the developing world, leaving those countries more vulnerable to weather events.   I have argued for years that the best way to fight weather deaths is to make the world rich, not to worry about one hurricane more or less.

Droughts are another event where the media quickly finds someone to blame the event on man-made climate change and to declare that this one event is proof of a trend.  Bill McKibben, for example, tweeted about drought and corn yields many times in 2012.

It turns out that based on US government data, the 2012 drought was certainly severe but no worse than several other droughts of the last 50 years (negative numbers represent drought).

click to enlarge

There is no upward trend at all (in fact a slightly downward trend that is likely not statistically significant) in dry weather in the US.

click to enlarge

McKibben blamed bad corn yields in 2012 on man-made global warming, and again implied that one year's data point was indicative of a trend.

US corn yields indeed were down in 2012, but still higher than at any time they had been since 1995.

Slide138

It is worth noting the strong upward trend in corn yields from 1940 to today, at the same time the world has supposedly experienced unprecedented man-made warming.   I might also point out the years in yellow, when corn was grown prior to the strong automation of farming via the fossil fuel economy.  Bill McKibben hates fossil fuels and believes they should be entirely eliminated.  If so, he also must "own" the corn yields in yellow.  CO2-driven warming has not inhibited corn yields, but having McKibben return us to a pre-modern economy certainly would.

Anyway, as you might expect, corn yields after 2012 returned right back to trend and continued to hit new records.  2012 did not represent a new trend; it was simply one bad year.

Slide139

I think most folks would absolutely swear, from media coverage, that the US is seeing more new high temperatures set and an upward trend in heat waves.  But it turns out neither is the case.

click to enlarge

Obviously, one has to be careful with this analysis.  Many temperature stations in the US Historical Climatology Network have only existed for 20 or 30 years, so their all-time high for any given day is, by definition, going to have been set in the last 20 or 30 years.  But if one looks only at temperature stations with many years of data, as done above, we can see there has been no particular uptick in high temperature records, and in fact a disproportionate number of our all-time local records were set in the 1930's.
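Here is a minimal sketch of that sort of screening, using a hypothetical daily Tmax file.  The file name, column names, and the 80-year cutoff are my assumptions for illustration, not the actual analysis behind the chart above.

```python
import pandas as pd

# Hypothetical daily Tmax records: one row per station per day, column names assumed.
daily = pd.read_csv("ushcn_daily_tmax.csv", parse_dates=["date"])   # columns: station, date, tmax

# Keep only stations with a long record, so "all-time record" actually means something.
years_of_data = daily.groupby("station")["date"].agg(lambda d: d.dt.year.nunique())
long_stations = years_of_data[years_of_data >= 80].index
daily = daily[daily["station"].isin(long_stations)]

# For each long-record station, find the year its all-time record high was set,
# then count how many stations set their record in each decade.
record_rows = daily.loc[daily.groupby("station")["tmax"].idxmax()]
records_by_decade = (record_rows["date"].dt.year // 10 * 10).value_counts().sort_index()
print(records_by_decade)
```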

While there has been a small uptick in heat waves over the last 10-20 years, it is trivial compared to the heat of the 1930's.

click to enlarge

Looking at it a different way, there is no upward trend in 100 degree (Fahrenheit) days...

click to enlarge

Or even 110 degree days.  Again, the 1930's were hot, long before man-made CO2 could possibly have made them so.

click to enlarge

Why, one might ask, don't higher average global temperatures translate into more daytime high temperature records?  Well, we actually gave the answer back in Chapter 4A, but as a reminder, much of the warming we have seen has occurred at night, raising the nighttime lows without as much effect on daytime highs.  As a result, we are seeing more record-high nighttime Tmin values than we have in much of the last century, without seeing more record daytime Tmax temperatures:

Click to enlarge

We could go on all day with examples of claiming a trend from a single data point.  Watch for it yourself.  But for now let's turn to a third category.

click to enlarge

We can measure things much more carefully and accurately than we could in the past.  This is a good thing, except when we are trying to compare the past to the present.  In a previous chapter, we showed a count of sunspots, and databases of sunspot counts go all the way back into the early 18th century.  Were telescopes in 1716 able to see all the sunspots we can see in 2016?  Or might an upward trend in sunspot counts be biased by our better ability today to detect small ones?

A great example of this comes, again, from Al Gore's movie in which Gore claimed that tornadoes were increasing and man-made global warming was the cause.  He was working with this data:

click to enlarge

This certainly looks scary.  Tornadoes have increased by a factor of 5 or 6!  But if you look at the NOAA web site, right under this chart, there is a big warning that says to beware of this data.  With doppler radar and storm chasers and all kinds of other new measurement technologies, we can now detect smaller tornadoes that were not counted in the 1950's.  NOAA is careful to explain that this chart is biased by changes in measurement technology.  If one looks only at larger tornadoes we were unlikely to miss in the 1950's, there is no upward trend, and in fact there may be a slightly declining trend.

click to enlarge

That, of course, does not stop nearly every person in the media from blaming global warming whenever there is an above-average tornado year.

Behind nearly every media story about "abnormal" weather or that the climate is somehow "broken" is an explicit assumption that we know what "normal" is.  Do we?

click to enlarge

We have been keeping systematic weather records for perhaps 150 years, and have really been observing the climate in detail for perhaps 30 years.  Many of our best tools are space-based and obviously only have 20-30 years of data at most.  Almost no one thinks we have been able to observe climate in depth through many of its natural cycles, so how do we know exactly what is normal?  Which year do we point to and say, "that was the normal year, that was the benchmark"?

One good example of this is glaciers.  Over the last 30 years, most (but not all) major glaciers around the world have retreated, leading to numerous stories blaming this retreat on man-made warming.  But one reason that glaciers have retreated over the last 50 years is that they were also retreating the 50 years before that and the 50 years before that:

click to enlarge

In fact, glaciers have been retreating around the world since the end of the Little Ice Age (I like to date it to 1812, with visions of Napoleon's army freezing in Russia, but that is of course arbitrary).

A while ago President Obama stood in front of an Alaskan glacier and blamed its retreat on man.  But at least one Alaskan glacier in the area has been mapped for centuries, and has been retreating for centuries:

click to enlarge

As you can see, in terms of distance, most of the retreat actually occurred before 1900.  If one wants to blame the modern retreat of these glaciers on man, one is left with the uncomfortable argument that natural forces drove the retreat until about 1950, at which point the natural forces stopped just in time for man-made effects to take over.

Melting ice is often linked to sea level rise, though interestingly net ice melting contributes little to IPCC forecasts of sea level rises due to expected offsets with ice building in Antarctica -- most forecast sea level rise comes from the thermal expansion of water in the oceans.  And of course, the melting arctic sea ice that makes the news so often contributes nothing to sea level rise (which is why your water does not overflow your glass when the ice melts).

But the story for rising sea levels is the same as with glacier retreats -- the seas have been rising for much longer than man has been burning fossil fuels in earnest, going back to about the same 1812 start point:

Slide132

There is some debate about manual corrections added to more recent data (that should sound familiar to those reading this whole series) but recent sea level rise seems to be no more than 3 mm per year.  At most, recent warming has added perhaps 1 mm a year to the natural trend, or about 4 inches a century.

Our last failure mode is again one I see much more widely than just in climate.  Whether the realm is economics or climate or human behavior, the media loves to claim that incredibly complex, multi-variable systems are in fact driven by a single variable, and -- who'd have thunk it -- that single variable happens to fit with their personal pet theory.

click to enlarge

With all the vast complexity of the climate, are we really to believe that every unusual weather event is caused by a 0.013 percentage point change (270 ppm to 400 ppm) in the concentration of one atmospheric gas?

Let me illustrate this in another way.  NOAA not only publishes a temperature anomaly (which we have mostly been using in our charts) but also takes a shot at coming up with an average temperature for the US.   The following chart uses their data for the monthly average of Tmax (the daily high across all locations), Tmin (the daily low across all locations) and Tavg (generally the average of Tmin and Tmax).

 

click to enlarge

Note that even the average temperatures vary across a range of 40F through the seasons and years.  If one includes the daily highs and lows, the temperatures vary over a range of nearly 70F.  And note that this is the average for the entire US over a month.  If we were to look at the range of daily temperatures across the breadth of locations, we would see numbers varying from well below zero to over 110F.

The point of all this is that temperatures naturally vary a lot.  Now look at the dotted black line.  That is the long-term trend in the average, trending slightly up (since we know that average temperatures have risen over the last century).  The slope of that line, around 1F per century for the US, is virtually invisible at this scale.  It is tiny, tiny, tiny compared to the natural variation of the averages.
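To see just how small that slope is relative to the natural swings, here is a toy sketch with synthetic monthly "US average" temperatures: a seasonal cycle of roughly plus or minus 20F, some month-to-month noise, and a 1F-per-century trend.  The numbers are invented for illustration and are not NOAA's actual series.

```python
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(120 * 12)                        # 120 years of monthly data
seasonal = 20 * np.sin(2 * np.pi * months / 12)     # ~40F peak-to-trough seasonal swing
trend = 1.0 * (months / 12) / 100                   # 1F of warming per century
noise = rng.normal(0, 3, months.size)               # month-to-month weather noise
temps = 55 + seasonal + trend + noise               # synthetic "US average" temperature, F

fitted_slope = np.polyfit(months / 12, temps, 1)[0]           # degrees F per year
print(f"fitted trend: {fitted_slope * 100:.1f} F per century")
print(f"peak-to-trough range of the series: {temps.max() - temps.min():.0f} F")
```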

The point of this is not that small increases in the average don't matter, but that it is irrational to blame every tail-of-the-distribution temperature event on man-made warming, since no matter how large we decide that man-made contribution has been, it is trivial compared to the natural variation we see in temperatures.

OK, I know that was long, but this section was actually pretty aggressively edited even to get it this short.  For God's sake, we didn't even mention polar bears (the animals that have already survived through several ice-free interglacial periods but will supposedly die if we melt too much ice today).  But it's time to start driving towards a conclusion, which we will do in our next chapter.

Chapter 8, summarizing the lukewarmer middle ground, is here.

Coyote, Why All the Climate Stuff Suddenly?

Over the last several weeks, you have seen a series of posts on climate, and completing them has dominated much of my blogging time.  There are two or three chapters left to post.

This does not mean that I am shifting my attention on this blog to doing mostly climate.  In fact, if anything, it means perhaps the opposite.   I find the climate debate increasingly boring.  I don't think the arguments going on today are really much different than those that were going on five years ago.  So I have decided to try to get my current thinking on the topic online in an organized way, and then I likely will move on from the topic for a while.   I will still give presentations on it, and certainly will blog on the current efforts by AG's to use the force of government to suppress one side of the debate, but I have other things I would like to dig into more.

Denying the Climate Catastrophe: 6. Climate Models vs. Actual Temperatures

This is Chapter 6 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures (this article)
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

In some sense, this is perhaps the most important chapter, the climax of all the discussion to this point.  It is where we return to climate forecasts and attempt to conclude whether forecasts of catastrophic levels of man-made warming are reasonable.  So let's take a step back and see where we are.

Here is the framework we have been working with -- we have walked through in earlier chapters both the "theory" and "observation" sections, ending most recently in chapter 5 with a discussion of how much past warming can be attributed to man.

click to enlarge

It is important to remember why we embarked on the observation section.  We ended the theory section with a range of future temperature forecasts, from the modest to the catastrophic, based on differing sensitivities of temperature to CO2 which were in turn largely based on varying assumptions about positive feedback effects in the climate.

Slide17

We concluded at the time that there was not much further we could go with pure theory in differentiating between these forecasts -- we had to consult actual observations to validate or invalidate them.

We've already done one such analysis when we made two comparisons back in Chapter 4.  We showed that temperatures had risen over the last 30 years by only a third to a half the rate projected by James Hansen to Congress...

click to enlarge

And that even the IPCC admitted in its last report that temperatures were running below, or at best at the very low end of, past forecast bands.

click to enlarge

But in the grand scheme of things, even 30 years is a pretty short time frame to discuss climate changes.  Remember that in my own attribution attempt in Chapter 5, I posited an important 66-year cycle, and past temperature reconstructions imply other cycles that are centuries and millennia long.

But there is a way we can seek confirmation of climate forecasts using over 100 years of past temperature data.  Let's take our forecast chart we showed above and give ourselves a bit more space on the graph by expanding the timescale:

click to enlarge

Here is the key insight that is going to help us evaluate the forecasts:  each forecast represents an actual, physical relationship between changes in CO2 concentrations and changes in temperature.  If such a relationship is to hold in the future, it also has to be valid in the past.  So we can take each of these different forecasts for the relation between temperature and CO2 and run them backwards to pre-industrial times in the 19th century, when atmospheric CO2 concentrations were thought to be around 270 ppm.

click to enlarge

The temperature value of each line at the 270 ppm point represents the amount of warming we should already have seen from man-made CO2.

click to enlarge

What we see is that most of the mainstream IPCC and alarmist forecasts greatly over-predict past warming.  For example, this simple analysis shows that for the IPCC mean forecast to be correct, we should have seen about 1.6C of manmade warming over the last century and a half.  But we know that we have not seen more than about 0.8C of total warming.  Even if all of that is attributed to man (which we showed in the last chapter is unlikely), warming has still been well short of what this forecast would predict.  If we define a range for historic man-made warming from 0.33C (the number I came up with in the last chapter) to 0.8C (basically all of past warming due to man), we get numbers that are consistent with the non-catastrophic, zero-feedback cases.

click to enlarge

Of course we are leaving out the time dimension -- many of the hypothesized feedbacks take time to operate, so the initial transient response of the world's temperatures is not the same as the longer-term equilibrium response.  But the transient response is likely at least 2/3 of the full equilibrium value, meaning that my hypothesized value for man-made past warming of 0.33C would still be less than the no-feedback case on an equilibrium basis.
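Here is a minimal sketch of the backcast arithmetic, assuming the standard logarithmic relationship between CO2 concentration and warming and using round numbers: 270 ppm pre-industrial, roughly 400 ppm today, and a transient response of about 2/3 of equilibrium.  The sensitivities are illustrative brackets rather than the exact values behind the chart, so the results will differ a bit from the 1.6C figure quoted above.

```python
import math

C0, C = 270.0, 400.0                    # pre-industrial and (roughly) current CO2, in ppm
doublings = math.log2(C / C0)           # ~0.57 doublings of CO2 so far

# Illustrative sensitivities in degrees C per doubling of CO2 (assumed bracket values)
cases = {"no-feedback (~1.2C per doubling)": 1.2,
         "IPCC-mean-style (~3C per doubling)": 3.0,
         "high-feedback (~5C per doubling)": 5.0}

for label, sensitivity in cases.items():
    equilibrium = sensitivity * doublings        # warming implied to date, at equilibrium
    transient = equilibrium * 2 / 3              # rough transient fraction discussed above
    print(f"{label}: ~{equilibrium:.1f}C implied at equilibrium, ~{transient:.1f}C transient")
```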

It is from this analysis that I first convinced myself that man-made warming was unlikely to be catastrophic.

I want to add two notes here.

First, we mentioned back in the attribution section that some scientists argue that man has caused not just all of the observed historical warming but more.  This chapter's analysis explains why.  The fact that climate models tend to over-predict history is not a secret among climate modelers (though it is something they seldom discuss publicly).  To justify the high feedback and sensitivity assumptions in their forecasts, they need more warming in the past.   One way to do this is to argue that the world would have cooled without man-made CO2, so that man-made CO2 contributed 0.8C of warming in addition to offsetting whatever that cooling would have been.  This allows attribution of more than 100% of past warming to man.

There are various ways this is attempted, but the most popular centers around man-made sulfate aerosols.  These aerosols are byproducts of burning sulfur-heavy fossil fuels, particularly coal, and they tend to have a cooling effect on the atmosphere (this is one reason why, in the 1970's, the consensus climate prediction was that man was causing the world to cool, not warm).  Some scientists argue that these aerosols have tended to cool the Earth over the past decades, but as we clean up our fuels their effect will go away and we will get catch-up warming.

There are a couple of problems with this line of thought.  The first is that we understand even less about the magnitude of aerosol cooling than we do about CO2 warming.  Any value we choose is almost a blind guess (though as we shall see in a moment, this can be a boon to modelers on a mission).  The second issue is that these aerosols tend to be very short-lived and local.  They don't remain in the atmosphere long enough to mix thoroughly and have a global effect.  Given their localization and observed concentrations, it is almost impossible to imagine them having more than a tenth or two of a degree of effect on world temperatures.  And I will add that if we need to take into account cooling from sulfate aerosols, we also need to take into account the warming and ice-melting effect of black carbon soot from dirty Asian coal combustion.  But we will return to that later in our section on Arctic ice.

My second, related note is that scientists will frequently claim that their computer models do correctly match historic temperatures when run backwards.  As a long-time modeler of complex systems, my advice is this:  don't believe it until you have inspected the model in detail.  At least 9 times out of 10, one will find that this sort of tight fit with history is the result of manual tweaking, usually via a few "plug" variables.

Here is one example -- there was a study a while back that tried to understand how a number of different climate models could all arrive at very different temperature sensitivities to CO2, but all still claim to model the same history accurately.  What was found was that there was a second variable -- past cooling from man-made aerosols, discussed above -- that also varied greatly between models.  And it turned out that the value chosen in the models for this second variable was exactly the value necessary to make that model's output match history -- that is why I said that our very lack of knowledge of the actual cooling from such aerosols could be a boon to modelers on a mission.  In essence, there is a strong suspicion that this variable's value was not based on any observational evidence, but was simply chosen as a plug figure to make the model match history.

Having gone about as far as we can with the forecasts without diving into a whole new order of detail, let's move on to the final alarmist contention, that man-made CO2 is already changing the climate for the worse.  We will discuss this in Chapter 7.

Chapter 7 on whether we are already seeing man-made climate change is here.

Denying the Climate Catastrophe: 5b. Natural Attribution

This is part B of Chapter 5 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made;  B) Natural Attribution (this article)
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

In part A, we discussed the main line of argument for attributing past warming to man-made CO2.  In essence, scientists have built computer models to simulate the climate (and global temperatures).  When these models were unable to simulate the amount of warming that occurred in the two decades between 1978 and 1998 using only what they thought were the major natural climate drivers, scientists concluded that this warming could not have been natural and could only have happened if the climate has a high sensitivity to man-made CO2.

This argument only works, of course, if the climate models are actually a correct representation of the climate.  And that can only be proven over time, by comparing climate model output to actual weather.  Back in chapter 4A, we briefly discussed how actual temperatures are in fact not tracking very well with climate model predictions, which should throw a substantial amount of doubt on the current quality of climate models (though the media still tends to treat model predictions as authoritative).

In this section, we will focus on some of the natural factors that are missing from most climate models.   Obviously, if important natural drivers have been left out of the models, then one cannot conclude from the inability of the models to match historical warming that the historical warming couldn't have been natural.  After discussing some of these factors, I will take my own swing at the attribution problem.

Long-term Climate Shifts

We will begin with long-term climate variations.  These are most certainly left out of the models, because no one really understands why they occur (though theories abound, of course).  Mann's hockey stick notwithstanding, the consensus picture of past climate continues to include a strong warming period in the Middle Ages and a cool period, called the Little Ice Age, in the 16th and 17th centuries.

click to enlarge

Imagine you were a climate modeler in 1600.  Your model would probably have under-predicted temperatures over the next 200 years, because you would have been trying to model starting at the bottom of a long-term cyclical trend.  So clearly leaving this trend out in 1600 would get the wrong answer.  Wouldn't leaving it out in the year 2000 also get the wrong answer?  All too often scientists tend to assume (though not always explicitly) that this long-term natural recovery of temperatures ended around 1950, at the same time they believe man-made warming started.  A metaphorical hand-off occurred from natural to man-made factors.  But there is no evidence for this whatsoever.  We don't know what caused the Little Ice Age, so we don't know how long the natural recovery from it should last or when it ends.

Changes in the Sun

Since we have mentioned it, let's discuss the sun.  The sun is the dynamo that, along with a few smaller effects like the rotation of the Earth, drives the climate.  We have known for some time that the Sun experiences cycles of variation, and one of the ways one can observe this variation is by looking at sunspots.  We have more sophisticated ways of measuring the sun today, but we still count the spots.

 

click to enlarge

Sunspots are cyclical in nature, following an eleven-or-so-year cycle (you can see this in the spikes in the monthly light blue data above).  But when one takes this cycle out of the picture, as was done with the 10.8-year moving average above, there also appear to be longer cyclical trends.  Since it is generally thought that more sunspots correlate with higher solar activity and output, one might expect some correlation between this solar trend and temperatures.  As we can see above, by the sunspot metric the sun was more active in the second half of the last century than in the first half.

Today, we don't have to rely on just the spots; we can look at the actual energy output of the sun.  And it turns out that the types of variations we have seen over recent decades in sunspots do not translate to very large changes in solar output on a percentage basis.  Yes, there is more solar output, but the extra amount is small -- too small to explain much temperature variation.   There is, though, an emerging new theory that a complex interaction of the sun with cosmic rays may affect cloud formation, acting as a multiplier on changes in solar output.  A lot of skeptics, eager to support the natural causation argument, jumped on this theory.  However, though the theory is intriguing and could turn out to be correct, I think folks are getting well ahead of the evidence in giving it too much credence at this point.

Ocean Cycles

At the end of the day, while solar variation may explain very long-cycle climate variations, it does not do much to explain our 1978-1998 warming period, so we will move on to another natural factor that does appear to have some explanatory power and which is also not in most climate models -- ocean cycles.

This is a complicated topic and I am far from an expert.  In short:  as mentioned in an earlier chapter, the oceans have far more heat-carrying capacity than the atmosphere.  It turns out that oceans have cycles, decades long, in which they exchange more or less heat with the atmosphere.   In their "warm" periods, these cycles tend to leave more heat in the atmosphere, and in their "cold" periods they bury more heat in their depths.   One such cycle is the Pacific Decadal Oscillation (PDO), which will be familiar to most Americans because "El Nino" and "La Nina" climate patterns are part of this PDO cycle.  If one plots global temperatures against the PDO cycles, there is a good deal of correlation:

click to enlarge

When the PDO has been in its warm phases (the red periods in the chart above), global temperatures rise.  When it is in its cool phases (the blue zones), temperatures are flat to down.   As you can see, the PDO was in a warm phase in our 1978-1998 period.  Surely some of that steep rise in temperature may have come from the effect of this ocean cycle, yet this cycle was not included in the climate models that supposedly ruled out the possibility of natural causes for warming in this period.

A number of scientific studies have tried to remove these (and other) cyclical and event-based drivers from the historical temperature record.   Here is one such attempt (ENSO and AMO are ocean cycles; large volcanoes tend to have a global cooling effect for a few years after their eruption):

click to enlarge

With these natural effects removed, much of the cyclical variation in the Hadley CRUT4 data is gone, and we are left with a pretty constant linear trend.   Aha!  There is the warming signal, right?  Well, yes, but there is a problem here for the effort to attribute most or all of this warming to man -- specifically, this is not at all the trend one would expect if the long-term trend were primarily from man-made CO2.  Note that the very linear trend starts around 1900, long before we began burning fossil fuels in earnest, and that its slope is remarkably constant, while man-made CO2 production has been growing exponentially.   Supporters of man-made attribution are left in the uncomfortable position of arguing that there must have been natural warming until about 1950 which stopped just in time for man-made warming to take over.
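For readers who want to see the mechanics of that kind of study, here is a minimal sketch: regress the temperature anomaly on indices for the natural drivers and look at what is left.  The file and column names are placeholders, and real studies handle lags, autocorrelation, and uncertainty far more carefully.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly data with assumed column names: a temperature anomaly plus
# indices for ENSO, AMO, volcanic aerosols, and solar output.
df = pd.read_csv("monthly_climate_indices.csv")       # columns: anomaly, enso, amo, volcanic, solar

X = np.column_stack([np.ones(len(df)), df["enso"], df["amo"], df["volcanic"], df["solar"]])
y = df["anomaly"].to_numpy()

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)        # ordinary least squares fit
natural_part = X @ coeffs
adjusted = y - (natural_part - coeffs[0])             # anomaly with those natural drivers regressed out

# Whatever trend remains in the adjusted series is what is left to attribute
# to everything else (including CO2).
years = np.arange(len(df)) / 12
trend_per_century = np.polyfit(years, adjusted, 1)[0] * 100
print(f"residual trend: {trend_per_century:.2f} C per century")
```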

My Attribution Solution

A number of years ago I decided to take a shot at the attribution problem, largely just for fun, but it turned out so well that I still keep it up to date.   I decided to assume just three factors:  1. a long-term linear trend starting even before the 20th century, presumably natural; 2. a second linear trend added on top of it, presumably from man-made effects; and 3. a multi-decadal cyclical factor, from things like ocean cycles.  I let the optimization program control everything -- the slopes of the linear trends, the amplitude and period of the cyclical factor, the start date of the second modern trend, etc. -- to get the best fit with historic temperatures.  As before, I used monthly Hadley CRUT4 data.
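Here is a minimal sketch of that kind of fit using scipy's curve_fit on a monthly anomaly series.  The data file, column names, and starting guesses are assumptions for illustration; my original fit used a different optimization setup, so the fitted parameters will not match exactly.

```python
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

# Hypothetical monthly anomaly file with assumed columns: year (fractional) and anomaly (C)
df = pd.read_csv("hadcrut4_monthly.csv")
t, y = df["year"].to_numpy(), df["anomaly"].to_numpy()

def model(t, base_slope, modern_slope, t_start, amp, period, phase, offset):
    """Long-term linear trend + second linear trend after t_start + one sine cycle."""
    modern = np.where(t > t_start, modern_slope * (t - t_start), 0.0)
    cycle = amp * np.sin(2 * np.pi * (t - phase) / period)
    return offset + base_slope * (t - t.min()) + modern + cycle

# Starting guesses (slopes in C per year, a ~66-year cycle) -- assumptions, not results
p0 = [0.004, 0.005, 1950, 0.2, 66, 1900, -0.4]
params, _ = curve_fit(model, t, y, p0=p0)

names = ["base_slope", "modern_slope", "t_start", "amp", "period", "phase", "offset"]
print(dict(zip(names, params.round(4))))
print(f"implied additional modern trend: {params[1] * 100:.2f} C per century")
```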

This is what we ended up with.  A 66-year sine wave:

click to enlarge

Plus a long-term linear trend of 0.36C per century and a new linear trend beginning around 1950 that adds another 0.5C per century (for a total linear trend after 1950 of 0.86C per century).

click to enlarge

The result was a pretty good fit 8 years ago and, more importantly, still continues to be a good fit today (unlike much more complicated climate models).

click to enlarge

Though the optimization was based on monthly data, you can see the fit even better if we add on a 5-year moving average to the chart:

click to enlarge

That is, then, my solution to the attribution problem.   Take the 0.5C per century since 1950 that this model shows as a modern linear trend, and for argument's sake attribute it all to man.  From 1950-2016 (66 years, coincidentally my sine wave period) that works out to 0.33C of historic warming due to man-made CO2.

In the next chapter, we return to the climate forecasts we discussed in chapters 2 and 3 and ask ourselves whether these make sense in the context of past warming.

Chapter 6 on climate forecasts vs. actual temperatures is here.

Wrapped Around the Axle

This is home repair day, so I am working from home while a variety of repair people show up (none of whom has yet shown up in their promised arrival time window).

Anyway, the A/C guy was here first and was diagnosing why my condenser didn't seem to be running.  He found this on the cooling fan motor (dead):

DSC_0257 (1)

Life in Arizona.

Denying the Climate Catastrophe: 5a. Arguments For Attributing Past Warming to Man

This is part A of Chapter 5 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made (this article); B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

Having established that the Earth has warmed over the past century or so (though with some dispute over how much), we turn to the more interesting -- and certainly more difficult -- question of finding causes for past warming.  Specifically, for the global warming debate, we would like to know how much of the warming was due to natural variations and how much was man-made.   Obviously this is hard to do, because no one has two thermometers that show the temperature with and without man's influence.

I like to begin each chapter with the IPCC's official position, but this is a bit hard in this case because they use a lot of soft words rather than exact numbers.  They don't say 0.5C of the 0.8C is due to man, or anything so specific.   They use phrases like "much of the warming" to describe man's effect.  However, it is safe to say that most advocates of catastrophic man-made global warming theory will claim that most or all of the last century's warming is due to man, and that is how we have put it in our framework below:

click to enlarge

By the way, the "and more" is not a typo -- there are a number of folks who will argue that the world would have actually cooled without manmade CO2, and thus that manmade CO2 has contributed more than the total measured warming.  This actually turns out to be an important argument, since the totality of past warming is not enough to be consistent with high-sensitivity, high-feedback warming forecasts.  But we will return to this in a later chapter.

Past, Mostly Abandoned Arguments for Attribution to Man

There have been and still are many different approaches to the attributions problem.  In a moment, we will discuss the current preferred approach.  However, it is worth reviewing two other approaches that have mostly been abandoned but which had a lot of currency in the media for some time, in part because both were in Al Gore's film An Inconvenient Truth.

Before we get into them, I want to take a step back and briefly discuss what is called paleo-climatology, which is essentially the study of past climate before the time when we had measurement instruments and systematic record-keeping for weather.   Because we don't have direct measurements, say, of the temperature in the year 1352, scientists must look for some alternate measure, called a "proxy,"  that might be correlated with a certain climate variable and thus useful in estimating past climate metrics.   For example, one might look at the width of tree rings, and hypothesize that varying widths in different years might correlate to temperature or precipitation in those years.  Most proxies take advantage of such annual layering, as we have in tree rings.

One such methodology uses ice cores.  Ice in certain places like Antarctica and Greenland is laid down in annual layers.  By taking a core sample, characteristics of the ice can be measured at different layers and matched to approximate years.  CO2 concentrations can actually be measured in air bubbles in the ice, and atmospheric temperatures at the time the ice was laid down can be estimated from certain oxygen isotope ratios in the ice.  The result is that one can plot a chart going back hundreds of thousands of years that estimates atmospheric CO2 and temperature.  Al Gore showed this chart in his movie, in a really cool presentation where the chart wrapped around three screens:

click to enlarge

As Gore points out, this looks to be a smoking gun for attribution of temperature changes to CO2.  From this chart, temperature and CO2 concentrations appear to be moving in lockstep.  From this, CO2 doesn't seem to be a driver of temperatures, it seems to be THE driver, which is why Gore often called it the global thermostat.

But there turned out to be a problem, which is why this analysis no longer is treated as a smoking gun, at least for the attribution issue.  Over time, scientists got better at taking finer and finer cuts of the ice cores, and what they found is that when they looked on a tighter scale, the temperature was rising (in the black spikes of the chart) on average 800 years before the CO2 levels (in red) rose.

This obviously throws a monkey wrench in the causality argument.  Rising CO2 can hardly be the cause of rising temperatures if the CO2 levels are rising after temperatures.

It is now mostly thought that what this chart represents is the liberation of dissolved CO2 from oceans as temperatures rise.  Oceans have a lot of dissolved CO2, and as the oceans get hotter, they will give up some of this CO2 to the atmosphere.

The second outdated attribution analysis we will discuss is perhaps the most famous:  The Hockey Stick.  Based on a research paper by Michael Mann when he was still a grad student, it was made famous in Al Gore's movie as well as numerous other press articles.  It became the poster child, for a few years, of the global warming movement.

So what is it?  Like the ice core chart, it is a proxy analysis attempting to reconstruct temperature history, in this case over the last 1000 years or so.  Mann originally used tree rings, though in later versions he has added other proxies, such as from organic matter laid down in sediment layers.

Before the Mann hockey stick, scientists (and the IPCC) believed the temperature history of the last 1000 years looked something like this:

click to enlarge

Generally accepted history had a warm period from about 1100-1300 called the Medieval Warm Period which was warmer than it is today, with a cold period in the 17th and 18th centuries called the "Little Ice Age".  Temperature increases since the little ice age could in part be thought of as a recovery from this colder period.  Strong anecdotal evidence existed from European sources supporting the existence of both the Medieval Warm Period and the Little Ice Age.  For example, I have taken several history courses on the high Middle Ages and every single professor has described the warm period from 1100-1300 as creating a demographic boom which defined the era (yes, warmth was a good thing back then).  In fact, many will point to the famines in the early 14th century that resulted from the end of this warm period as having weakened the population and set the stage for the Black Death.

However, this sort of natural variation before the age where man burned substantial amounts of fossil fuels created something of a problem for catastrophic man-made global warming theory.  How does one convince the population of catastrophe if current warming is within the limits of natural variation?  Doesn't this push the default attribution of warming towards natural factors and away from man?

The answer came from Michael Mann (now Dr. Mann, though the original work was produced before he finished grad school).  It has been dubbed the hockey stick for its shape:

 

click to enlarge

The reconstructed temperatures are shown in blue, and gone are the Medieval Warm Period and the Little Ice Age, which Mann argued were local to Europe and not global phenomena.  The story that emerged from this chart is that before industrialization, global temperatures were virtually flat, oscillating within a very narrow band of a few tenths of a degree.  However, since 1900, something entirely new seems to be happening, breaking the historical pattern.  From this chart, it looks like modern man has perhaps changed the climate.  This shape, with the long flat historical trend and the sharp uptick at the end, is why it gets the name "hockey stick."

Oceans of ink and electrons have been spilled over the last 10+ years around the hockey stick, including in a myriad of published books.  In general, except for a few hard-core paleoclimatologists and perhaps Dr. Mann himself, most folks have moved on from the hockey stick as a useful argument in the attribution debate.  After all, even if the chart is correct, it provides only indirect evidence of the effect of man-made CO2.

Here are a few of the critiques:

  • Note that the real visual impact of the hockey stick comes from the orange data on the far right -- the blue data alone doesn't form much of a hockey stick.  But the orange data is from an entirely different source, in fact an entirely different measurement technology -- the blue data is from tree rings, and the orange is from thermometers.  Dr. Mann bristles at the accusation that he "grafted" one data set onto the other, but by drawing the chart this way, that is exactly what he did, at least visually.  Why does this matter?  Well, we have to be very careful with inflections in data that occur exactly at the point where we change measurement technologies -- we are left with the suspicion that the change in slope is due to differences in the measurement technology, rather than in the underlying phenomenon being measured.
  • In fact, well after this chart was published, we discovered that Mann and others like Keith Briffa actually truncated the tree ring temperature reconstructions (the blue line) early.  Note that the blue data ends around 1950.  Why?  Well, it turns out that many tree ring reconstructions showed temperatures declining after 1950.  Does this mean that thermometers were wrong?  No, but it does provide good evidence that the trees are not accurately following current temperature increases, and so probably did not accurately portray temperatures in the past.
  • If one looks at the graphs of all of Mann's individual proxy series that are averaged into this chart, astonishingly few actually look like hockey sticks.  So how do they average into one?  McIntyre and McKitrick in 2005 showed that Mann used some highly unusual and unprecedented-to-all-but-himself statistical methods that could create hockey sticks out of thin air.  The duo fed random data into Mann's algorithm and got hockey sticks (a simple simulation of this effect is sketched just after this list).
  • At the end of the day, most of the hockey stick (again due to Mann's averaging methods) was due to samples from just a handful of bristlecone pine trees in one spot in California, trees whose growth is likely driven by a number of non-temperature factors like precipitation levels and atmospheric CO2 fertilization.  Without these few trees, most of the hockey stick disappears.  In later years he added in non-tree-ring series, but the results still often relied on just a few series, including the Tiljander sediments where Mann essentially flipped the data upside down to get the results he wanted.  Taking out the bristlecone pines and the abused Tiljander series made the hockey stick go away again.
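
To make the McIntyre and McKitrick point concrete, here is a minimal simulation sketch.  This is not Mann's actual code; it simply "short-centers" random red-noise series on a recent calibration window (one common characterization of the MBH98 procedure) and compares the leading principal component against one computed with conventional full-record centering.  The series count, persistence, and window length are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, cal_years, phi = 70, 600, 80, 0.9

# Build AR(1) "red noise" proxy series with no climate signal in them.
noise = rng.standard_normal((n_series, n_years))
proxies = np.zeros_like(noise)
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + noise[:, t]

def first_pc(data, short_center):
    # Center each series either on the recent calibration window
    # ("short centering") or on its full-record mean, then take PC1 over time.
    if short_center:
        mean = data[:, -cal_years:].mean(axis=1, keepdims=True)
    else:
        mean = data.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return vt[0]

def hockey_stick_index(pc):
    # How far the calibration-period mean of PC1 sits from the rest of the
    # record, in standard deviations -- a crude measure of "hockey-stickness".
    return abs(pc[-cal_years:].mean() - pc[:-cal_years].mean()) / pc.std()

print("short-centered:", round(hockey_stick_index(first_pc(proxies, True)), 2))
print("fully centered:", round(hockey_stick_index(first_pc(proxies, False)), 2))
```

With most random seeds, the short-centered version tends to produce a leading component with a pronounced offset in the calibration window -- the "hockey sticks out of thin air" behavior described in the bullet above -- while the fully centered version does not.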

There have been plenty of other efforts at proxy series that continue to show the Medieval Warm Period and Little Ice Age as we know them from the historical record:

 

click to enlarge

As an aside, Mann's hockey stick was always problematic for supporters of catastrophic man-made global warming theory for another reason.  The hockey stick implies that the world's temperatures are, in the absence of man, almost dead-flat stable.   But this is hardly consistent with the basic hypothesis, discussed earlier, that the climate is dominated by strong positive feedbacks that take small temperature variations and multiply them many times.   If Mann's hockey stick is correct, it could also be taken as evidence against the high climate sensitivities that are demanded by the catastrophe theory.

 

The Current Lead Argument for Attribution of Past Warming to Man

So we are still left wondering, how do climate scientists attribute past warming to man?  Well, to begin, in doing so they tend to focus on the period after 1940, when large-scale fossil fuel combustion really began in earnest.   Temperatures have risen since 1940, but in fact nearly all of this rise occurred in the 20 year period from 1978 to 1998:

 

click to enlarge

To be fair, and better understand the thinking at the time, let's put ourselves in the shoes of scientists around the turn of the century and throw out what we know happened after that date.  Scientists then would have been looking at this picture:

click to enlarge

Sitting in the year 2000, the recent warming rate might have looked dire -- nearly 2C per century...

click to enlarge

Or possibly worse if we were on an accelerating course...

click to enlarge

Scientists began to develop a hypothesis that this temperature rise was occurring too rapidly to be natural, that it had to be at least partially man-made.  I have always thought this a slightly odd conclusion, since the slope from this 20-year period looks almost identical to the slope centered around the 1930's, which was very unlikely to have much human influence.

 

click to enlarge

Nevertheless, the hypothesis that the 1978-1998 temperature rise was too fast to be natural gained great currency.  But how does one prove it?

What scientists did was to build computer models to simulate the climate.  They then ran the computer models twice.  The first time they ran them with only natural factors, or at least only the natural factors they knew about or were able to model (they left a lot out, but we will get to that in time).  These models were not able to produce the 1978-1998 warming rates.  Then, they re-ran the models with manmade CO2, and particularly with a high climate sensitivity to CO2 based on the high feedback assumptions we discussed in an earlier chapter.   With these models, they were able to recreate the 1978-1998 temperature rise.   As Dr. Richard Lindzen of MIT described the process:

What was done, was to take a large number of models that could not reasonably simulate known patterns of natural behavior (such as ENSO, the Pacific Decadal Oscillation, the Atlantic Multidecadal Oscillation), claim that such models nonetheless accurately depicted natural internal climate variability, and use the fact that these models could not replicate the warming episode from the mid seventies through the mid nineties, to argue that forcing was necessary and that the forcing must have been due to man.

Another way to put this argument is "we can't think of anything natural that could be causing this warming, so by default it must be man-made."  With various increases in sophistication, this remains the lead argument in favor of attribution of past warming to man.
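
For what it's worth, the structure of that run-it-twice experiment can be illustrated with a toy zero-dimensional energy-balance model.  This is emphatically not a GCM: the sensitivity, response time, and forcing series below are all illustrative assumptions of my own, chosen only to show the shape of the argument -- a natural-only run versus a natural-plus-CO2 run, compared over 1978-1998.

```python
import numpy as np

years = np.arange(1940, 2001)
sensitivity = 0.8          # C per W/m^2 -- an assumed sensitivity parameter
tau = 10.0                 # assumed response time, in years

solar = 0.1 * np.sin(2 * np.pi * (years - 1940) / 11.0)               # toy solar cycle
volcanic = np.where(np.isin(years, [1963, 1982, 1991]), -2.0, 0.0)    # toy eruption forcing
co2 = 0.03 * (years - 1940)                                           # toy CO2 forcing ramp

def run(forcing):
    # Relax the temperature anomaly toward its equilibrium value each year.
    temp = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        temp[i] = temp[i - 1] + (sensitivity * forcing[i] - temp[i - 1]) / tau
    return temp

natural_only = run(solar + volcanic)
with_co2 = run(solar + volcanic + co2)

window = (years >= 1978) & (years <= 1998)
def trend(series):
    return np.polyfit(years[window], series[window], 1)[0] * 100      # C per century

print("natural-only 1978-1998 trend:", round(trend(natural_only), 2), "C/century")
print("with CO2     1978-1998 trend:", round(trend(with_co2), 2), "C/century")
```

In a toy like this the natural-only run obviously cannot reproduce the late-century ramp, because I did not put one in.  The real debate, which we take up in part B, is whether the actual models' "natural-only" runs leave out natural factors that matter.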

In part B of this chapter, we will discuss what natural factors were left out of these models, and I will take my own shot at a simple attribution analysis.

The next section, Chapter 5 Part B, on natural attribution is here

Denying the Climate Catastrophe: 4b. Problems With The Surface Temperature Record

This is part B of the fourth chapter of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record (this article)
  5. Attribution of Past Warming;  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

In part A of this chapter, we showed that the world had indeed warmed over the past 30-100 years, whether you looked at the surface temperature record or the satellite record.  Using either of these metrics, though, we did not see global warming accelerating, nor did we see warming rates that were faster than predicted.  In fact, we saw the opposite.

One story I left out of part A, because it did not affect the basic conclusions we drew, is the criticisms of the surface temperature record.  In this part B, we will discuss some of these criticisms, and see why many skeptics believe the 0.8C warming number for the past century is exaggerated.  We will also gain some insights as to why the satellite measured warming rates may be closer to the mark than rates determined by surface temperature stations.

Uncorrected Urban Biases

Years ago a guy named Steve McIntyre published a graphical portrayal of warming rates across the US.  This is a common chart nowadays.  Anyway, this chart (almost 10 years old) drew from temperature measurement stations whose locations are shown with the crosses on the map:

usgrid80

I was living in Arizona at the time and I was interested to learn that the highest warming rate was being recorded at the USHCN station in Tucson (remember, just because Arizona is hot is no reason to necessarily expect it to have high warming rates, they are two different things).  At the time, Anthony Watts was just kicking off an initiative to develop quality control data for USHCN stations by having amateurs photograph the sites and upload them to a central database.  I decided I would go down to the Tucson site to experience the highest warming rate myself.  This is what I found when I tracked down the station, and took this picture (which has been reproduced all over the place at this point):

click to enlarge

That is the temperature station -- the fenced-in white box (the uproar over this picture eventually caused this location to be closed).  It was in the middle of a parking lot in the middle of a major university in the middle of a growing city.  100 years ago this temperature station was in the countryside, in essentially the open desert - no paving, no buildings, no cars.  So we are getting the highest warming rates in the country by comparing a temperature today in an asphalt parking lot in the middle of a city to a temperature a hundred years ago in the open desert.

The problem with this is what's called the urban heat island effect.   Buildings and concrete absorb heat from the sun during the day, more than would typically be absorbed by raw land in its natural state.  This heat is reradiated at night, causing nights to be warmer in cities than in the areas surrounding them.  If you live in a city, you will likely hear weather reports that predict colder temperatures in outlying areas, or warn of freezes in the countryside but not in the city itself.

It turns out that this urban heat island effect is easily measured -- it even makes a great science fair project!

Click to enlarge

My son and I did this project years ago, attaching a small GPS and temperature probe to a car.  We then drove out of the city center into the country and back in the early evening, when the urban heat island effect should be largest.  We drove out and then back to average out any effects of overall cooling during our testing.  One of the trips is shown above, with around 6 degrees F of temperature change.  We, and most others who have done this in other cities, found between 5 and 10 degrees of warming as one drives into a city at night.
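
For anyone who wants to try this, the data reduction is trivial.  Below is a minimal sketch under assumed file and column names (a drive_log.csv with lat, lon, and temp_f columns; the city-center coordinates are roughly Tucson's): bin the readings by distance from the city center and print the average temperature in each bin.

```python
import csv
import math

CENTER = (32.2226, -110.9747)     # assumed city-center coordinates (roughly Tucson)

def dist_km(lat, lon):
    # Equirectangular approximation -- plenty accurate at city scale.
    dlat = math.radians(lat - CENTER[0])
    dlon = math.radians(lon - CENTER[1]) * math.cos(math.radians(CENTER[0]))
    return 6371.0 * math.hypot(dlat, dlon)

bins = {}                          # whole-km distance bin -> list of readings
with open("drive_log.csv") as f:   # hypothetical log with lat, lon, temp_f columns
    for row in csv.DictReader(f):
        d = int(dist_km(float(row["lat"]), float(row["lon"])))
        bins.setdefault(d, []).append(float(row["temp_f"]))

for d in sorted(bins):
    print(f"{d:3d} km from center: {sum(bins[d]) / len(bins[d]):.1f} F")
```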

If this effect were constant over time, it would not pose too many problems for our purposes here, because we are looking at changes in average temperatures over time, not absolute values.  But the urban heat island warming of a city (and particular temperature stations) increases as the urban area grows larger.   Because this urban warming is many times the global warming signal we are trying to measure, and since most temperature stations are located near growing urban locations, it introduces an important potential bias into measurement.

A number of studies have found that, in fact, we do indeed see more warming historically in thermometers located in urban areas than in those located in rural areas.  Two studies in California have shown much lower warming rates at rural thermometers than at urban ones:

click to enlarge

Click to enlarge

Anthony Watts has been working for years to do this same analysis for the entire US.  In fact, the pictures taken above of the temperature station in Tucson were part of the first phase of his project to document each USHCN site used in the global warming statistics with pictures.  Once he had pictures, he compared the details of the siting with a classification system scientists use to measure the quality of temperature sites, from the best (class 1) to the worst with the most biases (class 5).  He found that perhaps a third of the warming in the official NOAA numbers may come from the introduction of siting biases from bad sites.  Or put another way, the warming at well-sited temperature stations was only about 2/3 of that in the official metric.

Click to enlarge

By the way, this is one other reason why I tend to favor the satellite measurements.  Going back to the numbers we showed in part A, the satellite temperature metric had about 2/3 the trend of the surface temperature reading, or almost exactly what the surface readings would be if this siting bias were eliminated (the absolute values of the trends don't match, because they are for different time periods and different geographies).

Click to enlarge

There is one other aspect of this chart that might have caught your eye -- if some temperature stations are showing 2 degrees of warming and some 3.2 degrees of warming, why is the total 3.2 degrees of warming?  Shouldn't it be somewhere in the middle?

One explanation is that the NOAA and other bodies take the data from these stations and perform a number of data manipulation steps in addition to a straight spatial averaging.   One such step is that they will use a computer process to try to correct temperature stations based on the values from neighboring stations.  The folks that run these indices argue that this computational process overcomes the site bias problem.  Skeptics will argue that this approach is utter madness -- why work to correct a known bad temperature point, why not just eliminate it?  If you have a good compass and a bad compass, you don't somehow mathematically average the results to find north, you throw out the bad one and use the good one.  In short, skeptics argue that this approach does not eliminate the error, it just spreads the error around to all the good stations, smearing the error like peanut butter.  Here is an example from the GISS, using station data that has only been adjusted for Time of Observation changes (TOBS).
Grand_12

This is exactly what we might expect - little warming out in undeveloped nature in Grand Canyon National Park, lots of warming in a large and rapidly growing modern city (yes, the Tucson data is from our favorite temperature station we featured above).  Now, here is the same data after the GISS has adjusted it:

Grand_15

You can see that Tucson has been adjusted down a degree or two, but Grand Canyon has been adjusted up a degree or two (with the earlier mid-century spike adjusted down).  OK, so it makes sense that Tucson has been adjusted down, though there is a very good argument to be made that it should have been adjusted down more, say by at least 3 degrees.  But why does the Grand Canyon need to be adjusted up by about a degree and a half?  What is currently biasing it colder by 1.5 degrees, which is a lot?  One suspects the GISS is doing some sort of averaging, which is bringing the Grand Canyon and Tucson from each end closer to a mean -- they are not eliminating the urban bias from Tucson, they are just spreading it around to other stations in the region.
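
To illustrate the "peanut butter" concern, here is a toy of my own construction -- not NOAA's or GISS's actual homogenization algorithm.  Two synthetic stations share a modest real trend, but one also carries a large spurious urban trend; if the two are blended rather than the biased one being dropped, part of the urban bias migrates into the rural record:

```python
import numpy as np

years = np.arange(1900, 2001)
true_climate = 0.005 * (years - 1900)                     # assumed real trend: 0.5C/century
rng = np.random.default_rng(42)
rural = true_climate + rng.normal(0, 0.1, len(years))
urban = true_climate + 0.02 * (years - 1900) + rng.normal(0, 0.1, len(years))  # +2C/century of UHI bias

# Naive neighbor blending: pull each station 30% of the way toward the other,
# instead of dropping the biased station.
w = 0.3
rural_adj = (1 - w) * rural + w * urban
urban_adj = (1 - w) * urban + w * rural

def trend(series):
    return np.polyfit(years, series, 1)[0] * 100          # degrees per century

print("rural raw trend:     ", round(trend(rural), 2), "C/century")
print("rural blended trend: ", round(trend(rural_adj), 2), "C/century")
print("urban blended trend: ", round(trend(urban_adj), 2), "C/century")
```

Real homogenization algorithms are far more sophisticated than a fixed 30% blend, and their authors argue they detect and remove discontinuities rather than smear them; the toy only shows why skeptics keep asking for the raw-versus-adjusted comparison to be made explicit.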

Temperature Adjustments and Signal-To-Noise Ratio

Nothing is less productive, to my mind, than when skeptics yell the word "fraud!" on the issue of temperature adjustments.  All temperature databases include manual adjustments, even the satellite indices that many skeptics favor.    As mentioned above, satellite measurements have to be adjusted for orbital decay of the satellites just as surface temperature measurements have to be adjusted for changes in the daily time of observation.  We may argue that adjustment methodologies are wrong (as we did above with urban biases).  We may argue that there are serious confirmation biases (nearly every single adjustment to every temperature and sea level and ocean heat database tends to cool the past and warm the present, perhaps reinforced by preconceived notions that we should be seeing a warming signal.)  But I find that charges of fraud just cheapen the debate.

Even if the adjustments are all made with the best of intentions, we are still left with an enormous problem of signal to noise ratio.  It turns out that the signal we are trying to measure -- warming over time -- is roughly equal to the magnitude of the manual adjustments.  In other words, the raw temperature data does not show warming, only the manually adjusted data show warming.  This does not mean the adjusted data is wrong, but it should make us substantially less confident that we are truly measuring the signal in all this noise of adjustment.  Here are two examples, for an individual temperature station and for the entire database as a whole:

Click to enlarge

In this first example, we show the raw data (with Time of Observation adjustments only) in orange, and the final official adjusted version in blue.  The adjustments triple the warming rate for the last century.

Click to enlarge

We can see something similar for the whole US, as raw temperature measurements (this time before time of observation adjustments) actually show a declining temperature trend in the US.  In this case, the entirety of the global warming signal, and more, comes from the manual adjustments.  Do these adjustments (literally thousands and thousands of them) make sense when taken as a whole?  Does it make sense that there was some sort of warming bias in the 1920's that does not exist today?  This is certainly an odd conclusion given that it implies a bias exactly opposite of the urban heat island effect.

We could go into much more detail, but this gives one an idea of why skeptics prefer the satellite measurements to the surface temperature record.  Rather than endlessly working to get these public agencies to release their adjustment details and methodology for third-party validation by the public that pays them (an ongoing effort that still has not been entirely successful), skeptics have simply moved on to a better approach, the satellite record, where the adjustments (to a few satellites) are much easier to manage.

Ultimately, both approaches for seeking a global warming signal are a bit daft.  Why?  Because, according to the IPCC, of all the extra warming absorbed by the surface of the Earth from the greenhouse effect, only about 1% goes into the atmosphere:

 

click to enlarge

Basically, water has a MUCH higher heat carrying capacity than air, and over 90% of any warming should be going into oceans.  We are just starting to get some new tools for measuring the changes to ocean heat content, though the task is hard because we are talking about changes in the thousandths of a degree in the deep oceans.
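
That heat-capacity point can be checked on the back of an envelope.  The figures below are rough textbook values and the arithmetic is my own, not a number taken from the IPCC chart above:

```python
# Rough back-of-the-envelope check: the ocean's total heat capacity is on the
# order of a thousand times that of the atmosphere.
ocean_mass = 1.4e21        # kg, approximate mass of the oceans
atmos_mass = 5.1e18        # kg, approximate mass of the atmosphere
cp_water = 4186.0          # J/(kg K), specific heat of seawater (roughly)
cp_air = 1004.0            # J/(kg K), specific heat of air at constant pressure

ratio = (ocean_mass * cp_water) / (atmos_mass * cp_air)
print(f"ocean heat capacity is roughly {ratio:.0f}x the atmosphere's")
```

The exact number depends on how much of the ocean actually mixes on the relevant timescales, but the order of magnitude is why the roughly-1%-to-the-atmosphere figure above is plausible.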

After this brief digression into the surface temperature records, it is now time to get back to our main line of discussion.  In the next chapter, we will begin to address the all-important attribution question:  Of the warming we have seen in the past, how much is man-made?

Chapter 5, Part A on the question of attributing past warming to man is here.

Denying the Climate Catastrophe: 4a. Actual Temperature Data

This is the fourth chapter of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data (this article);   B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

In our last chapter, we ended a discussion on theoretical future warming rates by saying that no amount of computer modelling was going to help us choose between various temperature sensitivities and thus warming rates.  Only observational data was going to help us determine how the Earth actually responds to increasing CO2 in the atmosphere.  So in this chapter we turn to the next part of our framework: our observations of the Earth's temperature, which is among the data we might use to support or falsify the theory of catastrophic man-made global warming.

click to enlarge

The IPCC position is that the world (since the late 19th century) has warmed about 0.8C.  This is a point on which many skeptics will disagree, though perhaps not as substantially as one might expect from the media.   Most skeptics, myself included, would agree that the world has certainly warmed over the last 100-150 years.  The disagreement tends to be in the exact amount of warming, with many skeptics contending that the amount of warming has been overstated due to problems with temperature measurement and aggregation methodology.

For now, we will leave those issues aside until part B of this section, where we will discuss some of these issues.  One reason to do so is to focus, at least at first, on the basic point of agreement that the Earth has indeed warmed somewhat.  But another reason to put these differences over magnitude aside is that we will find, a few chapters hence, that they essentially don't matter.  Even the IPCC's 0.8C estimate of past warming does not support its own estimates of temperature sensitivity to CO2.

Surface Temperature Record

The most obvious way to measure temperatures on the Earth is with thermometers near the ground.   We have been measuring the temperature at a few select locations for hundreds of years, but it really is only in the last century that we have fairly good coverage of the land surface.  And even then our coverage of places like the Antarctic, central Africa, parts of South America, and all of the oceans (which cover about 70% of the Earth) is even today still spotty.  So coming up with some sort of average temperature for the Earth is not a straight averaging exercise -- data must be infilled and estimated, making the process complicated and subject to a variety of errors.

But the problem is more difficult than just data gaps.  How does one actually average a temperature from Denver with a temperature from San Diego?  While a few folks attempt such a straight average, scientists have developed a theory that one can more easily average what are known as temperature anomalies than one can average the temperature itself.  What is an anomaly?  Essentially, for a given thermometer, researchers will establish an average for that thermometer for a particular day of the year.  The exact time period or even the accuracy of this average is not that important, as long as the same time period is used consistently.  Then, the anomaly for any given measurement is the deviation of the measured temperature from its average.   So if the average historical temperature for this day of the year is 25C and the actual measured temperature for the day is 26C, the anomaly for today at this temperature station is +1.0C.
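
As a concrete sketch of that bookkeeping -- using hypothetical station readings and an assumed base period, purely for illustration:

```python
from collections import defaultdict
from datetime import date

# A handful of (date, temperature in C) readings for one hypothetical station.
readings = [(date(1995, 7, 1), 25.0), (date(1996, 7, 1), 24.5), (date(2015, 7, 1), 26.0)]

baseline_years = range(1981, 2011)            # an assumed base period
sums = defaultdict(lambda: [0.0, 0])          # (month, day) -> [total, count]
for d, t in readings:
    if d.year in baseline_years:
        sums[(d.month, d.day)][0] += t
        sums[(d.month, d.day)][1] += 1

def anomaly(d, t):
    total, count = sums[(d.month, d.day)]
    return t - total / count                  # deviation from that day's baseline average

print(anomaly(date(2015, 7, 1), 26.0))        # +1.25 relative to the two baseline readings
```

Real products use fixed multi-decade base periods and have to handle missing days, station moves, and the like, but the anomaly idea itself is just this subtraction.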

Scientists then develop programs that spatially average these temperature anomalies for the whole Earth, while also adjusting for a myriad of factors, from time-of-day changes in measurement to technology changes over time of the temperature stations to actual changes in the physical location of the measurement.  This is a complicated enough task, with enough explicit choices that must be made about techniques and adjustments, that there are many different temperature metrics floating around out there, many of which get different results from essentially the same data.  The Hadley Center in England's HadCRUT4 global temperature metric is generally considered the gold standard, and is the one used preferentially by the IPCC.  Its metric is shown below, with the monthly temperature anomaly in dark blue and the 5 year moving average (centered on its mid-point):

click to enlarge

Again, the zero point of the chart is arbitrary and merely depends on the period of time chosen as the base or average.  Looking at the moving average, one can see the temperature anomaly bounces around -0.3C in the late 19th century and has been around +0.5C over the last several years, which is how we get to about 0.8C warming.

Satellite Temperature Record

There are other ways to take temperature measurements, however.  Another approach is to use satellites to measure surface temperatures (or at least near-surface temperatures).   Satellites measure temperature by measuring the thermal microwave emissions of oxygen molecules in the lower troposphere (perhaps 0-3 miles above the Earth).  Satellites have the advantage of being able to look at the entire Earth without gaps, and are not subject to the siting biases of surface temperature stations (which will be discussed in part B of this chapter).

The satellite record does, however, rely on a shifting array of satellites all of which have changing orbits for which adjustments must be made.  Of necessity, the satellite record cannot reach as far back into the past.  And the satellites are not actually measuring the temperature of the Earth, but rather a temperature a mile or two up.  Whether that matters is subject to debate, but the clincher for me is that the IPCC and most climate models have always shown that the earliest and strongest anthropogenic warming should show up in exactly this spot -- the lower troposphere -- which makes observation of this zone a particularly good way to look for a global warming signal.

Roy Spencer and John Christy have what is probably the leading satellite temperature metric, called "UAH" as a shorthand for University of Alabama, Huntsville's space science center.  The UAH record looks like this:

click to enlarge

Note that the absolute magnitude of the anomaly isn't comparable between the surface and satellite record, as they use different base periods, but changes and growth rates in the anomalies should be comparable between the two indices.

The first thing to note is that, though they are different, both the satellite and surface temperature records show warming since 1980.  For all that some skeptics may want to criticize the authors of the surface temperature databases, and there are indeed some grounds for criticism, these issues should not distract us from the basic fact that in every temperature record we have (including other technologies like radiosonde balloons), we see recent warming.

In terms of magnitude, the two indices do not show the same amount of warming -- since 1980 the satellite temperature record shows about 30% less warming than does the surface temperature record for the same period.   So which is right?  We will discuss this in more depth in part B, but the question is not made any easier by the fact that the surface records are compiled by prominent alarmist scientists while the satellite records are maintained by prominent skeptic scientists, which causes each side to accuse the other of having its thumb on the scale, so to speak.  I personally like the satellite record because of its larger coverage areas and the fact that its manual adjustments (which are required of both technologies) are for a handful of instruments rather than thousands, and are thus easier to manage and get right.  But I am also increasingly of the opinion that the differences are minor, and that neither is consistent with catastrophic forecasts.

So instead of getting ourselves involved in the dueling temperature data set food fight (we will dip our toe into this in part B), let's instead apply both these data sets to several propositions we see frequently in the media.  We will quickly see the answers we reach do not depend on the data set chosen.

Test #1:  Is Global Warming Accelerating

One meme you will hear all the time is that "global warming is accelerating."  As of today it has about 550,000 results on Google.  For example:

click to enlarge

So.  Is that true?  They can't print it if it's not true, right (lol)?  Let's look first at the satellite record through the end of 2015, when this presentation was put together (there is an El Nino-driven spike in the two months after this chart was made, which does not affect the conclusions that follow in the least, but I will update the chart to include it ASAP).

click to enlarge

If you want a name for this chart, I could call it the "bowl of cherries" because it has become a cherry-picker's delight.   Everyone in the debate can find a starting point and an end point in this jagged data to find any trend they want to find.  So how do we find an objective basis to define end points for this analysis?  Well, my background is more in economic analysis.  Economists have the same problem in looking at trends for things like employment or productivity because there is a business cycle that adds volatility to these numbers above and beyond any long term trend.  One way they manage this is to measure variables from peak to peak of the economic cycle.

I have done something similar.  The equivalent cyclical peaks in the temperature world are probably the very strong El Nino (ENSO) events.  There was one in 1998 and there is one occurring right now in late 2015/early 2016.  So I defined my period as 18 years from peak to peak.  By this timing, the satellite record shows temperatures to be virtually dead flat for those 18 years.  This is "the pause" that you may have heard of in climate debates.   Such an extended pause is not predicted by global warming theory, particularly when the theory (as in the IPCC main case) assumes high temperature sensitivities to CO2 and low natural variation in temperatures.
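
The peak-to-peak comparison itself is just two least-squares fits.  Here is a minimal sketch, assuming a two-column text file of monthly anomalies (a hypothetical file name and layout, not the actual UAH distribution format):

```python
import numpy as np

# Assumed layout: one "decimal_year  anomaly_C" pair per line, e.g. "1998.04  0.48".
data = np.loadtxt("uah_monthly_anomalies.txt")         # hypothetical file name
year, anom = data[:, 0], data[:, 1]

def trend_per_century(t0, t1):
    mask = (year >= t0) & (year < t1)
    slope = np.polyfit(year[mask], anom[mask], 1)[0]   # degrees C per year
    return slope * 100

print(f"1980-1998 trend: {trend_per_century(1980, 1998):.2f} C/century")
print(f"1998-2016 trend: {trend_per_century(1998, 2016):.2f} C/century")
```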

So if global warming were indeed accelerating, we would expect the warming rate over the last 18 years to be higher than the rate over the previous 18 years.  But just the opposite is true:

click to enlarge

While "the pause" does not in and of itself disprove the theory of catastrophic manmade global warming, it does easily falsify the myriad statements you see that global warming is accelerating.  At least for the last 20 years, it has been decelerating.

By the way, this is not somehow an artifact of just the satellite record.  This is what the surface record looks like for the same periods:

click to enlarge

Though it shows (as we discussed earlier) higher overall warming rates, the surface temperature record also shows a deceleration rather than acceleration over the last 20 years.

 

Test #2:  Are Temperatures Rising Faster than Expected

OK, let's consider another common meme, that the "earth is warming faster than predicted."

click to enlarge

Again, there are over 500,000 Google matches for this meme.  So how do we test it?  Well, certainly not against the last IPCC forecasts -- they are only a few years old.  The first real high-sensitivity or catastrophic forecast we have is from James Hansen, often called the father of global warming.

click to enlarge

In June of 1988, Hansen made a seminal presentation to Congress on global warming, including this very chart (sorry for the sucky 1980's graphics).  In his testimony, he presented his models for the Earth's temperature, which showed a good fit with history**.  Using his model, he then created three forecasts:  Scenario A, with high rates of CO2 emissions;  Scenario B, with more modest emissions; and Scenario C, with drastic worldwide emissions cuts (plus volcanoes, which tend to belch dust and chemicals that have a cooling effect).  Surprisingly, we can't even get agreement today about which forecast for CO2 production was closer to the mark (throwing in the volcanoes makes things hard to parse) but it is pretty clear that over the 30 years after this forecast, the Earth's CO2 output has been somewhere between A and B.

click to enlarge

As it turns out, it doesn't matter whether we actually followed the CO2 emissions from A or B.  The warming forecasts for scenario A and B turn out to be remarkably similar.  In the past, I used to just overlay temperature actuals onto Hansen's chart, but it is a little hard to get the zero point right and it led to too many food fights.  So let's pull the scenario A and B forecasts off the chart and compare them a different way.

click to enlarge

The left side of the chart shows Hansen's scenarios A and B, scanned right from his chart.  Scenario A implies a warming rate from 1986 to 2016 of 3.1C per century.  Scenario B is almost as high, at 2.8C per century.  But as you can see on the right, the actual warming rates we have seen over the same period are well below these forecasts.  The surface temperature record shows only about half the warming, and the satellite record shows only about a third the warming, that Hansen predicted.   There is no justification for saying that recent warming rates have been higher than expected or forecast -- in fact, the exact opposite has been true.
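
The per-century figures above are simple endpoint arithmetic: a temperature change over the 30 years from 1986 to 2016, scaled by 100/30.  The deltas below are just the quoted rates converted back, shown only to make the conversion explicit:

```python
def rate_per_century(delta_c, years=30):
    # Convert a temperature change over `years` years into a per-century rate.
    return delta_c / years * 100

print(rate_per_century(0.93))   # ~3.1 C/century, the scenario A figure quoted above
print(rate_per_century(0.84))   # ~2.8 C/century, the scenario B figure quoted above
```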

We see the same thing when looking at past IPCC forecasts.  At each of its every-five-year assessments, the IPCC has included a forecast range for future temperatures.  In this case, though, we don't have to create a comparison with actuals because the most recent (5th) IPCC Assessment did it for us:

click to enlarge

The colored bands are their past forecasts.  The grey areas are the error bands on the forecast.  The black dots are global temperatures (which actually are shown with error bars, which is good practice but seldom done except perhaps when they are trying to stretch to get into the forecast range).  As you can see, temperatures have been so far below forecasts that they are dropping out of the low end of even the most generous forecast bands.  If temperatures were rising faster than expected, the black dots would be above the orange and yellow bands.  We therefore have to come to the conclusion that, at least for the last 20-30 years, temperatures have not been rising faster than expected, they have been rising slower than expected.

Day vs. Night

There is one other phenomenon we can see in the temperature data that we will come back to in later chapters:  that much of the warming over the last century has been at night, rather than in the daytime.   There are two possible explanations for this.  The first is that most anthropogenic warming models predict more night time warming than they do day time warming.  The other possibility is that a portion of the warming in the 20th century temperature record is actually spurious bias from the urban heat island effect due to siting of temperature stations near cities, since urban heat island warming shows up mainly at night.  We will discuss the latter effect in part B of this chapter.

Whatever the cause, much of the warming we have seen has occurred at night, rather than during the day.  Here is a great example from the Amherst, MA temperature station (Amherst was the first location where I gave this presentation, if that seems an odd choice).

Click to enlarge

As you can see, the warming rate since 1945 is 5 times higher at night than during the day.  This directly affects average temperatures since daily average temperature for a location in the historic record is the simple average of the daily high and daily low.  Yes, I know that this is not exactly accurate, but given technology in the past, this is the best that could be done.
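
Here is a tiny worked example -- illustrative numbers, not Amherst data -- of why nighttime-only warming still moves the recorded average:

```python
# Illustrative numbers only -- the point is the (Tmax + Tmin) / 2 bookkeeping.
then_high, then_low = 30.0, 15.0      # a hypothetical day decades ago, in C
now_high, now_low = 30.2, 16.0        # daytime high nearly flat, night 1C warmer

print((then_high + then_low) / 2)     # 22.5
print((now_high + now_low) / 2)       # 23.1 -> +0.6C in the recorded "average"
```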

The news media likes to cite examples of heat waves and high temperature records as a "proof" of global warming.   We will discuss this later, but this is obviously a logical fallacy -- one can't prove a trend in noisy data simply by citing isolated data points in one tail of the distribution.  But it is also fallacious for another reason -- we are not actually seeing any upwards trends in high temperature records, at least for daytime highs:

Click to enlarge

To get this chart, we obviously have to eliminate newer temperature stations from the data set -- any temperature station that is only 20 years old will have all of its all time records in the last 20 years (you would be surprised at how many otherwise reputable scientists miss simple things like this).  Looking at just the temperature stations in the US we have a long record for, we see with the black line that there is really no upwards trend in the number of high temperature records (Tmax) being set.   The 1930s were brutally hot, and if not for some manual adjustments we will discuss in part B of this section, they would likely still show as the hottest recent era for the US.   It turns out, with the grey line (Tmin), that while there is still no upward trend, we are actually seeing more high temperature records being set with daily lows (the highest low, as it were) than we are with daily highs.  The media is, essentially, looking in the wrong place, but I sympathize because a) broiling hot daytime highs are sexier and b) it is brutally hard to talk about highest low temperatures without being confusing as hell.
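
The station-age caveat is easy to get wrong, so here is a minimal sketch (hypothetical data and an arbitrary 80-year cutoff) of the filter one has to apply before counting all-time records:

```python
from collections import Counter

# station_id -> {year: annual maximum temperature, F} -- hypothetical values
stations = {
    "long_record": {1900 + i: 96 + (i * 7) % 11 for i in range(116)},   # 1900-2015
    "new_station": {2000 + i: 100 + i for i in range(16)},              # opened in 2000
}

MIN_YEARS = 80                        # an assumed cutoff for a "long record" station
records_set = Counter()
for series in stations.values():
    if len(series) < MIN_YEARS:
        continue                      # a 16-year-old station only has "recent" records by definition
    best = float("-inf")
    for year in sorted(series):
        if series[year] > best:       # a new all-time high for this station
            best = series[year]
            records_set[year] += 1

print(sorted(records_set.items()))    # years in which qualifying all-time highs were set
```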

In our next chapter, or really part B of this chapter, we will discuss some of the issues that may be leading the surface temperature record to be exaggerated, or at least inaccurate.

Chapter 4, Part B on problems with the surface temperature record continues here.

If you want to skip Part B, and get right on with the main line of the argument, you can go straight to Chapter 5, part A, which starts in on the question of how much of past warming can be attributed to man.

 

** Footnote:  The history of Wall Street is full of bankrupt people whose models exactly matched history.  I have done financial and economic modeling for decades, and it is surprisingly easy to force multi-variable models to match history.  The real test is how well the model works going forward.  Both Hansen's 1988 models and the IPCC's many models do an awesome job matching history, but quickly go off the rails in future years.  I am reminded of a simple but famous example of the perfect past correlation between certain NFL outcomes and Presidential election outcomes.   This NFL model of presidential elections perfectly matches history, but one would be utterly mad to bet future elections based on it.

Denying the Climate Catastrophe: 3. Feedbacks

This is the third chapter of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks (this article)
  4.  A)  Actual Temperature Data;   B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

We ended the last chapter on the greenhouse gas theory with this:

So whence comes the catastrophe?  As mentioned in the introduction, the catastrophe comes from a second, independent theory that the Earth's climate system is dominated by strong positive feedbacks that multiply greenhouse warming many times into a catastrophe.

Slide15

In this chapter, we will discuss this second, independent theory:  that the Earth's climate system is dominated by positive feedbacks.  I suppose the first question is, "What do we mean by feedback?"

Slide16

In a strict sense, feedback is the connection of the output of a system to its input, creating a process that is circular:  A system creates an output based on some initial input, that output changes the system's input, which then changes its output, which then in turn changes its input, etc.

Typically, there are two types of feedback:  negative and positive.  Negative feedback is a bit like the ball in the trough in the illustration above.  If we tap the ball, it moves, but that movement creates new forces (e.g. gravity and the walls of the trough) that tend to send the ball back where it started.  Negative feedback tends to attenuate any input to a system -- meaning that for any given push on the system, the output will end up being less than one might have expected from the push.

Positive feedback is more like the ball sitting on top of the hill.   Even a small tap will send it rolling very far away, because the shape of the hill and gravity tend to push the ball even further in the direction of the tap.  Positive feedback amplifies or multiplies any input to a system, meaning that even small pushes can lead to very large results.

The climate temperature system has a mix of positive and negative feedbacks.

For example, consider cumulus clouds.  If the Earth warms, more water tends to evaporate from the oceans, and some of that water will form big fluffy white clouds.  These clouds act as an umbrella for the Earth, reflecting heat back into space.  So as more clouds form due to warming, there is a net new cooling effect that offsets some of the original warming.  The amount of warming we might have expected is smaller due to the negative feedback of cloud formation.

On the other side, consider ice and snow.  Ice and snow reflect sunlight back into space and keep the Earth cooler than it would be without the ice and snow cover.  As the world warms, ice and snow will melt and thus reflect less sunlight back into space, having the effect of warming the Earth even more.  So an initial warming leads to more warming, amplifying the effect of the initial warming.

Since we know both types of feedback exist, what we care about is the net effect -- does negative or positive feedback dominate?  In every catastrophic forecast you have seen for global warming, in nearly every climate model the IPCC uses, the authors have assumed that the climate is dominated by strong positive feedbacks that multiply incremental warming from greenhouse gasses many times.

This is the result:

Slide17

As a reminder, the green line is the warming from increases in atmospheric CO2 concentration solely from the greenhouse gas effect, without any feedbacks taken into account.  It represents a warming rate of about 1.2C per doubling of CO2 concentrations, a figure with which I and many (or most) science-based skeptics agree.  The other lines, then, are a variety of forecasts for warming after feedbacks are taken into account.  You can see that all these forecasts assume positive feedback, as the effect is multiplicative of the initial greenhouse gas warming (the pink, purple, and orange lines are approximately 3x, 5x, and 10x the green line, implying very high levels of positive feedback).

The pink line is the mean forecast from the 4th IPCC, implying a temperature sensitivity to CO2 of about 3C.  The purple line is the high end of the IPCC forecast band, implying a temperature sensitivity of 5C.  And the highest is not from a mathematical model per se, but from the mouth of Bill McKibben (sorry for the misspelling in the chart) who has on several occasions threatened that we could see as much as 10C of warming from CO2 by the end of the century.
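
For readers who want the arithmetic behind those multipliers, the standard linear-feedback relationship divides the no-feedback warming by one minus the net feedback fraction f.  This is my own gloss -- the chart itself just shows the resulting curves -- and the f values below are chosen to roughly reproduce the multipliers discussed above, not taken from any published model:

```python
# Warming per doubling = no-feedback warming / (1 - f), where f is the net
# feedback fraction.  The f values below are illustrative, chosen only to
# roughly reproduce the multipliers discussed above.
no_feedback = 1.2   # C per doubling of CO2, the generally agreed direct effect

for f in (-0.5, 0.0, 0.6, 0.75, 0.88):
    print(f"f = {f:+.2f}  ->  {no_feedback / (1 - f):.1f} C per doubling")
```

Note how nonlinear this is: moving from roughly 3C to 10C per doubling requires pushing f only from about 0.6 to about 0.9, which is one reason modest disagreements about feedbacks translate into enormous disagreements about forecasts.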

Skeptics have pointed out a myriad of issues with the climate computer models that develop these forecasts, but I will leave those aside for now.  Suffice it to say that the models exclude many important aspects of the climate and are subject to hand tuning that allows modellers to produce pretty much any output they like.

But I do want to say a few words about computer models and scientific proof.  Despite what you will hear from the media, and even from the mouths of prominent alarmist scientists, computer models do not and cannot constitute "proof" of any sort.  Computer models are merely tools we use to derive the predicted values of physical parameters from complex hypotheses.  They are no different than the pen and paper computations an 18th century researcher might have made for the position of Saturn from Newton's celestial mechanics equations.  The "proof" comes when we take these predicted values and compare them against actual measurements over time and find that they are or are not accurate predictions.  Newton's laws were proved as his equations' outputs for Saturn's position were compared to Saturn's actual measured position (and in fact they were disproved, to a small extent, when Mercury's position did not accurately match and Einstein had to fix things a bit).  Similarly, hypotheses about global warming will be proved or disproved when the predictions of various models are compared to actual temperatures.

So we can't really get much further until we get to actual observations of the climate, which we will address in the next several chapters.  But I want to make sure that the two-part theory that leads to catastrophic global warming is clear.

This is the portion of the warming due to greenhouse gas theory:

Slide18

As you can see, the portion due to greenhouse gas theory is relatively small and likely not catastrophic.  The catastrophe comes from the second independent theory that the Earth's climate system is dominated by strong  (very strong!) positive feedbacks.

 

Slide19

It is the positive feedback that causes the catastrophe, not greenhouse gas theory.  So in debating catastrophic man-made global warming theory, we should be spending most of our time debating the theory that the climate is dominated by strong positive feedbacks, rather than debating the greenhouse gas theory.

But in fact, this does not happen in the mainstream media.  If you are an average consumer of climate news, I will bet you have never heard a discussion in the media about this second theory.

And this second theory is far from settled.  If on the "settled" scale from 1-10, greenhouse gas theory is an 8 or 9, this theory of strong positive feedbacks dominating the climate is about a 2.   In fact, there is plenty of evidence that not only are scientists estimating feedbacks incorrectly, but that they don't even have the sign right and that net feedbacks may be negative.

This is a bit hard to communicate to a layman, but the positive feedbacks assumed by the most alarmist and catastrophic climate forecasts are very, very high.  Way higher than one might expect in advance upon encountering a new system.  This assumption of strong positive feedbacks is one that might even offend the sensibilities of the natural scientist.  Natural systems that are long-term stable (and certainly for all its variation the climate system has remained in a pretty narrow range for millions and millions of years) are typically not dominated by positive feedbacks, they are dominated by negative feedbacks.

If in fact our climate temperature system is dominated by negative feedbacks, the future warming forecast would actually be below the green line:

 

Slide20

OK, without getting in and criticizing the details of these models (which would by the way be a pointless whack-a-mole game because there are dozens of them) the best way to assess the validity of these various forecasts is to now consult actual observations.  Which we will begin to do in our next chapter, part 4a on actual temperature measurements.

Hey! Don't Overreact to That Issue, Overreact to My Issue

Nicholas Kristof urges us not to exaggerate or overreact to the risk of terrorism based on a few high-profile but isolated and nearly-impossible-to-control events, particularly since there is no upward trend in terrorism deaths.

He urges us instead to exaggerate and overreact to the risk of catastrophic man-made climate change based on a few high-profile but isolated and nearly-impossible-to-control weather events for which data show there is no actual upward trend (e.g. hurricanes, tornadoes, droughts, heat waves, etc).

Everyone today seems to be trying to stampede everyone else into some kind of fear based on overblown risks, whether it be to terrorism or climate change or immigrant-related crime or vaccine-caused autism or, uh, whatever is supposed to be bad that is caused by GMO's.  It is all a quest for power.  They hope that fear will cause you to write them a blank check for exercising power over you.  Don't give it to them.

Denying the Climate Catastrophe: 2. Greenhouse Gas Theory

Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory (this article)
  3. Feedbacks
  4.  A)  Actual Temperature Data; B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming;  A) Arguments for it being Man-Made;  B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

We continue our multi-part series on the theory of catastrophic man-made global warming by returning to our framework we introduced in the last chapter.

Click to Enlarge

In the introduction, we discussed how catastrophic man-made global warming theory was actually made up of two independent parts.  In this section, we will discuss the first of these two parts, the greenhouse gas effect, which is the box in the upper left of our framework.

For those unfamiliar with exactly what the greenhouse effect is, I encourage you to check out this very short primer.  Essentially, certain gasses in the atmosphere can absorb some of the heat the Earth is radiating into space, and re-radiate some of this heat back to Earth.  These are called greenhouse gasses.  Water vapor is a relatively strong greenhouse gas, while CO2 is actually a relatively weak greenhouse gas.

It may come as a surprise to those who only know of skeptics' arguments from reading their opponents (rather than the skeptics themselves), but most prominent skeptics accept the theory of greenhouse gas warming.  Of course there are exceptions, including a couple of trolls who like to get attention in the comments section of this and other blogs, and including a few prominent politicians and talk-show hosts.  But there are also environmental alarmists on the other side who have signed petitions to ban dihydrogen monoxide.  It is always tempting, but seldom intellectually rewarding, to judge a particular position by its least capable defenders.

There is simply too much evidence both from our and other planets (as well as simple experiments in a laboratory) to deny that greenhouse gasses in the atmosphere have a warming effect on planets, and that CO2 is such a greenhouse gas.   What follows in the rest of this section represents something of a consensus of people on both sides of the debate.

To investigate the effect of CO2 on Earth's temperature, we are going to use this chart:

Click to Enlarge

On the X axis is the atmospheric concentration of CO2 in parts-per-million (ppm).  Frequently, forecasts of CO2 warming are shown as a relationship over time.  I prefer this view, because it separates the assumption of CO2 emissions rates from assumptions about the sensitivity of temperatures to CO2.

Note that the concentrations we are talking about are remarkably small.  Currently the Earth is just over 400 ppm, which is 0.04%.  Only one in 2500 molecules in air is CO2.

Click to Enlarge

On the Y-axis we then have the incremental warming we might see, on average, across the surface of the Earth from increased concentrations of CO2.  Unless I point it out explicitly, we will use Celsius throughout this and later chapters.

What we now want to do is graph the relationship between the concentrations of CO2 in the atmosphere and the temperature increase of the Earth.  We will use 400 ppm and 0C increase as our starting points.  For now (and we will come back to this assumption) we will look at just the direct effect of warming from the greenhouse gas effect of CO2 and leave out any other complicated, 2nd order interactions with the Earth and its climate.

The estimate I will use comes from Dr. Michael Mann and was first cited in the early IPCC reports.  A quick note on the IPCC -- the IPCC is a body that meets every 5 years or so under the auspices of the United Nations to try to summarize the current state of climate science.  Many skeptics, including myself, would argue that the IPCC process is flawed and overly politicized, but as much as possible in this series I will try to use the IPCC position, making it explicit when I differ.  But what follows is very much IPCC canon.  In fact, I like using Michael Mann's work here because, as author of the hockey stick, he is a vocal and prominent advocate on the alarmist end of the debate and certainly not in the tank for the skeptic side.

Click to Enlarge

The relationship is shown in the equation at the top (where delta T is the temperature increase and c is the atmospheric concentration in ppm).  I have graphed the equation in green because most of us do not have a good intuition for what this equation might look like.

The first thing you might note is that the line is curved, and represents a diminishing return relationship, which means that each incremental molecule of CO2 in the atmosphere has less warming effect than the last (see my short presentation on the greenhouse gas effect here).  Thus a constant rate of growth in CO2 concentrations would yield a slowing growth rate in temperatures.  This is a well-understood relationship, so much so that the sensitivity of temperature to CO2 is generally written not as degrees per ppm but as degrees per doubling of CO2 levels.  This means that the increase from 400-800 ppm would be expected to have about the same impact on temperature as the increase from 800 to 1600 ppm.
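
Here is a minimal sketch of that diminishing-returns curve.  I am using the commonly cited simplified forcing expression of about 5.35 * ln(C/C0) W/m^2 with a no-feedback response of roughly 0.3C per W/m^2; this is my own stand-in, not necessarily the exact equation plotted in the chart, but it reproduces the roughly 1.2C-per-doubling behavior:

```python
import math

C0 = 400.0   # assumed starting concentration in ppm (roughly today's level)

def direct_warming(c_ppm):
    # No-feedback warming: ~0.3 C per W/m^2 times ~5.35 * ln(C/C0) W/m^2 of forcing.
    return 0.3 * 5.35 * math.log(c_ppm / C0)

for c in (500, 600, 800, 1600):
    print(f"{c:4d} ppm -> +{direct_warming(c):.2f} C (direct effect only)")
```

Plugging in mid-range end-of-century concentrations (say 600-700 ppm) gives direct warming on the order of a degree, consistent with the roughly 1C midpoint discussed below.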

Of course, without any sense of CO2 growth rates, it's hard to relate this line to our lives.  So as a next step, we will overlay some forecasts for the atmospheric levels of CO2 by 2100. [As an aside, there is a group of skeptics that think that most CO2 increases are coming from warming itself, flipping the arrow of causality, rather than from man.  There is some evidence for this proposition in ice core analysis, but I will leave it aside and for our purposes assume most CO2 increases in this century are coming from hydrocarbon combustion].

Though I think that their forecasts are exaggerated, I have taken the UN IPCC's 4 most likely CO2 cases for the year 2100 and overlaid them on the chart below:

 

Click to Enlarge

Taking the midpoint of these forecasts, we arrive at about 1C of warming between now and the end of the century.

So now, if you are paying attention, you may be ready to call bullsh*t on me.  Coyote, you say, every catastrophic forecast I have ever seen in the media is for WAY more than 1C of warming!  Bill McKibben says it's going to be 10 degrees of warming (and if you can't trust Harvard journalism majors on scientific issues, who can you trust?)  You are obviously lying, you evil denier.

Actually, not.  Everything in this chapter has been pretty much canon in the global warming world.  The direct, first order contribution of CO2 via the greenhouse effect is expected to be around a degree over the next century.  So whence comes the catastrophe?  As mentioned in the introduction, the catastrophe comes from a second, independent theory that the Earth's climate system is dominated by strong positive feedbacks that multiply greenhouse warming many times into a catastrophe.

If you have never heard of this second theory, don't be surprised.  In many years of reading press articles on global warming, I can't remember one that adequately explained the two-part nature of the theory that is embedded in most global warming forecasts and climate models.  But, perhaps not coincidentally, it is this second theory with which we skeptics have the most issues.  We will take this up in our next installment.

Part 3 on feedbacks can be found here.

Warren Meyer Climate Presentation at Claremont-McKenna Athenaeum

Denying the Climate Catastrophe: 1. Introduction

Last month I outlined my position on global warming to a fabulous audience at the Athenaeum at Claremont-McKenna College.  In doing so, I had a chance to substantially update my presentation materials.  I realized that it had been years since I had posted this presentation as anything but a video, and so I embark over the next several weeks to lay my position out in a multi-part written series.

Table of Contents (updated as new chapters are added)

  1. Introduction (this article)
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data; B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming;  A) Arguments for it being Man-Made;  B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

click to enlarge

I suppose the first question I need to answer is:  why should you bother reading this?  We are told that the science is "settled" and that there is a 97% consensus among scientists on .... something.  Aren't you the reader just giving excess credence to someone who is "anti-science" just by reading this?

Well, this notion that the "debate is over" is one of those statements that is both true and not true.  There is something approaching scientific consensus for certain parts of anthropogenic global warming theory -- for example, the fact that CO2 is a greenhouse gas and that concentrations of it in the atmosphere have a warming effect on the Earth is pretty much undisputed in all but the furthest reaches of the scientific community.

But it turns out that other propositions that are important to the debate on man-made global warming are far less understood scientifically, and the near certainty on a few issues (like the existence of the greenhouse gas effect) is often used to mask real questions about these other propositions.  So before we go any further, it is critical for us to get very clear on exactly what proposition we are discussing.

At this point I have to tell a story from over thirty years ago when I saw Ayn Rand speak at Northeastern University (it's hard to imagine any university today actually allowing Rand on campus, but that is another story).  In the Q&A period at the end, a woman asked Rand, "Why don't you believe in housewives?" and Rand answered, in a very snarky fashion, "I did not know housewives were a matter of belief."   What the woman likely meant to ask was "Why don't you believe that being a housewife is a valid occupation for a woman?"  But Rand was a bear for precision in language and was not going to agree or disagree with a poorly worded proposition.

I am always reminded of this story when someone calls me a climate denier.  I want to respond, in Rand's Russian accent, "I did not know that climate was a matter of belief?"

But rather than being snarky here, let's try to reword the "climate denier" label and see if we can get to a proposition with which I can agree or disagree.

Am I, perhaps, a "climate change denier?"  Well, no.  I don't know anyone who is.  The world has had warm periods and ice ages.  The climate changes.

OK, am I a "man-made climate change denier?"  No again.  I know very few people, except perhaps for a few skeptics of the talk-show-host variety, who totally deny any impact of man's actions on climate.  Every prominent skeptic I can think of acknowledges multiple vectors of impact by man on climate, from greenhouse gas emissions to land use.

If you have to slap a label on me, I am a "catastrophic man-made climate change denier."  I deny the catastrophe.   Really, I would prefer "catastrophic man-made global warming denier" because there is no mechanism by which man's CO2 emissions can affect climate except through the intermediate step of warming.   The name change from "global warming" to "climate change" was, to my mind, less about science and more about a marketing effort to deal with the fact that temperatures had plateaued over the last 10-20 years and to allow activists to point to tail-of-the-distribution weather events and call them man-made.    But we get ahead of ourselves.  We will discuss all of this in later sections.

In this series I will therefore be discussing what I will call the "Catastrophic Man-Made Global Warming Theory."   There are a lot of moving parts to this theory, so I will use the following framework as a structure for my discussion.

click to enlarge

This framework follows the work of the UN IPCC, an international panel that meets every 5 years or so to summarize the state of climate science in general and catastrophic man-made global warming in particular.  While I will obviously disagree with the IPCC canon from time to time, I will try to always point out when I do so.  However, I don't think any climate scientists would argue with the framework I am using here to describe their theory.

The first thing you will see, and perhaps the most important single point you should take away from this discussion, is that the core theory of catastrophic man-made global warming is actually a two-part theory.  In part one, which is essentially greenhouse gas theory, a doubling of CO2 warms the Earth by a bit over 1 degree Celsius.  But there is a second part of the theory, a theory that is entirely unrelated to greenhouse gas theory.  That theory states that the Earth's climate systems are dominated by positive feedbacks which multiply the initial warming from CO2 by 3-5 times or more.

It is this two-part theory that causes me, and many other skeptics, the most frustration in the climate debate.  For when advocates say the science is "settled," they really mean that greenhouse gas theory is pretty well accepted.  But this is only one part of a two-part theory, and in fact the catastrophe actually comes from the second theory, the theory that the climate is dominated by positive feedbacks, and this second theory is far from settled.  But again, I get ahead of myself, we will cover this all in great depth in later sections.

No theory in science has any meaning until it is confirmed by observations, so the bottom half of our framework deals with observational evidence for the theory.   The IPCC claims that the Earth has warmed about 0.8C over the last century, and that [much/substantial/most/all/more than 100%] of this warming is due to man.  The IPCC and many scientists have played with the wording of the amount of warming attributable to man over the years, and rather than deal with that complexity here, we will wait until we get to that section.  But it is fair to say that IPCC canon is that man's contribution to the warming is probably not less than half and could be more than 100%.

Finally, on the right of our framework, this man-made warming has the potential to cause all sorts of changes -- to weather patterns, to animal species, to disease vectors -- you name it.  Pick any possible negative effect -- more hurricanes, more tornadoes, more heat waves, more snow, less snow, lower crop yields, more malaria, more rain, less rain, more terrorism, rising sea levels, displaced persons, more acne, etc. etc. -- and someone has been quoted in the media claiming the link to warming.  When something bad happened in Medieval Europe, it was typically blamed on Jews or marginalized women (via witchcraft accusations).   Today, global warming is the new all-purpose target of blame.

Over many installments and several weeks, I hope to walk through this framework and discuss the state of the science (for those who can't wait, I wrote a much shorter overview here several years ago).  We will discuss parts of the science that are well-grounded -- such as man-made warming from greenhouse gas theory and the fact that the Earth has warmed over the last century.  We will discuss parts of the science I consider exaggerated -- such as the claim of large positive feedback multipliers of future warming and attribution of all past warming to man.  And we will discuss parts of the theory which, despite constant repetition in the press, have absolutely no evidence behind them whatsoever -- such as the claim that we are already seeing negative effects from warming such as more hurricanes and tornadoes.

Part 2 on the greenhouse gas theory continues here.

Coyote's Bi-Partisan Climate Plan -- A Climate Skeptic Calls For a Carbon Tax

While I am not deeply worried about man-made climate change, I am appalled at all the absolutely stupid, counter-productive things the government has implemented in the name of climate change, all of which have costly distorting effects on the economy while doing extremely little to affect man-made greenhouse gas production.

Even when government programs do likely have an impact on CO2, they are seldom managed intelligently.  For example, the government subsidizes solar panel installations, presumably to reduce their cost to consumers, but then imposes duties on imported panels to raise their price (indicating that the program has become more of a crony subsidy for US solar panel makers, which is typical of these types of government interventions).  Obama's coal power plan, also known as his war on coal, will certainly reduce some CO2 from electricity generation but at a very high cost to consumers and industries.  Steps like this are taken without any idea of whether this is the lowest cost approach to reducing CO2 production -- likely it is not, given the arbitrary aspects of the program.

For years I have opposed steps like a Federal carbon tax or cap and trade system because I believe (and still believe) them to be unnecessary given the modest amount of man-made warming I expect over the next century.  I would expect to see about one degree C of man-made warming between now and 2100, and believe most of the cries that "we are already seeing catastrophic climate changes" are in fact panics driven by normal natural variation (most supposed trends, say in hurricanes or tornadoes or heat waves, can't actually be found when one looks at the official data).

But I am exhausted with all the stupid, costly, crony legislation that passes in the name of climate change action.   I am convinced there is a better approach that will have more impact on man-made CO2 and simultaneously will benefit the economy vs. our current starting point.  So here goes:

The Plan

Point 1: Impose a Federal carbon tax on fuel.

I am open to a range of actual tax amounts, as long as point 2 below is also part of the plan.  Something that prices CO2 between $25 and $45 a ton seems to match the mainstream estimates out there of the social costs of CO2.  I think methane is a rounding error, but one could make an adjustment to the natural gas tax numbers to take into account methane leakage in the production chain.   I am even open to making the tax zero on biofuels, given that these fuels are recycling carbon from the atmosphere.
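For a sense of scale, here is a back-of-the-envelope sketch of what a price in that range means at the pump and at the meter.  The $30/ton figure is just an illustrative point in the $25-$45 range, and the emission factors are approximate, widely published values:

```python
# Rough conversion of a carbon price into familiar consumer units.
# Approximate emission factors (assumptions for illustration):
#   gasoline:               ~8.9 kg CO2 per gallon
#   coal-fired electricity: ~1.0 kg CO2 per kWh
PRICE_PER_TON = 30.0   # dollars per metric ton of CO2 (illustrative)
KG_PER_TON = 1000.0

def surcharge(kg_co2_per_unit):
    """Tax per unit of fuel or electricity implied by the carbon price."""
    return PRICE_PER_TON * kg_co2_per_unit / KG_PER_TON

print(f"Gasoline:   ~${surcharge(8.9):.2f} per gallon")          # about $0.27
print(f"Coal power: ~{surcharge(1.0) * 100:.0f} cents per kWh")  # about 3 cents
```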

A Pigovian tax on carbon in fuels is going to be the most efficient possible way to reduce CO2 production.   What is the best way to reduce CO2 -- by substituting gas for coal?   by more conservation?  by solar, or wind?  with biofuels?  With a carbon tax, we don't have to figure it out.  Different approaches will be tested in the marketplace.  Cap and trade could theoretically do the same thing, but while this worked well in some niche markets (like SO2 emissions), it has not worked at all in European markets for CO2.   There have just been too many opportunities for cronyism, too much weird accounting for things like offsets that is hard to do well, and too much temptation to pick winners and losers.

Point 2:  Offset 100% of carbon tax proceeds against the payroll tax

Yes, there are likely many politicians, given their incentives, who would love a big new pool of money they could use to send largess, from more health care spending to more aircraft carriers, to their favored constituent groups.  But we simply are not going to get Conservatives (and libertarians) on board for a net tax increase, particularly one to address an issue they may not agree is an issue at all.  So our plan will use carbon tax revenues to reduce other Federal taxes.

I think the best choice would be to reduce the payroll tax.  Why?  First, the carbon tax will necessarily be regressive (as are most consumption taxes) and the most regressive other major Federal tax we have is the payroll tax.  Offsetting income taxes would likely be a non-starter on the Left, as no matter how one structures the tax reduction the rich would get most of it since they pay most of the income taxes.

There is another benefit of reducing the payroll tax -- it would mean that we are replacing a consumption tax on labor with a consumption tax on fuel. It is always dangerous to make gut-feel assessments of complex systems like the economy, but my sense is that this swap might even have net benefits for the economy -- i.e., we might want to do it even if there were no such thing as greenhouse gas warming.  In theory, labor and fuel are economically equivalent in that they are both production raw materials. But in practice, they are treated entirely differently by the public.   Few people care about the full productive employment of our underground fuel reserves, but nearly everybody cares about the full productive employment of our labor force.   After all, for most people, the primary single metric of economic health is the unemployment rate.  So replacing a disincentive to hire with a disincentive to use fuel could well be popular.

Point 3:  Eliminate all the stupid stuff

Oddly enough, this might be the hardest part politically because every subsidy, no matter how idiotic, has a hard core of beneficiaries who will defend it to the death -- this is the concentrated-benefits, dispersed-costs phenomenon that makes it hard to change many government programs.  But nevertheless I propose that we eliminate all the current Federal subsidies, mandates, and prohibitions that have been justified by climate change. Ethanol rules and mandates, solar subsidies, wind subsidies, EV subsidies, targeted technology investments, coal plant bans, pipeline bans, drilling bans -- it all should go.  The carbon tax does the work.

States can continue to do whatever they want -- we don't need the Feds to step on states any more than they do already, and I continue to like the 50 state laboratory concept.  If California wants to continue to subsidize wind generators, let them do it.  That is between the state and its taxpayers (and for those who think the California legislature is crazy, that is what U-Haul is for).

Point 4:  Revamp our nuclear regulatory regime

As much as alternative energy enthusiasts would like to deny it, the world needs reliable, 24-hour baseload power -- and wind and solar are not going to provide it (without a reduction in storage costs of at least two orders of magnitude).  The only carbon-free baseload power technology that is currently viable is nuclear.

I will observe that nuclear power suffers under some of the same problems as commercial space flight -- the government helped force the technology faster than it might have grown organically on its own, which paradoxically has slowed its long-term development.  Early nuclear power probably was not ready for prime time, and the hangover from problems and perceptions of this era has made it hard to proceed even when better technologies have existed.   But we are at least 2 generations of technology past what is in most US nuclear plants.  Small air-cooled thorium reactors and other technologies exist that could provide reliable, safe power for over 100 years.  I am not an expert on nuclear regulation, but it strikes me that a regime similar to aircraft safety, where a few designs are approved and used over and over, makes sense.  France, which has the strongest nuclear base in the world, followed this strategy.  Using thorium could also have the advantage of making the technology more exportable, since its utility in weapons production would be limited.

Point 5: Help clean up Chinese, and Asian, coal production

One of the hard parts about fighting CO2 emissions, vs. all the other emissions we have tackled in the past (NOx, SOx, soot/particulates, unburned hydrocarbons, etc), is that we simply don't know how to combust fossil fuels without creating CO2 -- CO2 is inherent to the base chemical reaction of the combustion.  But we do know how to burn coal without tons of particulates and smog and acid rain -- and we know how to do it economically enough to support a growing, prosperous modern economy.

In my mind it is utterly pointless to ask China to limit their CO2 growth.  China has seen the miracle over the last 30 years of having almost a billion people exit poverty.  This is an event unprecedented in human history, and they have achieved it in part by burning every molecule of fossil fuels they can get their hands on, and they are unlikely to accept limitations on fossil fuel consumption that will derail this economic progress.  But I think it is reasonable to help China stop making their air unbreathable, a goal that is entirely compatible with continued economic growth.  In 20 years, when we have figured out and started to build some modern nuclear designs, I am sure the Chinese will be happy to copy these and start working on their CO2 output, but for now their Maslow hierarchy of needs should point more towards breathable air.

As a bonus, this would pay one immediate climate change benefit that likely would dwarf the near-term effect of CO2 reduction.  Right now, much of this soot from Asian coal plants lands on the ice in the Arctic and Greenland.  This black carbon changes the albedo of the ice, causing it to reflect less sunlight and absorb more heat.  The net effect is more melting ice and higher Arctic temperatures.  A lot of folks, including myself, think that the recent melting of Arctic sea ice and rising Arctic temperatures is more attributable to Asian black carbon pollution than to CO2 and greenhouse gas warming (particularly since similar warming and sea ice melting is not seen in the Antarctic, where there is not a problem with soot pollution).

Final Thoughts

At its core, this is a very low cost, even negative cost, climate insurance policy.  The carbon tax combined with a market economy does the work of identifying the most efficient ways to reduce CO2 production.   The economy benefits from the removal of a myriad of distortions and crony give-aways, while also potentially benefiting from the replacement of a consumption tax on labor with a consumption tax on fuel.  The near-term effect on CO2 is small (since the US is only a small part of the global emissions picture), but actually larger than the near-term effect of all the haphazard current programs, and almost certainly cheaper to obtain.  As an added benefit, if you can help China with its soot problem, we could see immediate improvements in probably the most visible front of man-made climate change:  in the Arctic.

Postscript

Perhaps the hardest thing to overcome in reaching a compromise here is the tribalism of modern politics.  I believe this is a perfectly sensible plan that even those folks who believe man-made global warming is a total myth (a group to which I do not belong) could sign up for.  The barrier, though, is tribal.  I consider myself to be pretty free of team politics, but my first reaction when thinking about this kind of plan was, "What? We can't let those guys win.  They are totally full of sh*t.  They are threatening to throw me in jail for my opinions."

It was at this point I was reminded of a customer service story at my company.  I had an upset customer call me, and I ended up giving them a full refund and a certificate to come back and visit us in the future.  I actually suspected there was more to the story, but I didn't want a bad review.  The customer was happy, but my local manager was not.  She called me and said, "That was a bad customer! He was lying to you.  How can you let him win like that?"   Does this sound familiar?  I think we fall into this trap all the time in modern politics, worried more about preventing the other team from winning than about doing the right thing.

Warren Meyer Speaking in LA on Lukewarmer Climate Position on Wednesday, February 24 -- Come See Me!

I am speaking Wednesday night, February 24, at the Athenaeum at Claremont-McKenna College near Pomona.  It is open to the public and is free.  Come by and say hi if you are in the area.  You can just walk in to the presentation, which begins at 6:45, but if you want to attend the pre-dinner at 5:30, there is a $20 charge and you need to reserve a spot by calling 909-621-8244.

I really hope if you are in the LA area you will come by.  The presentation is about 45 minutes plus a Q&A afterwards.


Coyote Climate Talk in LA Area This Week

I am speaking Wednesday night, February 24, at the Athenaeum at Claremont-McKenna College near Pomona.  I believe it is open to the public and is free but requires you to call ahead and reserve a spot.  Come by and say hi if you are in the area.

Coyote on the Real Clear Radio Hour

Bill Frezza interviewed me for his show the other day.  I felt it was not one of my better performances but he says he is a wizard of editing so we will see.  Anyway, I am actually sharing the show with Coyote-favorite Dr. Richard Lindzen, so at least that half of the show should be worth your time.  Here are the details:

Tune in Saturday, February 13th to RealClear Radio Hour with Bill Frezza with guests Richard Lindzen and Warren Meyer.

You can listen live on Bloomberg’s Boston iHeartRadio or Bloomberg’s San Francisco iHeartRadio, Saturdays at 10a PT/ 1p ET, 4p PT/ 7p ET or Sundays at 1a PT/ 4a ET.

Government Science Monopoly

Richard Lindzen, atmospheric physicist, MIT professor emeritus, and lead author of the “Physical Climate Processes and Feedbacks” chapter of the 2001 Intergovernmental Panel on Climate Change report, attributes climate hype to politics, money, and propaganda. Lindzen particularly takes issue with the “97% consensus” claim that is being used to stifle debate and demonize skeptics.

Rescuing Public Parks

Warren Meyer, founder and president of Recreation Resource Management, shares how he has successfully managed public parks for nearly 25 years. Meyer advocates for whole park concessions—privatized management of public parks—to save them from closure and agency mismanagement.

If you can't tune in live, download the as-aired shows from iTunes or listen to podcasts with additional content on SoundCloud or YouTube.

 

The weekly one-hour program airs:

WXKS 1200 and WJMN 94.5FM in Boston Saturdays 1p & 7p & Sundays 4a ET,

KNEW 960 & KOSF 103.7 in San Francisco Saturdays 10a & 4p & Sundays 1a PT,

1030 KVOI in Tucson, AZ Saturdays 4a MT,

KSBN 1230 Money Talk in Spokane, WA Saturdays 5a PT,

Cities 92.9FM WRPW in Bloomington, IL Saturdays 7a CT,

1590 WSMN in Nashua, NH Saturdays 12p ET,

KATE 1450AM in Albert Lea, MN Saturdays 1p CT,

1330 WEBY in Pensacola, FL Saturdays 3p CT,

The Patriot, KRMR 105.7FM in Hays, KS Sundays 3p CT,

The Patriot, KNNS 1510AM in Larned, KS Sundays 3p CT,

KVOW 1450 in Riverton, WY Sundays 3p MT, and

WROM Radio in Detroit, MI Mondays 8p ET

US Temperature Trends, In Context

There was some debate a while back about a temperature chart some Conservative groups were passing around.

Obviously, on this scale, global warming does not look too scary.  The question is, is this scale at all relevant?  I could re-scale the 1929 stock market drop to a chart that goes from Dow 0 to, say, Dow 100,000 and the drop would hardly be noticeable.  That re-scaling wouldn't change the fact that the 1929 stock market crash was incredibly meaningful and had large impacts on the economy.  Kevin Drum wrote about the temperature chart above,

This is so phenomenally stupid that I figured it had to be a joke of some kind.

Mother Jones has banned me from commenting on Drum's site, so I could not participate in the conversation over this chart.  But I thought about it for a while, and I think the chart's author perhaps has a point but pulled it off poorly.  I am going to take another shot at it.

First, I always show the historic temperature anomaly on the zoomed in scale that you are used to seeing, e.g.  (as usual, click to enlarge)

click to enlarge

The problem with this chart is that it is utterly without context just as much as the previous chart.  Is 0.8C a lot or a little?  Going back to our stock market analogy, it's a bit like showing the recent daily fluctuations of the Dow on a scale from 16,300 to 16,350.  The variations will look huge, much larger than either their percentage variation or their meaningfulness to all but the most panicky investors.

So I have started including the chart below as well.  Note that it is in Fahrenheit (vs. the anomaly chart above in Celsius) because US audiences have a better intuition for Fahrenheit, and is only for the US vs. the global chart above.  It shows the range of variation in US monthly averages, with the orange being the monthly average daily maximum temperature across the US, the dark blue showing the monthly average daily minimum temperature, and the green the monthly mean.  The dotted line is the long-term linear trend.

click to enlarge

Note that these are the US averages -- the full range of daily maximums and minimums for the US as a whole would be wider and the full range of individual location temperatures would be wider still.   A couple of observations:

  • It is always dangerous to eyeball charts, but you should be able to see what is well known to climate scientists (and not just some skeptic fever dream) -- that much of the increase over the last 30 years (and even 100 years) of average temperatures has come not from higher daytime highs but from higher nighttime minimum temperatures.  This is one reason skeptics often roll their eyes at attribution of 15 degree summer daytime record heat waves to global warming, since the majority of the global warming signal can actually be found in winter and nighttime temperatures.
  • The other reason skeptics roll their eyes at attribution of 15 degree heat waves to 1 degree long term trends is that this one degree trend is trivial compared to the natural variation found in intra-day temperatures, between seasons, or even across years.  It is for this context that I think this view of temperature trends is useful as a supplement to traditional anomaly charts (in my standard presentation, I show this chart scale once and the standard anomaly chart scale further up about 30 times, so that utility has limits).
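For those curious how a chart like the one above is put together, here is a minimal sketch.  The daily series below is synthetic, standing in for the actual US temperature records, and the trend rate is an arbitrary illustrative number:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic daily max/min temperatures (deg F) standing in for real US data.
rng = np.random.default_rng(0)
dates = pd.date_range("1979-01-01", "2015-12-31", freq="D")
day_of_year = np.asarray(dates.dayofyear)
seasonal = 25 * np.sin(2 * np.pi * (day_of_year - 105) / 365.25)
trend = 0.012 * (np.asarray(dates.year) - 1979)   # purely illustrative warming
tmax = 65 + seasonal + trend + rng.normal(0, 3, len(dates))
tmin = 45 + seasonal + trend + rng.normal(0, 3, len(dates))
df = pd.DataFrame({"tmax": tmax, "tmin": tmin}, index=dates)

# Monthly average daily max, daily min, and mean -- the three colored lines.
monthly = df.resample("MS").mean()
monthly["tmean"] = (monthly["tmax"] + monthly["tmin"]) / 2

# Long-term linear trend of the monthly mean -- the dotted line.
x = np.arange(len(monthly))
slope, intercept = np.polyfit(x, monthly["tmean"], 1)

plt.plot(monthly.index, monthly["tmax"], color="orange", label="avg daily max")
plt.plot(monthly.index, monthly["tmin"], color="navy", label="avg daily min")
plt.plot(monthly.index, monthly["tmean"], color="green", label="monthly mean")
plt.plot(monthly.index, intercept + slope * x, "k:", label="linear trend")
plt.ylabel("deg F")
plt.legend()
plt.show()
```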

Revisiting James Hansen's 1988 Global Warming Forecast to Congress

(Cross-posted from Climate Skeptic)

I want to briefly revisit Hansen's 1988 Congressional forecast.  Yes, I and many others have churned over this ground many times, but I think I now have a better approach.   The typical approach has been to overlay some actual temperature data set on top of Hansen's forecast (e.g. here).  The problem is that with revisions to all of these data sets, particularly the GISS reset in 1999, none of these data sets match what Hansen was using at the time.  So we often get into arguments on where the forecast and actuals should be centered, etc.

This might be a better approach.  First, let's start with Hansen's forecast chart (click to enlarge).

hansen forecast

Folks have argued for years over which CO2 scenario best matches history.  I would argue it is somewhere between A and B, but you will see in a moment that it almost does not matter.    It turns out that both A and B have nearly the same regressed slope.

The approach I took this time was not to worry about matching exact starting points or reconciling different anomaly base periods.  I merely took the slope of the A and B forecasts and compared it to the slope over the last 30 years of a couple of different temperature databases (Hadley CRUT4 and the UAH v6 satellite data).

The only real issue is the start year.  The analysis is not very sensitive to the year, but I tried to find a logical start.  Hansen's chart is frustrating because his forecasts never converge exactly, even 20 years in the past.  However, they are nearly identical in 1986, a logical base year if Hansen was giving the speech in 1988, so I started there.  I didn't do anything fancy on the trend lines, just let Excel calculate the least squares regression.  This is what we get (as usual, click to enlarge).

click to enlarge

I think that tells the tale pretty clearly.   Versus the gold standard surface temperature measurement (vs. Hansen's thumb-on-the-scale GISS), his forecast was 2x too high.  Versus the satellite measurements it was 3x too high.

The least squares regression approach probably under-estimates the A scenario growth rate, but that is OK; that just makes the conclusion more robust.
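Here is a minimal sketch of the mechanics of this slope comparison.  The series below are made-up placeholders standing in for the digitized Hansen scenarios and the Hadley/UAH records -- they only illustrate the method, not the actual numbers:

```python
import numpy as np

years = np.arange(1986, 2016)   # start at 1986, per the discussion above

def trend_per_decade(years, temps):
    """Least-squares slope of a temperature series, in degrees C per decade."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope * 10

# Placeholder anomaly series (NOT the real digitized data).
rng = np.random.default_rng(42)
hansen_b = 0.030 * (years - 1986) + rng.normal(0, 0.05, len(years))
hadcrut4 = 0.017 * (years - 1986) + rng.normal(0, 0.08, len(years))
uah_v6   = 0.011 * (years - 1986) + rng.normal(0, 0.10, len(years))

for name, series in [("Hansen B (placeholder)", hansen_b),
                     ("HadCRUT4 (placeholder)", hadcrut4),
                     ("UAH v6   (placeholder)", uah_v6)]:
    print(f"{name}: {trend_per_decade(years, series):+.2f} C/decade")
```

Dividing the forecast slope by an observed slope gives the "times too high" ratios discussed above.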

By the way, I owe someone a thanks for the digitized numbers behind Hansen's chart but it has been so many years since I downloaded them I honestly forgot who they came from.

Dear Conservatives: This Is Why We Hate All Your Civil Rights Restrictions in the Name of Fighting Terror

Because about 5 seconds after they are passed, government officials are scheming to use the laws against non-terrorists to protect themselves from criticism.

Twenty-four environmental activists have been placed under house arrest ahead of the Paris climate summit, using France’s state of emergency laws. Two of them slammed an attack on civil liberties in an interview with FRANCE 24....

The officers handed Amélie a restraining order informing her that she can no longer leave Rennes, is required to register three times a day at the local police station, and must stay at home between 8pm and 6am.

The order ends on December 12, the day the Paris climate summit draws to a close....

Citing the heightened terrorist threat, French authorities have issued a blanket ban on demonstrations – including all rallies planned to coincide with the climate summit, which Hollande is due to formally open on Monday.

This justification is about as lame as they come:

AFP news agency has had access to the restraining notices. It says they point to the “threat to public order” posed by radical campaigners, noting that security forces “must not be distracted from the task of combating the terrorist threat”.

Note that the police had absolutely no evidence that these folks were planning any violence, or even that they were planning any particular sort of protest.  This was a classic "round up the usual suspects" dragnet of anyone who had made a name for themselves protesting at green causes in the past.

Postscript:  Yes, I know that these protesters and I would have very little common ground on environmental issues.  So what?  There is nothing more important than supporting the civil rights of those with whom one disagrees.

And yes, I do have the sneaking suspicion that many of the very same people caught up in this dragnet would cheer if I and other skeptics were similarly rounded up for our speech by the government.  But that is exactly the point.  There are people who, if in power, would like to have me rounded up.  So it is important to stand firm against any precedent allowing the government to have these powers.  Else the only thing standing between me and jail is a single election.

Update:  Think that last bit is overly dramatic?  Think again.  I can guarantee you that you have some characteristic or belief that would cause someone in the world today, and probably many people, to want to put you up against the wall if they had the power to do so.  As proof, see:  all of history.

The New Rich -- Living the High Life Through Your Non-Profit

Several months ago, a lot of folks were shocked to find that the Clinton Foundation only spent $9 million in direct aid out of a total budget of $150 million, with the rest going to salaries and bonuses and luxury travel for family and friends and other members of the Clinton posse.

None of this surprised me.  From my time at Ivy League schools, I know any number of kids from rich families who work for some sort of trust or non-profit that has nominally charitable goals, but most of whose budget seems to go to lavish parties, first-class travel, and sinecures for various wealthy family scions.

But this week comes a story from the climate world that demonstrates that making a fortune from your non-profit is not just for the old money any more -- it appears to be a great way for activists to build new fortunes.

The story starts with the abhorrent letter by 20 university professors urging President Obama to use the RICO statute (usually thought of as a tool to fight organized crime) to jail people who disagree with them in a scientific debate.  The letter was authored by Jagadish Shukla of George Mason University, and seems to take the position that all climate skeptics are part of an organized, coordinated gang that is actively promoting ideas it knows to be wrong solely for financial enrichment.  (I will give the near-universal skeptic reply to this:  "So where is my Exxon check?!")

Anyway, a couple of folks, including Roger Pielke, Jr. and Steve McIntyre, both folks who get accused of being oil industry funded but who in fact get little or no funding from any such source, wondered where Shukla's funding comes from.   Shukla gets what looks like a very generous salary from George Mason University of $314,000 a year.  Power to him on that score.  However, the more interesting part is where he makes the rest of his money, because it turns out his university salary is well under half his total income.  The "non-profits" he controls pay him, his family, and his friends over $800,000 a year in compensation, all paid out of government grants that supposedly are to support science.

A number of years ago Shukla created a couple of non-profits called the Institute for Global Environment and Security (IGES) and the Center for Ocean Land Atmosphere Interactions (COLA).  Both were founded by Shukla and are essentially controlled by him, though both now have some sort of institutional relationship with George Mason University as well.  Steve McIntyre has the whole story in its various details.

COLA and IGES both seem to have gotten most of their revenues from NSF, NASA, and NOAA grants.    Over the years, the IGES appears to have collected over $75 million in grants.  As an aside, this single set of grants to one tiny, you-never-even-heard-of-it climate non-profit is very likely way higher than the cumulative sum total of all money ever paid to skeptics.   I have always thought that warmists freaking out over the trivial sums of money going to skeptics is a bit like a football coach who is winning 97-0 freaking out in anger over the other team finally picking up a first down.

Apparently a LOT of this non-profit grant money ends up in the Shukla family bank accounts.

In 2001, the earliest year thus far publicly available, in addition to his university salary (not yet available, but presumably about $125,000), Shukla and his wife received a further $214,496 in compensation from IGES (Shukla -$128,796; Anne Shukla – $85,700).  Their combined compensation from IGES doubled over the next two years to approximately $400,000 (additional to Shukla’s university salary of say $130,000), for combined compensation of about $530,000 by 2004.

Shukla’s university salary increased dramatically over the decade reaching $250,866 by 2013 and $314,000 by 2014.  (In this latter year, Shukla was paid much more than Ed Wegman, a George Mason professor of similar seniority). Meanwhile, despite the apparent transition of IGES to George Mason, the income of the Shuklas from IGES continued to increase, reaching $547,000 by 2013.

Grant records are a real mess, but it looks from George Mason University press releases like IGES and its successor recently got a $10 million five-year grant, or $2 million a year, from the government.  Of that money:

  • approximately $550,000 a year goes to Shukla and his wife as salaries
  • some amount, perhaps $90,000 a year, goes to Shukla's daughter as salary
  • $171,000 a year goes as salary to James Kinter, an associate of Shukla at George Mason
  • An unknown amount goes for Shukla's expenses, for example travel.  When was the last time you ever heard of a climate conference, or any NGO conference, being held at, say, the Dallas-Ft Worth Airport Marriott?  No, because these conferences are really meant as paid vacation opportunities at taxpayer expense for non-profit executives.

I don't think it would be too much of a stretch to say, if one includes travel and personal expenses paid, that half the government grants to this non-profit are going to support the lifestyle of Shukla and his friends and family.  Note this is not money for Shukla's research or lab; this is money paid to him personally.
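A quick back-of-the-envelope check of that claim, using the salary figures listed above (the expense add-on is a rough guess, not a documented number):

```python
# Salaries named above, measured against the ~$2 million/year grant.
grant_per_year = 2_000_000
named_salaries = 550_000 + 90_000 + 171_000   # Shukla & wife, daughter, Kinter
print(f"Named salaries alone: {named_salaries / grant_per_year:.0%}")  # ~41%

# Adding a rough guess for travel and personal expenses (an assumption, not a
# documented figure) pushes the total toward half the grant.
expenses_guess = 200_000
print(f"With expense guess:   {(named_salaries + expenses_guess) / grant_per_year:.0%}")  # ~51%
```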

Progressives always like to point out examples of corruption in for-profit companies, and certainly those exist.  But there are numerous market and legal checks that bring accountability for such corruption.  Nothing of the sort exists in the non-profit world.  Not only are there few accountability mechanisms, but most of these non-profits are very good at using their stated good intentions as a shield from scrutiny -- "How can you accuse us of corruption, we are doing such important work!"

Postscript:  Oddly, another form of this non-profit scam exists in my industry.  As a reminder, my company privately operates public recreation areas.  Several folks have tried to set up what I call for-profit non-profits.  An individual will create a non-profit, and then pay themselves some salary that is equal to or even greater than the profits they would get as an owner.  They are not avoiding taxes -- they still have to pay taxes on that salary just like I have to pay taxes (at the same individual tax rates) on my pass-through profits.

What they are seeking are two advantages:

  • They are hoping to avoid some expensive labor law.  In most cases, these folks over-estimate how much a non-profit shell shelters them from labor law, but there are certain regulations (like the new regulations by the Obama Administration that force junior managers to be paid by the hour rather than be salaried) that do apply differently or not at all to a non-profit.
  • They are seeking to take advantage of a bias among many government employees, specifically that these government employees are skeptical of, or even despise, for-profit private enterprise.  As a result, when seeking to outsource certain operations on public lands, some individual decision-makers in government will have a preference for giving the contract to a nominal non-profit.   In California, there is even legislation that gives this bias a force of law, opening certain government contracting opportunities only to non-profits and not for-profits.

The latter can have hilarious results.  There is one non-profit I know of that is a total dodge, but the "owner" is really good at piously talking about his organization being "cleaner" because it is a non-profit, while all the while paying himself a salary higher than my last year's profits.

These 20 Scientists Want to Make it A Crime to Disagree with Them

I think it is important to publicize these names far and wide:

  • Jagadish Shukla, George Mason University, Fairfax, VA
  • Edward Maibach, George Mason University, Fairfax, VA
  • Paul Dirmeyer, George Mason University, Fairfax, VA
  • Barry Klinger, George Mason University, Fairfax, VA
  • Paul Schopf, George Mason University, Fairfax, VA
  • David Straus, George Mason University, Fairfax, VA
  • Edward Sarachik, University of Washington, Seattle, WA
  • Michael Wallace, University of Washington, Seattle, WA
  • Alan Robock, Rutgers University, New Brunswick, NJ
  • Eugenia Kalnay, University of Maryland, College Park, MD
  • William Lau, University of Maryland, College Park, MD
  • Kevin Trenberth, National Center for Atmospheric Research, Boulder, CO
  • T.N. Krishnamurti, Florida State University, Tallahassee, FL
  • Vasu Misra, Florida State University, Tallahassee, FL
  • Ben Kirtman, University of Miami, Miami, FL
  • Robert Dickinson, University of Texas, Austin, TX
  • Michela Biasutti, Earth Institute, Columbia University, New York, NY
  • Mark Cane, Columbia University, New York, NY
  • Lisa Goddard, Earth Institute, Columbia University, New York, NY
  • Alan Betts, Atmospheric Research, Pittsford, VT

These 20 people, who nominally call themselves "scientists", have written a letter to President Obama urging him to use the RICO statute to prosecute people who disagree with them on climate science, essentially putting scientific disagreement in the same status as organized crime.  If they can't win the scientific debate with persuasion, they will win it with guns.  From the letter:

One additional tool – recently proposed by Senator Sheldon Whitehouse – is a RICO (Racketeer Influenced and Corrupt Organizations Act) investigation of corporations and other organizations that have knowingly deceived the American people about the risks of climate change, as a means to forestall America’s response to climate change.

Of course "deceive the American people" is defined by these folks in practice as "disagreeing with us".

Gawker Was Always Vile

Even before the current unpleasantness, Gawker was always vile.  Here is Adam Weinstein in Gawker arguing that people who disagree with him should be jailed.  Incredibly, Weinstein has been held up in certain quarters as a voice of moderation and reasonableness in the current Gawker brouhaha.

Those [climate] denialists should face jail. They should face fines. They should face lawsuits from the classes of people whose lives and livelihoods are most threatened by denialist tactics...

I'm talking about Rush and his multi-million-dollar ilk in the disinformation business. I'm talking about Americans for Prosperity and the businesses and billionaires who back its obfuscatory propaganda. I'm talking about public persons and organizations and corporations for whom denying a fundamental scientific fact is profitable, who encourage the acceleration of an anti-environment course of unregulated consumption and production that, frankly, will screw my son and your children and whatever progeny they manage to have.

Those malcontents must be punished and stopped.

Deniers will, of course, fuss and stomp and beat their breasts and claim this is persecution, this is a violation of free speech. Of course, they already say that now, when judges force them into doing penance for comparing climate scientists to child-rapist and denial poster-boy Jerry Sandusky.

But First Amendment rights have never been absolute. You still can't yell "fire" in a crowded theater. You shouldn't be able to yell "balderdash" at 10,883 scientific journal articles a year, all saying the same thing: This is a problem, and we should take some preparations for when it becomes a bigger problem.

Incredibly, he makes this plea while arguing that it is wrong "to deny people the tools they need to inform themselves" --  which we will accomplish by throwing one side of the debate in jail?  Really?

I am so sick of this "First Amendment is not absolute" bullshit.  It is absolute when it comes to issues like debating the merit of a scientific conclusion or debating the political implications of scientific research.  It is absolutely absolute.  In sports terms, this is a pop fly hit to second base.  It is nowhere near the foul lines.   It is so far from the foul lines that people would look askance at an umpire who screamed "fair ball" when the fact was already so patently obvious.

And no: motives, funding sources, and even being demonstrably right or wrong does not affect this absolute First Amendment protection.

Which all leaves an interesting question for Gawker:  Under what First Amendment theory is outing salacious sexual details of private citizens who happen to work for Gawker's competition in order to gain advertising revenue somehow protected but discussing the shortcomings and political consequences of climate forecasts is not?  I think they are both protected, but the former sure looks closer to the foul line than the latter.

A Great Example of How the Media Twists Facts on Climate

First, let's start with the Guardian headline:

Exxon knew of climate change in 1981, email says – but it funded deniers for 27 more years

So now let's look at the email, in full, which is the sole source for the Guardian headline.  I challenge you, no matter how much you squint, to find a basis for the Guardian's statement.  Basically the email says that Exxon knew of the concern about global warming in 1981, but did not necessarily agree with it.  Hardly the tobacco-lawyer cover-up the Guardian is trying to make it sound like.  I will reprint the email in full because I actually think it is a pretty sober view of how good corporations think about these issues, and it accurately reflects the Exxon I knew from 3 years as a mechanical / safety engineer in a refinery.

I will add that you can see the media denial that a lukewarmer position even exists (which I complained about most recently here) in full action in this Guardian article.  Exxon's position as described in the Guardian's source looks pretty close to the lukewarmer position to me -- that man-made global warming exists but is being exaggerated.   But to the Guardian, and many others, there is only full-blown acceptance of the most absurd exaggerated climate change forecasts or you are a denier.  Anyway, here is the email in full:

Corporations are interested in environmental impacts only to the extent that they affect profits, either current or future. They may take what appears to be altruistic positions to improve their public image, but the assumption underlying those actions is that they will increase future profits. ExxonMobil is an interesting case in point.

Exxon first got interested in climate change in 1981 because it was seeking to develop the Natuna gas field off Indonesia. This is an immense reserve of natural gas, but it is 70% CO2. That CO2 would have to be separated to make the natural gas usable. Natural gas often contains CO2 and the technology for removing CO2 is well known. In 1981 (and now) the usual practice was to vent the CO2 to the atmosphere. When I first learned about the project in 1989, the projections were that if Natuna were developed and its CO2 vented to the atmosphere, it would be the largest point source of CO2 in the world and account for about 1% of projected global CO2 emissions. I’m sure that it would still be the largest point source of CO2, but since CO2 emissions have grown faster than projected in 1989, it would probably account for a smaller fraction of global CO2 emissions.

The alternative to venting CO2 to the atmosphere is to inject it into ground. This technology was also well known, since the oil industry had been injecting limited quantities of CO2 to enhance oil recovery. There were many questions about whether the CO2 would remain in the ground, some of which have been answered by Statoil’s now almost 20 years of experience injecting CO2 in the North Sea. Statoil did this because the Norwegian government placed a tax on vented CO2. It was cheaper for Statoil to inject CO2 than pay the tax. Of course, Statoil has touted how much CO2 it has prevented from being emitted.

In the 1980s, Exxon needed to understand the potential for concerns about climate change to lead to regulation that would affect Natuna and other potential projects. They were well ahead of the rest of industry in this awareness. Other companies, such as Mobil, only became aware of the issue in 1988, when it first became a political issue. Natural resource companies – oil, coal, minerals – have to make investments that have lifetimes of 50-100 years. Whatever their public stance, internally they make very careful assessments of the potential for regulation, including the scientific basis for those regulations. Exxon NEVER denied the potential for humans to impact the climate system. It did question – legitimately, in my opinion – the validity of some of the science.

Political battles need to personify the enemy. This is why liberals spend so much time vilifying the Koch brothers – who are hardly the only big money supporters of conservative ideas. In climate change, the first villain was a man named Donald Pearlman, who was a lobbyist for Saudi Arabia and Kuwait. (In another life, he was instrumental in getting the U.S. Holocaust Museum funded and built.) Pearlman’s usefulness as a villain ended when he died of lung cancer – he was a heavy smoker to the end.

Then the villain was the Global Climate Coalition (GCC), a trade organization of energy producers and large energy users. I was involved in GCC for a while, unsuccessfully trying to get them to recognize scientific reality. (That effort got me on to the front page of the New York Times, but that’s another story.) Environmental group pressure was successful in putting GCC out of business, but they also lost their villain. They needed one which wouldn’t die and wouldn’t go out of business. Exxon, and after its merger with Mobil ExxonMobil, fit the bill, especially under its former CEO, Lee Raymond, who was vocally opposed to climate change regulation. ExxonMobil’s current CEO, Rex Tillerson, has taken a much softer line, but ExxonMobil has not lost its position as the personification of corporate, and especially climate change, evil. It is the only company mentioned in Alyssa’s e-mail, even though, in my opinion, it is far more ethical that many other large corporations.

Having spent twenty years working for Exxon and ten working for Mobil, I know that much of that ethical behavior comes from a business calculation that it is cheaper in the long run to be ethical than unethical. Safety is the clearest example of this. ExxonMobil knows all too well the cost of poor safety practices. The Exxon Valdez is the most public, but far from the only, example of the high cost of unsafe operations. The value of good environmental practices are more subtle, but a facility that does a good job of controlling emission and waste is a well run facility, that is probably maximizing profit. All major companies will tell you that they are trying to minimize their internal CO2 emissions. Mostly, they are doing this by improving energy efficiency and reducing cost. The same is true for internal recycling, again a practice most companies follow. Its just good engineering.