Posts tagged ‘Climate Models’

Denying the Climate Catastrophe: 8. The Lukewarmer Middle Ground

This is Chapter 8 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground (this article)
  9. A Low-Cost Insurance Policy

In this chapter we are going to try to sum up where we are and return to our very first chapter, when I said that we would find something odd once we returned to the supposed global warming "consensus".

First, let's return to our framework one last time and try to summarize what has been said:

Slide75

I believe that this is a pretty fair representation of the median luke-warmer position.  Summarized, it would be:

  • Manmade CO2 warms the Earth, though by much less than most climate models claim, because those models assume unrealistic levels of positive feedback that overstate future warming. One degree C of warming over the next century, rather than four or five, is a more realistic projection of man-made warming
  • The world has certainly warmed over the last century, though by perhaps a bit less than the 0.8C in the surface temperature record due to uncorrected flaws in that record
  • Perhaps half of this past warming is due to man, the rest due to natural variability
  • There is little evidence that weather patterns are "already changing" in any measurable way from man-made warming

The statements I just wrote above, no matter how reasonable, are enough to get me and many others vilified as "deniers".  You might think that I am exaggerating -- that the denier tag is reserved for folks who absolutely deny any warming effect of CO2.  But that is not the case, I can assure you from long personal experience.

The Climate Bait and Switch

Of course, the very act of attempting to shut people up who disagree with one's position on a scientific issue is, I would have thought, obviously anti-science.   The history of science is strewn with examples of the majority being totally wrong.   Even into the 1960's, for example, the 97% consensus in geology was that the continents don't move and that the few scientists who advocated for plate tectonics theory were crackpots.

But that is not how things work today.  Climate action advocates routinely look for ways to silence climate skeptics, up to and including seeking to prosecute these climate heretics and try to throw them in jail.

The reason that alarmists say they feel confident in vilifying and attempting to silence folks like myself is because they claim that the science is settled, that 97% of climate scientists believe in the consensus, and so everyone who is not on board with the consensus needs to shut up.  But what exactly is this consensus?

The 97% number first appeared in a "study" by several academics who sent out a survey to scientists with some climate change questions.  They received 3,146 responses, but they decided that only 77 of these respondents "counted" as climate scientists, and 75 of those 77 (97%) answered two questions about climate change in the affirmative.

Slide82

We will get to the two questions in a second, but note already the odd study methodology.  If the other 10,000-plus people who were sent the survey were not its targets, why were they sent a survey in the first place?  It makes one suspicious that the study methodology was changed mid-stream to get the answer the authors wanted.

Anyway, what is even more fascinating is the two questions asked in the survey.  Here they are:

  1. When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?
  2. Do you think human activity is a significant contributing factor in changing mean global temperatures?

The 97% in this survey answered the questions "risen" and "yes".

Do you see the irony here?  If you have been following along with this series, you should be able to say how I would have answered the two questions.  I would certainly have said "risen" to #1.  The answer to question #2 is a bit harder because "significant" is not defined, but in a complex system with literally thousands of variables, I would consider any one variable a significant contributor at anything over about 10%.  Since I estimated man's effect on past warming at around 40-50%, I would have answered "yes" to #2!  In fact, nearly every prominent science-based skeptic I can think of would likely have answered the same.

So you heard it right -- I and many prominent skeptics are part of the 97% consensus.  Effectively, I am being told to shut up and not continue to say what I think, in the name of a 97% consensus that represents exactly what I am saying.  This is so weird as to be almost Kafka-esque.

This is what I call the climate bait and switch: shaky propositions, like high positive feedback assumptions, are defended with the near-certainty that surrounds unrelated propositions, such as the basic operation of the greenhouse gas effect.

In fact, merely arguing about whether man-made warming exists or is "significant" falls well short of what we really need in the public policy arena.  What we really should be discussing is a proposition like this:

Is manmade CO2 causing catastrophic increases in warming and warming-driven weather effects whose costs exceed those of reducing CO2 production enough to avoid these effects?

It is at about this point that people usually bring up the precautionary principle.  So that I am not unfair to proponents of that principle, I will use the Wikipedia definition:

if an action or policy has a suspected risk of causing harm to the public, or to the environment, in the absence of scientific consensus (that the action or policy is not harmful), the burden of proof that it is not harmful falls on those taking an action that may or may not be a risk.

The principle is used by policy makers to justify discretionary decisions in situations where there is the possibility of harm from making a certain decision (e.g. taking a particular course of action) when extensive scientific knowledge on the matter is lacking. The principle implies that there is a social responsibility to protect the public from exposure to harm, when scientific investigation has found a plausible risk. These protections can be relaxed only if further scientific findings emerge that provide sound evidence that no harm will result.

I believe that, as stated, this is utter madness.  I will give you an example.  Consider a vaccine that saves thousands of lives a year.  Let's say, as is typical of almost every vaccine, that it also hurts a few people, such that it may kill 1 person for every thousand it saves.  By the precautionary principle as stated, we would never have approved any vaccine, because the principle puts no weight on the harms an action avoids.
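To make that asymmetry concrete, here is a minimal expected-value sketch in Python using the hypothetical vaccine numbers above; the 10,000 lives-saved figure is an illustrative assumption, not data from any real vaccine.

```python
# A minimal expected-value sketch of the hypothetical vaccine above.
# All figures are illustrative assumptions, not real vaccine data.

lives_saved_per_year = 10_000                            # assumed
deaths_caused_per_year = lives_saved_per_year / 1_000    # 1 death per 1,000 saved

net_lives = lives_saved_per_year - deaths_caused_per_year
print(f"Net lives saved per year: {net_lives:,.0f}")     # 9,990

# A naive reading of the precautionary principle weighs only the harm term
# (deaths_caused_per_year > 0, so: do not approve) and ignores the far larger
# benefit term -- which is the asymmetry the text is criticizing.
```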

So take fossil fuel burning.  Proponents of taking drastic action to curb fossil fuel use in the name of global warming prevention will argue that until there is an absolute consensus that burning fossil fuels is not harmful to the climate, such burning should be banned.  But this ignores the substantial, staggering, unbelievably positive effects we have gained from fossil fuels and the technology and economy they support.

Just remember back to that corn yield chart.

Slide123

Bill McKibben wants us to stop using fossil fuels because they may cause warmer temperatures that might reduce corn yields.  But there is a near absolute certainty that dismantling the fossil fuel economy would take us back to the horrendous yields of the yellow years on this chart.  Proponents of climate action point to the possibility of warming-based problems, but miss the near certainty of problems from eliminating fossil fuels.

Over the last 30 years, something unprecedented in the history of human civilization has occurred -- an astounding number of people have exited absolute poverty.

Slide124

Folks like McKibben act like there is no downside to drastically cutting back on fossil fuel use and switching to substantially more expensive and less convenient fuels, as if protecting Exxon's profits were the only reason anyone would possibly oppose such a measure.  But the billion or so people who have exited poverty of late have done so by burning every bit of fossil fuels they can obtain, and never would have been able to do so in such numbers had such an inexpensive fuel option not been available.  We in the West could likely afford to pay $50 a month more for fuel, but what of the poor of the world?

Perhaps this will give one an idea of how central inexpensive fossil fuels are to well-being.  This is a chart from World Bank data plotting each country by its per capita CO2 emissions and its average life expectancy.

Slide79

As you can see, there is a real, meaningful relationship between CO2 production and life expectancy.  In fact, each 10x increase in per capita CO2 production is correlated with about 10 years of additional life expectancy.  Of course, this relationship is not direct -- CO2 itself does not have health benefits (if one is not a plant).  But emitting more CO2 is a byproduct of a growing technological economy, which leads to greater wealth and life expectancy.
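For readers who want to see the shape of the claimed relationship, here is a minimal sketch of the log-linear fit involved.  The three data points are made-up stand-ins, not the World Bank data behind the chart, which covers roughly 200 countries.

```python
# Sketch of the kind of log-linear fit behind the claim above: regress life
# expectancy on log10(per-capita CO2). The three (CO2, lifespan) points are
# made-up stand-ins, not World Bank data.
import numpy as np

co2_per_capita = np.array([0.1, 1.0, 10.0])      # tonnes CO2/person/yr (assumed)
life_expectancy = np.array([55.0, 66.0, 74.0])   # years (assumed)

slope, intercept = np.polyfit(np.log10(co2_per_capita), life_expectancy, 1)
print(f"Each 10x increase in CO2 ~ {slope:.1f} extra years of life expectancy")
```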

The problem, then, is not that we shouldn't consider the future potential costs and risks of climate change, but that we shouldn't consider them in a vacuum, without also considering the costs of placing severe artificial limits on inexpensive fossil fuels.

Slide78

People often say to me that climate action is an insurance policy -- and they ask me, "you buy insurance, don't you?"   My answer invariably is, "yes, I buy insurance, but not when the cost of the policy is greater than the risk being insured against."

As it turns out, there is an approach we can take in this country to creating a low-cost insurance policy against the risks that temperature sensitivity to CO2 is higher than I have estimated in this series.  I will outline that plan in my final chapter.

Here is Chapter 9:  A Low-Cost Insurance Policy

Denying the Climate Catastrophe: 7. Are We Already Seeing Climate Change?

This is Chapter 7 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change (this article)
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

Note:  This is by far the longest chapter, and could have been 10x longer without a lot of aggressive editing.  I have chosen not to break it into two pieces.  Sorry for the length.  TL;DR:  The vast majority of claims of current climate impacts from CO2 are grossly exaggerated or even wholly unsupported by the actual data.  The average quality of published studies in this area is very low compared to other parts of climate science.

Having discussed the theory and reality of man-made warming, we move in this chapter to what is often called "climate change" -- is manmade warming already causing adverse changes in the climate?

click to enlarge

This is a peculiarly frustrating topic for a number of reasons.

First, everyone who discusses climate change automatically assumes the changes will be for the worse.  But are they?  The Medieval Warm Period, likely warmer than today, was a period of agricultural plenty and demographic expansion (at least in Europe) -- it was only the end of the warm period that brought catastrophe, in the form of famine and disease.  As the world warms, are longer growing seasons in the colder parts of the northern hemisphere really so bad, and why is it no one ever mentions such positive offsets?

The second frustrating issue is that folks increasingly talk about climate change as if it were a direct result of CO2, e.g. that CO2 is somehow directly worsening hurricanes.  This is in part just media sloppiness, but it has also been an explicit strategy, re-branding global warming as climate change during the last 20 years when global temperatures were mostly flat.  So it is important to make this point:  there is absolutely no mechanism that has been suggested by anyone wherein CO2 can cause climate change except through the intermediate step of warming.  CO2 causes warming, which then potentially leads to changes in weather.  If CO2 is only causing incremental warming, then it likely is only causing incremental changes to other aspects of the climate.  (I will note as an aside that man certainly has changed the climate through mechanisms other than CO2, but we will not discuss these here.  A great example is land use:  Al Gore claimed the snows of Kilimanjaro are melting because of global warming, but in fact it is far more likely they are receding due to precipitation changes resulting from deforestation of Kilimanjaro's slopes.)

Finally, and perhaps most frustrating, handling claims of various purported man-made changes to the climate has become an endless game of "whack-a-mole".  It is almost impossible to keep up with the myriad claims of things that are changing (always for the worse) due to CO2.  One reason that has been suggested for this endless proliferation of dire predictions is that if one wants to study the mating habits of the ocelot, one may have trouble getting funding, but funding is available in large quantities if you re-brand your study as the effect of climate change on the mating habits of the ocelot.  It is the rare unusual weather event or natural phenomenon (Zika virus!) that is not blamed by someone somewhere on man-made climate change.

As a result, this section could be near-infinitely long.  To avoid that, and to avoid a quickly tedious series of charts labelled "hurricanes not up", "tornadoes not up", etc., I want to focus more on the systematic errors that lead to the false impression that we are seeing man-made climate changes all around us.

click to enlarge

We will start with publication bias, which I would define as mistaking a trend in the reporting of a type of event for a trend in the underlying events themselves.  Let's start with a classic example from outside climate, the "summer of the shark".

click to enlarge

The media hysteria began in early July, 2001, when a young boy was bitten by a shark on a beach in Florida.  Subsequent attacks received breathless media coverage, up to and including near-nightly footage from TV helicopters of swimming sharks.  Until the 9/11 attacks, sharks were the third biggest story of the year as measured by the time dedicated to it on the three major broadcast networks’ news shows.

Through this coverage, Americans were left with a strong impression that something unusual was happening — that an unprecedented number of shark attacks were occurring in that year, and the media dedicated endless coverage to speculation by various “experts” as to the cause of this sharp increase in attacks.

click to enlarge

Except there was one problem — there was no sharp increase in attacks. In the year 2001, five people died in 76 shark attacks. However, just a year earlier, 12 people had died in 85 attacks. The data showed that 2001 actually was a down year for shark attacks.  The increased media coverage of shark attacks was mistaken for an increase in shark attacks themselves.

Hopefully the parallel with climate reporting is obvious.  Whereas a heat wave in Moscow was likely local news only 30 years ago, now it is an international story that is tied, in every broadcast, to climate change.  Every single tail-of-the-distribution weather event from around the world is breathlessly reported, leaving the impression among viewers that more such events are occurring, even when there is in fact no such trend.   Further, since weather events can drive media ratings, there is  an incentive to make them seem scarier:

click to enlarge

When I grew up, winter storms were never named.  It was just more snow in Buffalo, or wherever.  Now, though, we get "Winter Storm Saturn: East Coast Beast."  Is the weather really getting scarier, or just the reporting?

click to enlarge

The second systematic error is not limited to climate, and is so common I actually have a category on my blog called "trend that is not a trend".   There is a certain chutzpah involved in claiming a trend when it actually does not exist in the data, but such claims occur all the time.  In climate, a frequent variation on this failure is the claiming of a trend from a single data point -- specifically, a tail-of-the-distribution weather event will be put forward as "proof" that climate is changing, i.e., that there is somehow a trend to the worse in the Earth's climate.

The classic example was probably just after Hurricane Katrina.  In a speech in September of 2005 in San Francisco, Al Gore told his Sierra Club audience that not only was Katrina undoubtedly caused by man-made global warming, but that it was the harbinger of a catastrophic onslaught of future such hurricanes.  In fact, though, there is no upward trend in hurricane activity.  2005 was a high but not unprecedented year for hurricanes.  And Katrina was soon followed by a long and historic lull in North American hurricane activity.

Counting hurricane landfalls is a poor way to look at hurricanes.  A better way is to look at the total energy of hurricanes and cyclones globally.  And as you can see, the numbers are cyclical (as every long-time hurricane observer could have told Mr. Gore) but without any trend:

click to enlarge

In fact, death rates from severe weather have been dropping throughout the last century, at the same time CO2 levels have been rising:

click to enlarge

Of course, it is likely that increasing wealth and better technology are responsible for much of this mitigation, rather than changes in underlying weather patterns, but this is still relevant to the debate -- many proposed CO2 abatement plans would have the effect of slowing growth in the developing world, leaving them more vulnerable to weather events.   I have argued for years that the best way to fight weather deaths is to make the world rich, not to worry about 1 hurricane more or less.

Droughts are another event where the media quickly finds someone to blame the event on man-made climate change and declare that this one event is proof of a trend.  Bill McKibben tweeted about drought and corn yields many times in 2012, for example:

It turns out that based on US government data, the 2012 drought was certainly severe but no worse than several other droughts of the last 50 years (negative numbers represent drought).

click to enlarge

There is no upward trend at all (in fact, a slightly downward trend that likely is not statistically significant) in dry weather in the US:

click to enlarge

McKibben blamed bad corn yields in 2012 on man-made global warming, and again implied that one year's data point was indicative of a trend.

US corn yields indeed were down in 2012, but still higher than they had been in any year prior to 1995.

Slide138

It is worth noting the strong upward trend in corn yields from 1940 to today, over the same period the world has supposedly experienced unprecedented man-made warming.  I might also point out the years in yellow, when corn was grown prior to the strong automation of farming via the fossil fuel economy.  Bill McKibben hates fossil fuels, and believes they should be entirely eliminated.  If so, he also must "own" the corn yields in yellow.  CO2-driven warming has not inhibited corn yields, but having McKibben return us to a pre-modern economy certainly would.

Anyway, as you might expect, corn yields after 2012 returned right back to trend and continued to hit new records.  2012 did not represent a new trend; it was simply one bad year.

Slide139

I think most folks would absolutely swear, from media coverage, that the US is seeing more new high temperatures set and an upward trend in heat waves.  But it turns out neither is the case.

click to enlarge

Obviously, one has to be careful with this analysis.  Many temperature stations in the US Historical Climatology Network have only been there for 20 or 30 years, so the all-time high at such a station for any given day is, by definition, going to be in the last 20 or 30 years.  But if one looks at temperature stations with many years of data, as done above, we can see there has been no particular uptick in high temperature records, and in fact a disproportionate number of our all-time local records were set in the 1930's.
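Here is a sketch of the record-counting methodology just described, in Python with pandas.  The file name and column names (station, date, tmax) are hypothetical placeholders, not the actual USHCN schema.

```python
# A sketch of the record-counting approach described above: restrict to
# stations with long histories before counting all-time daily highs, so a
# "record" is not an artifact of a short station history. File and column
# names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("ushcn_daily_tmax.csv", parse_dates=["date"])  # hypothetical

# Keep only stations with at least 80 years of data.
years = df.groupby("station")["date"].agg(lambda d: d.dt.year.nunique())
df = df[df["station"].isin(years[years >= 80].index)]

# For each station and calendar day, find the year its all-time high was set,
# then tally those record years by decade. In the author's chart, the 1930's
# hold a disproportionate share of all-time records.
df["doy"] = df["date"].dt.dayofyear
records = df.loc[df.groupby(["station", "doy"])["tmax"].idxmax()]
by_decade = records["date"].dt.year.floordiv(10).mul(10).value_counts()
print(by_decade.sort_index())
```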

While there has been a small uptick in heat waves over the last 10-20 years, it is trivial compared to the heat of the 1930's:

click to enlarge

Looking at it a different way, there is no upward trend in 100 degree (Fahrenheit) days...

click to enlarge

Or even 110 degree days.  Again, the 1930's were hot, long before man-made CO2 could possibly have made them so:

click to enlarge

Why, one might ask, don't higher average global temperatures translate into more day-time high temperature records?  Well, we actually gave the answer back in Chapter 4A, but as a reminder, much of the warming we have seen has occurred at night, raising the nighttime lows without as much effect on daytime highs.  So we are seeing more record nighttime high Tmin's than we have in much of the last century, without seeing more record daytime Tmax temperatures:

Click to enlarge

We could go on all day with examples of claiming a trend from a single data point.  Watch for it yourself.  But for now let's turn to a third category:

click to enlarge

We can measure things much more carefully and accurately than we could in the past.  This is a good thing, except when we are trying to compare the past to the present.  In a previous chapter, we showed a count of sunspots, and databases of sunspot counts go all the way back into the early 18th century.  Were telescopes in 1716 able to see all the sunspots we can see in 2016?  Or might an upward trend in sunspot counts be biased by our better ability today to detect small ones?

A great example of this comes, again, from Al Gore's movie in which Gore claimed that tornadoes were increasing and man-made global warming was the cause.  He was working with this data:

click to enlarge

This certainly looks scary.  Tornadoes have increased by a factor of 5 or 6!  But if you look at the NOAA web site, right under this chart, there is a big warning that says to beware of this data.  With doppler radar and storm chasers and all kinds of other new measurement technologies, we can now detect smaller tornadoes that were not counted in the 1950's.  NOAA is careful to explain that this chart is biased by changes in measurement technology.  If one looks only at larger tornadoes, which we were unlikely to miss in the 1950's, there is no upward trend, and in fact there may be a slightly declining trend.

click to enlarge
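Here is a minimal sketch of the adjustment NOAA describes, assuming a hypothetical tornado data file; the file and column names (year, magnitude) are placeholders, not NOAA's actual format.

```python
# Sketch of the adjustment described above: count only stronger (F1/EF1+)
# tornadoes, which were unlikely to go undetected before Doppler radar and
# storm chasers. File and column names are hypothetical placeholders.
import pandas as pd

tor = pd.read_csv("noaa_tornado_log.csv")   # hypothetical file
strong = tor[tor["magnitude"] >= 1]         # drop the F0/EF0 tornadoes

annual = strong.groupby("year").size()
# On the real data, this filtered series shows no upward trend -- if
# anything, a slight decline -- unlike the raw all-tornado counts.
print(annual.head(10).mean(), "vs", annual.tail(10).mean())
```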

That, of course, does not stop nearly every person in the media from blaming global warming whenever there is an above-average tornado year.

Behind nearly every media story about "abnormal" weather or that the climate is somehow "broken" is an explicit assumption that we know what "normal" is.  Do we?

click to enlarge

We have been keeping systematic weather records for perhaps 150 years, and have really been observing the climate in detail for perhaps 30 years.  Many of our best tools are space-based and obviously only have 20-30 years of data at most.  Almost no one thinks we have been able to observe climate in depth through many of its natural cycles, so how do we know exactly what is normal?  Which year do we point to and say, "that was the normal year, that was the benchmark"?

One good example of this is glaciers.  Over the last 30 years, most (but not all) major glaciers around the world have retreated, leading to numerous stories blaming this retreat on man-made warming.  But one reason that glaciers have retreated over the last 50 years is that they were also retreating the 50 years before that and the 50 years before that:

click to enlarge

In fact, glaciers have been retreating around the world since the end of the Little Ice Age (I like to date it to 1812, with visions of Napoleon's army freezing in Russia, but that is of course arbitrary).

A while ago President Obama stood in front of an Alaskan glacier and blamed its retreat on man.  But at least one Alaskan glacier in the area has been mapped for centuries, and has been retreating for centuries:

click to enlarge

As you can see, from a distance perspective, most of the retreat actually occurred before 1900.  If one wants to blame the modern retreat of these glaciers on man, one is left with the uncomfortable argument that natural forces drove the retreat until about 1950, at which point the natural forces stopped just in time for man-made effects to take over.

Melting ice is often linked to sea level rise, though interestingly net ice melting contributes little to IPCC forecasts of sea level rises due to expected offsets with ice building in Antarctica -- most forecast sea level rise comes from the thermal expansion of water in the oceans.  And of course, the melting arctic sea ice that makes the news so often contributes nothing to sea level rise (which is why your water does not overflow your glass when the ice melts).

But the story for rising sea levels is the same as with glacier retreats -- the seas have been rising for much longer than man has been burning fossil fuels in earnest, going back to about the same 1812 start point:

Slide132

There is some debate about manual corrections added to more recent data (that should sound familiar to those reading this whole series) but recent sea level rise seems to be no more than 3 mm per year.  At most, recent warming has added perhaps 1 mm a year to the natural trend, or about 4 inches a century.
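The conversion behind that "about 4 inches a century" figure, as a quick check:

```python
# Quick check of the arithmetic above: ~1 mm/year of warming-added rise,
# accumulated over a century and converted to inches.
mm_per_year = 1.0
inches_per_century = mm_per_year * 100 / 25.4   # 25.4 mm per inch
print(f"{inches_per_century:.1f} inches per century")  # ~3.9, i.e. "about 4"
```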

Our last failure mode is again one I see much more widely than just in climate.  Whether the realm is economics or climate or human behavior, the media loves to claim that incredibly complex, multi-variable systems are in fact driven by a single variable, and -- who'd have thunk it -- that single variable happens to fit with their personal pet theory.

click to enlarge

With all the vast complexity of the climate, are we really to believe that every unusual weather event is caused by a 0.013 percentage point change (270 ppm to 400 ppm) in the concentration of one atmospheric gas?
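For the record, the arithmetic behind that figure:

```python
# The concentration change cited above, restated as percentage points.
delta_ppm = 400 - 270                                     # 130 ppm
print(f"{delta_ppm / 1e6 * 100:.3f} percentage points")   # 0.013
```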

Let me illustrate this in another way.  The NOAA not only publishes a temperature anomaly (which we have mostly been using in all of our charts) but they take a shot at coming up with an average temperature for the US.   The following chart uses their data for the monthly average of Tmax (the daily high at all locations), Tmin (the daily low for all locations) and Tavg (generally the average of Tmin and Tmax).

 

click to enlarge

Note that even the average temperatures vary across a range of 40F through the seasons and years.  If one includes the daily high and low, the temperatures vary over a range of nearly 70F.  And note that this is the average for all the US over a month.  If we were to look at the range of daily temperatures across the breadth of locations, we would see numbers that varied from well into the negative numbers to over 110F.

The point of all this is that temperatures vary naturally a lot.  Now look at the dotted black line.  That is the long-term trend in the average, trending slightly up (since we know that average temperatures have risen over the last century).  The slope of that line, around 1F per century for the US, is virtually indistinguishable from flat at this scale.  It is tiny, tiny, tiny compared to the natural variation of the averages.

The point of this is not that small increases in the average don't matter, but that it is irrational to blame every tail-of-the-distribution temperature event on man-made warming, since no matter how large we decide that number has been, it's trivial compared to the natural variation we see in temperatures.
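To see how a 1F-per-century trend disappears into a 40F seasonal swing, here is a small simulation sketch; all of the numbers (seasonal amplitude, noise level, 52F mean) are illustrative assumptions, not NOAA data.

```python
# Illustrative simulation of the scale comparison above: monthly US-style
# average temperatures with a ~40F seasonal swing plus a 1F/century trend.
# All numbers are assumptions for illustration, not NOAA data.
import numpy as np

months = np.arange(12 * 120)                          # 120 years of months
seasonal = 20 * np.sin(2 * np.pi * months / 12)       # +/-20F around the mean
trend = 1.0 * (months / 12) / 100                     # 1F per century
noise = np.random.default_rng(0).normal(0, 2, months.size)
temps = 52 + seasonal + trend + noise                 # assumed ~52F US mean

slope_per_month = np.polyfit(months, temps, 1)[0]
print(f"Fitted trend: {slope_per_month * 12 * 100:.2f} F/century "
      f"vs total range of ~{temps.max() - temps.min():.0f} F")
```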

OK, I know that was long, but this section was actually pretty aggressively edited even to get it this short.  For God's sake, we didn't even mention polar bears (the animals that have already survived through several ice-free inter-glacial periods but will supposedly die if we melt too much ice today).  But it's time to start driving towards a conclusion, which we will do in our next chapter.

Chapter 8, summarizing the lukewarmer middle ground, is here.

Denying the Climate Catastrophe: 6. Climate Models vs. Actual Temperatures

This is Chapter 6 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures (this article)
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

In some sense, this is perhaps the most important chapter, the climax of all the discussion to this point.  It is where we return to climate forecasts and attempt to conclude whether forecasts of catastrophic levels of man-made warming are reasonable.  So let's take a step back and see where we are.

Here is the framework we have been working with -- we have walked through in earlier chapters both the "theory" and "observation" sections, ending most recently in chapter 5 with a discussion of how much past warming can be attributed to man.

click to enlarge

It is important to remember why we embarked on the observation section.  We ended the theory section with a range of future temperature forecasts, from the modest to the catastrophic, based on differing sensitivities of temperature to CO2 which were in turn largely based on varying assumptions about positive feedback effects in the climate.

Slide17

We concluded at the time that there was not much further we could go with pure theory in differentiating between these forecasts -- we had to consult actual observations to validate or invalidate them.

We've already done one such analysis when we made two comparisons back in Chapter 4.  We showed that temperatures had risen over the last 30 years by only a third to a half the rate projected by James Hansen to Congress...

click to enlarge

And that even the IPCC admitted in its last report that temperatures were running below, or at best at the very low end of, past forecast bands:

click to enlarge

But in the grand scheme of things, even 30 years is a pretty short time frame to discuss climate changes.  Remember that in my own attribution attempt in Chapter 5, I posited an important 66-year cycle, and past temperature reconstructions imply other cycles that are centuries and millennia long.

But there is a way we can seek confirmation of climate forecasts using over 100 years of past temperature data.  Let's take our forecast chart we showed above and give ourselves a bit more space on the graph by expanding the timescale:

click to enlarge

Here is the key insight that is going to help us evaluate the forecasts:  each forecast represents an actual, physical relationship between changes in CO2 concentrations and changes in temperature.  If such a relationship is to hold in the future, it also has to be valid in the past.  So we can take each of these different forecasts for the relation between temperature and CO2 and run them backwards to pre-industrial times in the 19th century, when atmospheric CO2 concentrations were thought to be around 270 ppm.

click to enlarge

The temperature value of each line at the 270 ppm point represents the amount of warming we should already have seen from man-made CO2.

click to enlarge

What we see is that most of the mainstream IPCC and alarmist forecasts greatly over-predict past warming.  For example, this simple analysis shows that for the IPCC mean forecast to be correct, we should have seen 1.6C of manmade warming over the last century and a half.  But we know that we have not seen more than about 0.8C total in warming.  Even if all of that is attributed to man (which we showed in the last chapter is unlikely), warming has still been well short of what this forecast would predict.  If we define a range for historic man-made warming from 0.33C (the number I came up with in the last chapter) to 0.8C (basically all of past warming due to man), we get numbers that are consistent with the non-catastrophic, zero-feedback cases.

click to enlarge
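For those who want to check the back-casting logic, here is a minimal sketch assuming the standard logarithmic relationship between CO2 concentration and warming, with sensitivity expressed per doubling of CO2.  The 270 and 400 ppm endpoints come from the text; everything else follows from the formula.

```python
# Sketch of the back-casting argument: if warming scales with the log of CO2
# concentration (the standard formulation), each assumed sensitivity per
# doubling implies a specific amount of past warming from 270 ppm to ~400 ppm.
import math

def implied_past_warming(sensitivity_c, c_start=270.0, c_now=400.0):
    """Warming implied by a given sensitivity (deg C per CO2 doubling)."""
    return sensitivity_c * math.log(c_now / c_start) / math.log(2)

for s in [1.0, 2.0, 3.0, 4.0, 5.0]:
    print(f"sensitivity {s:.0f}C/doubling -> "
          f"{implied_past_warming(s):.1f}C implied past warming")

# A ~3C sensitivity implies ~1.7C of past man-made warming -- well above the
# ~0.8C total observed -- while a ~1C sensitivity implies ~0.6C, inside the
# 0.33-0.8C range of man-made estimates discussed above. (This ignores the
# transient-vs-equilibrium lag the text addresses next.)
```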

Of course we are leaving out the time dimension -- many of the hypothesized feedbacks take time to operate, so the initial transient response of the world's temperatures is not the same as the longer-term equilibrium response.  But transient response likely is at least 2/3 of the full equilibrium value, meaning that my hypothesized value for man-made past warming of 0.33C would still be less than the no feedback case on an equilibrium basis.

It is from this analysis that I first convinced myself that man-made warming was unlikely to be catastrophic.

I want to add two notes here.

First, we mentioned back in the attribution section that some scientists argue that man has caused not all of but more than the total observed historical warming.  This chapter's analysis explains why.  The fact that climate models tend to overpredict history is not a secret among climate modelers (though it is something they seldom discuss publicly).  To justify their high feedback and sensitivity assumptions in their forecasts, they need more warming in the past.   One way to do this is to argue that the world would have cooled without man-made CO2, so that man-made CO2 contributed 0.8C of warming in addition to whatever the cooling would have been.  It allows attribution of more than 100% of past warming to man.

There are various ways this is attempted, but the most popular centers around man-made sulfate aerosols.  These aerosols are byproducts of burning sulfur-heavy fossil fuels, particularly coal, and they tend to have a cooling effect on the atmosphere (this is one reason why, in the 1970's, the consensus climate prediction was that man was causing the world to cool, not warm).  Some scientists argue that these aerosols have tended to cool the Earth over the past decades, but as we clean up our fuels their effect will go away and we will get catch-up warming.

There are a couple of problems with this line of thought.  The first is that we understand even less about the magnitude of aerosol cooling than we do of CO2 warming.  Any value we choose is almost a blind guess (though as we shall see in a moment, this can be a boon to modelers on a mission).  The second issue is that these aerosols tend to be very short-lived and local.  They don't remain in the atmosphere long enough to thoroughly mix and have a global effect.  Given their localization and observed concentrations, it is almost impossible to imagine them having more than a tenth or two of a degree of effect on world temperatures.  And I will add that if we need to take into account cooling from sulfate aerosols, we also need to take into account the warming and ice melting effect of black carbon soot from dirty Asian coal combustion.  But we will return to that later in our section on Arctic ice.

My second, related note is that scientists will frequently claim that their computer models do correctly match historic temperatures when run backwards.  As a long-time modeler of complex systems, my advice is this:  don't believe it until you have inspected the model in detail.  At least 9 times out of 10, one will find that this sort of tight fit with history is the result of manual tweaking, usually from the effect of a few "plug" variables.

Here is one example -- there was a study a while back that tried to understand how a number of different climate models could all arrive at very different temperature sensitivities to CO2, but all still claim to model the same history accurately.  What was found was that there was a second variable -- past cooling from man-made aerosols, discussed above -- that also varied greatly between models.  And it turned out that the value chosen in the models for this second variable was exactly the value necessary to make that model's output match history -- that is why I said that our very lack of knowledge of the actual cooling from such aerosols could be a boon to modelers on a mission.  In essence, there is a strong suspicion that this variable's value was not based on any observational evidence, but was simply chosen as a plug figure to make the model match history.
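Here is a toy illustration of how such a plug variable works: for any assumed sensitivity, one can solve for exactly the aerosol cooling needed to reproduce observed warming, which is why matching history proves nothing about which sensitivity is right.  The numbers are purely illustrative.

```python
# Toy illustration of the "plug variable" problem described above: for any
# assumed sensitivity, solve for the aerosol cooling offset that makes a
# trivial model reproduce the observed ~0.8C. Purely illustrative numbers.
import math

observed_warming = 0.8                          # deg C since pre-industrial
log_fraction = math.log(400 / 270) / math.log(2)  # fraction of a CO2 doubling

for sensitivity in [1.5, 3.0, 4.5]:
    co2_warming = sensitivity * log_fraction
    aerosol_plug = co2_warming - observed_warming   # cooling needed to "fit"
    print(f"sensitivity {sensitivity}C needs {aerosol_plug:+.2f}C "
          f"of assumed aerosol cooling to match history")
```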

Having gone about as far as we can with the forecasts without diving into a whole new order of detail, let's move on to the final alarmist contention, that man-made CO2 is already changing the climate for the worse.  We will discuss this in Chapter 7.

Chapter 7 on whether we are already seeing man-made climate change is here.

Answer: Because the Climate Models Have All Been Fudged

The Question, as asked (surprisingly) by a global warming believer:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy? Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change -- too rosy a picture?, available at www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the "widely circulated analysis" referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

Much more here at Climate Skeptic

Table of Contents: A Layman's Guide to Anthropogenic (Man-Made) Global Warming

The purpose of this paper is to provide a layman's critique
of the Anthropogenic Global Warming (AGW) theory, and in particular to
challenge the fairly widespread notion that the science and projected
consequences of AGW currently justify massive spending and government
intervention into the world's economies.  This paper will show that
despite good evidence that global temperatures are rising and that CO2 can act
as a greenhouse gas and help to warm the Earth, we are a long way from
attributing all or much of current warming to man-made CO2.  We are even
further away from being able to accurately project man's impact on future
climate, and it is a very debatable question whether interventions today to
reduce CO2 emissions will substantially improve the world 50 or 100 years from
now.

Update:  If you would like to start with the 60-second version of this long paper, try here first.

Note you may click on any of the chapter links below to see the full chapter in HTML, or see below for links to free pdf versions available for download.

Table of Contents for A Layman's Guide to Anthropogenic Global Warming (AGW)

Chapter 1: Summary of Global Warming Skeptics' Position

Chapter 2: Is It OK to be a Global Warming Skeptic?

  • Charges of Bias
  • The Climate Trojan Horse
  • The Need to Exaggerate

Chapter 3: The Basics of Anthropogenic Global Warming (AGW) Theory

Chapter 4: The Historical Evidence for Man-Made Global Warming

  • The long view (650,000 years)
  • The medium view (1000 years)
  • The short view (100 years)
  • Sulfates, Aerosols and Dimming
  • The troposphere and Urban heat islands
  • Using Computer Models to Explain the Past

Chapter 5: The Climate Computer Models and Predicting Future Temperatures

  • The Dangers in Modeling Complex Systems
  • Do Model Outputs Constitute Scientific Proof?
  • Econometrics and CO2 Forecasts
  • Climate Sensitivity and the Role of Positive Feedbacks
  • Climate Models have to be Aggressively Tweaked to Match History

Chapter 6: Alternate Explanations and Models for Global Warming

  • Solar Irradiance
  • Cosmic Rays
  • Man's Land Use

Chapter 7: The Effects of Global Warming

  • Why only bad stuff?
  • Ice melting / Oceans Rising
  • Hurricanes & Tornadoes
  • Temperature Extremes
  • Extinction and Disease
  • Collapse of the Gulf Stream and Freezing of Europe
  • Non-warming Effects of CO2

Chapter 8: Kyoto and Climate Change Policy Alternatives

  • Kyoto
  • Cost of the Solutions vs. the Benefits: Why Warmer but Richer may be Better than Colder and Poorer

Chapter 9: Rebuttals by Man-Made Global Warming Theory Supporters

For those interested in getting a copy of my A Skeptical Layman's Guide to Anthropogenic Global Warming, I greatly encourage you to download it for free.  However, I do know that some folks have written about a print version.  I have a print version of my global warming book available now at LuLu.com.  It is $16.98 -- that is my cost -- and I warn you that LuLu's shipping options are not very cheap.  I will try to find a less expensive print option, but no one beats LuLu for getting a book set up quickly and easily for print-to-order.

Agw_cover_front_small

The open comment thread for this paper can be found here.

Chapter 5: Computer Models and Predicting Future Climate (Skeptics Guide to Global Warming)

The table of contents for the rest of this paper, A Layman's Guide to Anthropogenic Global Warming (AGW), is here.  A free pdf of this climate skepticism paper is here, and a print version is sold at cost here.

We have gotten well into this paper, and we still have not discussed what is perhaps the most problematic aspect of AGW research: the computer models.

If an economist came up with a computer model that he claimed could predict the market value of every house in the world in the year 2106 within $1,000, would you believe him?  No, you would say he was nuts -- there is way too much uncertainty and complexity.  Climate, of course, is not the same as housing prices.  It is, in fact, much, much more complex and more difficult to predict.  There is nothing wrong with trying to predict the complex and chaotic.  But there is a certain sense of hubris in believing that one has succeeded with the kind of 95% certainty figures used by the IPCC.

All climate forecasting models are created by a pretty insular and incestuous climate science community that seems to compete to see who can come up with the most dire forecast.  Certainly there are financial incentives to be as aggressive as possible in forecasting climate change, since funding dollars tend to get channeled to those who are the most dramatic.  The global warming community spends a lot of time on ad hominem attacks on skeptics, usually accusing them of being in the pay of oil and power companies, but they all know that their own funding in turn would dry up rapidly if they were to show any bit of skepticism in their own work.

The details of these models are beyond the scope of this paper.  However, it is important to understand how they work in broad outlines.

The modelers begin with certain assumptions about climate that they build into the model.  For example, the computers themselves don't somehow decide if CO2 is a more important forcing on the climate than solar activity -- the modelers, by the assumptions they feed into the model, decide these things.  The models return the result that CO2 is the most important driver of climate in the coming century because their programmers built them with that assumption, not because the model somehow sorts through different inputs and comes up with the key drivers on its own.

Because the models have been built to test man's possible impact on the climate via greenhouse gas emissions, they begin with an econometric forecast of world economic growth, and, based upon assumptions about fuel sources and efficiencies, they convert this economic growth into emissions forecasts.  The models generally contain subroutines that calculate, again based upon a variety of assumptions, how man-made CO2 plus other inputs will change the atmospheric CO2 concentration.  Then, via assumptions about climate sensitivity to CO2 and various feedback loops programmed in, the models will create forecasts of temperatures, precipitation, etc.  These models, depending on their complexity, will show regional variations in many of these variables.  Finally, the models are tuned so that they better match history, in theory making them more accurate for the future.

One should note that while the IPCC asked modelers to look at a series of different cases, the only substantial difference between these cases is the volume of CO2 and other greenhouse gasses produced.  In other words, the only sensitivity the IPCC seriously modeled was on levels of CO2.  No other contingencies -- e.g., potential variations in global temperature sensitivity to CO2, solar output, land use -- were considered.  This should give you an idea of how hard-wired the anthropogenic causation is in the IPCC process.

In this section, I will begin by discussing the models' basic assumptions about the climate.  I will then discuss the econometric forecasts they are founded on, the assumptions about CO2 sensitivity and feedback processes, and finally model tuning and their ability to match history.

The Dangers in Modeling Complex Systems

At any one time, thousands of people are being paid literally millions of dollars on Wall Street to try to model the behavior of various stock indices and commodity prices.  The total brain power and money power thrown over the last 50 years at trying to build an accurate predictive model for financial markets literally dwarfs, by a factor of 100 or more, the cumulative resources spent to date on long-term climate modeling.  Financial markets are incredibly complex, and they are driven by hundreds of variables.  Interest rates, corporate profits, loan default rates, mortgage refinance rates, real estate prices, GDP growth, exchange rates, etc. all tend to drive the behavior of financial markets.  And no one has cracked the code.  Sure, some people have built successful short-term trading models, but people have mostly lost their shirts when they have tried to make long-term bets based on computer financial models that beautifully matched history but failed to accurately predict the future.

How is it possible that a model that accurately represents the past fails to accurately predict the future?  Financial modelers, like climate modelers, look to history in building their models.  Again, like climate modelers, they rely both on theory (e.g. higher interest rates should generally mean lower stock prices) as well as observed correlations in the historic data set.  The problem they meet, the problem that every modeler meets but most especially the climate modeler, is that while it is easy to use various analysis tools to find correlations in the data, there is often nothing that will tell you if there is really a causal relationship, and which way the causality runs.  For example, one might observe that interest rates and exchange rates move together.  Are interest rate changes leading to exchange rate adjustments, or vice versa?  Or, in fact, are they both caused by a third variable?  Or is their observed correlation merely coincidence?

It was once observed that if an old AFL football team wins the Superbowl, a bear market will ensue on Wall Street in the next year, while an NFL team victory presaged a bull market.  As of 1997, this correlation held for 28 of the last 31 years, a far better prediction record than that of any Wall Street analyst.  But of course this correlation was spurious, and in the next 4 years it was wrong every time.  Had someone built a financial model on this indicator, it would have looked great when he ran it against history, but he would have lost his shirt using it.

Want a better prediction record?  For seventeen straight US presidential election cycles, from 1936 to 2000, the winner of the election was accurately predicted by...the Washington Redskins.  In particular, if the Redskins won their last home game before the election, the party that occupies the White House holds it in the election.  Otherwise, if the Redskins lose, the opposition party wins.  Seventeen in a row!  R-squared of one!  Success against odds of 131,072:1 of guessing all 17 right.  But of course, the input was spurious, and in 2004, soon after this relationship made the rounds of the Internet, the algorithm failed.

This is why we spent so much time in the previous chapter on evaluating historic correlations between CO2 and temperature.  Because the models are built on an assumption that not only is temperature strongly correlated with CO2, but that temperature is historically highly stable without this outside anthropogenic forcing.  If there are problems with this assumed causation, which we saw there are, then there in turn are major inherent problems with the models themselves.  As climate scientist Syun-Ichi Akasofu of the International Arctic Research Center at the University of Alaska Fairbanks wrote:

The computers are "taught" that the temperature rise during the last hundred years is due mostly to the greenhouse effect.  If the truth is that only about 10% of the present warming is caused by the greenhouse effect, the computer code must be rewritten.

Do Model Outputs Constitute Scientific Proof?

Remember what I said earlier:  the models produce the result that there will be a lot of anthropogenic global warming in the future because they are programmed to reach this result.  In the media, the models are used as a sort of scientific money laundering scheme.  In money laundering, cash from illegal origins (such as smuggling narcotics) is fed into a business that then repays the money back to the criminal as a salary or consulting fee or some other type of seemingly legitimate transaction.  The money he gets back is exactly the same money, but instead of just appearing out of nowhere, it now has a paper trail and appears more legitimate.  The money has been laundered.

In the same way, assumptions of dubious quality or certainty that presuppose AGW beyond the bounds of anything we have seen historically are plugged into the models, and, shazam, the models say that there will be a lot of anthropogenic global warming.  These dubious assumptions, which are pulled out of thin air, are laundered by being passed through these complex black boxes we call climate models, and suddenly the results are somehow scientific proof of AGW.  The quality hasn't changed, but the paper trail looks better, at least in the press.  The assumptions begin as guesses of dubious quality and come out laundered as "settled science."  Climate scientist Garth Paltridge wrote:

It needs to be understood that any reasonable simulation even of present climate requires computer models to be tuned.  They contain parameters (that is, pieces of input information) whose numerical values are selected primarily to give the right answer about today's climate rather than because of some actual measurement.  This was particularly so in the mid-eighties.  The problem with tuning is that it makes any prediction of conditions different from those of the present far less believable.  Even today the disease of "tuneable parameters" is still rampant in climate models, although fairly well hidden and not much spoken of in polite society.  The scientifically-inclined reader might try sometime asking a climate researcher just how many such parameters there are in his or her latest model.  The reader will find there are apparently lots of reasons why such a question is ridiculous, or if not ridiculous then irrelevant, and if not irrelevant then unimportant.  Certainly the enquirer will come away having been made to feel foolish.

Econometrics and CO2 Forecasts

The IPCC has never been able to choose a particular climate model it thinks is best.  Instead, it aggregates ten or twelve of them and averages their results, hoping that if there are errors in the climate models, they will average out somehow (forgetting that systematic errors don't average out, as we discussed earlier in the context of historic temperature reconstructions).  The one thing the IPCC does do to bring some order to all this is to establish baseline econometric and emissions scenarios for all the teams to feed into the front end of their models.  That way, for a given forecast case, they know variation in model output is due to differing climate-related assumptions rather than differing economic assumptions.

But a funny thing happens when one tries to make an economic growth forecast for 100-year periods, as the IPCC has:  very small changes in assumptions make enormous differences.  Here is a simple example.  An economy that grows by 3% per year will be 19x larger in 100 years.  However, if that economy were to grow instead by 4% rather than 3%, it will be 51x larger in 100 years.  So a change in the growth rate by one percentage point yields a final size nearly 2.7 times larger.  The same is true with forecasting CO2 growth -- a very small change in assumptions can lead to very large differences in absolute production.
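The compounding arithmetic is easy to verify:

```python
# Verifying the compounding arithmetic above.
growth_3 = 1.03 ** 100
growth_4 = 1.04 ** 100
print(f"3%/yr for 100 years: {growth_3:.0f}x")   # ~19x
print(f"4%/yr for 100 years: {growth_4:.0f}x")   # ~51x
print(f"ratio: {growth_4 / growth_3:.1f}x")      # nearly 2.7x
```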

After release of the 3rd IPCC report in 2001, researchers Ian Castles, formerly the head of Australia's national office of statistics, and David Henderson of the Westminster Business School and formerly the chief economist of the OECD, decided to scrutinize the IPCC's economic assumptions.  They found that the IPCC had made a fundamental mistake in crafting their econometrics, one that caused all of their economic growth estimates (and therefore estimates of CO2 production) to be grossly overestimated.  Based on the IPCC assumptions, South Africa ends up with a GDP per capita far in excess of the United States by the year 2100.  Incredibly, the IPCC numbers imply that Algeria, Argentina, Libya, Turkey, and North Korea will all pass the US in per capita income by the end of the century.

Beyond making it clear that there is an element of the absurd in the IPCC's economic forecasting approach, these errors tend to inflate CO2 forecasts in two ways.  First, CO2 forecasts are raised because, in the models, larger economies produce more CO2.  Second, though, the models assume different rates of CO2 production per unit of GDP for each country.  Most of the countries the IPCC shows growing so fast -- Algeria, South Africa, Libya, North Korea, etc. -- have lower than average CO2 efficiencies (i.e. higher than average CO2 production per unit of GDP), so excess growth assumptions in these countries have a disproportionate impact on projected CO2 emissions.  By the way, it would be interesting to see if the IPCC is using marginal rather than average rates.  For example, France has a very low average rate of CO2 per unit of GDP because of its nukes, but its marginal growth is met mostly with fossil fuels.

I can't say whether these same mistakes exist in the 2007 4th Assessment.  However, since the IPCC flatly rejected Castles and Henderson's critique, it is likely the same methodology was used in 2007 as in 2001.  For example, here are the CO2 emissions forecasts from the 4th assessment -- notice most all of them have a step-change increase in slope between history and the future.  Just look at the jump across the dotted line in lead case A1B, and several are even steeper.

So what does this mean?  Remember, small changes in growth rate make
big differences in end values.  For example, below are the IPCC fourth
assessment results for CO2 concentration.  If CO2 concentrations were to
increase at about the same rate as they are today, we would expect an end value
in 2100 of between 520 and 570 ppm, as opposed to the IPCC projections below,
whose mean is over 800 ppm in 2100.  The difference lies in large part
in the economic growth forecasts.
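
Here is the extrapolation behind that 520-570 ppm range, as a rough sketch; the starting point (~380 ppm around 2005) and the 1.5-2 ppm per year growth rate are my assumptions about "today's rate," not numbers taken from the IPCC report:

```python
# Straight-line extrapolation of roughly today's CO2 growth rate to 2100.
# Assumed: ~380 ppm around 2005, growing 1.5-2 ppm per year.
c0, year0 = 380.0, 2005
for rate in (1.5, 2.0):
    print(f"at {rate} ppm/yr: {c0 + rate * (2100 - year0):.0f} ppm in 2100")

# at 1.5 ppm/yr: 522 ppm in 2100
# at 2.0 ppm/yr: 570 ppm in 2100
```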

Since it is not at all clear that the IPCC has improved its forecasting
methodology over the past years, it is instructive as one final exercise to go
back to the 1995 emissions scenarios in the 2nd assessment.
Though the scale is hard to read, one thing is clear -- only 10 years later we
are well below most of the forecasts, including the lead forecast IS92a.  (This
over-forecasting has nothing to do with Kyoto; the treaty's impact has been
negligible, as will be discussed later.)  One can be assured that if the
forecasts are already overstated after 10 years, they will be grossly
overstated in 100.

Climate Sensitivity and the Role of Positive Feedbacks

As discussed earlier, climate sensitivity generally refers to the expected
reaction of global temperatures to an arbitrary change in atmospheric CO2
concentration.  It is usually stated as degrees Celsius of global warming
from a doubling of CO2 concentrations from pre-industrial levels (approx.
280 ppm to 560 ppm).  The IPCC and most AGW supporters put this number at
about 3.5 to 4.0 degrees C.

But wait -- earlier I said the number was probably more like 1.0C, and that
it was a diminishing return.  Why the difference?  Well, it has to do
with something called feedback effects.

Before I get into these, let's think about a climate sensitivity of 4
degrees C, just from a historical standpoint.  According to the IPCC, CO2
has increased by about 100ppm since 1880, which is about 36% of the way to a
doubling.  Over this same time period, global temperatures have increased
about 0.7C. Since not even the most aggressive AGW supporter will attribute all
of this rise to CO2 levels, let's be generous and credit CO2 with 0.5C.
So if we are 36% of the way to a doubling, and giving CO2 credit for 0.5
degrees, this implies that the sensitivity is probably not more than 1.4
degrees C.  And we only get a number this high if we assume a linear
relationship -- remember that CO2 and temperature follow a diminishing return
relationship (chart at right), so future CO2 has less impact on temperature than
past CO2; 1.4 is therefore the high end.  In fact, using the logarithmic
relationship we saw before, 0.5 degrees over 36% of the doubling would imply a
sensitivity around 1.0.  So, based on history, we might expect at worst
another 0.5C of warming over the next century.
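
That back-of-the-envelope calculation, written out (a sketch; the ~380 ppm current concentration is my assumption):

```python
import math

# Implied climate sensitivity from observed history.
# Assumptions: CO2 rose ~100 ppm (280 -> 380) since 1880, and we
# generously credit CO2 with 0.5C of the observed warming.
credited_warming = 0.5                               # degrees C
frac_linear = 100.0 / 280.0                          # ~36% of a doubling
frac_log = math.log(380.0 / 280.0) / math.log(2.0)   # ~44% in log terms

print(f"linear:      {credited_warming / frac_linear:.1f} C per doubling")
print(f"logarithmic: {credited_warming / frac_log:.1f} C per doubling")

# linear:      1.4 C per doubling
# logarithmic: 1.1 C per doubling
```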

Most AGW supporters would argue that the observed sensitivity over the last
30 years has been suppressed by dimming/sulfate aerosols.  However, to get
a sensitivity of 4.0, one would have to assume that without dimming, actual
warming would have been about 2.0C.  This means that for the number 4.0 to
be right,

1. Absolutely nothing else other than CO2 has been causing warming in the
last 50 years AND

2. Sulfate aerosols must have suppressed 75% of the warming, or about
1.5C -- numbers far larger than I have seen anyone suggest (the arithmetic is
sketched below).  Remember that the IPCC classifies our understanding of this
cooling effect, if any, as "low."

But in fact, even the IPCC itself admits that its models assume higher
sensitivity than the historically observed sensitivity.  According to the
fourth IPCC report, a number of studies have tried to get at the sensitivity
historically (going back to periods where SO2 does not cloud the picture).
Basically, their methodology is not much different in concept from
the back-of-the-envelope calculations I made above.

These are shown in a) below, which shows a probability distribution of what
sensitivity is (IPCC4 p. 798). Note many of the highest probability values of
these studies are between 1 and 2.  Also note that since CO2 content is,
as the IPCC has argued, higher than it has been in recorded history, any
sensitivities calculated on historical data should be high vs. the sensitivity
going forward.  Now, note that graph c) shows how a number of the climate
models calculate sensitivity.  You can see that their most likely values
are consistently higher than any of the historical studies from actual
data.  This means that the climate models are essentially throwing out
historical experience and assuming that sensitivity is 1.5 to 2 times higher
going forward, despite the fact that a diminishing return relationship says it
should be lower.

 

[Figure: Sensitivity, based on history]

[Figure: Sensitivity built into the models.  (Sorry, I still have no idea what
"constrained by climatology" means, but the text of the report makes it clear
that these sensitivities popped out of climate models.)]

So how do these models get to such high sensitivities?  The answer, as
I have mentioned, is positive feedback.

Let me take a minute to discuss positive feedbacks.  This is something
I know a fair amount about, since my specialization at school in mechanical
engineering was in control theory and feedback processes.  Negative
feedback means that when you disturb an object or system in some way, forces
tend to counteract this disturbance.  Positive feedback means that the
forces at work tend to reinforce or magnify a disturbance.

You can think of negative feedback as a ball sitting in the bottom of a
bowl.  Flick the ball in any direction, and the sides of the bowl,
gravity, and friction will tend to bring the ball back to rest in the center of
the bowl.  Positive feedback is a ball balanced on the pointy tip of a
mountain.  Flick the ball, and it will start rolling faster and faster
down the mountain, and end up a long way away from where it started with only a
small initial flick.

Almost every process you can think of in nature operates by negative
feedback.  Roll a ball, and eventually friction and wind resistance bring
it to a stop.  There is a good reason for this.  Positive feedback
breeds instability, and processes that operate by positive feedback are
dangerous, and usually end up in extreme states.  These processes tend to
"run away."   I can illustrate this with an example:
Nuclear fission is a positive feedback process.  A high energy neutron
causes the fission reaction, which produces multiple high energy neutrons that
can cause more fission.  It is a runaway process, and it is dangerous and
unstable.  We should be happy there are not more positive feedback
processes on our planet.
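
A toy iteration makes the distinction concrete; each step feeds back a fraction f of the current deviation (a sketch, obviously not a climate model):

```python
# Iterate a disturbance: negative f damps it out, positive f runs away.
def respond(f, steps=8, disturbance=1.0):
    x, path = disturbance, [disturbance]
    for _ in range(steps):
        x += f * x
        path.append(round(x, 2))
    return path

print(respond(-0.5))  # [1.0, 0.5, 0.25, ...] settles back toward zero
print(respond(+0.5))  # [1.0, 1.5, 2.25, ...] grows without bound
```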

Since negative feedback processes are much more common, and since positive
feedback processes almost never yield a stable system, scientists assume that
processes they encounter are negative feedback until proven otherwise.  Except
in climate, it seems, where everyone assumes positive feedback is common.

In global warming models, water vapor plays a key role as both a positive
and a negative feedback loop to climate change.  Water vapor is a far more
powerful greenhouse gas than CO2, so its potential strength as a feedback
mechanism is high.  Water comes into play because CO2-driven warming will
put more water vapor into the atmosphere, as greater heat vaporizes more
water.  If this extra vapor shows up as more humid clear air, then it will
in turn cause more warming, as the extra water vapor absorbs more energy and
accelerates warming.  However, if this extra water vapor shows up as
clouds, the cloud cover will tend to reflect energy back into space and retard
temperature growth. 

Which will happen?  Well, nobody knows.  The IPCC4 report admits
to not even knowing the sign of water's impact (i.e., whether water is a
net positive or negative feedback) in these processes.  And this is just
one example of the many, many feedback loops that scientists are able to posit
but not prove.  And climate scientists keep coming up with numerous other
positive feedback loops.  As one author put it:

Regardless, climate models are made interesting by
the inclusion of "positive feedbacks" (multiplier effects) so that a
small temperature increment expected from increasing atmospheric carbon dioxide
invokes large increases in water vapor, which seem to produce exponential
rather than logarithmic temperature response in the models. It appears to have
become something of a game to see who can add in the most creative feedback
mechanisms to produce the scariest warming scenarios from their models but
there remains no evidence the planet includes any such effects or behaves in a
similar manner.

Note that the majority of the warming in these models appears to be from
these feedback processes.  Though it is hard to pick it out exactly,
section 8.6 of the fourth IPCC report seems to imply these positive feedback
processes increase temperature 2 degrees for every one degree from CO2.
This explains how these models get from a sensitivity of CO2 alone of about 1.0
to 1.5 degrees to a sensitivity of 3.5 or more degrees -- it's all in the
positive feedback.
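
The standard feedback-gain formula from control theory shows what fraction f of each degree of direct warming must be returned as additional warming to produce numbers like these.  This is a sketch of the relationship only, not the models' actual internals; the 1.2 degree no-feedback sensitivity is a rough assumed value:

```python
# Total response = direct response / (1 - f), for feedback fraction f < 1.
direct = 1.2                    # assumed no-feedback CO2 sensitivity, deg C
for f in (0.0, 0.5, 2.0 / 3.0):
    print(f"f = {f:.2f} -> total sensitivity ~ {direct / (1 - f):.1f} C")

# f = 0.00 -> total sensitivity ~ 1.2 C
# f = 0.50 -> total sensitivity ~ 2.4 C
# f = 0.67 -> total sensitivity ~ 3.6 C
```

An f of two-thirds corresponds exactly to the "2 degrees of feedback for every 1 degree from CO2" multiplier described above.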

So, is it reasonable to assume these feedback loops?
First, none have really been proven empirically, which does not of course
necessarily make them wrong.  In our daily lives, we generally deal
with negative feedback: inertia, wind resistance, and friction are all negative
feedback processes.  If one knew nothing else, and had to guess whether a
natural process was governed by negative or positive feedback, Occam's razor
would say bet on negative.  Also, we will observe in the next
section that when the models with these feedbacks were first run against
history, they produced far more warming than we have actually seen (remember
the analysis we started this section with -- post-industrial warming implies
a sensitivity of 1 to 1.5 degrees, not four).

Perhaps most damning is to ask: if this really is such a heavily positive
feedback process, what stops it?  Remember the chart from earlier (shown
again at the right), showing the long-term relationship of CO2 and
warming.  Also remember that the data shows, and even AGW supporters
acknowledge, that temperature rises led CO2 rises by about 800 years.
Their explanation is that "something" caused the temperature to start
upwards.  This higher temperature, as it warmed the oceans, caused CO2 to
outgas from the oceans to the atmosphere.  Then, this new CO2 caused the
warming to increase further.  In other words, outgassing CO2 from the
oceans was a positive feedback to the initial temperature perturbation.
In turn, the IPCC argues there are various other positive feedbacks that
multiply the effect of the additional warming from the CO2.  This is
positive feedback layered on positive feedback.  It would be like barely
touching the accelerator and having the car speed out of control.

So the question is: if global temperature is built on top of so many positive
feedbacks and multipliers, what stops temperature from rising once it
starts?  Why didn't the Earth become Venus in any of these events?
Because, whatever else it means, the chart above is strong evidence that
temperature does not run away.

I have seen two suggestions, neither of which is compelling.  The first
is that the oceans ran out of CO2 at some point.  But that makes no
sense.  We know that the oceans hold far more CO2 than could ever be
liberated entirely to the atmosphere today, and besides, the record above
seems to show that CO2 in the atmosphere never really got above where it was
in, say, 1880.

The second suggestion is based on the diminishing return relationship of CO2
to temperature.  At some point, as I have emphasized many times, CO2's
ability to absorb infrared energy is saturated, and incremental quantities have
little effect.  But note in the IPCC chart above, CO2 on the long time
scale never gets as high as it is today.  If you argue that CO2's
absorption ability was saturated in earlier events, then you have to argue that
it is saturated today, and that incremental CO2 will have no further warming
effect, which AGW supporters are certainly NOT arguing.  Any theory based
on some unknown negative feedback has to deal with the same problem:  if
one argues that this negative feedback took over at the temperature peaks (in
black), doesn't one also have to argue that it should be taking over now, at our
current temperature peak?  The pro-AGW argument seems to depend on an assumption
of negative feedbacks in the past that for some reason can't be expected to
operate now or in the future.  Why?

In fact, we really have not seen any evidence historically of these positive
feedback multipliers.  As I demonstrated at the beginning of this chapter,
even assigning as much as 0.5C of the 20th century temperature
increase to CO2 only implies a sensitivity just over 1.0, which is about what
we would expect from CO2 alone with no feedbacks.  This is at the heart of
the problems with AGW theory -- there is no evidence that climate sensitivity
to CO2 is anywhere near large enough to justify the scary scenarios spun by AGW
supporters, nor to justify the draconian abatement policies they advocate.

My tendency is to conclude that in fact, positive feedbacks do not dominate
climate, just as they do not dominate any long-term stable system.  Yes,
certain effects can reasonably be said to amplify warming (ice albedo is
probably one of them) but there must exist negative feedbacks that tend to damp
out temperature movements.  Climate models will never be credible, and
will always overshoot, until they start building in these offsetting forcings.

Climate Models had to be aggressively tweaked to match history

A funny thing happened when they first started running climate
models with high CO2 sensitivities in them against history:  The models
grossly over-predicted historic warming.  Again, remember our previous
analysis -- historical warming implies a climate sensitivity between 1 and
1.5.  It is hard to make a model based on a sensitivity of 3.5 or higher fit
that history.  So it is no surprise that one can see in the IPCC chart
below that the main model cases are already diverging from reality in the first
five years of the forecast period, just as the Super Bowl predictors of the
stock market failed four years in a row.  If the models are already high
by 0.05 degrees after five years, how much will they overshoot reality over 100
years?
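
As a purely illustrative extrapolation (model error need not grow linearly, so take this as a sketch of the scale of the problem, not a prediction):

```python
# Naive linear extrapolation of the early divergence.
drift_per_year = 0.05 / 5      # degrees C of overshoot per year so far
print(f"~{drift_per_year * 100:.1f} C of overshoot after 100 years")

# ~1.0 C of overshoot after 100 years
```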

In a large sense, this is why the global climate community has latched onto
the global dimming / aerosols hypothesis so quickly and so strongly.  The
possible presence of a man-made cooling element in the last half of the 20th
century, even one that the IPCC fourth report ranks our understanding of as
"low," gives modelers a valuable way to explain why their models are
overstating history.  The aerosols hypothesis is valuable for two reasons:

- Since SO2 is prevalent today but is expected to decline in the future, it
allows modelers to forecast much higher warming and climate sensitivity in the
future than has been observed in the past.

- Our very lack of understanding of the amount, if any, of such aerosol cooling
is actually an advantage, because it allows modelers to set the value of such
cooling at whatever value they need to make their models work.

I know the last statement seems unfair, but in reading the IPCC and other
reports, it appears to me that aerosol cooling values are set in exactly this
way -- as what we used to call a "plug" figure between actual temperatures and
model output.  While this may seem chancy and fairly circular, it makes
sense to scientists because they trust their models.  They really believe the
outputs are correct, so any deviation is attributed not to their assumptions
about CO2 or climate sensitivity, but to other man-made effects.

But sulfates are not the only plug being used to try to make high
sensitivity models match a lower sensitivity past.  You can see this in
the diagram below from the fourth IPCC report.  This is their summary of
how their refined and tweaked models match history. 

 

The blue band is without anthropogenic effects.  The pink band is with
anthropogenic effects, including warming CO2 and cooling aerosols.  The black
line is measured temperatures (smoothed, of course).

You can see that the pink band, which represents the models with anthropogenic
effects, really is a lovely fit -- which should make us all
nervous.  Climate is way too chaotic a beast to be modeled this
tightly.  In fact, given the uncertainties and error bars on our
historical temperature measurements, climate scientists are probably trumpeting
a perfect fit here to the wrong data.  I am reminded again of a beautiful
model for presidential election results with a perfect multi-decadal fit based
on the outcome of NFL football games.

But ignoring this suspiciously nice fit, take a look at the blue band.
This is what the IPCC models think the climate would be doing without
anthropogenic effects (both warming CO2 and cooling sulfates, for
example).  With the peaked shape (which should actually be even more
pronounced if they had followed the mid-century temperature peak to its max),
they are saying there is some natural effect that warms things until 1950
and then turns off and starts cooling, coincidentally in the exact same year
that anthropogenic effects start taking off.
I challenge you to read
the IPCC assessment, all thousand or so pages, and find anywhere in that paper
where someone dares to say exactly what this natural effect was, or why it
turned off exactly in 1950.

The reality is that this natural effect is another plug.  There is no
actual empirical data to back up the blue line (in fact, as we will see in the
alternate theories section, there is good empirical data that this blue band is
wrong).  Basically, climate scientists ran their models against history
and found that even with their SO2 plug, they still didn't match well -- they
were underestimating early-century warming and overestimating late-century
warming.  Remember that the scientists believe their models and their
assumptions about a strong CO2 effect, so they have modeled the non-anthropogenic
effect by running their models, tuning them to historical actuals, and then
backing out the anthropogenic forcings to see what is left.  What is left,
the plug figure, is the blue line.
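
Schematically, the back-out works like this (hypothetical numbers, not actual IPCC data or code):

```python
# "Plug" logic: trust the anthropogenic model output, then label whatever
# residual is needed to match observations as the "natural" effect.
def natural_plug(observed, modeled_anthro):
    return [round(obs - mod, 2) for obs, mod in zip(observed, modeled_anthro)]

observed_anomaly = [0.0, 0.2, 0.3, 0.3, 0.6]   # hypothetical decade means
modeled_anthro   = [0.0, 0.1, 0.1, 0.4, 0.7]   # hypothetical model output
print(natural_plug(observed_anomaly, modeled_anthro))

# [0.0, 0.1, 0.2, -0.1, -0.1] -- warms early, cools late, which is
# exactly the shape the blue band takes
```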

Already, the models used by the IPCC tend to overestimate past warming even
if all past warming is attributable to anthropogenic causes.  If
anthropogenic effects explain only a fraction of past warming, then the current
models are vastly overstated -- good for stampeding the populace into otherwise
unpopular political control over the economy, but of diminished scientific
value.

The note I will leave you with is this:  Do not gain false confidence
in the global climate models when they show you charts in which their outputs,
run backwards, closely match history.  This is an entirely circular argument,
because the models have been built, indeed forced, to match history, with
substantial plug figures added, like SO2 effects and non-anthropogenic climate
trends -- effects for which there are no empirical numbers.

The table of contents for the rest of this paper, A Layman's Guide to Anthropogenic Global Warming (AGW), is here.  A free pdf of this climate skepticism paper is here, and a print version is sold at cost here.

The open comment thread for this paper can be found here.