Archive for April 2016

Denying the Climate Catastrophe: 4a. Actual Temperature Data

This is the fourth chapter of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4. A) Actual Temperature Data (this article); B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change?
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

In our last chapter, we ended a discussion of theoretical future warming rates by saying that no amount of computer modelling was going to help us choose between the various temperature sensitivities and thus warming rates.  Only observational data would help us determine how the Earth actually responds to increasing CO2 in the atmosphere.  So in this chapter we turn to the next part of our framework: our observations of Earth's temperatures, which are among the data we might use to support or falsify the theory of catastrophic man-made global warming.


The IPCC position is that the world (since the late 19th century) has warmed about 0.8C.  This is a point on which many skeptics will disagree, though perhaps not as substantially as one might expect from the media.   Most skeptics, myself included, would agree that the world has certainly warmed over the last 100-150 years.  The disagreement tends to be in the exact amount of warming, with many skeptics contending that the amount of warming has been overstated due to problems with temperature measurement and aggregation methodology.

For now, we will leave those questions aside until part B of this section, where we will discuss some of these measurement issues.  One reason to do so is to focus, at least at first, on the basic point of agreement that the Earth has indeed warmed somewhat.  But another reason to put these differences over magnitude aside is that we will find, a few chapters hence, that they essentially don't matter.  Even the IPCC's 0.8C estimate of past warming does not support its own estimates of temperature sensitivity to CO2.

Surface Temperature Record

The most obvious way to measure temperatures on the Earth is with thermometers near the ground.  We have been measuring the temperature at a few select locations for hundreds of years, but it is really only in the last century that we have had fairly good coverage of the land surface.  And even then, our coverage of places like the Antarctic, central Africa, parts of South America, and all of the oceans (which cover about 71% of the Earth's surface) remains spotty to this day.  So coming up with some sort of average temperature for the Earth is not a straight averaging exercise -- data must be infilled and estimated, making the process complicated and subject to a variety of errors.

But the problem is more difficult than just data gaps.  How does one actually average a temperature from Denver with a temperature from San Diego?  While a few folks attempt such a straight average, scientists have developed a theory that one can more easily average what are known as temperature anomalies than one can average the temperatures themselves.  What is an anomaly?  Essentially, for a given thermometer, researchers will establish an average for that thermometer for a particular day of the year.  The exact time period or even the accuracy of this average is not that important, as long as the same time period is used consistently.  Then, the anomaly for any given measurement is the deviation of the measured temperature from its average.  So if the average historical temperature for this day of the year is 25C and the actual measured temperature for the day is 26C, the anomaly for today at this temperature station is +1.0C.
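To make the anomaly method concrete, here is a minimal Python sketch of the procedure just described.  The station names, baseline period, and data layout are hypothetical illustrations of my own, not any agency's actual code:

```python
# Hypothetical sketch of the anomaly method: for each station and
# calendar day, subtract that station/day's long-term average (the
# "baseline") from each measurement.
from collections import defaultdict

def anomalies(readings, base_years=range(1961, 1991)):
    """readings: list of (station, year, day_of_year, temp_C) tuples."""
    # 1. Build per-station, per-day baselines from the base period.
    sums = defaultdict(lambda: [0.0, 0])
    for station, year, day, temp in readings:
        if year in base_years:
            s = sums[(station, day)]
            s[0] += temp
            s[1] += 1
    baseline = {k: s / n for k, (s, n) in sums.items()}
    # 2. Anomaly = measurement minus its station/day baseline.
    return [(station, year, day, temp - baseline[(station, day)])
            for station, year, day, temp in readings
            if (station, day) in baseline]

# A station averaging 25C on a given day that reads 26C has a +1.0C anomaly.
```

Note that, just as the text says, the choice of base period only shifts the zero point; it does not change how the anomalies move over time.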

Scientists then develop programs that spatially average these temperature anomalies for the whole Earth, while also adjusting for a myriad of factors, from changes in the time of day measurements are taken, to changes in station technology over time, to actual changes in a station's physical location.  This is a complicated enough task, with enough explicit choices that must be made about techniques and adjustments, that there are many different temperature metrics floating around out there, many of which get different results from essentially the same data.  The UK Hadley Centre's HadCRUT4 global temperature metric is generally considered the gold standard, and is the one used preferentially by the IPCC.  Its metric is shown below, with the monthly temperature anomaly in dark blue and the five-year moving average (centered on its midpoint):


Again, the zero point of the chart is arbitrary and merely depends on the period of time chosen as the base or average.  Looking at the moving average, one can see the temperature anomaly bounces around -0.3C in the late 19th century and has been around +0.5C over the last several years, which is how we get to about 0.8C warming.
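The spatial averaging step described above can be sketched in the same spirit.  A common approach (assumed here for illustration, not a description of HadCRUT4's actual code) is to average anomalies within latitude/longitude grid cells first, then weight each cell by the cosine of its latitude, since cells cover less area near the poles:

```python
import math
from collections import defaultdict

def global_mean_anomaly(station_anoms, cell_deg=5.0):
    """station_anoms: list of (lat, lon, anomaly_C) for one month.
    Average within grid cells first, then cos(lat)-weight the cells,
    so dense station clusters don't dominate the global mean."""
    cells = defaultdict(list)
    for lat, lon, anom in station_anoms:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append((lat, anom))
    total, weight = 0.0, 0.0
    for members in cells.values():
        mean_lat = sum(lat for lat, _ in members) / len(members)
        cell_mean = sum(a for _, a in members) / len(members)
        w = math.cos(math.radians(mean_lat))  # cell area shrinks toward poles
        total += w * cell_mean
        weight += w
    return total / weight
```

Without the cell averaging, a thousand stations in the eastern US would swamp a handful in the Arctic; without the cosine weighting, polar cells would count the same as tropical ones despite covering far less of the globe.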

Satellite Temperature Record

There are other ways to take temperature measurements, however.  Another approach is to use satellites to measure surface temperatures (or at least near-surface temperatures).  Satellites measure temperature by measuring the thermal microwave emissions of oxygen molecules in the lower troposphere (perhaps 0-3 miles above the Earth).  Satellites have the advantage of being able to look at the entire Earth without gaps, and are not subject to the siting biases of surface temperature stations (which will be discussed in part B of this chapter).

The satellite record does, however, rely on a shifting array of satellites, all of which have changing orbits for which adjustments must be made.  Of necessity, the satellite record cannot reach as far back into the past.  And the satellites are not actually measuring the temperature at the Earth's surface, but rather a temperature a mile or two up.  Whether that matters is subject to debate, but the clincher for me is that the IPCC and most climate models have always shown that the earliest and strongest anthropogenic warming should show up in exactly this spot -- the lower troposphere -- which makes observation of this zone a particularly good way to look for a global warming signal.

Roy Spencer and John Christy maintain what is probably the leading satellite temperature metric, called "UAH" as shorthand for the University of Alabama in Huntsville's space science center.  The UAH record looks like this:


Note that the absolute magnitude of the anomaly isn't comparable between the surface and satellite records, as they use different base periods, but changes and growth rates in the anomalies should be comparable between the two indices.
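Because the zero point is arbitrary, putting two anomaly series on a comparable footing just means removing the constant offset between their base periods.  A minimal sketch, with a made-up common reference period:

```python
def rebaseline(anoms, years, common=(1981, 2010)):
    """Shift an anomaly series so it averages zero over the common period.
    Trends and year-to-year changes are unaffected; only the arbitrary
    zero point moves, making two differently-based series comparable."""
    ref = [a for a, y in zip(anoms, years) if common[0] <= y <= common[1]]
    offset = sum(ref) / len(ref)
    return [a - offset for a in anoms]
```

Apply this to both the surface and satellite series and the remaining differences are real disagreements about warming rates, not artifacts of base-period choice.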

The first thing to note is that, though they differ, both the satellite and surface temperature records show warming since 1980.  For all that some skeptics may want to criticize the authors of the surface temperature databases, and there are indeed some grounds for criticism, these issues should not distract us from the basic fact that in every temperature record we have (including from other technologies like radiosonde balloons), we see recent warming.

In terms of magnitude, the two indices do not show the same amount of warming -- since 1980 the satellite temperature record shows about 30% less warming than the surface temperature record for the same period.  So which is right?  We will discuss this in more depth in part B, but the question is not made any easier by the fact that the surface records are compiled by prominent alarmist scientists while the satellite records are maintained by prominent skeptic scientists, which causes each side to accuse the other of having its thumb on the scale, so to speak.  I personally like the satellite record because of its larger coverage area and the fact that its manual adjustments (which are required of both technologies) are for a handful of instruments rather than thousands, and are thus easier to manage and get right.  But I am also increasingly of the opinion that the differences are minor, and that neither is consistent with catastrophic forecasts.

So instead of getting ourselves involved in the dueling temperature data set food fight (we will dip our toe into this in part B), let's instead apply both of these data sets to several propositions we see frequently in the media.  We will quickly see that the answers we reach do not depend on the data set chosen.

Test #1:  Is Global Warming Accelerating?

One meme you will hear all the time is that "global warming is accelerating."  As of this writing, the phrase returns 550,000 results on Google.  For example:


So.  Is that true?  They can't print it if it's not true, right (lol)?  Let's look first at the satellite record through the end of 2015, when this presentation was put together (there is an El Nino-driven spike in the two months after this chart was made, which does not affect the conclusions that follow in the least, but I will update the chart to include it as soon as possible).


If you want a name for this chart, I could call it the "bowl of cherries," because it has become a cherry-picker's delight.  Everyone in the debate can find a starting point and an end point in this jagged data to support any trend they want.  So how do we find an objective basis for defining the end points of this analysis?  Well, my background is more in economic analysis.  Economists have the same problem in looking at trends for things like employment or productivity, because the business cycle adds volatility to these numbers above and beyond any long-term trend.  One way they manage this is to measure variables from peak to peak of the economic cycle.

I have done something similar.  The equivalent cyclical peaks in the temperature world are probably the very strong El Nino events (the warm peaks of the El Nino-Southern Oscillation cycle).  There was one in 1998 and there is one occurring right now in late 2015/early 2016.  So I defined my period as the 18 years from peak to peak.  By this timing, the satellite record shows temperatures to be virtually dead flat for those 18 years.  This is "the pause" that you may have heard of in climate debates.  Such an extended pause is not predicted by global warming theory, particularly when the theory (as in the IPCC main case) assumes high temperature sensitivity to CO2 and low natural variation in temperatures.
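The peak-to-peak comparison boils down to fitting an ordinary least-squares trend to each 18-year window.  A sketch, with synthetic data standing in for the real series:

```python
def trend_per_decade(years, anoms):
    """Ordinary least-squares slope of anomaly vs. year, in C/decade."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anoms) / n
    slope = (sum((y - my) * (a - ma) for y, a in zip(years, anoms))
             / sum((y - my) ** 2 for y in years))
    return slope * 10.0

# Comparing a peak-to-peak window (1998-2015) with the prior 18-year
# window: a flat series gives ~0 C/decade, matching "the pause" above,
# while a steadily rising one gives a positive rate.
```

Fitting a single slope across the whole window is exactly what avoids the cherry-picking problem: no single hot or cold month at either end dominates the result the way a two-point comparison would.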

So if global warming were indeed accelerating, we would expect the warming rate over the last 18 years to be higher than the rate over the previous 18 years.  But just the opposite is true:


While "the pause" does not in and of itself disprove the theory of catastrophic manmade global warming, it does easily falsify the myriad statements you see that global warming is accelerating.  At least for the last 20 years, it has been decelerating.

By the way, this is not somehow an artifact of just the satellite record.  This is what the surface record looks like for the same periods:


Though it shows (as we discussed earlier) higher overall warming rates, the surface temperature record also shows a deceleration rather than acceleration over the last 20 years.


Test #2:  Are Temperatures Rising Faster than Expected?

OK, let's consider another common meme, that the "earth is warming faster than predicted."


Again, there are over 500,000 Google matches for this meme.  So how do we test it?  Well, certainly not against the latest IPCC forecasts -- they are only a few years old.  The first real high-sensitivity or catastrophic forecast we have is from James Hansen, often called the father of global warming.


In June of 1988, Hansen made a seminal presentation to Congress on global warming, including this very chart (sorry for the sucky 1980s graphics).  In his testimony, he presented his models for the Earth's temperature, which showed a good fit with history**.  Using his model, he then created three forecasts:  Scenario A, with high rates of CO2 emissions; Scenario B, with more modest emissions; and Scenario C, with drastic worldwide emissions cuts (plus volcanoes, which tend to belch dust and chemicals that have a cooling effect).  Surprisingly, we can't even get agreement today about which forecast for CO2 production was closer to the mark (throwing in the volcanoes makes things hard to parse), but it is pretty clear that in the nearly 30 years since this forecast, the Earth's CO2 output has been somewhere between A and B.


As it turns out, it doesn't matter much whether we actually followed the CO2 emissions of A or B, because the warming forecasts for Scenarios A and B turn out to be remarkably similar.  In the past, I used to just overlay temperature actuals onto Hansen's chart, but it is a little hard to get the zero point right, and it led to too many food fights.  So let's pull the Scenario A and B forecasts off the chart and compare them a different way.


The left side of the chart shows Hansen's Scenarios A and B, scanned directly from his chart.  Scenario A implies a warming rate from 1986 to 2016 of 3.1C per century.  Scenario B is almost as high, at 2.8C per century.  But as you can see on the right, the actual warming rates we have seen over the same period are well below these forecasts.  The surface temperature record shows only about half the warming, and the satellite record only about a third of the warming, that Hansen predicted.  There is no justification for saying that recent warming rates have been higher than expected or forecast -- in fact, the exact opposite has been true.
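The per-century figures are just a simple scaling of the warming over the period.  For illustration (the 0.93C figure below is back-calculated from the 3.1C/century rate in the text, not read directly from Hansen's chart):

```python
def rate_per_century(delta_c, years):
    """Scale total warming observed over `years` to a per-century rate."""
    return delta_c / years * 100.0

# Scenario A: about 0.93C of modeled warming over 1986-2016 (30 years)
# scales to roughly 3.1C per century.
```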

We see the same thing when looking at past IPCC forecasts.  In each of its periodic assessments, the IPCC has included a forecast range for future temperatures.  In this case, though, we don't have to create a comparison with actuals, because the most recent (fifth) IPCC assessment did it for us:


The colored bands are their past forecasts.  The grey areas are the error bands on the forecast.  The black dots are global temperatures (which actually are shown with error bars, which is good practice but seldom done except perhaps when they are trying to stretch to get into the forecast range).  As you can see, temperatures have been so far below forecasts that they are dropping out of the low end of even the most generous forecast bands.  If temperatures were rising faster than expected, the black dots would be above the orange and yellow bands.  We therefore have to come to the conclusion that, at least for the last 20-30 years, temperatures have not been rising faster than expected, they have been rising slower than expected.

Day vs. Night

There is one other phenomenon we can see in the temperature data, one we will come back to in later chapters:  much of the warming over the last century has occurred at night rather than in the daytime.  There are two possible explanations for this.  The first is that most anthropogenic warming models predict more nighttime warming than daytime warming.  The other possibility is that a portion of the warming in the 20th century temperature record is actually spurious bias from the urban heat island effect, due to the siting of temperature stations near cities, since urban heat island warming shows up mainly at night.  We will discuss the latter effect in part B of this chapter.

Whatever the cause, much of the warming we have seen has occurred at night, rather than during the day.  Here is a great example from the Amherst, MA temperature station (Amherst was the first location where I gave this presentation, in case that seems an odd choice).


As you can see, the warming rate since 1945 is five times higher at night than during the day.  This directly affects average temperatures, since the daily average temperature for a location in the historic record is the simple average of the daily high and daily low.  Yes, I know that this is not exactly accurate, but given the technology of the past, it was the best that could be done.
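Since the historic daily mean is just the midpoint of the high and the low, night warming flows straight into the average.  A toy illustration with made-up numbers:

```python
def daily_mean(tmax, tmin):
    """Historic convention: daily average = midpoint of high and low."""
    return (tmax + tmin) / 2.0

# If nightly lows warm 1.0C while daytime highs warm only 0.2C, the
# daily mean still rises 0.6C -- night warming alone moves the average.
before = daily_mean(30.0, 15.0)   # 22.5
after = daily_mean(30.2, 16.0)    # 23.1
```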

The news media likes to cite examples of heat waves and high temperature records as a "proof" of global warming.   We will discuss this later, but this is obviously a logical fallacy -- one can't prove a trend in noisy data simply by citing isolated data points in one tail of the distribution.  But it is also fallacious for another reason -- we are not actually seeing any upwards trends in high temperature records, at least for daytime highs:


To get this chart, we obviously have to eliminate newer temperature stations from the data set -- any temperature station that is only 20 years old will have set all of its all-time records in the last 20 years (you would be surprised how many otherwise reputable scientists miss simple things like this).  Looking at just the US temperature stations for which we have a long record, we see with the black line that there is really no upward trend in the number of high temperature records (Tmax) being set.  The 1930s were brutally hot, and if not for some manual adjustments we will discuss in part B of this section, they would likely still show as the hottest recent era for the US.  It turns out, looking at the grey line (Tmin), that while there is still no upward trend, we are actually seeing more records being set for daily lows (the highest low, as it were) than for daily highs.  The media is, essentially, looking in the wrong place, but I sympathize, because a) broiling hot daytime highs are sexier, and b) it is brutally hard to talk about highest low temperatures without being confusing as hell.
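The station-age trap described above is easy to see in code.  This hypothetical sketch counts, per year, how many stations set a new all-time high, after first dropping stations whose histories are too short; the data layout and cutoff are illustrative assumptions:

```python
def record_counts_by_year(stations, min_years=80):
    """stations: dict of name -> list of (year, tmax). Counts, per year,
    how many long-lived stations set a new all-time high that year."""
    counts = {}
    for name, readings in stations.items():
        readings = sorted(readings)
        if readings[-1][0] - readings[0][0] + 1 < min_years:
            continue  # short history -- every record would be "recent"
        best = readings[0][1]
        for year, tmax in readings[1:]:
            if tmax > best:  # a new all-time high for this station
                best = tmax
                counts[year] = counts.get(year, 0) + 1
    return counts
```

A station opened in 1996 contributes nothing here, which is the point: including it would mechanically pile "all-time records" into the last 20 years regardless of any climate trend.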

In our next chapter, or really part B of this chapter, we will discuss some of the issues that may be leading the surface temperature record to be exaggerated, or at least inaccurate.

Chapter 4, Part B on problems with the surface temperature record continues here.

If you want to skip Part B, and get right on with the main line of the argument, you can go straight to Chapter 5, part A, which starts in on the question of how much of past warming can be attributed to man.


** Footnote:  The history of Wall Street is full of bankrupt people whose models exactly matched history.  I have done financial and economic modeling for decades, and it is surprisingly easy to force multi-variable models to match history.  The real test is how well the model works going forward.  Both Hansen's 1988 models and the IPCC's many models do an awesome job of matching history, but quickly go off the rails in future years.  I am reminded of a simple but famous example of the perfect past correlation between certain NFL outcomes and Presidential election outcomes.  This NFL model of presidential elections perfectly matches history, but one would be utterly mad to bet future elections based on it.

Net Neutrality: I Told You So

From the WSJ (emphasis added):

Netflix now admits that for the past five years, all through the debate on net neutrality, it was deliberately slowing its videos watched by users on AT&T and Verizon’s wireless networks. The company did so for good reason—to protect users from overage penalties. But it never told users at a time when Netflix was claiming carriers generally were deliberately slowing its service to protect their own TV businesses—a big lie, it turned out.

All this has brought considerable and well-deserved obloquy on the head of Netflix CEO Reed Hastings for his role in inviting extreme Obama utility regulation of the Internet. Others deserve blame too. Google lobbied the administration privately but was too chicken to speak up publicly against utility regulation.

But Netflix appears to have acted out of especially puerile and venal motives. Netflix at the time was trying to use political pressure to cut favorable deals to connect directly to last-mile operators like Comcast and Verizon—a penny-ante consideration worth a few million dollars at best, for which Netflix helped create a major public policy wrong-turn.

This is what I wrote about net neutrality a couple of years ago:

Net Neutrality is one of those Orwellian words that mean exactly the opposite of what they sound like.  There is a battle that goes on in the marketplace in virtually every communication medium between content creators and content deliverers.  We can certainly see this in cable TV, as media companies and the cable companies that deliver their product occasionally have battles that break out in public.   But one could argue similar things go on even in, say, shipping, where magazine publishers push for special postal rates and Amazon negotiates special bulk UPS rates.

In fact, this fight for rents across a vertical supply chain exists in virtually every industry.  Consumers will pay so much for a finished product.  Any vertical supply chain is constantly battling over how much each step in the chain gets of the final consumer price.

What "net neutrality" actually means is that certain people, including apparently the President, want to tip the balance in this negotiation towards the content creators (no surprise given Hollywood's support for Democrats).  Netflix, for example, takes a huge amount of bandwidth that costs ISP's a lot of money to provide.  But Netflix doesn't want the ISP's to be be able to charge for this extra bandwidth Netflix uses - Netflix wants to get all the benefit of taking up the lion's share of ISP bandwidth investments without having to pay for it.  Net Neutrality is corporate welfare for content creators....

I am still pretty sure the net effect of these regulations, whether they really affect net neutrality or not, will be to disarm ISP's in favor of content providers in the typical supply chain vertical wars that occur in a free market.  At the end of the day, an ISP's last resort in negotiating with a content provider is to shut them out for a time, just as the content provider can do the same in reverse to the ISP's customers.  Banning an ISP from doing so is like banning a union from striking.


You Know It Is Time to Short the Economy When...

.... your paper prints headlines that say "Economists: Zero chance of Arizona recession."  I am sure Houston would have said the exact same thing, right up until oil prices dropped to $30 and suddenly there was like a 100% chance.  When the Arizona Republic makes a definitive economic prediction, bet the other side.

Anyway, I am considering this headline to be a flashing indicator of the top.  In 2005 I wrote about another such indicator that told me the housing market had peaked:

So, to date [May 31, 2005], I have been unconvinced about the housing bubble, at least as it applied to our community.  After all, demographics over the next 20-30 years are only going to support Scottsdale area real estate.

However, over the weekend I had a disturbing experience:   At a social function, I heard a dentist enthusiastically telling a doctor that he needs to be buying condos and raw land.  The dentist claimed to be flipping raw land parcels for 100% in less than 6 months.

For those who don't know, this is a big flashing red light.  When doctors and dentists start trying to sell you on a particular type of investment, run away like they have the plague.  At Harvard Business School, I had a great investment management class with a professor who has schooled many of the best in the business.  If an investment we were analyzing turned out to be a real dog, he would ask us "who do you sell this to?" and the class would shout "doctors!"  And, if the investment was really, really bad, to the point of being insane, the class would instead shout "dentists!"

Which reminds me that in the last 6 months I have started hearing radio commercials again urging folks to get into the house-flipping business and make their fortune.  Whenever institutions start selling investments to you, the average Joe, rather than just investing themselves, that should be taken as a signal that we are approaching a top**.  About 12-18 months before oil prices tanked, I started getting flooded with spam calls at work trying to sell me various sorts of oil exploration investments.

** Postscript:  In 2010, when house prices were low and some were going for a song in foreclosure, there were no house flipping commercials on radio.   That is because Blackstone and other major institutions were too busy buying them up.  Now that these companies see less value, you are hearing house flipping commercials.   You know that guy who has a book with his fool-proof method for making a fortune?  So why is he wasting his time selling books for $2 a copy in royalties rather than following his method?

As A Reward for Introducing Price Competition into the Taxi Monopoly, Uber Gets Sued for Price Fixing

From Engadget:

After failing to get a class-action lawsuit dismissed, Uber CEO Travis Kalanick will go to court over price fixing claims. A US district court judge in New York ruled Kalanick has to face the class of passengers alleging that he conspired with drivers to set fares using an algorithm, including hiking rates during peak hours with so-called surge pricing. According to Reuters, district court judge Jed Rakoff ruled the plaintiffs "plausibly alleged a conspiracy" to fix pricing and that the class action could also pursue claims the set rates led to the demise of other services, like Sidecar.

I guess this is the downside of calling all their drivers independent contractors -- it leaves Uber potentially vulnerable to accusations of price fixing among these contractors.  Of course, taxi cartels have been fixing prices for decades, but that is government-assisted price-fixing, so I suppose that is OK.  It would be ironic if the first price competition introduced into the taxi business in decades were killed based on antitrust charges.

As with just about all modern anti-trust cases, this has little to do with consumer well-being and much more to do with the well-being of supply chain participants (ie the drivers) and competitors (ie Sidecar and the taxis).