
Dear Bank of America: Stop Protecting Merchants Who Lose My Credit Card Data

Twice in the last week I have had Bank of America credit or debit cards that have had to be replaced due to (according to BofA) data breaches at merchants.  I (and I assume most others) find these episodes annoying, not least because I can expect a month or so of warnings and notices from merchants, hosting companies, cable companies, etc. that my automatic payment did not go through and I need to immediately tell them my new card number.

So in each case I asked Bank of America to tell me which merchant lost my credit card data.  I don't think this is an unreasonable request -- if a merchant through some sort of data carelessness causes me a bunch of hassle, and endangers my financial privacy, I would like to know who it was so I can consider shifting my business to someone else.  But Bank of America will not tell me.  I think Target initiated a lot of reforms when they suffered through the public backlash from their data breach a while back -- while many merchants have their chip card readers turned off, you can bet they are not turned off at Target.

Wrapped Around the Axle

This is home repair day, so I am working from home while a variety of repair people show up (none of whom has yet arrived within their promised time window).

Anyway, the A/C guy was here first and was diagnosing why my condenser didn't seem to be running.  He found this on the cooling fan motor (dead):

[photo: the dead condenser fan motor]

Life in Arizona.

Fracking and Foreign Policy

I am happy to see prominent members of Congress from both parties starting to question our support of the deeply flawed government in Saudi Arabia.  I don't want to make war on them (repeating the Libya mistake), but I also have been leery for quite a while about supporting a country that funds so much terrorism and is frankly as socially backwards as any place in the world.

So here is my question:  Had it not been for the shale oil and gas revolution in this country, would the US Congress be willing to question this relationship today?

Denying the Climate Catastrophe: 5a. Arguments For Attributing Past Warming to Man

This is part A of Chapter 5 of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made (this article); B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

Having established that the Earth has warmed over the past century or so (though with some dispute over how much), we turn to the more interesting -- and certainly more difficult -- question of finding causes for past warming.  Specifically, for the global warming debate, we would like to know how much of the warming was due to natural variations and how much was man-made.   Obviously this is hard to do, because no one has two thermometers that show the temperature with and without man's influence.

I like to begin each chapter with the IPCC's official position, but this is a bit hard in this case because they use a lot of soft words rather than exact numbers.  They don't say 0.5 of the 0.8C is due to man, or anything so specific.   They use phrases like "much of the warming" to describe man's effect.  However, it is safe to say that most advocates of catastrophic man-made global warming theory will claim that most or all of the last century's warming is due to man, and that is how we have put it in our framework below:

[chart: the attribution step in the author's framework]

By the way, the "and more" is not a typo -- there are a number of folks who will argue that the world would have actually cooled without manmade CO2 and thus manmade CO2 has contributed more than the total measured warming.  This actually turns out to be an important argument, since the totality of past warming is not enough to be consistent with high sensitivity, high feedback warming forecasts.  But we will return to this in part C of this chapter.

Past, Mostly Abandoned Arguments for Attribution to Man

There have been and still are many different approaches to the attribution problem.  In a moment, we will discuss the current preferred approach.  However, it is worth reviewing two other approaches that have mostly been abandoned but which had a lot of currency in the media for some time, in part because both were featured in Al Gore's film An Inconvenient Truth.

Before we get into them, I want to take a step back and briefly discuss what is called paleo-climatology, which is essentially the study of past climate before the time when we had measurement instruments and systematic record-keeping for weather.   Because we don't have direct measurements, say, of the temperature in the year 1352, scientists must look for some alternate measure, called a "proxy,"  that might be correlated with a certain climate variable and thus useful in estimating past climate metrics.   For example, one might look at the width of tree rings, and hypothesize that varying widths in different years might correlate to temperature or precipitation in those years.  Most proxies take advantage of such annual layering, as we have in tree rings.

One such methodology uses ice cores.  Ice in certain places like Antarctica and Greenland is laid down in annual layers.  By taking a core sample, characteristics of the ice can be measured at different layers and matched to approximate years.  CO2 concentrations can actually be measured in air bubbles in the ice, and atmospheric temperatures at the time the ice was laid down can be estimated from certain oxygen isotope ratios in the ice.  The result is that one can plot a chart going back hundreds of thousands of years that estimates atmospheric CO2 and temperature.  Al Gore showed this chart in his movie, in a really cool presentation where the chart wrapped around three screens:

[chart: ice core reconstruction of CO2 and temperature over hundreds of thousands of years]

As Gore points out, this looks to be a smoking gun for attribution of temperature changes to CO2.  In this chart, temperature and CO2 concentrations appear to be moving in lockstep.  CO2 doesn't seem to be merely a driver of temperatures; it seems to be THE driver, which is why Gore often called it the global thermostat.

But there turned out to be a problem, which is why this analysis no longer is treated as a smoking gun, at least for the attribution issue.  Over time, scientists got better at taking finer and finer cuts of the ice cores, and what they found is that when they looked on a tighter scale, the temperature was rising (in the black spikes of the chart) on average 800 years before the CO2 levels (in red) rose.

This obviously throws a monkey wrench into the causality argument.  Rising CO2 can hardly be the cause of rising temperatures if the CO2 increases come after the temperature increases.

It is now mostly thought that what this chart represents is the liberation of dissolved CO2 from oceans as temperatures rise.  Oceans have a lot of dissolved CO2, and as the oceans get hotter, they will give up some of this CO2 to the atmosphere.
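As an aside, the way such a lead/lag gets detected is straightforward to sketch.  Below is a minimal Python illustration using made-up data (not actual ice core records): shift one series against the other and find the offset that maximizes the correlation.

```python
import numpy as np

# Toy series at 100-year resolution; NOT real ice core data.
rng = np.random.default_rng(2)
t = np.arange(0, 400_000, 100)
temp = np.sin(2 * np.pi * t / 100_000) + 0.1 * rng.normal(size=t.size)
co2 = np.roll(temp, 8)   # construct CO2 so it lags temperature by 8 steps (800 years)

def best_lag(a, b, max_lag=20):
    """Return the shift (in steps) of b that best lines it up with a."""
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(a, np.roll(b, -k))[0, 1] for k in lags]
    return lags[int(np.argmax(corrs))]

print("CO2 lags temperature by about", best_lag(temp, co2) * 100, "years")
```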

The second outdated attribution analysis we will discuss is perhaps the most famous:  The Hockey Stick.  Based on a research paper by Michael Mann when he was still a grad student, it was made famous in Al Gore's movie as well as numerous other press articles.  It became the poster child, for a few years, of the global warming movement.

So what is it?  Like the ice core chart, it is a proxy analysis attempting to reconstruct temperature history, in this case over the last 1000 years or so.  Mann originally used tree rings, though in later versions he has added other proxies, such as from organic matter laid down in sediment layers.

Before the Mann hockey stick, scientists (and the IPCC) believed the temperature history of the last 1000 years looked something like this:

[chart: pre-hockey-stick reconstruction of the last 1000 years, showing the Medieval Warm Period and Little Ice Age]

Generally accepted history had a warm period from about 1100-1300, called the Medieval Warm Period, which was warmer than today, and a cold period in the 17th and 18th centuries called the Little Ice Age.  Temperature increases since the Little Ice Age could in part be thought of as a recovery from this colder period.  Strong anecdotal evidence existed from European sources supporting the existence of both the Medieval Warm Period and the Little Ice Age.  For example, I have taken several history courses on the high Middle Ages, and every single professor has described the warm period from 1100-1300 as creating a demographic boom that defined the era (yes, warmth was a good thing back then).  In fact, many will point to the famines in the early 14th century that resulted from the end of this warm period as having weakened the population and set the stage for the Black Death.

However, this sort of natural variation before the age where man burned substantial amounts of fossil fuels created something of a problem for catastrophic man-made global warming theory.  How does one convince the population of catastrophe if current warming is within the limits of natural variation?  Doesn't this push the default attribution of warming towards natural factors and away from man?

The answer came from Michael Mann (now Dr. Mann, though the original work was produced before he finished grad school).  It has been dubbed the hockey stick for its shape:

 

[chart: the Mann hockey stick reconstruction]

The reconstructed temperatures are shown in blue, and gone are the Medieval Warm Period and the Little Ice Age, which Mann argued were local to Europe and not global phenomena.  The story that emerged from this chart is that before industrialization, global temperatures were virtually flat, oscillating within a very narrow band of a few tenths of a degree.  However, since 1900, something entirely new seems to be happening, breaking the historical pattern.  From this chart, it looks like modern man has perhaps changed the climate.  This shape, with the long flat historical trend and the sharp uptick at the end, is why it gets the name "hockey stick."

Oceans of ink and electrons have been spilled over the last 10+ years around the hockey stick, including a myriad of published books.  In general, except for a few hard core paleoclimatologists and perhaps Dr. Mann himself, most folks have moved on from the hockey stick as a useful argument in the attribution debate.  After all, even if the chart is correct, it provides only indirect evidence of the effect of man-made CO2.

Here are a few of the critiques:

  • Note that the real visual impact of the hockey stick comes from the orange data on the far right -- the blue data alone doesn't form much of a hockey stick.  But the orange data is from an entirely different source, in fact an entirely different measurement technology -- the blue data is from tree rings, and the orange is from thermometers.  Dr. Mann bristles at the accusation that he "grafted" one data set onto the other, but by drawing the chart this way, that is exactly what he did, at least visually.  Why does this matter?  Well, we have to be very careful with inflections in data that occur exactly at the point where we change measurement technologies -- we are left with the suspicion that the change in slope is due to differences in the measurement technology, rather than in the underlying phenomenon being measured.
  • In fact, well after this chart was published, we discovered that Mann and others like Keith Briffa actually truncated the tree ring temperature reconstructions (the blue line) early.  Note that the blue data ends around 1950.  Why?  Well, it turns out that many tree ring reconstructions showed temperatures declining after 1950.  Does this mean that thermometers were wrong?  No, but it does provide good evidence that the trees are not accurately following current temperature increases, and so probably did not accurately portray temperatures in the past.
  • If one looks at the graphs of all of Mann's individual proxy series that are averaged into this chart, astonishingly few actually look like hockey sticks.  So how do they average into one?  McIntyre and McKitrick in 2005 showed that Mann used some highly unusual and unprecedented-to-all-but-himself statistical methods that could create hockey sticks out of thin air.  The duo fed random data into Mann's algorithm and got hockey sticks (a rough sketch of this effect appears just after this list).
  • At the end of the day, most of the hockey stick (again due to Mann's averaging methods) was due to samples from just a handful of bristlecone pine trees in one spot in California, trees whose growth is likely driven by a number of non-temperature factors like precipitation levels and atmospheric CO2 fertilization.   Without these few trees, most of the hockey stick disappears.  In later years he added in non-tree-ring series, but the results still often relied on just a few series, including the Tiljander sediments, where Mann essentially flipped the data upside down to get the results he wanted.  Taking out the bristlecone pines and the abused Tiljander series made the hockey stick go away again.
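To make the McIntyre and McKitrick point concrete, here is a rough sketch of my own (toy code, not Mann's or M&M's actual programs) showing how "short-centering" -- centering each proxy series on only the recent calibration window before extracting principal components -- tends to manufacture a hockey-stick-shaped leading component even from trendless red noise.

```python
import numpy as np

# Generate trendless AR(1) "red noise" proxies; hypothetical sizes throughout.
rng = np.random.default_rng(0)
n_years, n_series, calib = 600, 70, 80
x = np.zeros((n_years, n_series))
for t in range(1, n_years):
    x[t] = 0.9 * x[t - 1] + rng.normal(size=n_series)   # AR(1), no trend anywhere

# Short-centering: subtract the mean over only the final `calib` years,
# rather than the mean over the full record.
centered = x - x[-calib:].mean(axis=0)

# Leading principal component via SVD.
u, s, _ = np.linalg.svd(centered, full_matrices=False)
pc1 = u[:, 0] * s[0]

# Measure the "bend": how far PC1's calibration-era mean sits from its mean over
# the rest of the record, relative to PC1's overall spread. Short-centering makes
# this step large even though no input series contains any trend.
step = abs(pc1[-calib:].mean() - pc1[:-calib].mean())
print("PC1 shaft-to-blade step (in PC1 standard deviations):", round(step / pc1.std(), 2))
print("share of total variance captured by PC1:", round(s[0] ** 2 / (s ** 2).sum(), 2))
```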

There have been plenty of other efforts at proxy series that continue to show the Medieval Warm Period and Little Ice Age as we know them from the historical record:

 

[chart: alternative proxy reconstructions showing the Medieval Warm Period and Little Ice Age]

As an aside, Mann's hockey stick was always problematic for supporters of catastrophic man-made global warming theory for another reason.  The hockey stick implies that the world's temperatures are, in the absence of man, almost dead-flat stable.   But this is hardly consistent with the basic hypothesis, discussed earlier, that the climate is dominated by strong positive feedbacks that take small temperature variations and multiply them many times.   If Mann's hockey stick is correct, it could also be taken as evidence against the high climate sensitivities that are demanded by the catastrophe theory.

 

The Current Lead Argument for Attribution of Past Warming to Man

So we are still left wondering, how do climate scientists attribute past warming to man?  Well, to begin, in doing so they tend to focus on the period after 1940, when large-scale fossil fuel combustion really began in earnest.   Temperatures have risen since 1940, but in fact nearly all of this rise occurred in the 20-year period from 1978 to 1998:

 

[chart: surface temperatures since 1940, with most of the rise concentrated in 1978-1998]

To be fair, and better understand the thinking at the time, let's put ourselves in the shoes of scientists around the turn of the century and throw out what we know happened after that date.  Scientists then would have been looking at this picture:

[chart: the same temperature record as it would have appeared in the year 2000]

Sitting in the year 2000, the recent warming rate might have looked dire ... nearly 2C per century...

[chart: linear extrapolation of the 1978-1998 warming trend, roughly 2C per century]

Or possibly worse if we were on an accelerating course...

[chart: an accelerating extrapolation of the same warming]

Scientists began to develop a hypothesis that this temperature rise was occurring too rapidly to be natural, that it had to be at least partially man-made.  I have always thought this a slightly odd conclusion, since the slope from this 20-year period looks almost identical to the slope centered around the 1930's, which was very unlikely to have much human influence.

 

[chart: the 1978-1998 warming slope compared with the similar slope centered on the 1930s]

But nevertheless, the hypothesis that the 1978-1998 temperature rise was too fast to be natural gained great currency.  But how does one prove it?

What scientists did was to build computer models to simulate the climate.  They then ran the computer models twice.  The first time they ran them with only natural factors, or at least only the natural factors they knew about or were able to model (they left a lot out, but we will get to that in time).  These models were not able to produce the 1978-1998 warming rates.  Then, they re-ran the models with manmade CO2, and particularly with a high climate sensitivity to CO2 based on the high feedback assumptions we discussed in an earlier chapter.   With these models, they were able to recreate the 1978-1998 temperature rise.   As Dr. Richard Lindzen of MIT described the process:

What was done, was to take a large number of models that could not reasonably simulate known patterns of natural behavior (such as ENSO, the Pacific Decadal Oscillation, the Atlantic Multidecadal Oscillation), claim that such models nonetheless accurately depicted natural internal climate variability, and use the fact that these models could not replicate the warming episode from the mid seventies through the mid nineties, to argue that forcing was necessary and that the forcing must have been due to man.

Another way to put this argument is "we can't think of anything natural that could be causing this warming, so by default it must be man-made."  With various increases in sophistication, this remains the lead argument in favor of attribution of past warming to man.
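The structure of that argument can be illustrated with a toy calculation of my own (a sketch, not any actual climate model): fit the observed temperature series using natural predictors alone, then again with a CO2 forcing term added, and attribute to man whatever the natural-only fit cannot explain.  The data and forcings below are made up purely to show the logic.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2001)
solar = 0.1 * np.sin(2 * np.pi * (years - 1900) / 11)             # hypothetical natural cycle
co2_forcing = np.where(years > 1950, (years - 1950) * 0.01, 0.0)  # hypothetical post-1950 ramp
observed = 0.3 * solar + 0.8 * co2_forcing + rng.normal(0, 0.05, years.size)

def r_squared(X, y):
    """Fraction of variance explained by a least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

natural_only = np.column_stack([np.ones_like(years, dtype=float), solar])
with_co2 = np.column_stack([natural_only, co2_forcing])
print("natural-only fit R^2:", round(r_squared(natural_only, observed), 2))
print("natural + CO2 fit R^2:", round(r_squared(with_co2, observed), 2))

# The attribution logic: if the natural-only fit fails while the natural + CO2 fit
# succeeds, the residual warming gets attributed to man. The weakness the author
# flags is that this only works if the "natural" predictors are actually complete.
```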

In part B of this chapter, we will discuss what natural factors were left out of these models, and I will take my own shot at a simple attribution analysis.

The next section, Chapter 5 Part B, on natural attribution is here.

Why Wind and Solar Are Not Currently the Answer on Emissions Reductions

I have made this point forever, but it always bears repeating -- the variability of wind and solar requires hot fossil fuel backups, which leads to little reduction in total fossil fuel generation capacity (so that wind and solar investments are entirely duplicative) and less-than-expected reductions in actual emissions.

I don't think wind will ever be viable, except perhaps in a few unique offshore locations.  Solar is potentially viable with a 10x or so reduction in panel costs and a 10-100x reduction in battery/energy storage costs.  I honestly think that day will come, but we are not there.

From the Unbroken Window comes this slide from an interesting presentation at the Ontario Society of Professional Engineers, essentially making the same points I and others have been trying to make for years.

[slide: Ontario Society of Professional Engineers presentation]

I made the point about nuclear in my climate legislative proposal here.

Sometimes, It Pays to be A Total Geek. And Some Advice for Android Owners

So a while back, I was playing around with mirroring my android phone screen on my PC, really for no good reason.  But to do so, and really to do a lot of PC control tasks on an android, USB debugging mode needs to be turned on.  So I did that.  And then after playing around with it, forgot about it for months.

Then, last night I dropped my phone and lost the screen functions.  The phone functions and digitizer are just fine, but the screen is black.   Thanks to the cloud, I have backups of all photos, email, calendar, contacts and such.  (Android has a better cloud approach than Apple on photos, mail, and calendar but an inferior approach on apps, messages, and general settings).

I wanted to get into the phone to wipe the memory.   But how, if the screen is black?  Aha!  I could mirror the phone screen onto my PC.  Which I did, and I got everything accomplished.  It would have been impossible to do this, though, if I had not previously turned USB debugging mode on, as doing so requires a bunch of screen taps.  I did have to hit one OK to get the mirroring started (after that the mouse worked on the computer to make selections), but I took a similar phone and mirrored it, found out where the confirmation message was popping up on the screen, and was able to hit the right spot on the broken phone when I needed to.

So here is my advice for android owners.  A broken or blacked-out screen is probably the #1 smartphone failure mode.  Go into your phone right now and turn on USB debugging mode while you still can.  There is an odd process involving tapping a certain button 7 times to turn on developer mode, but it is explained here.  The screen mirroring is a Chrome app called Vysor, which also requires some downloads of special drivers, but that is explained by the app.
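For what it's worth, here is a rough sketch of the kind of blind rescue this enables, written as Python calls to the standard Android adb command-line tool.  It assumes USB debugging was already turned on and adb (from Android platform-tools) is installed; the tap coordinates are hypothetical ones you would first find on a working phone of the same model, as described above.

```python
import subprocess

def adb(*args):
    """Run an adb command and stop if it fails."""
    subprocess.run(["adb", *args], check=True)

adb("devices")                                 # confirm the dead-screen phone is visible over USB
adb("pull", "/sdcard/DCIM", "./phone-backup")  # copy camera photos off before wiping
# If a confirmation dialog has to be accepted blind, tap it by coordinate
# (the 540, 1650 position here is hypothetical -- find the real spot on a working phone):
adb("shell", "input", "tap", "540", "1650")
```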

A Blast From My Childhood

Apparently, the sea monkey and x-ray glasses business empire is being fought over by several people.  I would have bet that the business did not exist any more, but apparently it still sells several million dollars a year of the little engineered brine shrimp.  This is a fascinating article throughout, both on the history of sea monkeys, the marketing genius that made it work, and the current legal fight.  Link via Tyler Cowen.

Yes the Middle Class is Shrinking. And the Ranks of the Poor Are Shrinking. Because Americans are Getting Wealthier

Mark Perry has a number of good graphs that show that the shrinking of the middle class is real, but only because its members are moving up to "rich" -- hardly the implication of those on the Left who are trying to demagogue the issue.  Check them out if you have not seen them, but this animated graph was new to me:

[animated chart: distribution of US households by income bracket over time]

Note the general movement to the right.

Interestingly, the only block on the low side getting larger is the percent of people at "zero".    In my mind, this just reinforces my point that the poverty issue is primarily one of having a job, not the rate paid at the job.  For that growing cohort at zero, raising the minimum wage only makes it more likely they stay at zero.

Contact Lenses and Cronyism

Hooray for Veronique de Rugy, who is taking on the cronyism in the contact lens market:

What makes the contact lens market unique — and also leaves it extra vulnerable to crony intervention — is the fact that customers are required by federal law to obtain a prescription from a licensed optometrist in order to purchase lenses.

It is a rare instance where prescribers are also sellers, which leads to a cozy relationship between manufacturers and the doctors who can steer patients toward their brand.

Prescriptions are brand-specific, which makes it difficult for consumers to shop around. Choosing a different brand would require paying for another exam in order to obtain a new prescription.

The simplest solution would be to do away with the gatekeepers altogether and allow the purchase of contact lenses without a prescription.

It works just fine that way in Europe and Japan.

I feel like I have been the lone voice in the wilderness on this one.  As I have written before:

I drive into my local Shell station to fill up, and stick my card in the pump, but the pump refuses to dispense.  I walk into the office and ask the store manager why I can't get gasoline.  She checks my account, and says "Mr. Meyer, your Volvo fuel prescription has expired."  I say, "Oh, well it's OK, I am sure I am using the right gas."  She replies, "I'm sorry, but the law requires that you have a valid prescription from your dealership to refill your gas.  You can't make that determination yourself, and most car dealerships have their prescriptions expire each year to make sure you bring the car in for a checkup.  Regular checkups are important to the health of your car.  You will need to pay for a service visit to your dealership before we can sell you gas."  I reply, "RRRRRRR."

OK, so if this really happened we would all scream SCAM!  While we all recognize that it may be important to get our car checked out every once in a while, most of us would see this for what it was:  A government regulation intended mainly to increase the business of my Volvo dealership's service department by forcing me to pay for regular visits.

So why don't we cry foul when the exact same situation occurs every day with glasses and contact lenses?

Denying the Climate Catastrophe: 4b. Problems With The Surface Temperature Record

This is part B of the fourth chapter of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data;  B) Problems with the Surface Temperature Record (this article)
  5. Attribution of Past Warming;  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

In part A of this chapter, we showed that the world had indeed warmed over the past 30-100 years, whether you looked at the surface temperature record or the satellite record.  Using either of these metrics, though, we did not see global warming accelerating, nor did we see warming rates that were faster than predicted.  In fact, we saw the opposite.

One story I left out of part A, because it did not affect the basic conclusions we drew, is the criticism of the surface temperature record.  In this part B, we will discuss some of these criticisms, and see why many skeptics believe the 0.8C warming number for the past century is exaggerated.  We will also gain some insights as to why the satellite-measured warming rates may be closer to the mark than rates determined by surface temperature stations.

Uncorrected Urban Biases

Years ago a guy named Steve McIntyre published a graphical portrayal of warming rates across the US.  This is a common chart nowadays.  Anyway, this chart (almost 10 years old) drew from temperature measurement stations whose locations are shown with the crosses on the map:

[map: warming rates at USHCN temperature stations across the US]

I was living in Arizona at the time, and I was interested to learn that the highest warming rate was being recorded at the USHCN station in Tucson (remember, just because Arizona is hot is no reason to necessarily expect it to have high warming rates; they are two different things).  At the time, Anthony Watts was just kicking off an initiative to develop quality control data for USHCN stations by having amateurs photograph the sites and upload them to a central database.  I decided I would go down to the Tucson site to experience the highest warming rate myself.  This is what I found when I tracked down the station, and took this picture (which has been reproduced all over the place at this point):

[photo: the USHCN temperature station in Tucson, sited in a parking lot]

That is the temperature station -- the fenced-in white box (the uproar over this picture eventually caused this location to be closed).  It was in the middle of a parking lot in the middle of a major university in the middle of a growing city.  100 years ago this temperature station was in the countryside, in essentially the open desert -- no paving, no buildings, no cars.  So we are getting the highest warming rates in the country by comparing a temperature today in an asphalt parking lot in the middle of a city to a temperature a hundred years ago in the open desert.

The problem with this is what's called the urban heat island effect.   Buildings and concrete absorb heat from the sun during the day, more than would typically be absorbed by raw land in its natural state.  This heat is reradiated at night, causing nights to be warmer in cities than in the areas surrounding them.  If you live in a city, you will likely hear weather reports that predict colder temperatures in outlying areas, or warn of freezes in the countryside but not in the city itself.

It turns out that this urban heat island effect is easily measured -- it even makes a great science fair project!

[chart: evening temperature transect from the author's drive out of and back into the city]

My son and I did this project years ago, attaching a small GPS and temperature probe to a car.  We then drove out of the city center into the country and back in the early evening, when the urban heat island effect should be largest.  We drove out and then back to average out any effects of overall cooling during our testing.  One of the trips is shown above, with around 6 degrees F of temperature change.  We, and most others who have done this in other cities, found between 5 and 10 degrees of warming as one drives into a city at night.
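For anyone who wants to try the same project, the analysis is simple enough to sketch in a few lines of Python (the log below is made-up data, not our actual drive): average the outbound and inbound readings at each distance so that any overall evening cooling cancels out, then difference the city-center and rural averages.

```python
# Hypothetical transect log: (minutes_elapsed, km_from_city_center, temp_F),
# outbound then inbound, driven in the evening when the heat island is strongest.
log = [
    (0, 0.0, 88.0), (10, 8.0, 85.5), (20, 16.0, 82.5),   # outbound
    (30, 16.0, 82.0), (40, 8.0, 84.8), (50, 0.0, 87.2),  # inbound
]

def uhi_estimate(log):
    """Average out-and-back readings at each distance, then take the
    city-center minus rural difference as the urban heat island signal."""
    by_km = {}
    for _, km, temp in log:
        by_km.setdefault(km, []).append(temp)
    avg = {km: sum(v) / len(v) for km, v in by_km.items()}
    return avg[min(avg)] - avg[max(avg)]

print(f"Urban heat island estimate: {uhi_estimate(log):.1f} F")   # ~5 F for this made-up log
```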

If this effect were constant over time, it would not pose too many problems for our purposes here, because we are looking at changes in average temperatures over time, not absolute values.  But the urban heat island warming of a city (and particular temperature stations) increases as the urban area grows larger.   Because this urban warming is many times the global warming signal we are trying to measure, and since most temperature stations are located near growing urban locations, it introduces an important potential bias into measurement.

A number of studies have found that, in fact, we do indeed see more warming historically in thermometers located in urban areas than in those located in rural areas.  Two studies in California have shown much lower warming rates at rural thermometers than at urban ones:

[chart: California study comparing warming rates at urban vs. rural stations]

[chart: a second California urban vs. rural comparison]

Anthony Watts has been working for years to do this same analysis for the entire US.  In fact, the pictures taken above of the temperature station in Tucson were part of the first phase of his project to document, with pictures, each USHCN site used in the global warming statistics.  Once he had pictures, he compared the details of the siting with a classification system scientists use to measure the quality of temperature sites, from the best (class 1) to the worst with the most biases (class 5).  He found that perhaps a third of the warming in the official NOAA numbers may come from the introduction of siting biases from bad sites.  Or put another way, the warming at well-sited temperature stations was only about 2/3 of that in the official metric.

[chart: warming trends by station siting quality from the Watts surface stations project]

By the way, this is one other reason why I tend to favor the satellite measurements.  Going back to the numbers we showed in part A, the satellite temperature metric had about 2/3 the trend of the surface temperature reading, or almost exactly what the surface readings would be if this siting bias were eliminated (the absolute values of the trends don't match, because they are for different time periods and different geographies).

[chart: satellite vs. surface temperature trends]

There is one other aspect of this chart that might have caught your eye -- if some temperature stations are showing 2 degrees of warming and some 3.2 degrees of warming, why is the total 3.2 degrees of warming?  Shouldn't it be somewhere in the middle?

One explanation is that the NOAA and other bodies take the data from these stations and perform a number of data manipulation steps in addition to a straight spatial averaging.   One such step is that they will use a computer process to try to correct temperature stations based on the values from neighboring stations.  The folks that run these indices argue that this computational process overcomes the site bias problem.  Skeptics will argue that this approach is utter madness -- why work to correct a known bad temperature point, why not just eliminate it?  If you have a good compass and a bad compass, you don't somehow mathematically average the results to find north; you throw out the bad one and use the good one.  In short, skeptics argue that this approach does not eliminate the error, it just spreads the error around to all the good stations, smearing the error like peanut butter.  Here is an example from the GISS, using station data that has only been adjusted for Time of Observation changes (TOBS):

[chart: Grand Canyon NP and Tucson station records, adjusted only for time of observation]

This is exactly what we might expect - little warming out in undeveloped nature in Grand Canyon National Park, lots of warming in a large and rapidly growing modern city (yes, the Tucson data is from our favorite temperature station we featured above).  Now, here is the same data after the GISS has adjusted it:

[chart: the same two station records after GISS adjustments]

You can see that Tucson has been adjusted down a degree or two, but Grand Canyon has been adjusted up a degree or two (with the earlier mid-century spike adjusted down).  OK, so it makes sense that Tucson has been adjusted down, though there is a very good argument to be made that it should have been adjusted down more, say by at least 3 degrees.  But why does the Grand Canyon need to be adjusted up by about a degree and a half?  What is currently biasing it colder by 1.5 degrees, which is a lot?  One suspects the GISS is doing some sort of averaging, which is bringing the Grand Canyon and Tucson from each end closer to a mean -- they are not eliminating the urban bias from Tucson, they are just spreading it around to other stations in the region.
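Here is a toy illustration of that complaint (my own sketch, not NOAA's or GISS's actual homogenization code): if a correction step simply pulls each station toward the mean of its neighbors, an urban warming bias is not removed, just redistributed to the clean stations.

```python
import numpy as np

years = np.arange(1900, 2001)
climate = 0.002 * (years - 1900)               # true signal: +0.2C per century (made up)
uhi_bias = np.linspace(0.0, 1.5, years.size)   # hypothetical growing urban heat island bias
rural = climate
urban = climate + uhi_bias

# Naive neighbor-blending "adjustment": pull each station halfway toward the
# mean of the pair (a stand-in for pairwise homogenization, not the real algorithm).
pair_mean = (rural + urban) / 2
rural_adj = (rural + pair_mean) / 2
urban_adj = (urban + pair_mean) / 2

def trend(series):
    """Least-squares trend in degrees per century."""
    return np.polyfit(years, series, 1)[0] * 100

print("raw trends      (rural, urban):", round(trend(rural), 2), round(trend(urban), 2))
print("adjusted trends (rural, urban):", round(trend(rural_adj), 2), round(trend(urban_adj), 2))
# The adjusted urban trend comes down, but the adjusted rural trend goes up:
# the bias has been smeared across both stations rather than eliminated.
```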

Temperature Adjustments and Signal-To-Noise Ratio

Nothing is less productive, to my mind, than when skeptics yell the word "fraud!" on the issue of temperature adjustments.  All temperature databases include manual adjustments, even the satellite indices that many skeptics favor.    As mentioned above, satellite measurements have to be adjusted for orbital decay of the satellites just as surface temperature measurements have to be adjusted for changes in the daily time of observation.  We may argue that adjustment methodologies are wrong (as we did above with urban biases).  We may argue that there are serious confirmation biases (nearly every single adjustment to every temperature and sea level and ocean heat database tends to cool the past and warm the present, perhaps reinforced by preconceived notions that we should be seeing a warming signal.)  But I find that charges of fraud just cheapen the debate.

Even if the adjustments are all made with the best of intentions, we are still left with an enormous problem of signal-to-noise ratio.  It turns out that the signal we are trying to measure -- warming over time -- is roughly equal to the magnitude of the manual adjustments.  In other words, the raw temperature data does not show warming; only the manually adjusted data show warming.  This does not mean the adjusted data is wrong, but it should make us substantially less confident that we are truly measuring the signal in all this noise of adjustment.  Here are two examples, for an individual temperature station and for the entire database as a whole:

[chart: raw (TOBS-only) vs. fully adjusted record for an individual station]

In this first example, we show the raw data (with Time of Observation adjustments only) in orange, and the final official adjusted version in blue.  The adjustments triple the warming rate for the last century.

[chart: raw vs. adjusted temperature trend for the entire US]

We can see something similar for the whole US, as raw temperature measurements (this time before time of observation adjustments) actually show a declining temperature trend in the US.  In this case, the entirety of the global warming signal, and more, comes from the manual adjustments.  Do these adjustments (literally thousands and thousands of them) make sense when taken as a whole?  Does it make sense that there was some sort of warming bias in the 1920's that does not exist today?  This is certainly an odd conclusion given that it implies a bias exactly opposite of the urban heat island effect.

We could go into much more detail, but this gives one an idea of why skeptics prefer the satellite measurements to the surface temperature record.  Rather than endlessly working to try to get these public agencies to release their adjustment details and methodology for third party validation to the public that pays them (an ongoing task that still has not been entirely successful), skeptics have simply moved on to a better approach where the adjustments (to a few satellites) are much easier to manage.

Ultimately, both approaches for seeking a global warming signal are a bit daft.  Why?  Because, according to the IPCC, of all the extra warming absorbed by the surface of the Earth from the greenhouse effect, only about 1% goes into the atmosphere:

 

[chart: IPCC accounting of where the added heat goes -- only about 1% ends up in the atmosphere]

Basically, water has a MUCH higher heat carrying capacity than air, and over 90% of any warming should be going into oceans.  We are just starting to get some new tools for measuring the changes to ocean heat content, though the task is hard because we are talking about changes in the thousandths of a degree in the deep oceans.

After this brief digression into the surface temperature records, it is now time to get back to our main line of discussion.  In the next chapter, we will begin to address the all-important attribution question:  Of the warming we have seen in the past, how much is man-made?

Chapter 5, Part A on the question of attributing past warming to man is here.

So @tylercowen, You Want to Understand the Great Stagnation? Here It Is

Certainly the government's current permission-based approach to business regulation combined with an overt hostility of government (or at least those parties that influence it) to radically new business models (see: Uber) is a big part of the great stagnation story.

But insanity like this is also a big part:

Vague but expensive-if-not-correct rules on employee seating just got vaguer and harder to figure out

Weighing in on two California laws that require employers to provide suitable seating to workers when “the nature of the work” permits it, the California Supreme Court said the phrase refers to an employee's tasks performed at a given location for which the right to a suitable seat is asserted.

In response to questions certified by the U.S. Court of Appeals for the Ninth Circuit, the state high court said April 4 that the phrase “nature of the work” doesn't require a holistic evaluation of the full range of an employee's tasks completed during a shift.

An employer's business judgment and the layout of the workplace are relevant in determining whether sitting is permitted, but courts should apply an objective analysis based on the totality of the circumstances, the California Supreme Court said.

It held that “if an employer argues there is no suitable seat available, the burden is on the employer to prove unavailability.”

As a business owner in California, I am going to have to do a ton of research to figure out just how we can comply with all this, and even then I will likely be wrong because whether one is in compliance or not is never actually clear until it is tested in court.  I had to do the same thing with California meal break law (multiple times), California heat stress law, new California harassment rules, California sick leave rules, the California minimum wage, Obamacare rules, Obamacare reporting, the new upcoming DOL rules on salaried employees, etc.

Five or ten years ago, I spent most of my free time thinking about improving and growing the business.  Now, all my mental bandwidth is consumed by regulatory compliance.  I have not added a new business operation for years, but instead have spent most of my time exiting businesses in California.  Perhaps more important is what I am doing with my managers.  My managers are not Harvard MBAs, they are front-line blue collar folks who have been promoted to manager because they have proven themselves adept at our service process.  There are only a finite number of things I can teach them and new initiatives I can give them in a year.  And instead of using this limited bandwidth to teach some of the vital productivity enhancement tools we should be adopting, I spend all my training time on compliance management issues.

Speech Restriction Stories I Have Read in Just the Last 24 Hours

NY state attorney general (and others) pursuing potential criminal and civil charges against ExxonMobil for its climate change advocacy

US Virgin Islands AG (really) going after non-profit CEI for its climate change advocacy

Elizabeth Warren wants the SEC to ban companies from "saying whatever they want about Washington policy debates," a demand inspired by her frustration that financial firms are publicly disagreeing with her on the impact of her desired regulations

California AG Kamala Harris demanding non-profit donor lists, presumably so she can harass and intimidate the ones she does not like

California AG Kamala Harris has raided the home and seized video footage of an independent advocate/journalist who did secret sting videos of Planned Parenthood, the exact same sort of advocacy journalism pursued legally (without legal harassment) by any number of Leftish groups in California and elsewhere  (I doubt Ms. Harris plans to raid the home of PETA activists who trespass on farms to secretly film chicken and pig breeding).

It turns out there are strong speech protections in this country, except when you are a professional, and then there are none.

And of course, I still am fighting against a libel lawsuit meant to force me to remove this product review.

Update, add this one:  Tenured Marquette professor faces termination based on blog post with which University disagrees

When the student replied that he has a right to argue his opinion, Ms. Abbate responded that “you can have whatever opinions you want but I can tell you right now, in this class homophobic comments, racist comments and sexist comments will not be tolerated. If you don’t like that you are more than free to drop this class.” The student reported the exchange to Marquette professor John McAdams, who teaches political science. Mr. McAdams also writes a blog called the Marquette Warrior, which often criticizes the Milwaukee school for failing to act in accordance with its Catholic mission.

Mr. McAdams wrote on his blog that Ms. Abbate was “using a tactic typical among liberals now. Opinions with which they disagree are not merely wrong, and are not to be argued against on their merits, but are deemed ‘offensive’ and need to be shut up.” His blog went viral, and Ms. Abbate received vicious emails. She has since left Marquette.

But now Marquette is going after Mr. McAdams. In December 2014, the school sent him a letter suspending his teaching duties and banning him from campus while it reviewed his “conduct” related to the blog post. “You are to remain off campus during this time, and should you need to come to campus, you are to contact me in writing beforehand to explain the purpose of your visit, to obtain my consent and to make appropriate arrangements for that visit,” Dean Richard Holz wrote.

Lol, the university is going to prove he was wrong to write that universities avoid dialog in favor of saying "shut up" by telling him to ... shut up or be fired.

By the way, since nowadays it seems that supporting someone's free speech rights is treated the same as agreeing with that person, I will remind folks that having briefly led a pro-gay-marriage ballot initiative in Arizona, I am unlikely to agree with someone who thinks it should be banned.  But so what?  I would have absolutely no problem arguing with such a person in a rational way, something that faculty member Ms. Abbate seemed incapable of doing.  While I might disagree with him on any number of issues, Professor McAdams was totally right to call her out.  Besides, is the Left's goal really to take all opinion with which they disagree and drive it underground?  Force folks underground and you never know what will emerge some day.  Things like.... Trump supporters.

It is amazing to me that universities have become the least viable place in the US to raise and discuss controversial issues in the light of day.

 

 

Bernie Sanders and The Panama Papers

As much as Bernie would like to blame the money laundering and money-hiding in the Panama papers on capitalism, in fact the vast majority of clients in those papers appear to be from countries with socialist or strongly interventionist, populist governments.

Socialist countries still tend to have winners and losers just like capitalist countries.  However, those winners and losers are not determined by success in making products and services in the marketplace, but by success in reaching a leadership position in the government or cozying up to those in government.  The Soviet government elite had special privileges and special stores not accessible to ordinary Russians.  The Castro brothers and Hugo Chavez's daughter are among the richest people in the western hemisphere.

However, these wealthy leaders now have two problems.  First, they likely spend most of their time spouting egalitarian claptrap, so they would like to hide their wealth in order to mask the obvious consistency problem.  Perhaps more importantly, their socialist policies have likely destroyed the country's economy -- there is, for example, no place any sane person would want to invest a billion dollars in Venezuela.  They need to get their money out of the country, but because everyone else in a socialist country is also trying to get their money out, the self-same leader has likely instituted capital controls.  So the leader needs to put his or her money in a different country where it can actually be invested productively, and in doing so must evade his or her own capital controls.

Greatest Video on Management Ever

Perhaps it is just because of the day I have had, but I am increasingly convinced that this video portrays the true experience of running a company better than any other video I have seen.

New Star Wars Trailer

This looks encouraging.

AT-ATs are back!

The Virtues of Short-Selling

Is there anything that rankles "anti-speculator" populists more than the ability to short stocks?  From time to time, countries that are upset about falling markets will ban short-selling.  But I have defended the shorting of stocks (and other assets) as a critical market mechanism that helps to limit damaging bubbles.  I wrote waaaaaay back in 2008, after the US temporarily banned short selling of certain assets:

At the start of the bubble, a particular asset (be it an equity or a commodity like oil) is owned by a mix of people who have different expectations about future price movements.  For whatever reasons, in a bubble, a subset of the market develops rapidly rising expectations about the value of the asset.  They start buying the asset, and the price starts rising.  As the price rises, and these bulls buy in, folks who owned the asset previously and are less bullish about the future will sell to the new buyers.  The very fact of the rising price of the asset from this buying reinforces the bulls' feeling that the sky is the limit for prices, and bulls buy in even more.

Let's fast forward to a point where the price has risen to some stratospheric levels vs. the previous pricing as well as historical norms or ratios.  The ownership base for the asset is now disproportionately made up of those sky-is-the-limit bulls, while everyone who thought these guys were overly optimistic and a bit wonky has sold out.  99.9% of the world now thinks the asset is grossly overvalued.  But how does it come to earth?  After all, the only way the price can drop is if some owners sell, and all the owners are super-bulls who are unlikely to do so.  As a result, the bubble might continue and grow long after most of the world has seen the insanity of it.

Thus, we have short-selling.  Short-selling allows the other 99.9% who are not owners to sell part of the asset anyway, casting their financial vote [on] the value of the company.  Short-selling shortens bubbles, hastens the reckoning, and in the process generally reduces the wreckage on the back end.

I am remembering this old post because Arnold Kling links to an interesting bit by economists discussing The Big Short, who among a number of interesting things say this:

Shorting the market in the way they did is very risky, and one has to be very confident, perhaps overconfident, in one’s forecast to take such risks. As a consequence, many people who were pessimistic about the housing market simply stayed on the sidelines—which in turn meant that for a while, valuations in the market primarily reflected the beliefs of optimists.

The timing issue is key.  I have been right in probably 4 out of the 5 major market shorting opportunities I have identified in the last 10 years, but have been on average 2 years early with all of them, meaning I lost money on most of them, or made money only after enduring some really big paper losses for a while.

The Downside of Web/Cloud Enabled Devices (Including My Oddest Analogy of the Week)

Google's parent Alphabet is abandoning support for Revolv's Smart Home Hub (which it bought a while back).  In and of itself, this is part of an irritating strategy (pursued enthusiastically both by Alphabet and Apple) of identifying edgy new devices with enthusiastic user bases, buying them, and then shutting them down.   I was a SageTV fan and user back in the day until Google bought it and shut it down (as a potential competitor to GoogleTV and its other streaming products).  The bright side is that this pushed me to XBMC/KODI, which is better.  The dark side is that I am sure Google could easily write those guys a check and then they will be gone too.

Anyway, after SageTV was shut down by Google, I could still use the hardware and software, it just did not get improved or updated or supported any more.  But increasingly new electronic products are requiring some sort of cloud integration or online account activation.  To work, the product actually has to check in with the manufacturer's servers.  So what happens when those servers are shut down?

Alphabet-owned company Nest is going to pull the plug on the Revolv smart home hub and app on May 15, rendering the hardware unusable next month.

Just to be clear on how much of a big deal this is, the company isn't only out to stop support but to really disable the device and turn the hub into a $300 teardrop-shaped brick. How much does a pitchfork go for nowadays?

...Needless to say, existing users are outraged by the development, and they have very good reason to be so. "When software and hardware are intertwined, does a warranty mean you stop supporting the hardware or does it mean that the manufacturer can intentionally disable it without consequence? Tony Fadell seems to believe the latter. Tony believes he has the right to reach into your home and pull the plug on your Nest products," Arlo Gilbert, CEO of Televero and formerly proud owner of a Revolv hub, says, emphasizing that "Google is intentionally bricking hardware that he owns."

Video game enthusiasts have worried about this for years, and have started to encounter this problem, as the new most-favored copyright protection scheme is to require an online account and an account check each time the game is run.  Publishers try to say the online component is adding value, and they do a few things like leader boards and achievements, but the primary rationale is copy protection.    Personally I find this generally easier to work with than other types of copy protection that have been tried (I really like Steam, for example), but what happens when the login servers are shut down?

This sort of reminds me, oddly enough, of cemeteries.  There used to be a problem where private cemetery owners would sell out the cemetery, fill it up, and move on.  But then the cemetery itself would fall apart.  It's not like the owners are still around to pay association dues like condo owners do.  Once people figured out that problem, they quickly began demanding that cemeteries have a plan for long-term maintenance, with assets in trust or some such thing.  Perhaps the hardware and software industry will do the same thing.  I could see a non-profit trust getting set up by the major players to which manufacturers pay dues in exchange for having the trust take over their servers after a product is abandoned.

Raising the Cost of Hiring Unskilled Workers by 50% is A Bad Way to Fight Poverty

After my prior post, I have boiled the chart I included from Mark Perry down to the key data that I think really makes the point.  Household income is obviously a product of hours worked and hourly wages.  Looking at the chart below, poverty seems to be much more a function of not working than of low wages.  Which makes California's decision to raise the price of hiring unskilled workers by 50% (by raising their minimum wage from $10 to $15) all the more misguided.

[table: household income, hours worked, and earners per household by income quintile]

Note the calculations in the last two lines, which look at two approaches to fighting poverty.  If we took the poorest 20% and kept their current number of hours worked the same, but magically raised their hourly earnings to those of the second quintile (i.e., from $14.21 to $17.33), it would increase their annual household income by $2,558, a 22% increase (I say magically because clearly if wages are raised via a minimum wage mandate, employment in this group would drop even further, likely offsetting most of the gains).  However, if instead we did nothing to their wages but encouraged more employment, such that their number of workers rose to that of the second quintile, this would increase household income by a whopping $13,357, a 115% increase.
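As a quick cross-check, the two comparisons can be reproduced with nothing beyond the figures quoted above; the implied hours and baseline income below are back-calculated from those quoted numbers, not read from the underlying Census table.

```python
# Figures quoted in the paragraph above.
wage_q1, wage_q2 = 14.21, 17.33
raise_gain = 2558        # quoted income gain from lifting hourly earnings to the Q2 level
employment_gain = 13357  # quoted income gain from lifting hours worked to the Q2 level

implied_hours = raise_gain / (wage_q2 - wage_q1)   # hours of work implied by the wage-path gain
baseline_income = raise_gain / 0.22                # a 22% gain implies roughly this baseline

print(f"implied household hours worked per year: {implied_hours:.0f}")   # ~820
print(f"wage path: +{raise_gain / baseline_income:.0%}, "
      f"employment path: +{employment_gain / baseline_income:.0%}")      # +22% vs +115%
```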

From this, would you logically try to fight poverty by forcing wages higher (which will almost surely reduce employment) or by trying to increase employment?

Why The Minimum Wage Does Not Make Moral Sense: Unemployment, Not Low Wage Rate, Causes Most Poverty

In response to his new $15 minimum wage in California, Governor Jerry Brown said:

Economically, minimum wages may not make sense. But morally, socially, and politically they make every sense because it binds the community together to make sure parents can take care of their kids.

Let me explain as briefly as I can why this minimum wage increase is immoral.  We will use data from the chart below, which was cribbed from Mark Perry in this post.

[table: household income and work data by quintile, from Mark Perry]

The average wage of people who work in the poorest 20% of households in the US is already near $15 ($28,417 divided by 2,000 full-time hours = about $14.21 per hour).    This is not that much lower than the hourly earnings of those in the second-poorest or even the middle quintile.  So why are they poor?  The biggest difference is that while only 16% of middle-quintile households had no one who worked, and 31.5% of households in the second-poorest quintile had no one who worked, a whopping 63% of households in the poorest 20% had no one who worked.  Only 16.1% of poor adults had a full time job.

The reason for poverty, then, is not primarily one of wage rate; it is one of achieving full-time employment.  Many of these folks have limited education, few job skills, little or no work experience, and may have poor language skills.  And California has just increased the cost of giving these folks a job by 50%.  The poor will be worse off, as not only will more of them miss out on the monetary benefits of employment, but also the non-monetary ones (building a work history, learning basic skills, etc.)

Past studies have shown that most of the benefit of the minimum wage goes to non-poor households (i.e., second and third earners in middle-class homes).  The targets Jerry Brown speaks of, parents earning the minimum wage to take care of families, are perhaps only 1/8 of minimum wage earners.

MaCurdy found that less than 40% of wage increases [from a minimum wage hike] went to people earning less than twice the poverty line, and among that group, about a third of them are trying to raise a family on the minimum wage.

Of course, the price of a lot of stuff poor people have to buy in California is about to go up.  We are going to have to raise our campground rates by 20-25% to offset the labor cost increase.  But that is another story.

Incredibly, Not A Single University Has Challenged This On Their Own

FIRE is looking for a client (a university or an aggrieved student) whom it can help sue the Department of Education over its sexual misconduct guidance:

Five years ago today, the Department of Education’s Office for Civil Rights (OCR) announced sweeping new requirements for colleges and universities adjudicating allegations of sexual misconduct. By unilaterally issuing these binding mandates via a controversial “Dear Colleague” letter (DCL), OCR ignored its obligation under federal law to notify the public of the proposed changes and solicit feedback.

To correct this error, and to begin to fix a broken system of campus sexual assault adjudication that regularly fails all involved, the Foundation for Individual Rights in Education (FIRE) seeks a student or institution to challenge OCR’s abuse of power. FIRE has made arrangements to secure legal counsel for a student or institution harmed by OCR’s mandates and in a position to challenge the agency’s violation of the Administrative Procedure Act (APA). In keeping with FIRE’s charitable mission to advance the public interest, representation will be provided at no cost to the harmed party.

“In the five years since its issuance, OCR has acted as though the 2011 Dear Colleague letter is binding law—but it isn’t,” said FIRE Executive Director Robert Shibley. “By circumventing federal law, OCR ignored all stakeholders: victims, the accused, civil liberties advocates, administrators, colleges, law enforcement, and the general public. Real people’s lives are being irreparably harmed as a result. It’s time that OCR be held accountable.”

The DCL requires that schools use the low “preponderance of the evidence” standard of proof (i.e., that they find an accused student guilty with just 50.01 percent certainty) when adjudicating claims of sexual assault and sexual harassment. The DCL’s requirement that colleges use this standard—found nowhere in Title IX or its implementing regulations, and specified before 2011 only in letters between OCR and individual schools—effectively creates a new substantive rule for institutions to follow.

Here is what is amazing to me:  Not a single university has challenged this rule, even though it trashes the due process rights of its male students.  These same universities had no problem defying the law on things like ROTC and army recruiting (which represented merely voluntary enticements of their students) but have rolled over and played dead over this much more direct threat to their students' well-being.

Thank God We Don't Have Cable Neutrality

Time Warner Cable, the owner of the Dodgers' local broadcast rights, is continuing to battle with local cable companies to get its Dodgers channel added to their packages.  Like last year, it appears that no deal will be forthcoming, and the Dodgers (and perhaps more dishearteningly, Vin Scully in his last year) won't be on many TV sets in LA this summer.  Kevin Drum essentially says bravo to the cable companies for opposing the Dodgers' bid to jack up basic cable rates in the area.

Boo hoo. They tried everything—everything, I tell you. Except, of course, for the one thing that would have worked: the right to make the Dodgers an extra-cost option, not part of basic cable. Most cable operators see no reason that every television viewer in the LA basin should have to pay 60 bucks a year more in cable fees regardless of whether or not they care about baseball.

And that's the one thing TWC won't do. Why? Because then it will become crystal clear just how few households actually care enough about the Dodgers to pay for them. And that would truly be a disaster beyond reckoning. There's a limit to the amount of sports programming that people are willing to have crammed down their throats!

I actually agree with him, and will add that it is always great to see a progressive acknowledge that consumers do actually hold businesses accountable.

But I will observe that had we adopted cable neutrality rules** as we have for net neutrality, the cable companies would have found it impossible, or at least much more difficult, to resist carrying a pushy and expensive content provider.  This sort of intra-supply-chain tug of war, which generally benefits consumers in the long run (as it has in LA, at least for Drum), is essentially outlawed by net neutrality rules, which basically declare content providers the victors by default.  As I wrote before:

Net Neutrality is one of those Orwellian words that mean exactly the opposite of what they sound like.  There is a battle that goes on in the marketplace in virtually every communication medium between content creators and content deliverers.  We can certainly see this in cable TV, as media companies and the cable companies that deliver their product occasionally have battles that break out in public.   But one could argue similar things go on even in, say, shipping, where magazine publishers push for special postal rates and Amazon negotiates special bulk UPS rates.

In fact, this fight for rents across a vertical supply chain exists in virtually every industry.  Consumers will pay so much for a finished product.  Any vertical supply chain is constantly battling over how much each step in the chain gets of the final consumer price.

What "net neutrality" actually means is that certain people, including apparently the President, want to tip the balance in this negotiation towards the content creators (no surprise given Hollywood's support for Democrats).  Netflix, for example, takes a huge amount of bandwidth that costs ISP's a lot of money to provide.  But Netflix doesn't want the ISP's to be be able to charge for this extra bandwidth Netflix uses - Netflix wants to get all the benefit of taking up the lion's share of ISP bandwidth investments without having to pay for it.  Net Neutrality is corporate welfare for content creators....

I am still pretty sure the net effect of these regulations, whether they really affect net neutrality or not, will be to disarm ISP's in favor of content providers in the typical supply chain vertical wars that occur in a free market.  At the end of the day, an ISP's last resort in negotiating with a content provider is to shut them out for a time, just as the content provider can do the same in reverse to the ISP's customers.  Banning an ISP from doing so is like banning a union from striking.

** Footnote:  OK, we sort of did have cable neutrality in one respect -- over-the-air broadcasters were able to obtain crony legislation requiring cable companies to carry every locally broadcast channel.  So that channel 59 you never bothered to watch now gets equal treatment with the NBC affiliate.  This was a huge boon for these stations, and the value of these often tiny stations exploded with this must-carry rule.  Essentially they were given an asset for free, i.e., a position in the cable lineup, that other competitors had to fight for.

Al Gore, as an aside, actually became rich with exactly this game.  It is hard to fight your way into a cable lineup nowadays.  Al Gore did it with his Current TV startup, based on his name and a promise of a sort of MTV for politics.  The channel went nowhere and lost a lot of money, but it now had one valuable asset -- placement in cable TV lineups.  So it sold this asset to Al Jazeera, which had struggled to get placement.

Denying the Climate Catastrophe: 4a. Actual Temperature Data

This is the fourth chapter of an ongoing series.  Other parts of the series are here:

  1. Introduction
  2. Greenhouse Gas Theory
  3. Feedbacks
  4.  A)  Actual Temperature Data (this article);   B) Problems with the Surface Temperature Record
  5. Attribution of Past Warming:  A) Arguments for it being Man-Made; B) Natural Attribution
  6. Climate Models vs. Actual Temperatures
  7. Are We Already Seeing Climate Change
  8. The Lukewarmer Middle Ground
  9. A Low-Cost Insurance Policy

In our last chapter, we ended a discussion on theoretical future warming rates by saying that no amount of computer modelling was going to help us choose between various temperature sensitivities and thus warming rates.  Only observational data was going to help us determine how the Earth actually responds to increasing CO2 in the atmosphere.  So in this chapter we turn to the next part of our framework: our observations of Earth's temperatures, which are among the data we might use to support or falsify the theory of catastrophic man-made global warming.


The IPCC position is that the world (since the late 19th century) has warmed about 0.8C.  This is a point on which many skeptics will disagree, though perhaps not as substantially as one might expect from the media.   Most skeptics, myself included, would agree that the world has certainly warmed over the last 100-150 years.  The disagreement tends to be in the exact amount of warming, with many skeptics contending that the amount of warming has been overstated due to problems with temperature measurement and aggregation methodology.

For now, we will set those issues aside until part B of this chapter, where we will discuss them in more detail.  One reason to do so is to focus, at least at first, on the basic point of agreement that the Earth has indeed warmed somewhat.  But another reason to put these differences over magnitude aside is that we will find, a few chapters hence, that they essentially don't matter.  Even the IPCC's 0.8C estimate of past warming does not support its own estimates of temperature sensitivity to CO2.

Surface Temperature Record

The most obvious way to measure temperatures on the Earth is with thermometers near the ground.  We have been measuring the temperature at a few select locations for hundreds of years, but it really is only in the last century that we have fairly good coverage of the land surface.  And even then, our coverage of places like the Antarctic, central Africa, parts of South America, and all of the oceans (which cover about 70% of the Earth) is still spotty today.  So coming up with some sort of average temperature for the Earth is not a straight averaging exercise -- data must be infilled and estimated, making the process complicated and subject to a variety of errors.

But the problem is more difficult than just data gaps.  How does one actually average a temperature from Denver with a temperature from San Diego?  While a few folks attempt such a straight average, scientists have developed a theory that one can more easily average what are known as temperature anomalies than one can average the temperatures themselves.  What is an anomaly?  Essentially, for a given thermometer, researchers will establish an average for that thermometer for a particular day of the year.  The exact time period or even the accuracy of this average is not that important, as long as the same time period is used consistently.  Then, the anomaly for any given measurement is the deviation of the measured temperature from its average.  So if the average historical temperature for this day of the year is 25C and the actual measured temperature for the day is 26C, the anomaly for today at this temperature station is +1.0C.
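To make the anomaly idea concrete, here is a minimal Python sketch; it is my own illustration with made-up station data and a made-up base period, not any agency's actual code.

```python
# Illustrative only: compute anomalies for one hypothetical station.
# 'readings' maps (year, month, day) -> observed temperature in degrees C.
from collections import defaultdict

def daily_baselines(readings, base_years):
    """Average temperature for each calendar day over the chosen base period."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (year, month, day), temp in readings.items():
        if year in base_years:
            sums[(month, day)] += temp
            counts[(month, day)] += 1
    return {day: sums[day] / counts[day] for day in sums}

def anomalies(readings, base_years):
    """Deviation of each reading from its calendar day's baseline average."""
    base = daily_baselines(readings, base_years)
    return {(y, m, d): temp - base[(m, d)]
            for (y, m, d), temp in readings.items() if (m, d) in base}

# If the baseline for June 1 is 25C and today's reading is 26C, the anomaly is +1.0C.
readings = {(1995, 6, 1): 24.5, (1996, 6, 1): 25.5, (2016, 6, 1): 26.0}
print(anomalies(readings, base_years=range(1990, 2000)))
# {(1995, 6, 1): -0.5, (1996, 6, 1): 0.5, (2016, 6, 1): 1.0}
```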

Scientists then develop programs that spatially average these temperature anomalies for the whole Earth, while also adjusting for a myriad of factors, from time-of-day changes in measurement to changes in station technology over time to actual changes in the physical location of the measurement.  This is a complicated enough task, with enough explicit choices that must be made about techniques and adjustments, that there are many different temperature metrics floating around out there, many of which get different results from essentially the same data.  The Hadley Centre in England's HadCRUT4 global temperature metric is generally considered the gold standard, and is the one used preferentially by the IPCC.  Its metric is shown below, with the monthly temperature anomaly in dark blue and the 5-year moving average (centered on its midpoint):

[Chart: HadCRUT4 monthly global temperature anomaly with 5-year moving average]

Again, the zero point of the chart is arbitrary and merely depends on the period of time chosen as the base or average.  Looking at the moving average, one can see the temperature anomaly bounces around -0.3C in the late 19th century and has been around +0.5C over the last several years, which is how we get to about 0.8C warming.
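For readers curious about the mechanics of the spatial-averaging step described above, here is a minimal sketch of one common choice: weighting each grid cell's anomaly by the cosine of its latitude, so small polar cells do not count as much as large tropical ones.  This is my own illustration with made-up numbers; real products like HadCRUT4 involve many more adjustments.

```python
import math

def global_mean_anomaly(cells):
    """cells: list of (latitude_degrees, anomaly_C) for grid cells with data."""
    weights = [math.cos(math.radians(lat)) for lat, _ in cells]
    weighted_sum = sum(w * anom for w, (_, anom) in zip(weights, cells))
    return weighted_sum / sum(weights)

# A +1.0C tropical cell and a -0.5C polar cell: the tropical cell dominates.
print(round(global_mean_anomaly([(0.0, 1.0), (80.0, -0.5)]), 2))  # ~0.78
```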

Satellite Temperature Record

There are other ways to take temperature measurements, however.  Another approach is to use satellites to measure surface temperatures (or at least near-surface temperatures).  Satellites measure temperature by measuring the thermal microwave emissions of oxygen molecules in the lower troposphere (perhaps 0-3 miles above the Earth).  Satellites have the advantage of being able to look at the entire Earth without gaps, and are not subject to the siting biases of surface temperature stations (which will be discussed in part B of this chapter).

The satellite record does, however, rely on a shifting array of satellites, all of which have changing orbits for which adjustments must be made.  Of necessity, the satellite record cannot reach as far back into the past.  And the satellites are not actually measuring the temperature of the Earth's surface, but rather a temperature a mile or two up.  Whether that matters is subject to debate, but the clincher for me is that the IPCC and most climate models have always shown that the earliest and strongest anthropogenic warming should show up in exactly this spot -- the lower troposphere -- which makes observation of this zone a particularly good way to look for a global warming signal.

Roy Spencer and John Christy have what is probably the leading satellite temperature metric, called "UAH" as shorthand for the University of Alabama in Huntsville, whose space science center maintains it.  The UAH record looks like this:

[Chart: UAH satellite lower-troposphere temperature anomaly]

Note that the absolute magnitude of the anomaly isn't comparable between the surface and satellite record, as they use different base periods, but changes and growth rates in the anomalies should be comparable between the two indices.

The first thing to note is that, though they are different, both the satellite and surface temperature records show warming since 1980.  For all that some skeptics may want to criticize the authors of the surface temperature databases, and there are indeed some grounds for criticism, these issues should not distract us from the basic fact that in every temperature record we have (including other technologies like radiosonde balloons), we see recent warming.

In terms of magnitude, the two indices do not show the same amount of warming -- since 1980 the satellite temperature record shows about 30% less warming than does the surface temperature record for the same period.  So which is right?  We will discuss this in more depth in part B, but the question is not made any easier by the fact that the surface records are compiled by prominent alarmist scientists while the satellite records are maintained by prominent skeptic scientists.  This causes each side to accuse the other of having its thumb on the scale, so to speak.  I personally like the satellite record because of its broader coverage and the fact that its manual adjustments (which are required of both technologies) are for a handful of instruments rather than thousands, and are thus easier to manage and get right.  But I am also increasingly of the opinion that the differences are minor, and that neither is consistent with catastrophic forecasts.

So rather than getting ourselves involved in the dueling temperature data set food fight (we will dip a toe into it in part B), let's apply both data sets to several propositions we see frequently in the media.  We will quickly see that the answers we reach do not depend on the data set chosen.

Test #1:  Is Global Warming Accelerating

One meme you will hear all the time is that "global warming is accelerating."  As of today, that phrase had 550,000 results on Google.  For example:

[Image: example headlines claiming that global warming is accelerating]

So.  Is that true?  They can't print it if it's not true, right (lol)?  Let's look first at the satellite record through the end of 2015, when this presentation was put together (there is an El Nino-driven spike in the two months after this chart was made, which does not affect the conclusions that follow in the least, but I will update the chart to include it as soon as I can).

[Chart: UAH satellite temperature record through the end of 2015]

If you want a name for this chart, I could call it the "bowl of cherries" because it has become a cherry-picker's delight.  Everyone in the debate can pick a starting point and an end point in this jagged data to support whatever trend they want.  So how do we find an objective basis to define end points for this analysis?  Well, my background is more in economic analysis.  Economists have the same problem in looking at trends for things like employment or productivity, because there is a business cycle that adds volatility to these numbers above and beyond any long-term trend.  One way they manage this is to measure variables from peak to peak of the economic cycle.

I have done something similar.  The equivalent cyclical peaks in the temperature world are probably the very strong El Nino events.  There was one in 1998 and there is one occurring right now in late 2015/early 2016.  So I defined my period as the 18 years from peak to peak.  By this timing, the satellite record shows temperatures to be virtually dead flat for those 18 years.  This is "the pause" that you may have heard of in climate debates.  Such an extended pause is not predicted by global warming theory, particularly when the theory (as in the IPCC main case) assumes high temperature sensitivity to CO2 and low natural variation in temperatures.
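To make the peak-to-peak method concrete, here is a minimal Python sketch.  The least-squares trend function is standard, but the demo series is synthetic (standing in for real UAH or HadCRUT4 annual anomalies), so treat it as an illustration of the method rather than a reproduction of the actual data.

```python
def trend_per_century(years, anoms):
    """Ordinary least-squares slope of anomaly vs. year, in degrees C per century."""
    n = len(years)
    mx, my = sum(years) / n, sum(anoms) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, anoms))
             / sum((x - mx) ** 2 for x in years))
    return slope * 100.0

def compare_periods(series, split=1998, start=1980, end=2016):
    """Warming rate before vs. after the split year (peak-to-peak endpoints)."""
    first = [(y, series[y]) for y in range(start, split + 1) if y in series]
    second = [(y, series[y]) for y in range(split, end + 1) if y in series]
    return (trend_per_century(*zip(*first)), trend_per_century(*zip(*second)))

# Synthetic demo: steady warming to 1998, then dead flat ("the pause").
demo = {y: 0.01 * (y - 1980) for y in range(1980, 1999)}
demo.update({y: demo[1998] for y in range(1999, 2017)})
print(compare_periods(demo))  # roughly (1.0, 0.0) degrees C per century
```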

So if global warming were indeed accelerating, we would expect the warming rate over the last 18 years to be higher than the rate over the previous 18 years.  But just the opposite is true:

[Chart: satellite warming rate, last 18 years vs. prior 18 years]

While "the pause" does not in and of itself disprove the theory of catastrophic manmade global warming, it does easily falsify the myriad statements you see that global warming is accelerating.  At least for the last 20 years, it has been decelerating.

By the way, this is not somehow an artifact of just the satellite record.  This is what the surface record looks like for the same periods:

[Chart: surface warming rate, last 18 years vs. prior 18 years]

Though it shows (as we discussed earlier) higher overall warming rates, the surface temperature record also shows a deceleration rather than acceleration over the last 20 years.

 

Test #2:  Are Temperatures Rising Faster than Expected

OK, let's consider another common meme, that the "earth is warming faster than predicted."

[Image: example headlines claiming the Earth is warming faster than predicted]

Again, there are over 500,000 Google matches for this meme.  So how do we test it?  Well, certainly not against the last IPCC forecasts -- they are only a few years old.  The first real high-sensitivity or catastrophic forecast we have is from James Hansen, often called the father of global warming.

[Chart: Hansen's 1988 temperature forecast, Scenarios A, B, and C]

In June of 1988, Hansen made a seminal presentation to Congress on global warming, including this very chart (sorry for the sucky 1980's graphics).  In his testimony, he presented his models for the Earth's temperature, which showed a good fit with history**.  Using his model, he then created three forecasts:  Scenario A, with high rates of CO2 emissions;  Scenario B, with more modest emissions; and Scenario C, with drastic worldwide emissions cuts (plus volcanoes, which tend to belch dust and chemicals that have a cooling effect).  Surprisingly, we can't even get agreement today about which forecast for CO2 production was closer to the mark (throwing in the volcanoes makes things hard to parse), but it is pretty clear that over the nearly 30 years since this forecast, the Earth's CO2 output has been somewhere between A and B.

[Chart: actual CO2 emissions compared with Hansen's scenarios]

As it turns out, it doesn't matter whether we actually followed the CO2 emissions from A or B.  The warming forecasts for Scenarios A and B turn out to be remarkably similar.  I used to just overlay temperature actuals onto Hansen's chart, but it is a little hard to get the zero point right and it led to too many food fights.  So let's pull the Scenario A and B forecasts off the chart and compare them a different way.

[Chart: Hansen's Scenario A and B forecast warming rates vs. actual warming rates]

The left side of the chart shows Hansen's Scenarios A and B, scanned directly from his chart.  Scenario A implies a warming rate from 1986 to 2016 of 3.1C per century.  Scenario B is almost as high, at 2.8C per century.  But as you can see on the right, the actual warming rates we have seen over the same period are well below these forecasts.  The surface temperature record shows only about half, and the satellite record only about a third, of the warming Hansen predicted.  There is no justification for saying that recent warming rates have been higher than expected or forecast -- in fact, the exact opposite has been true.

We see the same thing when looking at past IPCC forecasts.  At each of its assessments, issued every five to seven years, the IPCC has included a forecast range for future temperatures.  In this case, though, we don't have to create a comparison with actuals because the most recent (5th) IPCC Assessment did it for us:

[Chart: IPCC AR5 comparison of past forecast ranges with observed temperatures]

The colored bands are their past forecasts.  The grey areas are the error bands on the forecast.  The black dots are global temperatures (which actually are shown with error bars, which is good practice but seldom done except perhaps when they are trying to stretch to get into the forecast range).  As you can see, temperatures have been so far below forecasts that they are dropping out of the low end of even the most generous forecast bands.  If temperatures were rising faster than expected, the black dots would be above the orange and yellow bands.  We therefore have to come to the conclusion that, at least for the last 20-30 years, temperatures have not been rising faster than expected, they have been rising slower than expected.

Day vs. Night

There is one other phenomenon we can see in the temperature data that we will come back to in later chapters:  much of the warming over the last century has been at night, rather than in the daytime.  There are two possible explanations for this.  The first is that most anthropogenic warming models predict more nighttime warming than daytime warming.  The other possibility is that a portion of the warming in the 20th century temperature record is actually spurious bias from the urban heat island effect, due to the siting of temperature stations in or near cities, since urban heat island warming shows up mainly at night.  We will discuss the latter effect in part B of this chapter.

Whatever the cause, much of the warming we have seen has occurred at night, rather than during the day.  Here is a great example from the Amherst, MA temperature station (Amherst was the first location where I gave this presentation, if that seems like an odd choice).

[Chart: Amherst, MA daytime vs. nighttime warming trends since 1945]

As you can see, the warming rate since 1945 is 5 times higher at night than during the day.  This directly affects average temperatures, since the daily average temperature for a location in the historic record is simply the average of the daily high and the daily low.  Yes, I know this is not exactly accurate, but given the technology of the past, it was the best that could be done.
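A quick illustration of why this matters (made-up numbers): because the recorded daily mean is just the high and low averaged, warming that shows up only at night still moves the mean.

```python
def daily_mean(tmax, tmin):
    # Historic-record convention: daily mean = (daily high + daily low) / 2
    return (tmax + tmin) / 2.0

print(daily_mean(30.0, 15.0))  # 22.5
print(daily_mean(30.0, 16.0))  # 23.0 -- a 1C warmer night raises the mean by 0.5C
```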

The news media likes to cite examples of heat waves and high temperature records as "proof" of global warming.  We will discuss this later, but this is obviously a logical fallacy -- one can't prove a trend in noisy data simply by citing isolated data points in one tail of the distribution.  But it is also fallacious for another reason -- we are not actually seeing any upward trend in high temperature records, at least for daytime highs:

[Chart: count of record daily highs (Tmax) and record-high daily lows (Tmin) set each year, long-record US stations]

To get this chart, we obviously have to eliminate newer temperature stations from the data set -- any temperature station that is only 20 years old will have all of its all-time records in the last 20 years (you would be surprised at how many otherwise reputable scientists miss simple things like this).  Looking at just the US temperature stations for which we have a long record, the black line shows there is really no upward trend in the number of high temperature records (Tmax) being set.  The 1930s were brutally hot, and if not for some manual adjustments we will discuss in part B of this section, they would likely still show as the hottest recent era for the US.  The grey line (Tmin) shows that, while there is still no upward trend, we are actually seeing more records set for the warmest daily lows (the highest low, as it were) than for daily highs.  The media is, essentially, looking in the wrong place, but I sympathize because a) broiling hot daytime highs are sexier and b) it is brutally hard to talk about highest low temperatures without being confusing as hell.
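Here is a minimal sketch of the station-age point; it is my own illustration with hypothetical stations, not the analysis behind the chart, but it shows the filter that keeps young stations from flooding recent years with trivially new records.

```python
def record_highs_per_year(stations, must_start_before=1940):
    """Count new all-time Tmax records by year, using only long-record stations.

    stations: {station_id: {year: hottest reading that year, in degrees C}}
    """
    counts = {}
    for by_year in stations.values():
        years = sorted(by_year)
        if years[0] >= must_start_before:
            continue  # a short record would set "records" in recent years by default
        running_max = float("-inf")
        for year in years:
            if by_year[year] > running_max:
                running_max = by_year[year]
                if year != years[0]:  # the first year is trivially a record
                    counts[year] = counts.get(year, 0) + 1
    return counts

stations = {
    "old_station": {1930: 43.0, 1936: 45.0, 1980: 44.0, 2012: 44.5},
    "new_station": {2000: 41.0, 2012: 42.0},  # excluded: record starts too late
}
print(record_highs_per_year(stations))  # {1936: 1}
```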

In our next chapter, or really part B of this chapter, we will discuss some of the issues that may be leading the surface temperature record to be exaggerated, or at least inaccurate.

Chapter 4, Part B on problems with the surface temperature record continues here.

If you want to skip Part B, and get right on with the main line of the argument, you can go straight to Chapter 5, part A, which starts in on the question of how much of past warming can be attributed to man.

 

** Footnote:  The history of Wall Street is full of bankrupt people whose models exactly matched history.  I have done financial and economic modeling for decades, and it is surprisingly easy to force multi-variable models to match history.  The real test is how well the model works going forward.  Both Hansen's 1988 models and the IPCC's many models do an awesome job matching history, but quickly go off the rails in future years.  I am reminded of a simple but famous example of the perfect past correlation between certain NFL outcomes and Presidential election outcomes.  This NFL model of presidential elections perfectly matches history, but one would be utterly mad to bet future elections based on it.

Net Neutrality: I Told You So

From the WSJ (emphasis added):

Netflix now admits that for the past five years, all through the debate on net neutrality, it was deliberately slowing its videos watched by users on AT&T and Verizon’s wireless networks. The company did so for good reason—to protect users from overage penalties. But it never told users at a time when Netflix was claiming carriers generally were deliberately slowing its service to protect their own TV businesses—a big lie, it turned out.

All this has brought considerable and well-deserved obloquy on the head of Netflix CEO Reed Hastings for his role in inviting extreme Obama utility regulation of the Internet. Others deserve blame too. Google lobbied the administration privately but was too chicken to speak up publicly against utility regulation.

But Netflix appears to have acted out of especially puerile and venal motives. Netflix at the time was trying to use political pressure to cut favorable deals to connect directly to last-mile operators like Comcast and Verizon—a penny-ante consideration worth a few million dollars at best, for which Netflix helped create a major public policy wrong-turn.

This is what I wrote about net neutrality a couple of years ago:

Net Neutrality is one of those Orwellian words that mean exactly the opposite of what they sound like.  There is a battle that goes on in the marketplace in virtually every communication medium between content creators and content deliverers.  We can certainly see this in cable TV, as media companies and the cable companies that deliver their product occasionally have battles that break out in public.   But one could argue similar things go on even in, say, shipping, where magazine publishers push for special postal rates and Amazon negotiates special bulk UPS rates.

In fact, this fight for rents across a vertical supply chain exists in virtually every industry.  Consumers will pay so much for a finished product.  Any vertical supply chain is constantly battling over how much each step in the chain gets of the final consumer price.

What "net neutrality" actually means is that certain people, including apparently the President, want to tip the balance in this negotiation towards the content creators (no surprise given Hollywood's support for Democrats).  Netflix, for example, takes a huge amount of bandwidth that costs ISP's a lot of money to provide.  But Netflix doesn't want the ISP's to be be able to charge for this extra bandwidth Netflix uses - Netflix wants to get all the benefit of taking up the lion's share of ISP bandwidth investments without having to pay for it.  Net Neutrality is corporate welfare for content creators....

I am still pretty sure the net effect of these regulations, whether they really affect net neutrality or not, will be to disarm ISP's in favor of content providers in the typical supply chain vertical wars that occur in a free market.  At the end of the day, an ISP's last resort in negotiating with a content provider is to shut them out for a time, just as the content provider can do the same in reverse to the ISP's customers.  Banning an ISP from doing so is like banning a union from striking.

 

You Know It Is Time to Short the Economy When...

.... your paper prints headlines that say "Economists: Zero chance of Arizona recession."  I am sure Houston would have said the exact same thing, right up until oil prices dropped to $30 and suddenly there was like a 100% chance.  When the Arizona Republic makes a definitive economic prediction, bet the other side.

Anyway, I am considering this headline to be a flashing indicator of the top.  In 2005 I wrote about another such indicator that told me the housing market had peaked:

So, to date [May 31, 2005], I have been unconvinced about the housing bubble, at least as it applied to our community.  After all, demographics over the next 20-30 years are only going to support Scottsdale area real estate.

However, over the weekend I had a disturbing experience:   At a social function, I heard a dentist enthusiastically telling a doctor that he needs to be buying condos and raw land.  The dentist claimed to be flipping raw land parcels for 100% in less than 6 months.

For those who don't know, this is a big flashing red light.  When doctors and dentists start trying to sell you on a particular type of investment, run away like they have the plague.  At Harvard Business School, I had a great investment management class with a professor who has schooled many of the best in the business.  If an investment we were analyzing turned out to be a real dog, he would ask us "who do you sell this to?" and the class would shout "doctors!"  And, if the investment was really, really bad, to the point of being insane, the class would instead shout "dentists!"

Which reminds me that in the last 6 months I have started hearing radio commercials again urging folks to get into the house-flipping business and make their fortune.  Whenever institutions start selling investments to you, the average Joe, rather than just investing themselves, that should be taken as a signal that we are approaching a top**.  About 12-18 months before oil prices tanked, I started getting flooded with spam calls at work trying to sell me various sorts of oil exploration investments.

** Postscript:  In 2010, when house prices were low and some were going for a song in foreclosure, there were no house flipping commercials on radio.   That is because Blackstone and other major institutions were too busy buying them up.  Now that these companies see less value, you are hearing house flipping commercials.   You know that guy who has a book with his fool-proof method for making a fortune?  So why is he wasting his time selling books for $2 a copy in royalties rather than following his method?

As A Reward for Introducing Price Competition into the Taxi Monopoly, Uber Gets Sued for Price Fixing

From Engadget:

After failing to get a class-action lawsuit dismissed, Uber CEO Travis Kalanick will go to court over price fixing claims. A US district court judge in New York ruled Kalanick has to face the class of passengers alleging that he conspired with drivers to set fares using an algorithm, including hiking rates during peak hours with so-called surge pricing. According to Reuters, district court judge Jed Rakoff ruled the plaintiffs "plausibly alleged a conspiracy" to fix pricing and that the class action could also pursue claims the set rates led to the demise of other services, like Sidecar.

I guess this is the downside of calling all their drivers independent contractors -- it leaves Uber potentially vulnerable to accusations of price fixing among these contractors.  Of course, taxi cartels have been fixing prices for decades, but that is government-assisted price-fixing, so I suppose that is OK.  It would be ironic if the first price competition introduced into the taxi business in decades were killed off by antitrust charges.

As with just about all modern anti-trust cases, this has little to do with consumer well-being and much to do with the well-being of supply-chain participants (i.e., the drivers) and competitors (i.e., Sidecar and taxis).