New Star Wars Trailer
This looks encouraging.
AT-ATs are back!
Is there anything that rankles "anti-speculator" populists more than the ability to short stocks? From time to time, countries that are upset about falling markets will ban short-selling. But I have defended stock shorting (and other asset shorting) as a critical market mechanism that helps to limit damaging bubbles. I wrote waaaaaay back in 2008, after the US temporarily banned short selling of certain assets:
At the start of the bubble, a particular asset (be it an equity or a commodity like oil) is owned by a mix of people who have different expectations about future price movements. For whatever reasons, in a bubble, a subset of the market develops rapidly rising expectations about the value of the asset. They start buying the asset, and the price starts rising. As the price rises, and these bulls buy in, folks who owned the asset previously and are less bullish about the future will sell to the new buyers. The very fact of the rising price of the asset from this buying reinforces the bulls' feeling that the sky is the limit for prices, and bulls buy in even more.
Let's fast forward to a point where the price has risen to some stratospheric levels vs. the previous pricing as well as historical norms or ratios. The ownership base for the asset is now disproportionately
made up of those sky-is-the-limit bulls, while everyone who thought these guys were overly optimistic and a bit wonky has sold out. 99.9% of the world now thinks the asset is grossly overvalued. But how does it come to earth? After all, the only way the price can drop is if some owners sell, and all the owners are super-bulls who are unlikely to do so. As a result, the bubble might continue and grow long after most of the world has seen the insanity of it.

Thus, we have short-selling. Short-selling allows the other 99.9% who are not owners to sell part of the asset anyway, casting their financial vote [on] the value of the company. Short-selling shortens bubbles, hastens the reckoning, and in the process generally reduces the wreckage on the back end.
I am remembering this old post because Arnold Kling links an interesting bit on economists discussing the Big Short, who, among a number of interesting things, say this:
Shorting the market in the way they did is very risky, and one has to be very confident, perhaps overconfident, in one’s forecast to take such risks. As a consequence, many people who were pessimistic about the housing market simply stayed on the sidelines—which in turn meant that for a while, valuations in the market primarily reflected the beliefs of optimists.
The timing issue is key. I have been right in probably 4 out of the 5 major market shorting opportunities I have identified in the last 10 years, but I have been on average 2 years early with all of them, meaning I lost money on most of them, or made money only after enduring some really big paper losses for a while.
Google's parent Alphabet is abandoning support for the Revolv smart home hub (which it bought a while back). In and of itself, this is part of an irritating strategy (pursued enthusiastically by both Alphabet and Apple) of identifying edgy new devices with enthusiastic user bases, buying them, and then shutting them down. I was a SageTV fan and user back in the day until Google bought it and shut it down (as a potential competitor to GoogleTV and its other streaming products). The bright side is that this pushed me to XBMC/KODI, which is better. The dark side is that I am sure Google could easily write those guys a check and then they will be gone too.
Anyway, after SageTV was shut down by Google, I could still use the hardware and software, it just did not get improved or updated or supported any more. But increasingly new electronic products are requiring some sort of cloud integration or online account activation. To work, the product actually has to check in with the manufacturer's servers. So what happens when those servers are shut down?
Alphabet-owned company Nest is going to pull the plug on the Revolv smart home hub and app on May 15, rendering the hardware unusable next month.
Just to be clear on how much of a big deal this is, the company isn't only out to stop support but to really disable the device and turn the hub into a $300 teardrop-shaped brick. How much does a pitchfork go for nowadays?
...Needless to say, existing users are outraged by the development, and they have very good reason to be so. "When software and hardware are intertwined, does a warranty mean you stop supporting the hardware or does it mean that the manufacturer can intentionally disable it without consequence? Tony Fadell seems to believe the latter. Tony believes he has the right to reach into your home and pull the plug on your Nest products," Arlo Gilbert, CEO of Televero and formerly proud owner of a Revolv hub, says, emphasizing that "Google is intentionally bricking hardware that he owns."
Video game enthusiasts have worried about this for years, and have started to encounter this problem, as the new most-favored copyright protection scheme is to require an online account and an account check each time the game is run. Publishers try to say the online component is adding value, and they do a few things like leaderboards and achievements, but the primary rationale is copy protection. Personally I find this generally easier to work with than other types of copy protection that have been tried (I really like Steam, for example), but what happens when the login servers are shut down?
This sort of reminds me, oddly enough, of cemeteries. There used to be a problem where private cemetery owners would sell out the cemetery, fill it up, and move on. But then the cemetery itself would fall apart. It's not like the owners are still around to pay association dues like condo owners do. Once people figured out that problem, they quickly began demanding that cemeteries have a plan for long-term maintenance, with assets in trust or some such thing. Perhaps the hardware and software industry will do the same thing. I could see a non-profit trust getting set up by the major players to which manufacturers pay dues in exchange for having the trust take over their servers after a product is abandoned.
After my prior post, I have boiled the chart I included from Mark Perry down to the key data that I think really makes the point. Household income is obviously a product of hours worked and hourly wages. Looking at the chart below, poverty seems to be much more a function of not working than of low wages. Which makes California's decision to raise the price of hiring unskilled workers by 50% (by raising its minimum wage from $10 to $15) all the more misguided.
Note the calculations in the last two lines, which look at two approaches to fighting poverty. If we took the poorest 20% and kept their current number of hours worked the same, but magically raised their hourly earnings to those of the second quintile (ie from $14.21 to $17.33), it would increase their annual household income by $2,558, a 22% increase (I say magically because, if wages were raised via a minimum wage mandate, employment in this group would clearly drop even further, likely offsetting most of the gains). However, if instead we did nothing to their wages but encouraged more employment, such that their number of workers rose to that of the second quintile, this would increase household income by a whopping $13,357, a 115% increase.
From this, would you logically try to fight poverty by forcing wages higher (which will almost surely reduce employment) or by trying to increase employment?
In response to his new $15 minimum wage in California, Governor Jerry Brown said:
Economically, minimum wages may not make sense. But morally, socially, and politically they make every sense because it binds the community together to make sure parents can take care of their kids.
Let me explain as briefly as I can why this minimum wage increase is immoral. We will use data from the chart below which was cribbed from Mark Perry in this post.
The average wage of people who work in the poorest 20% in the US is already near $15 ($28,417 divided by 2000 full-time hours = $14.21 per hour). This is not that much lower than the hourly earnings of those in the second poorest or even the middle quintiles. So why are they poor? The biggest difference is employment: while only 16% of middle quintile households had no one who worked, and 31.5% of second quintile households had no one who worked, a whopping 63% of the poorest 20% of households had no one who worked. Only 16.1% of poor adults had a full-time job.
The reason for poverty, then, is not primarily one of wage rates; it is one of achieving full-time employment. Many of these folks have limited education, few job skills, little or no work experience, and can have poor language skills. And California has just increased the cost of giving these folks a job by 50%. The poor will be worse off, as not only will more of them miss out on the monetary benefits of employment, but also the non-monetary ones (building a work history, learning basic skills, etc.)
Past studies have shown that most of the benefit of the minimum wage goes to non-poor households (ie second and third earners in middle class homes). The targets Jerry Brown speaks of, parents earning the minimum wage to take care of families, are perhaps only 1/8 of minimum wage earners.
MaCurdy found that less than 40% of wage increases [from a minimum wage hike] went to people earning less than twice the poverty line, and among that group, about a third of them are trying to raise a family on the minimum wage.
Of course, the price of a lot of stuff poor people have to buy in California is about to go up. We are going to have to raise our campground rates by 20-25% to offset the labor cost increase. But that is another story.
FIRE is looking for a client (university or aggrieved student) whom it can help sue the Department of Education over its sexual misconduct guidance:
Five years ago today, the Department of Education’s Office for Civil Rights (OCR) announced sweeping new requirements for colleges and universities adjudicating allegations of sexual misconduct. By unilaterally issuing these binding mandates via a controversial “Dear Colleague” letter (DCL), OCR ignored its obligation under federal law to notify the public of the proposed changes and solicit feedback.
To correct this error, and to begin to fix a broken system of campus sexual assault adjudication that regularly fails all involved, the Foundation for Individual Rights in Education (FIRE) seeks a student or institution to challenge OCR’s abuse of power. FIRE has made arrangements to secure legal counsel for a student or institution harmed by OCR’s mandates and in a position to challenge the agency’s violation of the Administrative Procedure Act (APA). In keeping with FIRE’s charitable mission to advance the public interest, representation will be provided at no cost to the harmed party.
“In the five years since its issuance, OCR has acted as though the 2011 Dear Colleague letter is binding law—but it isn’t,” said FIRE Executive Director Robert Shibley. “By circumventing federal law, OCR ignored all stakeholders: victims, the accused, civil liberties advocates, administrators, colleges, law enforcement, and the general public. Real people’s lives are being irreparably harmed as a result. It’s time that OCR be held accountable.”
The DCL requires that schools use the low “preponderance of the evidence” standard of proof (i.e., that they find an accused student guilty with just 50.01 percent certainty) when adjudicating claims of sexual assault and sexual harassment. The DCL’s requirement that colleges use this standard—found nowhere in Title IX or its implementing regulations, and specified before 2011 only in letters between OCR and individual schools—effectively creates a new substantive rule for institutions to follow.
Here is what is amazing to me: not a single university has challenged this rule, even though it trashes the due process rights of its male students. These same universities had no problem defying the law on things like ROTC and army recruiting (which represent mostly voluntary enticements of their students) but have rolled over and played dead over this much more direct threat to their students' well-being.
Time Warner Cable, owner of the Dodgers' local broadcast rights, is continuing to battle with local cable companies to get its Dodgers channel added to their packages. Like last year, it appears that no deal will be forthcoming and the Dodgers (and perhaps more disheartening, Vin Scully in his last year) won't be on many TV sets in LA this summer. Kevin Drum essentially says bravo to the cable companies for opposing the Dodgers' bid to jack up basic cable rates in the area.
Boo hoo. They tried everything—everything, I tell you. Except, of course, for the one thing that would have worked: the right to make the Dodgers an extra-cost option, not part of basic cable. Most cable operators see no reason that every television viewer in the LA basin should have to pay 60 bucks a year more in cable fees regardless of whether or not they care about baseball.
And that's the one thing TWC won't do. Why? Because then it will become crystal clear just how few households actually care enough about the Dodgers to pay for them. And that would truly be a disaster beyond reckoning. There's a limit to the amount of sports programming that people are willing to have crammed down their throats!
I actually agree with him, and will add that it is always great to see a progressive acknowledge that consumers do actually hold businesses accountable.
But I will observe that had we adopted cable neutrality rules** as we have for net neutrality, the cable companies would have found it impossible, or at least much more difficult, to oppose carriage by a pushy and expensive content provider. It is this sort of intra-supply-chain tug of war, which generally benefits consumers in the long run (as it has in LA, at least for Drum), that is essentially outlawed by net neutrality rules, which basically declare content providers the victors by default. As I wrote before:
Net Neutrality is one of those Orwellian words that mean exactly the opposite of what they sound like. There is a battle that goes on in the marketplace in virtually every communication medium between content creators and content deliverers. We can certainly see this in cable TV, as media companies and the cable companies that deliver their product occasionally have battles that break out in public. But one could argue similar things go on even in, say, shipping, where magazine publishers push for special postal rates and Amazon negotiates special bulk UPS rates.
In fact, this fight for rents across a vertical supply chain exists in virtually every industry. Consumers will pay so much for a finished product. Any vertical supply chain is constantly battling over how much each step in the chain gets of the final consumer price.
What "net neutrality" actually means is that certain people, including apparently the President, want to tip the balance in this negotiation towards the content creators (no surprise given Hollywood's support for Democrats). Netflix, for example, takes a huge amount of bandwidth that costs ISP's a lot of money to provide. But Netflix doesn't want the ISP's to be be able to charge for this extra bandwidth Netflix uses - Netflix wants to get all the benefit of taking up the lion's share of ISP bandwidth investments without having to pay for it. Net Neutrality is corporate welfare for content creators....
I am still pretty sure the net effect of these regulations, whether they really affect net neutrality or not, will be to disarm ISP's in favor of content providers in the typical supply chain vertical wars that occur in a free market. At the end of the day, an ISP's last resort in negotiating with a content provider is to shut them out for a time, just as the content provider can do the same in reverse to the ISP's customers. Banning an ISP from doing so is like banning a union from striking.
** Footnote: OK, we sort of did have cable neutrality in one respect -- over-the-air broadcasters were able to obtain crony legislation that cable companies had to carry every locally broadcast channel. So that channel 59 that you never bothered to watch now gets equal treatment with the NBC affiliate. This was a huge boon for these stations, and the value of these often tiny stations exploded with this must-carry rule. Essentially they were given an asset for free, ie position in a cable lineup, that other competitors had to fight for.
Al Gore, as an aside, actually became rich with exactly this game. It is hard to fight your way into a cable lineup nowadays. Al Gore did it with his Current TV startup, based on his name and a promise of a sort of MTV for politics. The channel went nowhere and lost a lot of money, but it had one valuable asset -- placement in cable TV lineups. So it sold this asset to Al Jazeera, which had struggled to get placement.
This is the fourth chapter of an ongoing series. Other parts of the series are here:
In our last chapter, we ended a discussion on theoretical future warming rates by saying that no amount of computer modelling was going to help us choose between various temperature sensitivities and thus warming rates. Only observational data was going to help us determine how the Earth actually responds to increasing CO2 in the atmosphere. So in this chapter we turn to the next part of our framework: our observations of Earth's temperatures, which are among the data we might use to support or falsify the theory of catastrophic man-made global warming.
The IPCC position is that the world (since the late 19th century) has warmed about 0.8C. This is a point on which many skeptics will disagree, though perhaps not as substantially as one might expect from the media. Most skeptics, myself included, would agree that the world has certainly warmed over the last 100-150 years. The disagreement tends to be in the exact amount of warming, with many skeptics contending that the amount of warming has been overstated due to problems with temperature measurement and aggregation methodology.
For now, we will put those issues aside until part B of this section, where we will discuss them. One reason to do so is to focus, at least at first, on the basic point of agreement that the Earth has indeed warmed somewhat. But another reason to put these differences over magnitude aside is that we will find, a few chapters hence, that they essentially don't matter. Even the IPCC's 0.8C estimate of past warming does not support its own estimates of temperature sensitivity to CO2.
Surface Temperature Record
The most obvious way to measure temperatures on the Earth is with thermometers near the ground. We have been measuring the temperature at a few select locations for hundreds of years, but it really is only in the last century that we have fairly good coverage of the land surface. And even then our coverage of places like the Antarctic, central Africa, parts of South America, and all of the oceans (which cover about 71% of the Earth) is even today still spotty. So coming up with some sort of average temperature for the Earth is not a straight averaging exercise -- data must be infilled and estimated, making the process complicated and subject to a variety of errors.
But the problem is more difficult than just data gaps. How does one actually average a temperature from Denver with a temperature from San Diego? While a few folks attempt such a straight average, scientists have developed a theory that one can more easily average what are known as temperature anomalies than the temperatures themselves. What is an anomaly? Essentially, for a given thermometer, researchers will establish an average for that thermometer for a particular day of the year. The exact time period or even the accuracy of this average is not that important, as long as the same time period is used consistently. Then, the anomaly for any given measurement is the deviation of the measured temperature from its average. So if the average historical temperature for this day of the year is 25C and the actual measured temperature for the day is 26C, the anomaly for today at this temperature station is +1.0C.
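To make the anomaly calculation concrete, here is a minimal sketch in Python. Every number in it is invented for illustration; real products use decades-long baselines, thousands of stations, and lots of quality control:

```python
from collections import defaultdict

# Toy daily readings for a single station: (year, day_of_year) -> temp in C.
readings = {
    (2013, 100): 24.0,
    (2014, 100): 25.0,
    (2015, 100): 26.0,   # these three years form our base period
    (2016, 100): 26.5,   # the day we want an anomaly for
}

# Step 1: compute a per-day-of-year baseline over the chosen base period.
base_years = range(2013, 2016)
by_day = defaultdict(list)
for (year, doy), temp in readings.items():
    if year in base_years:
        by_day[doy].append(temp)
baseline = {doy: sum(temps) / len(temps) for doy, temps in by_day.items()}

# Step 2: the anomaly is simply the measurement minus that baseline.
anomaly = readings[(2016, 100)] - baseline[100]
print(f"Anomaly for day 100 of 2016: {anomaly:+.1f}C")  # prints +1.5C
```

Note that the choice of base period only shifts the zero point; as long as it is applied consistently, trends in the anomalies are unchanged.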
Scientists then develop programs that spatially average these temperature anomalies for the whole Earth, while also adjusting for a myriad of factors, from time-of-day changes in measurement, to technology changes in the temperature stations over time, to actual changes in the physical location of the measurement. This is a complicated enough task, with enough explicit choices that must be made about techniques and adjustments, that there are many different temperature metrics floating around out there, many of which get different results from essentially the same data. The CRUT4 global temperature metric from the Hadley Center in England is generally considered the gold standard, and is the one used preferentially by the IPCC. It is shown below, with the monthly temperature anomaly in dark blue and the 5-year moving average (centered on its mid-point):
Again, the zero point of the chart is arbitrary and merely depends on the period of time chosen as the base or average. Looking at the moving average, one can see the temperature anomaly bounces around -0.3C in the late 19th century and has been around +0.5C over the last several years, which is how we get to about 0.8C warming.
Satellite Temperature Record
There are other ways to take temperature measurements, however. Another approach is to use satellites to measure surface temperatures (or at least near-surface temperatures). Satellites measure temperature by measuring the thermal microwave emissions of oxygen molecules in the lower troposphere (perhaps 0-3 miles above the Earth). Satellites have the advantage of being able to look at the entire Earth without gaps, and are not subject to the siting biases of surface temperature stations (which will be discussed in part B of this chapter).
The satellite record does, however, rely on a shifting array of satellites, all of which have changing orbits for which adjustments must be made. Of necessity, the satellite record cannot reach as far back into the past. And the satellites are not actually measuring the temperature of the Earth's surface, but rather a temperature a mile or two up. Whether that matters is subject to debate, but the clincher for me is that the IPCC and most climate models have always shown that the earliest and strongest anthropogenic warming should show up in exactly this spot -- the lower troposphere -- which makes observation of this zone a particularly good way to look for a global warming signal.
Roy Spencer and John Christy have what is probably the leading satellite temperature metric, called "UAH" as a shorthand for University of Alabama, Huntsville's space science center. The UAH record looks like this:
Note that the absolute magnitude of the anomaly isn't comparable between the surface and satellite record, as they use different base periods, but changes and growth rates in the anomalies should be comparable between the two indices.
The first thing to note is that, though they are different, both the satellite and surface temperature records show warming since 1980. For all that some skeptics may want to criticize the authors of the surface temperature databases, and there are indeed some grounds for criticism, these issues should not distract us from the basic fact that in every temperature record we have (including other technologies like radiosonde balloons), we see recent warming.
In terms of magnitude, the two indices do not show the same amount of warming -- since 1980 the satellite temperature record shows about 30% less warming than does the surface temperature record for the same period. So which is right? We will discuss this in more depth in part B, but the question is not made any easier by the fact that the surface records are compiled by prominent alarmist scientists while the satellite records are maintained by prominent skeptic scientists. Which causes each side to accuse the other of having its thumb on the scale, so to speak. I personally like the satellite record because of its larger coverage areas and the fact that its manual adjustments (which are required of both technologies) are for a handful of instruments rather than thousands, and are thus easier to manage and get right. But I am also increasingly of the opinion that the differences are minor, and that neither are consistent with catastrophic forecasts.
So instead of getting ourselves involved in the dueling temperature data set food fight (we will dip our toe into this in part B), let's instead apply both these data sets to several propositions we see frequently in the media. We will quickly see the answers we reach do not depend on the data set chosen.
Test #1: Is Global Warming Accelerating
One frequent meme you will hear all the time is that "global warming is accelerating." As of today it had 550,000 results on Google. For example:
So. Is that true? They can't print it if it's not true, right (lol)? Let's look first at the satellite record through the end of 2015, when this presentation was put together (there is an El Nino driven spike in the 2 months after this chart was made, which does not affect the conclusions that follow in the least, but I will update the chart to include it ASAP).
If you want a name for this chart, I could call it the "bowl of cherries" because it has become a cherry-picker's delight. Everyone in the debate can find a starting point and an end point in this jagged data to find any trend they want to find. So how do we find an objective basis to define end points for this analysis? Well, my background is more in economic analysis. Economists have the same problem in looking at trends for things like employment or productivity because there is a business cycle that adds volatility to these numbers above and beyond any long term trend. One way they manage this is to measure variables from peak to peak of the economic cycle.
I have done something similar. The equivalent cyclical peaks in the temperature world are probably the very high Pacific Decadal Oscillation, or El Nino, events. There was one in 1998 and there is one occurring right now in late 2015/early 2016. So I defined my period as 18 years from peak to peak. By this timing, the satellite record shows temperatures to be virtually dead flat for those 18 years. This is "the pause" that you may have heard of in climate debates. Such an extended pause is not predicted by global warming theory, particularly when the theory (as in the IPCC main case) assumes high temperature sensitivities to CO2 and low natural variation in temperatures.
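For anyone who wants to reproduce this kind of comparison, here is a rough sketch of the mechanics. The anomaly series below is synthetic stand-in data (in practice you would load the actual UAH monthly series instead); the 1998 and 2016 breakpoints are the El Nino peaks discussed above:

```python
import numpy as np

# Synthetic monthly anomaly series standing in for the real satellite data;
# the trend and noise levels here are invented just to make the code run.
rng = np.random.default_rng(0)
t = np.arange(1980, 2016, 1 / 12)                 # decimal years, monthly
anomaly = 0.012 * (t - 1980) + 0.1 * rng.standard_normal(t.size)

def trend_c_per_decade(t, y, start, end):
    """Least-squares warming trend, in C/decade, between two chosen years."""
    mask = (t >= start) & (t < end)
    slope_per_year = np.polyfit(t[mask], y[mask], 1)[0]
    return slope_per_year * 10

# Peak-to-peak: the 18 years ending at the 1998 El Nino spike vs. the
# 18 years from that spike to the late-2015/2016 one.
print("1980-1998:", round(trend_c_per_decade(t, anomaly, 1980, 1998), 3))
print("1998-2016:", round(trend_c_per_decade(t, anomaly, 1998, 2016), 3))
```

The point of fixing the endpoints at cyclical peaks is that both windows then start and end at comparable points in the natural cycle, which takes the cherry-picking out of the hands of the analyst.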
So if global warming were indeed accelerating, we would expect the warming rate over the last 18 years to be higher than the rate over the previous 18 years. But just the opposite is true:
While "the pause" does not in and of itself disprove the theory of catastrophic manmade global warming, it does easily falsify the myriad statements you see that global warming is accelerating. At least for the last 20 years, it has been decelerating.
By the way, this is not somehow an artifact of just the satellite record. This is what the surface record looks like for the same periods:
Though it shows (as we discussed earlier) higher overall warming rates, the surface temperature record also shows a deceleration rather than acceleration over the last 20 years.
Test #2: Are Temperatures Rising Faster than Expected
OK, let's consider another common meme, that the "earth is warming faster than predicted."
Again, there are over 500,000 Google matches for this meme. So how do we test it? Well, certainly not against the last IPCC forecasts -- they are only a few years old. The first real high-sensitivity or catastrophic forecast we have is from James Hansen, often called the father of global warming.
In June of 1988, Hansen made a seminal presentation to Congress on global warming, including this very chart (sorry for the sucky 1980's graphics). In his testimony, he presented his models for the Earth's temperature, which showed a good fit with history**. Using his model, he then created three forecasts: Scenario A, with high rates of CO2 emissions; Scenario B, with more modest emissions; and Scenario C, with drastic worldwide emissions cuts (plus volcanoes, which tend to belch dust and chemicals that have a cooling effect). Surprisingly, we can't even get agreement today about which forecast for CO2 production was closer to the mark (throwing in the volcanoes makes things hard to parse), but it is pretty clear that over the 30 years after this forecast, the Earth's CO2 output has been somewhere between A and B.
As it turns out, it doesn't matter whether we actually followed the CO2 emissions from A or B. The warming forecasts for scenario A and B turn out to be remarkably similar. In the past, I used to just overlay temperature actuals onto Hansen's chart, but it is a little hard to get the zero point right and it led to too many food fights. So let's pull the scenario A and B forecasts off the chart and compare them a different way.
The left side of the chart shows Hansen's scenarios A and B, scanned right from his chart. Scenario A implies a warming rate from 1986 to 2016 of 3.1C per century. Scenario B is almost as high, at 2.8C per century. But as you can see on the right, the actual warming rates we have seen over the same period are well below these forecasts. The surface temperature record shows only about half the warming, and the satellite record only about a third the warming, that Hansen predicted. There is no justification for saying that recent warming rates have been higher than expected or forecast -- in fact, the exact opposite has been true.
We see the same thing when looking at past IPCC forecasts. At each of its every-five-year assessments, the IPCC has included a forecast range for future temperatures. In this case, though, we don't have to create a comparison with actuals because the most recent (5th) IPCC Assessment did it for us:
The colored bands are their past forecasts. The grey areas are the error bands on the forecast. The black dots are global temperatures (which actually are shown with error bars, which is good practice but seldom done except perhaps when they are trying to stretch to get into the forecast range). As you can see, temperatures have been so far below forecasts that they are dropping out of the low end of even the most generous forecast bands. If temperatures were rising faster than expected, the black dots would be above the orange and yellow bands. We therefore have to come to the conclusion that, at least for the last 20-30 years, temperatures have not been rising faster than expected, they have been rising slower than expected.
Day vs. Night
There is one other phenomenon we can see in the temperature data that we will come back to in later chapters: that much of the warming over the last century has been at night, rather than in the daytime. There are two possible explanations for this. The first is that most anthropogenic warming models predict more night time warming than they do day time warming. The other possibility is that a portion of the warming in the 20th century temperature record is actually spurious bias from the urban heat island effect due to siting of temperature stations near cities, since urban heat island warming shows up mainly at night. We will discuss the latter effect in part B of this chapter.
Whatever the cause, much of the warming we have seen has occurred at night, rather than during the day. Here is a great example from the Amherst, MA temperature station (Amherst was the first location where I gave this presentation, if that seems an odd choice).
As you can see, the warming rate since 1945 is 5 times higher at night than during the day. This directly affects average temperatures since daily average temperature for a location in the historic record is the simple average of the daily high and daily low. Yes, I know that this is not exactly accurate, but given technology in the past, this is the best that could be done.
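Since the historic daily mean is just the average of the high and the low, a couple of lines of arithmetic (with invented temperatures) show how warming concentrated at night still pulls up the recorded average:

```python
# Historic daily means are computed as (high + low) / 2. Suppose, with
# made-up numbers, the daytime high rises 0.2C over some period while
# the nighttime low rises 1.0C (the 5x night/day ratio cited above).
high_then, low_then = 28.0, 14.0
high_now, low_now = 28.2, 15.0

mean_then = (high_then + low_then) / 2   # 21.0C
mean_now = (high_now + low_now) / 2      # 21.6C

print(f"daily mean rose {mean_now - mean_then:+.1f}C, of which "
      f"{(low_now - low_then) / 2:.1f}C came from the warmer nights")
```

In other words, a station can show a solid warming trend in its daily averages even if its afternoons have barely warmed at all.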
The news media likes to cite examples of heat waves and high temperature records as a "proof" of global warming. We will discuss this later, but this is obviously a logical fallacy -- one can't prove a trend in noisy data simply by citing isolated data points in one tail of the distribution. But it is also fallacious for another reason -- we are not actually seeing any upwards trends in high temperature records, at least for daytime highs:
To get this chart, we obviously have to eliminate newer temperature stations from the data set -- any temperature station that is only 20 years old will have all of its all time records in the last 20 years (you would be surprised at how many otherwise reputable scientists miss simple things like this). Looking at just the temperature stations in the US we have a long record for, we see with the black line that there is really no upwards trend in the number of high temperature records (Tmax) being set. The 1930s were brutally hot, and if not for some manual adjustments we will discuss in part B of this section, they would likely still show as the hottest recent era for the US. It turns out, with the grey line (Tmin), that while there is still no upward trend, we are actually seeing more high temperature records being set with daily lows (the highest low, as it were) than we are with daily highs. The media is, essentially, looking in the wrong place, but I sympathize because a) broiling hot daytime highs are sexier and b) it is brutally hard to talk about highest low temperatures without being confusing as hell.
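As an aside, the station-age filter described above is easy to express in code. This is only a sketch under made-up assumptions (the data layout and the cutoff year are mine, not those of the chart's author):

```python
# Each station maps to a list of (year, annual Tmax) observations.

def record_years(obs):
    """Years in which a station set a new all-time high (Tmax record)."""
    best = float("-inf")
    years = []
    for year, tmax in sorted(obs):
        if tmax > best:
            best = tmax
            years.append(year)
    return years

def count_records_by_year(stations, min_first_year=1930):
    """Count new all-time records per year, using only long-lived stations."""
    counts = {}
    for obs in stations.values():
        if min(year for year, _ in obs) > min_first_year:
            continue  # a young station sets "records" constantly at first
        for year in record_years(obs):
            counts[year] = counts.get(year, 0) + 1
    return counts

demo = {
    "old station":   [(1930, 39.0), (1936, 41.5), (1980, 40.0), (2012, 41.0)],
    "young station": [(1996, 35.0), (1998, 36.0), (2012, 37.0)],
}
print(count_records_by_year(demo))  # {1930: 1, 1936: 1} -- young one excluded
```

Skip the filter on the young station and its 1996, 1998, and 2012 "all-time records" all land in the last two decades, which is exactly the spurious recent uptrend described above.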
In our next chapter, or really part B of this chapter, we will discuss some of the issues that may be leading the surface temperature record to be exaggerated, or at least inaccurate.
Chapter 4, Part B on problems with the surface temperature record continues here.
If you want to skip Part B, and get right on with the main line of the argument, you can go straight to Chapter 5, part A, which starts in on the question of how much of past warming can be attributed to man.
** Footnote: The history of Wall Street is full of bankrupt people whose models exactly matched history. I have done financial and economic modeling for decades, and it is surprisingly easy to force multi-variable models to match history. The real test is how well the model works going forward. Both Hansen's 1988 models and the IPCC's many models do an awesome job matching history, but quickly go off the rails in future years. I am reminded of a simple but famous example of the perfect past correlation between certain NFL outcomes and Presidential election outcomes. This NFL model of presidential elections perfectly matches history, but one would be utterly mad to bet future elections based on it.
From the WSJ (emphasis added):
Netflix now admits that for the past five years, all through the debate on net neutrality, it was deliberately slowing its videos watched by users on AT&T and Verizon’s wireless networks. The company did so for good reason—to protect users from overage penalties. But it never told users at a time when Netflix was claiming carriers generally were deliberately slowing its service to protect their own TV businesses—a big lie, it turned out.
All this has brought considerable and well-deserved obloquy on the head of Netflix CEO Reed Hastings for his role in inviting extreme Obama utility regulation of the Internet. Others deserve blame too. Google lobbied the administration privately but was too chicken to speak up publicly against utility regulation.
But Netflix appears to have acted out of especially puerile and venal motives. Netflix at the time was trying to use political pressure to cut favorable deals to connect directly to last-mile operators like Comcast and Verizon—a penny-ante consideration worth a few million dollars at best, for which Netflix helped create a major public policy wrong-turn.
This is what I wrote about net neutrality a couple of years ago:
Net Neutrality is one of those Orwellian words that mean exactly the opposite of what they sound like. There is a battle that goes on in the marketplace in virtually every communication medium between content creators and content deliverers. We can certainly see this in cable TV, as media companies and the cable companies that deliver their product occasionally have battles that break out in public. But one could argue similar things go on even in, say, shipping, where magazine publishers push for special postal rates and Amazon negotiates special bulk UPS rates.
In fact, this fight for rents across a vertical supply chain exists in virtually every industry. Consumers will pay so much for a finished product. Any vertical supply chain is constantly battling over how much each step in the chain gets of the final consumer price.
What "net neutrality" actually means is that certain people, including apparently the President, want to tip the balance in this negotiation towards the content creators (no surprise given Hollywood's support for Democrats). Netflix, for example, takes a huge amount of bandwidth that costs ISP's a lot of money to provide. But Netflix doesn't want the ISP's to be be able to charge for this extra bandwidth Netflix uses - Netflix wants to get all the benefit of taking up the lion's share of ISP bandwidth investments without having to pay for it. Net Neutrality is corporate welfare for content creators....
I am still pretty sure the net effect of these regulations, whether they really affect net neutrality or not, will be to disarm ISP's in favor of content providers in the typical supply chain vertical wars that occur in a free market. At the end of the day, an ISP's last resort in negotiating with a content provider is to shut them out for a time, just as the content provider can do the same in reverse to the ISP's customers. Banning an ISP from doing so is like banning a union from striking.
.... your paper prints headlines that say "Economists: Zero chance of Arizona recession." I am sure Houston would have said the exact same thing, right up until oil prices dropped to $30 and suddenly there was like a 100% chance. When the Arizona Republic makes a definitive economic prediction, bet the other side.
Anyway, I am considering this headline to be a flashing indicator of the top. In 2005 I wrote about another such indicator that told me the housing market had peaked:
So, to date [May 31, 2005], I have been unconvinced about the housing bubble, at least as it applied to our community. After all, demographics over the next 20-30 years are only going to support Scottsdale area real estate.
However, over the weekend I had a disturbing experience: At a social function, I heard a dentist enthusiastically telling a doctor that he needs to be buying condos and raw land. The dentist claimed to be flipping raw land parcels for 100% in less than 6 months.
For those who don't know, this is a big flashing red light. When doctors and dentists start trying to sell you on a particular type of investment, run away like they have the plague. At Harvard Business School, I had a great investment management class with a professor who has schooled many of the best in the business. If an investment we were analyzing turned out to be a real dog, he would ask us "who do you sell this to?" and the class would shout "doctors!" And, if the investment was really, really bad, to the point of being insane, the class would instead shout "dentists!"
Which reminds me that in the last 6 months I have started hearing radio commercials again urging folks to get into the house-flipping business and make their fortune. Whenever institutions start selling investments to you, the average Joe, rather than just investing themselves, that should be taken as a signal that we are approaching a top**. About 12-18 months before oil prices tanked, I started getting flooded with spam calls at work trying to sell me various sorts of oil exploration investments.
** Postscript: In 2010, when house prices were low and some were going for a song in foreclosure, there were no house flipping commercials on radio. That is because Blackstone and other major institutions were too busy buying them up. Now that these companies see less value, you are hearing house flipping commercials. You know that guy who has a book with his fool-proof method for making a fortune? So why is he wasting his time selling books for $2 a copy in royalties rather than following his method?
After failing to get a class-action lawsuit dismissed, Uber CEO Travis Kalanick will go to court over price fixing claims. A US district court judge in New York ruled Kalanick has to face the class of passengers alleging that he conspired with drivers to set fares using an algorithm, including hiking rates during peak hours with so-called surge pricing. According to Reuters, district court judge Jed Rakoff ruled the plaintiffs "plausibly alleged a conspiracy" to fix pricing and that the class action could also pursue claims that the set rates led to the demise of other services, like Sidecar.
I guess this is the downside of calling all their drivers independent contractors -- it leaves Uber potentially vulnerable to accusations of price fixing among these contractors. Of course, taxi cartels have been fixing prices for decades, but that is government-assisted price-fixing, so I suppose that is OK. It would be ironic if the first price competition introduced into the taxi business in decades were killed based on antitrust charges.
As with just about all modern anti-trust cases, this has little to do with consumer well-being and more to do with the well-being of supply chain participants (ie the drivers) and competitors (ie Sidecar and taxis).
It appears that California is going to increase its state minimum wage to $15 in steps over the next five or six years. This is yet another body blow for unskilled workers in the state. As I wrote a while back, it is already overly difficult to build a business based on unskilled labor in that state, and increasing the price people have to pay for that labor by 50% is only going to make things worse. It is possible low-skill workers in large wealthy cities like San Francisco will be OK, as service businesses are still going to want to be there to access all that wealth, and will just raise their prices even higher to account for the higher wages. For laborers in rural areas that are already suffering from high unemployment, the prospects are not very bright.
As most readers know, we run a service business operating campgrounds across the country, including a number in California. Over the last few years, due to past regulation and minimum wage increases, and in anticipation of further goofiness of this sort, we exited about 2/3 of our business in California.
Our problem going forward is that in rural locations, sometimes without even electricity or cell phone service on site, we have simply exhausted all the productivity measures I can think of. There appears to be a minimum amount of labor required to clean a bathroom and do landscaping. Which leaves us the options of exiting more businesses or raising prices. Most of our customers in California are blue collar rural folks whose lot is only going to be worse as a result of these minimum wage increases, and so I am not sure how far they will be able to bear the price increases we will need to cover our higher costs. Likely we will keep raising prices until customers can bear no more, and then exit.
By the way, the 5-6 year implementation time is a frank admission by the authors of the law, no matter what they say in public to the contrary, that they know there will be substantial negative employment effects from the minimum wage increase. They are hoping that by spreading it out over several years, those negative effects will be lost in the noise of economic fluctuations. The Leftist playbook is to do something like this that trashes the earnings of the most vulnerable low-skilled workers, and then later point to the income inequality of those low-skilled workers as a failure of free markets.
On a related note, one of the more interesting things I have read lately is this comparison of successful integration of Muslim immigrants in the US vs. poor integration in Europe. Alex Tabarrok raises the hypothesis that high minimum wages and labor market rigidity in Europe may be an important factor in reducing immigrant integration. He quotes from the OECD:
Belgian labour market settings are generally unfavourable to the employment outcomes of low-skilled workers. Reduced employment rates stem from high labour costs, which deter demand for low-productivity workers…Furthermore, labour market segmentation and rigidity weigh on the wages and progression prospects of outsiders. With immigrants over-represented among low-wage, vulnerable workers, labour market settings likely hurt the foreign-born disproportionately.
…Minimum wages can create a barrier to employment of low-skilled immigrants, especially for youth. As a proportion of the median wage, the Belgian statutory minimum wage is on the high side in international comparison and sectoral agreements generally provide for even higher minima. This helps to prevent in-work poverty…but risks pricing low-skilled workers out of the labour market (Neumark and Wascher, 2006). Groups with further real or perceived productivity handicaps, such as youth or immigrants, will be among the most affected.
In 2012, the overall unemployment rate in Belgium was 7.6% (15-64 age group), rising to 19.8% for those in the labour force aged under 25, and, among these, reaching 29.3% and 27.9% for immigrants and their native-born offspring, respectively.
Wow, I guess it is sure lucky California does not have a very large immigrant population. Oh, wait....
I just found that my guest computer in the lobby of my office has a screen that says "Welcome to Windows 10". I never asked for or initiated this operating system switch. It was done entirely without my permission by a bit of malware Microsoft has introduced onto Windows 7 computers. I have had malware issues in the past, but never have I had one that A) put unwanted advertisements on my desktop every day and B) changed my entire operating system without my intervention or approval.
Thank goodness I read somewhere that one can avoid even this seeming fait accompli by declining the terms and conditions. Which I did -- hopefully the computer is rolling back right now.
I am very worried that my 30 or so field managers, who have poor computer skills on average, will get their computer upgraded without knowing it. I do all the tech support in the company and have no desire (and since I don't know windows 10, no ability) to support another operating system right now other than Windows 7.
I will double down on my recommendation of the free GWX Control Panel to remove this Windows 10 upgrade malware from computers. It has worked fine for me (except of course on computers, like the one above, that I did not even think about).
In one recent year alone, Congress passed 138 laws—while federal agencies finalized 2,926 rules. Federal judges conduct about 95,000 trials a year, but federal agencies conduct nearly 1 million. Put all that together and you have a situation in which one branch of government, the executive, is arrogating to itself the powers of the other two.
This probably understates the case. Most of the laws were probably brief fixes or extensions or national _____ day declarations. The administrative rules can be thousands of pages long and create nightmarish compliance issues. Already, most of our business's compliance efforts (which seem to be rising exponentially in time and cost) are due to administrative rule changes rather than new laws per se.
I find the judicial issue potentially even more concerning. While we have pretty well-protected due process rights in court, most of these get tossed aside in administrative hearings and trials.
This is the third chapter of an ongoing series. Other parts of the series are here:
We ended the last chapter on the greenhouse gas theory with this:
So whence comes the catastrophe? As mentioned in the introduction, the catastrophe comes from a second, independent theory that the Earth's climate system is dominated by strong positive feedbacks that multiply greenhouse warming many times into a catastrophe.
In this chapter, we will discuss this second, independent theory: that the Earth's climate system is dominated by positive feedbacks. I suppose the first question is, "What do we mean by feedback?"
In a strict sense, feedback is the connection of the output of a system to its input, creating a process that is circular: A system creates an output based on some initial input, that output changes the system's input, which then changes its output, which then in turn changes its input, etc.
Typically, there are two types of feedback: negative and positive. Negative feedback is a bit like the ball in the trough in the illustration above. If we tap the ball, it moves, but that movement creates new forces (e.g. gravity and the walls of the trough) that tend to send the ball back where it started. Negative feedback tends to attenuate any input to a system -- meaning that for any given push on the system, the output will end up being less than one might have expected from the push.
Positive feedback is more like the ball sitting on top of the hill. Even a small tap will send it rolling very far away, because the shape of the hill and gravity tend to push the ball even further in the direction of the tap. Positive feedback amplifies or multiplies any input to a system, meaning that even small pushes can lead to very large results.
The climate temperature system has a mix of positive and negative feedbacks.
For example, consider cumulus clouds. If the Earth warms, more water tends to evaporate from the oceans, and some of that water will form big fluffy white clouds. These clouds act as an umbrella for the Earth, reflecting heat back into space. So as more clouds form due to warming, there is a net new cooling effect that offsets some of the original warming. The amount of warming we might have expected is smaller due to the negative feedback of cloud formation.
On the other side, consider ice and snow. Ice and snow reflect sunlight back into space and keep the Earth cooler than it would be without the ice and snow cover. As the world warms, ice and snow will melt and thus reflect less sunlight back into space, having the effect of warming the Earth even more. So an initial warming leads to more warming, amplifying the effect of the initial warming.
Since we know both types of feedback exist, what we care about is the net effect -- does negative or positive feedback dominate? In every catastrophic forecast you have seen for global warming, in nearly every climate model the IPCC uses, the authors have assumed that the climate is dominated by strong positive feedbacks that multiply incremental warming from greenhouse gasses many times.
This is the result:
As a reminder, the green line is the warming from increases in atmospheric CO2 concentration solely from the greenhouse gas effect, without any feedbacks taken into account. It is generally agreed to be a warming rate of about 1.2C per doubling of CO2 concentrations, with which I and many (or most) science-based skeptics agree. The other lines, then, are a variety of forecasts for warming after feedbacks are taken into account. You can see that all these forecasts assume positive feedback, as the effect is multiplicative of the initial greenhouse gas warming (the pink, purple, and orange lines are approximately 3x, 5x, and 10x the green line, implying very high levels of positive feedback).
The pink line is the mean forecast from the 4th IPCC, implying a temperature sensitivity to CO2 of about 3C. The purple line is the high end of the IPCC forecast band, implying a temperature sensitivity of 5C. And the highest is not from a mathematical model per se, but from the mouth of Bill McKibben (sorry for the misspelling in the chart) who has on several occasions threatened that we could see as much as 10C of warming from CO2 by the end of the century.
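For readers who want to see how a feedback fraction turns into these multipliers, the standard linear feedback relationship borrowed from electronics is gain = 1 / (1 - f), where f is the fraction of the output fed back into the input. Here is a quick sketch, with f values chosen to reproduce the 3x, 5x, and 10x multipliers above (assuming, of course, that this simple linear model applies to climate at all):

```python
# Linear feedback: final warming = initial warming / (1 - f), where f is
# the fraction of the output fed back into the input. The 1.2C figure is
# the no-feedback greenhouse warming per CO2 doubling from the green line.
no_feedback = 1.2  # C per doubling of CO2

for f in (0.0, 0.5, 2 / 3, 0.8, 0.9):
    gain = 1 / (1 - f)
    print(f"f = {f:.2f} -> gain {gain:4.1f}x -> "
          f"{no_feedback * gain:.1f}C per doubling")
```

Note how quickly the gain blows up as f approaches 1 -- small differences in the assumed feedback fraction produce enormous differences in the forecast, which is why this assumption deserves far more scrutiny than it gets.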
Skeptics have pointed out a myriad of issues with the climate computer models that develop these forecasts, but I will leave those aside for now. Suffice it to say that the models exclude many important aspects of the climate and are subject to hand tuning that allows modellers to produce pretty much any output they like.
But I do want to say a few words about computer models and scientific proof. Despite what you will hear from the media, and even from the mouths of prominent alarmist scientists, computer models do not and cannot constitute "proof" of any sort. Computer models are merely tools we use to derive the predicted values of physical parameters from complex hypotheses. They are no different than the pen-and-paper computations an 18th century researcher might have made for the position of Saturn from Newton's celestial mechanics equations. The "proof" comes when we take these predicted values and compare them against actual measurements over time and find that they are or are not accurate predictions. Newton's laws were proved as his equations' outputs for Saturn's position were compared to Saturn's actual measured position (and in fact they were disproved, to a small extent, when Mercury's position did not accurately match and Einstein had to fix things a bit). Similarly, hypotheses about global warming will be proved or disproved when the predictions of various models are compared to actual temperatures.
So we can't really get much further until we get to actual observations of the climate, which we will address in the next several chapters. But I want to make sure that the two-part theory that leads to catastrophic global warming is clear.
This is the portion of the warming due to greenhouse gas theory:
As you can see, the portion due to greenhouse gas theory is relatively small and likely not catastrophic. The catastrophe comes from the second independent theory that the Earth's climate system is dominated by strong (very strong!) positive feedbacks.
It is the positive feedback that causes the catastrophe, not greenhouse gas theory. So in debating catastrophic man-made global warming theory, we should be spending most of our time debating the theory that the climate is dominated by strong positive feedbacks, rather than debating the greenhouse gas theory.
But in fact, this does not happen in the mainstream media. If you are an average consumer of climate news, I will bet you have never heard a discussion in the media about this second theory.
And this second theory is far from settled. If on the "settled" scale from 1-10, greenhouse gas theory is an 8 or 9, this theory of strong positive feedbacks dominating the climate is about a 2. In fact, there is plenty of evidence that not only are scientists estimating feedbacks incorrectly, but that they don't even have the sign right and that net feedbacks may be negative.
This is a bit hard to communicate to a layman, but the positive feedbacks assumed by the most alarmist and catastrophic climate forecasts are very, very high -- way higher than one might expect in advance upon encountering a new system. This assumption of strong positive feedbacks is one that might even offend the sensibilities of a natural scientist. Natural systems that are long-term stable (and certainly, for all its variation, the climate system has remained in a pretty narrow range for millions and millions of years) are typically not dominated by positive feedbacks; they are dominated by negative feedbacks.
If in fact our climate temperature system is dominated by negative feedbacks, the future warming forecast would actually be below the green line:
OK, without getting into criticizing the details of these models (which would, by the way, be a pointless whack-a-mole game because there are dozens of them), the best way to assess the validity of these various forecasts is to consult actual observations. Which we will begin to do in our next chapter, part 4A on actual temperature measurements.
People act as if it is something new and different when actors shoot scenes and 95% of the space on the screen is later filled in by CGI. This has actually been going on for decades with matte paintings on glass. Movie scenes were either filmed directly through the glass (there are some great examples in the linked article with Disney artists painting sailing ships on a bay for filming) or reshot later by projecting the original film and reshooting it with the matte art.
Here is an example before and after the painted matte. Just like CGI, except that CGI can add movement and dynamic elements.
I had thought all this stuff was done in post production but apparently Disney at least shot a lot of scenes straight through a matte. I love this guy, sitting on the beach painting ships on glass so they would be sitting on the bay in the scene. You can almost imagine the actors tapping their feet waiting for him to be finished.
Much of the beauty of the original Star Wars movie was in its great matte paintings, not only of planets but of the large Death Star interior scenes.
Not sure where this came from:
What President Obama has been pushing for, and moving toward, is more insidious: government control of the economy, while leaving ownership in private hands. That way, politicians get to call the shots but, when their bright ideas lead to disaster, they can always blame those who own businesses in the private sector.

Politically, it is heads-I-win when things go right, and tails-you-lose when things go wrong. This is far preferable, from Obama's point of view, since it gives him a variety of scapegoats for all his failed policies, without having to use President Bush as a scapegoat all the time.
Back in the 1920s, however, when fascism was a new political development, it was widely -- and correctly -- regarded as being on the political left. ... Mussolini, the originator of fascism, was lionized by the left, both in Europe and in America, during the 1920s. Even Hitler, who adopted fascist ideas in the 1920s, was seen by some, including W.E.B. Du Bois, as a man of the left.
People get blinded (probably for good reason, given the heinousness) by Hitler's rounding people up in camps and can't really get beyond that in thinking about fascism. Which is why I sometimes find it helpful to use the term "Mussolini-style fascism". And the US Left, led by FDR, was very much in thrall to portions of Mussolini-style fascism, so much so that the National Industrial Recovery Act was modeled on Mussolini's economic management of command and control by corporatist boards. Here is one description:
The image of a strong leader taking direct charge of an economy during hard times fascinated observers abroad. Italy was one of the places that Franklin Roosevelt looked to for ideas in 1933. Roosevelt's National Recovery Act (NRA) attempted to cartelize the American economy just as Mussolini had cartelized Italy's. Under the NRA Roosevelt established industry-wide boards with the power to set and enforce prices, wages, and other terms of employment, production, and distribution for all companies in an industry. Through the Agricultural Adjustment Act the government exercised similar control over farmers. Interestingly, Mussolini viewed Roosevelt's New Deal as "boldly... interventionist in the field of economics." Hitler's nazism also shared many features with Italian fascism, including the syndicalist front. Nazism, too, featured complete government control of industry, agriculture, finance, and investment.
The overturning of the NRA has to be among the top 10 best decisions ever made by the Supreme Court. Thought experiment -- do you think you could buy a Honda, Toyota, Tesla, Nissan or Kia in the US today if GM and the UAW were running the automotive board?
Nicholas Kristof urges us not to exaggerate or overreact to the risk of terrorism based on a few high-profile but isolated and nearly-impossible-to-control events, particularly since there is no upward trend in terrorism deaths.
He urges us instead to exaggerate and overreact to the risk of catastrophic man-made climate change based on a few high-profile but isolated and nearly-impossible-to-control weather events for which data show there is no actual upward trend (e.g. hurricanes, tornadoes, droughts, heat waves, etc).
Everyone today seems to be trying to stampede everyone else into some kind of fear based on overblown risks, whether it be terrorism or climate change or immigrant-related crime or vaccine-caused autism or, uh, whatever is supposed to be bad that is caused by GMOs. It is all a quest for power. They hope that fear will cause you to write them a blank check for exercising power over you. Don't give it to them.
If your business is like mine, a lot of the folks to whom I owe money insist on the ability to automatically remove the money I owe them each month from our checking account (via an electronic process known as ACH, which is slower but much cheaper and easier to use than the old wire transfer method). At first it was just lenders -- any loan I took out required that the lender be able to automatically withdraw my payments. Then my workers compensation company. Then certain vendor accounts. And of course my merchant processing companies are constantly shoving money in and out of my bank accounts.
In retrospect, I was far too sanguine about this situation. What finally caused me to abandon my sense of security was a libel lawsuit filed by one of my vendors over a bad review I wrote of their product [I won't mention the name here but I am sure anyone can figure it out with a simple search]. Anyway, I realized that this company, which was suing me for untold bazillions of dollars, actually had the right to freely jack whatever it wanted out of my checking account. What is worse, this same company is being sued by many companies for trying to take an arbitrarily high final payment out of their accounts at contract termination. Eeek! And this does not even include the possibility of outright fraud. I have ACH tools where, if I have your bank's name and your account number, I can pull money out of your account without your ever knowing about it until you see it missing. I presume criminals could do the same thing.
Something had to be done, and it turned out that my bank, Bank of America, has something called ACH positive pay wherein nothing gets ACH'ed out of my accounts without my first approving the payments. I check a screen each morning and in 60 seconds can do the approvals for the day. They also have a very easy to use rules system where one can set up rules such that payments to certain vendors or for certain amounts don't need further daily approvals.
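To give a flavor of how such a rules screen works -- this is a generic sketch of the positive-pay idea, not Bank of America's actual system or API -- the logic is essentially a whitelist filter over each day's pending debits:

```python
# Generic sketch of ACH positive-pay rule logic; payee names and limits
# are hypothetical, and no real bank's system is represented here.

RULES = [
    {"payee": "ACME Workers Comp", "max_amount": 5000.00},
    {"payee": "First Street Lender", "max_amount": 2200.00},
]

def needs_manual_review(debit):
    """Hold the debit unless a standing rule covers this payee and amount."""
    return not any(
        debit["payee"] == rule["payee"] and debit["amount"] <= rule["max_amount"]
        for rule in RULES
    )

pending = [
    {"payee": "ACME Workers Comp", "amount": 4100.00},
    {"payee": "Unknown Vendor LLC", "amount": 9999.99},
]

for debit in pending:
    status = "HOLD for review" if needs_manual_review(debit) else "auto-approve"
    print(f'{debit["payee"]:25s} ${debit["amount"]:>8.2f}  {status}')
```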
I presume most major banks have a similar product. It cost me some money but I feel way safer and encourage you to look into it if you are in the same situation.
If you search Coyoteblog for the title of this post, you will see a number of others with the same title. It seems to be a theme we keep having to come back to. Here is one example of where I tried to explain why the trade deficit is not a debt.
Take the Chinese for example. One thing that people often miss is that the Chinese buy a LOT more American stuff than the trade numbers portray. The numbers in the balance of trade accounts include only products the Chinese buy from the US and then take back to China to consume there. But the Chinese like to buy American stuff and consume it here, in the US. They buy land and materials to build factories and trade offices. They buy houses in California. They buy our government bonds. None of this stuff shows up in the trade numbers. Is it somehow worse that the Chinese wish to consume their American products in America? No. How could it be? In fact, it's a compliment. They know that our country is, long-term, a safer and more reliable place to own and hold on to things of value than their own country.
Dollars paid to a Chinese manufacturer have to get recycled to the US -- they don't just build up in a pile. If I am a construction contractor in LA and build that manufacturer a new office or a local home and get paid with those recycled dollars, I am effectively exporting to the Chinese, only the goods and services I sold them never leave the country and so don't show up in the trade numbers. So what does this mean? In my mind, it means that the trade deficit number is a stupid metric to obsess over.
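If it helps to see the arithmetic, here is the identity in toy form (all figures invented by me for illustration): every dollar that goes out for imports comes back either as purchases of our goods and services or as purchases of our assets, and only the first category shows up in the trade numbers.

```python
# Toy balance-of-payments arithmetic; every figure here is invented.

dollars_paid_for_imports = 500        # US buys $500B of Chinese goods

# Those dollars have to come back somehow:
chinese_purchases_of_us_goods = 350   # counted in the trade numbers
chinese_purchases_of_us_assets = 150  # land, factories, houses, bonds --
                                      # NOT counted in the trade deficit

trade_deficit = dollars_paid_for_imports - chinese_purchases_of_us_goods
asset_inflow = chinese_purchases_of_us_assets

print(f"Reported trade deficit:  ${trade_deficit}B")  # $150B
print(f"Offsetting asset inflow: ${asset_inflow}B")   # $150B

# The two sides balance: the "deficit" is simply dollars coming home
# as investment rather than as purchases of exported goods.
```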
Another way I think about it is to observe that the US is winning the battle of stuff. Money as money itself does not improve my well-being -- only the stuff (goods and services) I can purchase with it can do so. So it turns out that other countries ship far more stuff to the US than we ship out. And then these folks in other countries take the money they earn from this trade and buy more stuff in the US and keep that stuff here!
I am reminded of all this because several other folks are taking a swing at trying to make this point to the economically illiterate. Don Boudreaux does so here, and Dan Ikenson here. And here is Walter Williams as well.
Other parts of the series are here:
We continue our multi-part series on the theory of catastrophic man-made global warming by returning to our framework we introduced in the last chapter.
In the introduction, we discussed how catastrophic man-made global warming theory was actually made up of two independent parts. In this section, we will discuss the first of these two parts, the greenhouse gas effect, which is the box in the upper left of our framework.
For those unfamiliar with exactly what the greenhouse effect is, I encourage you to check out this very short primer. Essentially, certain gasses in the atmosphere can absorb some of the heat the Earth is radiating into space, and re-radiate some of this heat back to Earth. These are called greenhouse gasses. Water vapor is a relatively strong greenhouse gas, while CO2 is actually a relatively weak greenhouse gas.
It may come as a surprise to those who only know of skeptics' arguments from reading their opponents (rather than the skeptics themselves), but most prominent skeptics accept the theory of greenhouse gas warming. Of course there are exceptions, including a couple of trolls who like to get attention in the comments section of this and other blogs, and including a few prominent politicians and talk-show hosts. But there are also environmental alarmists on the other side who have signed petitions to ban dihydrogen monoxide. It is always tempting, but seldom intellectually rewarding, to judge a particular position by its least capable defenders.
There is simply too much evidence both from our and other planets (as well as simple experiments in a laboratory) to deny that greenhouse gasses in the atmosphere have a warming effect on planets, and that CO2 is such a greenhouse gas. What follows in the rest of this section represents something of a consensus of people on both sides of the debate.
To investigate the effect of CO2 on Earth's temperature, we are going to use this chart:
On the X axis is the atmospheric concentration of CO2 in parts-per-million (ppm). Frequently, forecasts of CO2 warming are shown as a relationship over time. I prefer this view, because it separates the assumption of CO2 emissions rates from assumptions about the sensitivity of temperatures to CO2.
Note that the concentrations we are talking about are remarkably small. Currently the Earth's atmosphere is just over 400 ppm CO2, which is 0.04%. Only one in 2,500 molecules in the air is CO2.
On the Y-axis we then have the incremental warming we might see, on average, across the surface of the Earth from increased concentrations of CO2. Unless I explicitly note otherwise, we will use Celsius throughout this and later chapters.
What we now want to do is graph the relationship between the concentrations of CO2 in the atmosphere and the temperature increase of the Earth. We will use 400 ppm and 0C increase as our starting points. For now (and we will come back to this assumption) we will look at just the direct effect of warming from the greenhouse gas effect of CO2 and leave out any other complicated, 2nd order interactions with the Earth and its climate.
The estimate I will use comes from Dr. Michael Mann and was first cited in the early IPCC reports. A quick note on the IPCC -- the IPCC is a body that meets every 5 years or so under the auspices of the United Nations to try to summarize the current state of climate science. Many skeptics, including myself, would argue that the IPCC process is flawed and overly politicized, but as much as possible in this series I will try to use the IPCC position, making it explicit when I differ. But what follows is very much IPCC canon. In fact, I like using Michael Mann's work here because, as author of the hockey stick, he is a vocal and prominent advocate on the alarmist end of the debate and certainly not in the tank for the skeptic side.
The relationship is shown in the equation at the top (where delta T is the temperature increase and c is the atmospheric concentration in ppm). I have graphed the equation in green because most of us do not have a good intuition for what this equation might look like.
The first thing you might note is that the line is curved, and represents a diminishing return relationship, which means that each incremental molecule of CO2 in the atmosphere has less warming effect than the last (see my short presentation on the greenhouse gas effect here). Thus a constant rate of growth in CO2 concentrations would yield a slowing growth rate in temperatures. This is a well-understood relationship, so much so that the sensitivity of temperature to CO2 is generally written not as degrees per ppm but as degrees per doubling of CO2 levels. This means that the increase from 400-800 ppm would be expected to have about the same impact on temperature as the increase from 800 to 1600 ppm.
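Since the equation itself lives in the chart image, here is a quick sketch of the same logarithmic shape in code. The ~1.2C-per-doubling sensitivity is my illustrative stand-in for the no-feedback number, not a quote from the chart; swap in whatever value you prefer.

```python
import math

# Logarithmic CO2 response: equal warming for each *doubling* of
# concentration. s_per_doubling is an illustrative stand-in value.

def delta_t(c_ppm, c0_ppm=400.0, s_per_doubling=1.2):
    """Warming (C) from raising CO2 concentration from c0_ppm to c_ppm."""
    return s_per_doubling * math.log2(c_ppm / c0_ppm)

print(f"{delta_t(800):.1f} C")                  # 400 -> 800 ppm:  ~1.2 C
print(f"{delta_t(1600) - delta_t(800):.1f} C")  # 800 -> 1600 ppm: ~1.2 C

# The same increment per doubling -- the diminishing-returns curve of
# the green line, where each added molecule matters less than the last.
```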
Of course, without any sense of CO2 growth rates, it's hard to relate this line to our lives. So as a next step, we will overlay some CO2 forecasts for the atmospheric levels of CO2 by 2100. [As an aside, there is a group of skeptics who think that most CO2 increases are coming from warming itself, flipping the arrow of causality, rather than from man. There is some evidence for this proposition in ice core analysis, but I will leave it aside and for our purposes assume most CO2 increases in this century are coming from hydrocarbon combustion.]
Though I think that their forecasts are exaggerated, I have taken the UN IPCC's 4 most likely CO2 cases for the year 2100 and overlaid them on the chart below:
Taking the midpoint of these forecasts, we arrive at about 1C of warming between now and the end of the century.
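Continuing the sketch from above, a hypothetical midpoint concentration of roughly 750 ppm by 2100 (my own illustrative reading of the middle of the range, not an official IPCC number) reproduces that figure:

```python
# Reusing delta_t() from the earlier sketch; 750 ppm is a hypothetical
# midpoint for 2100, not an official IPCC scenario value.
print(f"{delta_t(750):.1f} C")  # roughly 1.1 C of direct greenhouse warming
```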
So now, if you are paying attention, you may be ready to call bullsh*t on me. Coyote, you say, every catastrophic forecast I have ever seen in the media is for WAY more than 1C of warming! Bill McKibben says it's going to be 10 degrees of warming (and if you can't trust Harvard journalism majors on scientific issues, who can you trust?). You are obviously lying, you evil denier.
Actually, no. Everything in this chapter has been pretty much canon in the global warming world. The direct, first order contribution of CO2 via the greenhouse effect is expected to be around a degree over the next century. So whence comes the catastrophe? As mentioned in the introduction, the catastrophe comes from a second, independent theory that the Earth's climate system is dominated by strong positive feedbacks that multiply greenhouse warming many times into a catastrophe.
If you have never heard of this second theory, don't be surprised. In many years of reading press articles on global warming, I can't remember one that adequately explained the two-part nature of the theory that is embedded in most global warming forecasts and climate models. But, perhaps not coincidentally, it is this second theory with which we skeptics have the most issues. We will take this up in our next installment.
From my Twitter account:
Your government: IRS web site actually closes after business hours https://t.co/o65g0ZHlhr @instapundit @reason pic.twitter.com/nIMlM16spX
- Coyoteblog (@Coyoteblog) March 24, 2016
One of the ugly facts about how we manage water is that by eschewing markets and prices to allocate scarce water, all that is left is command and control allocation to match supply and demand. The uglier fact is that politicians like it that way. A golf course that pays a higher market rate for water doesn't help a politician one bit. A golf course that has to beg for water through a political process is a source of campaign donations for life.
In a free society without an intrusive government, it would not matter whether California almond growers were loved or hated. If people did not like them, then they just wouldn't buy their product. But in California, the government holds the power of life or death over businesses through a number of levers, not least of which is water.
Almonds have become the Left's new bête noire. The nut is blamed for exacerbating the California drought, overtaxing honeybee colonies, starving salmon of river water, and price-gouging global consumers. Almonds may be loved by consumers, but almond growers, it seems, are increasingly despised in the media. In 2014, The Atlantic published a melodramatic essay, "The Dark Side of Almond Use" -- with the ominous subtitle, "People are eating almonds in unprecedented amounts. Is that okay?" If no one much cared that California agriculture was in near depression for much of the latter twentieth century -- and that almonds were hardly worth growing in the 1970s -- they now worry that someone is netting $5,000 to $10,000 per acre on the nut.
It is almost too much to bear for a social or environmental activist that a corporate farm of 5,000 acres could in theory clear $30 million a year -- without either exploiting poor workers or poisoning the environment, but by providing cool people with a healthy, hip, natural product. The kind of people who eat almond butter and drink almond milk, after all, are the kind of people who tend to endorse liberal causes.
As for almonds worsening the drought: the truth is that the nut uses about the same amount of water per acre as other irrigated California crops such as pasture, alfalfa, tree fruit, pistachios, cotton, or rice. In fact, almonds account for a smaller percentage of yearly irrigation use than their percentage of California farmland would suggest. Nonetheless, the growth of almond farming represents to many a greedy use of a scarce collective resource.