
## A Window into the Reality-Based Community

Kevin Drum links to a blog called Three-Toed Sloth in a post about why our climate future may be even worse than the absurdly cataclysmic forecasts we are getting today in the media.  Three-Toed Sloth advertises itself as "Slow Takes from the Canopy of the Reality-Based Community."  His post is an absolutely fabulous example of how one can write an article in which almost every line is literally true, yet the conclusion can still be dead wrong because one tiny assumption at the beginning of the analysis was incorrect.  (In this case, "incorrect" may be generous, since the author seems well-versed in the analysis of chaotic systems.  A better word might be "purposely fudged to make a political point.")

He begins with this:

The climate system contains a lot of feedback loops.  This means that the ultimate response to any perturbation or forcing (say, pumping 20 million years of accumulated fossil fuels into the air) depends not just on the initial reaction, but also on how much of that gets fed back into the system, which leads to more change, and so on.  Suppose, just for the sake of things being tractable, that the feedback is linear, and the fraction fed back is f.  Then the total impact of a perturbation J is

J + Jf + Jf² + Jf³ + ...

The infinite series of tail-biting feedback terms is in fact a geometric series, and so can be summed up if f is less than 1:

J/(1-f)
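The closed-form sum is easy to check numerically. A minimal sketch (the function names here are mine, for illustration):

```python
def feedback_series(J, f, terms=200):
    """Partial sum of the geometric feedback series J + J*f + J*f**2 + ..."""
    return sum(J * f**k for k in range(terms))

def closed_form(J, f):
    """Closed-form sum J / (1 - f), valid for |f| < 1."""
    return J / (1 - f)

# With f = 0.5, the series converges to 1 / (1 - 0.5) = 2.
print(feedback_series(1.0, 0.5))  # ~2.0
print(closed_form(1.0, 0.5))      # 2.0
```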

So far, so good.  The math here is entirely correct.  He goes on to argue that if we are uncertain about f (in other words, if there is a distribution of possible f's), then the range of the total system gain 1/(1-f) likely skews higher than our intuition might first tell us:

If we knew the value of the feedback f, we could predict the response to perturbations just by multiplying them by 1/(1-f) --- call this G for "gain".  What happens, Roe and Baker ask, if we do not know the feedback exactly?  Suppose, for example, that our measurements are corrupted by noise --- or even, with something like the climate, that f is itself stochastically fluctuating.  The distribution of values for f might be symmetric and reasonably well-peaked around a typical value, but what about the distribution for G?  Well, it's nothing of the kind.  Increasing f just a little increases G by a lot, so starting with a symmetric, not-too-spread distribution of f gives us a skewed distribution for G with a heavy right tail.
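The skew he describes is easy to reproduce: draw f from a symmetric distribution and look at the resulting gains.  A sketch, using parameters I picked for illustration (not taken from Roe and Baker):

```python
import random

random.seed(0)

# Symmetric (Gaussian) distribution of f centered on 0.65 -- illustrative
# values only -- truncated so f stays below 1 and G stays finite.
samples = [random.gauss(0.65, 0.1) for _ in range(100_000)]
fs = [f for f in samples if f < 0.99]
gains = [1.0 / (1.0 - f) for f in fs]

median_f = sorted(fs)[len(fs) // 2]
median_g = sorted(gains)[len(gains) // 2]
mean_g = sum(gains) / len(gains)

# f is roughly symmetric, so its mean and median nearly coincide, but the
# mean gain sits well above the median gain: a heavy right tail for G.
print(median_f, median_g, mean_g)
```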

Again, all true, with one small unstated proviso I will come back to.  He concludes:

In short: the fact that we will probably never be able to precisely predict the response of the climate system to large forcings is so far from being a reason for complacency it's not even funny.

Actually, I can think of two unstated facts that undermine this analysis.  The first is that most catastrophic climate forecasts you see use gains in the 3x-5x range, or sometimes higher (but seldom lower).  This implies an f of between 0.67 and 0.80.  These are already very high numbers for any natural process.  If catastrophist climate scientists are already assuming values at the high end of the range, then the point about uncertainties skewing the gain disproportionately higher is moot.  In fact, we might draw the reverse conclusion: the saw cuts both ways.  His analysis also implies that small overstatements of f, when the forecasts are already skewed to the high side, will lead to very large overstatements of gain.
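Working backward from a published gain to the implied feedback fraction is one line of algebra: solving G = 1/(1-f) for f gives f = 1 - 1/G.  A quick illustration:

```python
# Invert G = 1/(1-f) to recover the implied feedback fraction f = 1 - 1/G.
# A 3x gain implies f of about 0.67; a 5x gain implies f of 0.80.
for G in [3.0, 4.0, 5.0]:
    f = 1.0 - 1.0 / G
    print(f"gain {G:.0f}x implies f = {f:.2f}")
```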

But here is the real elephant in the room:  for the vast, vast majority of natural processes, f is less than zero.  The author has blithely accepted the currently unproven assumption that the net feedback in the climate system is positive.  He never even hints at the possibility that f might be a negative feedback rather than a positive one, despite the fact that almost all natural processes are dominated by negative rather than positive feedback.  Assuming without evidence that a random natural process one encounters is dominated by positive feedback is roughly equivalent to assuming the random person you just met on the street is a billionaire.  It is not totally out of the question, but it is very, very unlikely.

When one plugs a negative f into the equation above, say -0.3, the gain actually becomes less than one, in this case about 0.77.  In a negative feedback regime, the system response is actually less than the initial perturbation, because forces exist in the system to damp the initial input.
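The asymmetry between the two regimes is clear if you simply tabulate G = 1/(1-f) across the range, including negative values:

```python
# Gain G = 1/(1-f) for a range of feedback fractions.
# Negative f damps the response (G < 1); positive f amplifies it (G > 1).
for f in [-0.5, -0.3, 0.0, 0.3, 0.67, 0.8]:
    G = 1.0 / (1.0 - f)
    print(f"f = {f:+.2f}  ->  G = {G:.2f}")
```

Note that f = -0.3 gives the G of about 0.77 mentioned above, while f = 0.67 and f = 0.8 give the 3x and 5x gains of the catastrophic forecasts.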

The author is trying to argue that uncertainty about the degree of feedback in the climate system, and therefore about the sensitivity of the system to CO2 changes, does not change the likelihood of the coming "catastrophe."  Except that he fails to mention that we are so uncertain about the feedback that we do not even know its sign.  Feedback, or f, could be positive or negative as far as we know, with values ranging anywhere from -1 to 1.  We do not have good evidence as to where the exact number lies, except to observe from the relative stability of past temperatures over a long time frame that the number is probably not at the high positive end of this range.  Data from climate response over the last 120 years seem to point to a number close to zero or slightly negative, in which case the author's entire post is irrelevant.  In fact, it turns out that the climate scientists who make the news are all clustered around the least likely guesses for f, i.e., values greater than 0.6.

Incredibly, while refusing even to mention the Occam's Razor solution that f is negative, the author seriously entertains the notion that f might be one or greater.  For such values, the gain shoots to infinity and the system goes wildly unstable (nuclear fission, for example, is an f>1 process).  In an f>1 world, lightly tapping the accelerator in our car would send us quickly racing up to the speed of light.  This is an ABSURD assumption for a system like climate that has been stable over tens of millions of years.  A positive feedback of f>=1 would have sent us to a Venus-like heat or a Mars-like frigidity eons ago.
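The instability at f >= 1 shows up directly in the partial sums of the feedback series, which simply stop converging.  A small sketch:

```python
def partial_sums(J, f, terms):
    """Running totals of J + J*f + J*f**2 + ... for the first `terms` terms."""
    total, out = 0.0, []
    for k in range(terms):
        total += J * f**k
        out.append(total)
    return out

# For f < 1 the response settles toward J/(1-f); for f >= 1 each feedback
# pass adds at least as much as the last, and the sum grows without bound.
print(partial_sums(1.0, 0.5, 6))  # approaching 2.0
print(partial_sums(1.0, 1.1, 6))  # growing without limit
```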

A summary of why recent historical empirical data implies low or negative feedback is here.  You can learn more on these topics in my climate video and my climate book.  To save you the search, the section of my movie explaining feedbacks, with a nifty live demonstration from my kitchen, is in the first three and a half minutes of the clip below:

## Update on Coyote's Law

Given all of the conspiracy theories bouncing around the net nowadays, I thought it would be timely to revisit Coyote's Law.  Coyote's Law states:

When the same set of facts can be explained equally well by

1. A massive conspiracy coordinated without a single leak between hundreds or even thousands of people, OR
2. Sustained stupidity, confusion and/or incompetence,

Assume stupidity.

To some extent, Coyote's Law is a logical extension of Occam's Razor.  However, it seems to have consistent and frequent application in modern politics.  Here are a couple of examples, but I am sure the reader can think of more:

• There are a number of revisionist historians who make the argument that Pearl Harbor was actually an elaborate FDR plot to overcome domestic isolationism and bring the US into the war.  They point to the many missed intelligence clues, the incredible unreadiness of the defenses at Pearl Harbor, and the missing US carriers as evidence of a conspiracy.  However, most historians have concluded that Coyote's Law holds: our failure at Pearl Harbor was the result of mistakes and incompetence, not conspiracy.
• The mother of all conspiracy theory subjects is, of course, the JFK shooting.  Many people simply refuse to believe that a lone gunman, and a fairly unimpressive one at that, could have pulled off such a killing.  He must have had help from the Cubans, or the Mafia, or the FBI, or the CIA, or the grassy knoll, or whatever.  Despite all the millions of hours of research into these theories, Coyote's Law still holds - it is much more likely that JFK was killed due to poor protection and the vulnerability of any one man to a sufficiently dedicated gunman who is not committed to getting away after the assassination (which, by the way, is still true).

To some extent, in both these cases it is a bit unfair to use the word "stupidity".  I am reminded of a quote by Frank Borman (as portrayed in the awesome mini-series "From the Earth to the Moon"; I have not been able to find out whether these were his actual words) in a committee hearing on the Apollo 1 fire that killed three astronauts.  Under intense scrutiny for a set of conditions that in retrospect seemed ridiculously unsafe, Borman described the problem as "a failure of imagination".  To some extent, that is what happened both at Pearl Harbor and with the JFK assassination, and, essentially, with the 9/11 attacks.  What occurred was so new, so unprecedented, that no one could really make themselves believe in advance that it would happen.  But, nonetheless, what resulted was incompetence, not conspiracy.

Which brings us to the 2004 election.  Certainly, in this case, no one can claim a failure of imagination, as just about everyone half anticipated vote-tally screw-ups after Florida in 2000.  However, in their review of conspiracy charges regarding election counts, this Caltech-MIT report has a fantastic restatement of Coyote's Law:

Well, I don't want to write off legitimate questions about the integrity of the voting system. But turn the question around: Which is more likely -- that an exit polling system that has been consistently wrong and troubled turned out to be wrong and troubled again, or that a vast conspiracy carried out by scores and scores of county and state election officials was successfully carried off to distort millions of American votes?

UPDATE

EEEK!  Frank Borman is the astronaut.  I had written Martin Bormann, the Nazi.  Sorry.  (And yes, this mistake was due to my STUPIDITY and INCOMPETENCE, and not a Boys From Brazil conspiracy.)