The sprawling city of Phoenix, of all places, is supposedly showing us how light rail should be done. The city opened a 20-mile line with 28 stops last December, and we are told ridership is beating all forecasts -- 40,000 daily riders instead of the 25,000 expected -- evidence, the argument goes, that other cities are too timid to invest because their forecasts are too low.
But here are the ridership figures from Valley Metro, which runs Phoenix Light Rail. These are weekday figures (actually the number of daily boardings) -- weekend ridership is much lower:
Jan: 30,617
Feb: 35,277
Mar: 34,376
Apr: 37,386
May: 33,553
Jun: 29,469
Jul: 26,554
It is hard to see where one gets a 40,000 figure, especially since a true daily rider/boarding figure would have to average in the lower Saturday/Sunday numbers.
And who cares if it meets some sandbagged forecast or not? Is 40,000 even a reasonable number? Note that even the higher 40,000 boardings figure implies just 20,000 round-trip customers, since each round-trip rider boards twice. Even at this higher ridership number, the capital cost of the $1.4 billion line works out to $70,000 per round-trip rider -- an ABSURD subsidy.
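As a quick sanity check on that arithmetic, here is a minimal sketch in Python; the two-boardings-per-round-trip assumption and the $1.4 billion figure come straight from the paragraph above:

```python
# Rough check of the per-rider capital cost claim (figures from the text above).
daily_boardings = 40_000                  # claimed weekday boardings
round_trip_riders = daily_boardings / 2   # a round-trip rider boards twice per day
capital_cost = 1.4e9                      # capital cost of the line, in dollars

cost_per_rider = capital_cost / round_trip_riders
print(f"{round_trip_riders:,.0f} round-trip riders")     # 20,000
print(f"${cost_per_rider:,.0f} per round-trip rider")    # $70,000
```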
Update: The ridership numbers will likely pick up when Arizona State is back in school. ASU and the baseball stadium are about the only major destinations on the line through dispersed, low-density Phoenix (it goes through our "downtown" but that is not saying much -- it is not a big center of employment). Did we really build light rail as another subsidy for ASU students?
Update #2: Let's say there are 50,000,000 big-city commuters in the US outside of Boston, New York, and Chicago, which already have large transit systems. Serving these commuters at $70,000 each implies a capital cost of $3.5 trillion for light rail. Who on the planet really thinks this is reasonable? Sure, you would get some network effects as you built out lines, increasing ridership, but these would be offset by diminishing returns (presumably the first Phoenix line was built on the most promising corridor, and all future corridors will be less promising).
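And a similarly rough sketch of the national extrapolation; the 50,000,000 commuter count is the round assumption from the update above, not a measured figure:

```python
# Extrapolating the Phoenix per-rider capital cost to a notional national build-out.
commuters = 50_000_000        # assumed big-city commuters outside Boston/NY/Chicago
cost_per_rider = 70_000       # capital cost per round-trip rider from the Phoenix line

total_cost = commuters * cost_per_rider
print(f"${total_cost / 1e12:.1f} trillion")   # $3.5 trillion
```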
Kevin Drum links to a blog called Three-Toed Sloth in a post about why our climate future may be even worse than the absurdly cataclysmic forecasts we are getting today in the media. Three-Toed Sloth advertises itself as "Slow Takes from the Canopy of the Reality-Based Community." His post is an absolutely fabulous example of how one can write an article in which nearly every line is literally true, yet the conclusion is still dead wrong because one tiny assumption at the beginning of the analysis was incorrect. (In this case, "incorrect" may be generous, since the author seems well-versed in the analysis of chaotic systems. A better term might be "purposely fudged to make a political point.")
He begins with this:
The climate system contains a lot of feedback loops. This means that the ultimate response to any perturbation or forcing (say, pumping 20 million years of accumulated fossil fuels into the air) depends not just on the initial reaction, but also how much of that gets fed back into the system, which leads to more change, and so on. Suppose, just for the sake of things being tractable, that the feedback is linear, and the fraction fed back is f. Then the total impact of a perturbation I is

J + Jf + Jf² + Jf³ + ...

The infinite series of tail-biting feedback terms is in fact a geometric series, and so can be summed up if f is less than 1:

J/(1-f)
So far, so good. The math here is entirely correct. He goes on to argue that if we are uncertain about f -- in other words, if there is a distribution of possible f's -- then the distribution of the total system gain 1/(1-f) is likely skewed higher than our intuition might first suggest:
If we knew the value of the feedback f, we could predict the response to perturbations just by multiplying them by 1/(1-f) --- call this G for "gain". What happens, Roe and Baker ask, if we do not know the feedback exactly? Suppose, for example, that our measurements are corrupted by noise --- or even, with something like the climate, that f is itself stochastically fluctuating. The distribution of values for f might be symmetric and reasonably well-peaked around a typical value, but what about the distribution for G? Well, it's nothing of the kind. Increasing f just a little increases G by a lot, so starting with a symmetric, not-too-spread distribution of f gives us a skewed distribution for G with a heavy right tail.
Again all true, with one small unstated proviso I will come back to. He concludes:
In short: the fact that we will probably never be able to precisely predict the response of the climate system to large forcings is so far from being a reason for complacency it's not even funny.
Actually, I can think of two unstated facts that undermine this analysis. The first is that most catastrophic climate forecasts you see utilize gains in the 3x-5x range, or sometimes higher (but seldom lower). This implies they are using an f of between 0.67 and 0.80. These are already very high numbers for any natural process. If catastrophist climate scientists are already assuming numbers at the high end of the range, then the point about uncertainties skewing the gain disproportionately higher is moot. In fact, we might draw the reverse conclusion: the saw cuts both ways, and his analysis implies that small overstatements of f, when the forecasts are already skewed to the high side, will lead to very large overstatements of the gain.
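To make that concrete, here is a minimal sketch using the gain formula G = 1/(1-f) quoted above; the specific f values are illustrative, chosen only to bracket the 3x-5x range just mentioned:

```python
# Gain G = 1/(1-f) for a linear feedback fraction f (the series converges only for f < 1).
def gain(f: float) -> float:
    return 1.0 / (1.0 - f)

# Gains of 3x to 5x correspond to f of roughly 0.67 to 0.80.
for f in (0.67, 0.80):
    print(f"f = {f:.2f} -> gain = {gain(f):.1f}x")

# Near the top of that range, a small overstatement of f greatly overstates the gain.
print(f"f = 0.85 -> gain = {gain(0.85):.1f}x")   # +0.05 in f pushes the gain from 5x to ~6.7x
```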
But here is the real elephant in the room: for the vast, vast majority of natural processes, f is less than zero. The author has blithely accepted the currently unproven assumption that the net feedback in the climate system is positive. He never even hints at the possibility that f might be negative rather than positive, despite the fact that almost all natural processes are dominated by negative rather than positive feedback. Assuming without evidence that a random natural process one encounters is dominated by positive feedback is roughly equivalent to assuming the random person you just met on the street is a billionaire. It is not totally out of the question, but it is very, very unlikely.
When one plugs a negative f into the equation above, say -0.3, the gain actually comes out less than one: 1/(1-(-0.3)) = 1/1.3, or about 0.77. In a negative feedback regime, the system response is actually less than the initial perturbation, because forces exist in the system to damp the initial input.
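The same sketch extended to a negative feedback value -- and -0.3 here is just the illustrative number used above, not a measured climate parameter:

```python
# With negative feedback the gain falls below 1: the system damps the perturbation.
def gain(f: float) -> float:
    return 1.0 / (1.0 - f)                    # closed form of 1 + f + f^2 + ... for |f| < 1

def series_gain(f: float, terms: int = 60) -> float:
    return sum(f**n for n in range(terms))    # direct partial sum of the feedback series

for f in (-0.3, 0.0, 0.3):
    print(f"f = {f:+.1f} -> gain = {gain(f):.2f} (series sum ~ {series_gain(f):.2f})")
# f = -0.3 gives a gain of about 0.77: the response is smaller than the initial input.
```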
The author is trying to argue that uncertainty about the degree of feedback in the climate system, and therefore about the sensitivity of the system to CO2 changes, does not change the likelihood of the coming "catastrophe." Except that he fails to mention that we are so uncertain about the feedback that we don't even know its sign. Feedback, or f, could be positive or negative as far as we know; values could range anywhere from -1 to 1. We don't have good evidence as to where the exact number lies, except to observe from the relative stability of past temperatures over a long time frame that the number probably is not at the high positive end of this range. Data from the climate's response over the last 120 years seems to point to a number close to zero or slightly negative, in which case the author's entire post is irrelevant. In fact, the climate scientists who make the news are all clustered around the least likely guesses for f, i.e., values greater than 0.6.
Incredibly, while refusing even to mention the Occam's Razor solution that f is negative, the author seriously entertains the notion that f might be one or greater. For such values the series no longer converges: the gain shoots to infinity and the system goes wildly unstable (nuclear fission, for example, is an f>1 process). In an f>1 world, lightly tapping the accelerator in our car would send us racing up to the speed of light. This is an ABSURD assumption for a system like climate that has been stable over tens of millions of years. A positive feedback of f>=1 would have sent us to Venus-like heat or Mars-like frigidity eons ago.
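For completeness, a quick illustration of why f >= 1 blows up; the value f = 1.05 is purely illustrative:

```python
# For f >= 1 the feedback series 1 + f + f^2 + ... never converges:
# the partial sums just keep growing, i.e. the response runs away.
f = 1.05
for terms in (10, 50, 100):
    partial_sum = sum(f**k for k in range(terms))
    print(f"first {terms:3d} terms sum to {partial_sum:,.0f}")
# The totals grow without bound -- there is no finite gain to speak of.
```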