After debating people online for years on issues from catastrophic man-made climate change to genetically-modified crops to common chemical hazards (e.g. BPA) to vaccination, I wanted to offer a couple quick thoughts on the common mistakes I see in evaluating risks.
1. Poor Understanding of Risk, and of Studies that Evaluate Risk
First, people are really bad at thinking about incremental risk above and beyond the background risk (e.g. not asking "what is my risk of cancer" but "what is my incremental added risk from being exposed to X"). Frequently those incremental risks are tiny and hard to pick out of the background risk at any level of confidence. They also tend to be small compared to everyday risks on which people seldom focus. Your risk of drowning in your own bathtub in the US is far higher -- by almost two orders of magnitude -- than your risk of being killed in a terrorist attack, but which do we obsess over?
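To see why a small incremental risk is so hard to pick out, here is a quick Python sketch. All the numbers in it are invented for illustration -- they are not real epidemiology:

```python
import random

random.seed(42)

# Invented numbers for illustration only -- not real epidemiology.
BACKGROUND_RISK = 0.40   # lifetime disease risk in the unexposed group
ADDED_RISK = 0.005       # hypothetical incremental risk from exposure to "X"
N = 1000                 # people per study arm

def cases(risk, n):
    """Count how many of n people get the disease at the given risk."""
    return sum(random.random() < risk for _ in range(n))

for trial in range(5):
    control = cases(BACKGROUND_RISK, N)
    exposed = cases(BACKGROUND_RISK + ADDED_RISK, N)
    print(f"trial {trial}: control={control}, exposed={exposed}, "
          f"difference={exposed - control:+d}")

# The true effect is only ~5 extra cases per 1,000, but arm-to-arm noise
# is often several times larger, so the exposed group can easily come
# out *lower* than the control in any single small study.
```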
Further, there are a lot of folks who seem all-too-ready to shoot off in a panic over any one scary study in the media. And the media loves this, because it drives the meter on their earnings, so they bend over backwards to look for studies with scary results and then make them sound even scarier. "Tater-tots Increase Risk of Ebola!" But in reality, most of these scary studies never get replicated and turn out to be mistaken. Why does this happen?
The problem is that every natural process is subject to random variation. Even without changing the conditions of an experiment, there is going to be random variation in measurements. For example, one population of white mice might have 6 cancers, but the next might have 12 and the next might have zero, all from natural variation. So the challenge of most experiments is to determine whether the thing one is testing (e.g. exposure to a particular substance) is actually changing the measurements in a population, or whether that change is simply the result of random variation. That is what the 95% confidence level (the standard Naomi Oreskes wants to get rid of) really means. It means that if the thing being tested actually had no effect, there is only a 5% chance that natural variation alone would produce results this large.
This is a useful test, but I hope you can see how it can fail. Something like 5% of the time that one is measuring two things that actually are uncorrelated, the test is going to give you a false positive. Let's say in a year that the world does 1000 studies to test links that don't actually exist. Just from natural variation, 5% of these studies will still seem to show a link at the 95% confidence level. We will have 50 studies that year broadcasting false links. The media will proceed to scare the crap out of you over these 50 things.
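Here is a minimal simulation of that arithmetic, assuming the stylized setup above (1000 studies of links that don't exist, tested at the 95% level):

```python
import random

random.seed(0)

N_STUDIES = 1000  # studies testing links that do not actually exist
ALPHA = 0.05      # false-positive rate implied by the 95% confidence level

# Each null study clears the significance bar ~5% of the time purely
# through random variation, so count how many "find" a link anyway.
false_positives = sum(random.random() < ALPHA for _ in range(N_STUDIES))
print(f"{false_positives} of {N_STUDIES} null studies showed a 'link'")
# Typically prints a number close to 50.
```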
I have never seen this explained better than in this XKCD cartoon:
All of this is just exacerbated when there is fraud involved, an unfortunate but not unknown occurrence when reputations and large academic grants are on the line. This is why replication of the experiment is important. Do the study a second time, and all but 2-3 of these 50 "false positive" studies will fail to replicate the original results. Do it three times, and all will likely fail to replicate. This, for example, is exactly what happened with the vaccine-autism link -- it came out in one study with a really small population and some evidence of fraud, and was never replicated.
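Extending the same sketch (again with the stylized numbers above, not real study data), replication filters the spurious findings out fast:

```python
import random

random.seed(1)

ALPHA = 0.05     # chance a truly null result "replicates" by luck each time
positives = 50   # the ~50 false-positive studies from the sketch above

for attempt in (1, 2, 3):
    # Each spurious finding survives a replication only ~5% of the time.
    positives = sum(random.random() < ALPHA for _ in range(positives))
    print(f"after replication round {attempt}: {positives} still 'positive'")

# Expected survivors: 50 * 0.05 = 2.5 after one round, ~0.125 after two,
# ~0.006 after three -- replication rapidly weeds out false links.
```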
2. The Precautionary Principle vs. the Unseen, with a Dollop of Privilege Thrown In
When pressed to the wall about the size and quality of the risk assessment, most folks subject to these panics will fall back on the "precautionary principle". I am not a big fan of the precautionary principle, so I will let Wikipedia define it so I don't create a straw man:
The precautionary principle or precautionary approach to risk management states that if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking an action.
I will observe that as written, this principle is inherently anti-progress. The proposition requires that folks who want to introduce new innovations must prove a negative, and it is very hard to prove a negative -- how do I prove there are no invisible aliens in my closet who may come out and eat me someday, and how can I possibly get a scientific consensus to this fact? As a result, by merely expressing that one "suspects" a risk (note there is no need listed for proof or justification of this suspicion), any advance may be stopped cold. Had we followed such a principle consistently, we would still all be subsistence farmers, vassals to our feudal lord.
One other quick note before I proceed: it turns out that proponents of the precautionary principle are very selective about where they apply it. They feel it absolutely must be applied to fossil fuel burning, or BPA use, or GMO's. But precautionary principle supporters never apply it in turn to, say, major new government programs, regulations, and economic interventions, despite many historically justified concerns about the risks of such programs.
But neither of these is necessarily the biggest problem with the precautionary principle. The real problem is that it focuses on only one side of the equation -- it says that risks alone justify stopping any action or policy without any reference at all to benefits of that policy or opportunity costs of its avoidance. A way of restating the precautionary principle is, "when faced with risks and benefits of a certain proposal, look only at the risks."
Since the precautionary principle really hit the mainstream with the climate change debate, I will use that as an example. Contrary to media appellations of being a "denier," most science-based climate skeptics like myself accept that man is adding to greenhouse gasses in the atmosphere and that those gasses have an incremental warming effect on the planet. What we deny is the catastrophe -- we believe we have good evidence that catastrophic forecasts from computer models are exaggerating future warming, and greatly exaggerating resulting forecast climate changes. Whenever I am fairly successful making this argument, the inevitable rejoinder is "well, the precautionary principle says that if we have even a small percentage chance that burning fossil fuels will lead to a climate disaster, then we have to limit their use immediately".
The problem with this statement is that it assumes there is no harm or risk to reducing fossil fuel use. But fossil fuel use delivers enormous benefits to everyone in the world. Even if we could find near substitutes that don't create CO2 emissions (and it is very much open to debate whether such substitutes currently exist), these substitutes tend to be much more expensive and much more infrastructure-intensive than are fossil fuels. The negative impact to the economy would be substantial. One could argue that one particular impact -- climate or economy -- outweighs the other, but it is outright fraud to refuse to discuss the trade-off altogether. Particularly since catastrophic climate change may only be a low-percentage risk while economic dislocation from reduction in fossil fuel use is a near certainty.
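For concreteness, here is the shape of the comparison the precautionary principle refuses to make, as a sketch. Every probability and cost below is an invented placeholder, not an estimate of anything real:

```python
# All numbers below are invented placeholders, not estimates of anything.

def expected_cost(probability, cost):
    """Expected cost of an outcome = its probability times its cost."""
    return probability * cost

# Hypothetical inputs for a policy of sharply restricting fossil fuels:
p_catastrophe = 0.05       # assumed chance of catastrophic warming otherwise
cost_catastrophe = 100.0   # assumed cost of that catastrophe (arbitrary units)
p_dislocation = 0.95       # near-certain economic dislocation from restriction
cost_dislocation = 10.0    # assumed cost of that dislocation (same units)

avoided = expected_cost(p_catastrophe, cost_catastrophe)
incurred = expected_cost(p_dislocation, cost_dislocation)
print(f"expected cost avoided by restricting:  {avoided:.1f}")
print(f"expected cost incurred by restricting: {incurred:.1f}")

# Which side wins depends entirely on the inputs; the point is only that
# an honest analysis must compute *both* numbers, not just one.
```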
My sense is that if the United States chose to cut way back on fossil fuel use in a concerted effort, we could manage it and survive the costs. But that is because we are a uniquely rich nation. I am not sure anyone in this country understands how rich. I am not talking just about Warren Buffet. Even the poorest countries have a few rich people at the top. I am talking about everybody. Our poorest 20% would actually be among the richest quintile in many nations of the world. A worldwide effort to eliminate fossil fuel use or to substantially raise its costs or to force shifts to higher cost, less easily-used alternatives would simply devastate many developing nations, which need every erg their limited resources can get their hands on. We are at a unique moment in history when more than a billion people are in the process of emerging from poverty around the world, progress that would be stopped in its tracks by a concerted effort to limit CO2 output. Why doesn't the precautionary principle apply to actions that affect their lives?
College kids have developed a popular rejoinder they use in arguments that states "check your privilege." I thought at first it was an interesting phrase. I used it in arguments a few times about third world "sweat shops". I argued that those who wanted to close down the Nike factory paying $1 an hour in China needed to check their privilege -- they had no idea what alternatives those Chinese who took the Nike jobs were facing. Yes, you middle class Americans would never take that job, but what if your alternative was 12 hours a day in a rice paddy somewhere that barely brought in enough food for your family to subsist? Only later, I learned that "check your privilege" didn't mean what I thought it meant, and in fact in actual academic use it instead means "shut up, white guy." In a way, though, this use is consistent with how the precautionary principle is often used -- in many of my arguments, "precautionary principle" is another way of saying "stop talking about the costs and trade-offs of what I am proposing."
Perhaps the best example of the damage that can be wrought by a combination of Western middle class privilege and the precautionary principle is the case of golden rice. According to the World Health Organization, between 250,000 and 500,000 children become blind every year due to vitamin A deficiency, half of whom die within a year of becoming blind. Millions of other people suffer from various debilitating conditions due to the lack of this essential nutrient. Golden Rice is a genetically modified form of rice that, unlike conventional rice, contains beta-carotene in the rice kernel, which is converted to vitamin A in humans.
By 2002, Golden Rice was technically ready to go. Animal testing had found no health risks. Syngenta, which had figured out how to insert the Vitamin A-producing gene from carrots into rice, had handed all financial interests over to a non-profit organization, so there would be no resistance to the life-saving technology from GMO opponents who resist genetic modification because big biotech companies profit from it. Except for the regulatory approval process, Golden Rice was ready to start saving millions of lives and preventing tens of millions of cases of blindness in people around the world who suffer from Vitamin A deficiency.
Seems like a great idea. Too bad it's going nowhere, due to fierce opposition on the Left (particularly from Greenpeace) to hypothetical dangers from GMO's:
It's still not in use anywhere, however, because of the opposition to GM technology. Now two agricultural economists, one from the Technical University of Munich, the other from the University of California, Berkeley, have quantified the price of that opposition, in human health, and the numbers are truly frightening.
Their study, published in the journal Environment and Development Economics, estimates that the delayed application of Golden Rice in India alone has cost 1,424,000 life years since 2002. That odd-sounding metric (not just lives but "life years") accounts not only for those who died, but also for the blindness and other health disabilities that Vitamin A deficiency causes. The majority of those who went blind or died because they did not have access to Golden Rice were children.
Note this is exactly the sort of risk tradeoff the precautionary principle is meant to ignore. The real situation is that a vague risk of unspecified and unproven problems with GMO's (which are typically driven more by a distrust on the Left of the for-profit corporations that produce GMO's rather than any good science) should be balanced with absolute certainty of people dying and going blind. But the Greenpeace folks will just shout that because of the "precautionary principle", only the vague unproven risks should be considered and thus golden rice should be banned.
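For readers wondering how a "life years" number like that gets built, here is a rough sketch of a DALY-style calculation. The weights and counts are invented for illustration; the actual study's methodology and inputs differ:

```python
# Rough sketch of a DALY-style "life years" tally. The weights and counts
# are invented for illustration; the actual study's method and inputs differ.

def life_years_lost(deaths, years_per_death, disabled, disability_weight,
                    years_disabled):
    """Deaths count full remaining life years; disability (here, blindness)
    counts a fraction of each year lived with the condition."""
    return (deaths * years_per_death
            + disabled * disability_weight * years_disabled)

lost = life_years_lost(deaths=5_000, years_per_death=30,
                       disabled=10_000, disability_weight=0.2,
                       years_disabled=40)
print(f"life years lost per year (illustrative): {lost:,.0f}")
```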
Risk and Post-Modernism
A few weeks ago, I wrote about Naomi Oreskes and the post-modern approach to science, where facts and proof take a back-seat to political narratives and the feelings and intuition of various social groups. I hadn't really thought much about this post-modernist approach in the context of risk assessment, but I was struck by this comment by David Ropeik, who blogs for Scientific American.
The whole GMO issue is really just one example of a far more profound threat to your health and mine. The perception of risk is inescapably subjective, a matter of not just the facts, but how we feel about those facts. As pioneering risk perception psychologist Paul Slovic has said, "risk is a feeling." So societal arguments over risk issues like Golden Rice and GMOs, or guns or climate change or vaccines, are not mostly about the evidence, though we wield the facts as our weapons. They are mostly about how we feel, and our values, and which group's values win, not what will objectively do the most people the most good. That's a dumb and dangerous way to make public risk management decisions.
Mr. Ropeik actually disagrees with me on the risk/harm tradeoffs of climate change (he obviously thinks the harms outweigh the costs of prevention -- I will give him the benefit of the doubt that he has actually thought about both sides of the equation). Fine. I would be thrilled for once to have a discussion with someone about climate change when we are really talking about costs and benefits on both sides of the equation (action and inaction). Unfortunately that is all too rare.
Postscript: To the extent the average person remembers Bjorn Lomborg at all, they could be excused for assuming he is some crazed right-wing climate denier, given how he was treated in the media. In fact, Lomborg is very much a global warming believer. He takes funding from Right-ish organizations now, but that is only because he has been disavowed by the Left, which was his original home.
What he did was write a book in which he looked at a number of environmental problems -- both their risks and costs as well as their potential mitigation costs -- and he ranked them on bang for the buck: where could we get the most environmental benefit and help the most people for the least investment? The book talked about what he thought were the very real dangers of climate change, but it turned out climate change was way down this ranked list in terms of benefits vs. costs of solutions.
This is a point I have made before. Why, for example, are we spending so much time harping on China to reduce CO2 when their air is poisonous? We know how to have a modern technological economy and still have air without soot. It is less certain whether we can yet have a modern technological economy without CO2 production. Lomborg thought about just this sort of thing, and made the kind of policy risk-reward tradeoffs based on scientific analysis that we would hope our policy makers were pursuing. It was exactly the kind of analysis that Ropeik was advocating for above.
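To illustrate the kind of prioritization Lomborg was doing, here is a toy benefit-per-dollar ranking. The intervention names and numbers are invented placeholders, not his data:

```python
# Toy version of the ranking: invented names and numbers, not Lomborg's data.

interventions = [
    # (name, benefit in arbitrary welfare units, cost in $billions)
    ("micronutrient programs",   900,   1.0),
    ("clean drinking water",     700,   5.0),
    ("urban soot reduction",     600,  10.0),
    ("aggressive CO2 reduction", 400, 100.0),
]

# Sort by bang for the buck: benefit per dollar, highest first.
for name, benefit, cost in sorted(interventions,
                                  key=lambda r: r[1] / r[2], reverse=True):
    print(f"{name:26s} benefit/cost = {benefit / cost:6.1f}")
```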
Lomborg must have expected that his work would be embraced by the environmental Left. After all, it was scientific, it acknowledged the existence of a number of environmental issues that needed to be solved, and it advocated for a strong government-backed effort led by smart technocrats doing rational prioritizations. But Lomborg was absolutely demonized by just about everyone in the environmental community and on the Left in general. He was universally trashed. He was called a climate denier when in fact he was no such thing -- he just pointed out that man-made climate change was way harder to solve than other equally harmful environmental issues. Didn't he get the memo that the narrative was that global warming was the #1 environmental threat? How dare he suggest a re-prioritization!
Lomborg's prioritization may well have been wrong, but no one was actually sitting down to make that case. He was simply demonized from day one for getting the "wrong" answer, defined as the answer not fitting the preferred narrative. We are a long, long way from any reasonable ability to assess and act on risks.