Computer Modeling as "Evidence"

The BBC has decided not to ever talk to climate skeptics again, in part based on the "evidence" of computer modelling

Climate change skeptics are being banned from BBC News, according to a new report, for fear of misinforming people and to create more of a "balance" when discussing man-made climate change.

The latest casualty is Nigel Lawson, former Chancellor of the Exchequer and climate change skeptic, who has just recently been barred from appearing on the BBC. Lord Lawson, who has written about climate change, said the corporation is silencing the debate on global warming since he discussed the topic on its Radio 4 Today program in February.

This skeptic accuses "Stalinist" BBC of succumbing to pressure from those with renewable energy interests, like the Green Party, in an editorial for the Daily Mail.

He appeared on February 13 debating with scientist Sir Brian Hoskins, chairman of the Grantham Institute for Climate Change at Imperial College, London, to discuss recent flooding that supposedly was linked to man-made climate change.

Despite the fact that the two intellectuals had a "thoroughly civilized discussion," BBC was "overwhelmed by a well-organized deluge of complaints" following the program. Naysayers harped on the fact that Lawson was not a scientist and said he had no business voicing his opinion on the subject.

...

Among the objections, including one from Green Party politician Chit Chong, was that Lawson's views were not supported by evidence from computer modeling.

I see this all the time.  A lot of things astound me in the climate debate, but perhaps the most astounding has been to be accused of being "anti-science" by people who have such a poor grasp of the scientific process.

Computer models and their output are not evidence of anything.  Computer models are extremely useful when we have hypotheses about complex, multi-variable systems.  It may not be immediately obvious how to test these hypotheses, so computer models can take the hypothesized formulas and generate predicted values of measurable variables that can then be compared to actual physical observations.

This is no different (except in speed and scale) from a person in the 18th century sitting down with Newton's gravitational equations and grinding out five years of predicted positions for Venus (in fact, the original meaning of the word "computer" was a human being who ground out numbers in just this way).  That person and his calculations are the exact equivalent of today's computer models.  We wouldn't say that those lists of predictions for Venus were "evidence" that Newton was correct.  We would take these predictions and compare them to actual measurements of Venus's position over the next five years.  If they matched, we would consider that match to be the real evidence that Newton may be correct.

So it is not the existence of the models or their output that is evidence that catastrophic man-made global warming theory is correct.  The evidence would be that the output of these predictive models actually matches what plays out in reality.  Which is why skeptics think the divergence between climate model temperature forecasts and actual temperatures is important, but we will leave that topic for other days.
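The distinction can be put concretely. Here is a toy sketch (all numbers invented for illustration) of the only step that actually generates evidence: comparing model output against measurements taken afterwards.

```python
# Toy sketch (all numbers invented): model output on its own is not evidence;
# the comparison against subsequent observations is.
predicted = [0.20, 0.41, 0.63, 0.85]   # what the model said would happen
observed  = [0.18, 0.39, 0.60, 0.82]   # what was actually measured later

# The quantity we judge the model by: how far predictions fell from reality.
mae = sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)
print(f"mean absolute error: {mae:.3f}")
```

A small error here would be evidence for the hypothesis behind the model; the `predicted` list alone, however impressive the machine that produced it, is not.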

The other problem with models

The other problem with computer models, besides the fact that they do not and cannot constitute evidence in and of themselves, is that their results are often sensitive to small changes in the tuning or setting of variables, and that these tuning decisions are often totally opaque to outsiders.

I did computer modelling for years, though of markets and economics rather than climate.  But the techniques are substantially the same.  And the pitfalls.

Confession time.  In my very early days as a consultant, I did something I am not proud of.  I was responsible for a complex market model based on a lot of market research and customer service data.  Less than a day before the big presentation, and with all the charts and conclusions made, I found a mistake that skewed the results.  In later years I would have the moral courage and confidence to cry foul and halt the process, but at the time I ended up tweaking a few key variables to make the model continue to spit out results consistent with our conclusion.  It is embarrassing enough that I have trouble writing this for public consumption 25 years later.

But it was so easy.  A few tweaks to assumptions and I could get the answer I wanted.  And no one would ever know.  Someone could stare at the model for an hour and not recognize the tuning.
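What that tweaking looks like in practice can be sketched in a few lines. Everything below is hypothetical, invented for illustration (the function, the coefficient, and the target are not from any real model): a "model" with one free knob, scanned until it reproduces a conclusion fixed in advance.

```python
# Hypothetical sketch: a model with one tuning knob, adjusted until the
# output matches a predetermined conclusion. All names and numbers invented.
def market_forecast(base_demand, growth_coeff, years=5):
    # Compound growth: the kind of assumption buried deep in a larger model.
    return base_demand * (1 + growth_coeff) ** years

target = 150.0   # the answer the presentation already assumed
base = 100.0

# Scan plausible-looking coefficients until one delivers the target.
for coeff in (0.05, 0.06, 0.07, 0.08, 0.09):
    result = market_forecast(base, coeff)
    if result >= target:
        print(f"growth_coeff={coeff} -> forecast {result:.1f}")
        break
```

An outsider staring at the finished model sees only a defensible-looking growth coefficient; the scan that selected it leaves no trace.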

Robert Caprara has similar thoughts in the WSJ (probably behind a paywall).  Hat tip to a reader.

The computer model was huge—it analyzed every river, sewer treatment plant and drinking-water intake (the places in rivers where municipalities draw their water) in the country. I'll spare you the details, but the model showed huge gains from the program as water quality improved dramatically. By the late 1980s, however, any gains from upgrading sewer treatments would be offset by the additional pollution load coming from people who moved from on-site septic tanks to public sewers, which dump the waste into rivers. Basically the model said we had hit the point of diminishing returns.

When I presented the results to the EPA official in charge, he said that I should go back and "sharpen my pencil." I did. I reviewed assumptions, tweaked coefficients and recalibrated data. But when I reran everything the numbers didn't change much. At our next meeting he told me to run the numbers again.

After three iterations I finally blurted out, "What number are you looking for?" He didn't miss a beat: He told me that he needed to show $2 billion of benefits to get the program renewed. I finally turned enough knobs to get the answer he wanted, and everyone was happy...

I realized that my work for the EPA wasn't that of a scientist, at least in the popular imagination of what a scientist does. It was more like that of a lawyer. My job, as a modeler, was to build the best case for my client's position. The opposition will build its best case for the counter argument and ultimately the truth should prevail.

If opponents don't like what I did with the coefficients, then they should challenge them. And during my decade as an environmental consultant, I was often hired to do just that to someone else's model. But there is no denying that anyone who makes a living building computer models likely does so for the cause of advocacy, not the search for truth.

49 Comments

  1. Morlock Publishing:

    > "What number are you looking for?"

    My dad told me a joke once:

    Ask a mathematician "what's 2+2?" and he replies "four".
    Ask an engineer "what's 2+2?" and he replies "four point oh, plus or minus zero point one".
    Ask a CPA "what's 2+2?" and he shuts the door, closes the blinds, and asks 'what do you WANT it to be?'.

  2. a_random_guy:

    There was some mathematician or scientist who said something like: give me four free variables, and I can fit a curve to anything. Given non-linear equations, it's not even particularly hard. The climate models often have *dozens* of variables, all carefully tuned...to something.

    As you say, the models in the end are meaningless unless they are providing predictions that can be reasonably verified. Models lacking such predictions, or models that produce predictions which are then falsified, are simply worthless.
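The commenters' point about free variables is easy to demonstrate. A pure-illustration sketch: a polynomial with n coefficients passes exactly through any n data points, whatever process generated them (here via Lagrange interpolation).

```python
# Sketch of "give me enough free variables and I can fit anything": a
# polynomial with n coefficients passes exactly through any n points,
# regardless of what produced them. Data below is arbitrary.
def lagrange_fit(xs, ys):
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Five arbitrary "observations" -- five free parameters fit them perfectly.
xs = [0, 1, 2, 3, 4]
ys = [3.0, -1.0, 4.0, 1.0, 5.0]
curve = lagrange_fit(xs, ys)
print([round(curve(x), 6) for x in xs])  # reproduces ys exactly
```

A perfect fit to past data, in other words, says nothing by itself; only out-of-sample predictions distinguish a real model from curve-fitting.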

  3. Onlooker:

    Hit the nail on the head again with this post. Pretty much the entirety (I'd say all, but hate to make unequivocal statements) of the catastrophic part of the CAGW hypothesis is based on modeling. And we're supposed to just trust that these models will be right someday in their predictions/projections, whatever. Uh huh.

    Show me some real science. But just models - I don't think so.

  4. terrence:

    You beat me to it, Morlock. Unfortunately it is all too true, not just about accountants, but OBVIOUSLY about climate "scientists"

  5. Matthew Slyfield:

    I don't know who it was either, but on http://wattsupwiththat.com/ I have repeatedly seen "Give me five free variables and I can fit an elephant.".

  6. mahtso:

    "We wouldn't say that those lists of predictions for Venus were "evidence" that Newton was correct."

    Correct, but the blogger is wrong that the models are not evidence of climate change. As the quote shows, the blogger has conflated two things: (1) whether the model is proof that the model is correct (it is not); and (2) whether the model is proof that climate change is real (it is, but if one accepts that these models have many problems, the models are only very weak evidence of that change).

  7. Katabasis:

    These same people who would extol the virtues of computer models for climate science are also the first to damn them in investment banking.

  8. marque2:

    Per the Wall St Journal article - models are used to provide evidence for funding when little or no real evidence exists. Tweak some figures, have a computer model, and I guess folks are supposed to be so impressed that it is from a computer that they will believe whatever it says and then provide billions in funding.

  9. Ann_In_Illinois:

    When I was in grad school (econ PhD), I was a research assistant at a regional Federal Reserve Bank, and one thing I did regularly was to help run the forecasting models for our Fed's estimates to present to the Board of Governors. Each regional Fed had two supercomputers (the second for back-up; this was before PCs), and we used ours to crunch massive amounts of data to come up with the predictions. What I recall best was taking the estimates to the guy in charge, so he could decide whether he liked them or wanted me to tweak the model (and sometimes even the inputs, which is very different), to get an answer he felt comfortable with.

    All of the Feds came up with forecasts using some combination of models and intuition/judgment, but in our particular Fed, our model at the time was supposedly entirely data-driven (regress everything on everything, rather than having humans use judgment to decide what the relationships should be). We claimed that we were simply letting the data speak, but we still tweaked it when necessary to make sure it said the right thing.

    Anyone who has worked with these big, complicated models knows that you can come up with whatever answer you want. For the regional Fed forecasts, the various models were judged based on out-of-sample forecast error (i.e. how close were the predictions?). Climate "science" is pioneering a new approach to modeling, in this sense.
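The out-of-sample test this commenter describes is easy to sketch. A toy version (all data invented): fit a trend on early observations only, then score the model on later points it never saw and could not have been tuned to.

```python
# Toy out-of-sample test (all data invented): fit on early observations,
# then judge the model only on points it could not have been tuned to.
series = [1.0, 1.2, 1.5, 1.7, 2.0, 2.1, 2.4, 2.6]
train, test = series[:6], series[6:]

# Fit a simple linear trend to the training portion (least squares by hand).
n = len(train)
xbar = (n - 1) / 2
ybar = sum(train) / n
slope = sum((i - xbar) * (y - ybar) for i, y in enumerate(train)) / \
        sum((i - xbar) ** 2 for i in range(n))
intercept = ybar - slope * xbar

# Out-of-sample error: how close were predictions on the held-out points?
preds = [intercept + slope * (len(train) + k) for k in range(len(test))]
oos_error = sum(abs(p - o) for p, o in zip(preds, test)) / len(test)
print(f"out-of-sample mean abs error: {oos_error:.3f}")
```

The in-sample fit can always be made to look good by adding knobs; the held-out error is the part the modeler cannot quietly tune.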

  10. FelineCannonball:

    There are no governing laws for economic modeling like there are laws of physics for modeling physical systems. Instead you're left with pure rear-view empirical data and presumed relationships between variables based on the way things seem to have worked in the past. Comparing the two really shows a failure of insight. Yes, they both involve math and computers . . .

    There's plenty to discuss with physics based models in terms of assumptions, parameterization, coding errors, testing at a variety of physical and temporal scales, competing models, sensitivity, precise predictions vs. statistical predictions, and so on. We can start there if you like, but the idea that it is like an economic model, or one bad model or modeler with an agenda means all the competing GCM models are bogus is itself bogus. As far as testing goes, you really have to understand the nature of the GCM predictions. Once again, the mean surface temperature increase predicted in simulations is not like a weather prediction. It is a prediction of long term statistics in the driven system. If you want short term predictions be sure to look at the entire envelope of simulations. It gives you a fairly large error bar. There are a variety of other internal tests, boundary tests, past climate tests, improved data collection, improved understanding of atmospheric physics, etc. which are used to actually refine GCM models.

  11. Katabasis:

    "(2) whether the model is proof that climate change is real (it is, but
    if one accepts that these models have many problems, the models are
    only very weak evidence of that change)."

    - I have an excellent computer game called Skyrim. It simulates dragons. Therefore, even though the simulation has many problems, it nevertheless provides very weak evidence of the existence and behaviour of dragons.

  12. joshv:

    The computer models, built on the assumption that CO2 causes warming, are thus far predicting temperature trends much higher than the temperature trend that's being observed. How is this evidence that CO2 causes warming? Now CO2 could be causing the warming that is claimed, but then we must admit that there is some unmodelled cooling dynamic which is offsetting the warming, or that the models do not accurately reflect the internal variability of the climate system.

    Until somebody produces a model that successfully predicts climate 50 years out, I don't see model dynamics as proof of much of anything. They are hypotheses that will be supported or falsified by observation. I think that maybe in 100 or 200 years we might have a good handle on projecting climate, but it will take a lot of trial and error, and a lot of waiting to measure prediction against reality.

  13. rxc:

    This sort of thing, involving "tweaking the dials", goes on all the time in engineering analyses. I used to see it done with nuclear reactor safety models, and had an epiphany one day, when one of my staff members, who was very respected, very straight, very competent, asked me "what answer do you want me to get?", in response to a request for an analysis.

    It happens in models of relatively small systems, and the opportunities for mischief and self-induced delusions in very large systems are proportionately greater. Especially when you don't understand, really understand, the underlying phenomena you are trying to model. And the better and more experienced the analyst, the better they can justify results that an outsider just looks at and dismisses. I have seen this in government, in industry, and in academic analyses.

    Even when there was nothing at stake other than professional pride, the tricks became more creative than you can imagine.

  14. Craig Loehle:

    When defenders of the climate models say it is "just physics" they may be correct for some part of the models, but not for others. For example, the behavior of clouds in the models is assumed: there is no physics that tells you what all the different types of clouds do under various conditions such as warming. Even the scattering of light from clouds in the models is empirical. As another example, there are satellites which measure solar radiation. There are various efforts to piece together the different generations of satellite (their orbits degrade over time and they must be replaced). But even once pieced together there is not agreement on how to adjust to the absolute baseline of TSI (total solar irradiance). The models thus adjust the baseline so they don't diverge from a stable temperature. Not "just physics".

  15. mahtso:

    You have changed the terminology: the blogger wrote "evidence" whereas you write about "proof." Whether the models prove anything is, in part, based on their quality/strength as evidence. (And for what it is worth, the blogger accepts that co2 causes warming.)

  16. mahtso:

    I don't know anything about Skyrim, so if you say it is evidence that dragons exist, I'll not dispute that. I suspect there are models of biologic processes (particularly those related to the primordial soup and what was required for life to start) that could be extended to show that dragons do or could exist.

    Your comment serves to point out that there are various types of models. The blogger references two types: one making predictions based on theories of causation (climate models, as a group) and one that makes predictions without addressing the causation (Newton's law of universal gravitation). Whether Skyrim is either of these or some other type of model, I have no idea.

  17. Bill:

    "Recent observed global warming is significantly less than that simulated by climate models. This difference might be explained by some combination of errors in external forcing, model response and internal [natural] climate variability. ... Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval). This rate of warming is significantly slower than that simulated by the climate models participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). To illustrate this, we considered trends in global mean surface temperature computed from 117 simulations of the climate by 37 CMIP5 models (see Supplementary Information). These models generally simulate natural variability – including that associated with the El Niño–Southern Oscillation and explosive volcanic eruptions – as well as estimate the combined response of climate to changes in greenhouse gas concentrations, aerosol abundance (of sulphate, black carbon and organic carbon, for example), ozone concentrations (tropospheric and stratospheric), land use (for example, deforestation) and solar variability. By averaging simulated temperatures only at locations where corresponding observations exist, we find an average simulated rise in global mean surface temperature of 0.30 ± 0.02 °C per decade (using 95% confidence intervals on the model average). The observed rate of warming given above is less than half of this simulated rate, and only a few simulations provide warming trends within the range of observational uncertainty. . ."
    ( http://www.nature.com/nclimate/journal/v3/n9/full/nclimate1972.html?WT.ec_id=NCLIMATE-201309 )

    According to these findings, the average decadal prediction error of the CMIP5 climate models over the 1993-2012 interval is 0.16C (i.e., 0.30C - 0.14C.) This is the same average prediction error that would have occurred if someone had suggested the average decadal surface temperature change over that interval would be NEGATIVE 0.02C. A suggestion that surface temperature would be UNCHANGED would have outperformed the CMIP5 model simulations with an average decadal prediction error of 0.14C.

  18. FelineCannonball:

    GCMs are simplified simulations. Based on the same basic ones that everyone in the world relies upon for weather forecasting. Except they include a few more interactions with respect to energy budget and the oceans, and they aren't interested in the details of weekly to decadal simulations, but in the statistical properties of model results changing with driving forces over longer periods. There are "dials" parameterizing aspects of the climate system that can't be modeled on first principles. They are the subjects of publications and are adjusted based on a variety of real world, internal, and boundary tests. If you look back at the history of the effort you'll also see widespread coding errors, math errors, physics errors that have been pointed out and removed from models over the decades. You'll also see outstanding problems in models that people are still trying to address. Weird results for details that don't match the real world.

    For all that, and the necessary uncertainty resulting, I am not aware of any model tweaked in any reasonable way that denies the driving effect of CO2 or the existence of significant constrained positive feedbacks in water vapor and ice albedo. Back of the envelope impacts of CO2 were calculated 100 years ago. Anthropogenic impacts were theorized in the 1930s and magnitudes estimated. The whole system's energy budget and self-evident feedbacks and limiters have been obvious for a very long time, and more sophisticated models have given us details without changing the net result. Even Coyote acknowledges CO2 induced warming at the lower range of model predictions and is presumably OK with the idea that warmer air holds more moisture and melts more snow (the basis of simple constrained positive feedbacks). Models are still sort of lousy--giving CO2 sensitivity of 1.5-6 degrees per doubling, and there are plenty of questions and uncertainties about the trajectory. That's a fair point to throw at the modelers. But the basic physics and range of sensitivity has been obvious for a century. There was work around 1900 that came up with a sensitivity of 2-4 degrees C based on forcing the energy budget to make sense with respect to natural glacial-interglacial periods. Much lower and you couldn't explain the phenomenon. Much higher and you boiled some Australopithecines in Sudan.
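For readers unfamiliar with the phrasing, "degrees per doubling" refers to the standard logarithmic forcing approximation, warming ≈ sensitivity × log2(C/C0). A sketch using the sensitivity range quoted in the comment (the concentration figures are round illustrative numbers, not measurements):

```python
import math

# Sketch of "degrees per doubling": under the standard logarithmic forcing
# approximation, equilibrium warming = sensitivity * log2(C / C0).
# Sensitivity range is the one quoted in the comment; ppm values are
# round illustrative numbers.
def warming(sensitivity, c_now, c_preindustrial=280.0):
    return sensitivity * math.log2(c_now / c_preindustrial)

for s in (1.5, 3.0, 6.0):
    print(f"sensitivity {s} C/doubling -> {warming(s, 420.0):.2f} C at 420 ppm")
```

The logarithm is why the sensitivity number, not the raw ppm figure, is where the real argument lies: each doubling adds the same increment of warming.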

  19. FelineCannonball:

    The only models that are pure physics are a few supercomputer models doing simulations of small molecules. Everything more complicated involves some empirical relationships and parameterization that gets dialed in.

    Water vapor and cloud modeling is probably one of the biggest unknowns in current GCM models. It's demonstrative of model tests which can be done in real time though. Water has a very short half life in the atmosphere and testing the response of water vapor and clouds to temperature and circulation can be done by looking at this years weather with this years weather balloons. Models with more realistic water vapor profiles and more realistic cloud formation will likely have more accurate climate predictions with respect to water vapor and cloud feedbacks.

  20. rxc:

    Yes, some of the basic physics is well understood. F=ma works pretty well in systems that don't move too fast. Basic absorption spectra for CO2 and H2O and other molecules can be measured pretty well. They have even measured the solar irradiance pretty well.

    But then there are clouds. Which cannot be simulated from first principles. Which must have ad hoc models with LOTS of parameters applied to try to understand their effect on the overall system.

    And there are lots of biological feedback mechanisms, many of which are completely unknown, and the interactions of the atmosphere with the oceans (all that heat hiding conveniently in the deep oceans, where no one can find it).

    When you try to put it all together, either in an integrated code, or in an assemblage of model results that need to be individually compared, you just get garbage because you don't really understand how the system works well enough to make any predictions.

    I used to run international comparison exercises to see whether computer models could predict real experiments of nuclear power plants, and let me tell you that it was really hard to do, in a very well defined system, with good data about heat sources and sinks and material properties. The safety engineers took the results and applied safety factors to deal with the uncertainties. But with a global model you are trying to calculate a best estimate result that will be used to drive policy. It is absolute lunacy to think that your models are good enough to predict the average temperature of the globe (whatever that is) with an accuracy of less than 1 degree C, 100 years into the future. We could not predict the temperature in a building to an accuracy of better than 10 degrees, over a period of a few days.

    The primary phenomena related to global warming are not well understood at all. Unless you get them right, the rest of it is just playing games with expensive computers. And the peer-review process is completely broken - all the experts who don't understand the primary phenomena are sitting around telling themselves about how well they can calculate things (NOT). They are like Hollywood awards - congratulating each other and generating buzz and feel-good blather.

  21. rxc:

    I accept that the separate effect of injecting some gases into the atmosphere at higher concentrations might be to increase the temperature, and the effect of CO2 alone might be a rise of around 1C. I accept that the temperature has been generally rising for the past 200 years, I accept that humans have been pumping CO2 into the atmosphere at increasing rates for the past 200 years, and the concentration of CO2 in the atmosphere has been rising as well, probably due to humans. That is NOT evidence that CO2 is actually raising the temperature, and it is proof of absolutely nothing, because there are a LOT of other things going on at the same time that are not well understood, or even known.

    When the climate experts can explain why the Roman warm period and the Minoan warm period and the Medieval Warm Period occurred, without CO2 to help, and why the atmosphere cooled down after those episodes, I would start to listen.

    They have to explain ALL the data, not just the data they want to explain. F=ma explains most of the data about the motion of bodies, but not quite all of it, and it took Einstein to figure out a way to reconcile ALL the data. To this day, 100 years later, people are STILL looking for data that dis-proves Einstein. In spite of nuclear reactors and bombs, measurements of Mercury, and LOTS of other actual experiments and measurements. To say that an additional couple hundred parts per million of CO2 is going to destroy the planet requires extraordinary evidence, and I haven't seen it.

  22. mahtso:

    " F=ma explains most of the data about the motion of bodies, but not
    quite all of it, and it took Einstein to figure out a way to reconcile
    ALL the data"

    Did he? Maybe, but even if the answer is "yes," since that time there is new data that is not explained by Einstein's theories, both at the quantum level and the macro/galaxy scale level.

  23. mahtso:

    Newton's theory or model, if you prefer, is actually quite instructive in this debate. It worked so well and matched observational data so well that when it failed to accurately show where Mercury would be, scientists added a fudge factor (like the climate modellers do now), namely a hypothetical planet, Vulcan, that was said to be influencing Mercury.

    Subsequently, Einstein developed a new model that did account for Mercury's position and it became the accepted standard. We now know that there may be errors in Einstein's model both at the quantum level and at the galaxy scale. At the galaxy scale, scientists have again added a fudge factor: dark matter. To my understanding however, they have done so only after examining whether it was Einstein's theory and the modern understanding of gravity that was wrong. I don't recall the specifics, but there are several good reasons to conclude that the modern theory of gravity is correct and that the fudge factor/dark matter does exist.

    Also, for what it's worth, Einstein would not accept that the universe was NOT static, so he added a fudge factor/cosmological constant to "make" it static. His greatest mistake, he later said. Yet that fudge factor may well be what we now call dark energy.

  24. FelineCannonball:

    As I said above, there is quite a bit of uncertainty, but it's all about the amplitude of certain effects, not the direction. I wouldn't put our ability to model clouds at the "throw your hands in the air" level. It is an active subject and it's quite complicated to understand nucleation and all the factors involved in, say, California coastal fog distribution and timing. But no matter how you work it, you can't get cloud albedo to overcome water vapor absorption.

    In a lot of ways global T is easier to work with than many engineering models of particular systems or say buildings. You are looking at the radiation budget of an entire closed system floating in space. That simplifies things. It's like trying to look at the magnetic field at a point versus trying to look at it across a closed surface.

    (BTW, other than their impact on CO2 absorption, biological feedback is pretty much ignored in GCMs. It takes too long for ecosystem shifts and adaptation. As are feedbacks from permafrost, increased soil decay, sediment remineralization, etc., most of which are strong positive feedbacks to warming from our current state).

    Models are basically ways to test the reasonableness of mechanisms and uncertainty in the magnitude of their effects. I'd say the paleo record is more direct evidence of potential climate variability and its sensitivity to CO2. The idea that climate is immutable, or 2-3 degree shifts are impossible, or CO2 is not a primary driver of climate are bunk.

  25. rxc:

    If CO2 is the primary driver of climate, then why did the Minoan, Roman, and Medieval warm periods occur, and why did the cooling afterwards occur? What started the most recent warming, which began before any significant CO2 increase occurred?

  26. FelineCannonball:

    None of these (or much of anything in the Holocene) has been significant or global. The last 10,000 years have been pretty much flat line for climate and CO2. Lots of internal oscillations and a few perturbations but not much going on. It's the major reason we were able to develop stable agricultural societies.

    If you want to look at the CO2 impacts you'll need to look back deeper in time, where it's been a major amplifying factor in glacial-interglacial cycles, and been behind various greenhouse worlds in the early Tertiary, Cretaceous, Triassic, and Precambrian.

  27. joshv:

    Ok. Until somebody produces a model that successfully predicts climate 50 years out, I don't see model dynamics as evidence of much of anything.

  28. rxc:

    This seems to be the standard AGW supporter answer to any inconvenient questions that are raised about inconvenient facts - say that it doesn't matter. "We just have to get rid of the Medieval Warm Period" is the way that it was framed in the stolen Climategate email, I believe.

    Your whole argument is one of "throwing your hands in the air", and claiming that it doesn't matter. Where is the data, the reproducible data or reproducible observations, that show that CO2 is the primary driver of climate? I see lots of inferences and possibilities, etc., which are all subject to lots of other confounding factors that the climate experts don't like to talk about, because "the science is settled". Is it CO2 that affects bristlecones, or the rain, or the animals that nested nearby, or maybe it was the choice of the investigator to settle on the one tree that fits his theory of the climate?

    When they use chartmanship ("Mike's Nature trick") to splice instrument data onto bristlecone data ("hide the decline"), because the bristlecones inconveniently go the wrong way, I suspect all sorts of problems with the underlying models.

  29. FelineCannonball:

    MWP -- I don't think anyone argues that it was significant at the scale of glacial-interglacial events. Greenland didn't melt. If you look at ice cores there is evidence of a few more warm summers but no missing stratigraphy. More to the point, it was apparently regional, as upon global integration of proxies it pretty much disappears.

    Natural CO2 sources: on shorter time scales it's volatile soil and sediment reservoirs of carbon. Also some biomass. If you warm soils through some other mechanism (orbital cycles impacting northern hemisphere insolation) the soils retain less carbon and can amplify warming by giving off CO2. It's a constrained positive feedback that (along with ice albedo) can jump us out of glacial periods. The last transition was associated with an increase in CO2 from 180 to 280 ppm. So a small change in the north-south insolation balance triggered a release of carbon from metastable reservoirs in cold soils and permafrost and peats. CO2 and ice albedo feedbacks and secondary water vapor feedbacks acted together until the metastable carbon depleted and we reached a new stable climate state. The insolation shift from south to north, itself, would not have impacted global insolation or temperatures much at all on its own.

    Over longer periods CO2 is also driven by weathering, burial, and extreme volcanic events. pCO2, recorded in ice cores and measured by proxy further back in time, shows variation at a variety of scales, but it is always closely associated with climate.

  30. Harry:

    Billions? That's chump change when you are dealing with guys who have to use scientific notation to write a paper check. Think globally!

    It looks like the usual suspects have shown up to demonstrate BS skills.

  31. marque2:

    I was thinking on a per project basis.

  32. Harry:

    Hahaha. To do the math one has to do the portrait thing on the iPhone calculator. Investment banking mode.

  33. Gil G:

    Yes, the consistent graphs showing unprecedented warming since the Industrial Revolution count for jack.

  34. Gil G:

    Newton was wrong? In a strict sense yes, but NASA uses Newtonian mechanics when it sends probes throughout the Solar System. So it's still good enough. Models allow a view of what will happen if nothing changes (and nothing will if Conservatives and Libertarians have their way): what future temperatures will be and which places will face serious agricultural problems. Nonetheless the evidence is in the past, and there's unprecedented warming.

    http://skepticalscience.com/climate-models.htm

  35. rxc:

    If I recall correctly, the rises in CO2 concentrations that are associated with glacial melting occur AFTER the temperatures rise. I will go find the references.

    So, why did the MWP and the Roman warming and the Minoan warming occur, and why did the planet subsequently cool down? And why did the temperature start to rise after the Little Ice Age, when there was no significant increase in CO2? You do not want to deal with these questions, but instead insist that they are not significant - only the rise in temperature in the last half of the 20th century is significant, according to the AGW proponents. And the "pause", of course, is not significant to you.

    What prediction does AGW make that can be falsified to show that the theory is incorrect? It used to be that the "pause" was sufficient, until it actually occurred. Now, I believe that the argument is that it would have to last much longer - probably 30 years or so. Is this the only way to falsify the AGW argument, or do you continue to insist that the science is settled, and there is no argument at all?

  36. marque2:

    Well, are you talking about the charts from 1999, or the current ones after they jacked up all the current temps and lowered the temps in the 1930s? All the heating from last century to now can be accounted for by the adjustments made since 2000 - otherwise it is basically flat.

    That is the real problem: the "scientists" are doctoring the base data to prove their point. When you make a conclusion based on false or manipulated data, the conclusion is also false.

  37. FelineCannonball:

    A few too many questions for a Sunday.

    1) Recent glacial-interglacial cycles are driven by northern hemisphere insolation changes due to long-period orbital cycles called Milankovitch cycles. So, yes, warming precedes CO2 release in the northern hemisphere; insolation and ice-albedo feedback appear to trigger that release. The insolation change itself does not change the global radiation budget and is calculated to have minimal effect on global temperature. Feedbacks, including CO2, are the main mechanism for warming.

    2) Events in the last 2000 years are of a few different types. There are internal oscillations which cause regional warming and cooling (and precipitation changes) at disparate locations on the globe, including short decadal oscillations, the El Niño-Southern Oscillation, and longer-period oscillations. And there are event-initiated climate deviations. Large volcanic eruptions can cool the planet enough with stratospheric aerosols to initiate decades of ice-snow albedo feedback. Changes in the frequency of eruptions can have even longer and larger effects.

    3) The "pause". Climate modeling predicts slow shifts in the statistical state of the climate system. It does not predict rainstorms, hurricanes, El Niño, or internal oscillations with longer periods. What they do is run a large number of simulations and calculate long-term trends in the average. For any given decade, 10 percent of the simulations might actually show a decline in global temperature, in spite of the fact that the average simulation is showing an increase. In models this is driven by internal oscillations in heat exchange with the oceans or random fluctuations initiating albedo feedbacks. If it's the former, we can expect the next El Niño to exceed the global temperature of the so-called super El Niño of 1997-1998.
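    The ensemble logic here is easy to sketch for yourself: generate many synthetic "decades" that all share the same underlying warming trend, and count how many nonetheless slope downward. The trend and noise numbers below are illustrative placeholders, not taken from any actual model ensemble:

```python
import random

def decade_trend(trend=0.02, noise_sd=0.14, n_years=10, seed=None):
    """Least-squares slope of one synthetic decade: a fixed warming
    trend (deg C/yr) plus year-to-year noise. Illustrative numbers only."""
    rng = random.Random(seed)
    years = list(range(n_years))
    temps = [trend * t + rng.gauss(0.0, noise_sd) for t in years]
    mx = sum(years) / n_years
    my = sum(temps) / n_years
    cov = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    var = sum((x - mx) ** 2 for x in years)
    return cov / var  # ordinary least-squares slope

# Even though every run warms at +0.2 C/decade on average, a noticeable
# fraction of individual decades still show cooling.
slopes = [decade_trend(seed=i) for i in range(10_000)]
frac_cooling = sum(s < 0 for s in slopes) / len(slopes)
print(f"decades showing cooling despite warming trend: {frac_cooling:.1%}")
```

    With these particular (made-up) numbers roughly a tenth of decades come out flat or cooling, which is the shape of the argument being made, whatever one thinks of its merits.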

    4) Falsification: there is no single "AGW theory". You'd have to come up with better theories explaining past climate change and drivers, revise the fundamentals of weather forecasting, revise the absorption spectrum of CO2 and water, come up with negative feedbacks that exceed the magnitude of water vapor and ice albedo, find better proxy records or better global temperature integration algorithms, etc. To scientists none of this is sacrosanct. In fact they're thinking about or working on all of it. Mean climate predictions aren't sacrosanct either. They're continually trying to revise models to add detail and better reflect our understanding of the ocean and atmosphere and its response to forcing. To eventually judge the accuracy of different models you'll have to look deeper than the output curve. Again, that is the trajectory of the mean simulation. And a model published this year is dependent on contemporary economic projections and assumptions about aerosol production/distribution, etc.

    If you just want to know how to test a model result, I guess I might look at the 95 percent confidence interval of the simulations. It doesn't really falsify it, but if reality falls outside the interval it makes it less likely, and it might be time to lean against that particular model. The model results do get more interesting in 2030, 2050, etc. The 95% CI becomes significantly separate from modern conditions as you go out further.

  38. Andrew_M_Garland:

    It is ridiculous to trust in the integrity of a scientific "community" where prominent members of academic and government institutions announce they are willing to lie for the supposed good of the peasants.

    The models are complex, Dr. Mann won't release his data, and manipulations of peer-reviewed, published graphs to "hide the decline" are explained as unfortunate limitations of space. The warmists have no credibility, meaning that there is no reason to believe anything they say which is not supported in transparent detail. They are revolutionaries, not scientists.

    One might argue that these liars are only a few. If so, why aren't the rest denouncing them, proclaiming proudly that the data is enough, proclaiming that exaggeration and lying are not needed? Their silence demeans the global warming community and their science.

    Lying for climate change
    3/3/12 - Ed Driscoll   [edited]
    === ===
    It is a fascinating development when people admit that they’re willing to lie for their cause.

    •  Prof. Chris Folland, Hadley Centre for Climate Prediction and Research
    “The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”

    •  Dr David Frame, Climate modeler, Oxford University
    “The models are convenient fictions that provide something very useful.”

    •  Paul Watson, Co-founder of Greenpeace
    “It doesn’t matter what is true, it only matters what people believe is true.”

    •  Sir John Houghton, First chairman of the IPCC
    “Unless we announce disasters no one will listen.”

    •  Christine Stewart, former Canadian Minister of the Environment
    “No matter if the science of global warming is all phony … climate change provides the greatest opportunity to bring about justice and equality in the world.”
    === ===

    The global warming scare is political, not scientific. It represents an old political ploy. When you scare the sheep, they are more willing to be sheared.

    There can be no discussion with liars. It is worse when the liars claim the status of learned, dispassionate science, and then tell you that your peasant mind should just believe, because you can never match the facts to their "models". You are not allowed to question their models because they were never designed to be questioned.

    It does not surprise me that an average person cannot directly evaluate the claims and models. It amazes me that the average person can't tell when they are being lied to, even when the liars say it right out and have the sheep clippers (tax schemes) in their hands.

  39. Gil G:

    Yeah right. Sure. Whatever.

  40. Brennan Schweitzer:

    Warren, all the posts seem to be down in the weeds about the subject at hand. I, for one, want to thank you for having the moral courage to recognize a past misjudgment and come clean about it. It's an inspiration to me. We've all swallowed bile harshly while following the party line at some point in our past. Your contrition is genuine; don't beat yourself up about it anymore. Please continue to go forth and do great things!

  41. marque2:

    Yeah, sure, whatever - remember Climategate.

    You are definitely in denial. So sad.

  42. Maximum Libert:

    Warren:

    I agree with Brennan. This is particularly a problem for young professionals, who haven't yet had enough experience to know that -- whatever the pressure is for the result of the day -- none of the people ultimately responsible for that pressure from above would want falsified data or results (at least in responsible organizations). That is, there are lines that will be respected if you push back, but no one will know when you go along. The other lesson for young professionals is that, if you find yourself in an organization where that isn't true, plan to move. In any case, kudos for your confession.
    Max L.

  43. marque2:

    https://coyoteblog.com/wp-content/uploads/2014/06/1998changesannotated.gif

    Here are example charts from NOAA's own website, from 1999 and 2012, showing the difference due to adjustments. I am sure there could be credible reasons to adjust decades after the fact, but it is unlikely.

    They were posted on this blog several weeks ago and have appeared in numerous other places as well.

    Take off the blinders buddy.

  44. rxc:

    I googled "CO2 lags temperature", and the first hit was a page at skepticalscience.com that addresses the “popular myth amongst climate 'skeptics' [...] that historically atmospheric CO2 levels have risen after temperature increases began, and therefore it's actually temperature increases that cause CO2 increases, and not vice-versa as basic climate science and physics would have us believe.” It includes the graph that I had remembered, and it admits that the CO2 concentration does indeed lag the temperature, but it explains that the initial atmospheric temperature rise (due to the Milankovitch cycles and a change in solar radiation) causes the ocean temperature to rise and the oceans outgas their CO2, which then takes over; with the subsequent amplification effects, more CO2 is outgassed, causing the temperature to rise even more. The argument is that without the CO2, the initial temperature rise would just fizzle out, or would be much smaller than it actually was.
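    The lag-then-amplify story they are telling can be caricatured in a few lines: an initial (say, orbital) forcing warms things, the warming outgasses CO2, and the extra CO2 forcing warms things further until the loop converges. Every number here (the sensitivity, the outgassing rate) is a made-up placeholder purely to show the structure of their argument, not a fit to any data:

```python
import math

# Toy iteration of the lag-then-amplify argument. All constants are
# illustrative placeholders, not tuned to any real reconstruction.
LAMBDA = 0.8    # K per (W/m^2), assumed climate sensitivity parameter
OUTGAS = 12.0   # ppm of ocean CO2 released per K of warming (made up)
C0 = 180.0      # starting glacial CO2, ppm

def equilibrate(initial_forcing_wm2, steps=50):
    """Initial (e.g. orbital) forcing warms; warming outgasses CO2;
    the CO2 forcing warms further, until the feedback loop converges."""
    temp, co2 = 0.0, C0
    for _ in range(steps):
        co2 = C0 + OUTGAS * temp                 # ocean outgassing lags warming
        extra = 5.35 * math.log(co2 / C0)        # added CO2 forcing
        temp = LAMBDA * (initial_forcing_wm2 + extra)
    return temp, co2

t_no_feedback = LAMBDA * 1.0                     # 1 W/m^2 trigger, no CO2 loop
t_with, c_with = equilibrate(1.0)
print(f"warming without CO2 feedback: {t_no_feedback:.2f} K")
print(f"warming with CO2 feedback:   {t_with:.2f} K at {c_with:.0f} ppm")
```

    In this toy version CO2 still lags the trigger but ends up supplying a large share of the final warming, which is exactly the claim at issue.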

    This page references a paper in Science in 2010, by Lacis et al., that purports to prove that CO2 is “the principal control knob governing Earth’s temperature.” That paper can be found here:

    http://wattsupwiththat.files.wordpress.com/2010/10/lacis101015.pdf

    When you actually read it, you find that it really does not prove that CO2 is the principal control knob by quoting any actual data. Instead, they made this determination “from radiative flux experiments that we performed using Goddard Institute for Space Studies (GISS) ModelE (6).”

    Note how the “experiments” were done – using a computer model.

    Further reading the paper we find that “The relevant physical processes that turn the CO2 control knob on thousand-year time scales between glacial and interglacial extremes are not fully understood, but appear to involve both the biosphere and the ocean chemistry, including a significant role for Milankovitch variations of the Earth-orbital parameters.”

    And some of the important input data used in the “experiment” come from this paper: J. Hansen et al., AGU Geophys. Monogr. 29, 130 (1984). This paper was important because it revised the climate sensitivity parameter (one of those “dials” in the computer models) from a low value to one that produces much scarier “projections of the future.” This paper is a real laugh to a real engineer. It combines electrical feedback equations with some bald, unproven assertions about the effect of changing a number of parameters, and then it just throws everything together (in a set of calculations!) to come up with a new sensitivity number.
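    To be concrete, the "electrical feedback equations" in question have a very simple form: a no-feedback temperature response amplified by a total gain f, with delta_T = delta_T0 / (1 - f). Here is that bookkeeping sketched out, with illustrative gain numbers rather than Hansen's actual published values:

```python
# Feedback bookkeeping of the kind used in Hansen-style sensitivity
# arguments: a no-feedback response delta_T0 amplified by total gain f,
#   delta_T = delta_T0 / (1 - f),  0 <= f < 1.
# Individual gains are assumed additive in this formulation.
def amplified_response(delta_t0, gains):
    f = sum(gains)
    if f >= 1.0:
        raise ValueError("total gain >= 1 implies a runaway response")
    return delta_t0 / (1.0 - f)

no_feedback = 1.2  # K per CO2 doubling, a commonly cited no-feedback figure
# Illustrative gains for water vapor and ice/snow albedo (placeholders):
print(amplified_response(no_feedback, [0.4, 0.2]))  # roughly 3 K
```

    The complaint above is precisely that the gains fed into this formula come from model runs and assertion rather than from measured data, so the arithmetic, however tidy, is only as good as its inputs.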

    There is no real discussion (in terms of presenting actual measured data from real experiments or real observations) of the actual phenomena, which ones are the most significant, how they interact, or how actual combinations of those phenomena respond. The paper states that “a number of physical mechanisms have been identified as causing significant climate feedback” but it only discusses two – water vapor and ice/snow effects.

    Throughout this paper, Hansen talks about the “experiments” that he and his people have done, when they only did computer simulations. They did use real observations to establish some of the parameters in the models, but most of the results in the paper are “inferences” from the different computer runs, and the variations that they made to the inputs for each of those runs.

    In the course of tracking this whole chain of documents, I found a fascinating book that was published by Nature in 2012 – “The Social Life of Climate Change Models: Anticipating Nature”, by Hastrup and Skrydstrup. I have only read a few pages, but might be tempted to buy the e-book to read the rest. It talks about how Hansen and his colleagues were driven by the “politically relevant scientific activity” of climate modeling to push the results into the limelight, and how they turned the results of their computer models into “facts” and “data” that became accepted by the scientific community and the public.

    If you publish enough charts and graphs, evidently some people start to think that you know what you are talking about.

    I will stop here, to take some time to go look at this book. Published by Nature, it looks like a typical post-modern blurb, but it may be quite instructive about how our latest bout of Lysenkoism occurred.

    Just a short response to your latest - if you can't give me something concretely physical that can be independently, reproducibly measured to disprove your thesis, you are not doing science. Even Einstein identified a bunch of things that people could measure to prove him wrong. And, to the comment about quantum mechanics, yes, it produces results that are not always in agreement with relativity, which is why all these high energy physicists get to build enormous atom smashers in Switzerland. Their science is not really settled yet. And when one of them thinks he has found something that contradicts either Einstein or quantum mechanics, he (invariably a he) also publishes everything he can think of that could cause his results to be wrong. He does not "hide the decline", or worry about the conflict between telling the truth or being effective.

    Go read that book - I think it might be informative.

  45. FelineCannonball:

    I think I've been fair in my replies in summarizing the work. CO2 is not the initiator (ever). People are the initiator today. Insolation changes, random climate fluctuations, or volcanism (with albedo/water vapor feedback) were in the past.

    But no parameter tracks paleoclimate better than CO2. Look at ice cores going back 700k years, or alkenone isotopes going back further -- CO2 vs. temp. Or isotope excursions during the extreme warming of the P/T event or the snowball earth events. CO2 is correlated with climate -- and as a long-lived atmospheric gas with a well-known absorption spectrum -- the mechanism is obvious. Water vapor can't sustain a climatic regime. One short cold period and the stuff rains out.

    There is a lot of uncertainty in climate projections and paleoclimate interpretations. In the past we have proxies for temperature and other climatic variables: chemical, isotopic, biological. We also have proxies for CO2 concentration. Making sense of it requires theory and mechanisms. Computer models are pretty much the only way to test the viability of mechanisms -- i.e., does orbitally driven insolation alone have enough impact on global temperature to drive ice age transitions by itself? Was CO2 dropping through 400 ppm the proximal cause of ice sheet formation on Greenland? At this point the models aren't great, but they help you figure out that there's no reasonable way to push the model such that insolation changes are the main driver by themselves, and there does appear to be a reasonable parameterization allowing CO2 to be the main long-term driver.

    Think about it. You can't run laboratory tests with alternative earths, or build an engineering model and test it to the breaking point. In paleoclimate you can test climate models by drilling new cores and testing predicted parameters and trends -- pole to equator temperature gradients, the precise timing of warming versus CO2 release, surface to deep temperature gradients, intensity of the anoxic zone, etc.

    Likewise, predictions for the mean global temperature rise through 2100 can't be tested in the lab. You sort of have to wait for it to play out. You can test the models in other ways but for this key prediction the results aren't expected to be obvious until the signal is larger than the noise.

    If you have better ideas about answering geological questions or predicting climate impacts of CO2, you should become a professor or something. You're right that it's not engineering or material science or pharmaceutical synthesis. Uncertainty is on another scale than stuff published in elite physics journals. But the uncertainty is also pretty obvious in publications. If you try to answer these questions you make do with what you have.

  46. Katabasis:

    "Yes, the consistent graphs showing unprecedented warming since the Industrial Revolution count for jack."

    - "Consistent," eh? What's your definition of that? The "trend" going back to 1880 or before goes back to a vastly different temperature-sensing network, both in the location of the thermometers and the technology used. It is not, never has been, and never will be a comparison of like with like. In any other field this would be thrown out as unacceptable for evidential use, but not - for some reason - in the spooky field of climate scientology.

  47. marque2:

    The Fembots did computer modeling.

  48. Brennan Schweitzer:

    Max (obviously a nom de guerre, as this is a cultural war), you've stated in very clear terms what I took the long way around the barn to state. It goes back to what my dad taught me about the business world:

    "Always remember the basis of any good decision:

    Is it technically sound?
    Is it morally right?
    Does it make good business sense?"

    If the answer to all three is "Yes," then you'll probably do well.

  49. treeher:

    While I agree wholeheartedly with your comments, I read the reports and don't see any specific references to "computer modeling" although you could argue that it is certainly implied. But your comment that "computer models ... are not evidence of anything" is a great sound bite and I will use it often when I'm confronted with mindless factmongers.