Shortcomings of Computer Modelling
From an entirely unexpected quarter comes a story of the shortcomings of computer modelling, in this case in the America's Cup. It is a great example of how models reflect the biases of their authors. In this case, the author assumed that the fastest upwind path was the shortest path (i.e., sailing as close to the wind as possible). It turns out that with the changing technology of boats, particularly the hydrofoil, a longer but higher-velocity path was faster, but the model refused to consider that solution because it was programmed not to.
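To make that bias concrete, here is a toy sketch in Python (nothing like the team's actual software; the speed polar, the foiling threshold, and the 45-degree cap are all invented for illustration). Both searches use the same speed model; the "biased" one simply refuses to evaluate wide tacking angles, so it can never find the faster foiling mode:

```python
import math

def boat_speed(angle_deg):
    """Toy speed polar: speed creeps up with angle off the wind, and a
    large (invented) foiling bonus kicks in at wide angles."""
    base = 8.0 + 0.15 * angle_deg            # knots
    foiling_bonus = 12.0 if angle_deg >= 48 else 0.0
    return base + foiling_bonus

def vmg(angle_deg):
    """Velocity made good upwind: boat speed projected onto the wind axis."""
    return boat_speed(angle_deg) * math.cos(math.radians(angle_deg))

angles = range(30, 71)

# "Biased" model: programmed to assume the shortest path wins, so it
# never evaluates tacking angles wider than 45 degrees.
biased_best = max((a for a in angles if a <= 45), key=vmg)

# Unconstrained model: searches the whole range and finds the foiling mode.
true_best = max(angles, key=vmg)

print(f"biased search picks {biased_best} deg, VMG {vmg(biased_best):.1f} kn")
print(f"full search picks   {true_best} deg, VMG {vmg(true_best):.1f} kn")
```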
bigfire:
Remember, this is just yacht racing, and Team Oracle with their big-money backing and big computer modeling still managed to screw up to the point of almost losing. We're supposed to entrust the fate of our planet to a bunch of self-serving Warmists who haven't predicted anything right?
February 28, 2014, 10:39 am

slocum:
There has to be more to it than that, because you don't have to be any kind of sailboat-racing expert to know that sailing as close to the wind as possible isn't necessarily the fastest way to get there. In fact, even my first sailing-oriented GPS (from the mid-1990s) had a VMG (velocity made good) function, which makes it possible to tell which upwind angle is getting you toward your destination fastest.
February 28, 2014, 2:24 pm

MingoV:
Models are only as good as their presumptions. A few types of models work with no presumptions (neural-network-based models, for example), but most require the designer to incorporate all factors that are likely to affect the outcome. The model must be allowed to exclude factors that turn out not to be relevant (e.g., atmospheric CO2 in global-warming models). If likely factors are excluded, or are forced to be included, then there is a high chance of model failure.
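A minimal illustration of MingoV's point about forced and excluded factors, using made-up data: the fit is harmless when the model is allowed to drive an irrelevant factor's weight to zero, but fails badly when the designer excludes the factor that actually drives the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
relevant = rng.normal(size=n)        # factor that truly drives the outcome
irrelevant = rng.normal(size=n)      # factor with no real effect
y = 3.0 * relevant + rng.normal(scale=0.5, size=n)

def fit_rmse(*columns):
    """Least-squares fit of y on the given columns; in-sample RMSE."""
    X = np.column_stack(columns)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sqrt(np.mean((y - X @ beta) ** 2))

# Including a junk factor is cheap: the data drives its weight to ~0.
print("both factors included:", fit_rmse(relevant, irrelevant))
# Excluding the real driver by design is fatal, no matter how good the fit code.
print("real driver excluded: ", fit_rmse(irrelevant))
```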
February 28, 2014, 3:49 pm

obloodyhell:
If you read it, there is -- they had calculated, via their models, the optimal tack angle, which was steeper (more towards the wind) than the opponent's, but they were still getting their asses handed to them.
It wasn't until they re-examined the models that they realized a shallower angle would actually go faster, and they began beating out the Kiwis as a result of now having a faster boat in both directions, up AND downwind.
Realize -- models -- computer programs -- are, at the end of the day, LOGIC engines.
And the term GIGO -- garbage in, garbage out -- applies as much to them as to anything else logic is applied to.
March 1, 2014, 4:28 pm

sean2829:
What the models did not account for was the difference between the hull being in the water on the upwind tack and the boat running high on the hydrofoils. The New Zealand team figured out that not sailing as close to the wind would let them get up on the foils, and that the hydrodynamic drag was much lower with the hulls out of the water, even though they had to travel a greater distance. That allowed New Zealand to beat Team Oracle in the early races. It's quite remarkable that it took four or five races for the Oracle team to figure out that sailing less close to the wind and getting up on the foils would give them more speed relative to the added distance and let them beat the Kiwis. A systematic error in strategy.
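Back-of-envelope arithmetic for the tradeoff sean2829 describes (the angles and speeds are invented, but the geometry is the point: a wider tack covers more water for the same upwind progress, and still wins if the speed gain is large enough):

```python
import math

UPWIND_LEG_NM = 3.0   # straight-line distance to the upwind mark

def minutes_to_mark(angle_off_rhumb_deg, speed_kn):
    """Tacking at a wider angle covers more water for the same upwind
    gain: distance sailed = leg length / cos(angle off the rhumb line)."""
    distance_nm = UPWIND_LEG_NM / math.cos(math.radians(angle_off_rhumb_deg))
    return 60.0 * distance_nm / speed_kn

# Hull in the water: points higher, but drag keeps the speed low.
print(f"displacement mode: {minutes_to_mark(40, 12.0):5.1f} min")
# Up on the foils: a longer path, but far less hydrodynamic drag.
print(f"foiling mode:      {minutes_to_mark(50, 30.0):5.1f} min")
```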
March 2, 2014, 4:35 pm

Ignoramus:
Modeling has even more challenges when applied to economics and other human behavior. This doesn't mean it's useless, just that it has limitations.
March 3, 2014, 7:20 am

I worked on modeling subprime mortgages back in the late 90s. The models showed that, back then, the value of a subprime mortgage to an investor was much more dependent on prepayment speeds than on borrower defaults. The risk was that a "good" borrower would refinance in the near term and that you'd be stuck with a pool that had more "bads." Back then borrowers mostly had some meaningful equity in the home, so defaults were relatively infrequent and, on average, the loss to the lender was tolerable. This modeled conclusion turned out to be substantially correct: subprime securitizations from that era, and their sponsors, underperformed because prepayments were faster than the earlier data had suggested.
Flash forward, and borrower equity came to be systematically ignored by lenders, rating agencies, and regulators. Models built on the prior data turned out to be horrendously wrong. We're still dealing with the consequences.
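A toy pool-level cash-flow sketch of that first point (emphatically not the actual models; every parameter is invented, with a low loss severity standing in for the equity cushion of that era). It shows the pool's value moving more on a prepayment shock than on a default shock:

```python
def pool_value(cpr, cdr, coupon=0.10, funding=0.06, years=10, severity=0.10):
    """PV of the excess interest on a $1 pool. Each year the pool shrinks
    from prepayments (cpr) and defaults (cdr); defaulted balance loses
    `severity`, kept low here because borrowers had real home equity."""
    balance, value = 1.0, 0.0
    for t in range(1, years + 1):
        spread = balance * (coupon - funding)     # excess interest earned
        losses = balance * cdr * severity         # credit losses this year
        value += (spread - losses) / (1 + funding) ** t
        balance *= (1 - cpr) * (1 - cdr)          # survivors roll forward
    return value

base = pool_value(cpr=0.20, cdr=0.03)
print(f"prepays jump 20% -> 30% CPR:  {pool_value(0.30, 0.03) - base:+.4f}")
print(f"defaults double 3% -> 6% CDR: {pool_value(0.20, 0.06) - base:+.4f}")
```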
DensityDuck:
It's worth pointing out that Team Oracle did just fine in the actual *sailing* part of the competition. Where they lost was in failing to comply with rules invented halfway through the competition that were specifically designed to make their boat illegal.
March 3, 2014, 8:51 am

FelineCannonball:
The model had an error. The model was fixed.
I don't really see how this is a "shortcoming of computer modelling." It's like saying programming is stupid because programs have bugs. A caveat, maybe. Like garbage in, garbage out.
Any look into the modelling of anything will show an evolution from simple to complex, and from buggy/wrong to better and more useful. Generally you do exactly what the Oracle team did: scratch your head when your model can't reproduce the New Zealand team's results, then look into the code and figure out the problem. You absolutely expect this type of problem when you are the first team programming a model. Even the best programmers don't code models correctly the first time.
I can think of a commercial example where an instrument maker discovered a math error in a critical review paper 30 years before its competitors did. They had a huge advantage in design, based on being able to model the actual ion optics without the error. The other companies flailed with inferior products for decades until the patents started timing out, at which point they just copied the successful instruments.
--------------
As for your slight against climate models, I'm not really sure this gets you very far. Hundreds of different teams have approached the problem since the 1980s from different perspectives, with some of the most talented programmers and graduate students going over the code. Over time they have found bad physics in some models and artificially limited parameters. They continuously test the models against present-day reality and paleoclimate records, and they compare models to understand why different models make different predictions and why some deviate more from the real structure of the atmosphere -- say, the moisture content of the stratosphere or some other detail.
The biggest failing on the part of skeptics, in my mind, is not grasping exactly what is being modeled and with what certainty. GCMs don't predict weather. They don't predict intradecadal and interdecadal climate cycles due to ocean-atmosphere heat-exchange oscillations. They don't model the atmosphere or ocean with highly accurate stratification, circulation, and mechanics. They are simplifications aimed at predicting long-term, multi-decadal trends due to the lack of heat-exchange equilibrium between the current earth surface and the top of the atmosphere. Precise predictions of regional temperature and precipitation are beyond most of these models and are a subject of active research. Hurricane frequency is a subject of active research. There are a few things that can be stated with relative certainty in the field, but they must be stated precisely and with caveats.
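For scale, the simplest caricature of what these models target is the textbook zero-dimensional energy-balance model sketched below (illustrative parameter values, taken from no actual GCM). It contains no weather, circulation, or regions at all, just the long-run temperature response to a top-of-atmosphere imbalance:

```python
# Zero-dimensional energy-balance model: C * dT/dt = F - lambda * T
C = 8.0      # effective heat capacity, W yr m^-2 K^-1 (rough mixed-layer value)
LAM = 1.2    # climate feedback parameter, W m^-2 K^-1 (illustrative)
F = 3.7      # step forcing, W m^-2 (the figure usually quoted for doubled CO2)

dt, years = 0.1, 200
T = 0.0
for _ in range(int(years / dt)):
    T += dt * (F - LAM * T) / C    # warm until the response balances the forcing

print(f"anomaly after {years} yr:      {T:.2f} K")
print(f"equilibrium response F/lambda: {F / LAM:.2f} K")
```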
March 3, 2014, 10:51 am

Ted Rado:
Any program that is not based on the basics, does not include ALL variables, and has not been validated against actual data is nonsense.
March 3, 2014, 12:30 pm

FelineCannonball:
"ALL variables"
I'd recommend reading up on sensitivity analysis: http://en.wikipedia.org/wiki/Sensitivity_analysis
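The one-at-a-time version of that idea fits in a few lines. Below is a made-up three-input model; bumping each input by 10% shows immediately that one input dominates, one matters a little, and one could be dropped with no loss:

```python
def model(x1, x2, x3):
    """Toy model: dominated by x1, mildly affected by x2,
    essentially insensitive to x3."""
    return 10.0 * x1 + 0.5 * x2 ** 2 + 1e-4 * x3

base = {"x1": 1.0, "x2": 1.0, "x3": 1.0}
y0 = model(**base)

# One-at-a-time scan: perturb each input by +10%, watch the output move.
for name in base:
    bumped = dict(base, **{name: base[name] * 1.1})
    print(f"{name}: output change = {model(**bumped) - y0:+.5f}")
```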
Some chaotic systems are in fact impossible to model precisely even if you recreate every atom's position and velocity at the beginning of the simulation. But if you ask a different question -- say, about the average behavior of that chaotic system -- you can model it quite well with considerable simplifications.
For instance, the precise timing of drips from a chaotically dripping faucet cannot be predicted more than a minute out, but certain aspects of the repeating pattern of drips -- maximum drip size, maximum time gap between drips -- can be modeled in a few minutes with a handful of differential equations. Similarly, GCMs might model the position of storm tracks and storm-track variability (say, during different ENSO modes) but not be able to predict the timing of individual storms or the timing and intensity of ENSO cycles.
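The logistic map makes a convenient stand-in for the faucet (a standard textbook chaotic system, not a model of an actual faucet): two runs from starting points that differ by one part in ten billion disagree completely within a few dozen steps, yet their long-run statistics agree closely:

```python
import numpy as np

def logistic_run(x0, n=100_000, r=3.9):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

a = logistic_run(0.2)
b = logistic_run(0.2 + 1e-10)    # near-identical starting condition

# Individual values become unpredictable almost immediately...
print("step 60:", a[60], "vs", b[60])
# ...but the statistics of the trajectory are stable and modelable.
print("mean:    ", a.mean(), "vs", b.mean())
print("90th pct:", np.percentile(a, 90), "vs", np.percentile(b, 90))
```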
March 3, 2014, 2:13 pm