Archive for the ‘Science’ Category.

I Am Starting To Believe Most Studies Are Crap

I spent years, before I burned out on the task, picking over bad climate studies, and at the time reached the conclusion that there was something anomalous about the climate science field in how much bad science, bad sampling methodology, and bad statistical practice it tolerated.

However, now I am coming to the conclusion that perhaps most studies in every field are dominated by this same crap.  Here is an example, from the NTSB, on buses.

SAT Scores Are Bad Education Evaluation Data

I am happy to see the public school system coming in for much-deserved criticism.  I don't have anything to add to this article that I have not already said about schools many times.  But I want to make one complaint about a chart used in the blog post:

[Chart: SAT scores over time]

SAT scores are a terrible metric for measuring academic performance over time.

First, I am not at all convinced that the test scoring does not shift over time (no WAY my son had a higher score than me, LOL).

But perhaps the most important problem is that not all students take the SAT -- it is a choice.  Shifts in the mix of kids taking the test -- for example, if over time more kids get interested in college, so that more marginally academic kids sit for the exam -- will move the scores based solely on the mix, not on performance.  Making this more complicated, there is at least one competing test (the ACT) which enjoys more popularity in some states than others, so the SAT will represent an incomplete and shifting geographic mix of the US.  Finally, as students have gotten smarter about this whole process**, they gravitate to the ACT or the SAT based on their differing capabilities, since the two tests measure skills in different ways.
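
To see how pure mix shift can move the average, here is a toy sketch in Python; all the scores and shares are made up purely for illustration:

```python
# Toy illustration of the mix-shift problem: neither group's performance
# changes, but the average falls because more marginal students take the test.
# All numbers below are invented for illustration only.
strong_score, marginal_score = 1150, 950

year1_share_marginal = 0.30
year2_share_marginal = 0.50   # more marginal students decide to try for college

year1_avg = strong_score * (1 - year1_share_marginal) + marginal_score * year1_share_marginal
year2_avg = strong_score * (1 - year2_share_marginal) + marginal_score * year2_share_marginal

print(year1_avg, year2_avg)   # 1090.0 vs. 1050.0 -- a 40-point "decline" with no change in ability
```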

To me, all this makes SAT scores barely more scientific than an Internet poll.

** If you have not had a college-bound student recently, you will have to trust me on this, but parents can spend an astounding amount of time trying to out-think this stuff.  And that is here in flyover country.  Apparently private school parents on the East Coast can be absurd (up to and including hiring consultants for 6 figures).  A few years ago it was in vogue to try to find your kid a unique avocation.  Violin was passe -- I knew kids playing xylophone and the bagpipes.  A friend of mine at a high profile DC private school used to have fun with other parents telling them his son was a national champion at falconry, the craziest thing he could make up on the spur of the moment at a cocktail party.  Other parents would sigh enviously, wishing they had thought of that one for their kid.

How To Achieve A Titillating Headline

Glenn Reynolds linked this titillating headline:

NINE PERCENT OF YALE STUDENTS SURVEYED SAY THEY’VE ACCEPTED MONEY FOR SEX

Of course, when you read the article (yes, I clicked through, I have no pride), you find that:

  • The sample size is approximately 40
  • The sample was from a group of people who self-selected to attend a seminar by the owner of a sex-toy business

The "3% who participated in bestiality" is actually 1 person out of 40 who have a self-selected interest in pushing sexual boundaries.  With a little larger sample size, a bit poorer math, and a bit more work goal-seeking to a desired outcome, this might almost meet the standards of climate research.

Which is all a relief to me -- after 30+ years of being a Yale hater, I was afraid I might have to admit it was a more interesting place than I thought.

Scale of the Universe

Alarmism Fail

Anthony Watts has a nice catalog of past predictions of doom (e.g. running out of oil, food, climate issues, etc).  It really would be funny if it were not such a serious and structural issue with the media.   I would love to see someone like the NY Times have a sort of equivalent of their reader advocate whose job was to go through past predictions published in the paper and see how they matched up to reality.  If I had more time, this is the blog I would like to start.

Update:  One of his readers, Dennis Wingo, took the resource depletion table from the Club of Rome's Limits to Growth (the brand of forecasting Paul Ehrlich famously championed) and annotated it -- the numbers in red show the resources it predicted we should already have run out of.

However, rather than ever, ever going back and visiting these forecasting failures and trying to understand the structural problem with them, the media still runs back to Ehrlich as an "expert".

He Who Has Two Clocks is Never Sure

I had no idea there were so many time standards:

We already have way too many time standards, including:

  • TAI, time based on an atomic clock, which ignores all motion of the Earth
  • UT0 and UT1, time based on precise measurement of the Earth’s rotation
  • GPS, the time standard used by GPS satellites
  • UTC, the standard used in computing, which is like TAI but with leap seconds to keep it in sync with Earth
  • TDT, TBT, TCB, and TCG, which are all even worse

This leads to all kinds of little headaches, particularly for programmers. For example, the clock in your smartphone’s GPS is 16 seconds out of sync with the phone’s system clock. This is because the system clock uses Coordinated Universal Time (which has leap seconds), but GPS time doesn’t. They were in sync in January of 1980 and probably never will be again.
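
For programmers, the practical upshot is that you have to carry that leap-second offset around yourself. Here is a minimal sketch in Python, with the 16-second offset from the quote hard-coded; the real offset grows every time a new leap second is inserted, so production code looks it up in a table:

```python
from datetime import datetime, timedelta

# GPS time and UTC matched at the GPS epoch (January 1980) and have drifted
# apart by one second for every leap second added since. The 16-second offset
# below is the figure mentioned in the quote; it changes over time.
GPS_EPOCH = datetime(1980, 1, 6)
GPS_MINUS_UTC_SECONDS = 16

def gps_to_utc(gps_seconds):
    """Convert seconds since the GPS epoch into a (naive) UTC datetime."""
    return GPS_EPOCH + timedelta(seconds=gps_seconds - GPS_MINUS_UTC_SECONDS)

print(gps_to_utc(1_000_000_000))
```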

IQ Tests

I have never been convinced that IQ tests have really distinguished core intelligence from education.  I scored much better on IQ tests after I practiced and read about how to tackle certain types of problem.

It is for this reason that I have always assumed the Flynn effect to be due to education, not changes in native intelligence.

News You Can Use

Soon to be the subject of a Michael Bay film, I am sure.  Can you point a machine gun at the ground in order to fly?

Italy Jails Scientists for Failing to Predict Earthquake

Unbelievable.  We will be burning witches next.

Six Italian scientists and an ex-government official have been sentenced to six years in prison over the 2009 deadly earthquake in L'Aquila.

A regional court found them guilty of multiple manslaughter.

Prosecutors said the defendants gave a falsely reassuring statement before the quake, while the defence maintained there was no way to predict major quakes.

The 6.3 magnitude quake devastated the city and killed 309 people.

It took Judge Marco Billi slightly more than four hours to reach the verdict in the trial, which had begun in September 2011.

The seven - all members of the National Commission for the Forecast and Prevention of Major Risks - were accused of having provided "inexact, incomplete and contradictory" information about the danger of the tremors felt ahead of the 6 April 2009 quake, Italian media report.

This is what I call the layman's "CSI" view of science, which assumes that certainty is possible in analyzing and forecasting complex systems.  I am not going to blame the victim here, but I will note that scientists have to some extent made this situation far worse by insisting that they have levels of certainty they do not have, particularly in highly charged political debates (e.g. economics and climate).

Former Harvard physicist Luboš Motl argues it will give scientists roughly the same incentives doctors have in areas with lots of malpractice suits:

The verdict de facto lionizes crackpots who were screaming that there had to be a large earthquake and they just happened to be right in that case – while isomorphic and sometimes the very same crackpots are wrong in 99.9% of other cases in which they cry wolf – and it condemns the scientific method. They are wrong in 99.9% of cases because their predictive framework has nothing to do with science – it's all about a psychopathological paranoia – but even a broken clock is right twice a day.

The lesson for the scientists is clear: If you are a scientist who is qualified in a discipline that has implications for the safety of people, you must always recommend precautionary measures to be taken even if you conclude that the probability that something bad will happen is tiny. Italy may expect much more hysteria in various similar science-related situations than it has had so far because a court has declared a war on everyone who is honest and balanced.

Can you imagine that this sick logic would be applied e.g. to surgeons? Surgeons could spend 6 years in prison after every death of a patient whom they or others were optimistic about. It's just insane. People sometimes die, natural catastrophes sometimes occur, and it's just impossible to identify a human culprit in most cases. Only if a professional makes a mistake in which he or she has demonstrably violated some established and functional rules to reduce the risk – and whether or not this was the case may only be determined by another expert – he or she could be considered co-responsible for the deaths.

More Scientific Than Thou

MSNBC has worked hard to be the official TV channel of the "reality-based community" which so often lectures us skeptics on how we are all anti-science and stuff.     (source)

The author of XKCD now has a site that answers odd science questions.  Here is mine:  If, at a mass of over 200 pounds, Felix Baumgartner were indeed accelerated to faster than light and pointed at the Earth, what would happen?

Isaac Asimov wrote a mystery short story something like this, with a billiard ball accelerated to light speed.

Streambed on Mars?

This is pretty cool -- what look like rounded river rocks and sedimentary conglomerates on Mars.

Rename the Chevy Volt to the Chevy Bastiat

Quick - in your last fill up, how much did you pay for gas?  About how many gallons did you use?

If you are like most people, you can probably come pretty close.  I paid somewhere just north of $4.00 a gallon for about 18 gallons.

OK, second set of questions:  On your last electric bill, how much did you pay per kWh?  How many kWh did it take to run your dishwasher last night?

Don't know?  I don't think you are alone.  I don't know the answers to the last questions.   Part of the reason is that gas prices are posted on every corner, and we stare at a dial showing us fuel used every time we fill up.  There is nothing comparable for electricity -- particularly for an electric car.

I understand some of the inherent appeals of electric cars.  They are fun to drive, kind of quiet and stealthy like KITT from Knight Rider.  They are really torquey and have nice acceleration.  There is no transmission or gear changing.  All cool and awesome reasons to buy an electric car.

However, my sense is that the main appeal of electric cars is that because we don't see the fuel price on the corner, and because we don't stare at a spinning dial as the electrons flow into the car, we pretend it is not costing us anything to fill up.  Out of sight is out of mind.  Heck, even experienced car guys who should know better take this attitude.  Popular Mechanics editor Jim Meigs wrote to Glenn Reynolds, re: the Volt:

Others might like the notion of going a month or two without filling the tank

This drives me crazy.  Of COURSE you are filling the freaking tank.  You are just filling the lead-acid (or lithium-ion) one with electrons rather than filling the hollow steel one with hydrocarbon molecules.  The only difference is that you don't stand there watching the meter spin.   But that should not mean that we pretend we are not filling the car and paying a cost to do so.

By the way, if you have read me before, you know I also have a problem with the EPA equivalent mileage standards for electric cars, which basically inflate the numbers by a factor of three by ignoring the second law of thermodynamics.  This fraudulent mileage number, combined with the EPA's crazy-high new mileage standards, represents an implicit subsidy, almost a mandate, for electric cars that gets little attention.  And that will have zero effect on energy usage because the numbers are gamed.
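
To make the factor-of-three point concrete, here is a back-of-the-envelope sketch in Python. The 33.7 kWh-per-gallon equivalence is the EPA's own figure; the car's plug-side consumption and the ~33% generation-and-transmission efficiency are assumptions for illustration, not official numbers:

```python
# Sketch of how MPGe changes if you count the energy lost making the electricity.
KWH_PER_GALLON = 33.7        # EPA's gasoline energy equivalence
GRID_EFFICIENCY = 0.33       # assumed average generation + transmission efficiency

ev_kwh_per_100_miles = 35.0  # hypothetical electric car consumption at the plug

epa_mpge = 100 / (ev_kwh_per_100_miles / KWH_PER_GALLON)   # ~96 MPGe, plug-to-wheels
source_mpge = epa_mpge * GRID_EFFICIENCY                   # ~32 "MPGe" counting generation losses

print(epa_mpge, source_mpge)
```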

 

Because They Are Humanitarians

I used to scoff at how Ayn Rand turned the word "humanitarian" in The Fountainhead into a term of derision.  I didn't think it was justified to assume anyone adopting the humanitarian title had to be evil.  Surely, for example, Andrew Carnegie with his philanthropy and opposition to war could be considered a positive humanitarian?

But maybe she was on to something.  At least as far as Greenpeace is concerned:

According to the World Health Organization between 250,000 to 500,000 children become blind every year due to vitamin A deficiency, half of whom die within a year of becoming blind. Millions of other people suffer from various debilitating conditions due to the lack of this essential nutrient.[2]

Golden Rice is a genetically modified form of rice that, unlike conventional rice, contains beta-Carotene in the rice kernel. Beta-Carotene is converted to vitamin A in humans and is important for eyesight, the immune system, and general good health.[3] Swiss scientist and humanitarian Dr. Ingo Potrykus and his colleagues developed Golden Rice in 1998. It has been demonstrated in numerous studies that golden rice can eliminate vitamin A deficiency.[4]

Greenpeace and its allies have successfully blocked the introduction of golden rice for over a decade, claiming it may have “environmental and health risks” without ever elaborating on what those risks might be. After years of effort the Golden Rice Humanitarian Project, led by Dr. Potrykus, The Rockefeller Foundation and others were unable to break through the political opposition to golden rice that was generated directly by Greenpeace and its followers.[5]

To their credit, Bill and Melinda Gates are giving it another try.

Things I Did Not Know

I suppose I should have guessed this, but it never occurred to me.  There seems to be a problem with growing weed resistance to herbicides that is entirely parallel to growing antibiotic resistance of certain germs.

"Abnormal" Events -- Droughts and Perfect Games

Most folks, and I would include myself in this, have terrible intuitions about probabilities, and in particular about the frequency and patterns of occurrence in the tail ends of the normal distribution, what we might call "abnormal" events.  This strikes me as a particularly relevant topic as the severity of the current drought and high temperatures in the US is being used as absolute evidence of catastrophic global warming.

I am not going to get into the global warming bits in this post (though a longer post is coming).  Suffice it to say that if it is hard to measure shifts in the mean of climate patterns accurately and directly, given all the natural variability and noise in the weather system, it is virtually impossible to infer shifts in the mean from individual occurrences of unusual events.  Events in the tails of the normal distribution are infrequent, but not impossible or even unexpected over enough samples.

What got me to thinking about this was the third perfect game pitched this year in the MLB.  Until this year, only 20 perfect games had been pitched in over 130 years of history, meaning that one is expected every 7 years or so  (we would actually expect them more frequently today given that there are more teams and more games, but even correcting for this we might have an expected value of one every 3-4 years).  Yet three perfect games happened, without any evidence or even any theoretical basis for arguing that the mean is somehow shifting.  In rigorous statistical parlance, sometimes shit happens.  Were baseball more of a political issue, I have no doubt that writers from Paul Krugman on down would be writing about how three perfect games this year is such an unlikely statistical fluke that it can't be natural, and must have been caused by [fill in behavior of which author disapproves].  If only the Republican Congress had passed the second stimulus, we wouldn't be faced with all these perfect games....

Postscript:  We like to think that perfect games are the ultimate measure of a great pitcher.  This is half right.  In fact, we should expect entirely average pitchers to get perfect games every so often.  A perfect game is when the pitcher faces 27 hitters and none of them get on base.  So let's take the average hitter facing the average pitcher.  The league average on base percentage this year is about .320 or 32%.  This means that for each average batter, there is a 68% chance for the average pitcher in any given at bat to keep the batter off the base.  All the average pitcher has to do is roll these dice correctly 27 times in a row.

The odds against that are .68^27, or about one in 33,000.  But this means that once in every 33,000 pitcher starts (there are two pitcher starts per game played in the MLB), the average pitcher should get a perfect game.  Since there are about 4,860 regular season starts per year (30 teams x 162 games), the average pitcher should get a perfect game every 7 years or so.  Through history, there have been about 364,000 starts in the MLB, so this would point to about 11 perfect games by average pitchers.  About half the actual total.
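
A quick sketch of that arithmetic in Python, using only the numbers already given above:

```python
# An average pitcher retiring 27 straight average hitters.
league_obp = 0.320
p_perfect_game = (1 - league_obp) ** 27

print(round(1 / p_perfect_game))              # ~33,000 starts per perfect game
starts_per_year = 30 * 162                    # ~4,860 pitcher starts per season
print(1 / p_perfect_game / starts_per_year)   # ~7 years between average-pitcher perfect games
print(364_000 * p_perfect_game)               # ~11 expected over ~364,000 historical starts
```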

Now, there is a powerful statistical argument for why great pitchers should be over-weighted in perfect game stats:  the probabilities are VERY sensitive to small changes in on-base percentage.  Let's assume a really good pitcher has an on-base percentage against him that is 30 points less than the league average, and a bad pitcher has one 30 points worse.   The better pitcher would then expect a perfect game every 10,000 starts, while the worse pitcher would expect one every 113,000 starts.  I can't find the stats on individual pitchers, but my guess is that the spread in on-base percentage against between the best and worst pitchers is more than 60 points, since the team batting-average-against stats (not individual but team averages, which should be less variable) have a 60-point spread from best to worst. [update:  a reader points to this, which says there is actually a 125-point spread from best to worst.  That is a difference in expected perfect games from one in 2,000 for Jered Weaver to one in 300,000 for Derek Lowe.  Thanks Jonathan]

Update:  There have been 278 no-hitters in MLB history, or 12 times the number of perfect games.  The odds of getting through 27 batters against a .320 on-base percentage are one in 33,000.  Against a .255 batting average (which counts hits but not other ways of getting on base, exactly parallel with the definition of a no-hitter), the odds are just one in 2,830.  The ratio between these two odds, 11.7 to one, nearly perfectly explains the ratio of no-hitters to perfect games on pure stochastics.
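
The same arithmetic, extended to the sensitivity argument and the no-hitter comparison; again just a sketch of the numbers stated above:

```python
# Sensitivity of the odds to on-base percentage, plus the no-hitter comparison.
def starts_per_perfect_game(obp):
    return 1 / (1 - obp) ** 27

print(starts_per_perfect_game(0.290))   # ~10,000  (30 points better than average)
print(starts_per_perfect_game(0.320))   # ~33,000  (league average)
print(starts_per_perfect_game(0.350))   # ~113,000 (30 points worse than average)

# A no-hitter only requires zero hits, so batting average replaces OBP.
starts_per_no_hitter = 1 / (1 - 0.255) ** 27                   # ~2,830
print(starts_per_perfect_game(0.320) / starts_per_no_hitter)   # ~11.7, close to the observed 12:1 ratio
```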

Great Idea

Great idea, and consistent with my growing skepticism of all published research given a general bias towards positive results.

If you’re a psychologist, the news has to make you a little nervous—particularly if you’re a psychologist who published an article in 2008 in any of these three journals: Psychological Science, the Journal of Personality and Social Psychology, or the Journal of Experimental Psychology: Learning, Memory, and Cognition.

Because, if you did, someone is going to check your work. A group of researchers have already begun what they’ve dubbed the Reproducibility Project, which aims to replicate every study from those three journals for that one year. The project is part of the Open Science Framework, a group interested in scientific values, and its stated mission is to “estimate the reproducibility of a sample of studies from the scientific literature.” This is a more polite way of saying “We want to see how much of what gets published turns out to be bunk.”

I wrote about problems in research publishing here.

We Need A Radical Restructuring of Scientific Publishing

I have written a number of times before that having only a few page-limited scientific journals creates a bias towards positive results that can't be replicated:

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 “landmark” publications — papers in top journals, from reputable labs — for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

I observed:

This is not really wildly surprising.    Consider 20 causal relationships that don’t exist.  Now consider 20 experiments to test for this relationship.  Likely 1 in 20 will show a false positive at the 95% certainty level — that’s what 95% certainty means.  All those 1 in 20 false positives get published, and the other studies get forgotten.

Actually, XKCD did a better job of making this point.  It's a big image so I won't embed it but check it out.

Also, Kevin Drum links a related finding that journal retractions are on the rise (presumably from false positives that could not be replicated or were the results of bad process).

In 1890, there were technological and cost reasons why only a select few studies could be published in page-limited journals.  But that is not the case today.  Why do we still tie science to this outdated publication mechanism?  Online publication would allow publication of both positive and negative results.  It would also allow mechanisms for attaching critiques and defenses to the original study, as well as replication results.  Sure, this partially breaks the academic pay and incentive system, but I think most folks are ready to admit that it needs to be broken.

The Positive Result Bias

This is a pretty well-known non-secret among just about anyone who does academic research, but Arnold Kling provides some confirmation that there seems to be a tremendous bias towards positive results.  In short, many published positive results can't be replicated.

A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

"It was shocking," said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers we became convinced you can't take anything at face value."...

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

This is not really wildly surprising.    Consider 20 causal relationships that don't exist.  Now consider 20 experiments to test for this relationship.  Likely 1 in 20 will show a false positive at the 95% certainty level -- that's what 95% certainty means.  All those 1 in 20 false positives get published, and the other studies get forgotten.
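
A quick simulation of that arithmetic, sketched in Python; the 5% false-positive rate is just the flip side of the 95% confidence level:

```python
import random

# 20 experiments, each testing an effect that does not actually exist.
# At a 5% false-positive rate, about one per batch will look "significant"
# and be the one that gets written up.
random.seed(0)  # arbitrary seed, just so the sketch is repeatable

batches = 10_000
total_false_positives = 0
for _ in range(batches):
    total_false_positives += sum(random.random() < 0.05 for _ in range(20))

print(total_false_positives / batches)  # averages out to ~1.0 per batch of 20
```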

To some extent, this should be fixable now that we are not tied to page-limited journals.  Simply requiring, as a grant condition, that all findings be published online, positive or negative, would be a good start.

 

Difference Between Trusting Science and Scientists

I don't often defend Conservatives, but I will say that there is nothing much more useless to the public discourse than bullsh*t sociology studies trying to show that Conservatives are dumber or whatever (and remember, those same studies show libertarians are the smartest, so ha ha).

In this general category of schadenfreude masquerading as academics is the recent "finding" that conservatives are increasingly anti-science or have lost trust in science.  But here is the actual interview question:

166. I am going to name some institutions in this country. Some people have complete confidence in the people running these institutions. Suppose these people are at one end of the scale at point number 1. Other people have no confidence at all in the people running these institutions. Suppose these people are at the other end, at point 7. Where would you place yourself on this scale for: k. Scientific community?

A loss of trust in the scientific community is way, way different than a loss of trust in science.   Confusing these two is roughly like equating a loss of trust in Con Edison with not believing in electricity.  Here is an example from Kevin Drum describing this study's results:

In other words, this decline in trust in science has been led by the most educated, most engaged segment of conservatism. Conservative elites have led the anti-science charge and the rank-and-file has followed.

There are a lot of very good reasons to have lost some trust in our scientific institutions, in part due to non-science that gets labeled as real science today.  I don't think that makes me anti-science.  This sloppy mis-labeling of conclusions in ways that don't match the data, which Drum is ironically engaging in, is one reason many very scientifically minded people like myself are turned off by much of the public discourse on science.  The irony here is that while deriding skepticism of the scientific community, Drum provides a perfect case example of why this skepticism has grown.

Things I Did Not Know

Survivors of lightning strikes are often left with unique fractal scars.

I can just see someone walking up to one of these folks

"Dude, where did you get those amazing tats?"

"Forget it.  The price is too high."

Am I Missing Something?

Maybe I am missing something, but "friction-reduced" tires seem to be going in the wrong direction.   Hopefully friction-reduced brake pads or inflation-reduced airbags are not next.

Looking for Help on Tesla Battery

I have read a number of stories about how Tesla batteries become bricked if they are completely discharged.  What I have not seen is an explanation of the physics or chemistry of why this is true.  Can anyone explain it or give me a pointer to an explanation?  Certainly if this happened to, say, iPod batteries we would have had torches and pitchforks outside of Cupertino long ago.

Graphics Fail

One of the classic mistakes in graphics is the height / volume fail.  This is how it works:  the length of an object is used to portray some sort of relative metric.  But in the quest to make the graphic prettier, the object is turned into a 2D, or worse, 3D object.  This means that for a linear dimension where one object is 2x as long as another, its area is actually 4x the other and its volume is 8x.  The eye tends to notice the area or volume, so that the difference is exaggerated.

This NY Times graph is a great example of this fail (via here)

The Tebow character is, by the data, supposed to be about 1.7x the Brady character.  And this may be true of the heights, but visually it looks something like 4x larger because the eye is processing something in between area and volume, distorting one's impression of the data.   The problem is made worse by the fact that the characters are arrayed over a 3D plane.   Is there perspective at work?  Is Rodgers smaller than Peyton Manning because his figure is at the back, or because of the data?  The Vick figure, by the data, should be smaller than the Rodgers figure but due to tricks of perspective, it looks larger to me.
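
For what it's worth, the inflation is easy to quantify; a quick sketch using the 1.7x figure from the chart:

```python
# The 1.7x data ratio from the chart, inflated by reading area or volume.
data_ratio = 1.7
print(data_ratio)        # what the data supports: 1.7x
print(data_ratio ** 2)   # ~2.9x if the eye reads area
print(data_ratio ** 3)   # ~4.9x if the eye reads volume -- "something like 4x" sits in between
```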

This and much more is explained in Edward Tufte's book, The Visual Display of Quantitative Information.  You will find this book on a surprising number of geek shelves (next to a tattered copy of Gödel, Escher, Bach), but it is virtually unknown in the general populace.  Every USA Today graphics maker should be forced to read it.

The Media and Cancer Risks

The old saying goes, "where there is smoke, there's fire."  I think we all are at least subconsciously susceptible to thinking this way vis-a-vis the cancer risks in the media.  We hear so much about these risks that, even if the claims seem absurd, we worry that there might be something there.  After all, if the media is concerned, surely the balance of evidence must be at least close -- there is probably a small risk or increase in mortality.

Not so.  Take cell phones.  We have heard for decades concern about cancer risk from cell phones.  But they are not even close to dangerous, missing danger levels by something like 5 and a half orders of magnitude.

Cell phones do not cause cancer. They do not even theoretically cause cancer. Why? Because they simply do not produce the type of electromagnetic radiation that is capable of causing cancer. Michael Shermer explains, using basic physics:

...known carcinogens such as x-rays, gamma rays and UV rays have energies greater than 480 kilojoules per mole (kJ/mole), which is enough to break chemical bonds... A cell phone generates radiation of less than 0.001 kJ/mole. That is 480,000 times weaker than UV rays...

If the radiation from cell phones cannot break chemical bonds, then it is not possible for cell phones to cause cancer, no matter what the World Health Organization thinks. And just to put the "possible carcinogen" terminology into perspective, the WHO also considers coffee to be a possible carcinogen. Additionally, it appears that politics and ideology may have trumped science in the WHO's controversial decision.
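
A back-of-the-envelope check of Shermer's numbers, sketched in Python; the 250 nm UV wavelength and 2 GHz cell frequency are assumed typical values, not figures from his piece:

```python
# Energy per mole of photons: UV light versus a cell-phone signal.
PLANCK = 6.626e-34       # Planck's constant, J*s
LIGHT_SPEED = 3.0e8      # m/s
AVOGADRO = 6.022e23      # photons per mole

def kj_per_mole_from_wavelength(wavelength_m):
    return PLANCK * LIGHT_SPEED / wavelength_m * AVOGADRO / 1000.0

def kj_per_mole_from_frequency(frequency_hz):
    return PLANCK * frequency_hz * AVOGADRO / 1000.0

print(kj_per_mole_from_wavelength(250e-9))  # UV at 250 nm: ~480 kJ/mole, enough to break chemical bonds
print(kj_per_mole_from_frequency(2e9))      # cell signal at 2 GHz: ~0.0008 kJ/mole
```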

Debating the Second

I thought this was an interesting discussion of leap seconds.  At its heart, the debate is about a tradeoff between hassle (a lot of programming goes into inserting a second into a day every year or so) and how closely we want time to match its traditional association with astronomical observations (e.g. noon is exactly noon at Greenwich).   This is a debate that has occurred at least since the imposition of time zones (mainly at the behest of railroads), which for many cities converted "sun time" to "railroad time."  Until then, every town was on a different time, with noon set to local astronomical noon.  Now, only a few cities actually have noon at noon.  Of course, daylight savings time took this even further.
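
Part of the programming hassle is that Unix/POSIX time simply pretends leap seconds do not exist, so naive timestamp arithmetic across one comes up a second short. A minimal illustration, using the real leap second inserted on 30 June 2012:

```python
from datetime import datetime

# POSIX time allows exactly 86,400 seconds per day, so it cannot represent
# the leap second 2012-06-30T23:59:60. Naive subtraction across it is
# therefore one second short of the real elapsed time.
before = datetime(2012, 6, 30, 23, 59, 59)
after = datetime(2012, 7, 1, 0, 0, 1)

naive_elapsed = (after - before).total_seconds()  # 2.0 seconds
actual_elapsed = naive_elapsed + 1                # 3.0 seconds, counting 23:59:60

print(naive_elapsed, actual_elapsed)
```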