Crazy Government Responses to COVID Part 3: The Wrong Metrics

It should not be surprising that any roundup I do of problems with COVID response would include a chapter on metrics -- I am a very strong believer that metrics and incentives live at the very heart of most private and public organizational failures.  I already dealt with incentives in Part 1, though I will come back to them a bit in this piece.

For the metrics, I want to focus narrowly on the selection and quality of COVID-related tracking metrics.  Perhaps I will cover it in a later chapter, but I will not cover here the absolutely awful performance of the media in reporting COVID data and COVID-related science.  Suffice it to say that the media has once again shown itself incapable of reading a scientific study and assessing the quality of its methodology, or of parsing the true results of a study vs. those ascribed to it in the press release.  Inherent problems in the data, such as the time delay in death reporting, have been made an order of magnitude worse by the media's inability (unwillingness?) to explain the shortcomings in the data.  Is it really so hard to explain that deaths reported yesterday in such and such state did not actually all occur yesterday, and in fact represent data updates that are sometimes weeks old?  This simple bit of clarity has been a bridge too far for most of the media.

Poor Data Quality

For the last 18 months, we have had to work with absolutely awful data on COVID.  I am willing to believe that in March 2020 we had excuses for not knowing what we were doing.  But now?  Some examples:

  • We are testing for COVID using PCR tests that are far too sensitive.  These tests run a series of amplification cycles, each of which multiplies the genetic material being searched for.  Many tests are being run at as many as 40 cycles, which pretty much everyone agrees is way too sensitive and is likely to produce false positives.  Given the importance of this cycle number, it is astounding that in 18 months I have never seen -- not once -- a media article that reports positive COVID test counts along with the cycle number at which those tests were conducted.  (A sketch of the kind of reporting I have in mind follows after this list.)
  • Hospitalization data is skewed by the fact that hospitals have strong financial incentives to report patients as COVID patients.  This means a dude brought in after a car wreck who happens to test positive for COVID might be listed as a COVID patient, despite fitting no definition any of us would have for what should count as a COVID hospitalization.  Retrospective studies have consistently shown huge overcounts of COVID patients, confusing "patients with COVID" with the more important "patients hospitalized because of COVID."
  • Most retrospective audits have found that COVID death data suffers from the same over-reporting as the hospitalization data, as a person testing positive for COVID but dying of a stroke might still be listed as a COVID death.
  • I have no idea what is going on in many states with COVID death reporting delays.  We still see COVID deaths being added to counts for dates months in the past.  Why the long delay?  Is there some sort of reclassification going on, and if so, why?  If not, given that we have literally spent trillions of dollars on COVID response, why can't we fix these data issues?
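
To make the first bullet concrete, here is a minimal sketch of what reporting positives alongside their cycle-threshold (Ct) values might look like.  The test records, the field layout, and the 30/35-cycle cut points are all hypothetical placeholders for illustration, not actual lab guidance:

```python
from collections import Counter

# Hypothetical positive-test records: (sample_id, cycle threshold at which the
# test turned positive).  Labs know this number; published case counts do not show it.
positive_tests = [
    ("a1", 22), ("a2", 31), ("a3", 38), ("a4", 25), ("a5", 40), ("a6", 29),
]

def ct_band(ct: int) -> str:
    """Bucket a cycle-threshold value into coarse bands (cut points are illustrative)."""
    if ct < 30:
        return "Ct < 30 (strong signal)"
    if ct < 35:
        return "Ct 30-34"
    return "Ct >= 35 (very weak signal)"

# Tally positives by band so readers can see how many came from very high cycle counts.
counts = Counter(ct_band(ct) for _, ct in positive_tests)
for band, n in sorted(counts.items()):
    print(f"{band}: {n} positive tests")
```

Even this crude breakdown would tell a reader how many of the day's "cases" came from the weakest, highest-cycle detections.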

Following the Wrong Metric

The metric we should really care about is deaths (or, given that COVID deaths skew so heavily toward the old, perhaps total life-years lost to COVID).  After all, if we are to be honest, it is the prospect of death, not of getting really sick, that has certain elements of our population nearly catatonic in fear.  The problem is that even without the death reporting problems outlined above, deaths are far too much of a lagging indicator to be useful in spotting early trends.

The obvious timely alternative is case counts from positive tests.  Unfortunately, though, because the vast majority of positive tests for COVID are for folks who will never display anything but mild symptoms (and due to the testing issues discussed above), this is not a very good metric either.  But there is another problem -- all positive tests are not created equal.  A positive test of a healthy 20-year-old is pretty much the occasion for a big yawn.  A positive test for an 86-year-old with heart problems and diabetes is a cause for immense concern.  But the metrics do not differentiate.  We just see case counts on the news.  And note the ratio between these two extremes has not been stable -- early on a lot of the cases were in older folks, while today most of the positive tests are in young and healthy people.  Add to this the fact that we now have positive tests in the vaccinated, who are highly unlikely to die of the disease, and I would argue that 1000 positive tests in August 2021 are FAR less worrisome than 1000 positive tests in March 2020.  But again we treat them the same.

So Coyote, is this just academic?  I don't think so.  Personally I think we have seen several decisions of late that are impossible to justify based on science.  For example:

  • Oregon's governor has ordered that even the vaccinated must mask outdoors
  • Many, many school districts are demanding that kids as young as 3 mask in school

Neither of these is supported by any science, and to the extent that the former discourages people from getting out of crowded cities and into the outdoors, and the latter discourages children from getting educated, they likely have net negative consequences.  So why?

I would argue the problem is that we have gotten stuck on cases (from overly sensitive tests) as the key metric.  Kids going back to school will almost certainly increase case counts, but for a disease that is less threatening to them than the ordinary flu, so what?  We are stuck on a stupid metric that no longer reflects actual risk, and we have politicians mindlessly (see Part 1 on incentives) managing to that metric.

So what metric would be better?

First, a good metric needs to measure what we actually care about.  At some level, if we really think about it, we shouldn't give a sh*t about case counts per se -- we care about people who die of, or have serious health complications from, the disease.

Second, a good metric needs to be easy to calculate and should yield the same figure no matter who does the calculation.

Third, a good metric needs to be timely.  It is not helpful to have a collision indicator in the cockpit that only lights up 30 seconds after the plane hits the mountain.

Through the middle part of last year, I tended to look at hospitalization data.  It had its flaws (discussed above) but it struck me as the best balance between being timely (more timely than deaths) and indicating true risk (vs just cases).

I can envision a better metric: risk-adjusted cases.  This could be as simple or as complicated as one likes, but I would favor a simple version that did not look at too many variables -- maybe just 5 or 6 age bands and perhaps vaccination status.  How it would work, in its simplest form, is that each case in the summary statistics would be weighted by a factor based on that person's risk of death.  For a simple version with age bands, this means that a case in the 80+ band might have a multiplier of 10 or greater, while a case in the 0-18 band might have a multiplier of 0.1 or smaller.  This would have been impossible to do last March, but certainly by the time of last winter's peak it would have been very doable.  We could easily do this today and back-calculate the 2020 data for comparability.  I think age adjustment would be enough; I wouldn't get fancier than that, because we tend to have age in the case data but not a lot else.  Perhaps we could add a factor for vaccination status, since we measure that too.  A rough sketch of the calculation follows below.
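
Here is a minimal sketch of how the simplest age-banded version might be computed.  The age bands, the weights, and the sample case records are all hypothetical placeholders chosen to illustrate the mechanics, not estimates of actual COVID mortality risk:

```python
# Hypothetical weights: each case counts as a factor meant to be roughly
# proportional to the risk of death for that group (placeholder numbers).
AGE_BAND_WEIGHTS = {
    "0-18":  0.1,
    "19-49": 0.5,
    "50-64": 2.0,
    "65-79": 5.0,
    "80+":  10.0,
}

# Optional extra factor for vaccination status (also a placeholder).
VACCINATION_FACTOR = {True: 0.1, False: 1.0}

def risk_adjusted_cases(cases):
    """Sum weighted cases: each case contributes its age-band weight times its
    vaccination factor, instead of contributing 1 as in a raw case count."""
    total = 0.0
    for case in cases:
        weight = AGE_BAND_WEIGHTS[case["age_band"]]
        weight *= VACCINATION_FACTOR[case["vaccinated"]]
        total += weight
    return total

# Example: 1000 cases in unvaccinated kids score far lower than 100 cases
# in the unvaccinated 80+ band.
young = [{"age_band": "0-18", "vaccinated": False}] * 1000
old = [{"age_band": "80+", "vaccinated": False}] * 100
print(risk_adjusted_cases(young))  # 1000 * 0.1  = 100.0
print(risk_adjusted_cases(old))    # 100  * 10.0 = 1000.0
```

Because the weighting is just a fixed lookup applied to data states already collect (age, and increasingly vaccination status), anyone running the same table over the same case records would get the same number, which satisfies the second criterion above.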

This approach would give us a much clearer idea of how worried we should be about rising case counts, and it would be a better leading indicator of potential stress on the health care system one to two weeks out.