by Mike Hearn
A recent paper in the Lancet claims that one in five people might not get immunity from being infected with COVID. The study is invalid. Although these sorts of problems have been seen before, this is a good opportunity to quickly recall why COVID science is in such dire straits.
The research has a straightforward goal: follow a population of people who tested positive during Denmark’s first wave, and re-test them during the second wave to see whether they became infected a second time. Denmark runs a large, free PCR testing programme, so there is plenty of data to analyse. Out of 11,068 people who tested positive in the first wave, 72 also tested positive during the second wave. This fact is used to advocate vaccinating people who have already had COVID.
The obvious problem with this strategy is that false positives can produce apparent reinfections even when no such thing has happened. The paper doesn’t mention this possibility until page 7, where the entire topic is dismissed in a single sentence: “Some misclassifications by PCR tests might have occurred; however, the test used is believed to be highly accurate, with a sensitivity of 97·1% and specificity of 99·98%.” My curiosity was piqued by this figure because, as I’ve written previously, at least as of June last year nobody knew what the false positive rate of COVID PCR testing was. The problem is circular logic: COVID is defined as having a positive test, therefore by definition the test has no false positives, even though we know this cannot be true.
Is it possible this problem has since been fixed? Sadly, we’re talking about public health, so the answer is no. The citation is deceptive. The cited paper is a modelling paper from August. When read carefully, we discover two surprising facts. First, its conclusion says clearly that “A high risk of false-positives should be considered… This may have consequences for, e.g., containment strategies and research” – in other words, the opposite of what the Lancet study implies it says. Second, the 99.98% figure is totally made up:
[W]e set specificity to 99% – the lower level suggested by the Danish Health Authority. However, this figure may be an underestimate. Cross-reactivity to other endemic respiratory viruses has not been found under reference conditions. Contamination etc. are minimised by strict procedures in clinical practice. We therefore also repeated the analyses using a higher specificity of 99.98%…
In other words, although the government told them to expect a 1% FP rate, they decided to be nearly two orders of magnitude more optimistic. No justification for the 99.98% specificity figure is provided beyond their faith in “strict procedures”; it is pulled out of thin air and used as an alternative model scenario. Producing the number of “reinfections” seen in the Lancet study requires an FP rate of only ~0.65%, so if the Danish government’s advice is correct, we should expect all of the reinfections to be false positives. Certainly, no convincing evidence is provided that this is not the case.
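The arithmetic behind that ~0.65% figure is simple enough to check on the back of an envelope. The sketch below uses only the numbers quoted above and makes the simplifying assumption that each person contributes one second-wave test (repeat testing would push the expected number of false positives even higher):

```python
# Figures quoted in the article above.
first_wave_positives = 11_068   # people followed into the second wave
apparent_reinfections = 72      # of whom tested positive again

# FP rate that would explain every "reinfection" as a false positive,
# assuming (simplification) one test per person during the second wave.
implied_fp_rate = apparent_reinfections / first_wave_positives
print(f"Implied FP rate: {implied_fp_rate:.2%}")  # ~0.65%

# Expected false positives under the two specificity figures discussed:
# 99% (the Danish Health Authority's lower bound) and 99.98% (the
# modelling paper's optimistic scenario).
for specificity in (0.99, 0.9998):
    expected_fps = first_wave_positives * (1 - specificity)
    print(f"Specificity {specificity:.2%}: "
          f"~{expected_fps:.0f} expected false positives")
```

At 99% specificity the expected count is roughly 111 false positives, more than the 72 “reinfections” actually observed; only under the optimistic 99.98% scenario (roughly 2 expected false positives) do the reinfections look real.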
I think most of us have stopped being surprised by this sort of thing. Papers with severe problems that literally anyone can find in five minutes keep being published by major journals. Worse, this particular issue is so basic it’s hard to see how it could be a mistake. Although it’s painful to reach, the only plausible conclusion is that scientists know they are misleading people and are doing it deliberately out of a misguided belief that it’s for the greater good.
Finally, please remember that a paper being invalid does not automatically prove the inverse claim. The takeaway here is not “being infected always grants immunity”, even though that seems rather likely, but only that this paper does not prove the opposite.
Mike Hearn is a former Google software engineer. You can read his blog at Plan 99.