I wonder what will happen if they put France on the “red list”, as Culture Secretary Oliver Dowden seems to be considering… I suspect if you are a haulier you will still be able to come in from France – otherwise, where will all the food come from?
As has been seen from a previous LS post where the author set up a “removals” company and went skiing, one thinks this could be open to abuse.
I live five minutes away from the 24-hour South Mimms testing centre – I do wonder what is to stop me from getting free tests. Also, I wonder if van rentals will start popping up as members of the public suddenly become hauliers to find a reasonable excuse to go abroad!
Stop Press: In England, Lloyds Pharmacy is cutting costs for PCR tests to the bone by undercutting Boots by (wait for it!) £1! It’s now only £119 per test with them, which means the cost of the three PCR tests you’ll need to go on a trip to France has been slashed from £360 to £357. Bon voyage!
In Spring 2020 a novel coronavirus swept across the world: novel, but related to other viruses. In the UK, unknown at the time, around 50% of the population were already immune. This immunity arose from prior infection by the common cold-causing coronaviruses (of which four are endemic), and the evidence for it is unequivocal. It has been confirmed around the world by top cellular immunologists. There is even a very recent paper from Public Health England on the topic of prior immunity, and a wealth of other evidence from studies on memory T-cells, on household transmission and on antibodies.
Because of the extent of this prior immunity, and as a result of heterogeneity of contacts, “herd immunity” was established once only a low percentage of the population, perhaps as low as 10-20%, had been infected. This is why daily deaths, which were rising exponentially, turned abruptly and began to fall, uninterrupted by street protests, the return to work, the reopening of pubs and crowded beaches during the summer. (See this explainer by the data scientist Joel Smalley.)
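For readers who like to see the arithmetic, here is a minimal sketch of how prior immunity lowers the number of new infections needed before herd immunity is reached. The R0 value and immunity fractions below are illustrative assumptions chosen to match the figures discussed above, not measurements:

```python
# Back-of-envelope sketch: the classical homogeneous-mixing herd
# immunity threshold is 1 - 1/R0. If a fraction of the population is
# already immune, correspondingly fewer new infections are needed.
# (Heterogeneity of contacts lowers the threshold further still, but
# that effect is not modelled here.) All figures are illustrative.

def classical_hit(r0: float) -> float:
    """Herd immunity threshold under homogeneous mixing."""
    return 1.0 - 1.0 / r0

def new_infections_needed(r0: float, prior_immune: float) -> float:
    """Fraction of the population still needing infection, given
    that a fraction is already immune from prior exposure."""
    return max(classical_hit(r0) - prior_immune, 0.0)

r0 = 3.0      # a commonly cited figure for SARS-CoV-2 (assumption)
prior = 0.5   # the 50% prior immunity claimed above

print(f"Classical threshold at R0={r0}: {classical_hit(r0):.0%}")   # 67%
print(f"New infections needed with {prior:.0%} already immune: "
      f"{new_infections_needed(r0, prior):.0%}")                    # 17%
```

On these assumptions the required attack rate falls from around two thirds of the population to under 20%, which is the range quoted above.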
Immunity to ordinary respiratory viruses occurs mainly through T-cells, which ‘take a picture of the invader’ at a molecular level, ‘reproduce’ it on certain immune cells and essentially ‘never forget a face’. This T-cell immunity is robust and durable: those exposed to the closely related SARS virus in 2003 still have it 17 years later. The pattern of immunity to SARS-CoV-2 is so far identical, and after around 800 million infections across the world there is no convincing evidence for significant levels of re-infection. Not only are those who’ve been infected and have now recovered immune (they cannot get ill again with the same virus), but importantly they do not participate in transmission. (See my article on what SAGE got wrong for Lockdown Sceptics.) Furthermore, because the immune response is diverse, a proportion of them will also be immune to novel but similar viruses in the future.
In Spring, however, this virus did kill, or hasten the end for, approximately 40,000 vulnerable people, who were mostly old (median age 83, which is older than that cohort’s life expectancy at birth) and many of whom had multiple other medical conditions. In some rare and very unfortunate cases younger people also died, but age is clearly the strongest risk factor.
But due to extraordinary errors in modelling by unaccountable academics at Imperial College, the country was told to expect over half a million deaths. Three Nobel prize-winning scientists wrote to that modelling team in February correcting its errors. This was done confidentially. This expert, third-party estimate was remarkably accurate – it predicted a total of 40,000 deaths from COVID-19, which I believe is correct and is what has happened. While I have no proficiency in modelling, I can distinguish predictions that are biologically plausible from those which are literally incredible. When the inputs to a model are wrong or missing, its outputs cannot be trusted. The Imperial model made the extreme assumptions that there was zero prior immunity in the population and no heterogeneity of social contacts.
The ease with which humans develop immunity to this virus is striking. Incidentally, it is this immune adeptness which has probably played an important role in why, against prior pessimism, many vaccines for SARS-CoV-2 have apparently ‘worked’ (though there is much to criticise about how efficacy has been defined, because a reduction in the propensity to become PCR positive has not previously been regarded as a leading indicator of the degree to which a vaccine will protect a population against severe illness).
Available evidence suggests that herd immunity at a national level (in England) was attained as early as May. (Joel Smalley again.) No alternative explanation has been promulgated for the force which bore down on infections and deaths during the largely unmitigated spread of the virus early in Spring. As an example of evidence that we are at herd immunity, London is relatively peaceful in relation to the virus now, having been the national epicentre in Spring, with hundreds of deaths daily in the capital.
Government actions have been nothing but peculiar from the very beginning
In any other year, that would be the end of the tale. Neither the existence of prior immunity nor the fact that herd immunity can be reached without our noticing is new.
What was new was the belief that forcing citizens to run and hide from a respiratory virus with greater contagiousness than ‘flu was anything other than a fool’s errand. Acts of Parliament giving the executive a degree of power more suited to a war, and with it a budget 10 times larger than for any previous such emergency, were also deemed necessary, none of this being justified by the situation or by science. (See Jonathan Sumption make this point.)
We were invited to “Save the NHS” by not attending hospitals or seeing our doctors: soon both were heavily restricted and have remained so ever since. Most corrosively, broadcasters were and still are heavily constrained from free expression by innocent-sounding Ofcom guidelines.
I am of the view that the effect of these guidelines approximates censorship. When scientific debate is stifled, people die. Science requires the airing of opinions and debate to allow the evolution of ideas. Censorship has meant that nothing has been learnt, no model adjusted and errors compounded. The Government was told to expect a ‘second wave’, and a huge one at that. This was mystifying. Viruses don’t do waves, and no reason to expect an exception on a truly unprecedented scale has ever been forthcoming. I hasten to distinguish what I have termed a secondary ripple from what SAGE means by a ‘second wave’.
The secondary ripple term recognises that not everyone will have been infected by mid-summer. As an important aside, I’ve invited many to consider how long it takes for an influenza epidemic, which we experience most years, to criss-cross the country before apparently burning out, only to recur the next year. It recurs because influenza is one of the few respiratory viruses which mutates so quickly that, by the time a year has gone by, it’s sufficiently different from what our immune systems have seen before that it can wreak brief havoc upon us once again. The answer to that time question is variously given as three to four months.
I ask readers to consider how long it might be expected to take for a more contagious respiratory virus like SARS-CoV-2 to thoroughly criss-cross the country. It is hard to credit that it would take longer than four months. We know the virus was in the UK by February 2020 at the latest (potentially earlier), and so by June it’s not at all unlikely that it had travelled almost everywhere. It has been argued that perhaps lockdown was very effective and so many people will still be susceptible, as SAGE claims. We know that is not correct. Lockdown was started far too late to suppress the spread of the virus, as even Professor Whitty agreed in giving testimony to a parliamentary select committee in the summer. As he said, the lockdown began after the peak of infection – the outbreak was already in retreat by Mar 23rd.
Remember also that just because we were in ‘lockdown’ doesn’t mean much changed when it came to the transmission of the virus. Many people continued to go to work, other people still shopped almost every day, supply chains for all essential goods continued with few interruptions. Hospitals were open and, for the most part, extremely busy, as were care homes. The virus travelled along these routes and did not need to travel far, having reached every major urban centre before anyone even thought of locking us down or of any other measures. When lockdown was lifted, there wasn’t the slightest alteration in the long, slow decline in the number of daily deaths. Personally, I don’t think there’s any evidence that the spring lockdown achieved anything in terms of saving lives from SARS-CoV-2, but there is evidence it contributed to some deaths, including deaths from non-COVID-19 causes. Reflecting back months later, its main effect was to condition us to accept SAGE’s guidance, as this was followed by the Government and echoed by the media. This doesn’t mean locking people down is a sensible policy. The onus remains on its advocates to persuade us that it is, and I’m afraid they’ve not persuaded me.
So, no: there’s no good reason to think that large proportions of the nation were spared exposure to the virus as a result of the first lockdown. But it is true that some regions experienced fewer deaths in spring than others, and while some of this difference is almost certainly due to more extensive prior immunity, other regions probably were incompletely exposed. That’s what I mean by secondary ripple: as transmission was increased by cooler weather, a limited amount of disease did reappear. But this was always going to be local, self-limiting and under no circumstances a public health emergency for a city, let alone a nation. This secondary ripple started at the beginning of September and was over by the end of October. Symptom-tracking data, NHS triage data and notified disease data all support that hypothesis. After this ripple, immunity levels in the underexposed pockets of the country have been topped up to herd immunity levels. From now on, COVID-19 will be a feature of winter but will not be able to spread beyond small outbreaks.
No, what SAGE meant by a ‘second wave’ was a really big one, with twice as many deaths as in spring 2020. This is completely without precedent.
Planning for a ‘second wave’ might have led to its very creation
Viruses don’t do waves (beyond the secondary ripple concept as outlined above). I have repeatedly asked to see the trove of scientific papers used to predict a ‘second wave’ and to build a model to compute its likely size and timing. They have never been forthcoming. It’s almost as if there is no such foundational literature. I’m sure SAGE can put us right on this.
The post-WW1 “Spanish flu” appears to be all there is when it comes to evidence of waves. Most scholars accept that more than one infectious agent was most likely involved. It was 102 years ago, and no molecular biological techniques indicate multiple waves of a single agent then or anywhere else. In any case, that was influenza. There have been no examples of multiple waves since, and the most recent novel coronavirus with any real spread (SARS) produced a single wave in each geographical region affected. Why a model with a ‘second wave’ in it was even built, I cannot guess. It seems completely illogical to me. Worse, as far as the public can discern, the model fails to account for the unequivocally demonstrated prior immunity in the population, to which must be added the recently-acquired immunity arising from the spring wave. This is why I’m reasserting what I’ve been arguing for months – a ‘second wave’ cannot happen and must, perforce, not be happening as described.
Despite the absence of any evidence for a ‘second wave’ – and the evidence of absence of waves for this class of respiratory virus – there was an across-the-board, multi-media platform campaign designed to plant the idea of a ‘second wave’ in the minds of everyone. This ran continually for many weeks. It was successful: in a poll of GPs, almost 86% stated that they expected a ‘second wave’ this winter.
As research for this piece, I sought the earliest mention of a ‘second wave’. Profs Heneghan and Jefferson, on Apr 30th, noted that we were being warned to expect a ‘second wave’ and that the PM had, on Apr 27th, warned of one. The Professors cautioned anyone making confident predictions of a ‘second’ or ‘third wave’ that the historical record provides no support for doing so.
I looked for mentions by the BBC of a ‘second wave’. The following report was on June 24th and at least two of the three scientists interviewed were SAGE members. The strange thing though is that SAGE minutes (brought into the public domain by Simon Dolan’s judicial review) early in the year made no mention of a sizeable ‘second wave’. Not one. On February 10th, there was a mention of multiple waves for post-WW1 flu. On Mar 3rd and 6th, there is mention of a single SARS-CoV-2 wave with most (95%) of the impact early on. What looks to be the final document, Mar 29th, still just refers to one wave. This is what history and immunology teaches. So, what happened later in the year to alter the clearly held view of SAGE that the virus would manifest itself in a single wave? We need SAGE to tell us.
PCR is a powerful tool, but has weaknesses when used on an industrial scale
Despite this bothersome oddity about a ‘second wave’ and almost as if there was a plan for one, the PCR (polymerase chain reaction) testing infrastructure in the UK began to be reshaped.
PCR is a quite remarkable technique, which has unparalleled ability to find truly tiny quantities of a fragment of a genetic sequence, right down to the level of finding a single, broken fragment of a virus in a messy biological sample. There are notable limitations, well known to those who’ve personally used PCR in a research context. The most important one is its propensity to suffer from contamination, and the integrity of a PCR is very easily destroyed by invisible levels of contamination even in the hands of an expert, working alone and on a small handful of samples.
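To see why invisible contamination matters so much, consider the arithmetic of amplification. The sketch below assumes ideal doubling per cycle; real efficiency is somewhat lower, and the 40-cycle figure is merely typical of diagnostic use, not a statement of any particular lab’s protocol:

```python
# PCR (ideally) doubles the target each cycle: copies = start * 2**cycles.
# That exponential growth is precisely what makes one stray fragment
# dangerous.

def copies_after(start_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Template copies after a given number of cycles.
    efficiency=1.0 means perfect doubling each cycle."""
    return start_copies * (1.0 + efficiency) ** cycles

# One contaminating fragment, amplified for 40 cycles:
print(f"{copies_after(1, 40):.2e}")  # roughly a trillion copies (2**40)
```

A single fragment becomes roughly a trillion copies at 40 cycles, which is why contamination anywhere upstream of the reaction vessel can read as a ‘positive’.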
This is a good moment to mention that the PCR test protocol for SARS-CoV-2, which everyone in the world is now using, was invented in the lab of Prof Drosten in Berlin. The scientific paper in which the method was described was published in January 2020, two days after the manuscript was submitted. One of the authors of the paper is on the editorial board of the journal that published it. There is concern that this extremely important article, which contains a PCR test protocol that has been used to run hundreds of millions of PCR tests across the world, including the UK, was not peer-reviewed. No peer review report has been released, despite many requests to do so. Furthermore, as a method, it contains numerous technical weaknesses, some of which are serious and highly complex. Suffice to say that a very detailed dissection of the paper and of the Drosten protocol has been made by Drs Borger and Malhotra, experienced and concerned molecular biologists. A group of other medics and scientists (of which I am one) have put their names to a letter, which accompanies the dissection, to the whole editorial board of the journal, Eurosurveillance, demanding that the paper be retracted. This was submitted on Nov 26th.
In addition, the Portuguese high court determined two weeks ago that this PCR test is not a reliable way to determine the health status or infectiousness of citizens, nor to restrain their movements. Other countries are also receiving legal challenges, one being submitted earlier this week in Germany by Reiner Fuellmich, a lawyer who successfully sued VW in relation to diesel emissions (the YouTube video in which Fuellmich sets out the principal points of concern about the misuse of PCR has been removed). I am aware of other legal challenges being assembled in further countries, including Italy, Switzerland and South Africa. With the scientific validity of this test under severe challenge, I believe it must immediately be withdrawn from use.
There are deep concerns internationally about the reliability and selectivity of this PCR test protocol and this should be borne in mind through the rest of this article.
NHS labs ran PCR competently in spring
In spring, the relatively constrained amount of PCR testing was at least conducted independently by many experienced labs, and I am of the view that it was trustworthy, reaching more than adequate numbers of tests by the end of May (50k per day). Now it’s being run in newly-established, large, private labs, and most of their staff are far less experienced than those in the NHS labs. We have no idea why this has happened. Regardless of any concerns about testing capacity, the need was, and should have been expected to be, only of limited duration. Remember, viruses don’t do waves and we’d already been fully exposed to the virus. Of course, it was argued that “a second wave was coming”, so we’d need more capacity. But as I’ve already shown, the certainty of expectation of a ‘second wave’ was bizarre and unaccountable.
So why was PCR testing removed from NHS labs? One answer is because they didn’t have the capacity to cope with testing requirements for a ‘second wave’. But this is circular: it was simply impossible to claim with certainty that there’d be such a wave. Also, it’s not true that the NHS labs couldn’t cope. As a staff member there pointed out: “I want to know why the new super-labs have been set up, because if they gave the NHS labs the (consumables) resources they could easily do the tests. Our lab has been ready for ages to do large numbers of tests. We have the equipment and we have staff. We lack only the test kits and these are not available to any new labs, either.”
It wasn’t just NHS lab staff who were perturbed by the move. I’m quoting extensively from this article because it contains crucial information. The President of the Institute of Biomedical Sciences (IBMS), the leading professional body in the field of biomedical science, said:
It concerns me when I see significant investments being made in mass testing centres that are planning to conduct 75,000 of the 100,000 tests a day. These facilities would be a welcome resource and take pressure off the NHS if the issue around testing was one of capacity. However, we are clear that it is a global supply shortage holding biomedical scientists back, not a lack of capacity. The profession is now rightly concerned that introducing these mass testing centres may only serve to increase competition for what are already scarce supplies and that NHS testing numbers will fall if their laboratories are competing with the testing centres for COVID-19 testing kits and reagents in a ‘Wild West testing’ scenario. The UK must avoid this for the sake of patient safety. It is clear that two testing streams now exist: one delivered by highly qualified and experienced Health and Care Professions Council (HCPC) registered biomedical scientists working in heavily regulated United Kingdom Accreditation Services (UKAS) accredited laboratories, the other delivered mainly by volunteer unregistered staff in unaccredited laboratories that have been established within a few weeks. This has presented another key concern – in that we have not been involved in assuring the quality of the testing centres and are now being kept at arm’s length from their processes, even when they exist close to large NHS laboratories.
On proof-reading this article, I was struck by how powerful the case was for keeping things under the quality control of the NHS. What could the motives against this sensible plan possibly have been?
These testing facilities were presumably expected to be temporary. If so, why would it make sense to spend large sums of money and to displace equipment and consumables – the sole missing items when the Lighthouse super-labs were announced – instead of using existing, keen, accredited staff who knew what they were doing? Those new labs would be just as limited by consumables as the NHS labs.
We never really needed mass testing of those without symptoms
Arguably, we would never have been short on capacity if we had limited the testing to those with symptoms. The only reason one might even consider mass testing of those without symptoms is if you were convinced that those without symptoms were significant sources of transmission. This has always seemed to me to be a very tenuous assumption. Specifically, respiratory viruses are spread by droplets of secretions and generally the expulsion of these is linked to the symptoms of infection – coughing in particular. Humans have evolved over millions of years to recognise threats to health by close observation of the health status of others. It works well. We’re familiar with avoiding those with flu-like symptoms in winter and behaving responsibly by staying away from work and vulnerable people when we are symptomatic. The burden of proof rests with those claiming something very different in the case of SARS-CoV-2 to show conclusively that asymptomatic people are indeed major sources of transmission. I don’t think that case has at all been made. The medical literature on this is contradictory but almost all the papers claiming such transmission originated in China.
Consequently, there is simply no need to get into the business of mass testing the population. Indeed, as we will see, such mass testing brings with it, when using PCR as the method, a severe risk of what we call a “PCR false positive pseudo-epidemic”. This could never happen if we were not using PCR mass testing of the mostly well. So, for whatever reason and against all historical precedent and immunological reasoning, a major initiative was launched with the goal of reaching 500,000 tests a day by year’s end. Again, unaccountably, the Government didn’t just get on and build these new labs, working in parallel with the available NHS capabilities. Instead, responsibility for testing was swept out from 44 NHS labs, with skilled and accredited staff who’d already been running SARS-CoV-2 PCR. In their place, new labs were created, outside the help and control network of the Institute of Biomedical Sciences. These Lighthouse Labs are still not all fully accredited under UKAS to ISO 15189, a quality management system accreditation relating to medical laboratories.
There is a reliable test, fully-characterised and already validated with real-world use
At the end of October, the British Army was called in to help Liverpool City Council find the cases which the ONS PCR testing survey predicted should be there but which were no longer being found in the numbers expected. It was possible that people were no longer coming forward to be tested, though there is no way to be sure of this. Despite consent not having been sought from the parents of schoolchildren, and despite the absence of proper protocols and ethics review before the survey began, scores of thousands of people were tested using a lateral-flow test (LFT). (See here and here for more details on the LFT.) These look rather like the familiar pregnancy test kits you can purchase over the counter. They look similar because they use related, tried and trusted technology to detect virus proteins in the swab, not RNA. All tests have limits and weaknesses. However, the LFTs are not subject to the same flaws as PCR – specifically the risk of over-amplification and of cross-contamination before the test is actually run. The LFT has similar sensitivity and specificity in the lab to PCR. It is certainly capable of identifying the same proportion of those truly infected as PCR.
In brief, the army found very few people with positive LFT results: just over 0.3%, only slightly higher than the background operational false positive rate and in line with values expected when the tests are used in the real world. Since testing began, the positive rate has tended to a mean of 0.7%, which might mean a few people were positive. My own experience of reading around this area is that this (around 0.7%) is almost certainly the true false positive rate when, in the real world, careful but inexpert people administer the LFT. It meant that, in the city at the centre of the national hotspot for COVID-19, almost no one had the virus. This experiment has been repeated with 8,000 people in Merthyr Tydfil, 0.77% of whom tested positive. That these two test series have returned such similar values suggests that this is indeed the true, operational false positive rate for the LFT, though another test series will be helpful in refining that interpretation. Some leapt to criticise the LFT, as if it was its fault that it couldn’t find the expected cases. Of course, to many of us, the results were exactly what we’d expected, because we were by then sure that PCR was wildly over-reading. PCR has gone wrong before, and Occam’s razor indicated that this was by far the most likely explanation for the otherwise inexplicable failure of PCR “cases” to correlate with symptomatic disease. These are the kind of results expected in populations protected by herd immunity. They’re completely inconsistent with a city and town in the grip of a highly-infectious respiratory virus.
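The underlying point can be written down in one line: the share of tests coming back positive is a blend of true positives and false positives, and when almost nobody is infected, observed positivity collapses to the test’s operational false positive rate. A small sketch using the figures above as illustrative inputs (the prevalence and sensitivity values are assumptions for the sake of the example):

```python
# Expected share of tests returning positive, given true prevalence,
# test sensitivity, and operational false positive rate (FPR).

def observed_positivity(prevalence: float, sensitivity: float, fpr: float) -> float:
    return prevalence * sensitivity + (1.0 - prevalence) * fpr

# Near-zero prevalence with a ~0.7% operational FPR:
print(f"{observed_positivity(0.0005, 0.8, 0.007):.4f}")  # 0.0074
```

An observed positivity of about 0.74% is effectively indistinguishable from the 0.70-0.77% rates returned in Liverpool and Merthyr Tydfil, which is the sense in which those results are consistent with almost no one having the virus.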
To the Lighthouse
By September, the great bulk of PCR testing was being run by large, private labs, some of which are called Lighthouse Labs, and I’ll use this term as a coverall for all such labs. It was as September began that literally incredible things started to happen. Students returning to University towns were all required to submit to swabbing and PCR testing. We were then told there was an epidemic running through young people and it was just a matter of time before it reached the elderly and that would be that. The percentage of tests which were returning positive started skyrocketing, reaching in some towns values that were close to those in A&E at the peak of the pandemic in April. Strong linkage was observed between numbers of tests run and their positivity. This is most odd and can happen if the error rate increases with the pressure on the testing system.
Now, in late November, we are told there are sometimes 25,000 new “cases” daily and that several hundred daily “COVID-19 deaths” are occurring. How can this be happening if I’m right and the population has achieved herd immunity (as supported by large numbers of scientific papers detailing extensive T-cell immunity, as well as careful examination of the profile of deaths in spring vs recently, and the examination of patterns of deaths around the country recently as compared with spring)? It’s a conundrum.
As the number of daily PCR tests conducted began to climb very steeply, reaching 370,000 per day in mid-November, many of us have had the uncomfortable feeling that the chances of PCR testing on this scale returning accurate results are vanishingly small. Avoiding cross-contamination at such high throughput flies in the face of decades of relevant experience for some of us. In the classic triad of speed, throughput and quality, one of the three is always the limiting factor. In this case, my entire career experience tells me that the limiting factor is quality.
How can we square these claims of tens of thousands of daily “cases” and an unprecedented ‘second wave’ of deaths with this unfeasible quantity of testing, using a technique considered by bench experts difficult to perform reliably even on a small scale?
A PCR false positive pseudo-epidemic looks just like a real epidemic, but isn’t
It’s important to appreciate while digesting this counter-narrative which, unlike the official line, is at least internally consistent, that the only data suggesting a ‘second wave’ is upon us are PCR results. Everything is dependent on this. A “case” is a positive PCR test. No symptoms are involved. A “COVID-19 admission” to a hospital is a person testing positive by PCR before, on entry or at any time during a hospital stay, no matter the reason for the admission or the symptoms the patient is presenting. A “COVID-19 death” is any death within 28 days of a positive PCR test. If there is any doubt about the reliability of the PCR test, all of this falls away at a single stroke.
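Written out literally, the counting rule described above looks like this. This is a paraphrase of the definitions as stated in this article, not official code, and the dates are invented for illustration:

```python
from datetime import date
from typing import Optional

def is_counted_covid_death(death_date: date,
                           positive_test_date: Optional[date]) -> bool:
    """A death is recorded as a 'COVID-19 death' if it occurs within
    28 days of a positive PCR test, regardless of the actual cause."""
    if positive_test_date is None:
        return False
    return 0 <= (death_date - positive_test_date).days <= 28

# A death from any cause 27 days after a positive test is counted:
print(is_counted_covid_death(date(2020, 11, 28), date(2020, 11, 1)))  # True
# The same death 30 days after the test is not:
print(is_counted_covid_death(date(2020, 12, 1), date(2020, 11, 1)))   # False
```

Notice that nothing in the rule refers to symptoms or cause of death, which is exactly why everything hangs on the reliability of the test itself.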
I have to tell you that there is more than common-or-garden doubt about the PCR mass testing that purports to identify the virus. We have very strong evidence that the PCR mass testing as currently conducted is completely worthless.
At this point, it’s appropriate to give the game away and invite you to read the explanation that the team of which I’m part have assembled.
In brief: the pandemic was over by June and herd immunity was the main force which turned the pandemic and pressed it into retreat. In the autumn, the claimed “cases” are an artefact of a deranged testing system, which I explain in detail below. While there is some COVID-19 along the lines of the “secondary ripple” concept explained above, it has occurred primarily in regions, cities and districts that were less hard hit in the spring. Real COVID-19 is self-limiting and may already have peaked in some Northern towns. It will not return in force, and the example again is London. Even here, certain boroughs, e.g. Camden and Sutton, have had minimal positive test results. I’ve explained a number of times how this happened – the prominent role of prior immunity is often ignored or misunderstood. The extent of this was so large that, coupled with the uneven spread of infection, it needed only a low percentage of the population to be infected before herd immunity was reached.
That’s it. All the rest is a PCR false positive pseudo-epidemic. The cure, of course, as it has been in the past when PCR has replaced the pandemic itself as the menace in the land, is to stop PCR mass testing.
In case you’re still not convinced and think several hundred people are dying of COVID-19 each day, please watch this 10 min explainer video, created by data scientist Joel Smalley. By the end you will appreciate the difference between the reporting date of a death and the date on which it occurred, and how large that difference is for COVID-19 deaths, most of which occur in hospital, compared with non-COVID-19 deaths, many of which happen at home. This differential delay gives at any moment an impression of excess deaths which, once corrected for, collapses into nothing, or into a signal so small that it is not remotely a public health concern. It’s also important to be aware that, with the best of intentions, physicians are too quick to assign COVID-19 as the cause of death, partly because the death sometimes has the right kind of elements, but mostly because the rules require them to: any death within 28 days of a positive test has to be recorded as a COVID-19 death, no matter what the circumstances. The degree of misattribution is so large that the numbers of deaths from the top 10 leading causes have been pushed far below normal levels, which is highly suggestive of these deaths having been mislabelled. Do note that you should at this point expect some excess deaths, if from nothing else then from a number of people dying, mostly at home, of non-COVID-19 causes as a result of restricted access to healthcare for eight months.
I think the evidence is unequivocal that we are in a PCR false positive pseudo-epidemic
It’s happened before, with whooping cough (caused by a bacterium, but the technique for diagnosing the disease was the same: PCR). Hundreds of apparent “cases” were diagnosed at a hospital in New Hampshire using PCR, and physicians fitted the symptoms of various coughs and colds to what the “gold standard test” was telling them. In fact, not a single person had the disease. The positivity in the PCR test was around 15%, but no actual infection was found: 100% of the PCR positives were false. Unrealistically high positivity with no recent, independent confirmation of infection is now the situation in the UK.
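The New Hampshire episode is a textbook case of positive predictive value collapsing at low prevalence. Here is a sketch of the Bayes arithmetic; the sensitivity and specificity figures are illustrative, chosen only so that the false positive rate matches the roughly 15% positivity reported there:

```python
# Probability that a positive result reflects a true infection.

def positive_predictive_value(prevalence: float,
                              sensitivity: float,
                              specificity: float) -> float:
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    total = true_pos + false_pos
    return true_pos / total if total else 0.0

# Zero true prevalence: every positive is false, whatever the test's
# paper specification says.
print(positive_predictive_value(0.0, 0.95, 0.85))            # 0.0
# Even at 1% prevalence, a 15% false positive rate swamps the signal:
print(f"{positive_predictive_value(0.01, 0.95, 0.85):.2f}")  # 0.06
```

In other words, when the disease is absent, a 15% positivity rate and a 100% false positive rate are entirely compatible, which is exactly what the New Hampshire investigation found.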
To the Lighthouse (again)
How can this PCR false positive pseudo-epidemic be occurring? A false positive is simply a positive outcome of a test when the item sought was absent from the original sample (there are a variety of sources of false positives and they are often ignored or confused). Most false positives in PCR occur due to cross-contamination. This can occur if a sample containing the virus is even briefly in contact with a sample not containing the virus. Contamination can and does happen at any of the stages from sample acquisition all the way into the reaction vessel in which the cyclical amplification of PCR takes place. This contamination can include the reference material used to confirm that the test run is working – the so-called positive control – itself a piece of synthetic viral RNA. Such positive controls are potent sources of error, as they are an intensely concentrated supply of the very material sought in minuscule amounts by the test, right down to a single, broken fragment of virus. Another common source of contamination is the small number of samples which actually do contain the virus, which almost certainly continues to circulate at low levels and may already have become endemic (like the four common cold-inducing coronaviruses, OC43, HKU1, 229E and NL63).
It is my opinion, and I am not alone, that industrialised molecular-biology PCR mass testing is, and always was, unfeasible on the scale at which it is currently being conducted. With high speed and throughput, something has to give, and in this case it’s quality. Here are just a few of the reasons why you should no longer have any faith or confidence in the PCR testing in use in the UK. As the drive to industrialise the process proceeded, responsibility for PCR testing was mostly moved into one centralised set of facilities called Lighthouse Labs. I shall describe testimony (for Milton Keynes) and video evidence (Randox in Northern Ireland) which are concordant.
We have horrifyingly clear evidence that the work processes, staffing, lack of quality control and absence of external validation mean that this facility cannot work reliably or produce trustworthy testing results. I have spoken at length to the brave scientist who has blown the whistle on the Milton Keynes super-lab, Dr Julian Harris, one of the most experienced PCR lab scientists in the UK. He has been involved in high-biosecurity-level labs since 1987 and has operated PCR for decades. What’s been missed in the exposé is that his concerns are not only about health and safety (though these are important). Almost any building can be adapted to carry out a highly sensitive assay such as PCR while keeping contamination down to a minimum. The problem with the Milton Keynes site is the lack of thought that went into minimising the risk of contamination of the COVID-19 PCR assay. To this should be added the fact that they have no appropriate biosafety level 2 or contagion expertise on site (as clearly stated in the HSE reports that can be viewed at the foot of Julian Harris’s article for Lockdown Sceptics here). It is this that is a recipe for disaster in terms of the inflation of positive test results by the generation of false positives.
No one competent is inspecting these facilities, staff processes and results. The only person capable of looking from stem to stern who has actually done so is Dr Julian Harris, and he unequivocally condemns the operation. He highlighted overcrowded, bioinsecure workspaces, the absence of health and safety training, poor safety protocols and a lack of suitable PPE – such as the enforced wearing of paper visitor lab coats when handling swab samples in Class II BSCs. Was this to cut down on laundry expenses? Handwashing facilities were available but, as the HSE discovered, were often out of soap, sanitiser and towels, a consequence of personnel not knowing where to go to replenish these supplies. The Health and Safety Executive was called in (by Dr Harris). Management of the facility failed to answer requests to set up a visit, so eventually the HSE made unannounced visits in late September (see letters from the HSE at the base of Dr Harris’s piece). Their visits – most unusually, and an indication of the degree of concern felt, accompanied by an HM Inspector of Health and Safety – uncovered safety breaches at the Lighthouse Lab in Milton Keynes.
“I found they’ve got no experience with this sort of facility or handling bio-hazardous materials, and then they’re just launched into this activity,” Dr Harris says of the Milton Keynes team. Dr Harris was so troubled by what he saw that he contacted the Health and Safety Executive (HSE). He saw two people using biosecurity cabinets – enclosed, ventilated workspaces where scientists open the tubes containing the contaminated swabs – which were only calibrated to have protective airflow for one person. “Once you disrupt that [airflow] by overloading plus too much disruption of the veil nearest the operator, you might as well be working on an open bench. It just disrupts the whole reason for a cabinet to protect the operator. And it is really disturbing,” Dr Harris says. He alleges that the lab recruited local young people to work long shifts.
Dr Harris says he saw mobile phones being used in the labs and then taken to the canteen. The HSE visited the Milton Keynes lab and found five material breaches of health and safety legislation. A UK Biocentre manager admitted to the HSE that the training in place did not look “robust enough” for these new recruits. Dr Harris tells me that there was little or no Health and Safety training at all, despite the facility being rated BSL2.
It’s not only procedural issues in the labs that are concerning. With individual PCR tests, the scientist views the change in signal vs cycle and determines whether a test is positive, negative or indeterminate. In high throughput mode, this can only be done by software. Thus, the choice of provider is absolutely crucial to the accuracy and trustworthiness of the output, not only for an individual sample but also at a population level. For reasons not explained, the facility chose a software product which was apparently inferior to another. Why did the Lighthouse Lab choose an inferior product? In the example given, it ‘under-called’ positives but that doesn’t tell you that’s what it does now. What it does tell us is that it’s less reliable at ‘calling’ results. Surely the firm whose product performed better and had already passed regulatory standards would have been the better choice?
Underscoring their problems with staffing, the Lighthouse Lab did have a quality management system (QMS) specialist while Dr Harris worked there. However, that person resigned and, as far as I know, has not yet been replaced with someone of equivalent experience. This will undoubtedly have contributed to continuing failure to be UKAS accredited to ISO 15189, quality and competence in medical laboratories. While this can be seen as voluntary, the customer (Her Majesty’s Government) determines whether or not such accreditation is essential. Given there has never been a medical diagnostic test of such importance in the entire history of the nation, HMG must surely have specified ISO 15189 accreditation. If they have not, that is in my view a severe dereliction of duty. In any case, its absence does not in any way reduce the need to run these critical PCR tests to the highest standards and for the output to be trustworthy.
Separately, while accreditation would not of itself prove the quality and accuracy of the end product – the test results – the fact that the facility is still not accredited indicates a continuing failure to get to grips with the overlapping issues in the lab which directly pertain to end-to-end sample integrity.
This detailed recounting of evidence is not designed to be a teach-in on health and safety, important though that is. It is instead to demonstrate that neither management nor staff have the scrupulous attention to every detail required to ensure sample integrity from end to end – which is merely the starting point for any chance at all of successfully running this delicate and powerful technique, notoriously susceptible to cross-contamination of the smallest kind. Even where the integrity of the laminar airflow in the cabinets is preserved – simultaneously protecting operator and sample – it cannot compensate for the overloading of the working area and the clogging of the back grates, which endanger both sample integrity and personnel exposed to contagion.
Micro-pipetting (dispensing volumes ranging from 1ml down to 0.0005ml) relies on highly accurate pipetting devices, and their proper use is crucial in any application of molecular biology technology – PCR included. These micropipettors are used by personnel throughout the COVID-19 testing process. If misused, they can not only withdraw and dispense the wrong volume of sample into another receptacle, but can also contaminate test samples. As most staff had little to no PCR experience – and in many cases no experience of professional laboratory work at all – this would contribute to the inaccuracy of the end product: the COVID-19 test results. As a hallmark of how low the hiring bar has been set, the Milton Keynes facility has a staff member who carries out ‘pipette training’. Dr Harris commented that even this individual, who had come from a previous role stacking shelves in Tesco’s, had difficulty understanding the standard operating procedure used for the pipette training. Micropipetting is a fundamental skill usually learnt at the beginning of a scientific career; I’ve never heard of such a role anywhere before in 39 years of conducting and supervising laboratory work in the UK.
It is imperative that those performing liquid handling in a biofacility comprehensively understand how liquid biosamples can spread by droplets and aerosols and, most importantly, how they can inadvertently contaminate the sample(s) as well as expose the personnel to contagion. These skills must become second nature – acquired over many months to years – before anyone is allowed to set foot in such a biohazardous environment.
Finally, I asked Dr Harris when, in the sequence of steps, the ‘negative control’ samples were placed. The most vulnerable part of the task to cross-contamination is the bag opening to sample placement in the final, racked tubes, which are then placed into the automated workflow, finally dispensing sample for testing into the PCR plate. Therefore, I expected to be told that there were at least two negative control swab samples (unused with their own bar codes) that were included at this initial stage of the process. One should insert some unused tubes early on, so that, if there was cross-contamination, it would be detected in the final, PCR step.
But no. The sole, negative control that is used at Milton Keynes is virus-free medium, carefully placed into a designated well as part of the first stage of the automated liquid handling process, where simultaneously 0.2ml of each sample is transferred to a well of a 96-well plate, each well containing the virus inactivation buffer. But this bypasses the first steps where cross-contamination may occur – that is, during the initial processing of samples. That’s not only bad scientific technique but, in my view, bad scientific acumen. If I was teaching an undergraduate student, and they came up with this as an experimental design, I would fail them. It’s no wonder that the positivity rate – the percentage of tests which come up positive – is so high as to be literally unbelievable. I’m sure the Lighthouse Lab tells its client that there’s no evidence of cross-contamination, as the negative controls are consistently free of virus. Yet we drive our entire national policy on the strength of this?
There is a small group of large labs which were set up at speed to become “Lighthouse Labs” or “Superlabs”. A second one, the Randox facility in Antrim, Northern Ireland, has been the subject of a Channel 4 Dispatches programme. This detailed documentary centres on this very large, private contract lab, which tests over 100,000 COVID-19 samples per day using PCR. Watching this programme with the eye of someone experienced in lab procedures related to mass testing (though not this technique), I observed: workers cutting open plastic bags containing swab samples in tubes, some of which had leaked, then using the same scissors to open the next bag, and so on; tubes being wiped externally, but with the same wipe used to mop the outsides of several tubes in a row; tubes then placed on their sides in a tray, where they were free to roll around and touch other tubes; and workers keeping on the same pair of disposable gloves while opening a large number of such bags, one after another. A worker commented that just under 10% of tubes with red caps leaked. Randox stated that it didn’t make the tubes and that a fix was in progress.
Firstly, scissors or any other sharp instruments shouldn’t be used with biohazardous samples in BSL2/3/4 facilities. The exposure of the biosample contents to the air-conditioned room environment, plus the sample fluid contaminating cardboard boxes, is a recipe for disaster and could lead to:
Cross-contamination between samples
Cross-contamination between samples and personnel
Cross-contamination between sample and the room environment
Exposure of personnel to contagion of unknown origin(s)
A consultant microbiologist, who’d run an NHS pathology lab for years, commented for the film: “If you have a tube which has leaked and is in your unpacking environment, it’s then quite easy for that to get onto other tubes. If the leaked sample was positive, it would cause the other tubes to become positive. These are very sensitive tests we’re using and it’s very easy to get (contamination-related) false positives. We would be shut down if we performed that way”.
Taking Milton Keynes and Randox together, I contend that there was a policy decision to create an expectation in most people’s minds that a ‘second wave’ was coming, and that this would require increased testing capability. The poor sample handling evidenced at two of the resulting industrialised facilities (Milton Keynes, in the same building which houses the UK Biobank, and Randox, on a former military base) then actively created that ‘second wave’ – of misdiagnosed cases, admissions and deaths. I believe the unavoidable conclusion is that the mechanism whereby large numbers of “cases” were and still are being created is insidious, uncontrolled and undetected cross-contamination during the swab sample processing stages.
I have no doubt that those conducting the manual steps of pipetting are doing their best. But they do not have the skills and experience of this technique, which must be performed repetitively and for hours, while never creating a burst of micro-aerosol as they drive the thumb plunger on the pipette slightly too fast, or creating a micro-splash as they change the disposable tip. They must never contaminate a fingertip of a glove as they open a potentially leaking tube and then touch another. They must never disturb the laminar airflow in the hoods so as to facilitate invisible levels of contamination from one tube to another. There are so many ways in which miniature levels of contamination compromise sample integrity and increase the number of positives, and no one has taught them to avoid them all.
In these two PCR mass testing factories, among the largest, there is now strong evidence of completely inadequate effort to ensure that end-to-end sample integrity is maintained. These are, in my view, simulacra of proper testing facilities. Meanwhile, daily testing capacity has grown considerably, approaching the goal of conducting 500,000 tests by PCR daily.
Criticisms of PCR (again)
Even if the Lighthouse Labs did work from a technical perspective, the Government has admitted that PCR’s characteristics as a test are literally out of control. Lord Bethell confirmed in a written answer that the UK Government does not know the operational false positive rate (OFPR). While the Government claimed it could adopt as an estimate a range from prior related tests (0.8-2.3%), this is tendentious: those earlier tests were done by highly experienced lab scientists working at relatively small scale. Each PCR test has a unique false positive rate, dependent on the design of the test, and it cannot be deduced from other tests. The Lighthouse Labs are mostly staffed by young and inexperienced people, many of whom have never previously worked professionally in a lab. It is absurd to suggest that such inexperienced staff, operating an industrialised version of a technique so sensitive that cross-contamination is a routine problem even in research labs run by careful, knowledgeable scientists, could yield reliable, trustworthy results.
I maintain that lack of knowledge of the OFPR alone renders this PCR test in this configuration completely incapable of providing trustworthy results. If this was a diagnostic test in use in the NHS today, no physician would submit a patient sample to it, because it would be impossible to interpret a positive result. Of course, it is a diagnostic test in use today.
In summary, I argue that it is criminally dangerous to drive policy based in any way on this test (set up the way it is) and its results. No amount of argument or prevarication can alter these damning facts.
The entire ‘second wave’ is supported solely on the back of a flawed mass PCR test, which at industrialized scale was never, in my view and the views of others skilled in PCR, capable of delivering trustworthy results. I have detailed the evidence supporting the claim that the autumn PCR test results are not reliably detecting COVID-19 infection. It may seem a leap to damn the PCR test and claim that there isn’t an epidemic but a pseudo-epidemic. But even in the hands of skilled and careful people, the strange phenomenon of the PCR false positive pseudo-epidemic has occurred several times before. In large, industrialised labs, it is very likely that significant and unmeasured cross-contamination related false positive rates are occurring.
The key sign of a PCR false positive pseudo-epidemic is a paucity of excess deaths relative to the deaths claimed to be occurring as a result of the lethal infective agent. This key sign is present.
The unprecedented ‘second wave’ conundrum is solved. It’s of course not happening, but why a ‘second wave’ was talked up, months before unreliable PCR testing data was brought into service, demands deeper investigation. It’s not a science matter: not unless the team predicting the wave can produce the scientific literature upon which the prediction and modelling was based.
As a reference point, I spent over an hour consulting the owner-manager of a well-run facility in another country, which mainly serves private clients. This person only hires staff with at least four years’ experience of PCR itself – not merely of highly competent laboratory work – to do this kind of testing. In almost all cases these will be post-doctoral scientists who have already obtained a research-based PhD involving the use of PCR techniques.
Those who observe that PCR testing at scale elsewhere seems to run well tell us only that it can be done acceptably if it’s set up carefully. That’s assuming you can trust their results, something to which my research cannot extend. In any case, in no way does that observation undermine any of what I’ve written.
Until we end the use of PCR mass testing, there is no chance that “cases” will reduce to very low levels. Lateral flow tests must become the gold standard test for COVID with PCR only used for confirmatory diagnosis. This will minimise the number of PCR tests that need to be performed allowing testing to return to competent NHS laboratories. Without such an intervention, even if the virus stopped circulating, I believe we’ll still hear of tens of thousands of “cases” every day, and several hundred deaths.
As the above graph clearly shows, there was a notable peak of excess deaths due to SARS-CoV-2 in the spring, but it has not returned. As noted earlier, some excess deaths are now to be expected at very least as a consequence of prolonged and widespread restricted access to the NHS.
So, just one wave, as expected. The ‘second wave’ of “cases” and even “COVID-19 deaths” is an artefact of flawed testing.
This is an explanation of false positives in qRT-PCR: what they are and how they occur. qRT-PCR is the type of PCR used to test for COVID-19 and it stands for ‘Semi-Quantitative Reverse Transcriptase-Polymerase Chain Reaction’ (a bit of a mouthful, so sometimes people just say ‘The PCR test’). To unpack qRT-PCR and understand its use(s), we need to step back and think about genetic information and PCR in the round.
Your genes and how PCR amplifies tiny amounts of DNA
The genetic information of many species is made of DNA. This is true (as far as we know) for all bacteria, fungi, protozoa, plants, insects and vertebrates, including you. If you need to study the DNA from a small sample of one of these, you can amplify part of it using PCR (polymerase chain reaction – we’ll come to the ‘RT’ part in a minute). This is shown in Figure 1a. PCR is incredibly sensitive. It can start with as little as a single DNA molecule and quickly (in a couple of hours) amplify part of it to produce billions and billions of copies – enough to study in depth. PCR is used widely in research, clinically and in forensic medicine: genomic DNA in a tiny blood stain can be amplified by PCR so that investigators can combine it with other tests to show whether the blood came from Suspect A or Suspect B.
To tell (in general) if PCR has worked, the potential PCR products can be separated and visualised on a gel system. The gel tells you how long the PCR product is, which gives you an idea of whether you’ve amplified the intended target (which doesn’t always happen), and it contains a negative control to show that the PCR hasn’t amplified an unintended product, such as a contaminant. Both are very important and are considered below.
Today’s PCR is often measured by machines rather than gel systems. With a combination of light, detectors and clever fluorescent dye chemistry, it is possible for the machine to detect PCR amplification as it occurs, in ‘real time’. Real time PCR is sometimes referred to as qPCR, for semi-quantitative PCR. qPCR is immensely powerful, because it can tell you not only whether any amplifiable DNA was present in your starting material, but if all goes well, how much – it’s quantitative. However, it does not tell you what the amplified PCR product was; machines can be blind to the nature of the product. Without careful calibration, this can be a problem.
PCR only works on DNA
You may have noticed that the genetic information list above excluded viruses. Although the genomes of many viruses – like ‘flu – are also made of DNA, the genetic information of others is different: they use a related molecule called RNA. Coronaviruses (eg COVID-19) are in this second group: they use RNA as their genetic material. But PCR doesn’t work on RNA – it requires DNA. So the PCR of Figure 1a won’t work to amplify COVID-19 genetic material, which is made of RNA.
Fortunately, there is a work-around for this: we can first make a DNA copy of the RNA. This brings us to the ‘RT’ in ‘RT-PCR’. In RT-PCR, there is an initial step in which the enzyme reverse transcriptase (RT) uses RNA to make a DNA copy, and then PCR can use this copy as shown in Figure 1a. In certain circumstances, RT-PCR can be evaluated in a ‘real time’ way (qRT-PCR) using machines, as described above. This is a broadly adopted and powerful approach in research. It can tell molecular biologists how much of a certain type of RNA is present in a sample, for example whether a human biopsy contains an unhealthy level of cancer-associated RNA. A similar qRT-PCR approach is also taken to determine whether samples contain the COVID-19 virus genome. Given that much is at stake with this approach, it is probably wise to be aware of the challenges to, and limitations of qRT-PCR in general.
qRT-PCR as a double-edged sword
The amazing sensitivity of methods based on PCR is both their great strength and their potential downfall. Each PCR cycle doubles the amount of material, which may not sound impressive, but it really is. To illustrate this, imagine you were perched on top of the Big Ben tower (96 metres up) and it doubled in length every second. Within 22 seconds (22 doublings), you would be travelling faster than the speed of light (leaving aside Special Relativity). So if something goes wrong in the PCR, you quickly amplify an aberrant result to staggering proportions. Although it varies, researchers are typically interested in PCR amplification that becomes detectable within 25-35 cycles (doublings); the cycle at which this happens is the cycle threshold (Ct), discussed below. Let’s now turn to some of the issues in qRT-PCR that research laboratories are punctiliously careful to guard against.
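The doubling arithmetic above is easy to check for yourself. A minimal sketch, assuming ideal doubling each cycle (real reactions only approximate this):

```python
# Ideal PCR amplification: each cycle doubles the amount of product.
def copies_after(cycles: int, start: int = 1) -> int:
    """Copies of the amplicon after a given number of ideal doublings."""
    return start * 2 ** cycles

# One starting molecule exceeds a billion copies within 30 cycles:
print(copies_after(30))  # 1073741824

# The Big Ben illustration: 96 m doubling every second passes the
# speed of light (~3e8 m/s) on the 22nd doubling.
length_m, seconds = 96, 0
while length_m < 3e8:
    length_m *= 2
    seconds += 1
print(seconds)  # 22
```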
Specificity of the PCR primers
Primer specificity is critical, because you wish to amplify your sequence of interest, nothing else. Although represented here simplistically, this specificity has a lot to do with shape of the primer and it is, alas, not a binary thing. Figure 1b shows how primers can bind to regions of DNA that have a similar shape to the target, but are different. This binding is less efficient than binding to the bona fide, intended target but there are many more ‘target look-alikes’ than intended targets, so given enough time, primer binding to non-targets will occur. Primer binding is also affected by the chemical composition of the PCR reaction. In research laboratories, this can be carefully standardised, but it is more difficult to do so where samples come from different sources that are each unique, such as the nose or throat contents of people screened population-wide. Note that undesired primer binding events only have to occur twice, because the resulting PCR product then has primer sequences at each end that make a perfect match for subsequent cycles. If the PCR reaction has products of different sizes as it goes along, the shorter one tends to predominate, so if the undesired primer binding generates a short product, it will be amplified preferentially – but the PCR machine won’t tell you this. PCR reactions containing multiple primer pairs (‘multiplexing’) have a greater chance of producing undesired products because there are even more ‘target look-alikes’.
Reverse transcriptase: the ‘RT’ in qRT-PCR
Reverse transcriptase (RT) is a sensitive enzyme and goes ‘off’ easily, so it must be stored at a low temperature; good research labs validate it in every experiment. But suppose, on the other hand, that the reverse transcriptase is working and there is little RNA present in the sample but a lot of DNA. Can reverse transcriptase make a copy of DNA, perhaps rather than RNA? Yes, it can. So how much DNA do we each contain?
DNA and RNA in qRT-PCR samples
With some basic assumptions, the genetic material (DNA) in most human cell nuclei is 12.8 billion bases long. If you work out how much this represents in an entire person – you – it is truly amazing: there is enough DNA in each of us (more or less) to make 431 round trips to… the sun (from Droitwich). Even if 100 genomes-worth of COVID-19 virus RNA existed per cell and it was efficiently copied by reverse transcriptase, there would be over 4,000x more genomic DNA. Where possible, and in general, research laboratories therefore take careful steps to purify the RNA or remove DNA before beginning the qRT-PCR protocol. This doesn’t always work ideally, so there are important checks, such as running the reaction without the RT step; if you detect a product in this situation, you know something’s gone wrong, because PCR doesn’t work on RNA – the RT step should be critical. An additional complication is that healthy human cells contain as much as 5x more RNA than DNA. All of this means that human samples contain ample material for off-target amplification.
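The scale of that imbalance can be sketched with the figures quoted above, assuming a ~30,000-nucleotide viral genome (the 100-copies-per-cell number is the text’s deliberately generous assumption):

```python
# Rough arithmetic behind the DNA-vs-RNA comparison (illustrative values only).
genomic_dna_bases = 12.8e9      # bases of DNA per human cell nucleus
covid_genome_nt = 30_000        # approximate coronavirus genome length (nt)
viral_copies_per_cell = 100     # generous assumption from the text

viral_rna_nt = covid_genome_nt * viral_copies_per_cell
ratio = genomic_dna_bases / viral_rna_nt
print(f"genomic DNA exceeds viral RNA by ~{ratio:,.0f}x")  # ~4,267x
```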
Most research applications limit the number of qRT-PCR cycles. More technically speaking, this is determined by the threshold cycle number – or Ct – which is the PCR cycle number at which there is enough product to give a signal that is above the background level. In research, Ct values are often well under 30 and those much over 35 may merely reflect background levels. (It’s slightly more complicated because a PCR signal is more likely to be detected the more starting material there is, so research scientists determine the relative amount of product; we’ll skip this here, as it doesn’t affect false positives.) Increasing the cycle number also increases the chance of detecting non-specific primer binding. Critically, PCR machines do not distinguish between ‘false’ signals and those that come from intended targets during PCR; the machine measures product levels, not what the product is. Only with a gel system, sequencing or some other method, does the nature of the PCR product become clearer and these checks are used in research, particularly when setting up an experiment. Otherwise, the machine can happily register a product that has nothing to do with your intended target, and you will never know without additional checks.
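Because each cycle is a doubling, seemingly modest differences in Ct imply enormous differences in starting material. A sketch, again assuming ideal doubling:

```python
# Ideal fold-difference in starting template implied by two Ct values.
def fold_difference(ct_low: int, ct_high: int) -> int:
    return 2 ** (ct_high - ct_low)

# A sample detected at Ct 35 contains roughly a thousand times less
# starting material than one detected at Ct 25:
print(fold_difference(25, 35))  # 1024
```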
Contamination with amplifiable DNA
This is an extraordinary facet of the sensitivity of PCR and it plagues all laboratories that use it, especially ones that often amplify the same type of PCR product from different samples. The nature of PCR contamination can seem magical to beginners: somehow, a vanishingly tiny amount of product from last week’s PCR got into today’s. And this contamination can be caused by as little as a single molecule. Contamination is detected by negative controls, such as a PCR reaction where you do not expect any product, like setting everything up but leaving out the sample or the reverse transcriptase. But because contamination is such a major pain, by far the best approach is to avoid contamination in the first place, rather than detecting it after it happened. For this reason, special precautions are taken in most laboratories to avoid contamination, such as using aerosol barrier tips (which are expensive) for pipetting, frequently changing gloves, using dedicated materials and working in a regularly cleaned and dedicated area. This is expensive. If there is contamination, experiments have to stop until it is rooted out. This is time-consuming, frustrating, can take days and can also be costly. Solutions and enzymes have to be tested and discarded if they are suspected of being a source. Pipettes and machines can also be the source of contamination and have to be purged if this turns out to be the case. Contamination is sometimes blatant, such as when all of the PCR reactions give products although only some (or none) were expected to. But it can be subtle because the amount of contaminating material is lower; in these cases, only some PCR reactions produce a product due to contamination. These are difficult to detect and require trouble-shooting experience as they can, of course, critically alter the interpretation of the data.
This is a technical summary stripped of as much jargon as possible. As it relates to COVID-19, it doesn’t cover so-called ‘cold’ positives, in which virus RNA (including RNA fragments) is present in samples that do not contain viable or infectious virus and yet may still give a positive signal. But it should highlight that although qRT-PCR is immensely powerful in research, its potential pitfalls require punctilious safeguards. In research, each experiment is performed with independent samples on at least two occasions – a minimal requirement for publication in respected journals. Interpreting both positive and negative qRT-PCR results requires experience that is most abundant among molecular biologists working on eukaryotic systems, and one wonders to what extent they have been called upon to advise on COVID-19 testing. There are few technical grounds on which to be confident that qRT-PCR is readily scalable, but doubts about its clinical application could be met squarely, whilst respecting patient anonymity, by complete, contemporaneous and auditable transparency.
Figure 1. Polymerase chain reaction, PCR. a, The amount of PCR primers is non-limiting and the primers usually match their target perfectly. The same principle underlies qRT-PCR. b, In an ideal world, each primer is specific for its intended target, but in reality primers can also match ‘target look-alikes’. Although these don’t amplify as well as the intended targets, there are many more look-alikes than intended targets, and binding to them on only a couple of occasions may be sufficient to produce an unintended (‘false’) positive.
The author is a research scientist working on eukaryotic molecular biology who has a PhD in microbial pathogenicity and has been using RT-PCR for over thirty years.
“When the facts change, I change my mind. What do you do sir?” – John Maynard Keynes
The UK has a big problem with the false positive rate (FPR) of its COVID-19 tests. The authorities acknowledge no FPR, so positive test results are not corrected for false positives and that is a big problem.
The standard COVID-19 RT-PCR test results have a consistent positive rate of ≤ 2% which also appears to be the likely false positive rate (FPR), rendering the number of official ‘cases’ virtually meaningless. The likely low virus prevalence (~0.02%) is consistent with as few as 1% of the 6,100+ Brits now testing positive each week in the wider community (pillar 2) tests actually having the disease.
We are now asked to believe that a random, probably asymptomatic member of the public is 5x more likely to test ‘positive’ than someone tested in hospital, which seems preposterous given that ~40% of diagnosed infections originated in hospitals.
The high amplification of PCR tests means results are interpreted by black-box software algorithms, which the numbers suggest are preset at a 2% positive rate. If so, we will never get ‘cases’ down until and unless we reduce, or better yet cease altogether, randomised testing. Instead the government plans to ramp testing up to 10m a day at a cost of £100bn, equivalent to the entire NHS budget.
Government interventions have seriously negative political, economic and health implications yet are entirely predicated on test results that are almost entirely false. Despite the prevalence of virus in the UK having fallen to about 2-in-10,000, the chances of testing ‘positive’ stubbornly remain ~100x higher than that.
First do no harm
It may surprise you to know that in medicine, a positive test result does not often, or even usually, mean that an asymptomatic patient has the disease. The lower the prevalence of a disease relative to the false positive rate (FPR) of the test, the more inaccurate the results of the test will be. Consequently, it is often advisable to avoid random testing in the absence of corroborating symptoms – for certain types of cancer, for example – and doubly so if the treatment has non-trivial negative side-effects. In ‘Probabilistic Reasoning in Clinical Medicine’, his chapter in Judgment under Uncertainty (1982), edited by Nobel laureate Daniel Kahneman and his long-time collaborator Amos Tversky, David Eddy provided physicians with the following diagnostic puzzle. Women, aged 40, participate in routine screening for breast cancer, which has a prevalence of 1%. The mammogram test has a false negative rate of 20% and a false positive rate of 10%. What is the probability that a woman with a positive test actually has breast cancer? The correct answer in this case is 7.5% but 95/100 doctors in the study gave answers in the range 70-80%, i.e. their estimates were out by an order of magnitude. [The solution: in each batch of 100,000 tests, 800 (80% of the 1,000 women with breast cancer) will be picked up; but so too will 9,900 (10% FPR) of the 99,000 healthy women. Therefore, the chance of actually being positive (800) if tested positive (800 + 9,900 = 10,700) is only 7.48% (800/10,700).]
In the section on conditional probability in their new book Radical Uncertainty, Mervyn King and John Kay quote a similar study by psychologist Gerd Gigerenzer of the Max Planck Institute, author of Reckoning with Risk, who illustrated medical experts’ statistical innumeracy with the Haemoccult test for colorectal cancer, a disease with an incidence of 0.3%. The test had a false negative rate of 50% and a false positive rate of 3%. Gigerenzer and co-author Ulrich Hoffrage asked 48 experienced (average 14 years) doctors what the probability was that someone testing positive actually had colorectal cancer. The correct answer in this case is around 5%. However, about half the doctors estimated the probability at either 50% or 47%, i.e. the sensitivity or the sensitivity less the false positive rate respectively. [The solution: from 100,000 test subjects, the test would correctly identify only half of the 300 who had cancer but would also falsely identify as positive 2,991 (3%) of the 99,700 healthy subjects. This time the chance of being positive if tested positive (150 + 2,991 = 3,141) is 4.78% (150/3,141).]

As Gigerenzer concluded in a subsequent paper in 2003, “many doctors have trouble distinguishing between the sensitivity, the specificity, and the positive predictive value (probability that a positive test is a true positive) of a test – three conditional probabilities.” Because doctors and patients alike are inclined to believe that almost all ‘positive’ tests indicate the presence of disease, Gigerenzer argues that randomised screening for low-incidence diseases is too poorly understood and too inaccurate, and can prove harmful where interventions have non-trivial negative side-effects. Yet this straightforward lesson in medical statistics from the 1990s has been all but forgotten in the COVID-19 panic of 2020.
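Both solutions are the same Bayes-theorem calculation. A minimal sketch in Python (the function name `ppv` and the argument names are mine, not from either study):

```python
def ppv(prevalence: float, sensitivity: float, fpr: float) -> float:
    """Positive predictive value: P(disease | positive test)."""
    true_pos = prevalence * sensitivity        # diseased and detected
    false_pos = (1 - prevalence) * fpr         # healthy but flagged anyway
    return true_pos / (true_pos + false_pos)

# Eddy's mammogram puzzle: 1% prevalence, 80% sensitivity, 10% FPR
print(round(ppv(0.01, 0.80, 0.10), 3))   # 0.075 -> 7.5%, not 70-80%

# Gigerenzer's Haemoccult example: 0.3% incidence, 50% sensitivity, 3% FPR
print(round(ppv(0.003, 0.50, 0.03), 3))  # 0.048 -> ~5%
```

The same function applies unchanged to any screening test, which is the article’s point: when prevalence falls far below the FPR, the PPV collapses.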
Whilst false negatives might be the major concern when a disease is rife, when incidence is low – as with the specific cancers above or the COVID-19 PCR test – the overriding problem is the false positive rate (FPR). There have been 17.6m cumulative RT-PCR (antigen) tests in the UK, 350k (2%) of which gave positive results. Westminster assumes this means the prevalence of COVID-19 is about 2%, but that conclusion is predicated on the tests being 100% accurate which, as we will see below, is not the case at all.
Positives ≠ cases
One clue is that this 2% positive rate crops up worryingly consistently, even though the vast majority of those tested nowadays are not in hospital, unlike in the early days. For example, from the 520k pillar 2 (community) tests in the fortnight around the end of May, there were 10.5k positives (2%); in the week ending June 24th there were 4k positives from 160k pillar 2 tests (2.5%); and last week about 6k of the 300k pillar 2 tests (2% again) were also ‘positive’.

There are two big problems with this. First, medically speaking, a positive test result is not a ‘case’. A ‘case’ is by definition symptomatic and must be diagnosed by a doctor, but few of the pillar 2 positives report any symptoms at all and almost none are seen by doctors. Second, NHS diagnosis, hospital admission and death data have all declined consistently since the peak, by over 99% in the case of deaths, suggesting it is the ‘positive’ test data that have been corrupted. The challenge therefore is to deduce what proportion of the reported ‘positives’ actually have the disease (i.e. what is the FPR?). Bear in mind two things. First, the software that comes with the PCR testing machines states that these machines are not to be used for diagnostics (only screening). Second, the positive test rate can never be lower than the FPR.
Is UK prevalence now 0.02%?
The epidemiological rule-of-thumb for novel viruses is that medical cases can be assumed to be about 10x deaths and infections 10x cases. Note too that by medical cases what is meant is symptomatic hospitalisations, not asymptomatic ‘positive’ RT-PCR test results. With no reported FPR with which to adjust reported test positives, but with deaths now averaging 7 per day in the UK, we can work backwards to an estimate of 70 daily symptomatic ‘cases’. This we can roughly corroborate with NHS diagnoses, which average 40 per day in England (let’s say 45 for the UK as a whole). The factor-10 rule-of-thumb therefore implies 450-700 new daily infections. UK government figures differ from the NHS’s: daily hospital admissions are now 84, after peaking in early April at 3,356 (-97.5%). Since the infection period lasts 22-23 days, the official death and diagnosis data indicate roughly 10-18k current active infections in the UK, 90% of whom feel just fine. Even the 20k daily pillar 1 (in hospital) tests only result in about 80 (0.4%) positives, 40 diagnoses and 20 admissions. Crucially, all these data are an order of magnitude lower than the positive test data and imply a virus prevalence of 0.015%-0.025% (average 0.02%), which is far too low for randomised testing with anything less than a 100% perfect test; and the RT-PCR test is certainly less than 100% perfect.
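The chain of estimates above is simple arithmetic; a sketch using the article’s own figures (a 10x multiplier per step and a 22-23 day infectious period; the ~66.8m UK population is my assumption):

```python
# Rule-of-thumb chain: deaths -> symptomatic cases -> infections
daily_deaths = 7
daily_cases = daily_deaths * 10                 # ~70 symptomatic 'cases'
daily_infections_low = 45 * 10                  # from ~45 NHS diagnoses/day
daily_infections_high = daily_cases * 10        # from the deaths-based estimate

# Active infections = new daily infections x infectious period (22-23 days)
active_low = daily_infections_low * 22          # 9,900
active_high = daily_infections_high * 23        # 16,100

uk_population = 66_800_000                      # assumption: ~2020 UK population
print(f"{active_low / uk_population:.3%}")      # ~0.015%
print(f"{active_high / uk_population:.3%}")     # ~0.024%
```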
Only 1% of ‘positives’ are positive
So, how do we reconcile an apparent prevalence of around 0.02% with a consistent positive PCR test rate of around 2%, some 100x higher? Because of the low prevalence of the disease, the reported UK pillar 2 positive rate and the FPR are both about 2%, meaning almost all ‘positive’ test results are false, with an overall error rate of 99:1 (99 errors for each correct answer). In other words, for each 100,000 people tested, we are picking up 24 of the 25 (96%) true positives but also falsely identifying 2,000 (2%) of the 99,975 healthy people as positive too. Not only do < 1.2% (24/2,024) of pillar 2 ‘positives’ really have COVID-19 (and only ~0.1% of all ‘positives’ would be medically defined as symptomatic ‘cases’), but this 2% FPR also explains the ~2% (2.02% in this case) positive rate so consistently observed in the official UK data.
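The 99:1 arithmetic can be checked with whole-number counts per 100,000 tests, taking the article’s figures (25 infections at the top-of-range 0.025% prevalence, 96% of them detected, ~2% FPR):

```python
tested = 100_000
infected = 25                      # 0.025% prevalence
true_pos = 24                      # 96% of infections detected
false_pos = 2_000                  # ~2% FPR applied to the 99,975 healthy

all_pos = true_pos + false_pos                 # 2,024 'positives'
print(f"{true_pos / all_pos:.1%}")             # 1.2% of positives are genuine
print(f"{all_pos / tested:.2%}")               # 2.02% observed positive rate
```

Note the second figure: however low prevalence falls, the observed positive rate is floored at roughly the FPR.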
The priority now: FPR
This illustrates just how much the FPR matters and how seriously compromised the official data are without it. Carl Mayers, Technical Capability Leader at the Ministry of Defence Science and Technology Laboratory (Dstl) at Porton Down, is just one government scientist who is understandably worried about the undisclosed FPR. Mayers and his co-author Kate Baker submitted a paper at the start of June to the UK Government’s Scientific Advisory Group for Emergencies (SAGE) noting that the RT-PCR assays used for testing in the UK had been verified by Public Health England (PHE) “and show over 95% sensitivity and specificity” (i.e. a sub-5% false positive rate) in idealized laboratory conditions but that “we have been unable to find any data on the operational false positive rate” (their bold) and “this must be measured as a priority” (my bold). Yet SAGE minutes from the following day’s meeting reveal this paper was not even discussed.
According to Mayers, an establishment insider, PHE is aware the COVID-19 PCR test false positive rate (FPR) may be as high as 5%, even in idealized ‘analytical’ laboratory environments. Out in the real world though, ‘operational’ false positives are often at least twice as likely to occur: via contamination of equipment (poor manufacturing) or reagents (poor handling), during sampling (poor execution), ‘aerosolization’ during swab extraction (poor luck), cross-reaction with other genetic material during DNA amplification (poor design specification), and contamination of the DNA target (poor lab protocol), all of which are aggravating factors additional to any problems inherent in the analytic sensitivity of the test process itself, which is far less binary than the policymakers seem to believe. As if this wasn’t bad enough, over-amplification of viral samples (i.e. a cycle threshold ‘Ct’ > 30) causes old cases to test positive, at least six weeks after recovery, when people are no longer infectious and the virus in their system is no longer remotely viable, leading Jason Leitch, Scotland’s National Clinical Director, to call the current PCR test ‘a bit rubbish’.
The RT-PCR swab test looks for the existence of viral RNA in infected people. Reverse Transcription (RT) is where viral RNA is converted into DNA, which is then amplified (doubling each cycle) in a polymerase chain reaction (PCR). A primer is used to select the specific DNA and PCR works on the assumption that only the desired DNA will be duplicated and detected. Whilst each repeat cycle increases the likelihood of detecting viral DNA, it also increases the chances that broken bits of DNA, contaminating DNA or merely similar DNA may be duplicated as well, which increases the chances that any DNA match found is not from the Covid viral sequence.
Amplification makes it easier to discover virus DNA but too much amplification makes it too easy. In Europe the amplification, or ‘cycle threshold’ (Ct), is limited to 30Ct, i.e. doubling 30x (2 to the power of 30 ≈ 1 billion copies). It has been known since April that even apparently heavy viral load cases “with Ct above 33-34 using our RT-PCR system are not contagious and can thus be discharged from hospital care or strict confinement for non-hospitalized patients.” A review of 25 related papers by Carl Heneghan at the Centre for Evidence-Based Medicine (CEBM) has also concluded that any positive result above 30Ct is essentially non-viable even in lab cultures (i.e. in the absence of any functional immune system), let alone in humans. However, in the US an amplification of 40Ct is common (~1.1 trillion copies) and in the UK, COVID-19 RT-PCR tests are amplified by up to 42Ct. This is 2 to the power of 42 (i.e. ~4.4 trillion copies), which is 2 to the power of 12, or about 4,100x, the ‘safe’ screening limit. The higher the amplification, the more likely you are to get a ‘positive’ but the more likely it is that this positive will be false. True positives can be confirmed by genetic sequencing, for example at the Sanger Institute, but this check is not made, or at least if it is, the data are unreported.
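Under the idealised assumption that every cycle exactly doubles the product, the copy numbers above are just powers of two:

```python
def copies(ct: int) -> int:
    # ideal PCR: each cycle doubles the amplicon count
    return 2 ** ct

print(f"{copies(30):,}")          # 1,073,741,824 (~1 billion)
print(f"{copies(40):,}")          # 1,099,511,627,776 (~1.1 trillion)
print(f"{copies(42):,}")          # 4,398,046,511,104 (~4.4 trillion)
print(copies(42) // copies(30))   # 4096: 42Ct amplifies 2^12 times more than 30Ct
```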
The sliding scale
Whatever else you may previously have thought about the PCR COVID-19 test, it should be clear by now that it is far from fully accurate, objective or binary. Positive results are not black or white but on a sliding scale of grey. This means labs are required to decide, somewhat subjectively, where to draw the line, because ultimately, if you ran enough cycles, every single sample would eventually turn positive due to amplification, viral breakdown and contamination. As Marianne Jakobsen of Odense University Hospital Denmark puts it on UgenTec’s website: “there is a real risk of errors if you simply accept cycler software calls at face value. You either need to add a time-consuming manual review step, or adopt intelligent software.”
Adjusting Ct test results
Most labs therefore run software to adjust positive results (i.e. decide the threshold) closer to some sort of ‘expected’ rate. However, as we painfully discovered with Prof. Neil Ferguson’s spectacularly inaccurate epidemiological model (expected UK deaths 510,000; actual deaths 41,537), if the model disagrees with reality, some modellers prefer to adjust reality, not their model. Software companies are no exception and one of them, diagnostics.ai, is taking another, UgenTec (which won the no-contest bid for setting and interpreting the Lighthouse Labs thresholds), to the High Court on September 23rd, apparently claiming UgenTec had no track record, external quality assurance (EQA) or experience in this field. Whilst this case may prove no more than sour grapes on diagnostics.ai’s part, it does show that PCR test result interpretation, whether done by human or computer, is ultimately subjective and as such will always effectively bury the FPR.
Increase tests, increase ‘cases’
So, is it the software that is setting the UK positive rate at ≤ 2%? Because if it is, we will never get the positive rate below 2% until we cease testing asymptomatics. Last week (ending August 26th) there were 6,122 positives from 316,909 pillar 2 tests (1.93%), as in the week of July 22nd (1.9%). Pillar 2 tests deliver a (suspiciously) stable proportion of positive results, consistently averaging ≤ 2%. As Carl Heneghan at the CEBM in Oxford has explained, the increase in the absolute number of pillar 2 positives is nothing more than a function of increased testing, not increased disease as erroneously reported in the media. Heneghan shows that whilst pillar 1 cases per 100,000 tests have been steadily declining for months, pillar 2 cases per 100,000 tests are “flatlining” (at around 2%).
30,000 under house arrest
In the week ending August 26th, there were 1.45m tests processed in the UK across all four pillars, though there seem to be no published results for the 1m of these tests that were pillar 3 (antibody) or pillar 4 (“national surveillance”) tests (NB none of the UK numbers ever seem to match up). But as far as pillar 1 (hospital) cases are concerned, these have fallen by about 90% since the start of June, so almost all positive cases now reported in the UK (> 92% of the total) come from the largely asymptomatic pillar 2 tests in the wider community. Whilst pillar 2 tests were originally intended only for the symptomatic (doctor referral etc.), the facilities have been swamped with asymptomatics wanting testing, and their numbers are only increasing (+25% over the last two weeks alone), perhaps because there are now very few symptomatics out there. The proportion of pillar 2 tests that are taken by asymptomatics is yet another figure that is not published, but there are 320k pillar 2 tests per week, whilst the weekly rate of COVID-19 diagnoses by NHS England is just 280. Even assuming Brits are total hypochondriacs and only 1% of those reporting respiratory symptoms to their doctor (who sends them out to get a pillar 2 test) end up diagnosed, that still means well over 90% of all pillar 2 tests are taken by the asymptomatic; and asymptomatics taking PCR tests when the FPR is 100x higher than the prevalence, as here, produces positives that are ~99% false.

Believing six impossible things before breakfast
Whilst the positive rate for pillar 2 is consistently ~2% (with that suspiciously low degree of variability), it is more than possible that the raw data FPR is 5-10% (consistent with the numbers that Carl Mayers referred to) and the only reason we don’t see such high numbers is that the software is adjusting the positive threshold back down to 2%. If that is the case, however, then no matter what the true prevalence of the disease, the positive count will always and forever be stuck at ~2% of the number of tests. The only way to ‘eradicate’ COVID-19 in that case would be to cease randomised testing altogether, which Gerd Gigerenzer might tell you wouldn’t be a bad idea at all. Instead, lamentably, the UK government is reportedly doubling down with its ill-informed ‘Operation Moonshot’, an epically misguided plan to increase testing to 10m a day, which would obviously mean almost exclusively asymptomatics, and which we can therefore confidently expect to generate an apparent surge in positive ‘cases’ to 200,000 a day, equivalent to the FPR and proportionate to the increase in the number of tests.
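If the observed positive rate really is dominated by the FPR, expected positives scale with the number of tests, not with the amount of disease. A sketch using the article’s working assumptions (2% FPR, 0.02% prevalence; the perfect sensitivity is my generous simplification):

```python
FPR = 0.02                 # ~2% false positive rate
PREVALENCE = 0.0002        # ~0.02% true prevalence
SENSITIVITY = 1.0          # generous assumption: every infection detected

def expected_positives(tests: int) -> float:
    true_pos = tests * PREVALENCE * SENSITIVITY
    false_pos = tests * (1 - PREVALENCE) * FPR
    return true_pos + false_pos

print(round(expected_positives(300_000)))      # ~6,060, close to the ~6,100 observed
print(round(expected_positives(10_000_000)))   # ~202,000 a day under 'Moonshot'
```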
Emperor’s new clothes
Interestingly, though not in a good way, the positive rate seems to differ markedly depending on whether we are talking about pillar 1 tests (mainly NHS labs) or pillar 2 tests, mainly managed by Deloitte (weird but true), which gave the software contract to UgenTec and which between them set the ~2% positive thresholds for the Lighthouse Lab network. This has had the quirky result that a gullible British public is now expected to believe that people in hospital are 4-5x less likely to test positive (0.45%) than fairly randomly selected, largely asymptomatic members of the general public (~2%), despite 40% of transmissions being nosocomial (hospital-acquired). The positive rate, it seems, is not just suspiciously stable but subject to worrying lab-by-lab idiosyncrasies pre-set by management consultants, not doctors. It is little wonder no one is willing to reveal what the FPR is, since there’s a good chance nobody really knows any longer; but that is absolutely no excuse for implying it is zero.
Wave Two or wave goodbye?
The implications of the overt discrepancy between the trajectories of UK positive tests (up) and diagnoses, hospital admissions and deaths (all down) need to be explained. Positives bottomed below 550 per day on July 8th and have since trebled to 1,500+ per day. Yet over the same period (shifted forward 12 days to reflect the lag between hospitalisation and death), daily deaths have dropped, also by a factor of three, from 22 to 7, as indeed have admissions, from 62 to 20 (compare the right-hand side of the upper and lower panels in the chart below). The much more likely explanation is that positive but asymptomatic tests are false positives. The Vivaldi 1 study of all UK care home residents found that 81% of positives were asymptomatic, which, for this most vulnerable cohort, probably means false positives.
This almost tenfold discrepancy between positive test results and the true incidence of the disease also shows up in the NHS data for August 9th (the most recent available), which show daily diagnoses (40) and hospital admissions (33) in England way below the Gov.UK positive ‘cases’ (1,351) and admissions (53) data for the same day. Wards are empty and admissions are so low that I know of at least one hospital (Taunton in Somerset, for example) which discharged its last COVID-19 patient three weeks ago and hasn’t had a single admission since. Thus the most likely reason < 3% (40/1,351) of positive ‘cases’ are confirmed by diagnosis is the ~2% FPR. Hence the FPR needs to be expressly reported and incorporated into an explicit adjustment of the positive data before even more harm is done.
Oxford University’s Sunetra Gupta believes it is entirely possible that the effective herd immunity threshold (HIT) has already been reached, especially given that there hasn’t been a genuine second wave anywhere. The only measure suggesting higher prevalence than 0.025% is the positive test rate but this data is corrupted by the FPR. The very low prevalence of the disease means that the most rational explanation for almost all the positives (2%), at least in the wider community, is the 2% FPR. This benign conclusion is further supported by the ‘case’ fatality rate (CFR), which has declined 40-fold: from 19% of all ‘cases’ at the mid-April peak to just 0.45% of all ‘positives’ now. The official line is that we are getting better at treating the disease and/or it is only healthy young people getting it now; but surely the far simpler explanation is the mathematically supported one that we are wrongly assuming, against all the evidence, that the PCR test results are 100% accurate.
Fear and confusion
Deaths and hospitalisations have always provided a far truer, and harder to misrepresent, profile of the progress of the disease. Happily, hospital wards are empty and deaths had already all but disappeared off the bottom of the chart (lower panel, in the chart above) as long ago as mid-to-late July, implying the infection was all but gone as long ago as mid-June. So why are UK businesses still facing restrictions and enduring localised lockdowns and 10pm curfews (Glasgow, Bury, Bolton and Caerphilly)? Why are Brits forced to wear masks, subjected to traveller quarantines and, if randomly tested positive, forced into self-isolation along with their friends and families? Why has the UK government listened to the histrionics of discredited self-publicists like Neil Ferguson (who vaingloriously and quite sickeningly claims to have ‘saved’ 3.1m lives) rather than the calm, quiet and sage interpretations offered by Oxford University’s Sunetra Gupta, Cambridge University’s Sir David Spiegelhalter, the CEBM’s Carl Heneghan or Porton Down’s Carl Mayers? Let’s be clear: it certainly has nothing to do with ‘the science’ (if by science we mean ‘math’); but it has a lot to do with a generally poor grasp of statistics in Westminster; and even more to do with political interference and overreach.
Bad Math II
As an important aside, it appears that the whole global lockdown fiasco might have been caused by another elementary mathematical mistake from the start. The case fatality rate (CFR) is not to be confused with the infection fatality rate (IFR), which is usually 10x smaller. This is epidemiology 101. The epidemiological rule-of-thumb mentioned above is that (mild and therefore unreported) infections can be initially assumed to be approximately 10x cases (hospital admissions) which are in turn about 10x deaths. The initial WHO and CDC guidance following Wuhan back in February was that COVID-19 could be expected to have the same 0.1% CFR as flu. The mistake was that 0.1% was flu’s IFR, not its CFR. Somehow, within days, Congress was then informed on March 11th that the estimated mortality for the novel coronavirus was 10x that of flu and days after that, the lockdowns started.
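The mix-up compounds into an order-of-magnitude error by construction, since the rule of thumb is infections ≈ 10x cases, hence IFR ≈ CFR/10. A sketch (the flu and COVID rates are the article’s figures):

```python
def ifr_from_cfr(cfr: float) -> float:
    # rule of thumb: infections ~ 10x cases, so the IFR is ~10x smaller
    return cfr / 10

flu_ifr = 0.001        # flu's 0.1% is an *infection* fatality rate
covid_cfr = 0.0138     # the 1.38% CFR estimate for China cited in the article

# Correct like-for-like comparison (IFR vs IFR):
print(round(ifr_from_cfr(covid_cfr) / flu_ifr, 2))   # 1.38x flu

# The mistake: comparing COVID's CFR directly against flu's IFR:
print(round(covid_cfr / flu_ifr, 1))                 # 13.8x flu, ten times too scary
```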
Neil Ferguson: Covid’s Matthew Hopkins
This slip-of-the-tongue error was, naturally enough, copied, compounded and legitimised by the notorious Prof. Neil Ferguson, who referenced a paper from March 13th he had co-authored with Verity et al., which took “the CFR in China of 1.38% (to) obtain an overall IFR estimate for China of 0.66%”. Not three days later, his ICL team’s infamous March 16th paper further bumped up “the IFR estimates from Verity et al… to account for a non-uniform attack rate giving an overall IFR of 0.9%.” Just like magic, the IFR implied by his own CFR estimate of 1.38% had, without cause, justification or excuse, risen 6.5-fold from his peers’ rule-of-thumb of 0.14% to 0.9%, which incidentally meant his mortality forecast would be similarly multiplied. Not satisfied with that, he then also exaggerated the terminal herd immunity threshold.
Because Ferguson’s model simplistically assumed no natural immunity (there is) and that all socialisation is homogeneous (it isn’t), it doesn’t anticipate herd immunity until 81% of the population has been infected. All the evidence since as far back as February and the Diamond Princess has indicated that effective herd immunity occurs at around a 20-25% infection rate; but the modellers have still not updated their models with any of the real-world data and I don’t suppose they ever will. This is also why these models continue to report an R of ≥ 1.0 (growth) when the data, at least on hospital admissions and deaths, suggest the R has been 0.3-0.6 (steadily declining) since March. Compound all these errors and Ferguson’s expected UK death toll of 510k has proved to be 12x too high. His forecast of 2.2m US deaths has also, thankfully but no thanks to him, been 11x too high. The residual problem is that the politicians still believe this is merely Armageddon postponed, not Armageddon averted. “Cowards die many times before their deaths; the valiant never taste of death but once” (Shakespeare).
It is wholly standard to insist on external quality assurance (EQA) for any test but none has been provided here. Indeed, all information is held back on a need-to-know rather than a free-society basis. The UK carried out 1.45m tests last week but published the results for only 452k of them. No pillar 3 (antibody) test results have been published at all, which raises the question: why not (official reason – the data have been anonymised, as if that makes any sense)? The problem is that instead of addressing the FPR, the authorities act as if it is zero, and so assume relatively high virus prevalence. If, however, the 2% positive rate is merely a reflection of the FPR, a likely explanation for why pillar 3 results remain unpublished might be that they counterintuitively show a decline in antibody positives. Yet this is only to be expected if the prevalence is both very low and declining. T-cells retain the information to make antibodies but if there is no call for them, because people are no longer coming into contact with infections, antibodies present in the bloodstream decline. Why there are no published data on pillar 4 (‘national surveillance’) PCR tests remains a mystery.
It’s not difficult
However, it is relatively straightforward to resolve the FPR issue. The Sanger Institute is gene sequencing positive results but will fail to achieve this with any false positives, so publishing the proportion of failed sequencing samples would go a long way to answering the FPR question. Alternatively, we could subject positive PCR tests to a protein test for confirmation. Lab contaminated and/or previously-infected-now-recovered samples would not be able to generate these proteins like a live virus would, so once again, the proportion of positive tests absent protein would give us a reliable indication of the FPR.
Scared to death
The National Bureau of Economic Research (NBER) has filtered four facts from the international COVID-19 experience: that the growth in daily deaths declines to zero within 25-30 days; that daily deaths then decline; that this profile is ubiquitous; and that it is so consistent that governmental non-pharmaceutical interventions (NPIs) appear to have made little or no difference. The UK government needs to understand that neither assuming that ‘cases’ are growing, without at least first discounting the possibility that what is observed is merely a property of the FPR, nor ordering anti-liberal NPIs, is in any way ‘following the science’. Even a quite simple understanding of statistics indicates that positive test results must be parsed through the filter of the relevant FPR. Fortunately, we can estimate the FPR from what little raw data the government has given us but, worryingly, this estimate suggests that ~99% of all positive tests are ‘false’. Meanwhile, increased deaths from drug and alcohol abuse during lockdowns; the inevitable increase in depression and suicide once job losses after furlough and business and marriage failures after loan forbearance become manifest; and, most seriously, the missed cancer diagnoses from the 2.1m screenings that have been delayed must all be balanced against a government response to COVID-19 that looks increasingly out of all proportion to the hard evidence. The unacknowledged FPR is taking lives, so establishing the FPR, and therefore accurate numbers for the true community prevalence of the virus, is absolutely essential.
James Ferguson is the Founding Partner of MacroStrategy
The Office for National Statistics has admitted that in its Covid infection survey it has been reporting PCR tests as positive when only a single coronavirus gene is detected, despite this being contrary to the instructions of the manufacturer that two or more target genes must be found before a positive result can be declared.
According to a rapid response in the BMJ this week by Dr Martin Neil, a professor of statistics at Queen Mary University of London, targeting only a single gene in this way massively increases the risk of a false positive because of the possibility of cross-reactivity with other coronaviruses as well as prevalent bacteria or other contamination.
Digging into the detail of the methods followed by the lighthouse laboratories which process the tests for the ONS, Professor Neil writes:
The kit used by the Glasgow and Milton Keynes lighthouse laboratories is the ThermoFisher TaqPath RT-PCR, which tests for the presence of three target genes from SARS-CoV-2. Despite Corman et al originating the use of PCR testing for SARS-CoV-2 genes, there is no agreed international standard for SARS-CoV-2 testing. Instead, the World Health Organisation (WHO) leaves it up to the manufacturer to determine what genes to use and instructs end users to adhere to the manufacturer instructions for use.
The WHO’s emergency use assessment for the ThermoFisher TaqPath kit includes the instruction manual and contained therein is an interpretation algorithm describing an unequivocal requirement that two or more target genes be detected before a positive result can be declared. The latest revision of ThermoFisher’s instruction manual contains the same algorithm. The WHO have been sufficiently concerned about correct use of RT-PCR kits that on January 20th 2021 they issued a notice for PCR users imploring them to review manufacturer instructions for use carefully and adhere to them fully.
The ONS’s report of December 5th 2020 lists SARS-CoV-2 positive results for valid two and three target gene combinations and the report of December 21st does the same, for samples processed by the Glasgow and Milton Keynes lighthouse laboratories. However, it also lists single gene detections as positive results.
Between a quarter and two thirds of positive results were affected, Professor Neil found.
Over the period reported, the maximum weekly percentage of positives on a single gene for the whole of the UK was 38%, in the week of February 1st. The overall UK average was 23%. The maximum regional percentage reported was 65%, in the East of England in the week beginning October 5th. In Wales the maximum was 50%, in Northern Ireland 55% and in Scotland 56%. The full data including averages and maxima/minima are given in .
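The manufacturer's interpretation rule described above can be sketched as a simple function. The gene names (ORF1ab, N and S) are the published targets of the TaqPath kit; the logic here is a simplification of the kit's full interpretation algorithm, which also handles controls and failed reactions.

```python
# Sketch of the "two or more of three target genes" calling rule, as an
# illustration of the discrepancy discussed above. Simplified: a real
# implementation would also check internal controls and invalid reactions.
TARGETS = {"ORF1ab", "N", "S"}

def call_result(detected_genes):
    """Return 'positive' only if two or more target genes were detected."""
    hits = TARGETS & set(detected_genes)
    return "positive" if len(hits) >= 2 else "negative"

print(call_result(["ORF1ab", "N"]))  # → positive
print(call_result(["N"]))            # → negative (single-gene detection)
```

Under this rule, the single-gene detections the ONS counted as positive would instead be called negative.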
Although the non-compliant practice was clearly indicated in the ONS reports and confirmed in correspondence, it was denied by key figures when writing in the press.
Professor Alan McNally, Director of the University of Birmingham Turnkey laboratory, who helped set up the Milton Keynes lighthouse laboratory, contradicted what was stated in the ONS report in a Guardian newspaper article about the new variant. He reported that all lighthouse laboratories operated a policy that adhered to the manufacturer instructions for use: requiring two-or-more genes for positive detection.
In correspondence with Mr Nicholas Lewis about single gene testing, in February 2021, the ONS confirmed that they do indeed call single gene targets as positives in their COVID-19 Infection Survey and also confirmed that the samples are processed by UK lighthouse laboratories.
Is this one reason the ONS consistently reports higher Covid infections than the ZOE Covid Symptom Study, which tracks symptomatic Covid? In its latest report, published today, the ONS estimates 192,300 people had Covid in the UK in the week ending March 13th, whereas ZOE estimates 109,400 people had symptomatic Covid in the middle of that week – little more than half the number.
Across Europe, including in the UK, we see the following:
Daily ‘cases’ sky-rocketed in Europe as Autumn arrived.
Daily deaths labelled as ‘Covid deaths’ rose in line with ‘cases’ – to levels apparently higher than at the Spring peak.
BUT: Total all-cause mortality does not reflect the above.
What is behind this conundrum?
The central thesis of this paper is that we have a major problem with PCR-testing.
This is distorting policy and creating the illusion that we are in a serious pandemic when in fact we are not.
This is causing:
Excess deaths due to restricted access to the NHS.
An NHS staffing crisis which is exacerbating matters.
Unprecedented assaults on civil liberties and the economy.
What we need to do about this:
Stop mass-testing using PCR in the UK and replace with Lateral Flow Tests where required.
Other recommendations as detailed later in this document.
It should be noted that legal cases and technical challenges to PCR mass-testing are growing across Europe, including in the UK.
NB: The info contained in this paper is merely illustrative of some of the key issues and does not represent the totality of the evidence available.
The current problem is that both Covid cases and Covid deaths are being overstated massively due to confused definitions and poor measurements. Claims of Covid deaths should be backed up with evidence that they were really caused by Covid. From the data, it seems reasonable to assume that at least 10% were not caused by Covid. They have been misdiagnosed due to faulty testing – we explain our reasoning below, point 5 being the most important. Misdiagnosis has resulted from false positive laboratory tests.
An epidemic is defined as the wide spread of an infectious disease. The final letter of Covid is ‘D’ for ‘disease’. A disease requires symptoms. Public Health England’s National COVID-19 case definition required the presence of symptoms. Somehow symptoms have become irrelevant. We are now chasing down the healthy, immune population who are being over-tested. This includes those in hospital with other symptoms and for other reasons. If we tested for influenza in the same way and with the same implications, we would have to lock down every winter.
A false positive pseudo-epidemic is a well described phenomenon in the medical literature which results in an exponential rise in diagnosed cases and deaths but no excess deaths. PCR testing is renowned for it and the “second wave” of Swine Flu in 2009 was entirely a false positive pseudo-epidemic only stopped by stopping the testing. SAGE have been focused on the constant low false positive rate of the testing equipment but the false positive rate of the whole testing process is variable and can rise.
Evidence of one laboratory with a low operational false positive rate gives no indication of what the operational false positive rate is in a different laboratory or at a different point in time.
A good lay description of a false positive pseudo-epidemic can be found here.
Fundamental flaws of the current UK approach
1) The definition of Covid deaths is too broad. Currently, any death with an incidental positive Covid test within the prior 28 days is counted as a Covid death. Moreover, even deaths from the misdiagnosis of respiratory failure will result in Covid being recorded as the primary cause. There is no post mortem evidence that these are Covid deaths. There is no equivalent rise in death certificates mentioning pneumonia, as was seen in the spring. Accident and Emergency attendances for acute respiratory infections are currently 300 per day lower than average.
2) The tests are not measuring the disease. It is nonsense to rely simply on positive test cases without requiring the presence of symptoms to define the scale of the epidemic. However, if positive tests are to be used, determination of the test accuracy rate is absolutely essential (especially the false positive rate). These should be independently determined. This work must be current to assess the current rates. Those defending the tests claim they have been quality checked by the use of “whole genome sequencing” – but that test has never been used as a diagnostic test in this way either, so it’s like using one unvalidated process to validate another.
(The final page of this briefing provides some more background on why even an apparently low FPR can be so misleading.)
Symptom trackers, NHS triage data and GP consultation data all show that patients attending with Covid-like conditions are back to background levels; yet positive test results continued to rise during this time. This strongly implies that the test results are largely picking up false positives.
3) Excess deaths are not all Covid deaths. Pandemics can cause excess deaths. Lockdowns can also cause excess deaths and we saw excess deaths from many causes in Spring. There appears to have been no attempt to analyse these deaths or factor them into current decision-making processes. Normal interactions with the health service have still not resumed and excess deaths in the 15-44 year-old age group have climbed steadily throughout the year. These are nearly all non-Covid deaths.
4) There is an NHS staffing crisis caused by false positive test results. NHS staff, including ambulance staff, and care home staff are all being tested and made to isolate merely on the basis of a single positive test, even when asymptomatic, despite the evidence on spread from asymptomatic subjects being equivocal at best. This is causing a staffing crisis in the NHS which will undoubtedly result in patients dying.
5) The only confirmatory testing carried out has shown no Covid. Army testing in Liverpool uses a different and more reliable test – the 'Lateral Flow Test' (LFT). It has demonstrated that there is minimal Covid in the Liverpool community, the alleged hotspot. The numbers testing positive are barely above the false positive rate reported for the LFT, meaning there were no real Covid cases found. In other words, the Army results confirm that at least 90% of the PCR positives were false, and the government is panicking on the basis of a massively exaggerated and unreliable statistic.
Numerous well-conducted studies have demonstrated that LFT tests have a sensitivity of about 80% (ie they identify four out of five 'true' cases). LFT tests are particularly good at identifying patients during the infective period, which is, after all, what testing is attempting to achieve. However, when contradictions are seen between LFT and PCR results, many people rush to the accusation that LFT testing is missing cases. In fact, LFT testing is highlighting the failings of PCR testing. An 80% sensitivity rate means not every case is found, but it is actually the same sensitivity rate as seen with PCR testing.
6) Weak criteria used to declare a positive will result in false positives. Testing in May by Imperial College showed that only half of the positive results from commercial laboratories were true positives, and they questioned the criteria these laboratories were using to define a positive.
7) The results from PCR testing no longer fit reality.
ONS random population sampling, using PCR testing, predicted that 4300 in 100,000 of the population of Salford (4.3%) had COVID on 11th November (16 times higher than their national estimate for 27th April to 10th May). Typically, during a pandemic, the general prevalence in the entire population is substantially less than 1%.
Finally, positivity rates from PCR testing started to become strongly correlated with the volume of testing carried out after the middle of July.
The below graph shows the data for London aggregated. But the pattern is the same for all boroughs in London. After mid-July, the best indicator of whether a test would be positive was the volume of testing carried out – not where the person lives.
This cannot be to do with biology and is entirely artefactual. Readers might well ask – what happened in mid-July?
The answer is that from that date, the volume of Pillar 2 (community) testing was ramped up hugely whilst the volume of Pillar 1 (in hospital / PHE) testing remained constant.
Pillar 2 testing – conducted by private labs at high volume – involves the use of DIY kits, self-sampling, collections in car parks and samples sent by post, creating huge potential for contamination and other operational problems.
This is incontrovertible proof that the rise in cases is purely a result of problems with testing. The data is currently being analysed for the rest of England and is consistent with London.
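The kind of check described above – asking whether daily positives track daily test volume – is straightforward to run. The sketch below uses synthetic data (the volumes, the 1% rate and the 90-day window are all assumptions for illustration, not real figures) to show the signature the author describes: if positives were driven purely by a roughly constant false positive rate, they would correlate almost perfectly with test volume.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only: hypothetical daily test volumes over 90 days,
# with positives generated purely by a constant 1% false positive rate.
daily_tests = rng.integers(20_000, 80_000, size=90)
fpr = 0.01
daily_positives = rng.binomial(daily_tests, fpr)

# Under this null model, positives track volume almost perfectly.
r = np.corrcoef(daily_tests, daily_positives)[0, 1]
print(f"correlation between volume and positives: {r:.2f}")  # close to 1
```

Real prevalence-driven positives would depend on where and when infection was circulating, not on how many tests happened to be processed, so a near-perfect volume correlation is the fingerprint of an artefact.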
The PCR-based system routinely reports positivity values exceeding the absurd, and these are accepted without comment. This has been allowed to happen because of the assumption that PCR is high science and cannot be far wrong. Nothing could be further from the truth.
The following are the most important actions to be taken immediately:
Stop mass testing asymptomatic individuals. The case for mass testing the population rests on the flawed and questionable assumption that transmission of infection from those with no symptoms is an important contribution to epidemic spread. Regardless of arguments about this, it is clear we are no longer in an epidemic and therefore there is no justification for mass testing, no matter the method involved.
Re-test a sample of hospitalised patients previously diagnosed (by PCR alone) using Lateral Flow Tests. Even if LFT misses some cases, the majority should still test positive. If the numbers testing positive are low, this confirms we have a problem with the PCR test.
If PCR is to continue to be used at all, it should be at low scale and on no account should this involve the high-capacity facilities known as Lighthouse Labs. The stretch for capacity is wholly responsible for the PCR false positive pseudo-epidemic. It is the view of those with the greatest hands-on experience of PCR that such facilities cannot yield results of the precision now required.
Even conducted at lower scale, it is vital that additional quality control measures for PCR testing are instituted, including (but not limited to) running ‘end to end’ negative controls containing other viruses and other human DNA through the entire test system, in order to quantify any false positive results due to laboratory issues (akin to ‘secret shopper’ checking). Additionally, PCR testing must revert to high stringency, with a requirement for all three genes to be positive – as used in Spring.
Hospital/NHS and care-home staff absence policies should be based primarily on LFT testing, using PCR testing for confirmation of positive tests only.
The definition of ‘an outbreak’ must require subjects to have positive LFT results with confirmatory testing using high-quality PCR tests conducted in a well-managed laboratory, combined with symptoms and evidence of direct contact.
Overturn the Ofcom ban on free speech in broadcast media. Earlier in 2020 Ofcom issued a series of innocent-sounding Notes to Broadcasters in relation to reporting on coronavirus matters. These require broadcasters to adhere to certain lines consistent with Government policy, and they have had the effect of suppressing much by way of alternative viewpoints. As we are no longer in a public health emergency, the justification for quashing alternative scientific voices has gone, and only by hearing all views can a valid scientific consensus be reached.
Appendix – why low False Positive Rates can still be problematic
So what is the Operational FPR for Covid testing?
The Government has not published it and has admitted it does not know what it is, hence it’s not possible to say what fraction of the tests are genuine. All of them? Half of them? None of them? Any reasonable person would regard it as an affront to permit continued deployment of this test.
It would be negligent for a physician to perform a diagnostic test on their patient without knowing what the results meant, yet by their own admission, the Government does not know what their results mean. It is tendentious to claim that an operational FPR obtained in a different test (not even the same test) can be adopted. It cannot. This yawning gap must be closed and, in the meantime, the test discontinued.
In a SAGE paper from June, it was stated that for other RNA viruses the OFPR ranged from 0.8% to 4.0%. For Covid testing, a figure of less than 1% has been suggested. We know that the quality of the testing lab can dramatically affect the FPR, and we know that there have been significant quality issues as volumes were ramped up faster than has ever been done before. So, 1% is most likely a significant under-estimate of the current situation.
As for how many people have it, the ONS say it is currently around 1.2% – but this is simply not credible, when in late April / early May they claimed that only 0.27% had it, and the Liverpool mass testing found only 0.6% (around the same as the operational FPR for the ‘Lateral Flow Test’ which would mean there was no Covid in Liverpool).
Using a figure of 2% for the FPR, a 20% false negative rate, and incidence in the population of 0.5%, around 83% of all positive test results would be misdiagnoses.
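The 83% figure can be checked directly. Per 10,000 people tested with the stated parameters:

```python
# Reproducing the arithmetic above, per 10,000 people tested.
population = 10_000
prevalence = 0.005       # 0.5% incidence
sensitivity = 0.80       # i.e. a 20% false negative rate
fpr = 0.02               # 2% false positive rate

true_cases = population * prevalence               # 50 infected people
true_positives = true_cases * sensitivity          # 40 correctly detected
false_positives = (population - true_cases) * fpr  # 199 false alarms
share_false = false_positives / (true_positives + false_positives)
print(round(100 * share_false, 1))  # → 83.3
```

So of 239 positive results, 199 would be false: roughly 83%, as stated.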
A Plea to MPs From Mike Yeadon: “Don’t Vote For Lockdown”
Below is a guest post by Dr Mike Yeadon, in which he urges MPs not to vote for a second lockdown.
Dear Sirs and Madams,
I am an independent scientist with over 30 years' experience leading research into new medicines, having risen to Vice President and Head of Respiratory Research at Pfizer, a US pharmaceutical company, and been founder and CEO of Ziarco Ltd, a biotechnology company sold to Novartis in 2017.
As an independent I am less constrained than academics and commercial persons. However, I have applied the same rigour to analysing the pandemic since March as with any of my former projects.
I am certain the pandemic is over and was over before the end of June.
There was a clear peak of excess deaths in spring. COVID-19 clearly caused many deaths, mostly of the elderly and already ill.
Turning to late summer and into the autumn: despite exaggerated claims that there is an ongoing full-blown pandemic, there are still FEWER respiratory deaths than in the same periods in all five of the years since 2015. The chart below shows monthly deaths with any respiratory primary diagnosis, including COVID-19.
There is a small and potentially growing all-causes excess mortality signal. I am working with a pathologist and our evaluation so far shows that these excess deaths are inconsistent with COVID-19. In short, they are not dying from respiratory illness, but from heart failure, from cerebrovascular accidents such as stroke, and from diabetes. An awful realisation I have is that these excess deaths are just the sort you would expect if you take a mixed population, deprive them of easy access to the healthcare system for seven months and keep them stressed.
Looking at data obtained from contacts within the NHS, we do not have hospitals full of respiratory patients to any greater extent than usual for November. There are always hotspots and we know Liverpool is one such today. Again, the evidence is against this being due to COVID-19. And to repeat, we have not had excess respiratory deaths since the spring event itself. Liverpool and other cities and towns nearby have additional capacity and ‘surge capacity’, if required. The NHS as a whole is not in crisis and there is nothing to suggest it is about to be. I also checked with a colleague regarding intensive care beds. While an increasing number of their occupants have tested positive for COVID-19, intensive care beds are at exactly normal loadings for the time of year, i.e. 82%. I believe those COVID-19 diagnoses are mostly or all incorrect. We have tested well over 30,000,000 people. It wouldn’t be surprising if lots of people get a false diagnosis from a PCR test.
Antibody prevalence in the blood of those surveyed periodically is falling steadily and has been since its peak in the spring, when the virus was moving very fast through the population, infecting perhaps hundreds of thousands of people per day. The fall in antibodies was last week wrongly touted as problematic, suggesting immunity was fading. That is the wrong interpretation. The human body does not maintain high levels of antibodies which are not needed. Consequently, a steady fall in the prevalence of antibodies is a clear signal that people are no longer encountering the virus. I believe that insofar as it is still present, it has become endemic at low levels and represents no threat to the health of the nation.
As someone experienced at reading into adjacent areas of science – which I have done times without number since obtaining my PhD in respiratory pharmacology in 1988 – I was always confident that the population would speedily attain 'community immunity'. This is what I believe has happened, as detailed in my article "What SAGE has got wrong".
In my view – probably because SAGE lacked cellular and clinical immunologist expertise earlier this year and at no time during this event has it seconded a pathologist or an expert generalist such as myself – they’ve made a series of terrible errors which continue to infect policy to this very day. If such experts had been consulted, our advice would have made a huge difference, not least to the starting assumptions which are widely criticised as outlandish in the scientific community. In addition, we could have “sense checked” some of the more perplexingly unlikely predictions, such as 4,000 deaths per day.
The most fundamental error SAGE has made was to ignore all evidence of the very existence of prior immunity in the population on the spurious grounds that this was a novel virus. This virus is in fact related to four common-cold producing coronaviruses in general circulation and it has been shown unequivocally that a sizeable proportion of the peoples of at least Europe and North America possess T-cells that provide them with some protection against both endemic and novel viruses.
This virus is a serious threat to a low proportion of the elderly, especially if they are already ill. This description of the most vulnerable accounts for the vast majority of Covid deaths and the median age of those who’ve died of COVID-19 is slightly older than the median age of those who died of all other causes. However, the majority even of this elderly group survive infection. Overall, the lethality of the virus is now known to be very close to typical seasonal influenza. Notably, in relation to risks to the working population, the lethality of the virus in those aged 60 and younger is actually less than seasonal flu.
By using several sets of data I have been able to estimate the proportion of the UK population who have been infected. If you add them to the estimated proportion of the population that had prior immunity, and take account of the fact that young children do not often participate in transmission or become very ill, it is clear that there are far too few susceptible people remaining in the UK to support an expanding infection, as has been suggested. Instead, the evidence is strong from practical, theoretical and observational standpoints that the nation as a whole, and probably most if not all regions of the UK, are already protected by community immunity, as described by many world-leading academic epidemiologists in the UK.
I heard with disbelief suggestions that surviving infection might not lead to immunity, or that immunity might only last a few months. Let me assure you, we have known for scores of years that surviving simple respiratory viruses – those which are neither immuno-toxic like HIV nor change their appearance yearly like flu – leads as a rule, not an exception, to long-lived and robust T-cell mediated immunity. Antibodies may play a role but they are not central. That this ordinary virus has become a global media event is simply not justified by its profile.
I have been active on Twitter rather a lot in recent months. I would suggest that the people of the UK are now highly suspicious of what is claimed to be happening. Many is the time people have in exasperation said: "This just doesn't make any sense." Indeed, what we are being told (that there is a full-blown pandemic still underway) does not make sense and, while I have no idea why it is being said, it is doubtless incorrect. Ordinary people know that each season's flu takes perhaps three-to-four months to pass through the whole population. Knowing that SARS-CoV-2 is more infectious, they know that it would take the same or less time to pass through the UK population, not more. Indeed, we know it was in the UK by February. Adding a generous four months takes us to June, by which time all clinical signs of COVID-19 had disappeared (ignoring PCR test results, of which more in a moment). The rise and fall of Covid deaths in the UK follows exactly the same curve as that of other highly seeded/infected countries such as Sweden. There is no doubt that we are in the same position as Sweden and it is only the monstrously error-prone and untrustworthy PCR test that suggests otherwise. What SAGE claims is happening is immunologically implausible in light of other data, specifically the shape of the death-versus-time curve, which shows beyond all reasonable doubt that the pandemic was self-extinguishing.
The PCR testing machinery is, at best, greatly in error and completely misleading. I have good knowledge of mass testing systems. I have always been deeply worried about the polymerase chain reaction (PCR), not only because of its power to find a single molecule as small as a broken fragment of viral RNA and amplify it, sometimes by two to the power of 40, through repeated cycling, but also because it can find something that is not there – it can yield a 'positive' result even though the virus is not present. The greater the amplification, the higher the number of tests being done each day – and the lower the expertise of the staff doing them – the higher the probability of error. I was the person who, with a radio journalist, finally pressed Mr Hancock to disclose the false positive rate of the Pillar 2 test, when it was still processing far fewer tests per day than now. Having established that false positives exist, it is important to know that the rate of these can be small yet, when the prevalence of the virus is low, many or even all the positive results are false. That's a practical debate for another time.
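The scale of "two to the power of 40" is worth making concrete. Assuming the idealised case of 100% amplification efficiency (each cycle exactly doubles the target, which real reactions only approximate), the copy numbers grow like this:

```python
# Idealised PCR amplification: each cycle doubles the target sequence.
# Real reactions are somewhat less efficient; this is an upper-bound sketch.
for cycles in (25, 30, 35, 40):
    copies = 2 ** cycles
    print(f"{cycles} cycles: ~{copies:.1e} copies from a single fragment")
```

At 40 cycles a single fragment becomes on the order of a trillion copies, which is why even trace contamination can register as a signal.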
Yesterday, in response to a written question, the Government disclosed that while attempts had apparently been made to determine the operational false positive rate, it still does not know it. As an experienced lab scientist, I know that when testing capacity is boosted substantially and the staff recruited have less and less lab experience, there is only one outcome: errors of handling and of procedure. These in turn destroy the integrity of the testing system. The entire response of the UK depends upon the reliability of these tests. I have to tell you quite firmly: at present, it is practically, logically and legally impossible for anyone to tell you what fraction of the recently obtained positive tests are real and which are not. For a range of reasons – related to strong evidence that this virus cannot just hover around, as has been suggested, and that viruses certainly do not perform waves – the most secure conclusion is that these results are not to be trusted and are not reliable in any way.
So what I am saying is this. Despite warnings from all sides over months about this test, it has continued to be used with increasing ferocity. It is a medical diagnostic test. On no occasion would such a diagnostic be put into mass use – in the NHS, for example – without knowing in advance how reliable it is. In terms of proper characterisation, it has NEVER been measured, despite the war-like impact of the test results on the nation and its people. At a minimum, the charge is reckless endangerment. Given all this information, it is literally impossible to guess whether the FPR is 1% or 10%. If it is even near the latter, there are no 'cases', and everything that follows from them falls away. And there are other reasons to be very concerned about mass testing which I cannot go into today.
In my view, community mass testing is the pathology in the country now – not the virus. It must cease today. Without the ‘cover’ of mass testing, there is no evidence at all that the health of the nation is under any threat whatsoever. That event occurred in spring and our responses to it have been exaggerated and – what is worse – extraordinarily persistent, even when all the evidence says the pandemic has concluded.
I have a colleague who has half a dozen sets of data all related to the pandemic. These show clear relationships between the data in the spring, all of which illustrated the impact of the virus. However, time after time, these relationships have broken down. The explanation for this is that at least one of the measurements is wrong, and the culprit is the PCR test. This has happened before. In New Hampshire in the USA there was a hospital that was convinced it had a huge outbreak of whooping cough. Physicians, patients and parents were all very worried about the expected deaths. Eventually, an older physician examined some of the patients and did not agree with the diagnosis. Asking the staff why they were so sure it was whooping cough, the answer was that it had been diagnosed by the PCR test, the sole diagnostic tool. A review was ordered and this led to culture of the organism from the suspected patients. Not a single person actually had whooping cough. No infectious organism was found. What had happened was a now-infamous case of a "PCR False Positive Pseudo-epidemic". That is what I believe we have now in the UK and in many other countries using similar technology.
MPs: if you vote for it now, you will condemn more people to suffering and some to death. The evidence does not support this extreme measure, for which, even if the virus were circulating as SAGE claims, there is no evidence of benefit.
I urge you to vote against so we can all disclose our evidence that the pandemic is over and the epidemic of PCR testing can end.
Scandal: PCR Testing Sites Not Fit For Purpose
We were sent the below by someone employed at a PCR testing site in Salisbury. We were planning to lead with it tomorrow, but given the importance of today’s vote in the House of Commons, and in combination with the above post by Mike Yeadon, we have decided to publish it today.
Forgive the intrusion but I was given your contact details courtesy of a mutual friend. I realise the gravity of making this information public and genuinely feel that you are best placed to air my concerns about the fundamentally flawed service provided at testing sites. To be specific, the site operating in Salisbury which has been awarded/allocated without tender or public scrutiny to the unlikely coalition of Mitie and Deloitte.
I was accepted for work instantly after applying online at 01.00 in the morning. I filled out a mere two pages of information – no reference checks, no criminal record check, no photographic ID – and started work the following Monday at 08:00. I was deployed into the car park to essentially point and wave at cars for my first two shifts. I was told that we could read books, use our phones and use tablets in our non-customer-facing time. In a 12-hour shift that time could easily be upwards of eight to nine hours. After proving myself with my enthusiastic waving and gesturing to genuinely bemused-looking members of the public, I was promoted after three days to the PPE team. At this point, I still hadn't had any non-automated contact with the agency which had placed me.
The PPE team, as it turned out, was indeed a promotion. Along with ensuring the continuous supply of plastic gloves and surgical face masks to staff on site, we were tasked with assembling the MT PCR testing kits. This entailed putting the vials, swabs and instruction leaflets in foil bags. Some bags were sealed if they were for RTS use (mobile units) and others left unsealed if for use on the static site. The static site was a park-and-ride facility donated free of charge by Wiltshire Council, redundant thanks to lockdown.
It became apparent to me frighteningly quickly how unstructured and chaotic the processes on the site were across the board. I completed two-and-a-half years of a mental health nursing degree back in 2013 and I realised, thanks to my prior training, that we were preparing these tests in a totally non-sterile environment. A bloody shipping container, to be precise! I questioned the practice with site management only to be told that they had no formal written policies in place and so procedures were "fluffy".
Unlike some of my other colleagues, I decided to read the storage instructions that accompanied the containers of the vials. To my horror, it emerged that the formula needed to be stored at between zero and eight degrees Celsius after a sample is taken and then transported to one of the three testing labs in Milton Keynes, all run by Lighthouse. I have photographic evidence of the temperature in one of the unsanitary shipping containers that the tests were stored in prior to collection – it was not between zero and eight. Furthermore, the instructions stated that the sample must be stored and transported upright. Yet at the Salisbury site, the completed tests were put into medex containers on their side with up to 100 samples crammed in. The aforementioned containers were then collected and transported to Milton Keynes by a combination of Royal Mail vans and privately unmarked and undocumented couriers using their own family saloon cars.
I reported my concerns to management but was told that if I had a problem I should contact the CEO of Mitie. Unsurprisingly, I declined for fear of the retribution that would almost certainly follow. The testing facility itself never had fewer than 34 staff on site. That's one thing Mitie had insisted upon and it was strictly adhered to. Not a single staff member involved at any level had any medical training. Not one! The closest to it was an ex-army nurse who no longer held her pin and was allocated to supervise the car park traffic. The Site Lead and the Deputy Site Manager were an ex-paratrooper and a DJ from Ibiza. No disrespect to either DJs or paratroopers, as they have been part of some of my best nights out ever. They are not, however, the people I want deciding how we store and handle possible COVID-19 samples on a testing site with "fluffy" procedures. From Dido Harding at the top to the unvetted, poorly-educated minions implementing policy at the coal face, not one of these people is remotely qualified for the task in hand.
I was also added to a WhatsApp group for the PPE team which was, rather unorthodoxly, sent to our private phones. I remained part of the group for weeks after I left the site. I have a record of exactly how many tests were performed each day from July 29th until October 14th. During this time we were told to limit the number of tests undertaken each day to 145, despite there being ample capacity and stock. The previous daily record of tests undertaken on our site was 459. No reason was given as to why we should limit testing in this way. Without doubt the highlight of the WhatsApp stream is an email shared between G4S and Mitie about a gentleman in a white van who appeared at the MTU 179 in Lewisham trying to collect tests with a van covered in graffiti that was full of rubbish and contained a large dog. Incredibly, he appeared to have a medex box from another site that he’d already picked up and was taking to a lab when he was turned away from Lewisham.
US Election Result
At the time of writing, it still isn’t clear who has won the US Presidential election. But one thing is very clear. If members of the liberal left want to win elections, censoring their opponents doesn’t work. In spite of the efforts of the mainstream media and Big Tech to suppress dissenting points of view, whether it’s the signatories of the Great Barrington Declaration or ordinary people challenging the BLM narrative, it looks like Biden won’t win – at least, not convincingly.
The moral of the story is: if you want to win electoral victories, you need to engage with your political opponents in the public square. Set out your arguments and, if they’re good arguments, you will win the debate.
Refusing to engage and trying to cancel anyone who disagrees with you doesn’t work.
You can silence people on Twitter, Facebook and YouTube, but you can’t silence them at the ballot box.
That’s how democracy works.
Sue Denim Comes Out
Yesterday, Sue Denim “came out”. We can reveal that our brilliant coding analyst, who wrote the devastating critique of Neil Ferguson’s computer model for Lockdown Sceptics under the name of “Sue Denim”, is Mike Hearn, a former Google software engineer. He is named in Steve Baker’s op-ed in today’s Telegraph – we put them in touch – and produced a briefing doc for Steve yesterday on the shortcomings of epidemiological computer models. Mike was one of the small group of people who maintained the Bitcoin infrastructure. You can read about him here.
I’ve been reading the user guides and validation studies for some of the rapid Covid tests the Government is buying. The Government is mounting a major validation effort, with 88 test kits in the pipeline at the time of writing. The rigour of this validation matters, and not only to avoid false positives – the Government has spent half a billion pounds on buying tests in the last two weeks alone. As a result of this validation programme the government has bought 20 million rapid antigen tests from Innova. The makers of the test told the Telegraph:
“If you’re talking about doing mass scale testing where you’ve got hundreds, if not thousands, of people flooding through – that could be anywhere from theatres, airports, shopping malls, stadiums, anywhere you want to do an awful lot of people at any one time – you’ve got a rapid test that doesn’t need a machine or a lab, and is easy to do, and relatively cheap,” said Thonger.
To meet this promise the test should obviously be capable of at least two things:
Being administered by anyone.
Being used on people who display no symptoms.
Unfortunately, according to the Innova user guide, neither of these things is actually possible. The instruction books for these tests are, it must be said, well written with clear and plentiful information. In this particular guide we see the following:
“The SARS-CoV-2 Antigen Rapid Qualitative Test is intended for use by trained clinical laboratory personnel specifically instructed and trained in the techniques of in vitro diagnostic procedures”
That would seem like a difficult requirement to meet at mass scale. But this could be written off as the usual sort of liability reduction disclaimer. More problematic is the following:
“The performance of this test has not been evaluated for use in patients without signs and symptoms of respiratory infection and performance may differ in asymptomatic individuals.”
Rephrased, the manufacturers have no idea what the test does when used in the way it’s about to be used on a massive scale. Just in case there was any doubt about whether what’s about to happen is useful, they helpfully include this statement:
It is possible that the virus can be infectious even during the incubation period, but this has not been proven, and the WHO stated on 1 February 2020 that “transmission from asymptomatic cases is likely not a major driver of transmission at this time.”
We can see the problem in this presentation by “tried and tested tech”, the UK distributor. All samples used for validation by the manufacturer were from patients with pneumonia, and there weren’t that many, which leads to fairly wide confidence bounds (FPs = zero but CI = 98.3%-100%). The Government has done larger scale tests with 1,200 samples, many of which were negative, but no information is provided about whether they came from asymptomatic individuals and the actual observed FP/FN rates were also not published. Still, it’s good to see that the Government is doing stronger validation than the manufacturers themselves and they refer to comparing tests done “in the field” rather than just under lab conditions, albeit without explaining what that really means.
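It may seem odd that zero observed false positives still leaves a confidence interval as wide as 98.3%-100%, but this follows directly from the binomial arithmetic: with zero failures, the exact (Clopper-Pearson) lower bound on specificity has a closed form, and a 98.3% lower bound back-calculates to a validation set of only around 215 negative samples. A minimal Python sketch (the sample size of 215 is my own back-calculation for illustration, not a figure taken from the study):

```python
def specificity_lower_bound(n_negatives: int, alpha: float = 0.05) -> float:
    """Exact (Clopper-Pearson) lower bound on specificity when zero
    false positives are observed among n_negatives known-negative samples.

    With zero failures, the two-sided interval's lower limit is the
    smallest p satisfying p**n >= alpha/2, i.e. (alpha/2)**(1/n).
    """
    return (alpha / 2) ** (1.0 / n_negatives)

# A set of ~215 negatives with zero FPs gives roughly the quoted bound:
print(round(specificity_lower_bound(215), 4))  # ~0.983
```

The takeaway is that a small validation panel can report a perfect score while still being statistically consistent with a false positive rate of well over 1%.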
What might the actual FP rates and their confidence intervals be? It’s important due to the massive scale of the planned deployment. The bounds on the FP rate allow us to calculate the size of the resulting pseudo-epidemic. Although the material on gov.uk doesn’t say, from the Times article yesterday:
One senior figure involved in the programme said the aspiration was to offer all Britons a test in time for Christmas. Sir John said he and other scientists had examined around 70 of the so-called lateral flow Covid tests… Of these he said six “look really good” with only one in a thousand false positives… One source said that the UK was hoping to emulate Slovakia, which began testing its entire population last weekend… “We could possibly be going door-to-door, or offering tests to those who want to see vulnerable elderly relatives.”
Consider a conservative estimate that 50% of Britons want to see elderly relatives at Christmas and get tested. Then using Sir John’s figure we would expect to see 66.7 million × 0.5 × 0.001 = 33,350 false positive results for each ’round’ of testing people subject themselves to. For context, yesterday about 20,000 positive results were reported. If this mass testing programme were run over the 24 days of December before Christmas then we’d see about 1,400 FPs per day added to the total case count.
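As a sanity check, the pseudo-epidemic arithmetic above can be written out directly (the 50% uptake and the one-in-a-thousand FP rate are the assumptions stated in the text, not official figures):

```python
POPULATION = 66_700_000   # approximate UK population
UPTAKE = 0.5              # assumed: half of Britons get tested once
FP_RATE = 0.001           # Sir John's "one in a thousand" false positives
DAYS = 24                 # days of December before Christmas

tests_per_round = POPULATION * UPTAKE        # 33,350,000 tests per round
false_positives = tests_per_round * FP_RATE  # 33,350 FPs per round
fps_per_day = false_positives / DAYS         # ~1,390 extra "cases" per day

print(int(false_positives), round(fps_per_day))  # 33350 1390
```

Note that this counts a single round of testing; if people test repeatedly, the false positive total scales with the number of rounds.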
This is assuming the 0.1% FP figure is credible. Unfortunately what we’ve seen with Covid testing so far is that validation studies frequently claim no false positives whatsoever, and then reports come in of results rapidly toggling between positive and negative, people swabbing nothing and still getting a positive result, etc. Lab conditions often don’t match real world conditions, especially given the biotech industry’s focus on rapid turnaround times, and FPs probably come in lab-localised “spikes” rather than being a constant background rate, making them harder to measure.
An example of this problem might be a different rapid test, the CovidNudge test by DNA Nudge. The user guide is here. Unlike the Innova test, this one is basically a portable PCR machine. Again, the tech is impressive and the user guide well written.
The validation study – which reports no false positives relative to PCR lab tests – mentions that the machine was cleaned regularly with 10% bleach followed by isopropyl alcohol. This is to damage RNA or DNA that might contaminate the work areas. We can see why when reading the WHO’s advice for molecular PCR testing. Written for malaria tests, it comes from a simpler time: advice is earnestly given for how to avoid false positives. The advice includes things like having four separate rooms with rules about not walking “backwards” through them, regularly wiping everything with bleach and waiting ten minutes, irradiating the lab with UV light and even using special air handling systems to avoid external air entering the lab (for cases where labs are detecting “very low levels of DNA or RNA in clinical samples”). This article recommends spraying bleach generously and waiting 15-30 minutes both before and after every single test, as otherwise “the technique [is] prone to producing false-positives”.
In contrast, the CovidNudge user guide doesn’t mention regular cleaning anywhere. Advice to use isopropyl alcohol is given but only if fluid literally leaks all over the equipment. Bleach isn’t mentioned, let alone waiting for 15 minutes after applying it. Given the point of the test is rapid turnaround, it’s impossible to believe users will regularly clean the device when nothing is advising them to do so. So how can this sort of test designed for untrained amateurs have identical reliability to the kind of testing setup the WHO describes? Yet the health establishment is effectively claiming it does.
The final problem we may observe is that the CovidNudge study was performed by the people who make it, plus some scientists from Imperial College London (DNA Nudge being a spinout of ICL). There was no need for this obvious conflict of interest – although the validation study is well written and contains a lot of useful information, realistically any lab or university could have done such work.
Hospitals and ICUs NORMAL, Leaked NHS Data Show
In the midst of the clamour for a new lockdown with frantic warnings of the NHS being overrun and MPs voting later today, a bombshell dropped last night: leaked NHS documents that show hospital and ICU occupancy are normal for the time of year. The Telegraph has the details.
An NHS source said: “As you can see, our current position in October is exactly where we have been over the last five years.”
The new data shows that even in the peak in April, critical care beds were never more than 80% full.
Although there has been a reduction in surge capacity since the first wave, with the closure of the emergency Nightingale Hospitals, there is still 15% spare capacity across the country – which is fairly normal for this time of year.
The documents show there were 9,138 patients in hospital in England as of 8am on November 2nd, although this had since fallen to 9,077.
It means COVID-19 patients are accounting for around 10% of general and acute beds in hospitals. But there are still more than 13,000 beds available.
In critical care, around 18% of beds are still unoccupied, although it varies between regions.
But even in the worst affected areas such as the North West, only 92.9% of critical care beds are currently occupied.
How welcome – finally – to have this information in the public domain, and just in time for MPs to vote (not that it is likely to make much difference with Labour pledged to support the lockdown and few Tories looking like rebelling). But why did it have to be leaked? Why is this crucial data not routinely made public? Why have all requests to release it from journalists and researchers been turned down or pointed towards making an FOI request (which takes weeks)?
Professor Carl Heneghan, Director of the Centre for Evidence-Based Medicine at the University of Oxford, told the Telegraph:
This is completely in line with what is normally available at this time of year. What I don’t understand is that I seem to be looking at a different dataset to what the Government is presenting. Everything is looking at normal levels, and free bed capacity is still significant, even in high dependency units and intensive care, even though we have a very small number across the board. We are starting to see a drop in people in hospitals.
Alongside this good news, Professor Tim Spector yesterday tweeted that King’s College’s ZOE Covid survey app was showing that R had fallen to 1 nationwide. “More good news as the Zoe CSS app survey continues to show a plateauing and slight fall in new cases in England, Wales and Scotland with an R of 1.0.”
This is in line with what the current daily “case” data suggests – though I’m unable to show you today because, with impeccable timing, the Government Covid dashboard went down at 4pm yesterday (let’s assume cock-up).
Many are quick to credit the current three-tier system with bringing the spread down. But is that what the data says? Hardly. Recall it was the areas under local lockdowns that saw the greatest rise in positive tests in September. Ah, you say, but then Tier 3 was imposed and that brought the rate down? Not at all. In Liverpool “cases” peaked around October 7th and have been declining since, but the city was only put into Tier 3 on October 14th. Similarly in Manchester, local restrictions were first imposed on August 15th, which didn’t prevent positive tests surging in September. But then they peaked on September 30th and have been largely flat since, slightly declining – yet the city was only put in Tier 3 on October 23rd. Tell me again how this shows the tier system working? What it shows me is we’re more likely seeing the autumn surge among those who were spared in spring when the epidemic was curtailed by the warmer weather.
Despite the encouraging data, Chief Medical Adviser Professor Chris Whitty and Chief Scientific Adviser Sir Patrick Vallance appeared before MPs yesterday and were emphatic that the epidemic is on a devastating trajectory which only radical intervention will forestall.
Vallance told the Science and Technology select committee: “The R remains above one everywhere, the epidemic continues to grow.” Whitty added: “You don’t need that much modelling to show you that we are on an exponential rise” – with deaths, hospitalisations and cases already rising rapidly. He conceded there is some evidence of a slowing epidemic, particularly in the North East and to some extent the North West. But – “the trouble about things doubling is you move from a few to many cases very quickly.” Vallance said there is a serious risk of hospitals being overrun “if nothing is done”.
One issue, it appears, is that the R is mainly levelling off in younger age groups. “My hope is that it is levelling off in older ages as well,” Whitty said, but added there is no data to confirm this hope and it would be “very imprudent” to act on this since it is this group who will need hospital care. Spot the insidious precautionary principle again. And why would it fall among younger people and not, sooner or later, among the older? Besides, there is no particular reason that this winter should be any less deadly for older people than earlier winters. Recall that 2020 has so far seen fewer deaths than each of the years between 1993 and 2000.
Oddly, Whitty claimed that lockdowns mean people will be more likely to be treated for other health conditions rather than less. “The way you prevent those services from being impinged on or in some cases cancelled is by keeping Covid cases down,” he told MPs.
Whitty and Vallance defended their models and the graphs they had displayed on Saturday, denying they were trying to frighten people. Whitty said: “There is a danger with these extreme forward projections that people misinterpret them as ‘this is going to happen’ and get unduly worried about something that is not intended to happen. The whole point of a reasonable worst case scenario is to say, ‘Right, we’re going to do something to stop this happening.'”
Vallance added: “We went through this a bit on the September 20th, when we said we thought we could be heading to 50,000 cases a day if we had a doubling and that deaths might reach 200. It was there to give a scenario. As it happened, the numbers turned out to be pretty close by the time we got there, so it’s very difficult to project forwards in a way that doesn’t inevitably lead to a problem of ‘Is that real?’ No, it’s not real, it’s a model… These are not forecasts, they are models that tell you how things should look.”
This is drivel, not least because it seems to define the purpose of a model as making unreal predictions of the future (which may in fact explain a lot). But if, as Whitty says, a model scenario is something about which something must be done to prevent it from happening then it is unavoidably a prediction, otherwise why must something be done? And if nothing is done and it does not come to pass then the prediction is flatly wrong.
Vallance is also being misleading to claim his 50,000 “cases” by October 13th was “pretty close”: the seven-day average on that day was 16,228, less than a third of the prediction, which no action had been taken to avert (switching to another measure of “cases” such as the ONS survey is an invalid move as it was clear at the time he was talking about the daily reported “cases”). Likewise, Saturday’s 4,000 deaths scenario, which included 1,000 deaths by the start of November but for which averting action obviously had not been taken, was demonstrably a failure.
Tom Goodenough in the Spectator says the pair’s defence “makes sense”. I can only think we must have been listening to different people. But it shows how easily people can be convinced by fine-sounding words even when they are nonsense.
MPs pressed the scientists on publishing the models. Vallance said: “The assumptions underlying the models will be published in full,” adding that the intention is to publish all the data as soon as possible. How far in advance of the vote, though? Defending the model, he said: “It’s not at all fair to say it’s discredited. I think the right graphs to focus on in terms of forward projections are the six-week forward projections and to base it on the data today which shows where things are in hospitals at the moment which are filling up.” Indeed, just like they do every October. And as we now know, not any worse than normal.
Whitty conceded test and trace only really works for smaller outbreaks: “Even under optimal conditions, test and trace will do much better in lower conditions.” The lockdown will allow the test and trace system to work more effectively, he argued. Though even then, test and trace is just one element that needs to be in place, he says – though didn’t specify what else was required. Some sort of ongoing restrictions, presumably, somewhat undermining the Government’s latest line that mass testing will let us get back to normal.
On whether SAGE looks at economic questions, Vallance was blunt: “We don’t. That’s not the role of SAGE. We have been very clear that this sits in the Treasury. We do not look at the economic impacts and we are not mandated to do so.” Odd that this was having to be clarified to MPs in November, showing again the opacity in the way Government has operated during the pandemic. It also exposes why it is such a problem for Government to be committed to following “the science” when scientific advisers are taking a deliberately narrow view. We also need to ask why the Scientific Advisory Group for Emergencies is not looking at public health more widely instead of only one factor.
MPs asked about the aim of the lockdown, to which Whitty gave a vague and circular answer that it is to ensure there is a “realistic possibility” that restrictions can be lifted on December 2nd and that England will move to a “different state of play”. He later added the Government’s primary strategic goal is to reduce mortality, though said this is one of many, including protecting the economy. So that’s clear then. No criteria of success were given, or any sense of how it will be assessed.
Vallance called lockdown a “blunt instrument” and admitted they “do not have good evidence on the exact value of each intervention on R”. Pressed about church closures in particular, Whitty bizarrely argued that although churches may be following social distancing guidelines, the problem is when people congregate outside after a service. Ah yes, a known hotbed of Covid super-spreading.
Whitty and Vallance both laid into the Great Barrington Declaration. Whitty said he means “no disrespect” to the experts involved (and not forgetting Vallance was an enthusiastic advocate in March) but he considers the plans to be “dangerously flawed, impractical and ethically really difficult”. The biggest weakness, he says, is the starting point that herd immunity will inevitably be acquired if you leave it long enough. This, he says, is not the case for most of the diseases he has worked on, including malaria, HIV and Ebola. Surely he knows that these are completely different kinds of disease and malaria isn’t even a virus? Herd immunity “never occurs” he says. “The idea that this is a fundamental thing is simply incorrect.” I somehow think three of the world’s leading epidemiologists know what they’re talking about better than Chris “herd immunity never occurs” Whitty.
The second problem, he says, is it is “practically not possible” to identify and shield the vulnerable population. “Theoretically that is attractive, but the idea you can do that and for year after year is simply impractical. We have looked at this, everyone says what a great idea until you look at the practicalities.” (Er, the NHS identified those most at risk in March and they were told to self-isolate.) But no one is saying you should do it “year after year”, just until the epidemic passes, say around two to three months, until there is widespread immunity.
His third reason, he says, is that very large numbers of people would die if you had any hope of achieving some sort of herd immunity, as this would require up to 70 per cent of the population to contract Covid. This once again shows an ignorance or rejection of the evidence for pre-existing T-cell immunity.
Vallance added that even if you were able to totally shield those at most risk, you would still see a significant number of deaths in younger people. This seems to suggest he is unaware that the death rate in younger people is minuscule, less than 0.05%. He threw in the “long Covid” argument too, for good measure. He also said multi-generational households are common in the UK, especially in some of the communities hardest hit by Covid, making it hard for the young and old to remain separate. This seems pure defeatism, as solving this problem would surely be far cheaper and easier than everything else we’ve been doing.
Most disappointing, I think, was the lack of any challenge from MPs about putting the current situation in the context of hospital capacity and a normal autumn and winter. MPs should be demanding these figures be published routinely so the full picture can be known and scrutinised. We shouldn’t have to rely on leaks – that’s no way to run a democracy.
Herd Immunity From 1935
A reader has sent us the following excerpt from Hans Zinsser’s “Rats, Lice and History”, published in 1935.
Maybe we should send a copy to Witless and Unbalanced. Help them to swot up on the basics.
“The Government is Terrified”
Post from Toby, who has been talking to people close to Downing Street, trying to figure out why Boris is railroading the country into a second lockdown in spite of the data suggesting it’s completely unnecessary. This is what he’s been able to find out.
No one in Downing Street – or, rather, the Quad (Boris Johnson, Rishi Sunak, Michael Gove and Matt Hancock), since they’re making all the big political calls – is pretending the data hasn’t been deliberately skewed to create a rationale for Lockdown 2.0, which is why we’ve all been asking what the hell is going on: how on Earth could a daily death rate of 4,000 possibly be achieved without the entire population being simultaneously infected (in which case it would be all over in a few weeks anyway)? But the Quad is worried that in certain northern cities, e.g. Leeds, where post-lockdown disobedience has combined with urban lifestyles (blame the plebs, etc.), there is a prospect of hospitals becoming overwhelmed and that has got them rattled. (We don’t think there is, obviously.)
Note, this anxiety isn’t primarily due to Covid admissions, which aren’t expected to be higher than they were at the spring peak. Rather, this time round NHS Trusts have been ordered not to turn away non-Covid patients if they can accommodate them so some hotspot hospitals are having to cope with operating at their usual winter capacity levels alongside an influx of Covid patients. They’re not at breaking point yet, in part because the influx of Covid patients is being compensated for by a lower-than-usual number of patients being admitted for other respiratory infections. But because the reasons for that aren’t understood, the at-risk hospitals can’t count on respiratory infections not increasing, alongside rising Covid admissions, which could push them over the edge. Could the system flex to accommodate any overspill, with patients being admitted to neighbouring ICUs? Probably (this is normal), but another difficulty is that there are fewer specialist intensive care nurses than there were in March/April, partly because some of them have asked to be reassigned to other departments after the stress of the first wave and partly because hospitals are obsessively testing all their staff using the unreliable PCR kit because they’re terrified of “healthcare-associated infections” (nosocomial transmission of the virus). The upshot is there are fewer intensive care nurses and some of those that are still around have been sent home and told to self-isolate for 10 days. Another issue is that those with young children who’ve been sent home from school and told to self-isolate – because a child in their bubble has tested positive – are having to stay at home to care for their kids. And yet another issue is that some schools and NHS trusts are telling nurses to self-isolate for 14 days if one of their children has been identified as a “contact” of an infected person, even though that’s not something NHS Test and Trace are insisting upon.
So the Quad is terrified that some hospital trusts in northern areas will become overwhelmed and the BBC will start broadcasting pictures of people dying in corridors on the nightly news – which is political Kryptonite, according to the Rasputin-like figure of Dom Cummings. People will ask, “What was the point of Lockdown 1.0 if the precise thing it was designed to avoid is now happening?” Forget about protecting the NHS. It’s all about protecting the Conservative Party’s brand with an eye on the next General Election.
But the Quad is terrified that if they only clamp down on northern cities, as they’ve sort of been trying to do up to now, then the myth of a disease-laden, persecuted and under-funded North, already being wailed about by Messrs Burnham et al, will take even deeper root. Boris and his top team are paralysed with fear of being accused of abandoning their new friends in the North. So a national lockdown, even though it’s completely unnecessary and they all privately accept that, is a desperate propaganda exercise intended, in a rather futile and half-baked manner, to restore the national Blitz spirit of the spring, even if it means a one-man shop in Penzance and a gift shop in Guildford have to shut their doors forever. They’re also concerned that without said Blitz bollocks, the ornery northerners, whipped up by Burnham’s rhetoric, won’t comply with any new regulations.
Boris is desperate not to go down in history as the Prime Minister who cancelled Christmas, hence the promise about December 2nd, although that’s also because Rishi insisted on making any extension of the Furlough scheme time-limited because HM Treasury long ago ran out of cash and the staff of the Debt Management Office are having a collective nervous breakdown. Seriously – some of the staff in that office are off with stress. At some point, the tap has to be turned off or no one will take our paper.
In effect, they are caught in a trap of their own making. Lockdown 2.0 is another needless measure designed to minimise the political fall-out from the countless other pointless measures they’ve taken. The economy has been thrown to the wolves in order to buy yet more time for the Conservatives to save face. In reality, the Quad know there’s no better than a modest chance of a vaccine existing or working in anything like an effective enough way to make a difference any time soon. Indeed, it might never come, and the four horsemen know that, too. And everyone in the Government knows that the PCR test is hopelessly shonky and NHS Test and Trace, which was only ever a £12 billion PR exercise, is a slow motion car crash that, thanks to false positives and the staggering incompetence of the various companies Hancock has outsourced delivery to, creates as many problems as it solves. But at least it’s one of the few ways the Government can be seen to be doing something – anything – even if it’s a shitshow. According to one insider, it’s the equivalent of juggling plates in an effort to stop a rainstorm.
In short, they’re at a loss to know what to do. Lockdown 2.0 is a last roll of the dice. They’ve run out of ideas, although they never really had any to begin with.
One final point: this isn’t a case of SAGE pulling the strings, browbeating Boris and co into doing their bidding via its envoys Witless and Unbalanced. Rather, the CMO and the CSO are doing the bidding of the Quad, slavishly pumping out propaganda in order to justify Lockdown 2.0. Dom has the two dupes by the short and curlies. They love the power and the spotlight and will say anything, even if it’s transparent balls, to keep it. They’re also worried about the coming reckoning, with lawsuits, etc. heading down the pike, so they want to be able to say, “We were just following orders, your Honour.”
We Must Have Exams
David Mackie, the Head of Philosophy at d’Overbroeck’s independent school, Oxford, has written a piece for us on why there is no good alternative to restoring exams in 2021. Here’s how he concludes.
The cancellation of summer exams and their replacement by centre-assessed grades (CAGs) did students a grotesque disservice, and it would have done so with or without Williamson’s U-turn. The decision negatively affected not just the cohort of 2020 students, but those in other years, as well as universities. It was an error which must not be repeated. No plausible alternative system of moderated CAGs is likely to be possible; nor is continuous assessment a reasonable solution.
In making the case for exams to be held in 2021, and to be run according to 2019 standards, I do not wish to downplay the unfairness that has already been created by the closure of schools and by the continuing and unnecessary measures requiring healthy students to miss face-to-face schooling for prolonged periods in self-isolation.
But the cancellation of exams, and/or a deliberate downgrading of standards, is not the solution. The sole solution is to insist on proper assessment via exams, and thereby give certainty to students, schools, universities, and employers, and to protect the national and international reputation of our education system as a whole. I do not oppose certain adjustments to examinations, such as the availability of choice, which could mitigate the effects of the loss of schooling suffered disproportionately by some students. But we must have exams.
A reader who receives support from her local mental health team writes about what she learned from her support worker about what’s been going on behind the scenes.
What I’d like to bring to your attention is that many of the other staff and support workers at the mental health unit are telling patients that they are not able to visit, even though they have been allowed, and many of the patients with problems haven’t been seen since the first lockdown started. Despite Matt Hancock repeatedly telling everyone that mental health issues will be a priority, this is not being adhered to by our local teams. Some of the staff working there are just so happy to be sat around all day chatting on the phone and drinking coffee and agreeing that it’s great not to have to go out and see patients any more – wrong people in wrong jobs! One senior member of staff even refused to go into work as she was afraid she would catch the virus. Her patients have had no visits as no one else covers anyone any more for holidays, sick leave etc. Amazingly, though, these very people who won’t go out to visit patients will still go off on holiday, go for meals, go to the cinema, the pub etc. – just not to people whose lives do depend on a visit from a health professional. This is just how life seems to be now and I really feel for anyone who needs help even to get into the system. It does not get any better at all once you are in it and I really do fear for the future. As this second lockdown approaches my care will now be stopped again and I will be left to cope on my own.
From Australia, I’m watching the fast-tracked development of coronavirus vaccines with mounting concern.
Under the Australian Biosecurity Act 2015, refusers of coronavirus vaccination in Australia could be at risk of five years imprisonment and/or a $66,600 fine.
This emergency power has been active since March 2020, and has been extended to December 2020, with the potential for unlimited extensions.
It’s possible this emergency power could be extended until a coronavirus vaccine is available, and that people in Australia could be under duress to have coronavirus vaccination, i.e. at risk of imprisonment and/or a huge fine, for a virus which is not a threat to most people under 70.
With the Westminster Government proposing to close churches for the lockdown in England, Christian Concern have turned their legal guns, previously pointed at the Welsh Government, on the UK Government. From the press release:
The new restrictions, announced on 31 October and set to come into force from Thursday 5 November, state that “places of worship will be closed” with exceptions for funerals, broadcast acts of worship, individual prayer, essential voluntary public services, formal childcare, and some other exempted activities.
These restrictions will once again make it a criminal offence for Christians to gather for worship or prayer, or to go to church on Sunday.
The group of church leaders includes the 25 who initiated legal action against the government over the closure of churches in the first lockdown.
Following the application for judicial review, which received favourable comments from the High Court Judge, Mr Justice Swift, the government backed down and allowed churches to meet, providing guidance with virtually no legal restrictions.
In a separate judicial review of lockdown restrictions, the judge, Mr Justice Lewis, singled out the closure of churches as arguably unlawful and a breach of freedom of religion.
Separately the Anglican and Catholic Archbishops have spearheaded a letter from the leaders of many of the UK’s faith communities to the Government calling on them not to suspend public worship again. They write:
We strongly disagree with the decision to suspend public worship during this time. We have had reaffirmed, through the bitter experience of the last six months, the critical role that faith plays in moments of tremendous crisis, and we believe public worship is essential. We set out below why we believe it is essential, and we ask you to allow public worship, when fully compliant with the existing covid-19 secure guidance, to continue.
Good to see a bit of backbone emerging among faith leaders who usually like to toe the line.
A Lockdown Sceptics reader has sent us the response they received from their MP, Nadhim Zahawi, Conservative member for Stratford-on-Avon. Depressing to see the pro-lockdown propaganda regurgitated without a hint it isn’t fully believed.
Unfortunately, we are now seeing rapidly increasing rates of Covid transmission across the country. This is already being reflected in hospital admissions and sadly deaths. On our current trajectory, the NHS will be overwhelmed in the run up to Christmas, inhibiting its ability not only to treat Covid patients but all patients. I do not believe that any responsible Government could ignore this evidence and effectively gamble with people’s lives, forcing hospital staff to choose who should be treated and who should be turned away. Therefore, with a very heavy heart, and despite never having wanted to see anything similar to this year’s earlier lockdown repeated, I do support these new measures and the imposition of a second national lockdown.
I am under no illusions whatsoever about the consequences this will entail. I know the economic costs will be huge and that businesses will suffer, again. I am truly sorry for this. But I do believe that more people will die, more jobs will be lost, and more economic damage will be done if we delay acting now. I welcome the immediate announcement from the Chancellor that the furlough scheme will be extended to protect jobs and businesses during this period, and I anticipate that further measures from the Treasury will be announced in due course.
These new restrictions will last until December 2nd, after which the intention is to return to the tiered system of restrictions introduced over recent weeks. Parliament will be fully engaged at all stages and will be voting on all new restrictions.
Once again, I am extremely sorry that these restrictions are now being imposed and also for the hardship they will cause. However, as I have said, I do believe the costs of inaction to be far greater than those of action.
“My letter to Michael Gove” – a brilliant missive from Kathy Gyngell, sent to Gove and other MPs and posted on Conservative Woman.
We have created some Lockdown Sceptics Forums, including a dating forum called “Love in a Covid Climate” that has attracted a bit of attention. We have a team of moderators in place to remove spam and deal with the trolls, but sometimes it takes a little while so please bear with us. You have to register to use the Forums, but that should just be a one-time thing. Any problems, email the Lockdown Sceptics webmaster Ian Rons here.
Sharing stories: Some of you have asked how to link to particular stories on Lockdown Sceptics. The answer used to be to first click on “Latest News”, then click on the links that came up beside the headline of each story. But we’ve changed that so the link now comes up beside the headline whether you’ve clicked on “Latest News” or you’re just on the Lockdown Sceptics home page. Please do share the stories with your friends and on social media.
“Mask Exempt” Lanyards
We’ve created a one-stop shop down here for people who want to buy (or make) a “Mask Exempt” lanyard/card. You can print out and laminate a fairly standard one for free here and it has the advantage of not explicitly claiming you have a disability. But if you have no qualms about that (or you are disabled), you can buy a lanyard from Amazon saying you do have a disability/medical exemption here (takes a while to arrive). The Government has instructions on how to download an official “Mask Exempt” notice to put on your phone here. You can get a “Hidden Disability” tag from ebay here and an “exempt” card with lanyard for just £1.99 from Etsy here. And, finally, if you feel obliged to wear a mask but want to signal your disapproval of having to do so, you can get a “sexy world” mask with the Swedish flag on it here.
Don’t forget to sign the petition on the UK Government’s petitions website calling for an end to mandatory face masks in shops here.
A reader has started a website that contains some useful guidance about how you can claim legal exemption.
And here’s an excellent piece about the ineffectiveness of masks by Roger W. Koops, who has a doctorate in organic chemistry.
Mask Censorship: The Swiss Doctor has translated an article from a Danish newspaper about the suppressed Danish mask study – the largest RCT on the effectiveness of masks ever carried out, and so far rejected by three top scientific journals.
The Great Barrington Declaration
The Great Barrington Declaration, a petition started by Professor Martin Kulldorff, Professor Sunetra Gupta and Professor Jay Bhattacharya calling for a strategy of “Focused Protection” (protect the elderly and the vulnerable and let everyone else get on with life), was launched last month and the lockdown zealots have been doing their best to discredit it. If you Googled it a week after launch, the top hits were three smear pieces from the Guardian, including: “Herd immunity letter signed by fake experts including ‘Dr Johnny Bananas’.” (Freddie Sayers at UnHerd warned us about this hit job the day before it appeared.) On the bright side, Google UK has stopped shadow banning it, so the actual Declaration now tops the search results – and Toby’s Spectator piece about the attempt to suppress it is among the top hits – although discussion of it has been censored by Reddit. The reason the zealots hate it, of course, is that it gives the lie to their claim that “the science” only supports their strategy. These three scientists are every bit as eminent as – indeed, more eminent than – the pro-lockdown fanatics, so expect no let-up in the attacks. (Wikipedia has also done a smear job.)
You can find it here. Please sign it. Now well over 600,000 signatures.
Update: The authors of the GBD have expanded the FAQs to deal with some of the arguments and smears that have been made against their proposal. Worth reading in full.
Update 2: Many of the signatories of the Great Barrington Declaration are involved with new UK anti-lockdown campaign Recovery. Find out more and join here.
Judicial Reviews Against the Government
There are now so many JRs being brought against the Government and its ministers, we thought we’d include them all in one place down here.
First, there’s the Simon Dolan case. You can see all the latest updates and contribute to that cause here.
Then there’s the Robin Tilbrook case. You can read about that and contribute here.
Then there’s John’s Campaign which is focused specifically on care homes. Find out more about that here.
There’s the GoodLawProject’s Judicial Review of the Government’s award of lucrative PPE contracts to various private companies. You can find out more about that here and contribute to the crowdfunder here.
The Night Time Industries Association has instructed lawyers to JR any further restrictions on restaurants, pubs and bars.
Christian Concern is JR-ing the Government over its insistence on closing churches during the lockdowns. Read about it here.
And last but not least there’s the Free Speech Union’s challenge to Ofcom over its ‘coronavirus guidance’. You can read about that and make a donation here.
If you are struggling to cope, please call Samaritans for free on 116 123 (UK and ROI), email email@example.com or visit the Samaritans website to find details of your nearest branch. Samaritans is available round the clock, every single day of the year, providing a safe place for anyone struggling to cope, whoever they are, however they feel, whatever life has done to them.
Shameless Begging Bit
Thanks as always to those of you who made a donation in the past 24 hours to pay for the upkeep of this site. Doing these daily updates is hard work (although we have help from lots of people, mainly in the form of readers sending us stories and links). If you feel like donating, please click here. And if you want to flag up any stories or links we should include in future updates, email us here. (Don’t assume we’ll pick them up in the comments.)
Toby and his friend James Delingpole have recorded a new episode of London Calling. Main topic is the US Presidential election and the bets they’ve put on the result – looks like they might be in the money. But they also discuss Lockdown 2.0 and the insane clown posse that’s running the country. James and Toby know them all because they were at Oxford together. James is thinking of writing a book called: My Generation: The Worst in History.
A court in Weimar, Germany, has ruled that two schools should be prevented – with immediate effect – from forcing their pupils to wear masks, along with imposing social distancing measures and insisting on SARS-CoV-2 rapid tests, saying that “the state legislature regulating this area has gotten far removed from the facts, which has taken on seemingly historic proportions”. On mask-wearing, the court ruled that “the risk of infection is not only not reduced by wearing the mask, but is increased by [the widespread] incorrect handling of the mask”. The court also said “there is no evidence that compliance with distance regulations can reduce the risk of infection” and that “the regular compulsion to take a test puts the children under psychological pressure, because their ability to go to school is constantly put to the test”. The case was brought to court by a mother on child protection grounds.
There follows the text of an article published by 2020 News on this ruling – translated from German to English by Google. We think it’s so good we are reproducing it in full.
On April 8th, 2021, the Weimar Family Court decided in an urgent procedure (Az.: 9 F 148/21 – available in English here) that two schools in Weimar are prohibited with immediate effect from requiring pupils to wear mouth and nose coverings of any kind (in particular qualified masks such as FFP2 masks), to comply with AHA minimum distances and/or to take part in SARS-CoV-2 rapid tests. At the same time, the court ruled that face-to-face teaching must be maintained.
For the first time, evidence has now been taken before a German court regarding the scientific meaningfulness and necessity of the prescribed anti-Covid measures. Hygiene doctor Professor Dr med Ines Kappstein, the psychologist Professor Dr Christof Kuhbandner and the biologist Professor Dr of Human Biology Ulrike Kämmerer have been heard.
The court proceedings are so-called child protection proceedings in accordance with Section 1666 Paragraphs 1 and 4 of the German Civil Code (BGB), which a mother had initiated for her two sons at the age of 14 and eight at the local court – the family court. She had argued that her children would be harmed physically, psychologically and educationally without any benefit to the children or third parties. This would also violate numerous rights of children and their parents under the law, the constitution and international conventions.
Proceedings under § 1666 BGB can be initiated ex officio – either at the suggestion of any person or without such a suggestion – if the court considers intervention to be necessary for reasons of the child’s best interests (§ 1697a BGB).
After examining the factual and legal situation and evaluating the reports, the Weimar Family Court came to the conclusion that the now-prohibited measures represent a current risk to the mental, physical or emotional well-being of the child, to such an extent that significant harm to their further development is reasonably foreseeable without intervention.
The judge stated:
…children are not only endangered in their mental, physical and spiritual well-being but are also currently damaged by the obligation to wear face masks during school time and to keep their distance from one another and from other people. This violates numerous rights of children and their parents under the law, the constitution and international conventions. This applies in particular to the right to free development of personality and to physical integrity from Article 2 of the Basic Law as well as to the right from Article 6 of the Basic Law to education and care by parents (also with regard to health care measures and ‘objects’ to be carried by children)…
With his judgment, the judge confirms the mother’s assessment:
The children are damaged physically, psychologically and educationally and their rights are violated, without any benefit for the children themselves or for third parties.
According to the conviction of the court, school administrators, teachers and others could not invoke the state legal provisions on which the measures are based, because those provisions are unconstitutional and therefore null and void. The reason: they violate the principle of proportionality rooted in the rule of law (Articles 20, 28 of the Basic Law).
[The judge stated]:
According to this principle, which is also known as the prohibition of excess, the measures envisaged to achieve a legitimate purpose must be suitable, necessary and proportionate in the narrower sense – that is, when weighing the advantages and disadvantages achieved with them. The measures that are not evidence-based, contrary to Section 1 (2) IfSG, are already unsuitable for achieving the fundamentally legitimate purpose they pursue, namely to avoid overloading the health system or to reduce the rate of infection with the SARS-CoV-2 virus. In any case, however, they are disproportionate in the narrower sense, because the considerable disadvantages/collateral damage they cause are not offset by any discernible benefit for the children themselves or for third parties.
Nevertheless, it should be pointed out that it is not the participants who have to demonstrate the unconstitutionality of the interference with their rights; rather, the Free State of Thuringia, which encroaches on the rights of those involved with its state regulations, has to prove with the necessary scientific evidence that the measures prescribed… are suitable to achieve the intended purposes, and that they are proportionate. So far, that has not happened.
Alex McCarron: Hello, and welcome to Escape from Lockdown, the show all about how we got into this madness and how we are going to get out of it. Now today I have another of the great pathologists. Very early on I interviewed Dr. John Lee, and his interview set the podcast on fire and set the whole lockdown escape community on fire, really. It had real crossover. And today I think is going to be no different or even better. I’m speaking to a very brilliant person who is doing some incredible work and really putting themselves out there, exposing some really terrifying conclusions that she’s come to, to all of us. It is of course the pathologist Dr. Clare Craig. Clare, how are you doing? Welcome to the show.
Dr Clare Craig: Thank you very much for having me.
Alex McCarron: Can you tell the listeners a little bit about your professional background and how you got to be where you are?
Dr Clare Craig: I’m a consultant pathologist. I worked in the NHS as a consultant pathologist for many years, then I moved to work on the cancer arm of the 100,000 Genomes Project for a couple of years, and more recently I’ve moved into AI. But I have experience with laboratories and with testing, and I understand what false positives mean in medicine.
Alex: So you knew what false positives were before they got big, basically.
Dr Craig: Yes, I would say that my professional career has been around those kinds of problems.
Alex: Can we sort of jump straight into the fact that everybody who’s been looking at the data knows that there’s this thing called the casedemic, but your work shows that the problems with the casedemic are actually much more profound than people – even us – quite realize. So can you tell us what’s going on?
Dr Craig: I can try. I mean, a lot of people try to find some data point that they can trust because one by one these data points are being questioned. And so people put a lot of faith in COVID death counts. They think, “Well, they must be true because, you know, how on earth can you misdiagnose someone’s death?” But I’m afraid that even the death count you have to be a bit skeptical about, because of how we are testing and how we are diagnosing. And there’s a phenomenon that’s worth considering when we’re looking at the situation that we’re living through at the moment, which is called a false positive pseudo-epidemic.
There are a few key factors to understand about that, one of which is when you’re living through it, everybody involved believes they’re in an epidemic because the data looks like an epidemic, which is why it’s got that name. But there are a few things that start to show up in the data that you can unpick to figure out that actually this isn’t the case. What starts to happen is that because the data points are related to testing and not to each other, they start to do really funny things.
So one of the things that’s a relatively easy image to understand is looking at ITU admissions compared with deaths, and ICNARC, which does ITU audits, has just published on this. They show a graph with a familiar spike in the upturn of ITU patients and then coming back down, followed after a period of time by a spike in deaths coming back down. That was in spring. And you see these two lines follow in parallel all the way through. And then they’ve superimposed what’s happening now on this graph, and you can see a much more shallow line of increasing patients in ITU, and below that, in parallel, the increasing number of deaths.
But in the last couple of weeks that line of deaths has done a sharp upturn, and it looks like it’s going to overtake the line of the number of patients in ITU. And so there are other ways to look at the data that back this up as well, but the point is that we’ve got to a situation where the number of people dying per case diagnosed is on the rise compared with the summer, but the number of people with a severe case (being admitted to hospital, being on ITU) has fallen since summer, which is just slightly baffling, you know.
How can you get to a situation where the severity is reducing but the deaths are increasing? That is quite difficult to get your head around. I don’t think we need to go over it again, but there is this discrepancy that doesn’t make sense, and it especially doesn’t make sense when you realize that 80% of the COVID deaths at the moment are in hospital. So if they’re in hospital, they should be in the hospital admission data, they should be on ITU, and they’re not showing up in that data.
Alex: So basically, if I can put it in a way that is often ridiculed, the former secretary of defense, Donald Rumsfeld famously gave this speech where he talked about there are known knowns, and there are known unknowns, and there are unknown unknowns, the things that we don’t know that we don’t know. That was often widely mocked at the time, but I believe that’s the cleverest thing any politician has publicly said ever because it was a tacit admission of the way that knowledge works and the way that we find things out.
To me, you seem almost half scientist, half detective, almost like sort of forensically going through the data. And it seems to me we have these known knowns, which we established over the summer which was sort of time between infection and death, the kind of general makeup of the disease. But, are you saying that they’re not the same anymore? The data, you know, these things that we depend on are suddenly going crazy, and there’s no relation between what’s coming in and what we thought we knew?
Dr Craig: The way to look at it is to use percentages. You can look at deaths as a percentage of the people who were admitted to hospital 10 days before. You take all the hospital admissions and see 10 days later how many deaths there were, and those two figures should have a set relationship. But what we see is that in the beginning it was quite high, and that was partly because we were not diagnosing everybody, and it comes down, and it carries on coming down right the way until August. In August, if you were admitted to hospital, you had the best chances compared with earlier. And, you know, we’re being told that that’s because of better treatments and what have you.
But then since August, the percentage has started to rise, which is a worry. You worry that things have got worse. Has it come back? Have we been misdiagnosing in the summer but now it’s come back? That pattern is repeated in other data points. The number of deaths per occupied bed on ITU has started to rise, and the number of deaths per case diagnosed has started to rise. All of those data points, they look like things are getting worse. But the other data points of cases to admissions look like things are getting better.
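The lagged percentage Dr Craig describes can be sketched in a few lines of Python. The admission and death figures below are invented purely for illustration – they are not real NHS data:

```python
# Illustrative sketch: deaths on day d as a percentage of hospital
# admissions on day d - 10. All numbers are invented for demonstration.

LAG_DAYS = 10

admissions = [100, 110, 120, 130, 140, 150, 160, 170, 180, 190,
              200, 210, 220, 230, 240]
deaths =     [  0,   0,   0,   0,   0,   0,   0,   0,   0,   0,
               12,  13,  15,  18,  22]

def lagged_fatality_pct(admissions, deaths, lag):
    """Deaths on day d as a percentage of admissions on day d - lag."""
    return [100.0 * deaths[d] / admissions[d - lag]
            for d in range(lag, len(deaths))]

print(lagged_fatality_pct(admissions, deaths, LAG_DAYS))
# If the relationship is stable, this percentage stays roughly flat;
# a sustained rise is the pattern discussed in the interview.
```

With these made-up inputs the ratio climbs from 12% to about 15.7%, which is the kind of rising trend being described.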
Alex: So what’s going on here? Is it that, are we just in the middle of a kind of orgy of testing and it’s throwing up all this crazy data?
Dr Craig: The thing about testing is that at the beginning we had to get some tests out really quickly, and I really admire the work that was done to get that to happen. Manufacturers turned new tests around as fast as they could, and, you know, it was all about speed at the time. So the fact that they compromised on some of the checks that would normally have been there is entirely justifiable. And then the labs got set up and were scaling right from the beginning. They were scaling what they could do. And we got to a point in May where the UK labs were doing 50,000 tests a day, which is absolutely phenomenal, and at the time it was world-beating, and it was more than enough to get a grip on the situation that we had.
But we carried on with that strategy of volume and speed, volume and speed, and we’ve ended up now doing 200,000 tests a day with a couple more labs, but it’s essentially the same labs. They’re just being scaled and scaled and scaled. And when you’ve got a laboratory, there are three things a laboratory can do, and they can only do two of them well. They can do volume – put through a huge number of tests per day. They can do speed and get the results out as quickly as they can. Or they can do quality tests. But you have to pick which two you’re going to focus on. We’ve focused on volume and speed. And again, that’s totally justifiable at the start of an epidemic when you’re trying to stop spread, and a small percentage of mistakes along the way are just really irrelevant to the situation that you’re in.
But from a pathology point of view, epidemiology 101 is when you get to peak deaths, you switch your testing strategy. You start with high volume, fast, and as sensitive as possible because you want to find every possible case. At peak deaths you switch strategy to quality testing and being specific because you want your results to be accurate at that point. We haven’t switched strategy. And the only way to switch or to do that, to get quality results is not to put the labs under even more pressure and shout at them and get cross. The only way to get a quality result from a lab is to compromise either on the volume going through every day or in the speed at which they have to turn them around.
Alex: What sort of evidence do we have that the accuracy has been compromised? What blips are we seeing in that data that tell us that these tests aren’t quite what they say they are?
Dr Craig: There’s a beautiful piece of evidence that’s just been produced by a physicist in Scotland called Christine Padgham, who is a force of nature and has gone carefully through all of the Scottish data. Public Health Scotland have been much more open with the data that they’re publishing, and they include in their publications the daily positive numbers, the daily negative numbers and the total number of tests done, so you can actually get an accurate percentage of positive tests per day. And when you look at the percentage of positive tests per day in Scotland, the percentage of positives is twice as high at the weekends as it is on a Monday. Now that cannot be anything to do with the disease.
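The day-of-week check described here is simple to run yourself. A minimal sketch – the daily counts below are entirely made up, standing in for the published Scottish figures:

```python
# Sketch: test positivity grouped by day of week. A virus does not know
# what day it is, so a systematic weekend effect points at the testing
# process, not the epidemic. Counts are invented for illustration.
from collections import defaultdict

# (weekday, positives, total tests) — hypothetical daily records
records = [
    ("Mon", 50, 10000), ("Tue", 60, 10000), ("Wed", 65, 10000),
    ("Thu", 70, 10000), ("Fri", 80, 10000), ("Sat", 100, 9000),
    ("Sun", 95, 8500),
]

pos = defaultdict(int)
tot = defaultdict(int)
for day, p, n in records:
    pos[day] += p
    tot[day] += n

positivity = {day: 100.0 * pos[day] / tot[day] for day in tot}
print(positivity)  # weekend positivity roughly double Monday's
```

In a real analysis you would aggregate many weeks of data per weekday before trusting the pattern.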
That’s to do with the laboratories being under extraordinary pressures. It’s to do with people. The PCR testing, which is the test that we use for COVID, can be an incredibly specific test with a low false positive rate, but it can also be incredibly difficult to actually do because the first step is to translate your RNA to DNA and then you double the amount of DNA in the sample. You double it, and you double it, and you double it until you’re at a billion or a trillion times the amount you began with. What that means is that even the tiniest, tiniest amount of cross-contamination from other things in the lab can mean that you get the wrong result.
Whenever you run a test, you’re definitely going to put a known positive in that run so you can make sure that the test worked properly. Every time they run a test, a positive control sample is being used. And if a little bit of RNA from that sample gets onto a glove, and gets onto the fridge door or something else around the lab, then every person that touches that fridge door is going to get contaminated, and the samples that they touch will get contaminated. The difference between a weekend and a Monday in a lab is that at the weekend you’re short-staffed, and people are tired, and the labs have all the problems that built up over the week hanging over them. On a Monday, people come in fresh-faced. They’ve had a rest over the weekend, and the lab is thoroughly cleaned, and then you get out new chemicals that are all brand new and clean, and you start again.
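The scale of the amplification – and hence the contamination risk – is easy to see numerically. A minimal sketch (the cycle counts are typical magnitudes for PCR, not taken from any specific protocol):

```python
# Each PCR cycle roughly doubles the DNA in the sample, so after n
# cycles the amplification factor is about 2**n. This is why a single
# stray molecule of positive-control material can become detectable.

def amplification(cycles):
    """Idealised amplification factor after a given number of PCR cycles."""
    return 2 ** cycles

for c in (20, 30, 40):
    print(f"{c} cycles -> ~{amplification(c):,}x")
# 20 cycles is roughly a million-fold, 30 roughly a billion-fold,
# 40 roughly a trillion-fold — the "billion or a trillion times"
# figure mentioned above.
```

Real-world efficiency is a little below a perfect doubling per cycle, but the exponential picture is the point.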
Alex: So basically people are just kind of turning up hung over on Saturdays and Sundays.
Dr Craig: Oh no, I think that’s really unfair. I think you have to appreciate that if you’ve increased testing to that degree, people have worked their socks off. They are working so, so hard. I don’t think they’ve had time to have a drink. So they’re exhausted.
Alex: I’m imposing my own fecklessness on doctors who I’m sure are doing a very good job. I’m sort of damaging my ability to get new work now. So there’s other data you brought up which was really interesting, which was there’s this correlation which you never see anywhere in biology which I think is…it relates to the number of tests performed and the number of infections that we’re getting. What was it?
Dr Craig: It was a period of time where the hospital tests done related to the number of hospital COVID deaths, and it was a really tight correlation. The hospital tests have ramped up much more gently than the community tests, but we’re still doing a lot. And we got to a point where every admission could be tested, which was great. And then we exceeded that point. So there was the ability to test people more than once. And understandably, if somebody comes in with a broken leg, you’ll test them once as protocol. We don’t normally test them again.
But if somebody is coming in coughing, you might use your spare test to test them again. If somebody is coming in in respiratory failure, they’re going to get more than one test. So there comes a point where the increased number of tests are no longer proportional to the increased number of people tested. You get to a critical mass, and then any further increased tests are used on people who are more sick. Then you start to see this relationship between the number of excess tests done in a hospital and the number of COVID related deaths in the hospital.
Alex: Wow. So basically the implication here is that…is nearly everything that we’re seeing a false positive test, even if it’s in hospital?
Dr Craig: I would hold back from saying that, but I would say that it cannot be excluded. The reality is that we have a problem with false positives, and the only way to clear that problem up is to start carrying out confirmatory testing and to sort out the labs. We need to put gateways in and say we’re not going to test everybody, we’re not going to test asymptomatic people, so that the volumes decrease and the laboratories can get on top of it. But only once you’ve got on top of it and done your confirmatory testing can you actually see what’s real out there. Because at the moment, the numbers that aren’t real are overshadowing the real ones, if they’re there.
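The arithmetic behind false positives “overshadowing the real ones” is Bayes’ theorem in disguise: at low prevalence, even a very specific test produces mostly false positives. A sketch with illustrative numbers – the prevalence, sensitivity and false positive rate below are assumptions for demonstration, not measured values:

```python
# Sketch of the "false positive pseudo-epidemic" arithmetic: the share
# of positive results that are genuinely positive (the positive
# predictive value, PPV) collapses when prevalence is low.

def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Fraction of positive test results that are true positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Assumed figures: 0.1% prevalence, 80% sensitivity, 0.8% false positive rate
ppv = positive_predictive_value(0.001, 0.80, 0.008)
print(f"PPV: {ppv:.1%}")  # roughly 9% — about ten false positives per true one
```

Run the same function with 5% prevalence and the PPV jumps above 80%, which is why the appropriate testing strategy depends so heavily on how much disease is actually circulating.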
Alex: And when the false positive story kind of broke a month or two ago, I think Julia Hartley-Brewer famously questioned it on her radio show, and the BBC and, I think, maybe the Huff Post covered it as well – I think Tom Chivers wrote something on this. Basically the determination, rather than to examine and delve further into potential corruption of the data, was to poo-poo the notion of false positives affecting the data at all, which tells us a lot about journalistic priorities and the cognitive biases that people fall into.
You know, there’s a famous saying. It’s very difficult to make a man understand something if his job depends on not understanding it. And there’s just a real commitment to rubbish any of the questions, to shut down the questions rather than to investigate what they’re saying, I think at least. So one of the things that people often say is, “Oh, your false positive rates, they don’t really count if the people you’re testing are symptomatic,” you know, because that doesn’t [inaudible 00:19:11] with data as much. I would ask you, does it now?
Dr Craig: The trouble is with COVID that the definition of what it is was back to front. The way that you set up a diagnostic test is you define a disease based on symptoms and signs and what it looks like to a doctor, and then you find a test. You work out if the test is any good by seeing if it can pick up this picture. But in COVID it was back to front. We defined the test, and then the symptoms were worked out after we decided who was positive with the test.
So the list of symptoms is as long as your arm, and you’re allowed to be asymptomatic as well. Anyway, leaving that aside, there are a lot of symptoms that count. With that many symptoms you’ll find a lot of people have those symptoms. I mean, we know from the ONS survey data, when they published who was symptomatic and asymptomatic, that 11% of the people were symptomatic with some symptoms at any one time because, you know, they’re common symptoms. So if your rate in the asymptomatic population is lower than in your symptomatic population, that does still make it look like you found something, right?
But the way that the testing works is that you’re looking for the sequence of letters in the RNA that is unique to COVID, and it’s a great test when it’s done well, as I said before. But when it’s done badly, other sequences of letters can cause a positive. DNA binds certain letters very, very accurately. A binds to T, C binds to G, and they’re really tight binding. But there’s a certain amount of binding that can happen to the wrong letters, so if you’ve got a misspelling that’s a few letters out, it can still bind, and you can still get a positive result. That’s especially true if you’re doing all these extra cycles before deciding whether or not it’s positive.
What that means is that there could be other viruses out there that cross-react with COVID testing and produce a positive test in someone with symptoms when actually it’s a different virus causing the symptoms. And, you know, we know that this is a risk, so when you make a new test you check for that. And what we would normally do is check by getting virus samples and running the tests and seeing if any of them go positive.
But what’s mostly been done for COVID is that people have checked DNA databases and looked to see how many letters match or don’t match, and said, “No, we’re okay. We can run with this,” which, as I said before, is, you know, justifiable. And then the laboratories, before setting up their testing, did do wet lab testing, so all of the labs individually will have tested against samples of other viruses. But they’re testing against a range of other viruses, and it’ll be one sample of each type of virus. And that’s fine when you’re testing a high prevalence population and you’re testing people who are likely to have it.
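The in-silico check described here, comparing a primer’s letters against sequence databases, is essentially a mismatch count. A minimal sketch (the sequences below are made up for illustration; they are not real primers or genomes):

```python
def mismatches(primer, target):
    """Count positions at which the primer and a same-length target region differ."""
    return sum(a != b for a, b in zip(primer, target))

def best_match(primer, genome):
    """Slide the primer along a genome; return the fewest mismatches found.

    A low count against the wrong organism flags possible cross-reactivity."""
    return min(mismatches(primer, genome[i:i + len(primer)])
               for i in range(len(genome) - len(primer) + 1))

primer = "ATCGGATC"                # toy primer
genome = "TTATCGGTTCGGATCA"        # toy off-target genome
print(best_match(primer, genome))  # a near-match that is only one letter out
```

As the interview goes on to note, a near-match found this way is only a warning sign; whether it actually amplifies in a real reaction is exactly what wet-lab validation has to establish.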
But when you move to doing mass population screening, which is what we’re doing, you have to have a different threshold for your accuracy. And the only way you can be certain that we’re not getting cross-reactions with other viruses is if you test hundreds of samples of each of those viruses, because you’re only going to see that, say, five percent of, you know, a cold virus is going to give you a positive if you’ve tested hundreds of those samples. We’ve tested tens.
Dr Craig: Well, when I say tens, I mean, like, 10 or 20.
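Her sample-size point is simple binomial arithmetic. If some other virus cross-reacts, say, 5% of the time (the hypothetical figure from the interview), the chance of ever seeing that cross-reaction depends heavily on how many samples you test:

```python
# Probability of seeing at least one cross-reactive positive among n samples,
# each independently cross-reacting with probability p. The 5% rate is the
# interview's hypothetical; the sample sizes are illustrative.
def p_detect(p, n):
    return 1 - (1 - p) ** n

for n in (10, 20, 100, 300):
    print(f"n={n:>3}: chance of detecting the cross-reaction = {p_detect(0.05, n):.0%}")
```

With only 10 or 20 samples per virus there is a substantial chance that a genuine 5% cross-reaction is simply never observed during validation, whereas hundreds of samples make it near-certain to show up.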
Alex: So effectively we’re just… I mean, we all knew false positives were an issue, but I didn’t realize it was this much of an issue. There was an article that came out, I think it was in “Full Fact” recently, that was saying, “No, it doesn’t pick up the common cold. It doesn’t pick up other coronaviruses.” But it seems to me they weren’t really addressing the right question. They were saying the test isn’t meant to pick up common colds or other viruses, but what you’re saying is that, basically, the test just occasionally, unintentionally, and very rarely does.
Dr Craig: I think it can do. So let me tell you a story about a false positive pseudo epidemic. This is a lovely story. It’s my favourite one.
Alex: I was looking forward to this.
Dr Craig: It’s a hospital in New Hampshire, and one of the doctors had a cough. It was a really bad cough. It’s one of those coughs where you cough a lot, and then you have a sharp intake of breath at the end because you’ve, you know, been coughing for so long. They were at lunch with a doctor colleague who thought, “Oh, hang on a minute. That reminds me of whooping cough.” So whooping cough in children, the whoop is after a really, really long period of coughing where they’ve run out of air, and they gasp for air. That’s why it’s called whooping cough. Right? So they said, “This could be whooping cough. We ought to check.”
So they went off to the lab and did a PCR test to see if this doctor had whooping cough, and it came back positive. This set off this kind of panic, and they just decided they’d better start screening the hospital because they had vulnerable babies and vulnerable old people who might catch this horrible bacteria. Not a virus, but anyway. And they started to test members of the staff and patients who had symptoms, and they found some more positives. And then they tested more, and they found more positives. By the end, they had tested 1000 people. They had got 146 positives back, so a 14.6% positive rate.
But one of the doctors was careful and clever enough to say, “Let’s have a backup and try to culture the bacteria from these samples as well.” So as well as testing for PCR, they tried to grow it in the lab and see if any of them would grow. None of them grew. None of them. All of that 14.6% were false positives, and it looked for all the world like an epidemic. After the news had broken that the testing was wrong, it took a long time before people could get their heads around what had happened because there was this collective delusion that they were all in. And, you know, I’m a bit scared when that happens here, actually, what the results will be.
Alex: Well, I think I can tell you. The results will be they bring in heavier and heavier restrictions. They ramp up testing even more, and it will throw up even more false positives. And when people try and question it, they’ll try and shut them up. It’s just a guess.
It’s worth talking about here, actually. I just did a quick Google of “whooping cough false epidemic” and two articles come up straight away. One is in “The New York Times,” which has a wonderful title: “Faith in Quick Tests Leads to Epidemic That Wasn’t.”
Dr Craig: Yeah, that’s the one. But there are others. There’s another whooping cough one where the false positive rate was 74%. The thing about this is that in retrospect people ask, “Well, how did it go so wrong?” And that 74% went so wrong because of very high cycle thresholds. But the 14.6%, I’m not sure exactly how that went so wrong. People speculated that there was a problem with one of the reagents or that there was some kind of cross-contamination issue, but they don’t actually know for certain exactly how it went wrong. But the point is it can, and the only way to be sure that we’re getting the right test results is confirmatory testing.
Alex: Can they just test one PCR test against another done in a different lab?
Dr Craig: No, because if there’d been any problems up until the point where the swab reaches the lab, then that’s going to still be a false positive.
Alex: So how do you do a confirmatory test then?
Dr Craig: You have to have the confidence to say, “We’re not going to diagnose any patients until they’ve had two positive tests, separate days, separate positives.”
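The power of that rule comes from multiplying small probabilities: if the two tests’ errors are independent (which is exactly why Dr Craig insists on separate days and separate samples, since a contaminated swab or reagent batch breaks the independence), requiring two positives squares the false positive rate. The numbers below are illustrative assumptions, not measured values:

```python
# Illustrative single-test error rates; not measured values.
fpr = 0.008   # assumed false positive rate per test
sens = 0.8    # assumed sensitivity per test

# Requiring two independent positives before diagnosing:
fpr_two = fpr ** 2      # false positives become vanishingly rare
sens_two = sens ** 2    # at the cost of missing more true cases

print(f"FPR: {fpr:.3%} -> {fpr_two:.4%}")
print(f"sensitivity: {sens:.0%} -> {sens_two:.0%}")
```

The trade-off is immediate: confirmation cuts false positives by orders of magnitude while sacrificing some sensitivity, which is a good bargain precisely when false positives dominate the counts.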
Alex: See, the thing is though, what you said was that the way they cracked this terrible problem of the fake whooping cough epidemic (and that is surprisingly difficult to say) is that the doctor decided to grow a lab culture, which to me sounds very much like a gold standard test, because either it’s going to grow or it’s not. It’s just 100% extremely accurate. COVID doesn’t have, as far as I know… Actually, I’m going to phrase this question differently. Does COVID have an alternative test we can check it against? Are we stuck with PCR?
Dr Craig: No, there is another test. You can also culture a virus. So what you do is you put the material in with some cells in a lab, and a virus will go into the cells and replicate, and then it will burst the cell open. So you just measure for the cells bursting open. And that has been done. That’s absolutely being done, but it gets done in, like, really kind of high tech, safe laboratories, and it’s hard to do. So you can’t do that at scale, but you can do that on a sample of positive tests and prove the point.
Alex: And do you know if that’s being done at all?
Dr Craig: I don’t know.
Alex: I mean, it probably isn’t.
Dr Craig: Actually, there’s one thing that is being done, which I think is why that’s not being done. The thing that is being done is that we’re doing whole genome sequencing on some of these samples. What that means is that instead of looking for just part of the RNA of COVID, the sample is amplified up in the same way, the doublings, and then you read the letters of every last bit of DNA in that sample so you can see what’s in there. When you do whole genome sequencing you can compare what’s going on, what mutations have happened over time, and you can fit it into the sort of family tree of COVID. If you’re getting samples through that have got positive whole genome sequencing results, it’s really convincing that it’s real. But, of course, if a false positive is down to cross-contamination from the positive control, it’s still going to get a whole genome sequence.
Alex: Because you’d think, the thing that surprised me with this crisis, I don’t like calling it a pandemic because that suggests that we’re still in it, and I’m not sure that we are. But the thing that surprised me is with the £300 billion that we’ve already spent, surely they could set aside, you know, a measly sort of half a billion to sort through these confirmatory tests or to sort of test what they’re doing. It doesn’t seem to be a priority at all.
Dr Craig: No. I mean, if you look at the testing priorities, the priority continues to be to ramp it up and to aim for the moonshot and to have a million tests a day and have us all be tested every morning. It’s completely, like… they clearly have not had advice from somebody who understands this testing. And the people on SAGE that are giving advice are predominantly physicists, chemists, and mathematicians. And for physicists, chemists, and mathematicians, a false positive rate is the lowest positive you’ve ever had in your testing. The fact is the kind of work they do is on really, really accurate testing equipment, and they have really low false positive rates, and it’s a constant. And that’s not the situation in medicine.
Alex: And basically, and this data, these rates are potentially changing all the time. You said yourself they change from a Monday to a Saturday in Scotland.
Dr Craig: Yes.
Alex: How are we going to get out of this? I’m a little bit worried.
Dr Craig: Well I think, to be honest, I’m optimistic because…
Alex: Oh really?
Dr Craig: Yes. The data will start to do crazy things. It’s already started to do crazy things. So as well as the deaths being out of proportion to the severe cases, one of the other things that’s starting to happen is that the number of predicted cases is starting to be lower than the number of cases diagnosed. It’s not quite there yet, but that’s the trend that we’re headed in. When you really do have COVID, PCR testing is reliable for about 20 days. Obviously we’ve heard stories about it going on and on for months, but in most patients you have a 20 day window of it being picked up. And the number of predicted cases in the East Midlands is the sort of number of new cases per day that you would see over the course of a week.
And if you go and look at that week and ask how many cases we diagnosed, assuming a case can be picked up on any one of those 20 days during the course of the illness, then we’re pretty much on par. So more crazy things will happen with the data that will be undeniable nonsense. And then, you know, once you get to that stage, people have to start thinking differently because you can’t make sense of these things. There was a lovely article in “The Daily Mail,” and I’m sure it was from the best of places, but it shows how crazy stuff has got. The news broke that the time to death had got worse, right? It had been an average of two weeks between diagnosis and death in hospital, and now it was one week. And they managed to say that this was because treatments had improved. Am I getting this the wrong way around? Let me have a think.
Alex: I think they probably got it the wrong way around.
Dr Craig: No, they said treatments had improved, right? Because patients who would have died after a while are now surviving because of these brilliant treatments…
Alex: Oh, so only the very ill ones that are dying.
Dr Craig: Yes. You’re like, that’s such convoluted thinking. It’s such convoluted thinking, and we’re going to hear more and more convoluted thinking like that because unless you realize the reason that it’s changed is because you’re diagnosing something else completely, then you have to have convoluted thinking to make sense of that kind of data.
Alex: I just find it everywhere. I find it constant, the convoluted thinking. Even the non-pharmaceutical interventions, i.e. the lockdowns, the circuit breakers, all of that stuff, it just results in convoluted thinking. You know, the Welsh thinking, “Yeah, we’re gonna ban books. That’ll do the trick.” And this phenomenon of long COVID as well, it’s as if they’ve kind of lost the battle on the infection fatality rates, and they’ve had to concede that it is lower than they thought it was. But now they’re saying, “Well, you know, this could cause, you know, long term disability.” You just have to say, well, A) no one has had it for more than six months anyway, so how could you possibly know that? And, B) I mean, you’re the pathologist. Don’t all viruses have this?
Dr Craig: Pneumonias are horrid. If you get a pneumonia, you’re going to be sick for six months no matter how old you are. It’s a really, really horrid thing to happen. It takes a long time to get better from. And I think you have to wait six months before assessing whether there’s anything more. And, yes, you know, this was a horrible illness. And actually I disagree with you about the infection fatality rates. I’m kind of an outlier in the community that have written on this. I think the infection fatality rate was higher than we now think it was.
Dr Craig: Because the calculations done more recently have been diluted with false positives.
Alex: Oh, right. Okay.
Dr Craig: When COVID hit in spring, it was a really horrid killer, and we’ve kind of forgotten quite how bad it was. If you go back and try and remember how we were feeling in March and how the news came out… So let me take you through the timeline, actually. On the 21st of March, news broke that 21 year old Chloe Middleton, who was healthy, had died at home of COVID, which had us all slightly on edge, I think. Then on the 28th of March, Martin Egan, who was a bus driver, died, and the first serving NHS surgeon had died. The next day there was another death of a bus driver, and a 55 year old healthy NHS physician died. Five days later, we were told five Transport for London bus drivers had died. The next day, five NHS staff had died. It was really quick, and it was killing young people who should not have been dying, and it was worth being scared of in March and April.
Dr Craig: I think when we actually managed one day to filter out what was real and what was not real, we’ll see that it did have a significant infection fatality rate. It’s just that since then, what we’ve diagnosed is not it.
Alex: But then fundamentally though, the prevalence can’t have been as big, and it can’t ever have been as big because it’s largely passed through the population now. I mean, the big key metric here to look at is excess deaths, right?
Dr Craig: Right. Let’s come back to excess deaths though because the thing about prevalence is that I totally agree it passed through the whole country. Every part of the country had excess deaths in spring. Liverpool has the same 14% excess deaths this year as London. This kind of story we are told that it infected some areas more than others doesn’t really match with that data of excess deaths. But the way you calculate your infection fatality ratio is based on how many people were symptomatic. That’s what we mean by who had it. And, you know, we’re never going to know for sure because we weren’t testing, and so we don’t know for certain. We don’t have great antibody testing to know for certain.
But what it doesn’t measure when you’re calculating this is people who were immune already. And I think we had significant numbers of people who couldn’t catch it. And when it passed through the country, it wasn’t 100% of us that were susceptible. It just wasn’t. There’s prior immunity from other things that we’ve seen. Our immune systems are amazing, and they work for most of us. You can see that also in the data.
There was a nice meta-analysis published on household transmission. For people who had a positive COVID test, researchers went and looked in their households (this is around the world) and found out how many of the people they lived with caught it. And the range was huge. It was from 50% of household contacts catching it down to 5%, which seems rather low, almost as if maybe you’re not testing correctly. But going back to the 50%: if 50% of household contacts catch it, it means that the rest are immune. They must be immune, especially when we know how quickly this disease spread. It was a very contagious disease, there’s no question about that, and how quickly it went through our country. So it’s a contagious disease that not everybody catches.
Alex: Well, famously there was the Diamond Princess, which is your kind of perfect petri dish to see how it spreads, because I remember stories about infections coming out of cruise ships. You get these norovirus outbreaks and they would totally tear through the whole ship, because if you want an environment where a disease can spread, a boat is pretty much as good as you’re going to get. But what was it, a huge proportion of people, I can’t remember, they just didn’t get it.
Dr Craig: No. They also at that point had the stories breaking about patients testing positive who had no symptoms, and some of those patients went on to get symptoms, which, you know, means that they probably had it, but others never had symptoms. There’s been so much confusion about this asymptomatic thing that, we’ve just gone into some other world which is different for any other disease. Yes, you can have a positive PCR test and be asymptomatic. Yes, you can even have a positive viral culture and be asymptomatic. So that means that there’s live virus that can get into cells, and people can have that in them and be asymptomatic.
But that does not mean that they’re infected. It doesn’t mean that they’re diseased in the way that we normally talk about disease because they have no symptoms. It means that they’re immune. That is what immunity is. Immunity is when a virus invades, it doesn’t bother you. The stories in the scientific literature about transmission, which is what we should worry about, say yes, these asymptomatic people can have the virus. But can they spread it? And that’s the critical question.
There are two schools of thought on that. So if you take all the scientific literature published about transmission you can put them into two piles: one that shows they do not transmit (you can’t spread it unless you’re coughing, which sort of makes biological sense) and the other that says it’s a serious problem. But if you look again at the pile of papers that say it’s a serious problem, they were all published in China, and I think we just have to have a little bit of skepticism about that when all the other literature contradicts it.
Alex: Well, regular listeners of my podcast will know that nothing coming out of China should be trusted on anything related to this. And as Michael Sanger, one of my former guests, showed, not everything that comes out of China is obviously coming from China. And that is a real danger. I think I said to you off air that I don’t get conspiratorial about this. I do think we are in a storm of cognitive biases and motivated reasoning. Even the great reset stuff and all of that: it’s just the people who have been spouting on about this stuff for years seeing this as an opportunity. It’s no different.
But if there is one bad actor, it is certainly the Chinese Communist Party, and they have the motive and the means to do that. Although, having said that, this podcast is more about scientific issues than politics, so I should try and keep them separate. Now, we didn’t actually quite get into this: what is it that tells us the epidemic has passed through the population? Is it those excess death figures? Which is quite a nasty little blip. It’s a good, what, 20, 25 years since we’ve had something that bad hit the population?
Dr Craig: Is your question really, how do we know it’s over?
Dr Craig: The one thing to look at is when hospital deaths peaked around the country. You can look at it by hospital trust, you know; each of them has its own little Gompertz curve with a maximum, and you can say, “Well, this is when peak deaths happened.” And the first peak was in Brighton on the 28th of March, way too soon for lockdown to have had an effect. And then it spread, not in a kind of south-to-north way; it was all over. I think there were lots of different seeding events.
But the last places to have a death peak in their hospitals were Hull and Rotherham on the 24th of April, Bradford on the 26th of April, and West Suffolk on the 28th of April. And the thing about those places is that when you do pandemic modelling, they are the places that get the disease last. And they were getting it so long after Brighton that you can see it was just spreading throughout lockdown. Lockdown didn’t have any effect at all. You can confirm that it’s come, gone, killed people, and then just disappeared, because it hasn’t come back. That’s fundamentally the test of immunity. Is it coming back? It should’ve come back at the VE Day celebrations, or in the marches, or when the beaches were packed. You can’t keep saying, “Well, it’s going to come back tomorrow.” It didn’t come back because it’s gone.
Alex: It’s gone. But we’re still stuck in this situation.
Dr Craig: And it’s not gone forever; you can’t get rid of a virus forever. It’s not gone gone, but the epidemic part of it is gone. After an epidemic has come and gone, the population is no longer susceptible, because people have either been killed or become immune. It’s just the reality of it, harsh though that is. And therefore, if the virus does have a winter seasonality, there may very well be cases again through the winter, but it’s a different story. That’s just like flu every year. It’s a seasonal infection. It’ll come, but it’s not coming into a susceptible population anymore. It’s coming into a population that has a bit of immunity.
Alex: I suppose that’s the, how can I put it, the slander that the anti herd immunity advocates say is that herd immunity means the eradication of a disease whereas that’s not actually the case, is it? I think [inaudible 00:47:58] calls it the epidemic equilibrium where it just kind of sinks back into the background.
Dr Craig: Yes. The herd immunity deniers keep talking about measles, saying, “Well, you know, we only got control of measles because of vaccination.” And that’s kind of true. The thing with herd immunity is that the number of people who have to be immune depends on the R value, the R0 value: how contagious is this disease? Measles is really, really contagious. It’s got an R0 of about eight, so you need 90% plus to have herd immunity. And the problem with measles is that there are babies arriving all the time, and they’re not immune, so in order to have herd immunity you have to keep that vaccination level really high. But the R0 value for COVID, you know, is debatable. In fact, the range of estimates is quite wide, but three seems a reasonable guess, and that is how you get to the 60% herd immunity figure, which also seems reasonable. So, no, you don’t have to have every single person in the community being immune.
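The relationship Dr Craig is using here is the textbook homogeneous-mixing formula, HIT = 1 - 1/R0. A quick sketch (the R0 values are the rough ones quoted in the interview, and, as discussed earlier, heterogeneity of contacts can push the effective threshold lower than this classical figure):

```python
# Textbook herd immunity threshold under homogeneous mixing: HIT = 1 - 1/R0.
# Heterogeneous contact patterns can lower the effective threshold;
# this is only the classical calculation.
def herd_immunity_threshold(r0):
    return 1 - 1 / r0

print(f"measles (R0 ~ 8): {herd_immunity_threshold(8):.1%}")
print(f"COVID   (R0 ~ 3): {herd_immunity_threshold(3):.1%}")
```

An R0 of 8 gives a threshold of 87.5%, consistent with the “90% plus” figure for measles; an R0 of 3 gives 66.7%, in the same ballpark as the 60% figure quoted.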
Alex: Right. So we’ve spoken for quite a while. There’s something I’d like to ask you personally. I’ve been kicking around in this lockdown skeptic world since, I think, probably April, but you’re a real newcomer. It’s amazing: your Twitter account has only been around since September, and you’ve already got quite a large following, which encourages me a lot, because the podcasts where I interview scientists always get really, really high views, or listens, rather.
And, you know, your Twitter account has got a lot of information on it, and it shows there’s a real hunger for that. So it’s really one in the eye for these kind of media commentators who think everything has to be dumbed down, which I actually think is quite hopeful for the future. It shows there is appetite to sort of digest this stuff and to disseminate it. So why did you decide to speak out, and why did you speak out when you did?
Dr Craig: That’s a really reasonable question. I realize I’m the latecomer to the party, and a lot of people have been speaking out since…you know, they spotted it way earlier than I spotted it. Essentially, I have four children, and I was really, really busy. I was trying to homeschool four children. And we went through the summer holidays, and then finally September came, and they went back to school.
The kind of little questions I’d had niggling at the back of my mind, about what was going on and whether we were just getting false positives through the summer when the positive rate was flat, I suddenly had time to explore. I started digging into the data and testing it, saying: look, if these were false positives, what would that mean? Can we see changes in the data, like the fact that when COVID deaths happened in spring they were 60% male, while in the summer the deaths labeled COVID were 50/50? That sort of thing is suspicious, so I kept going at it, testing it, and concluded for myself that they were false positives over the summer. And then I wrote to Carl Heneghan, who I was at medical school with and who I hadn’t spoken to for 20 years.
Alex: Really? Don’t just pass that. What was he like as a young man?
Dr Craig: In university?
Dr Craig: In his way, he was much cooler than me.
Alex: I bet, was he into, like, The Stone Roses and stuff like that?
Dr Craig: I wouldn’t comment on his musical tastes.
Alex: I bet he went to gigs. He must’ve done.
Dr Craig: Sure. He was a good guy.
Dr Craig: Yes. So I wrote to him and I said, “Look. I think I found this. What should I do?” And he said, “Just get on Twitter, get it out there.” And so that’s when I joined Twitter. It was sort of mid-September, trying to spread the messages. And since then I’ve been digging and digging and digging through the data. I feel like actually I need to change tack. We need to. Now, well, there’s enough evidence now. There’s enough.
What matters is communicating it. I don’t think I’ve been terribly good at communicating it, even though you’ve said flattering things, because I communicate with graphs and with numbers. I communicate as a scientist, which isn’t accessible to everybody. And I think I need to concentrate on getting the message out in a way that everybody can understand, because, you know, my followers are physicists and mathematicians, and they’re not the only people. We need to get the message out to the powerful people.
Blind faith in authority is the greatest enemy of truth.
On December 20th the UK Government put 44% of the English population into Tier 4 lockdown, cancelling Christmas get-togethers for 24m people, following a recommendation from the New and Emerging Respiratory Virus Threats Advisory Group (Nervtag).
Nervtag had identified a new variant of the novel coronavirus in the South East of the country, which was 70% more transmissible than its predecessor, carried a viral load up to 10,000x higher and which the primer on the widely used Thermo Fisher TaqPath PCR machines failed to pick up.
However, these conclusions are highly dependent on the interpretation of the data and logically (Occam’s Razor) none of the claims made at that time about the new variant’s increased transmissibility, higher viral load or ability to escape detection appear justified.
The PCR test
The primers used to detect short gene sequences in reverse transcription polymerase chain reaction (RT-PCR) machines under the COVID-19 protocol search for three gene targets: ORF1ab (or just ORF), N and the ‘spike gene’, S. A positive test result requires at least two of the three genes to be found, but since amplification is run to a very high cycle threshold (Ct) of 40-45, known as ‘the limit of detection’ (LoD), usually all three genes are found, albeit at slightly differing Ct values. However, in October researchers started to notice that an increasing number of PCR results, though positive for ORF and N, were failing to pick up the S gene at all, suggesting a mutation to the S gene that meant it could no longer be detected by the PCR primer. Furthermore, this ‘S-dropout’ variant of concern (VoC) was concentrated in the South East, having originated in the Medway area of Kent (right-hand side of Chart 1 below).
Chart 1: England local authority daily positive tests (Apr-Dec)
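The call rule described above can be sketched as follows (the 40-cycle limit and the Ct values are illustrative of the protocol as described, not any specific laboratory’s configuration):

```python
CT_LIMIT = 40  # illustrative limit of detection, in amplification cycles

def call_sample(ct):
    """Classify one sample.

    ct maps each gene target ('ORF1ab', 'N', 'S') to its Ct value, or None
    if the target never amplified. Positive = at least two targets detected;
    S-dropout = positive on ORF1ab and N with no S-gene detection."""
    detected = {gene for gene, value in ct.items()
                if value is not None and value <= CT_LIMIT}
    positive = len(detected) >= 2
    s_dropout = positive and "S" not in detected
    return positive, s_dropout

print(call_sample({"ORF1ab": 22, "N": 23, "S": 22}))     # ordinary positive
print(call_sample({"ORF1ab": 24, "N": 25, "S": None}))   # S-gene dropout
print(call_sample({"ORF1ab": 38, "N": None, "S": None})) # only one target: negative
```

Under this rule an S-dropout still counts as a positive result, which is why the TaqPath primer failure did not suppress case counts but instead became a convenient proxy for tracking the variant.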
The initial Italian variant had burned itself out by end-June, and hospitalisations were down 97% from their April peak (Wave 1). Since September, though, a new variant, D614, picked up by Spanish holidaymakers, had been spread by students returning to university in early October. This variant too appears to have been in decline from end-October, aided by the November 5th to December 2nd lockdown. The new S-dropout VoC, which incidentally only evades the primer supplied with the widely used Thermo Fisher TaqPath PCR machine (other makers’ primers are still identifying the S gene), has now been traced back to late September but has become ever more predominant throughout the South East. However, the virus is constantly mutating and there have been over 4,000 different variants worldwide to date, so what is it that makes this new variant so special?
New variant (relative) growth rate
On December 14th, UK Health Secretary Matt Hancock told parliament that the new variant of coronavirus was “increasing rapidly. Initial analysis suggests that this variant is growing faster than the existing variants…predominantly in the South of England.” The UK Government’s New and Emerging Respiratory Virus Threats Advisory Group (Nervtag), which reports to Chris Whitty the Chief Medical Officer, announced on December 18th that the “growth rate of (the variant under investigation) VUI-202012/01 is 71% (95%CI: 67%-75%) higher than other variants.”
Higher viral load
Almost immediately, on December 20th, a Tier 4 lockdown was imposed on the 24m residents of London and the South East, effectively ‘cancelling Christmas’ for 44% of the English population. Over 50 countries responded by banning flights to or from the UK. The same day, Susan Hopkins, the PHE liaison with NHS Test and Trace, told the BBC that we “won’t know for definite” if the new variant is more deadly, but that it does have a “higher viral load”, though this is merely inferred because it tests positive at a lower Ct. Susan Hopkins is also the one who quashed the false positive story last summer, despite disease incidence having fallen as low as 0.01% (zero?) by end-June according to the ONS survey, whilst Pillar 2 tests had positivity consistently > 1.4% (the probable false positive rate?). The ONS has subsequently admitted that it doesn’t actually “know the true sensitivity (FNR) and specificity (FPR) of our nose and throat swab test.”
70% more transmissible?
The Nervtag ‘70% increased transmissibility’ estimate came from a Public Health England (PHE) technical briefing, not published until December 21st, that used PCR tests that were positive for the two genes ORF-1 and N but negative for the S gene as a proxy for the variant of concern (VOC). The authors then “applied the models to estimate the association of VOC frequency and reproduction number (R). This analysis shows an increase of Rt of 0.52,” which raises the distinct possibility that we have a correlation-causation problem here. Is increased transmissibility leading to an increase in the observed Rt, or is an increase in the model’s Rt assumption feeding back into an implied increase in transmissibility?
Never knowingly under-estimated
What is also of note is that three of the authors of the PHE paper (Meera Chand, Wendy Barclay and Neil Ferguson) also sit on the Nervtag committee. So, they were effectively reporting on their own non-peer-reviewed and, at that stage, not even published work.

Neil Ferguson, you may recall, is the creator of the infamous model, rumoured to be more than a decade old but whose parameters are yet to be released for peer review, that predicted half a million UK deaths (2m in the US) in the absence of lockdown, with a ‘best-case scenario’ of 1.1m US deaths even with lockdown; a measure he originally argued doesn’t save lives but merely ‘flattens the curve.’ Furthermore, this Dr Strangelove of epidemiology has, as they say, ‘form.’ Back in 2001, his foot-and-mouth modelling recommended culling over vaccination (thankfully he has moderated this strategy for COVID-19), which was responsible for the slaughter of 6m animals. The following year, his BSE model estimated a worst-case scenario of 150,000 UK deaths from vCJD (actual deaths: 177), which led to another mass livestock cull. In 2005 he told the Guardian that the worst-case scenario for global H5N1 bird flu deaths was feasibly 200m (actual deaths: 282); and in 2009 he initially forecast a worst-case scenario of 65,000 UK deaths from H1N1 swine flu (actual deaths: 457). So, let’s just say Prof. Ferguson’s models tend to have an extremely high upper-bound bias.

The man’s inherent honesty is also in question. He was forced to resign from the Scientific Advisory Group for Emergencies (SAGE) after being caught entertaining his married lover within the 14-day self-isolation window following a positive test and the onset of COVID-19 symptoms. Yet to this day, his infamous Covid model parameters remain secret and non-peer-reviewed, whilst he remains an unapologetically influential figure within both PHE and Nervtag, which makes a bit of a mockery of his high-profile ‘resignation’ from SAGE.
Now, most surprising of all, in spite of his history of extreme worst-case scenarios, eliciting extreme policy response by fearful politicians, his research for PHE now seems to be going, via Nervtag, straight into policy without being either published or peer reviewed.
Nevertheless, the PHE study reported that “it is highly likely that (spike variant) N501Y is enhancing the transmissibility of the virus” leading Nervtag to conclude, three days earlier, that it had “moderate confidence that VUI-202012/01 demonstrates a substantial increase in transmissibility compared to other variants” (my bold). On Christmas Eve, the Centre for Mathematical Modelling of Infectious Diseases at the London School of Hygiene and Tropical Medicine confirmed that according to their model, the new variant was 56% more transmissible, though thankfully no more lethal, than the strain it was replacing. This in turn led Prof. Andrew Hayward, another member of Nervtag, to tell the BBC on December 28th that “a 50 per cent increase in transmissibility means that the previous levels of restrictions won’t work now. We are going to need decisive, early, national action to prevent a catastrophe” (assuming, of course, that an extended Tier 5 lockdown isn’t in itself a ‘catastrophe’).
The logic test
The Neil Ferguson/PHE study noted that during November (Weeks 44-48), tests that were positive for ORF and N but ‘S gene negative’ were proliferating, growing on average 70% faster than the more common variant (see blue line in Chart 2 below). Given the scatter plot, this doesn’t look like the most robust statistical conclusion to draw. Of more concern, however, is the fact that the growth rate of the S-dropout is being measured against the growth rate of the older variants, which itself appeared to be in decline (see Chart 1). The inappropriateness of this comparison is exacerbated by the differing geographical distribution, with the old variant predominantly found in the North of England and the new S-dropout variant in the South. If the Northern infection is naturally in decline and a new infection is blooming in the South, we would logically expect the growth rate of the old variant to be slowing (R < 1) and that of the S-dropout to be accelerating (R > 1). If so, then comparing the two would naturally yield a faster growth rate for the S-dropout, because the two variants would be at different stages of their epidemic cycles (Gompertz curves). Crucially though, this would not necessarily imply that the S-dropout was any more, or less, transmissible than its predecessor.
Chart 2: Relative growth rate of ‘S-dropout’ over variant D614G
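The phase argument can be sketched numerically. The toy model below (all parameters are illustrative, not fitted to UK data) puts two variants on identical Gompertz curves, i.e. identical intrinsic transmissibility, with the second simply seeded 80 days later, and compares their week-on-week growth rates on the same calendar day:

```python
import math

# Toy Gompertz cumulative-incidence curves: N(t) = K * exp(-b * exp(-c*t)).
# Both "variants" share IDENTICAL parameters (same intrinsic transmissibility);
# the newer one is simply seeded 80 days later. All numbers are illustrative.
K, b, c = 100_000, 20.0, 0.08

def cumulative(t):
    """Cumulative cases t days after the variant was seeded."""
    return K * math.exp(-b * math.exp(-c * t))

def weekly_growth(t, offset=0):
    """Week-on-week growth in daily new cases, for a curve seeded `offset` days late."""
    def daily(u):
        return cumulative(u - offset + 0.5) - cumulative(u - offset - 0.5)
    return daily(t) / daily(t - 7)

t = 100  # calendar day on which both variants are observed
print(f"old variant weekly growth: {weekly_growth(t):.2f}x")             # past its peak, R < 1
print(f"new variant weekly growth: {weekly_growth(t, offset=80):.2f}x")  # still climbing, R > 1
print(f"apparent relative growth advantage: "
      f"{weekly_growth(t, offset=80) / weekly_growth(t):.1f}x")
```

Because the older curve is past its peak while the newer identical one is still climbing, the newer variant shows a large ‘relative growth advantage’ even though nothing about its transmissibility differs.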
How then, if increased transmissibility is not the culprit, do we explain the surge in new positive tests, which reached 57,725 on January 2nd? The most salient point is that the number of tests carried out has leapt by 50% since early November. With a largely asymptomatic disease like COVID-19, the more tests carried out, the more positives are returned, especially when the authorities target testing capacity at the newest outbreak areas. There were 445,000 daily tests in the week to December 21st (the most recent data available at the time of writing) and 36,410 a day came back positive (a positivity rate of 8.2%). Compare that to November 4th, the day before lockdown, when the 7-day average number of daily tests was 298k and the average number of daily positives that week was 23,763 (8.0% positivity). So there has been no real change in positivity, despite the leap in “new cases”, not least because there has been no real change in disease incidence either, which is still ~1.2%, the same as its pre-November-lockdown peak, having bounced back after restrictions were lifted on December 2nd (see Chart 3 below). What there is no sign of in this data, though, is any increased transmissibility.
Chart 3: Estimated COVID-19 incidence in UK population (%)
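The positivity arithmetic is easy to verify, using the figures as quoted:

```python
# Sanity-check the positivity arithmetic: 7-day averages for the two dates cited.
tests_dec, pos_dec = 445_000, 36_410   # week to December 21st
tests_nov, pos_nov = 298_000, 23_763   # week to November 4th

positivity_dec = pos_dec / tests_dec        # share of tests coming back positive
positivity_nov = pos_nov / tests_nov
extra_testing = tests_dec / tests_nov - 1   # growth in testing volume

print(f"December positivity: {positivity_dec:.1%}")
print(f"November positivity: {positivity_nov:.1%}")
print(f"Increase in testing volume: {extra_testing:+.0%}")
```

The rise in raw positives is almost entirely accounted for by the ~50% rise in testing volume, with positivity essentially flat.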
Therefore, whilst it is quite possible that the new S-dropout variant turns out to be more (or perhaps even less) transmissible than the variants it is replacing, there is nothing logical sustaining that assumption at this stage. Which brings us to the claim that the new variant comes with a higher viral load, said to support the idea that it is more infectious, because surely more virus means more opportunity to pass on to, and infect, new victims. However, the case for an increased viral load is even weaker than the assumptions backing the increased transmissibility claim.
Lateral flow devices
The University of Birmingham, which has just started up a new coronavirus PCR facility as part of the nation’s Lighthouse Lab network, studied the comparative efficacy of the Innova lateral flow device (LFD), a test whose advantage is that it gives immediate results, by testing 7,185 asymptomatic students, of whom just two tested positive. The study then randomly retested 710 LFD negatives on its state-of-the-art PCR machines and found six further positives which the LFDs had missed, implying that around 60 positives might have been missed across the whole group. What is really interesting, however, is that all six of these ‘false negatives’ required a cycle threshold (Ct) > 29, whilst the two LFD positives were at Ct 20 and Ct 25. As Chart 4 below shows, studies reveal that PCR positives at the limit of detection (LoD) cannot reliably yield live virus in vitro (in the lab) much above Ct 29, and yield no live virus at all above Ct 33. The LFD test is therefore not necessarily as woefully insensitive as the Birmingham study concludes, but is probably picking up (almost) all the genuinely infectious cases. But what, then, is a PCR test that only turns positive at Ct > 33 telling us, if there is no live virus present? The answer is that PCR tests set to the LoD pick up not only live infections at low Ct but also old, dead viral strands from infections that people have recovered from, which only register at the higher Ct. This feature, it turns out, is crucial for understanding the possible confusion about the S-dropout variant and its transmissibility.
Chart 4: Positive PCR result Ct & ability to culture live virus
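A quick back-of-envelope sketch, using the Birmingham figures quoted above, reproduces the ‘about 60’ extrapolation:

```python
# Extrapolate the LFD miss rate found in the retested subsample to the whole cohort.
students_tested = 7_185
lfd_positive = 2
retested_negatives = 710
pcr_positive_in_retest = 6

miss_rate = pcr_positive_in_retest / retested_negatives   # PCR-positive share of LFD negatives
lfd_negatives = students_tested - lfd_positive
implied_missed = miss_rate * lfd_negatives                # scaled up to all LFD negatives

print(f"PCR-positive rate among retested LFD negatives: {miss_rate:.2%}")
print(f"Implied LFD misses across the whole cohort: ~{implied_missed:.0f}")
```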
The importance of calibration curves
Birmingham Uni generated a calibration curve to compare Ct and viral loads for the PCR protocol. PCR machines output data by measuring the number of amplification cycles needed before a positive signal is seen (Ct). Higher numbers of cycles make the test more sensitive, detecting smaller and smaller amounts of DNA, but there comes a point when the output of the PCR machine no longer reflects the number of initial copies of the target gene; this is known as the limit of detection (LoD).
How much initial virus a given Ct represents is determined by calibrating the process using a series of increasingly dilute samples with a known number of viral copies. Chart 5 below shows that, in the case of norovirus for example, if amplification needs to be taken as high as 2-billion-fold (2ˆ31) to get a positive, only about 20 initial copies of the viral RNA are being detected, crossing the threshold at 31 cycles (Ct 31). With fewer than 20 initial copies, however, the PCR becomes unreliable no matter how many cycles are performed. Therefore, the LoD for the norovirus PCR test is 20 viral copies per sample at Ct 31. Like the coronavirus, norovirus is a positive-strand RNA virus, so the PCR process is very similar. The chart plots 10:1 dilutions against the Ct at which the sample tests positive and the points fall, as is to be expected, along a straight line (logically, you shouldn’t be able to dilute something by a factor of 10 and get a stronger Ct signal). This sort of calibration curve is useful because the Ct for any unknown sample can be traced on the line and the corresponding amount of norovirus read off on the x-axis.
Chart 5: Norovirus PCR positive Ct & number of viral copies
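The straight-line relationship can be written as a simple formula: since each PCR cycle doubles the target, a 10-fold dilution should cost log2(10) ≈ 3.32 extra cycles. A minimal sketch, anchored to the norovirus example above (20 copies turning positive at Ct 31; the function name is mine):

```python
import math

# Idealised PCR calibration: Ct is linear in log10(initial copies), with a
# slope of -log2(10) ~ -3.32 cycles per 10-fold increase in concentration.
# Anchor point from the norovirus example in the text: 20 copies -> Ct 31.
CT_AT_LOD, COPIES_AT_LOD = 31.0, 20.0
SLOPE = -math.log2(10)

def expected_ct(copies):
    """Ct at which a sample with `copies` initial targets should turn positive."""
    return CT_AT_LOD + SLOPE * (math.log10(copies) - math.log10(COPIES_AT_LOD))

for copies in (20, 200, 2_000, 20_000):
    print(f"{copies:>6} copies -> expected Ct {expected_ct(copies):.1f}")
```

Any calibration point that sits well off this line, as several do on the Birmingham TaqPath curve discussed next, indicates a protocol problem rather than a property of the sample.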
The data from the Birmingham University study has been used to create exactly the same type of chart for coronavirus detected using the Thermo Fisher TaqPath PCR test, as used in most of the Pillar 2 Lighthouse labs including Birmingham (see Chart 6 below). These observations, which are derived from serial dilutions carried out by the Birmingham laboratory, should all lie on a straight line like the norovirus calibration curve, but clearly don’t. With more initial viral copies, the positive Ct should always be lower, because less amplification should be required. Yet several points on the Birmingham calibration curve sit a long way from the line. Between log10 3.7 and 4 (i.e. between roughly 5,000 and 10,000 viral copies), the Ct rises by 3.3 cycles when it should, by definition, fall. A 3.3-cycle Ct error is roughly equivalent to a 10-fold difference in viral load. Yet this calibration curve is the only scientific link between Ct in the TaqPath protocol and the viral load in any sample, and is therefore absolutely central to the inference that the S-dropout has a higher viral load.
Chart 6: ORF gene PCR positive Ct & number of viral copies
Source: Birmingham University, MacroStrategy LLP
Although the line of best fit would imply that the ORF target gene can be detected with as few as 50-100 viral copies per ml, the table below shows that nothing above Ct 25.8 can be reliably replicated (non-grey boxes), which is the true LoD of the TaqPath protocol for ORF. Even with the 2-out-of-3 rule, the protocol starts to fall over at Ct 30, just as with norovirus. Yet the Pillar 2 PCR labs, including Birmingham, still register a positive test (‘diagnoses’, as the government now prefers to call them) at Ct 38, i.e. on samples at least 2ˆ8 (256x) more dilute than the PCR’s true LoD. We should treat all positives at Ct > 29 as merely shadows of old, prior ‘cold cases.’
Table 1: Ct values for 3 gene targets & viral copies per ml
Source: Birmingham Lighthouse Turnkey Lab
New for old
The median TaqPath PCR turns positive at Ct 22-23 (~10,000 viral copies), whilst the median S-dropout turns positive at the lower Ct of ~18 (~100,000 copies), which implies a 10-fold ‘higher viral load’. However, unlike the calibration curve for norovirus, the points along the TaqPath curve do not lie on a straight line, which looks very much like a calibration error. Therefore, whilst the ORF gene at Ct 19.5 indicates an initial concentration of just 5,000 viral copies, 20x less viral load than the S-dropout median, this is cherry-picking the data, because at Ct 18.3 the ORF gene also indicates 100,000 viral copies per ml, exactly the same as the S-dropout median. Instead of the S-dropout viral load being “10-10,000-fold” higher, as the study concludes, it is more like zero to 10-fold higher. When you consider that there are 3 x 10ˆ22 molecules in one ml of saline buffer, a factor of zero to 10 is far, far less than a rounding error. For a detailed critique of the shortcomings of the PCR protocols for COVID-19, see here. Positives detected at Ct > 29 are mere shadows of past infections and live infections start to fall away above Ct 20. The arrow on Chart 7 below illustrates this: what the Birmingham researchers refer to as “a nadir in Ct frequency between 22-24…a possible multiphasic distribution of sample results”, an angle they do not pursue.
Chart 7: Frequency of Ct values for ORF gene positive samples
Source: Birmingham University
More precisely, what we have here is a biphasic distribution: the result of two roughly normal distributions overlaying each other (illustrated by the red curves on Chart 8 below). The one on the left, with its peak around Ct 17-18, is the distribution of new ‘live’ infections, whilst the distribution on the right, with a peak around Ct 27-28, reflects past cases that can only be identified after high-Ct amplification. The observed trough between the two, from Ct 22-24 and marked by the arrow, indicates where the two distributions, new and old, overlap.
Chart 8: Frequency of Ct values for ORF gene positive samples
Source: Birmingham University, MacroStrategy LLP
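This effect is easy to reproduce with a toy simulation (the means, spreads and mixing fractions below are illustrative, not fitted to the Birmingham data): draw ‘new’ infections from a low-Ct distribution and ‘old’ remnants from a high-Ct one, then compare the median of an established variant (many old cases) with that of a recent one (few old cases):

```python
import random
import statistics

random.seed(0)

# Toy bi-phasic Ct model: 'live' infections cluster around Ct 17-18, old
# high-Ct remnants around Ct 27-28. All parameters are illustrative.
def sample_ct(n, frac_old):
    """Draw n positive Cts, a fraction `frac_old` of which are old infections."""
    cts = []
    for _ in range(n):
        if random.random() < frac_old:
            cts.append(random.gauss(27.5, 2.5))   # old 'cold case' picked up at high Ct
        else:
            cts.append(random.gauss(17.5, 2.5))   # new, live infection at low Ct
    return cts

established = sample_ct(10_000, frac_old=0.5)   # older variant: many past cases
recent = sample_ct(10_000, frac_old=0.1)        # newer variant: few past cases

print(f"established-variant median Ct: {statistics.median(established):.1f}")
print(f"recent-variant median Ct:      {statistics.median(recent):.1f}")
```

A gap of several Ct cycles between the medians emerges purely from the age mix of the two epidemics; no difference in viral load is required.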
The Birmingham lab processes samples from all over England and the team illustrate these distributions in a pair of vertical scatter plots (see Chart 9 below), showing ORF gene positives on the left-hand side and all the N gene positives on the right. The two distributions are each further split into those where the S gene was also positive (on the right; presumably old infections, largely from the North) and those that were negative for the S gene (on the left; mainly new infections from the South). The report makes the point that positive tests which were negative for the S gene tended to have a lower median Ct (i.e. a higher implied viral load) than those which tested positive for the S gene as well (see horizontal black bars), and concludes that the S-dropout variant must therefore have a higher viral load (lower Ct). However, this conclusion is logically faulty on at least two levels.
First, if the S-positive subset is multi-, or more accurately bi-phasic, then the median (horizontal black bar) is an average of not one but two distributions, drawn by me in red (see Chart 9 below), one of which (old cases) has a higher median and the other (new cases) a lower one (horizontal red bars). Samples were processed between October 25th and November 5th, only four weeks after the first S-dropout sample was processed; and because the S-dropout (S-neg) variant is so new, it has relatively few old cases of the kind that can only be picked up at high Ct > 30. It is only logical, therefore, that the median Ct of new cases will be lower than that of new and old cases combined. Sure enough, it appears that the median Ct of the S-dropout distribution is about equal to the median of new cases alone. It definitely isn’t safe to infer that the S-dropout viral load is any higher than that of its now-waning predecessor variant when the latter was at the same point in its infection cycle.
Chart 9: Comparative Ct values for viral targets
Source: Birmingham University, MacroStrategy LLP
Second, a lower Ct does not even necessarily mean a higher viral load. The protocol has to show that Ct tracks the (log) number of viral copies at that point on the curve, or there is something wrong with the protocol. Yet the chart shows that the ORF1ab gene target calibration did not track the number of copies per ml at several points on the curve where the TaqPath PCR protocol goes awry, and the non-grey areas on the table show that results cannot be reliably replicated above a Ct of 26 for the ORF gene, Ct 30 for the S gene and Ct 31 for N; i.e. the true LoD is somewhere between Ct 26 (500 viral copies per ml) and Ct 31 (100 copies). All TaqPath PCR tests that don’t turn positive until Ct > 30-31 are therefore manifestly unreliable anyway.
The transmissibility feedback loop
Of the 641 positive samples analysed, 178 (28%) had an undetectable S gene profile, to which the researchers artificially assigned a Ct of 45 (see yellow diamond, top right-hand corner of Chart 10 below). This compares to only 13 positive samples (2.1%) with an undetectable ORF gene (red circles) and another 13 with an undetectable N gene (green squares). The researchers jump to the conclusion that these missing S gene positives and their lower median Ct (which, they forgot, could have been caused by the multiphasic nature of the distribution) point to “a conservative estimate of a significantly larger population of infectious subjects that have an increased viral load up to 10,000-fold higher”, with commensurately increased transmission. The rest, as they say, is history.
Chart 10: Frequency of positive Ct values for 3 gene targets
Source: Birmingham University, MacroStrategy LLP
The researchers are clearly implying that if the primer is failing to capture one of its gene targets, then not only is there a large population of infectious subjects roaming around undetected, but these people also carry a viral load that could be 10,000x higher than those infected with the earlier variant. As for the latter claim, we have already established that the lower median Ct could imply at most a 10-fold higher viral load; the 10,000 figure is alarmist hyperbole. Yet even this isn’t relevant, because of the multiphasic distribution. So we can junk the whole ‘higher viral load’ argument; but what about these infectious S-dropouts roaming undetected among us like latter-day Typhoid Marys?
Chart 10 clearly shows that even when the S gene was still being detected by TaqPath, it was detected at a higher Ct than the other two genes, i.e. the yellow diamonds are shifted to the right (within the red ellipse). However, since only two of the three genes are required to give a positive result, and the primers do a better job of picking up the ORF and N genes anyway, the number of cases that will have gone undetected will be the 26/641 (4%) where either the ORF or the N gene primer failed. Is 4% truly what any responsible researcher would call “a significantly larger population of infectious subjects” (my bold)?
What we know
An increasing number of samples positive for the ORF and N genes, tested using the Thermo Fisher TaqPath PCR primers, are no longer picking up the S gene, indicating a new S-dropout variant that originated in the Medway area of Kent.
The median Ct of samples positive for ORF and N but not the S gene is lower than the median Ct of samples also positive for S.
The relative growth rate of the S-dropout is about 170% of the growth rate of those positive ORF and N samples that are also positive for the S gene.
Illogical academic data interpretation
“A significant proportion of S-dropout samples are associated with lower Ct values of ORF and N in the same sample; from which it (sic) possible to infer a relatively higher viral load in these specimens” (my bold).
Yet, for the reasons explained above, it is not possible to infer higher viral load at all. A far more likely explanation is that the S-dropout variant is a newer variant, from which we can infer that there will be far fewer old cases to pick up with a very high Ct > 30.
“Clearly, the higher viral loads inferred from S-dropout samples could determine the infectiousness of subjects, and thus the ability of the virus to transmit onwards” (my bold).
Or nothing of the sort. A naïve interpretation of median Ct, that fails to take account of the bi-phasic nature of the distribution, renders this conclusion utterly meaningless.
“The significant difference in population median Ct value, between S-dropout and S-detected samples, represents between 10 and 100-fold increase in target concentration for S-dropout. The cluster of S-dropout samples having ORF and N Ct of between 9 and 15 (63/178 (35.4%); 46/450 (10.2%), respectively) is a corresponding further increase in relative viral load of between 10 and 1,000-fold” (my bold).
The difference in median Ct, which can be explained by the new variant being, uh, new, is < 4 Ct (actually about 3.6) between the S-negative and S-positive samples. Mathematically, 2ˆ3.6 equates to a factor of about 12, i.e. roughly 10-fold. Not 100, not 1,000 and definitely not 10,000 (see 4. below).
“A Ct value of approximately 15-16 corresponds to a viral load of 1 x 10ˆ6 copies per millilitre (mL). Therefore, our observed cluster of S-dropout samples at Ct less than 15 corresponds to a conservative estimate of a significantly larger population of infectious subjects that have an increased viral load up to 10,000-fold higher. Such capability of increased transmission has been ascribed to an S ‘variant of concern’ apparently spreading throughout the South-east of the UK” (my bold).
Ignoring the fact that the whole low Ct/high viral load idea only stems from ignoring the bi-phasic nature of the positive distribution and cherry-picking the data from the TaqPath calibration curve, you still only get a zero (more likely) to a maximum 10-fold higher implied concentration. “10,000-fold” is scientifically inexcusable, and deliberately alarmist, hyperbole. Then this wholly fallacious idea is fed back into the concept of increased transmissibility… except that this argument is all a house of cards. Besides, these numbers only seem large in the macro world. At the micro scale, where there are 3 x 10ˆ22 molecules in 1ml of water, the difference between 10,000 and 100,000 viral copies per ml? Hmm, not so much.
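The cycle-to-fold conversion underlying all these numbers is simply fold = 2ˆΔCt, and a two-line sketch shows what a genuine 10,000-fold load difference would actually require:

```python
# Each PCR cycle doubles the target, so a Ct difference implies a
# fold-change in starting concentration of 2 ** delta_ct.
def fold_change(delta_ct):
    return 2 ** delta_ct

print(f"delta Ct 3.6  -> {fold_change(3.6):.1f}-fold")    # the observed median Ct gap
print(f"delta Ct 13.3 -> {fold_change(13.3):,.0f}-fold")  # gap needed for ~10,000-fold
```

A 10,000-fold difference in viral load would require a median Ct gap of roughly 13.3 cycles, nearly four times the ~3.6 cycles actually observed.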
There is a new COVID-19 variant in the UK, which we only identified because it isn’t being picked up by the S gene primer, even when the ORF and N gene primers flash positive, on the Thermo Fisher machines. Being a new variant (i.e. R > 1) it is naturally growing faster (by 70%) but only relative to its predecessor, which is now past its peak and on the wane (i.e. R < 1). You cannot logically infer from this relative growth rate anything about transmissibility. It is even possible that this new S-dropout variant could be less transmissible than its predecessor was when it was in its ascendancy back in Sep-Oct. Whilst positive tests are growing fast, this can be wholly explained by the increase in testing (+36% from November 4th to December 21st). Positivity is even down slightly compared to a month ago. The lower Ct of S gene negative positive samples, from which has been inferred a higher viral load, from which has been inferred a positive feedback increase in transmissibility, is actually much more easily and logically explained by the variant being relatively new, which means there are relatively few old cases that can only be picked up by the highest Ct.
The UK Government has explicitly tied the transmissibility of the new S-dropout variant (despite its existence being traced back to early October) to the very recent surge in new cases, which hit a record high of 80k positives on December 29th. Thus, 11 days after disease prevalence was estimated to be < 1.3% of the population, and nine days after a quarter of the population was put under Tier 4 lockdown, 23.2% of all people tested by Pillar 2 came back positive (see grey line and green ellipse on Chart 11 below). Note that, as with December 29th, all the spikes in the data are Mondays, because testing capacity is redirected to new hotspot areas each week; but even the national 7-day average positivity had risen from sub-8% at Christmas to 13% by New Year’s Eve. This has all been driven by London, where 7-day positivity was 14.9% on December 20th, the day of the Tier 4 lockdown, yet had risen to 17.8% by Christmas Day and is now 26.8% (20x higher than prevalence). How and why, it is too early to tell.
Chart 11: UK COVID-19 prevalence & pillar 2 test positivity
James Ferguson is the Founding Partner of MacroStrategy