
One of the paradoxes of the coronavirus crisis is that the need for public scrutiny of government policy has never been greater, but there’s less tolerance for dissent than usual. That’s particularly true of the work of Professor Neil Ferguson and his team at Imperial College. Anyone questioning Professor Ferguson’s analysis is likely to be met with howls of disdain. Witness the furious reaction provoked by Professor Sunetra Gupta and her team at Oxford when they published a paper suggesting that the Imperial model might have underestimated the percentage of the population that has already been infected. The Financial Times printed a critical letter co-signed by a group of scientists that was reminiscent of left-wing academics denouncing one of their colleagues for dissenting from woke orthodoxy. They used the word “dangerous” in their description of the Oxford research, as if merely challenging Imperial’s model would cost lives, and Professor Ferguson has made the same argument to condemn other critics of his work. “It is ludicrous, frankly, to suggest that the severity of this virus is comparable to seasonal flu – ludicrous and dangerous,” he said.
A more prudent approach would be for the Government not to place too much confidence in any one model, or set of models, but to encourage different teams of experts, working independently, to come up with predictions of their own and challenge their rivals. That’s the tried-and-tested scientific method and it has been bizarre to see respected pundits simultaneously argue that we should be strictly guided by “the science” and that any scientist expressing dissent from the prevailing orthodoxy is behaving “irresponsibly”. That was the same argument used by the Chinese authorities for silencing the doctors who first raised the alarm in Wuhan. They were arrested and forced to confess to “spreading rumours” that “severely disturbed the social order.” Shutting down dissent during an actual war might make sense, but in a war against a virus it is vital that we should stick to the scientific method. As Sir Karl Popper said: “The point is that whenever we propose a solution to a problem, we ought to try as hard as we can to overthrow our solution, rather than defend it.”
We don’t want to repeat the mistakes we made during another viral outbreak, namely the 2001 foot and mouth epidemic. Tony Blair’s government adopted a strategy of pre-emptive culling which led to the death of more than six million cattle, sheep and pigs, with an estimated cost to the UK economy of £9 billion. That strategy was informed by predictive modelling produced by a team at Imperial College led by, among others, Professor Ferguson. Like today, there wasn’t much appetite for questioning his predictions. But we now have good reason to believe his analysis was wrong. Michael Thrusfield, professor of veterinary epidemiology at Edinburgh University, has written two critical reports about the government’s response to that epidemic, concluding that the Imperial College modelling was “severely flawed”.
One person who’s sceptical of Professor Ferguson’s modelling is Anders Tegnell, the epidemiologist who’s been advising the Swedish Government. “It’s not a peer-reviewed paper,” he said, referring to the Imperial College March 16th paper. “It might be right, but it might also be terribly wrong. In Sweden, we are a bit surprised that it’s had such an impact.”
Further Reading
‘“Carnage by Computer”: The Blackboard Economics of the 2001 Foot and Mouth Epidemic’, David Campbell and Robert Lee, republished in Lockdown Sceptics, originally published in 2003
‘Use and abuse of mathematical models: an illustration from the 2001 foot and mouth disease epidemic in the United Kingdom’, Michael Thrusfield et al, Edinburgh Research Explorer, 2006
‘Physical interventions to interrupt or reduce the spread of respiratory viruses’, T Jefferson et al, NCBI, July 2011
‘A fiasco in the making? As the coronavirus pandemic takes hold, we are making decisions without reliable data’ by John PA Ioannidis, Stat, March 17th 2020
‘Neil Ferguson, the scientist who convinced Boris Johnson of UK coronavirus lockdown, criticised in past for flawed research’ by Katherine Rushton and Daniel Foggo, The Telegraph, March 28th 2020
‘Complicated Mathematical Models Are Not Substitutes for Common Sense’ by Philippe Lemoine, National Review, March 30th 2020
‘Dissent over coronavirus research isn’t dangerous – but stifling debate is’ by Toby Young, The Spectator, April 4th 2020
‘Predictive Mathematical Models of the COVID-19 Pandemic: Underlying Principles and Value of Projections’, Nicholas P Jewell et al, JAMA Network, April 16th 2020
‘The Tyranny of Models’ by William M Briggs, wmbriggs.com, April 17th 2020
‘After Repeated Failures, It’s Time To Permanently Dump Epidemic Models’ by Michael Fumento, Issues & Insights, April 18th 2020
‘Policy Implications of Models of the Spread of Coronavirus: Perspectives and Opportunities for Economists’, Christopher Avery et al, National Bureau of Economic Research, April 2020
‘How Wrong Were the Models and Why?’ by Phillip W. Magness, American Institute for Economic Research, April 23rd 2020
‘The Bearer of Good Coronavirus News: an Interview With John Ioannidis’ by Allysia Finley, Wall Street Journal, April 24th 2020
‘Imperial College Model Applied to Sweden Yields Preposterous Results’ by Phillip Magness, American Institute for Economic Research, April 30th 2020
‘The Fatal Hubris of Professor Lockdown’ by Toby Young, The Critic, May 6th 2020
Further Viewing
Predictive modellers are our era’s version of the ancient world’s soothsayers, who read a sacrificial victim’s entrails to make their predictions. Organizations, usually built round the figure of an allegedly enlightened genius, peddle their wares not only with frantic competitiveness (there are great rewards for those who can foretell the future) but with all the evangelical zeal of a religious cult. They advertise their wares as something to believe and have faith in. Anything that cannot be measured is ignored or overlooked. Like St Augustine, they believe themselves to be wholly right, and any rivals, dissenters or doubters are cast aside as dangerous heretics. They always claim their own scientific method is beyond reproach, while insisting their rivals’ methods and results are fatally flawed. But the real danger is the credulous acceptance of any one of these organizations without looking beyond it. One need only look at the world of education to see how schools have been subsumed into this culture, buying in a predictive system from one company or another. Teachers are held responsible if children’s performance does not match the purchased model, never the other way round. One of the last instructions given out at the school before… Read more »
Nonsense. Many modellers are quite self-effacing. They are not necessarily subject experts, but they tend to have strong analytical skills. They take the world and recreate it in a computer. They know that their simulation is never the “real thing”. They are always looking for new insights and data to improve their models. I have met many modellers and analysts in my career and none have matched your description.
You must admit, if a modeller is pretty much directly responsible for the decision to ruin the British economy and weeks later is still literally issuing orders to the government to double down on it, they’re probably not very “self-effacing”.
The deference to the government modellers by unthinking, obsequious politicians and hysterical big-state advocates in the media is concerning, regardless of your opinion of the modellers’ personal characteristics.
There are two quite distinct and separate activities: modelling and programming.
If the model is correct but the program implements it wrongly, there is no limit to how wrong the output can be. (Think “division by zero”).
If the model is wrong and the program implements it correctly, there is again no limit to how wrong the output can be.
If both are done well, we are likely to get the best possible outcome (which will still be far from perfect). Consider:
“All models are wrong, but some are useful”. – Box’s Aphorism
“To convert a model into a quantitative formula is to destroy its usefulness as an instrument of thought”. – John Maynard Keynes
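The distinction above is easy to demonstrate with a deliberately trivial sketch. The growth “model” and all figures below are invented for illustration; the point is only that a sound model, mis-implemented, is wrong without limit:

```python
# A sound "model" (exponential growth in cases) next to a one-character
# implementation slip. All numbers are invented for illustration.

def cases_correct(seed, r, generations):
    # Model implemented as intended: multiply by r each generation.
    return seed * r ** generations

def cases_buggy(seed, r, generations):
    # Bug: multiplication where exponentiation was meant.
    return seed * r * generations

print(cases_correct(100, 2.5, 10))  # ~953,674
print(cases_buggy(100, 2.5, 10))    # 2500.0
```

The model here is “correct”, but the buggy program’s error grows without bound as the number of generations increases – exactly the asymmetry described above.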
Let’s assume there are ten important assumptions you need to model an epidemic. Let’s assume each assumption has ten plausible figures/ranges you could use, each equally likely. The number of possible, plausible combinations is massive. The likelihood that Imperial’s choice of assumptions is right is therefore vanishingly small, and the likelihood they are wrong is extremely high. What you should do is run all plausible cases, and see what percentage produce outcomes that would justify a lockdown – modelling the possible negative outcomes of the lockdown at the same time and looking at the net effects.
I strongly suspect there are far more possible outcomes that do not justify a lockdown than do, but it’s not too difficult to find out. What is lunatic is to believe that (i) a particular group can pluck out of the air the correct assumptions, and then that (ii) you can pick which group has done that. To paraphrase Jamie Whyte about bond traders, it’s like employing lottery winners because you think they have a skill at picking numbers.
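The sweep proposed above can be sketched in a few lines. This is a deliberately scaled-down toy – three assumptions with three plausible values each, and a textbook SIR final-size relation standing in for a real model; every figure is invented for illustration:

```python
import itertools
import math

# Three assumptions, three equally plausible values each (illustrative
# figures only, not real epidemiological estimates).
r0_values = [2.0, 2.4, 3.0]         # reproduction number
ifr_values = [0.002, 0.005, 0.009]  # infection fatality ratio
sus_values = [0.8, 0.9, 1.0]        # initially susceptible fraction

POPULATION = 66_000_000

def attack_rate(r0):
    """Final-size relation for a simple SIR epidemic: z = 1 - exp(-R0*z)."""
    z = 0.5
    for _ in range(100):  # fixed-point iteration converges quickly
        z = 1 - math.exp(-r0 * z)
    return z

deaths = []
for r0, ifr, sus in itertools.product(r0_values, ifr_values, sus_values):
    infected = attack_rate(r0) * sus * POPULATION
    deaths.append(infected * ifr)

print(f"{len(deaths)} scenarios")
print(f"projected deaths range from {min(deaths):,.0f} to {max(deaths):,.0f}")
```

Even this tiny grid spans projected death tolls differing by a factor of more than five. Reporting the whole spread, and the percentage of scenarios crossing whatever threshold matters, is the exercise being proposed – rather than betting on one cell of the grid.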
What you’ve said they should do is pretty much what they do. Most of the modelling you’ll now see from Imperial College contains a range and confidence intervals – check out their latest forecast tool on their website, in addition to their code on GitHub.
There are assumptions we need to know, but most of them are pretty well known from international data. They include susceptible population, time to hospitalisation, ICU %, fatality ratio, and reproduction number. Even lower-bound assumptions of these justify a lockdown, because exponential growth would rapidly overwhelm our NHS capacity.
The fact that we can sit here and discuss this indicates the position of privilege that the lockdown has put us in. If one of your elderly relatives (or you) had been denied life-saving treatment due to an overwhelmed NHS thanks to government inaction, you would not be making the same argument.
P.S. This epidemic calculator is good fun should you wish to model some assumptions of your own:
https://gabgoh.github.io/COVID/index.html
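The exponential-growth argument above can be sanity-checked with a back-of-envelope calculation. Every figure here (doubling time, ICU fraction, bed count, starting infections) is an assumption for illustration, and cumulative cases are used as a crude proxy for concurrent demand:

```python
# Back-of-envelope: unchecked doubling versus a fixed ICU capacity.
# All parameters are illustrative assumptions, not real estimates.
DOUBLING_DAYS = 3.5
ICU_FRACTION = 0.02   # assumed share of infections needing intensive care
ICU_BEDS = 4_000      # assumed national capacity
cases = 10_000        # assumed current infections

day = 0.0
while cases * ICU_FRACTION <= ICU_BEDS:
    day += DOUBLING_DAYS
    cases *= 2

print(f"demand exceeds assumed capacity after about {day:.0f} days "
      f"({cases:,} cumulative cases)")
```

Under these toy numbers the cushion lasts under three weeks, which is why the argument turns on the growth rate rather than on the precise parameter values.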
Unfortunately, life-saving treatment is already denied to hundreds of thousands of patients. For example, there’s a six-month wait for a heart bypass operation. A friend was diagnosed with blocked heart valves, with one only just open. He could afford to have the op done privately. Most cannot. Previously, the public didn’t have the choice of keeping elderly and ill people alive for as long as possible. Medical advancement means life expectancy goes up and up. However, life quality at the upper age scale goes down. In the UK, the State will keep your body going on and on. Have you seen very old people in homes, bedridden, fed by straws, unable to talk? Before, there weren’t infinite resources, and very old people died much more frequently from flu and other viruses. Now, it seems governments and the MSM want us to fear death far more – partly because in the West we are unused to war, famine and natural disasters – and we must at any cost keep old people alive. That cost, however, is long-term economic disaster, which will mean lower life expectancy, increased mental health problems, etc. This pandemic is typical modern short-termism. My father died of cancer… Read more »
Absolutely right. Quantity of life is everything, quality of life counts for nothing. An appalling, anti-human creed that causes infinite suffering to the wrecked old and to their families. If you’ve ever seen the dementia wing of a care home, you’ll know what I mean. And it will haunt you.
Neil Ferguson seems to have become the go-to guru on this. But it would seem that his model cannot be so special, let alone unique, compared with what others in the UK, elsewhere in Europe and the USA, as well as Asia, no doubt have. So, apart from his modelling using a range of scenarios to develop confidence levels, can we be assured that the government, via SAGE, has been made aware of other models and expert inputs which may apply different methods borne of different approaches entirely?
One fact remains: there is no evidence of a nationwide lockdown being successfully used to prevent deaths in a large population anywhere in the world. It’s classic day-one science class: if we do A and B happens, did A cause B? We then need to test whether A caused B, because we know it probably didn’t. Lockdown may, theoretically, extend the infection timeline, but it cannot manage the numbers for long enough to find a vaccine, and there is no evidence to suggest the death toll will be smaller. There is also no control-group population to compare against. In scientific terms this is an experiment, and no one really knows what the outcome will be. Saying that SAGE have our backs and that we should blindly trust the Government is a route to a totalitarian state. Our system of government and justice is built around debate, discussion and political judgements. It is never perfect. We all need (regardless of situational beliefs) to understand that all decisions, including those by SAGE, are part science, part medical, part economic, and a big part guess. What comes out of this cocktail is a political decision that is designed to protect both the public at large and those making… Read more »
You are implying that they have now published their code on Github. Is that actually the case? Can someone confirm this?
Up to now, part of the criticism has been that their model has been unverifiable as it was a complete black box, described by Ferguson himself as “13-year-old code, largely undocumented” – which is frankly not something I’d have been allowed in a business context, let alone as a basis for shutting down an entire country.
Yes, it’s here:
https://github.com/ImperialCollegeLondon/covid19model
So, is there any peer review of the model? Has the amateur code been examined? I was hoping the Oxford team might have been all over it!
Ferguson’s is this one: https://github.com/mrc-ide/covid-sim
It is NOT Ferguson’s original code. What is published on GitHub is the outcome of more than a month of attempts by Microsoft and GitHub to fix the original code. It still does not work.
The code that has been published is NOT the code that created the output relied on by government.
In the month or 6 weeks since Imperial were first asked to share their code, it has been thoroughly reviewed by Microsoft, and given a major face lift. (Although it may still contain any number of bugs).
I wonder why Microsoft would commit the valuable time of software engineers to such an effort?
https://swprs.org/a-swiss-doctor-on-covid-19/
The estimate for the number of deaths due to swine flu is 150,000 to 570,000 – and this is the estimate after it happened. So what hope is there for accuracy before or during the event?
This article misses the point. Of course models are bad and one shouldn’t accept WHO/Government/… pronouncements on trust (especially on masks [4]). Nonetheless, this doesn’t mean that there is no heuristic to decide what to do.
When faced with a fat-tailed distribution amid uncertainty, one must overreact [1].
The lockdown is far from desirable but is a tool to slow then stop or prevent widespread community transmission. The longer before a lockdown is implemented, the longer it will have to be in place [2]. Hence, any economic loss is already priced in once a lockdown starts. Attempts to prematurely weaken a lockdown will only delay the time when economic activity can begin to restart [5].
Lockdown isn’t an end in itself, eradicating the virus is, so there need to be other more focussed measures, like quarantining those with mild symptoms and travel restrictions [3].
[1] https://www.academia.edu/42223846/Ethics_of_Precaution_Individual_and_Systemic_Risk
[2] https://static1.squarespace.com/static/5b68a4e4a2772c2a206180a1/t/5e737b95403f772d8ce0e04a/1584626591711/CommunityPrevention.pdf
[3] https://static1.squarespace.com/static/5e7b914b3b5f9a42199b3337/t/5e8a5e701249b622794cf8e3/1586126448746/WinningCOVID.pdf
[4] https://twitter.com/nntaleb/status/1249296844712218624
[5] https://static1.squarespace.com/static/5e7b914b3b5f9a42199b3337/t/5e7bae70ed03c045bb9f7bab/1585163896267/5weeks.pdf
1/ “any economic loss is already priced in once a lockdown starts” – No, it isn’t. A recognition of economic loss may be there but there has been no calculation to balance a quantified loss against the lives saved (or whatever your intended benefit from the lockdown is)
2/ “eradicating the virus” isn’t going to happen until many years after we have an effective long-term vaccine. Look at smallpox and polio for comparisons. Now the virus exists it is not going away and there will be a continual, hopefully small, background of infections, even when herd immunity is established.
‘When faced with a fat-tailed distribution amid uncertainty, one must overreact [1]’
This is an interesting point but I think Taleb is referring to the need for individuals to ‘overreact’, even where the immediate individual payoff does not appear to warrant it. I don’t think it means that government should ‘overreact’, which is the issue here. The ‘Precautionary principle’ (‘overreaction’) is not necessarily the optimal policy to adopt – it depends on the balance of expected costs and benefits. In the same way that choosing a ‘sustainable’ policy option is not necessarily the optimal solution (it depends on the rate of time preference).
‘Lockdown isn’t an end in itself, eradicating the virus is’
I’m not an epidemiologist so I would be genuinely interested to learn how eradicating a virus is possible without either a vaccine or without the virus having spread to achieve ‘herd immunity’ (which would be impossible to achieve under the extreme lockdown measures cited in your references). I can see how you can get an epidemic ‘under control’ temporarily but to eradicate it in an open economy with a population of over 66 million? That seems to my untutored eye like playing a game of Whac-a-Mole.
> ‘When faced with a fat-tailed distribution amid uncertainty, one must overreact [1]’ This is an interesting point but I think Taleb is referring to the need for individuals to ‘overreact’, even where the immediate individual payoff does not appear to warrant it. I don’t think it means that government should ‘overreact’, which is the issue here.

I would agree that a decentralised overreaction would be preferable to imposition from the centre, but that isn’t what this page seems to be suggesting: that the models are dubious (which they are), so their predictions of 100,000s of deaths can’t be trusted (which they can’t), so we should just open everything up pronto. The economic problem isn’t from the Government-imposed lockdown but from not having dealt with the virus outbreak. If what would have seemed at the time undue measures had been taken early, there would have been a small economic cost that is now dwarfed by the cost there will be. We don’t live in a Libertarian (or Localist) Utopia – just look at all the bailouts – so Central Government does have a role, e.g. in prompt travel restrictions from affected countries/regions. > The ‘Precautionary principle’… Read more »
I’ve had a look at the Bloomberg interview with Taleb and I see better where you’re coming from. He makes several points that I would agree with: the predictability of a pandemic (it being a White Swan event), bailing out individuals not corporations, tail risk hedging for corporations, the need for speedy targeted interventions. I’m intrigued by his aphorism ‘if the model isn’t reliable, don’t take the risk’. I wonder what that might mean when applied to the ‘unreliable’ Ferguson model.
> I’m intrigued by his aphorism ‘if the model isn’t reliable, don’t take the risk’. I wonder what that might mean when applied to the ‘unreliable’ Ferguson model.
A model is not automatically useful; claims otherwise are scientism.
– Modelling is the wrong level of analysis when considering risk: if it’s fat tailed, overreact
– Models are flawed:
– The output is highly sensitive (non-linear) to the input but the inputs are estimated. Uncertainties from the inputs grow exponentially over the time modelled [1][2]
– yet the response isn’t sensitive to the output: what would the model have to say for you not to have to overreact?
– They ignore [un]known unknowns (like short-term/no immunity, no vaccine, non-trivial long-term side-effects, asymptomatic transmission, …)
– Software bugs mean they may not even work as intended, let alone in novel regimes
– Those who use arguments like “this is the best model we have” are charlatans
[1] https://threadreaderapp.com/thread/1239171413342289921.html
[2] https://twitter.com/nntaleb/status/1248980856531812354
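The input-sensitivity point in [1] and [2] can be seen in miniature: a modest uncertainty in the reproduction number compounds over successive generations of infection. Toy numbers throughout, and no depletion of susceptibles:

```python
# Uncertainty in R compounds multiplicatively with each generation.
# Seed count and R values are illustrative only.
GENERATIONS = 15
SEED_CASES = 100

projections = {r: SEED_CASES * r ** GENERATIONS for r in (2.2, 2.5, 2.8)}
for r, cases in projections.items():
    print(f"R = {r}: {cases:,.0f} cases after {GENERATIONS} generations")

spread = projections[2.8] / projections[2.2]
print(f"a +/-0.3 uncertainty in R changes the projection ~{spread:.0f}-fold")
```

A roughly 15% error in one input becomes a tens-fold error in the output, which is the non-linearity being complained about.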
Do you expect that the Covid-19 virus will be “eradicated” before, at the same time, or after the common cold coronaviruses?
After all, many good teams have been seeking vaccines against the common cold for nearly a century, with complete lack of success.
Let alone a cure.
It is precisely because the prospect of any vaccine, let alone its efficacy, is so uncertain that we should seek to eliminate the coronavirus without one.
Maybe you should address your question to those in New Zealand, Greece, and RoC etc. [1] who are a long way towards doing this.
[1] https://www.endcoronavirus.org/countries#winning
New Zealand lives in constant terror of a fresh outbreak. It’s eradicated the bug in the same way as the ostrich eradicated its fears.
‘Here in this nuclear testing ground
Is no place to bury your head.’
(Michael Flanders)
My understanding is the virus has about a 2–3 week life, from infection to being almost back to normal – unless you take a hit and end up hospitalised, or worse. The various figures now seem to be falling – possibly as a result of lockdown. But to me, the figures are not falling to the extent you’d expect given the change in our behaviour. For example, anyone infected even just after lockdown (March 23rd) should now be clear, or in hospital. Yet large numbers of infections/admissions are still being recorded. So how/where are all these people being infected post-lockdown, amid massive social distancing? Even out walking now, many people do a big swerve just passing each other. Something doesn’t make sense to me.
The vast bulk of infections are now taking place in hospitals and care facilities. This is always the case and should not be surprising. Our normal flu season, especially in bad years, plays havoc in these settings, but because we have lived with it and it gets no media coverage, no one thinks about it other than the 20,000 or so families affected every year. This is why hospitals have been cleared where possible, and other than in a few places many beds remain empty. It is also a fact that deaths from the lockdown action will mount and extend well beyond its lifting, but be disguised by the smoke and mirrors of the current monomaniacal focus on COVID-19. So your observations are correct: community infections have slowed to a trickle, but hospitalised patients (still large numbers with other conditions) and staff account for the bulk of new infections, until this cohort either survives or dies. It all sounds very bad, but in reality it is always this way with any respiratory disease – this one is just the new kid on the block. Community transmission whilst walking in even a busy park is a myth with no evidence. The propaganda that… Read more »
I particularly like the observations in your last two sentences. There was a time when the BBC and other media would regularly intone that ‘of course most people will experience no/mild/moderate symptoms’. I scarcely hear that mentioned now. The way in which this has been reported deserves a thorough investigation (misleading, imprecise, sensationalist language, reporting lacking perspective and context etc) and maybe should be a ‘theme’ on this website.
One point of clarification if I may, Rick. You say that the vast bulk of infections is now taking place in hospital and care facilities. By that do you mean that the bulk of infections is occurring whilst people are in these settings, or that the bulk of infections is being picked up (by testing) in these settings but with the infections occurring in the community?
Totally agree, the beeb need to be held accountable for their part in the extraordinary hysteria generated. I wrote something about this here: http://inproportion2.talkigy.com/category/media.html
That is an outstanding website. Very many thanks and congratulations to all those involved in it. Yesterday’s update should be on the front page of every newspaper.
Britain needs transformational reform of so much that is state-run. An independent health authority, as they have in Sweden, would be a good place to start – set up from scratch, with talented youngsters, so that no trace of 20th-century governmental culture remains. Education could also do with depoliticising. And I completely agree: swingeing reform of the civil service and the BBC is long overdue.
Thanks for the support.
Agree with your points. This feels like a moment in time when we set the course for the future. I for one do not want a society that locks down without due cause every time a virus comes around
http://inproportion2.talkigy.com/
The model is too narrow, as it does not consider that the level of indirect deaths from shutdown will potentially be larger than the COVID deaths. This can be seen from the fact that already 50% of incremental deaths over the average death rate are non-COVID. In the future, the indirect deaths will be way in excess of COVID. Looking at the financial crisis, it’s possible to see how the deaths will be greater both in the UK and globally. This is the flaw in the Imperial model: it did not consider immediate and longer-term deaths as a result of the lockdown. It’s like saying we will stop all traffic deaths by banning motor vehicles, but then not including in the calculation the people who die because there are no ambulances.
Telegraph says the global financial crisis caused 500k cancer deaths worldwide.
https://www.telegraph.co.uk/news/2016/05/25/financial-crisis-caused-500000-extra-cancer-death-according-to-l/
Guardian says austerity caused 130k deaths
https://www.theguardian.com/politics/2019/jun/01/perfect-storm-austerity-behind-130000-deaths-uk-ippr-report
I have finally had a look at the Imperial College model (as described in their papers), and it’s better than I thought it would be: as far as I can tell it is doing precisely the right thing in that it simulates the entire country’s population as individuals, distributed in towns and villages, homes, schools and workplaces, with behaviours related to real demographic data. They change their behaviour when they become ill. In effect, it is like modelling the entire country as individual Sims (TM) that can be specified to any level of detail. If they wanted to, they could model the number of times a day a person sneezes and how the droplets would spread in an office with air conditioning, knowing the typical desk spacing in UK offices and how many people at any one time are in meetings, drinking tea, or smoking outside. They would just need a big enough supercomputer. So I think that aspect of it is great. The problem, for me, is that the *core* assumptions regarding ‘susceptibility’, ‘infection’ and ‘immunity’ are very simple and based on guesswork (and we mustn’t discount the possibility that the thresholds and so on are modified in response… Read more »
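The individual-level approach described above can be caricatured in a few dozen lines. This is a minimal agent-based sketch with none of Imperial’s demographic detail, and every parameter is an illustrative guess:

```python
import random

# Minimal agent-based epidemic: each person is an individual with a state,
# and infection spreads through random daily contacts. All parameters are
# illustrative guesses, not calibrated estimates.
random.seed(1)  # fixed seed so the run is reproducible

N = 10_000             # population size
CONTACTS_PER_DAY = 8   # assumed random contacts per infectious person
P_TRANSMIT = 0.03      # assumed per-contact transmission probability
DAYS_INFECTIOUS = 7    # assumed infectious period

SUSCEPTIBLE, INFECTED, RECOVERED = 0, 1, 2
state = [SUSCEPTIBLE] * N
days_left = [0] * N

for i in random.sample(range(N), 10):  # seed ten initial infections
    state[i], days_left[i] = INFECTED, DAYS_INFECTIOUS

day = 0
while INFECTED in state:
    day += 1
    # Snapshot today's infectious people; new cases start spreading tomorrow.
    for i in [p for p in range(N) if state[p] == INFECTED]:
        for _ in range(CONTACTS_PER_DAY):
            j = random.randrange(N)
            if state[j] == SUSCEPTIBLE and random.random() < P_TRANSMIT:
                state[j], days_left[j] = INFECTED, DAYS_INFECTIOUS
        days_left[i] -= 1
        if days_left[i] == 0:
            state[i] = RECOVERED

attack = sum(s == RECOVERED for s in state) / N
print(f"epidemic over after {day} days; {attack:.0%} ever infected")
```

The interesting question, as the comment says, is not the bookkeeping (which is straightforward) but the guessed core parameters: halve P_TRANSMIT here and the implied reproduction number drops below 1 and the epidemic fizzles, so all the demographic realism sits on top of numbers like these.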
The Swedish professor Johan Giesecke points out that one of the greatest flaws in the model is that it assumes hospital ICU capacity to be static, whereas in fact in both Sweden and the UK it has been relatively easy to double and triple ICU capacity.
[…] Toby Young article questioning the models reliability: https://dailysceptic.org/how-reliable-is-imperial-colleges-modelling/ […]
The Counterfactual

The ICL Report constructed a counterfactual (an unmitigated, do-nothing scenario) in which there would be 510,000 deaths in the UK. Mitigation would likely still result in ‘hundreds of thousands of deaths and health systems being overwhelmed many times over’. Therefore suppression was the only viable strategy, which would result in around 20,000 deaths. After 4 weeks of suppression, hospital COVID admissions and deaths seem to have peaked and the death toll in this phase might indeed be not much more than 20,000 (in hospitals).

The narrative being wheeled out is: if we had done nothing, hundreds of thousands of people would have died and the NHS would have been overwhelmed. The hardship, suffering and indirect loss of life caused by the measures we have taken is therefore justified.

The Ferguson counterfactual though lacks credibility as a basis for policy decisions. First, because of the huge degree of uncertainty. In standard economic cost-benefit analysis the degree of uncertainty when a counterfactual is estimated is relatively small – because it is assessing what happens if nothing changes. In this case the do-nothing counterfactual involved estimating the impact of a novel coronavirus spread unchecked through a population. Its estimation, I suggest,… Read more »
A paper from the German health ministry’s institute (rki.de) clearly proves (with a nice graph created with “nowcasting”) that the peak of infections came before the lockdown even started. The reproduction rate R, which the Germans wanted to reduce to 1 with the lockdown, had in fact already fallen below 1 before the lockdown started. These things show COVID-19 is just behaving like the flu and vanishes on its own around the end of April. It is very funny to see how super-sophisticated models like Imperial’s rest upon totally guessed, unscientific core characteristics of the pandemic.
Interestingly the paper doesn’t change politics in Germany, they are in full self-destruction mode, the aim is to completely destroy the virus, against all laws of nature.
The graph is on page 3 (sorry, it’s a PDF):
https://www.rki.de/DE/Content/Infekt/EpidBull/Archiv/2020/Ausgaben/17_20_SARS-CoV2_vorab.pdf?__blob=publicationFile
Thank you Chrissie. The article looks interesting but my German is not good enough to understand and interpret the other findings. Is there an English version? Could you explain please how the analysis has been done ie the ‘nowcasting’. John.
The most interesting graph is Fig. 4 (“Abb. 4”) on “page 14” (the top of the fifth page of the PDF), clearly showing R was already below 1 on March 22nd, when the government started the lockdown.
They analysed 120,000 cases and had information about the onset of infection for 70,000 of them (they asked, “When did you first have symptoms?”). They “implanted” this information into all the other cases (“nowcasting”) and did the analysis from there. These numbers are extremely reliable; the RKI is extremely cautious and has always been overly pessimistic.
In Germany, the government first said we must reach a doubling time of around 10 days. When that had been reached, they said “R=1” is much more important, and now that that has been reached (though it had in fact been reached before lockdown, which they would never acknowledge) they say the clusters in nursing homes are much more important (“we shut down society to save the elderly in nursing homes”).
Thank you for that information Chrissie. We call that ‘moving the goalposts’. The same is happening in the UK. So if the Imperial model’s parameters and transmission mechanisms were to be updated with the latest data and that would be ‘nowcasting’?
Yes correct, I think so.
Hi Chrissie, apologies for jumping in. I’m just trying to verify something I read in an article that suggested that Germany may be counting its CV19 deaths in a different way than the UK. Where a comorbidity is present but the person has tested positive for coronavirus in the UK, that is counted as a death from coronavirus. I read that in Germany, the cause of death would reflect the comorbidity… but I haven’t been able to reference this. Are you able to confirm either way? Many thanks
Germany has the same confusion. Normally, everyone who dies with the virus is counted as a Covid-19 death.
A pathologist in Hamburg, Prof. Dr. Püschel, was the first to really examine all 60 Covid-19 “victims” he had, and every single one had serious illnesses that would have caused them to die in the short term anyway.
Crazily, the German health ministry had a rule not to examine the dead, to prevent further spread of the virus…
“Its estimation, I suggest, was little better than a (sophisticated) guess that reflected Ferguson’s belief that ‘C-19 is comparable in lethality to H1N1 flu in 1918’.”
Precisely. When a modeller declares that before they’ve even started they are already anticipating the outcome of the model, alarm bells should ring. In this case, Ferguson would already have in mind what the model ‘should’ be showing in terms of deaths and would obviously be modifying it until it gave the ‘right’ answer.
Of course, the same could be true even if the modeller was not so open about declaring their initial beliefs about the outcome. At least Ferguson was honest about it…
I don’t imagine Imperial College will be in any rush to do this, for obvious reasons. Ferguson’s model has the same weaknesses as other mathematical models. Quality of input data – ‘Rubbish in, Rubbish out’ etc. It takes a brave (or, in this case, foolish) man to make predictions on this basis. Problem is the government were spooked.
They are getting a good idea in the U.S. (Stanford) as to how to answer your question ‘….if all restrictions were to be lifted, how many COVID-19 deaths might be expected in light of what we now know about the virus?’:
https://www.youtube.com/watch?v=k7v2F3usNVA&list=PLq8BgDugd2oyqmYx6RdVlJfQeAdhJkhc3
Their view, from antibody tests so far, is that Covid 19 mortality is similar to influenza, maybe a little bit worse.
That is wonderful news, if correct, because already identified coronaviruses/rhinoviruses responsible for the common cold can be a great deal more deadly to the aged and infirm than influenza:
‘New Respiratory Viruses and the Elderly’ 2011
‘Chronic underlying conditions and advanced age increase the susceptibility and disease severity of CoV infections, and mortality occurs.’
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3134957/
We don’t have vaccines for these either, so the current pandemonium is a puzzle.
Like most epidemic models, the Imperial College model depends critically on two parameters: the basic reproduction number (R0) and the infection fatality rate (IFR). For R0 their baseline assumption was 2.4, while for the overall IFR they used 0.9%. Their paper was published on 16 March 2020, and as more data has become available we are able to update these two parameters with better estimates. On 14 April 2020 the Centre for Evidence-Based Medicine (CEBM) at the University of Oxford stated that a survey of 16 published estimates of R0 had a mean of 2.65.[1] And on a web page last updated on 17 April 2020 they give their best estimate for the IFR as somewhere between 0.1% and 0.36%.[2]
[1] https://www.cebm.net/covid-19/when-will-it-be-over-an-introduction-to-viral-reproduction-numbers-r0-and-re/
[2] https://www.cebm.net/covid-19/global-covid-19-case-fatality-rates/
Exactly, Martin. The paper assumed an IFR of 0.9% based on ‘an analysis of a subset of cases from China’ (p.5). My reaction on reading that was: ok, but that assumption has to be subject to a wide margin of error, so what’s the sensitivity of the results to different assumptions about this crucial parameter? I thought I found the answer on p.8: ‘We find the relative effectiveness of different policies is insensitive to…varying IFR in the 0.25%-1.0% range’. I found that hard to believe but moved on. However, I had misread it: it is the relative effectiveness of the policies that is insensitive to the IFR, not necessarily the absolute effectiveness (i.e. the number of deaths).
So, can anyone help with the following: (a) has there been any analysis done by ICL that shows the sensitivity of deaths to variations in IFR? (b) has ICL updated the model to reflect better data, such as the CEBM findings above? and (c) if the answer to (a) is no, would anyone care to have a stab at estimating the relationship, ceteris paribus, between IFR and deaths? I presume that if the IFR is 10 times lower (0.09% not 0.9%) deaths would not be 10 times fewer!
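On question (c): in the simplest homogeneous SIR framework, deaths do scale linearly with the IFR, because the attack rate depends only on R0 and not on the IFR at all. A minimal sketch (my own illustration using the standard final-size relation z = 1 − exp(−R0·z), not anything from the ICL code):

```python
import math

def final_size(r0, tol=1e-10):
    """Attack rate z solving z = 1 - exp(-r0 * z), by fixed-point iteration."""
    z = 0.5
    while True:
        z_next = 1 - math.exp(-r0 * z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next

population = 66_000_000            # UK, roughly
z = final_size(2.4)                # Imperial's baseline R0
print(f"attack rate at R0=2.4: {z:.1%}")   # ~88% ever infected, unmitigated

for ifr_pct in (0.9, 0.36, 0.1):   # Imperial's IFR vs the CEBM range
    deaths = population * z * ifr_pct / 100
    print(f"IFR {ifr_pct}%: ~{deaths:,.0f} deaths")
```

At IFR 0.9% this lands close to the well-known ~500k unmitigated figure; at the CEBM’s 0.1%–0.36% it would be roughly 58k–210k. So in this simple framework an IFR ten times lower does mean roughly ten times fewer deaths – though heterogeneity, mitigation and healthcare overload (which the ICL model includes) all complicate that in practice.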
We don’t have the right model. All we have is a model telling us about the benefits of a particular course of action. It is silent on the costs. Moreover, the whole concept of focusing on deaths is flawed. A death is an event occurring at a single point in time. What should be taken into account is the amount of life being saved. E.g., in the current model the deaths of 100 people aged 95 carry equal value to say the deaths of 100 people aged 5. This is clearly not true as those 5 year olds could look forward to say 77 more years of life each, whereas the 95 year olds would have much shorter time horizons. The entire framework for thinking about how to prepare the model is wrong.
I agree totally with your comments. Life and death is complex and involves ‘value-choice’ decisions which cannot be translated into computer code. Mathematical models like that by Ferguson et al are essentially ‘best-guess’ attempts to model one aspect of a multi-dimensional issue. Sustainability models are just the same. ‘All models are wrong, some are useful…..and some are dangerous’.
Looking at the model, I see one large framing problem, and several problems with assumptions. The framing problem is – what is this model good for? The number of deaths under two scenarios is a political objective, not a healthcare objective. A healthcare outcome objective would take into account all of the direct and indirect costs of the two choices, and also not the “deaths” but the amount of LIFE lost. If 10,000 people die, all of whom could expect one more month of life, that is the same as only 17 people aged 25 expecting another 50 years of life. The concept of burden of disease and measuring outcomes as quality-adjusted life years is not new and the model should ordinarily have focused on this, adding in costs and benefits. Without taking these fundamental factors into account, I do not see what the model is good for, except purely political purposes. Number of deaths is what will take the headlines. I have some major doubts about the motivations of the people behind this model.
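That life-years point is easy to make concrete. A minimal sketch (the age bands and remaining-life-expectancy figures are illustrative assumptions mirroring the example above, not real actuarial data):

```python
# Weight deaths by remaining life expectancy instead of just counting them.
# Cohort figures are illustrative assumptions, mirroring the example above.
cohorts = [
    # (label, deaths, remaining life expectancy in years)
    ("aged ~95", 10_000, 1 / 12),  # roughly one more month of life each
    ("aged ~25", 17, 50.0),        # roughly fifty more years of life each
]

for label, deaths, remaining in cohorts:
    life_years = deaths * remaining
    print(f"{label}: {deaths:>6,} deaths = {life_years:,.0f} life-years lost")
```

The two death counts differ by a factor of almost 600, yet the life-years lost are nearly identical (about 833 versus 850), which is why a QALY-style outcome measure can rank policies very differently from a raw death count.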
Hundreds of thousands will die, only to prolong the lives of a few thousand by a couple of months.
Has the Neil Ferguson model ever been close with one of its pandemic predictions? I have read about all the pandemics it got seriously wrong so I can’t understand why anyone would take it seriously.
An interview with Dr. Chris Murray, who created the USA Institute for Health Metrics and Evaluation’s COVID-19 projections. https://fivethirtyeight.com/videos/how-one-modeler-is-trying-to-forecast-the-toll-of-covid-19/ Two little quotes I noticed. At about 7.00, when the dubiousness of the data on which the model’s parameters are based has been pointed out: “It’s a huge issue, but we don’t really see an alternative”. That’s what I have always feared: the imperative to produce some figures, no matter how dubious, because this is seen as more scientific than ‘common sense’. I strongly suspect that common sense is a better way to go about things in a situation like this. At about 8.00: “The nature of SEIR models is they do tend to always show this huge increase up to the point of saturation where most people get infected and very often that has not historically happened so we wanted to find a strategy that’s not simply saying that eventually everybody’s going to get infected and we’re going to have this enormous burden on the health system. Because historically that has tended to be on the high side and that’s where we’ve tried to anchor it in data albeit you’re stuck having to make some assumptions to make this work”.… Read more »
What went wrong with the UK’s response to COVID19? When it comes to analysing how the UK could have responded better to COVID-19, two major items of accepted wisdom will need to be challenged. First, science is not a religion: theories have to be tested against the hard data and evidence and then discarded or modified accordingly. Secondly, while front line NHS workers have been amazing, the organisation itself – and Public Health England – are bureaucratic outdated leviathans that have frequently been slow, shambolic and disorganised in their response to this fast moving threat. Dealing first with the science, the community of top-level epidemiologists is quite a small one and it has been waiting for its time of importance to arrive for many years. As soon as COVID19 arrived, the complex models beloved of this group were immediately wheeled out and dramatic forecasts of hundreds of thousands of deaths were broadcast all over the media. Unfortunately, there was one major problem: all the existing models were based on influenza and other diseases that behave nothing like the current coronavirus; consequently, all those forecasts were completely meaningless and in no country in the world has the virus followed the initial… Read more »
Fantastic post! You really should be advising the government – I mean it. You’ve laid it out perfectly.
@Hugh Osmond. What you’ve articulated – and it’s something that’s been bothering me – is that ‘R0’ and the other ‘R’ variants are not a real characteristic of the virus. Snapshots during the epidemic may give you observed values for them, but that doesn’t mean that they represent something fundamental. Attempting to both derive, and then use values for ‘R’ in models, has led to this disaster. It needs to be shouted from the rooftops. I’m now even more convinced that the model I came up with – which is just my own guess at how the illness works (it’s not SEIR) is closer to reality than Imperial’s. In my model I can observe ‘R’ as an output as the epidemic progresses, but I don’t feed an ‘R’ in. My stumbling block was that I was still in thrall to the idea that ‘R0’ must mean something – because that’s what they’re always on about. I saw my model as a way to explain why herd immunity could be achieved with a lower percentage of the population immune than ‘expected’ – but of course the ‘expected’ figure is derived from ‘R0’ – something that doesn’t even apply to my model… Read more »
I agree Caswell. Hugh Osmond’s post unlocks some of the confusion and consternation I have been feeling. It deserves a wider audience. It is as eye-opening as the articles Dr John Lee wrote for the Spectator earlier in the piece. At this juncture in the saga, my major bugbear is: why on earth would you make achieving a certain value of R (the fifth test) the most important criterion in deciding policy? R is simply a measure of the rate at which the virus spreads. Slowing down the rate at which it spreads is relevant only if we are concerned about exceeding critical care capacity, and only then in conjunction with an assumed relationship between infection and the proportion of those infected needing critical care. If we are still concerned about that then build more temporary hospitals! – a lot cheaper than extending lockdown. Indeed, following on from Hugh’s point, make these into COVID isolation hospitals. In an earlier comment you wrote that a model is merely a rendering of the assumptions. That struck a chord. I’ve forgotten who said that there are three stages to policy making: diagnosis, prognosis, and prescription. Of the three, diagnosis is the most important. The prognosis is… Read more »
I think the constraint is staff (doctors and nurses), not the capacity to build more hospitals, secondary possibly having ventilators and PPE, but these can be solved in weeks/months… but, training people takes years. There is finite supply with the NHS, as with most other things that require people and skill.
I agree that R isn’t an input (except in a very simple, deterministic model). Rather it’s a characteristic (possibly theoretically measurable?) at any one time and place. But it must be a function of all the things you mention and more. Those are closer to being the model inputs, although even some of them should be derived from even more basic inputs.* I suspect this may be one reason the “logistic” style models do a better job of tracking actual cases than the Imperial style models: In one sense they are simpler but only because they operate at an even more macro level. I am still worried that Imperial haven’t released their original model for scrutiny: that makes me suspicious. Nor (as far as I know) have they demonstrated that with appropriate actual data fed in it can track what has happened so far. To my mind, being able to output results that track the actual events given actual input data should be fundamental to any trustworthy model. Or maybe that’s been tried and the result was just too embarrassing for the Government to allow out. (Off-topic, but that last point is of course also a criticism of the climate… Read more »
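For what it’s worth, the “logistic style” idea can be sketched in a few lines: cumulative cases are assumed to follow an S-curve fitted directly to the data, with no mechanistic R0 inside. A generic illustration (all parameters made up, not any specific group’s model):

```python
import math

def logistic(t, K, r, t0):
    """Cumulative cases as an S-curve: final size K, growth rate r, midpoint t0."""
    return K / (1 + math.exp(-r * (t - t0)))

# Made-up parameters: 100k total cases, ~20%/day early growth, inflection at day 40
K, r, t0 = 100_000, 0.2, 40
for day in (0, 20, 40, 60, 80):
    print(f"day {day:2d}: {logistic(day, K, r, t0):9,.0f} cumulative cases")
```

Daily new cases (the curve’s derivative) peak at the midpoint and then decline, so the observed growth rate falls continuously from r towards zero – the curve can track a decelerating epidemic without any explicit R being fed in, which is what I take operating at a “more macro level” to mean.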
Hi duncanpt. Could you explain what you mean by logistic-style models please? Static?
Interesting post, thanks Hugh. A further thought from someone who spends their time reviewing models and “back-casting”: “The standard (SEIR) epidemiological models invariably rely on two important numbers: R0, which is the average number of people an infected person in turn infects; and the IFR, which is the percentage of infected people who actually die” And of course also on the “S” in “SEIR”; Susceptible. It seems there is mounting evidence that a substantial proportion of the population is not Susceptible; specifically younger people. Not only do they not show symptoms but, by and large, they also don’t seem to be very contagious either. The development pattern of the COVID19 outbreak seems to fit a trend whereby: 1) It is quite deadly in a significant proportion of the population (those with underlying health conditions, of which many are elderly), so there is an early spike in deaths if and when this proportion gets exposed 2) It is not very deadly but sometimes very contagious in another proportion (some adults without underlying health conditions), so it spreads easily because this proportion is mobile and many healthcare workers are in this proportion 3) It is not deadly nor contagious in another proportion… Read more »
Would this explain why several commentators are noticing a similar profile over time in most countries – between 6 and 8 weeks, I think – regardless of their approach, with or without lockdowns/testing/etc.? In other words, has the trajectory been determined almost entirely before you know it’s arrived?
Excellent post.
Casts helpful light on the futility of seeking the R0 number.
Most epidemiologists appear to agree that every epidemic is unique – affected by geographic, cultural, economic and other factors. It does seem odd that in this pandemic each local epidemic appears to be following a remarkably similar pattern. This seems to suggest that it might be the characteristics of the virus itself that are dominant. Rapid spread through the majority population without causing significant harm, then severe outbreaks among vulnerable groups, with a steady decline after a small number of cycles.
It does seem that the modellers got it very wrong, and governments raised their predictions to the level of belief, making the whole thing very messy indeed. I fear my sons and their future children will still be paying the price of these errors long after I am gone.
This all sounds plausible but… it’s not just Imperial or the UK that came to the lockdown conclusion. Different countries globally did the same thing without Imperial College. Imperial says that the mortality rate is largely unchanged from the initial estimates – not sure about the transmission data.
We didn’t have the PPE in place at the outset and still don’t, so a lockdown was needed and probably still is to a great extent. Hugh’s post does raise the question of whether, if we had been better prepared – a major response in January to buy PPE and get testing ready – we could have avoided locking down and managed with simpler measures.
A big flaw in Hugh’s work is this: “Luckily, these high-mingling people also tend to get the virus early in the epidemic, so are taken out of the equation rapidly, meaning that the average number of people being infected by each infected person (R0) falls precipitously over the first few weeks of the epidemic.” What does it mean? The reality is that we are all quite “high-mingling” over a few weeks. We are highly social and interact with hundreds of people, even those of us with a limited social life.
“Luckily, these high-mingling people also tend to get the virus early in the epidemic, so are taken out of the equation rapidly, meaning that the average number of people being infected by each infected person (R0) falls precipitously over the first few weeks of the epidemic.”
I took this to mean that those who are most likely to spread the virus are also most likely to catch it early in the cycle, and therefore develop immunity early on. For example, politicians and Prime Ministers!
All adds to the “peakiness” of the first wave.
Yes, I didn’t quite follow what that meant exactly, but I very much appreciate the idea that ‘R’ is going to be different in different circumstances. If the epicentre of the infection is the hospital, you’ll get a whole load of infections, probably with heavy doses of the virus, but as these are carried out into the community, the circumstances change and ‘R’ goes down, not just because people are becoming immune, but because the outside world isn’t the concentrated petri dish that the hospital is.
“… it’s not just Imperial or the UK that came to the lockdown conclusion”.
Doesn’t he deal with precisely that point in the third paragraph? It’s groupthink, unhindered by national boundaries.
“…the community of top-level epidemiologists is quite a small one… all the existing models were based on influenza and other diseases that behave nothing like the current Corona virus”
He did sort of say “all epidemiologists are the same”. I don’t accept that. Hugh is not the only one to have these ideas. I am sure others in the field have them too. I think they all know how clunky the models are and how important the input and insight is. As for the “super spreaders”, I see his point, but with a highly transmissible disease we all end up spreading it over time – maybe not quite as much. We are very social and have lots of interactions – trains, shops, bars, salons – even if we are Johnny Nomates!
If the central point of any epidemiologist’s discussion is ‘R0’ (i.e. a single one-dimensional value from which they then extrapolate the whole epidemic) then that tells us all we need to know. If an epidemiologist said “Of course, with a real virus, there isn’t just a single value that defines its spread…” they might begin to command a little more respect from people like me.
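To make that concrete, here is a toy simulation (entirely my own sketch with assumed parameters, nothing to do with Imperial’s code): each person gets a gamma-distributed contact rate, contacts land preferentially on busy people, and R is simply measured each generation rather than fed in. The realized R typically starts near the calibrated value and then falls as the big mixers are burned through.

```python
import math
import random
from bisect import bisect
from itertools import accumulate

random.seed(0)
N = 100_000
k = 0.25  # gamma shape: highly variable mixing (assumed, not from any paper)
activity = [random.gammavariate(k, 1.0 / k) for _ in range(N)]  # mean ~1
cum = list(accumulate(activity))  # cumulative weights for biased sampling
total = cum[-1]

def weighted_pick():
    """Pick a person with probability proportional to their activity."""
    return bisect(cum, random.random() * total)

def poisson(lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= random.random()
        if p <= limit:
            return n
        n += 1

c = 0.48  # chosen so the activity-weighted R0 ~ c * (1 + 1/k) = 2.4 (assumed)
susceptible = [True] * N
cases = {weighted_pick() for _ in range(100)}  # seed lands among busy mixers
for i in cases:
    susceptible[i] = False

generation_R = []
while cases:
    new = set()
    for i in cases:
        # busier people make more contacts, and contacts hit busy people too
        for _ in range(poisson(c * activity[i])):
            j = weighted_pick()
            if susceptible[j]:
                susceptible[j] = False
                new.add(j)
    generation_R.append(len(new) / len(cases))
    cases = new

attack_rate = susceptible.count(False) / N
print("realized R by generation:", [round(x, 2) for x in generation_R[:6]])
print(f"final attack rate: {attack_rate:.1%}")
```

No single ‘R0’ appears anywhere in the dynamics: the realized R is a different number in every generation, and the epidemic burns out with far fewer people infected than the homogeneous final-size arithmetic would suggest.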
The US also swallowed the Imperial model, although they did have others of their own which went down similar roads. When Trump mentions a 2.2 million deaths worst case, that’s Imperial.
Hi BoneyKnee.
‘Imperial says that the morality rate is largely unchanged from the initial estimates’
Do you mean that they haven’t got round to changing their assumption yet, or that they believe their initial estimate is still the best one?
‘We didn’t have the PPE in place at the outset and still don’t so a lockdown was needed’
A more sensible policy in that case would be to do whatever it takes to get the PPE. Same as hospital beds – just do whatever it takes to get the capacity – don’t cripple the economy.
I agree that a better route would be to have the PPE. We didn’t have it. A better route would have been to have more understanding. We didn’t. We do now and I think this is where the government is failing. The tools are more or less in place. We have experience. We should be able to take steps with some reasonable estimates and understanding of the risks involved. So I agree with you today – just not about the lockdown overall.
Great post. Couldn’t agree more regarding the complete failure of the centrally controlled healthcare institutions. Unfortunately it seems unlikely there will be any reform of the NHS now.
Very informative. Looks like the government should have shut the hospitals, not the pubs.
Ferguson has said that his model was useful because decisions on how to deal with Coronavirus can’t be made in a vacuum. But isn’t the danger that decisions made on the basis of an unproven model could be bad decisions rather than good decisions? AFAIK Sweden hasn’t relied on models but has instead used public health knowledge about dealing with viruses and taken a different approach.
Is it possible that many people, including modellers themselves, are too enamoured with maths and give the output of predictive models an unwarranted, almost magical status? And besides, even if the assumptions plugged into Ferguson’s model were right we don’t know if the undocumented computer code is correct and was properly tested.
Very rousing. I agree you characterise the human element very well. I’m all for a bit of expert grilling, but I’m left having to defend the enemy. Have you looked at his model or report? The man’s actual paper… https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf … on page 13, for a population with a natural R0 of 2.6 using a suppression strategy, which is what we are in, he predicted 48k deaths – annoyingly accurate. On current ONS figures we are likely to see 45k deaths at the end of the decline of this wave. I think he models the varying R0 mechanisms you describe across the whole country in quite some detail; Caswell looked at the model in some detail and agreed as much. In principle I agree with your criticism of more standard models, but to the extreme degree to which you describe it happening in the first wave of a suppressed infection?… I’m struggling. This all hinges on the spread level that has been achieved… and I would observe that until we know it is possible to be infected, resist the infection, and present no antibodies, BUT crucially be unable to get it again, we don’t know that the serology surveys have NOT… Read more »
12%ish nationally at the moment feels about right – it’s broadly aligned with the COVID tracker app.
Simon,
Ferguson is reported to have predicted 7k to 20k deaths with lockdown:
https://www.reuters.com/article/us-health-coronavirus-britain-ferguson/uk-coronavirus-deaths-could-reach-7000-to-20000-ferguson-idUSKBN21N0BN
Do you know why he’s saying this if his paper is predicting 48k?
I’m not him, so I can only speculate. I’d imagine the government took a policy decision to achieve an earlier lockdown, attempting to limit the loss of life further than his model was broadly aiming for in maximising ICU usage without overloading it. As it is we haven’t used the expanded capacity, and have been more like his model’s estimates of usage, but we certainly overshot the government’s aim of lowering deaths from his model’s prediction to the range they wanted of 7k–20k, which I’d imagine was all about not looking worse than Italy (at the time), and Neil was just staying on message with that agreed policy aim of the government… I don’t see it as some great reveal of his model being that wrong, and I don’t say this like it was great or exact; I just think it was within the right order of magnitude – tens of thousands, not hundreds of thousands – and was within 25% of reality. I’d consider that to be a good model. I find the whole attack on him a frustrated exercise in blaming someone else for the situation. People are just trying their best, and sometimes (Swine Flu) the best is not good… Read more »
Hello sinichol. I think you’ve made a logical error here: “… page 13, for a population with a natural r0 of 2.6 using a suppression strategy, which is what we are in he predicted… 48k deaths, annoyingly accurate…” A stopped clock is right twice a day. We don’t know what the real result would have been without the suppression strategy – it might also have been about 48k. He might be completely wrong about the effectiveness and timing of the suppression strategy. I didn’t, in fact, discover anything sophisticated about their model’s core assumptions. It is described in their paper thus: “…we make a baseline assumption that R0=2.4 but examine values between 2.0 and 2.6. We assume that symptomatic individuals are 50% more infectious than asymptomatic individuals. Individual infectiousness is assumed to be variable, described by a gamma distribution with mean 1 and shape parameter =0.25. On recovery from infection, individuals are assumed to be immune to re-infection in the short term. Evidence from the Flu Watch cohort study suggests that re-infection with the same strain of seasonal circulating coronavirus is highly unlikely in the same or following season…” You can see that it’s very simple, assuming that susceptible individuals… Read more »
Caswell,
“The degree of influence that this model has had is very odd indeed.”
I suspect it’s about the govt having the veneer of science rather than science per se. Ferguson has a mixed track record with pandemic predictions and his work on Foot & Mouth has been discredited to some extent. It’s almost a case of hammer and nail syndrome, and the govt seem to want modelling advice regardless of whether that modelling is reliable. I don’t suppose it does Imperial College any harm to be called upon in high profile situations.
But the government must be aware that even the general public mocks Imperial College’s modelling skills. The comments following an article in The Sun were quite funny – most of them knew exactly who Ferguson was and remembered foot-and-mouth and CJD.
It is curious why Ferguson and Imperial College are still getting the ear of the govt. Would you go to a doctor who misdiagnosed your previous illnesses at least as often as they got it right? I wouldn’t.
Quoting what you said above… “I have finally had a look at the Imperial College model (as described in their papers), and it’s better than I thought it would be: as far as I can tell it is doing precisely the right thing in that it simulates the entire country’s population as individuals, distributed in towns and villages, homes, schools and workplaces, with behaviours related to real demographic data.” … I thought that, like me, you’d read page 4 where he talks about varying the impact of an underlying r0 depending on detailed demographics. The criticism in the post was that his model did not do this, which seemed false to me and not likely to lead to sensible debate, hence my attempt to steer things towards that. You’re now rolling back on that, calling it a first year lecture submission, and insulting my logic… why? As to the stopped clock metaphor, it is typically used to describe an unrelated thing that happens to fit a pattern. Hardly fitting given this is the actual document driving policy. A more likely candidate would be your toy model showing 15% and Will wanting to find a model that had a similar output to the PCR… Read more »
Our party has provided its initial critique of the model here: https://timesofindia.indiatimes.com/blogs/seeing-the-invisible/a-critique-of-neil-fergusons-the-imperial-college-pandemic-model
This may not be news to others but I’ve not seen it reported elsewhere; on the sometimes ‘back of the envelope quality’ of the Imperial and LSHTM models around which advice to government has been based:
‘The variation in the length of time between the onset of symptoms and eventual death was estimated from just 24 patients in Wuhan…’
And for another model: ‘…the length of time between onset and death, for example, is modelled using data that a Japanese team collated on 44 patients for whom those dates were reported either on Chinese government websites or in news stories.’
Full article: https://lrb.co.uk/the-paper/v42/n09/paul-taylor/susceptible-infectious-recovered
Am I right in interpreting that article as saying that the CFR and IFR are ‘known’, as is R0, and the susceptibility of every single person on the planet? And these assumptions produce straightforward models and calculations about how many people will be infected and die. But at the end he starts to mention that real people who died were maybe going to die of other things anyway, and that doctors might have mis-attributed deaths to C19. So he picks faults in the real, measured data that is being compared against the models, but he fails to see that exactly those flaws would apply to the data that produced the numbers he fed into the model. I have seen this many times in this debacle. David Spiegelhalter did a podcast early on where he identified the problems with the data (confirmed cases are a function of how many tests are done, criteria for C19 deaths have varied over time, confusion over what constitutes a C19 death, ‘of’, ‘with’ etc.) and concluded that they were basically meaningless. But by the end of the podcast he had forgotten all about this and was giving us the absolute risks of dying from… Read more »
This is just griping. There was enough evidence about covid19 that you didn’t need any model to point you in roughly the right direction in terms of action: covid19 was bad, it was a lot more deadly than the flu, it spread quickly and easily, it had a potentially long symptom-less infectious period, it was novel, it could quickly overwhelm a health system. Any modelling was a bonus. As it stands, the modelling effort, even if it was only half correct, was useful if it led to action; see for example https://www.medrxiv.org/content/10.1101/2020.04.02.20050922v3
This is what I can’t understand. Can you not see that the “action” you applaud is going to rip up the very fabric of society? What you have started is now unstoppable.
Supposing Covid19 was ten times “more deadly” (whatever that means) than flu. Would it be worth trapping all the children in Britain indoors for months, years, to partially prevent it – especially as the disease only affects the old?
Would it be worth emptying hospitals of old people and sending them – infected – out into the community and care homes? Then denying many more thousands of patients their operations and consultations?
Would it be worth destroying the majority of UK businesses? Killing off everything that makes life enjoyable? Killing off everything that funds the NHS?
You astound me in your lack of vision. Please tell me that you realise what you are advocating and try to justify it.
Using the basic inputs to the model, I did my own ‘modelling’:
Total population: 65,000,000
Total hospitalised: 65,000,000 x 4.4% = 2,860,000
Total critical: 2,860,000 x 30% = 858,000
Total deaths: 858,000 x 50% = 429,000
I admit, I used a calculator, rather than a 15,000 line modelling program. But I got pretty close to the Imperial College prediction. And every time I ‘ran it’, I got exactly the same result – so it must be right!
Report 9, the pandemic report that led to the UK shutdown: https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf Published on the 16th of March 2020. “Introduction The COVID-19 pandemic is now a major global health threat. As of 16th March 2020, there have been 164,837 reported cases and 6,470 deaths confirmed worldwide. Global spread has been rapid, with 146 countries now having reported at least one case.” The above does not make clear whether “cases” means cases of SARS-CoV-2 infection or cases of the Covid-19 disease. The report mostly refers to infection cases, but sometimes seems to be referring to disease cases. On 8 January, a new coronavirus was identified as the cause of the pneumonia. That means the claim is that, over a period of 78 days, 164,837 infection cases had been identified worldwide. That is about 0.002% of the world population – roughly 2,100 cases per day, or about 14.5 cases per country per day. With a claimed 6,470 deaths worldwide – about 0.00008% of the world population, roughly 83 deaths per day, or 0.57 deaths per country per day. 6,470 deaths out of 164,837 infection cases is 3.92% – but this calculation assumes that all infection cases have been reported, an unlikely situation.… Read more »
Sunday, May 17th 2020
https://www.dailymail.co.uk/news/article-8327641/Coronavirus-modelling-Professor-Neil-Ferguson-branded-mess-experts.html
Computer code for Prof Lockdown’s model which predicted 500,000 would die from Covid-19 and inspired Britain’s ‘Stay Home’ plan is a ‘mess which would get you fired in private industry’ say data experts
Professor Neil Ferguson’s Imperial College London coding branded ‘unreliable’
University of Edinburgh scientists ran the same model and had different results
Model was criticised early on by University of Oxford and public health expert
The truth is breaking out and this site has a lot to take credit for that.
A spokesman from the university’s Covid-19 response team said: ‘The UK government has never relied on a single disease model to inform decision-making.
‘As has been repeatedly stated, decision-making around lockdown was based on a consensus view of the scientific evidence, including several modelling studies by different academic groups.
I wonder if they will release the computer code, or any information about these other modelling studies and which academic groups carried them out.
16 May 2020 • 1:32pm
https://www.telegraph.co.uk/technology/2020/05/16/coding-led-lockdown-totally-unreliable-buggy-mess-say-experts/
Coding that led to lockdown was ‘totally unreliable’ and a ‘buggy mess’, say experts
The code, written by Professor Neil Ferguson and his team at Imperial College London, was impossible to read, scientists claim
The Covid-19 modelling that sent Britain into lockdown, shutting the economy and leaving millions unemployed, has been slammed by a series of experts.
Professor Neil Ferguson’s computer coding was derided as “totally unreliable” by leading figures, who warned it was “something you wouldn’t stake your life on”.
The model, credited with forcing the Government to make a U-turn and introduce a nationwide lockdown, is a “buggy mess that looks more like a bowl of angel hair pasta than a finely tuned piece of programming”, says David Richards, co-founder of British data technology company WANdisco.
The rest is behind a paywall
https://www.telegraph.co.uk/technology/2020/05/16/neil-fergusons-imperial-model-could-devastating-software-mistake/?li_sour
Neil Ferguson’s Imperial model could be the most devastating software mistake of all time. The boss of a top software firm asks why the Government failed to get a second opinion before accepting Imperial College’s Covid modelling.
In the history of expensive software mistakes, Mariner 1 was probably the most notorious. The unmanned spacecraft was destroyed seconds after launch from Cape Canaveral in 1962 when it veered dangerously off-course due to a line of dodgy code. But nobody died and the only hits were to Nasa’s budget and pride.
Imperial College’s modelling of non-pharmaceutical interventions for Covid-19, which helped persuade the UK and other countries to bring in draconian lockdowns, will supersede the failed Venus space probe and could go down in history as the most devastating software mistake of all time, in terms of economic costs and lives lost.
Since publication of Imperial’s microsimulation model, those of us with a professional and personal interest in software development have studied the code on which policymakers based their fateful decision to mothball our multi-trillion pound economy and plunge millions of people into poverty and hardship. And we were profoundly disturbed at what we discovered. The model appears to be totally…
I got a free trial at the Telegraph; the full article is the one quoted above.
David, you are missing the point that Ferguson is not the only one modelling, far from it. Models around the world are giving the same sort of results. The whole world is not relying on one simulation. I take your word regarding the coding versus best practice in the commercial world. Using C – I thought it was C, not Fortran – doesn’t mean errors per se. I am assuming that the Imperial team have methods for sense-checking that the model is doing the “right thing” and behaves as they might expect.
It is silly to assume that IC cobbled some code together, then went to No. 10 and told them to shut down the economy, and Johnson meekly said, “OK”. The world doesn’t work like that.
[…] end with a quote from the man who I believe will emerge as the biggest hero of this whole mess, Sweden’s Anders Tegnell, the man who chose not to lock his country […]
[…] created a permanent home for this piece in the right-hand menu as a subpage of “How Reliable is Imperial College’s Modelling?” Worth reading in […]
Bad epidemiology, bad journalism and bad government are, as we have witnessed, a deadly combination. The Imperial College report claims it was used to inform public policy for weeks prior to the government’s drastic and ruinous U-turn into lockdown. The doomsday conclusion was a result of changing the data using Italian statistics, and it allowed the government to effect a complete policy reversal while still claiming to be ‘following the science’.
The timing of the revision was fortunate, given the hysterical clamour in the media and across social media that ‘something’ be done. Subsequent arguments revolved around whether the lockdown was early enough, long enough, hard enough. No politician or editor questioned whether lockdown was effective or necessary. The recent arbitrary application of the ludicrous ‘whack-a-mole’ policy begs the question: when will this ever end?
Professor Ferguson’s resignation was a perfect opportunity to start backing away from the lockdown, but I suspect ministers were addicted to the camera glare by this point. The Imperial College report that led to this catastrophe unashamedly used the 1919 flu pandemic as a comparative model to describe what we may be facing. The objective, sober academics who were using objective, sober science to inform…
I fear that the collective of ‘bedwetters’ aka the British Cabinet have managed to find their ministerial limos but not their political judgement.
Until they do, Covid19 will become Covid21
I prophesy.
Neil Ferguson will get a knighthood in the New Year Honours List.
Buy your vomit buckets well in advance.
Loving this… questioning all that is coming out… exactly what’s needed… thank you
I found this on the CW website earlier today: https://t.me/HATSTRUTH/659.
I’m uncertain whether it’s true or fake, but it’s thought-provoking.