David Davis MP and Matt Ridley write in The Sunday Telegraph on the modelling which led to the lockdown.
As published in The Sunday Telegraph:
Professor Neil Ferguson of Imperial College “stepped back” from the Sage group advising ministers when his lockdown-busting romantic trysts were exposed. Perhaps he should have been dropped for a more consequential misstep. Details of the model his team built to predict the epidemic are emerging and they are not pretty. In the respective words of four experienced modellers, the code is “deeply riddled” with bugs, “a fairly arbitrary Heath Robinson machine”, has “huge blocks of code – bad practice” and is “quite possibly the worst production code I have ever seen”.
When ministers make statements about coronavirus policy they invariably say that they are “following the science”. But cutting-edge science is messy and unclear, a contest of ideas arbitrated by facts, a process of conjecture and refutation. This is not new. A century and a half ago Thomas Huxley described the “great tragedy of science – the slaying of a beautiful hypothesis by an ugly fact.”
In this case, that phrase “the science” effectively means the Imperial College model, forecasting potentially hundreds of thousands of deaths, on the output of which the Government instituted the lockdown in March. Sage’s advice has a huge impact on the lives of millions. Yet the committee meets in private, publishes no minutes, and until it was put under pressure did not even release the names of its members. We were making decisions based on the output of a black box, and a locked one at that.
It has become commonplace among financial forecasters, the Treasury, climate scientists, and epidemiologists to cite the output of mathematical models as if it were “evidence”. The proper use of models is to test theories of complex systems against facts. If instead we are going to use models for forecasting and policy, we must be able to check that they are accurate, particularly when they drive life and death decisions. This has not been the case with the Imperial College model.
At the time of the lockdown, the model had not been released to the scientific community. When Ferguson finally released his code last week, it was a reorganised program different from the version run on March 16.
It is not as if Ferguson’s track record is good. In 2001 the Imperial College team’s modelling led to the culling of 6 million livestock and was criticised by epidemiological experts as severely flawed. In various years in the early 2000s Ferguson predicted up to 136,000 deaths from mad cow disease, 200 million from bird flu and 65,000 from swine flu. The final death toll in each case was in the hundreds. In the present case, when a Swedish team applied the version of the model that Imperial put into the public domain to Sweden’s actual strategy, it predicted 40,000 deaths by May 1 – roughly 15 times the true figure.
We now know that the model’s software is a 13-year-old, 15,000-line program that simulates homes, offices, schools, people and movements. According to a team at Edinburgh University which ran the model, the same inputs can give different outputs; the program produces different results on different machines, and even on the same machine when run with different numbers of central-processing units.
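How can the number of processors change a simulation’s answer? One common mechanism – a generic illustration, not the Imperial code itself – is that floating-point addition is not associative, so splitting the same work across a different number of workers changes the order in which partial results are combined, and therefore the total. A minimal sketch:

```python
# Illustration of non-reproducibility from parallel reduction order.
# This is NOT the Imperial College model; it is a toy example of one
# way thread count can change a floating-point result.

# A sequence whose sum is extremely sensitive to evaluation order:
# 1e16 + 1.0 rounds back to 1e16 in double precision, so small terms
# are silently lost depending on where they fall in the summation.
vals = [1e16, 1.0, -1e16, 1.0] * 1000

def chunked_sum(values, n_chunks):
    """Sum `values` in n_chunks partial sums, mimicking n worker
    threads that each reduce a slice and then combine results."""
    size = -(-len(values) // n_chunks)  # ceiling division
    partials = [sum(values[i:i + size]) for i in range(0, len(values), size)]
    return sum(partials)

one_worker = chunked_sum(vals, 1)    # strictly left-to-right
four_workers = chunked_sum(vals, 4)  # four partial sums, then combined
print(one_worker, four_workers)      # prints: 1.0 4.0
```

Identical inputs, identical machine, different “CPU counts” – and the answers differ by a factor of four. In a long-running stochastic simulation, effects like this (alongside race conditions and uncontrolled random seeds) are exactly what careful production code must eliminate to be reproducible.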
Worse, the code does not allow for large variations among groups of people with respect to their susceptibility to the virus and their social connections. An infected nurse in a hospital is likely to transmit the virus to many more people than an asymptomatic child. Introducing such heterogeneity shows that the threshold to achieve herd immunity with modest social distancing is much lower than the 50-60 per cent implied by the Ferguson model. One experienced modeller tells us that “my own modelling suggests that somewhere between 10 per cent and 30 per cent would suffice, depending on what assumptions one makes.”
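The arithmetic behind both figures can be sketched. The 50-60 per cent threshold follows from the standard homogeneous-mixing formula; the lower heterogeneous figure can be reproduced with a gamma-distributed-susceptibility approximation from the recent modelling literature (the particular values of $R_0$ and the coefficient of variation below are illustrative assumptions, not numbers taken from the article):

```latex
% Homogeneous mixing: classical herd-immunity threshold
H = 1 - \frac{1}{R_0},
\qquad R_0 = 2 \text{ to } 2.5 \;\Rightarrow\; H = 50\text{--}60\%.

% With gamma-distributed susceptibility of coefficient of variation CV,
% one published approximation lowers the threshold to
H = 1 - \left(\frac{1}{R_0}\right)^{\!1/(1+CV^2)},
\qquad R_0 = 2.5,\; CV = 2 \;\Rightarrow\;
H \approx 1 - 0.4^{0.2} \approx 17\%.
```

The intuition is that the most susceptible and most connected people are infected first, so each early infection removes a disproportionate amount of transmission; a value near 17 per cent sits comfortably inside the 10-30 per cent range the quoted modeller describes.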
Data from Sweden support this. Despite only moderate social-distancing measures, the epidemic stopped growing in Stockholm County by mid-April, and has since shrunk significantly, implying that the herd immunity threshold was reached at a point when around 20 per cent of the population was immune, according to estimates by the Swedish public health authority.
The almost covert nature of the scientific debate within Sage, the opaque programming methods of the Imperial team, the unavailability of the code for testing and review at the point of decision, the untested assumptions built into the model, all leave us with a worrying question. Did we base one of the biggest peacetime policy decisions on crude mathematical guesswork?