Covid: the high priests have lost their shine

By Richard North - December 27, 2021

In a small way, we may be seeing history made today, as Johnson is expected to review hospital data before deciding whether to tighten Covid restrictions.

Whether that will make any difference, however, remains to be seen. Up to press, the man has never shown any analytical capabilities and has largely been led by the nose in reacting to the different phases of the Covid epidemic. It is unlikely that he will suddenly have acquired a new capability to explore contentious data, and there are no indications that those around him are offering any better advice than they have previously.

Nonetheless, the Guardian seems to think that Johnson is “leaning away” from stricter curbs, based on an evaluation of “improving data”. In the radio silence over the Christmas period, though, there is a shortage of up-to-date data on which to judge the prime minister’s direction of travel.

Pending the outcome of any review – and perhaps trying to influence it – the Covid Mafia is in full throat, defending its risk-averse position, with the Guardian enlisting James H Naismith to make the case for “pandemic modelling”, claiming that the “gloomsters” are saving lives.

Naismith is director of the government-funded Rosalind Franklin Institute, based in Didcot, Oxfordshire. The interesting thing there is that the institute itself offers no expertise in communicable disease epidemiology, while Naismith is professor of Structural Biology at the University of Oxford and was at one time professor of Chemical Biology at the University of St Andrews.

This is typical of the BS offered by the likes of the Guardian, where “experts” who actually have no expertise in the subject in question are encouraged to pontificate, relying on personal and institutional prestige rather than knowledge. It’s the academic equivalent of “never mind the quality, feel the width”.

The good professor nevertheless delivers some wholesome “motherhood and apple pie” homilies, descending from his lofty heights to tell us that “society can’t just wait for things to happen”. Thus, he asserts: “We can and do save lives by being prepared for a range of things, only some of which happen”.

He doesn’t allow, however, for the consequences of not being prepared for some things – like a SARS pandemic, for which the planners relied on influenza modelling, got the planning completely wrong and were left playing catch-up thereafter. But that, I suppose, would rather spoil the pitch.

Instead, Naismith leaps to the defence of modelling, telling us that, “as information increases, the model improves, and the range of outcomes narrows as scenarios are eliminated”. The corollary, of course, is that when there is very little information and much of it is ambiguous, the models are shite and their power to define different scenarios is minimal.
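
To make that concrete – the figures here are invented, and the sketch is mine rather than anything offered by Naismith – consider estimating a hospitalisation rate from observed infections. A simple Beta-Binomial update shows how the plausible range collapses only once the data pile up:

```python
# Toy sketch (invented figures): how the plausible range for a rate
# narrows as observations accumulate, via a Beta-Binomial update.
from scipy.stats import beta

def credible_interval(hospitalised, infections, level=0.95):
    """Central credible interval for a rate, from a flat Beta(1,1) prior."""
    a = 1 + hospitalised                # prior + observed admissions
    b = 1 + infections - hospitalised   # prior + those not admitted
    lo, hi = beta.ppf([(1 - level) / 2, (1 + level) / 2], a, b)
    return lo, hi

# Early in a wave: 2 admissions out of 50 known infections.
print(credible_interval(2, 50))        # roughly (0.01, 0.14) -- hopelessly wide
# Much later: 400 admissions out of 10,000 known infections.
print(credible_interval(400, 10_000))  # roughly (0.036, 0.044) -- usably narrow
```

With fifty ambiguous observations, in other words, the “model” cannot tell a mild variant from a severe one: almost any scenario fits.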

Picking up on the Ferguson schtick, he then has “modelling” telling us that “hospitalisations per 1,000 infections is important to outcomes”, so “its measurement was prioritised”. And there, one begins to worry. Naismith is a Fellow of the Royal Society, with a string of prestigious letters after his name. If that offering represents the cream of British academia, we are in serious trouble.

Of course, “modelling” didn’t tell us that the rate of hospitalisation of Covid patients, proportionate to the number of infections, is important to outcomes. In general terms, that would be a statement of the bleedin’ obvious – if it were actually true, which it isn’t.

For a start, at a working level, we don’t know what the infection rate is. All we know is the number of positive results derived from PCR testing – a moveable feast that depends as much on the number of tests administered as it does on the level of infection in the community.
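
A crude worked example – my figures, purely illustrative – shows how far the headline count is hostage to testing volume:

```python
# Illustrative only: the same underlying prevalence produces very different
# "case" counts depending simply on how many tests are administered.
def reported_positives(tests_administered, prevalence, sensitivity=0.9):
    # Crude assumption: tests sample the community at random, which
    # real-world symptomatic and contact-driven testing certainly does not.
    return round(tests_administered * prevalence * sensitivity)

prevalence = 0.02  # 2 percent of the community infected, in both cases
print(reported_positives(500_000, prevalence))    # 9,000 "cases"
print(reported_positives(1_500_000, prevalence))  # 27,000 "cases" -- same epidemic
```

Treble the tests and the “cases” treble, without a single extra infection in the community.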

To get a more reliable estimate of the rate, we have to rely on the ONS data, which don’t become available until a week after the event – too late for planning purposes.

In fact, the detail needed by health service managers and clinicians is much more complex and wide-ranging. What they need to be able to predict is the number of admissions, but that information also needs to be time-specific and broken down by region or, preferably, by district.

It is pointless, for instance, modelling numbers on a national basis, when the infection may be rippling through the country, changing its character as it goes. What holds good for London may not be good for Newcastle, and most likely won’t be.

Furthermore, if the demographics vary significantly between districts – in age distribution, vaccination status, ethnicity, population density, family structure and other factors – then the course of the disease may differ markedly from one district to another.
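
Put crudely in code – with numbers invented for the purpose – a national aggregate can sit almost flat while the districts beneath it diverge sharply:

```python
# Invented figures: two districts with opposite trends produce a
# flat-looking national series that a national model would happily fit.
daily_admissions = {
    "London":    [200, 240, 288, 346],  # growing about 20 percent a day
    "Newcastle": [300, 262, 229, 200],  # shrinking about 13 percent a day
}

national = [sum(day) for day in zip(*daily_admissions.values())]
print(national)  # [500, 502, 517, 546] -- near flat, and quite meaningless
for district, series in daily_admissions.items():
    print(f"{district}: x{series[-1] / series[0]:.2f} over the period")
```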

Then, in this context, the “modellers” are trying to predict the behaviour of a different variant, in the specific context of its impact on the hospital service. And there what matters is not “hospitalisation”, per se. There is a huge difference in resource implications depending on the nature of the referral.

These will range from self-referral to A&E, dealt with by a period of observation and the administration of antivirals before discharge, through admission to a clinical ward, to critical care with differing degrees of intervention, up to induced coma and intubation. Duration of stay can vary from a few hours to six months.
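
By way of illustration only – the categories and figures below are mine, not drawn from any NHS dataset – the same headline count of 1,000 “hospitalisations” can mean wildly different demands on the service:

```python
# Purely illustrative figures: bed-days consumed per 1,000 hospital contacts
# depend on the referral mix, not on the headline count alone.
contacts = 1_000
severe_mix = [
    # (pathway, share of contacts, average bed-days per patient)
    ("A&E observe-and-discharge",  0.60, 0.2),
    ("General ward admission",     0.30, 6.0),
    ("Critical care / intubation", 0.10, 21.0),
]
milder_mix = [
    ("A&E observe-and-discharge",  0.85, 0.2),
    ("General ward admission",     0.13, 6.0),
    ("Critical care / intubation", 0.02, 21.0),
]

for label, mix in (("severe mix", severe_mix), ("milder mix", milder_mix)):
    bed_days = sum(contacts * share * days for _, share, days in mix)
    print(f"{label}: {bed_days:.0f} bed-days per {contacts} contacts")
# severe mix: 4020 bed-days; milder mix: 1370 -- same "hospitalisations" figure
```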

And what has come over with some clarity is that, even in the areas where omicron is prevalent, the sort of information needed simply doesn’t exist. That much is admitted by Ferguson in his report. And whatever might apply in the affected areas of London – where special conditions apply – almost certainly will not apply elsewhere in the UK, where far better outcomes can be expected.

Yet, Naismith asserts that modelling can show that by the time we know something, it is too late. Thus, he claims, “it tells when decisions matter”. But, in fact, it doesn’t. The remarkable thing about unknown unknowns, oddly enough, is that they are unknown.

In the absence of usable information, all we can do is guess. That isn’t modelling – it’s guesswork. Dressing it up with complex mathematical formulae and impenetrable jargon doesn’t make it any different. It’s still guesswork.

What is going on, therefore, is that “modelling” is being used as modern-day soothsaying, with the modellers taking on the role of the high priests of the temple, the vestal virgins and the other trappings of prediction – the computer models taking the place of human or animal sacrifices and other rituals.

In the management of epidemics, though, the crucial thing is that there should be an effective decision-making process – and sometimes the fact that a decision is made matters as much as its content, even when nobody knows quite what to do. It is what I call the purple banana effect.

This is the position in which Johnson finds himself, having to make a decision with incomplete information, where there are penalties and costs attendant on every option. But pretending he is following “the science” will no longer wash. The high priests of modelling have lost their shine.