It’s not often that Dr No’s flabber gets well and truly ghasted. An extraordinary exchange on twitter (scroll down a page or so to get to the start of the substance, and click here to see the above tweet) has revealed what many have long suspected: SAGE purposely cooks the books in its modelling reports. Graham Medley, professor of infectious disease modelling at LSHTM, and chief pongo for the time being of SAGE’s modelling group SPI-M, defends the group’s practice of ‘giving the decision makers the information they ask for’. Read that again, and let it sink in. The scientists give the politicians the information they ask for. Being on twitter, the discussion quickly becomes scrambled into incoherent fragments, making it almost, but not entirely, impossible to get to the heart of the matter. The crux, however, is simple enough: is SAGE told, one way or another, what to tell the government — which, in effect, soon becomes ‘here’s the policy, now where’s the evidence’ — or does it provide, as its name, the Scientific Advisory Group, might suggest, independent and impartial scientific advice?
In a properly functioning, sensible government, it has to be the latter. Advisers advise and ministers decide, to be sure, but that is predicated on an assumption that the advice is impartial, that is, not partial to any particular viewpoint. The alternative, scientists led by politics, soon puts a country on the road to hell, as the Germans discovered to their cost in the 1930s. It does this because the politicians are in effect asking leading questions of the scientists, and so the answers become skewed away from a sensible appraisal of the risks, and so skewed away from a sensible process of decision making. If ministers ask for, and scientists provide, only models based on bad, very bad and apocalyptic assumptions, then, de facto, the models based on less dire assumptions disappear from the table, and with them, the option to make less draconian decisions. By selecting only the bad and worst case scenarios, the books are cooked, in favour of the dire assumption, and the draconian decision.
The discussion, and civilised discussion it is, rather than acrimonious argument, hinges on two rather subtle, even semantic, points. The first is the meaning of a model: is it a prediction, or a scenario? And so the follow-up, more important question: is it possible to get the two mixed up, so that an outcome written as a scenario gets read as a prediction? Dr No has always been clear that modelling is the numerology that makes astrology seem rigorous. You dial in your what-if assumptions, and get your if-this scenarios. There is no assessment of probability at any stage, and so there is no way numerology can make predictions. But such niceties inevitably fail to penetrate the fevered brows of the decision makers. Imagine a gastrointestinal pandemic. If the modellers — at the request of the decision makers — produce only a doom-laden dossier of diarrhoea, dehydration and death, then the decision makers are going to find it very difficult — because this is how the human mind works — not to read the dossier as a dossier of predictions, and so act accordingly. It is in a way a version of the map preceding the territory, or, as Dr No sometimes says, if all you have is bloody nails, then every tool becomes a hammer.
There is a very simple solution to this problem: the modellers model all possible scenarios, or, at a minimum, a representative range. Instead of modelling just nails, also model nuts, bolts, screws and glues. Instead of mapping only a part of the territory, map the whole territory. That ensures the decision makers can see there are alternative outcomes, or scenarios, and so alternative decisions that can be made, including, in the case of a transient epidemic, the decision not to do anything. What if the omygodicron scariant causes substantially milder disease, perhaps even milder by a factor of ten? That is the clear way to avoid cooking the books in favour of drastic and draconian decisions, and it leads directly to the second, and in many ways more baffling, point: why does Medley explicitly argue in favour of producing tool boxes that only contain nails, making every tool in the box look like a hammer, and maps that only cover part of the territory, ensuring that other routes, and other destinations, disappear from view?
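For readers who like to see the mechanics, the cooking can be sketched in a few lines of toy Python. Everything here is invented purely for illustration — the scenario figures, the capacity threshold and the caricature decision rule have nothing to do with any actual SPI-M model — but it shows how filtering out the mild scenarios also filters the ‘wait and see’ option off the decision table:

```python
# Toy illustration: a decision table built from a representative range of
# scenarios, versus one filtered to worst cases only. All numbers invented.

def decide(projected_peak, capacity=1000):
    """Caricature decision rule: act only if the projection breaches capacity."""
    return "restrictions" if projected_peak > capacity else "wait and see"

# A representative range of what-if assumptions, mild through dire
scenarios = {
    "mild (10x less severe)": 150,
    "moderate": 700,
    "bad": 1200,
    "very bad": 2500,
}

# The full map: every scenario, and the decision each would support
full_table = {name: decide(peak) for name, peak in scenarios.items()}

# The cooked books: only scenarios that 'lead to a decision' survive
worst_only = {name: decide(peak) for name, peak in scenarios.items()
              if decide(peak) != "wait and see"}

print(full_table)   # 'wait and see' appears as a live option
print(worst_only)   # every surviving row says 'restrictions'
```

Drop the mild rows, and every line of the table that reaches the decision makers points the same way.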
Perhaps Medley has some form of expressive dysphasia. But if we take his words in the twitter discussion at face value, what we have is a line of reasoning that suggests two things. The first is that SAGE themselves have decided models based on milder assumptions don’t matter, because they don’t lead to (active) decisions. If they don’t change the outcome, there is no need to include them, because the decision makers aren’t interested: ‘Decision-makers are generally on (sic) only interested in situations where decisions have to be made,’ he tweets, and then adds, ‘That scenario [one based on mild assumptions] doesn’t inform anything. Decision-makers don’t have to decide if nothing happens.’ The argument is somewhat convoluted, but appears to be: if a scenario won’t produce ‘a result’, ie a decision to do something, then there is no point in including that scenario. This, frankly, is bonkers. As every doctor knows, there is always an option to decide to do nothing. We generally call it W&S for short, or ‘wait and see’. To exclude evidence that might lead to a W&S decision is indeed worse than bonkers, it is incompetent.
If excluding the boring scenarios — boring because ‘nothing happens’ — takes the biscuit, then what comes next shatters that independence biscuit into a thousand crumbs. A tweet or two later Medley adds (emphasis added), ‘We generally model what we are asked to model. There is a dialogue in which policy teams discuss with the modellers what they need to inform their policy.’ However this is read, it only adds up to one thing: instead of policy makers led by science, we have modellers led by policy makers: ‘We generally model what we are asked to model.’ And then, a short while later, we have the tweet Dr No quoted from in the opening paragraph: ‘To inform the decisions… it’s the opposite of activism. It’s giving the decision makers the information they ask for’. Scientists led by decision makers. Yours it is to ask, ours to answer.
There has inevitably been, given this is twitter, some rather furious back-pedalling, a medley of Medley apologists, an outpouring of profanity, and almost certainly some tweets have been retired to the great nesting box in the sky – but the screen grab is ever our friend. The apologists operate on two empty grounds: we should only ever model worst case scenarios, because that ensures we cover any eventual outcome — a framework for decision making taken straight from the devil’s playbook — and various rehashes of Medley’s own argument, no need to model scenarios that don’t lead to action — which all fall foul of the ‘doing nothing is not a decision’ fallacy. There is also a marginal argument that the modellers need some input from decision makers, so the modellers know what to model — no point in modelling the nuclear option if the government intends never to use that option — but this input must be kept to a minimum, lest it constrain the range, or breadth, of scenarios modelled.
None of these developments alter the essential message contained in this extraordinary series of tweets, which among other things explains why covid models always exaggerate outcomes — that’s what happens if you focus on worst case scenarios — and why government always over-reacts — again, that’s what happens if you focus on worst case scenarios: the pandemic has been managed not by politicians led by science, but by scientists led by politics. A sage moment indeed.
Footnote: a key player, if not the key player, in the opening up of this debate is Fraser Nelson, editor of the Spectator. Here is his take on the implications of what Medley has said. Dr No has also added (at 13:00h) a link in the first paragraph to the tweet contained in the image.