The Covid-19 crisis, and the public information campaigns around disease control, mean that people now understand the ‘R’ number and other aspects of epidemiology previously considered inscrutable and probably rather dull.
Meanwhile, the last seven years and more have seen a chronic failure to stop the spread of another virulent disease, bovine tuberculosis (bTB) in cattle, the result of botched government interventions.
Since 2013, when the mass killing of badgers (blamed as a significant source of bTB) began, the spread of the bacterium has accelerated. The evidence for this policy rests on speculation and guesswork. But is there evidence of a major biological blunder and cover-up? Can we say that no one in authority wants to own up to one of the greatest biological misjudgments of our time?
Until recently, if you asked someone about modelling, chances are they would think you were talking about fashion or craftwork. Modelling, as a statistical term, is perhaps regarded as something for clever people who are good at maths. However, understanding what modelling is, as opposed to how it is used, is the next lesson that the general public needs to digest and upon which key scientists and administrators need to swiftly act.
There has been discussion as to whether modelling is as much an art as a science. In 1976 one of the great British statisticians, George Box, said: “All models are wrong, but some are useful”. Oxford’s Richard Dawkins explains it thus: “There is a less familiar way in which a scientist can work out what is real when our five senses cannot detect it directly.
"This is through the use of a 'model' of what might be going on, which can then be tested. We imagine - you might say we guess - what might be there. That is called the model. We then work out (often by doing a mathematical calculation) what we ought to see, or hear, etc. (often by doing a mathematical calculation) if the model were true.
"We then check whether that is what we actually do see. The model might literally be replica made out of wood or plastic, or it might be a piece of mathematics on paper, or it might be a simulation in a computer. We look carefully at the model and predict what we ought to see (hear, etc.) with our senses (with the aid of instruments, perhaps) if the model were correct. Then we look to see whether the predictions are right or wrong.
"If they are right, this increases our confidence that the model really does represent 'reality' we then go on to devise further experiments, perhaps refining the model, to test the findings further and confirm them. If our predictions are wrong, we reject the model, or modify and try again.”
Artists, too, are perhaps aware of the dichotomy between what is real and what is constructed: “We all know that art is not truth. Art is a lie that makes us realize truth, at least the truth that is given us to understand. The artist must know the manner whereby to convince others of the truthfulness of his lies,” argued Pablo Picasso.
It is a bit extreme to suggest that deliberate lies are the foundation of a model: it is usually more a case of informed speculation. But there is a degree of contrast between these two Oxford approaches.
The only model generated to look at whether killing badgers can influence bTB, the Randomised Badger Culling Trial (RBCT) of 1998-2005, was largely an Oxford University creation, and followed on from the recommendations of the Krebs Group in the mid-1990s. Oxford University, including the late Robert May, who died last year, did more than most to promote modelling in biology from the 1970s onwards.
The RBCT findings required many ‘guesses’, assumptions and adjustments to claim a modest benefit from badger culling when no clear benefits could be observed in real time. Yet a very similar alternative analysis, tweaking the same data slightly, shows that culling badgers holds no value.
Since 2011, and reiterated in 2020, badger culling policy from Defra has rested upon tentative modelling and derivations that originate from the RBCT model.
Presented as ‘fact’ by a small cabal of government scientists, Defra, ministers and MPs when it was nothing of the kind, subsequent work has failed to back up the initial RBCT model to any credible extent. Yet some senior scientists have called it, in papers submitted to the High Court, "settled science".
No one is suggesting that such scientists, veterinarians and biologists are liars and perjurers, but the presentation of untested models or heavily manipulated modelling as fact is one of the great dangers today.
It is the kind of approach that enables ‘policy-led science’, resting on a kind of ‘dark academia’ that scientists might usefully be trained to avoid.
Public trust in science, including modelling, is vital. Time, after all, has run out for our planet and the red lights are flashing over multiple aspects of runaway climate breakdown. There is no longer room for over-confidence in speculation. Gambling with tentative science is no longer acceptable, if it ever was.
But the trend of replacing proper survey and monitoring with ‘modelling’, for convenience or cost saving, seems more and more apparent. Modellers must not hide behind their clients’ wishes (a convenient safe space); they must show discipline and honesty if they are not to destroy scientific endeavour and its usefulness.
We do not need to look far for examples of predictions that failed because one or two factors were wrong or misunderstood. That is the point of modelling. It may be useful or it may not, and the uncertainty may only be revealed in hindsight. There is no shame in being wrong.
The problem comes when modelling remains uncertain yet is perpetuated long term: it can do damage. Because models are often untestable abstract constructions, there is a need to be wary of them. The RBCT is a good example, because many unknowns were guessed at, and 50/50 choices were treated as ‘truth’.
One of the symptoms of the misuse of modelling is the withholding of data from public scrutiny. Take, for example, the withholding over the last four years of the data that gave rise to the introduction of long-term or supplementary badger culling from 2013.
Obtaining the data on which that part of the policy rested was not possible at the time because ‘it is going to be published’. After two years it had not been published, resulting in further enquiries and requests in 2019. The original Freedom of Information Act request process spilled over into 2020. The final provision of data from Imperial College could not be arranged because they were too busy dealing with Covid-19 studies.
‘We have a giant human crisis’ trumps access to truth around ‘a major livestock crisis’ for two deadly zoonoses, one of which has killed more humans than the other.
Meanwhile, thousands upon thousands of healthy badgers around England ‘drop to the shot’ or crawl bleeding into their setts to die slowly in the secret night-time badger purges. All this at public expense, and all hanging on a single wobbly, speculative model.
George Box helps us again: “So the question you need to ask is not, 'is the model true?' (it never is) but, 'is the model good enough for this particular application?'"
In the case of the RBCT and a host of post-RBCT related modelling, government-paid academics and veterinarians have built on the model as if it were ‘settled science’, sometimes embellishing it with new papers, often in a propaganda-like manner, as if consolidating the science, but never conclusively.
Correct formation and use of modelling is something we should all be concerned about, if not frightened of, not least for bovine tuberculosis and Covid-19.
Tom Langton is an international consulting ecologist to government, business and industry. He provides advocacy support to charity pressure groups seeking justice where environmental damage is being caused to species and habitats. He has worked for more than 40 years in nature conservation, including common and protected species management, habitat restoration, wildlife disease investigation and invasive non-native species control.
This is a short version of a longer article by the author, Disease and Communicating Uncertainty, published by The Badger Crowd.