In praise of models

“Climate scientists? Why believe them when all they’ve got is models? That’s not real science”.

Of the reasons regularly trotted out to reject the consensus scientific view of the present and future of climate change – all of them repeated by some on publication of the fifth Intergovernmental Panel on Climate Change (IPCC) science report in September – this is one of the harder ones to dismiss.

Models, after all, are by definition not the real thing. Their results are acknowledged to be uncertain. When they are presented, or perceived, as predictions, they may look shaky when actual results appear for the period in question. For many, “model” implies a toy version of a system, which seems a feeble thing to use to understand something as huge as a planet. There’s also a widely shared view that you can get any result you like out of a model by putting in the right data. And the use of models in other well-publicised fields, most obviously economics, often seems to have done little to improve either knowledge of the world or policy-making.

A better understanding of what models are good for would be an excellent thing to aim for as Earth system science develops. It is not going to be easy. There are various options already on offer for improving how we make sense of the results of models. All will make a contribution, but it is hard to say that any of them look especially promising.

Some suggest that model results will command more respect when we have more and better models. That next supercomputer needs to be ordered as soon as possible. And the scale and complexity of Earth systems undoubtedly justify adding more components, and securing access to as much computing power as possible to run the resulting models many times, and under many conditions.

Similarly, the data used to test or prime models can be re-analysed, or refined, and this again is the basis of plausible funding bids. We are told, for example, that new satellite-borne instruments like those of the proposed European Space Agency TRUTHS mission (take a bow, whoever composed that acronym) could allow testing of climate models against real outcomes in ten years or so, instead of 30 or 40.

That might be true, but seems pretty optimistic given the improvements in our understanding over the last 30 years. As Nature Geoscience pointed out, the really basic matter of climate sensitivity – the projected response of global mean temperatures to a doubling of atmospheric carbon dioxide concentrations – has arrived back where it started in the first IPCC report, with an estimated range of 1.5 to 4.5 degrees Celsius, after some minor ups and downs over several reports.

It seems more likely we will have to settle for some enduring uncertainty. The core of all the models is solid – no-one includes variants in which greenhouse gases in the atmosphere increase and global cooling ensues. But while the many additional interactions that have been incorporated into climate models over the years reflect improved understanding, they do not generally seem to have increased the precision of overall projections.

But how is the word “uncertainty” interpreted in the real world? British journalist turned academic James Painter, from the Reuters Institute for the Study of Journalism at the University of Oxford, argues that uncertainty is easily used to reinforce inaction. Instead, Painter suggests turning to the insurance industry and examining its use of the term “risk”, which, he notes, more often points toward action.

At the same time, it is essential to keep discussing uncertainty, to distinguish more clearly just what is uncertain, and by how much. That will include explaining why projections of long-term global trends of thirty years or more are sound, while “predictions” of smaller-scale results such as regional climate shifts, or ten-to-fifteen-year fluctuations, are less often things to bet on.

Perhaps when the right, inspired explainer meets the right audience, the big efforts climate modellers have made to understand uncertainties – in Earth systems, in their observation, and in the models – can figure in efforts to improve understanding of their enterprise as well. This is nicely done in an article at Ars Technica: the author gives a strong sense of how hard modelling teams try to understand these effects, and to test out additions to each model before they are accepted for use.

There is one more possibility to mention which may be under-explored. As well as in-depth examination of contemporary modelling work, there is also use to be made of the history of climate modelling, and the entwined history of observation. As Paul Edwards describes at length in his remarkable 2010 book The Vast Machine, global data and models have co-evolved. Together, they constitute a new and vital infrastructure which underpins modern climate science.

One reason this is poorly understood, beyond casual insults about climate scientists riding some kind of gravy train fuelled by conspiracy, is that the enterprise in its modern form is so recent.

The first computer models for weather forecasting only came into use in the mid-1950s, and it was another 20 years before there were usable three-dimensional global climate models.

Over the same period a large apparatus of Earth observation, bringing in satellites along with balloons, land and sea weather stations, data buoys, and weather radar, was built up. Integrating all of this, and incorporating rectified data in models, has been a complex and laborious process, but it has borne fruit. Edwards summarises:

“Since the 1960s the climate knowledge infrastructure has been extending itself by building gateways linking different fields. Computer models are its most important gateways…. Modellers have progressively linked models of the atmosphere to models of the oceans, the cryosphere, the biosphere and human activities.”

The message is that models are more than just a vast assembly of programmers’ code – though they are that, too. They are the collective, creative effort of hundreds of people trying to deepen understanding of one of the most complex systems ever studied.

Taking this historical story out to wider publics would be a long-term effort, and any results would be gradual. But it is definitely a story worth telling.

What could we hope for? Think, perhaps, of the public image of the Large Hadron Collider team at CERN. They do have the advantage that they run actual experiments, on an extremely large machine. Still, they are investigating physics which is pretty hard to explain by sifting through computer analysis of zillions of particle tracks. The tracks register invisible collisions between particles whose existence we are untroubled by in everyday life, produced by a machine which is also hard to grasp, and registered by detectors whose principles are opaque to most of us. Yet the researchers and technicians there are generally viewed as an inspiring band of international collaborators working together to realise one of the great projects of our time, an intellectual credit to modern civilisation. When they tell us what the tracks, in all their computer-enhanced glory, signify, their claims are generally trusted.

Why shouldn’t Earth system modellers be just as well regarded? We may be changing the planet, but we also live at a time when we can (just about) keep track of the changes that are happening. The simple fact that we can not only conceive the notion of the average global surface temperature, but organise ourselves to measure it, and plot its movement over time, is – seen in the right light – pretty astonishing.

The models that project future temperatures, along with other features of the state of the climate, are equally notable achievements. The synthesis of scientific disciplines and computational virtuosity they represent, the teams that build and compare them, the data networks and archives they draw on, and the institutions that sustain them, are all remarkable. The problem they are addressing is probably as challenging as moving theoretical physics to the next level. The fact the results are imperfect is the least surprising thing about them. The thing which ought to surprise is the vast, continually renewed, social and cultural achievement of developing them so far, so fast – perhaps even in time to do something about what they are telling us.

But perhaps it is because these models’ outputs have such significant political and economic implications that they are seen in a different light to CERN’s Large Hadron Collider.