Climate models can run for months on supercomputers, but my new algorithm could make them ten times faster

By Samar Khatiwala | May 1, 2024

Climate models are some of the most complex pieces of software ever written, capable of simulating many different parts of the overall system, such as the atmosphere or ocean. Many have been developed by hundreds of scientists over decades and are constantly being added to and improved upon. They can run to more than a million lines of computer code (tens of thousands of printed pages).

It is not surprising that these models are expensive to run. Simulations take time, often lasting several months, and the supercomputers on which the models run consume a lot of energy. But a new algorithm I developed promises to make many of these climate model simulations ten times faster, and could ultimately be an important tool in the fight against climate change.

One reason why climate modeling takes so long is that some of the processes being simulated are inherently slow. The ocean is a good example of this. It takes several thousand years for water to travel from the surface to the ocean depths and back (in contrast, the atmosphere’s “mixing time” is weeks).

Scientists have recognised this problem since the first climate models were developed in the 1970s. To use a model to simulate climate change, it must be started from conditions representative of the pre-industrial era, before human activity began releasing greenhouse gases into the atmosphere.

To reach such a stable equilibrium, scientists "spin up" their model by letting it run until it stops changing (the system is so complex that some fluctuations will always be present, just as in the real world).

To accurately simulate the effects of human-made factors on the climate, the model needs an initial state with minimal "drift". But thanks to the ocean and other slow components, reaching that state can take several months even on large supercomputers. No wonder climate scientists call this bottleneck one of the "grand challenges" of their field.

Can't you just throw more computers at the problem?

“Why don’t you use a bigger machine?” you may ask. Unfortunately it won’t help. Simply put, supercomputers consist of thousands of individual computer chips, each with dozens of processing units (CPUs or “cores”) connected together via a high-speed network.

One of the machines I use has over 300,000 cores and can perform almost 20 quadrillion arithmetic operations per second. (Obviously it’s shared by hundreds of users, and any simulation will only use a small portion of the machine.)

A climate model takes advantage of this by dividing the planet's surface into smaller regions, or "subdomains", and performing the calculations for each region simultaneously on a different CPU. In principle, the more subdomains you have, the less time it takes to perform the calculations.

This is true only up to a point. The problem is that each subdomain needs to "know" what is happening in adjacent subdomains, which requires transferring information between chips. That is much slower than the speed at which modern chips can perform arithmetic, a situation computer scientists call being "bandwidth limited". (Anyone who has tried streaming video over a slow internet connection knows what this means.) So throwing more computing power at the problem has diminishing returns. Ocean models especially suffer from this kind of poor "scaling".
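To make the idea concrete, here is a minimal one-dimensional sketch in Python with NumPy. It is purely illustrative (a toy diffusion update, not code from any real climate model), but it shows the two ingredients: each "CPU" works on its own subdomain, and before every update it needs a small "halo" of values from its neighbours, which on a real machine means network traffic between chips.

    import numpy as np

    # Toy 1-D field split into "subdomains", one per (imagined) CPU.
    n_subdomains = 4
    field = np.linspace(0.0, 1.0, 40)
    subdomains = np.split(field, n_subdomains)

    def diffuse(chunk, left_halo, right_halo):
        """Update one subdomain, using halo values borrowed from its neighbours."""
        padded = np.concatenate(([left_halo], chunk, [right_halo]))
        return chunk + 0.1 * (padded[:-2] - 2 * chunk + padded[2:])

    new_chunks = []
    for i, chunk in enumerate(subdomains):
        # The halo exchange: fetch boundary values from neighbouring subdomains.
        # On a real supercomputer this is communication between chips, and it is
        # far slower than the arithmetic each CPU does on its own chunk.
        left = subdomains[i - 1][-1] if i > 0 else chunk[0]
        right = subdomains[i + 1][0] if i < n_subdomains - 1 else chunk[-1]
        new_chunks.append(diffuse(chunk, left, right))

    field = np.concatenate(new_chunks)

The more subdomains you add, the smaller each chunk of arithmetic becomes while the halo traffic does not shrink in proportion, which is why the speed-up eventually stalls.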

Ten times faster

This is where the new computer algorithm that I developed and published in the journal Science Advances comes in. It promises to drastically reduce the spin-up time of the ocean and other components of Earth system models. In tests on typical climate models, the algorithm was on average ten times faster than existing approaches, reducing the time from months to a week.

The time and energy this can save climate scientists is valuable in itself. But being able to spin up models quickly also means scientists can calibrate them against what we know actually happens in the real world, improving their accuracy and better pinning down the uncertainty in climate predictions. Spin-up is currently so time-consuming that neither is feasible.

The new algorithm will also allow us to perform simulations in more spatial detail. Currently, ocean models typically tell us nothing about features smaller than 1° wide in longitude and latitude (about 110 km at the equator). But many critical events in the ocean occur at much smaller scales (tens of meters to several kilometers), and higher spatial resolution will certainly lead to more accurate climate projections of sea level rise, storm surges, and hurricane intensity.
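For the curious, that parenthetical figure is simply the Earth's equatorial circumference divided into 360 degrees (a back-of-envelope check, not anything from the paper):

    # Rough check of the "about 110 km per degree" figure at the equator.
    earth_circumference_km = 40_075              # equatorial circumference
    print(round(earth_circumference_km / 360))   # ~111 km per degree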

How does it work?

Like much "new" research, this one is based on an old idea, in this case one dating back centuries to the Swiss mathematician Leonhard Euler. Called "sequence acceleration", you can think of it as using information from the past to predict a "better" future.

Among other applications, it is widely used by chemists and materials scientists to calculate the structure of atoms and molecules; it’s a problem that takes up more than half of the world’s supercomputing resources.

Sequence acceleration is useful when a problem is iterative in nature, and spinning up a climate model is exactly that: you feed the output of the model back into it as an input. Rinse and repeat until the output equals the input and you have found your equilibrium solution.
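As a toy illustration (a made-up one-line "model" in Python standing in for a real ocean model; the numbers are invented and only the structure of the loop matters), plain spin-up looks like this:

    # Toy stand-in for a climate model: a single number plays the role of the
    # ocean state, and one call to the function plays the role of running the
    # model for a stretch of simulated time. The real thing is millions of
    # lines of code, but the structure of the spin-up loop is the same.
    def model_step(state):
        return 0.99 * state + 1.0        # drifts slowly towards equilibrium (100)

    state = 0.0
    for iteration in range(10_000):
        new_state = model_step(state)
        if abs(new_state - state) < 1e-6:   # output ≈ input: equilibrium found
            break
        state = new_state

    print(iteration, state)   # plain iteration needs roughly 1,400 repetitions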

In the 1960s, the Harvard mathematician D.G. Anderson found a clever way to combine multiple previous outputs into a single input, so that you arrive at the final solution with far fewer repetitions of the process. About ten times fewer, in fact, as I found when I applied his scheme to the spin-up problem.
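Here is the same toy problem with a textbook form of Anderson's idea bolted on (again an illustrative sketch in plain Python and NumPy, not the actual code behind the paper). The weights that combine the previous outputs are chosen by a small least-squares calculation so that the combined residual, the gap between output and input, is as small as possible.

    import numpy as np

    def model_step(state):
        """Same toy stand-in for a climate model as in the sketch above."""
        return 0.99 * state + 1.0

    def anderson_spinup(g, x0, memory=5, tol=1e-6, max_iter=1000):
        """Accelerate the fixed-point iteration x -> g(x) using its recent history."""
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        g_hist, f_hist = [], []                  # previous outputs and residuals
        for k in range(max_iter):
            gx = g(x)
            f = gx - x                           # residual: zero at equilibrium
            if np.linalg.norm(f) < tol:
                return x, k
            g_hist.append(gx)
            f_hist.append(f)
            g_hist, f_hist = g_hist[-(memory + 1):], f_hist[-(memory + 1):]
            if len(f_hist) < 2:
                x = gx                           # ordinary fixed-point step to start
                continue
            # Combine several previous outputs into one input: find the weights
            # that make the combined residual as small as possible (a tiny
            # least-squares problem), then take the correspondingly weighted step.
            dF = np.column_stack([f_hist[i + 1] - f_hist[i] for i in range(len(f_hist) - 1)])
            dG = np.column_stack([g_hist[i + 1] - g_hist[i] for i in range(len(g_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma
        return x, max_iter

    state, iterations = anderson_spinup(model_step, 0.0)
    print(iterations, state[0])   # a handful of iterations instead of ~1,400

On this linear toy problem the acceleration is essentially exact after a couple of steps; a real, nonlinear ocean model does not collapse so dramatically, which is where the roughly tenfold speed-up reported above comes from.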

Developing a new algorithm is the easy part. Getting others to use it is often a bigger challenge. So it’s promising that the UK Met Office and other climate modeling centers are trying this out.

The next major IPCC report will be published in 2029. This seems a long way off, but given the time required to develop models and run simulations, preparations are already underway. These simulations, coordinated through an international collaboration known as the Coupled Model Intercomparison Project, will form the basis of the report. It’s exciting to think that my algorithm and software can contribute.




This article is republished from The Conversation under a Creative Commons license. Read the original article.

Samar Khatiwala receives funding from UKRI.
