Supercomputing: Deeper thought



The world has a new fastest computer, thanks to video games

[Photo caption: The ultimate games machine]
SPEED fanatics that they are, computer nerds like to check the website of Top500, a collaboration between German and American computer scientists that keeps tabs on which of the world’s supercomputers is the fastest. On November 12th the website released its latest list, and unveiled a new champion.
The computer in question is called “Titan”, and it lives at Oak Ridge National Laboratory, in Tennessee. It took first place from another American machine, IBM’s “Sequoia”, which is housed at Lawrence Livermore National Laboratory, in California. These two machines have helped reassert America’s dominance of a list that had, in the past few years, been headed by computers from China and Japan.
Titan is different from the previous champion in several ways. For one thing, it is an open system, meaning that scientific researchers with sufficiently thorny problems will be able to bid for time on it, in much the same way that astronomers can bid for time on telescopes. Sequoia, by contrast, spends most of its time running hush-hush simulations of exploding nuclear weapons, and is therefore rarely available for public use.
Titan has an unusual design, too. All supercomputers are composed of thousands of processor chips harnessed together. Often, these are derivatives of the central processing units, or CPUs, that sit at the heart of modern desktop machines. But Titan derives more than 90% of its oomph from technology originally developed for the video-game industry. Half of its 37,376 processors are ordinary CPUs. But the other half are graphics processing units, or GPUs. These are specialised devices designed to cope with modern video games, which are some of the most demanding applications any home machine is ever likely to run. China’s “Tianhe-1A” machine, a previous Top500 champion, was built in the same way, as are 60 other machines in the Top500 list.
Parallel worlds
Broadly speaking, a CPU, which will be expected to run everything from spreadsheets to voice-recognition software to video encoding, has to be a generalist, competent at every sort of mathematical task but excelling at nothing. A GPU, by contrast, is designed to excel at one thing only: manipulating the huge numbers of triangles out of which all modern computer graphics are made.
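To give a flavour of the difference, here is a minimal sketch in Nvidia’s CUDA language; it is an illustration for this article, not code from Titan, and every name and number in it is invented. A GPU program applies the same tiny operation to vast numbers of data elements at once, one lightweight thread per element:

    #include <cuda_runtime.h>
    #include <stdio.h>

    /* Illustrative kernel: one GPU thread per pixel, each running the
       same tiny function on a different element. This data-parallel
       style is the one thing GPUs are built to do supremely well. */
    __global__ void brighten(float *pixels, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                      /* ignore threads past the end */
            pixels[i] *= factor;
    }

    int main(void)
    {
        const int n = 1 << 20;                     /* a million pixels */
        float *d_pixels;
        cudaMalloc(&d_pixels, n * sizeof(float));
        cudaMemset(d_pixels, 0, n * sizeof(float));

        /* Launch enough 256-thread blocks to cover all n pixels. */
        int blocks = (n + 255) / 256;
        brighten<<<blocks, 256>>>(d_pixels, 1.1f, n);
        cudaDeviceSynchronize();

        printf("ran %d blocks of 256 threads\n", blocks);
        cudaFree(d_pixels);
        return 0;
    }

A CPU would march through those million pixels a few at a time; the GPU’s trick is to throw thousands of simple threads at them simultaneously.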
Several years ago researchers at Nvidia and AMD (the two companies that produce most of the world’s high-performance GPUs) realised that many scientific problems which demand huge amounts of computing power—everything from climate simulations and modelling combustion in an engine to seismic analysis for the oil-and-gas industry—could be translated into a form that was digestible by their GPUs. Soon after, supercomputer builders such as Cray (which put Titan together using Nvidia’s GPUs) began to take notice.
Borrowing from the games industry in this way brings several benefits. One big one is efficiency. Titan is an upgrade of Oak Ridge’s existing “Jaguar” machine. Upgrading Jaguar with ordinary CPUs would have meant building a computer that sucked around 30MW of electricity when running flat out—enough juice to power a small town. Because GPUs are so good at their specialised tasks, Titan can achieve its blistering performance while sipping a (relatively) modest 8.2MW.
It makes sense financially, too, says Sumit Gupta, head of supercomputing at Nvidia. The chips that the firm sells to supercomputer-makers are almost identical to those it sells to gamers. As Dr Gupta observes, “The history of high-performance computing is littered with the bodies of firms that tried to build products just for the supercomputing market. By itself, it’s just too small a niche.”
It is not all upsides, though. Machines like Titan achieve their speed by breaking a problem into thousands of tiny pieces and farming each out to a separate processor. A helpful analogy, perhaps, is painting a house: one strategy might be to hire a single painter, but it is probably quicker to employ several people and give each a room to do.
Not all problems are susceptible to being chopped up in such a way, though (hiring a dozen barbers, to take another analogy, is unlikely to speed up a haircut significantly). The requirement to translate a problem into the sort of mathematics that a GPU can digest adds another barrier. Dr Gupta cites the models used to simulate how a car reacts in a crash as one problem that has so far resisted what the industry calls the “massively parallel” approach. Clever programmers can sometimes find a way around such issues: ray-tracing, a high-quality, mathematically intense approach to computer graphics that aims to simulate individual light rays, was, ironically, long thought to be just the kind of problem a modern GPU would struggle with. Yet at a graphics conference in 2008 a group of researchers from Nvidia announced that they had found a way to do it.
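The difference is easy to see in code. Below is another invented sketch, in the same C-family notation as before, not drawn from any real crash model. In the first loop each iteration is independent, so thousands of processors could each take a share; in the second, every step needs the result of the one before it, and no number of extra barbers will speed it up:

    /* Independent iterations: each element can go to a different
       processor, like painters each taking a room of the house. */
    void scale_all(float *a, int n, float k)
    {
        for (int i = 0; i < n; i++)
            a[i] = a[i] * k;          /* no iteration depends on another */
    }

    /* A loop-carried dependence: step i cannot begin until step i-1
       has finished, like a haircut shared among a dozen barbers. */
    float iterate_map(float x, int steps)
    {
        for (int i = 0; i < steps; i++)
            x = 3.9f * x * (1.0f - x);    /* needs the previous result */
        return x;
    }

Much of the work of “tweaking” an algorithm for a machine like Titan amounts to recasting loops of the second kind into loops of the first.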
Oak Ridge and Nvidia plan to work with scientists wanting time on Titan to see if their algorithms can be tweaked in similar ways, to make them digestible by the new machine. Dr Gupta is bullish. Even the recalcitrant car-crash simulations, he thinks, will yield to the new approach soon. But that is not to say that every problem can be made to work. And those scientists who find that they cannot tweak their code may find themselves struggling to take advantage of the ever-rising performance of the world’s fastest computers.
