Work continues on quantum machines. But classical computing is here, now, and faster and more powerful than ever.
When Frontier, the latest supercomputer at the US Department of Energy’s Oak Ridge National Laboratory (ORNL), went live at the end of May, it became the first machine to demonstrate true exascale performance, according to the TOP500 organization that benchmarks commercially available computer systems. At 1.102 exaflops (1.102 quintillion floating-point operations per second), Frontier delivers roughly two and a half times the performance of the previous leader, Fujitsu’s Fugaku system at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, and more than seven times that of its ORNL predecessor, Summit.
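For scale, those margins can be checked directly against the published benchmark results. The short sketch below does the arithmetic, using Rmax figures from the June 2022 TOP500 list (the list figures themselves are added here for illustration, not quoted from this article):

```python
# Ratio check using Rmax figures from the June 2022 TOP500 list.
frontier_pflops = 1102.0  # Frontier: 1.102 exaflops
fugaku_pflops = 442.0     # Fugaku, the previous No. 1
summit_pflops = 148.6     # Summit, Frontier's ORNL predecessor

print(f"Frontier vs. Fugaku: {frontier_pflops / fugaku_pflops:.1f}x")  # ~2.5x
print(f"Frontier vs. Summit: {frontier_pflops / summit_pflops:.1f}x")  # ~7.4x
```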
That’s an impressive margin, and all the more remarkable in today’s post-Moore world, where chip-process advances alone can no longer deliver the exponential gains computer developers have come to expect. The team at ORNL worked with supercomputer builder HPE Cray and chipmaker AMD to create a powerful underlying platform that converges traditional modeling and simulation with big-data analytics and artificial intelligence.
Quantum computers are expected to be more powerful still, by orders of magnitude, for certain classes of problems. Qubits can express far greater complexity than simple binary digits, although keeping a quantum system stable for an appreciable length of time is a major challenge that typically requires cryogenic cooling. While conventional wisdom holds that increasing the qubit count is the route to greater computing power, there are alternative approaches. Quantum Brilliance, an Australian-German manufacturer, is focusing on accessibility: smaller machines with fewer qubits that fit a conventional rack form factor and operate at room temperature.
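To see why qubit count matters so much, consider a back-of-the-envelope sketch (an illustration of textbook quantum-state scaling, not a claim about any particular machine): fully describing an n-qubit register classically requires 2^n complex amplitudes, so each extra qubit doubles the memory a classical simulator would need.

```python
# Sketch: classical memory needed to hold the full state vector of an
# n-qubit register, assuming 16 bytes per complex amplitude
# (two 64-bit floats). Each additional qubit doubles the requirement.
for n in (20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n} qubits -> {amplitudes:,} amplitudes (~{gib:,.2f} GiB)")
```

By around 50 qubits, storing the full state vector would exceed the memory of any machine on the TOP500 list, which is exactly why quantum hardware promises capabilities classical systems cannot match.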
The quantum community’s label “classical” may sound almost insulting when applied to machines like Frontier and Fugaku, whose statistics are truly mind-blowing and which are empowering researchers to address some of the most pressing scientific challenges of our age. In reality, we need to harness the strengths of every computing approach available to us, including the sheer capacity of these apex machines installed in the world’s leading centers of excellence, to take us forward.
But what do supercomputers do all day? During its initial bring-up in 2020, RIKEN’s Fugaku was enlisted in the battle against Covid-19: it crunched the numbers to help improve the performance of personal protective equipment and to accelerate the screening of candidate drugs. Subsequently, it has cut the time needed to analyze cancer genes from months to less than a day.
ORNL is now in the final stages of commissioning Frontier and has slated the platform for early science access in late 2022, to help with challenges such as predicting climate change. In ORNL’s brief introduction to Frontier, institute director Gina Tourassi suggests a fascinating shift in our approach to chronic diseases such as cancer: with Frontier’s power, we can get ahead of the disease, delaying its onset in would-be sufferers and improving quality of life, as well as continuing to improve the available treatments.
Classical these computers may be, by quantum standards, but their great advantage is precision: tasks such as modeling cancer cells or the atomic structure of elements demand high computational accuracy. They can also be used to accelerate machine-learning algorithms.
Their other advantage is that they are here now, ready to work on the most complex questions we face. We want answers more quickly than ever, and the growing computational workload, combined with that time pressure, is driving up demand for energy: Frontier draws on a 40-MW power system and is cooled by 6,000 gallons of water every minute, according to ORNL.
It’s appropriate, then, that high-performance computing is also our friend as we research new and better ways to secure reliable supplies of energy. Powerful number-crunching is critical for materials research and could help synthesize novel photo-responsive materials that significantly increase the conversion efficiency of solar panels. It is equally essential for the simulations guiding advanced nuclear research, such as fusion reactors like the ITER tokamak project in France.
Often described as an “artificial sun,” the tokamak creates extreme heat and pressure to turn hydrogen isotopes into a plasma within which fusion can take place. The ITER team expects the reactor to release about 10 times as much energy as is required to create the conditions for fusion to occur. In practice, it will take several years after switch-on for the internal temperature to climb above the 100 million degrees Celsius needed for fusion. On those timescales, development by trial and error is not practicable, so digital acceleration is critical: simulating the tokamak on high-performance computers helps engineers design complex parts of the reactor, such as the electromagnetic control systems needed to contain and manipulate the plasma. ITER is scheduled to switch on in 2025 and will take a decade to reach operating temperature, so we should know by around 2035 whether the computers got it right.
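That tenfold figure is the fusion gain the project usually writes as Q ≈ 10. A quick sense-check using ITER’s published design targets (numbers added here for illustration, not taken from this article) bears it out:

```python
# Back-of-the-envelope fusion gain from ITER's published design targets:
# about 500 MW of fusion power from roughly 50 MW of plasma heating.
fusion_power_mw = 500.0   # target fusion power output (design value)
heating_power_mw = 50.0   # external plasma-heating input (design value)

q = fusion_power_mw / heating_power_mw
print(f"Fusion gain Q = {q:.0f}")  # -> Q = 10
```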
Meanwhile, digital twinning on high-performance computers is also helping to improve the design of conventional nuclear power stations: perfecting safety controls, improving fuel efficiency, and reducing waste.
It’s easy to focus on the pure performance of these machines and the prestige attached to having the world’s fastest supercomputer. People love a “new No. 1,” but the teams operating them are most excited about how they can help us look after ourselves while looking after the planet, delivering the answers we need quickly enough to take meaningful action. •
Alun Morgan is technology ambassador at Ventec International Group (ventec-group.com); alun.morgan@ventec-europe.com.