The Great Flip: How Accelerated Computing Redefined Scientific Systems — and What Comes Next
Released Nov. 17th, 2025

It used to be that computing power trickled down from hulking supercomputers to the chips in our pockets.

Over the past 15 years, innovation has changed course: GPUs, born from gaming and scaled through accelerated computing, have surged upstream to remake supercomputing and carry the AI revolution to scientific computing’s most rarefied systems.


JUPITER at Forschungszentrum Jülich is the emblem of this new era.

Not only is it among the most efficient supercomputers — producing 63.3 gigaflops per watt — but it’s also a powerhouse for AI, delivering 116 AI exaflops, up from 92 at ISC High Performance 2025.

This is the “flip” in action. In 2019, nearly 70% of the TOP100 high-performance computing systems were CPU-only. Today, that number has plunged below 15%, with 88 of the TOP100 systems accelerated — and 80% of those powered by NVIDIA GPUs.

Across the broader TOP500, 388 systems (78%) now use NVIDIA technology, including 218 GPU-accelerated systems (up 34 systems year over year) and 362 systems connected by high-performance NVIDIA networking. The trend is unmistakable: accelerated computing has become the standard.

But the real revolution is in AI performance. With architectures like NVIDIA Hopper and Blackwell and systems like JUPITER, researchers now have access to orders of magnitude more AI compute than ever.

AI FLOPS have become the new yardstick, enabling breakthroughs in climate modeling, drug discovery and quantum simulation — problems that demand both scale and efficiency.

At SC16, years before today’s generative AI wave, NVIDIA founder and CEO Jensen Huang saw what was coming. He predicted that AI would soon reshape the world’s most powerful computing systems.

“Several years ago, deep learning came along, like Thor’s hammer falling from the sky, and gave us an incredibly powerful tool to solve some of the most difficult problems in the world,” Huang declared.

The math behind computing power consumption had already made the shift to GPUs inevitable.

But it was the AI revolution, ignited by the NVIDIA CUDA-X computing platform built on those GPUs, that extended the capabilities of these machines dramatically.

Suddenly, supercomputers could deliver meaningful science at double precision (FP64) as well as at lower and mixed precisions (FP32, FP16), and even at ultra-efficient formats like INT8 and beyond — the backbone of modern AI.

This flexibility allowed researchers to stretch power budgets further than ever to run larger, more complex simulations and train deeper neural networks, all while maximizing performance per watt.
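
To make the mixed-precision idea concrete, here is a minimal, illustrative CUDA sketch — not code from any of the systems mentioned — in which inputs are stored in FP16 to cut memory traffic while the accumulation runs in FP32 to keep the result trustworthy:

```
// Illustrative sketch only: FP16 storage, FP32 accumulation.
#include <cuda_fp16.h>
#include <cstdio>
#include <vector>

__global__ void dot_fp16_fp32(const __half* a, const __half* b, float* out, int n) {
    float acc = 0.0f;                                       // FP32 accumulator
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        acc += __half2float(a[i]) * __half2float(b[i]);     // FP16 loads, FP32 math
    atomicAdd(out, acc);                                    // reduce partial sums across threads
}

int main() {
    const int n = 1 << 20;
    std::vector<__half> ha(n, __float2half(0.5f)), hb(n, __float2half(2.0f));

    __half *da, *db; float *dout;
    cudaMalloc(&da, n * sizeof(__half));
    cudaMalloc(&db, n * sizeof(__half));
    cudaMalloc(&dout, sizeof(float));
    cudaMemcpy(da, ha.data(), n * sizeof(__half), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb.data(), n * sizeof(__half), cudaMemcpyHostToDevice);
    cudaMemset(dout, 0, sizeof(float));

    dot_fp16_fp32<<<1, 256>>>(da, db, dout, n);             // single block, 256 threads

    float result = 0.0f;
    cudaMemcpy(&result, dout, sizeof(float), cudaMemcpyDeviceToHost);
    printf("dot = %.1f (expected %.1f)\n", result, (float)n);

    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}
```

The same pattern — store activations and weights in a narrow format, accumulate in a wider one — is what lets the same silicon serve both traditional FP64 simulation and AI training within a fixed power envelope.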

But even before AI took hold, the raw numbers had already forced the issue. Power budgets don’t negotiate. Supercomputer researchers — inside NVIDIA and across the community — were coming to grips with the road ahead, and it was paved with GPUs.

To reach exascale without a Hoover Dam‑sized electric bill, researchers needed acceleration. GPUs delivered far more operations per watt than CPUs. That was the pre‑AI tell of what was to come, and that’s why when the AI boom hit, large-scale GPU systems already had momentum.
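
A back-of-the-envelope calculation shows why. The sketch below (host-only arithmetic, compilable with nvcc or any C++ compiler) uses JUPITER's 63.3 gigaflops-per-watt figure quoted above; the roughly 2 gigaflops-per-watt value for a CPU-only design is an illustrative assumption, not a figure from this article:

```
// Rough, illustrative power-budget math for a 1-exaflop FP64 machine.
#include <cstdio>

int main() {
    const double target_flops    = 1e18;  // 1 exaflop (FP64)
    const double accel_gfw       = 63.3;  // JUPITER-class efficiency (Green500 figure above)
    const double cpu_era_gfw     = 2.0;   // assumed CPU-only-era efficiency, for illustration

    const double mw_accelerated = target_flops / (accel_gfw   * 1e9) / 1e6;
    const double mw_cpu_only    = target_flops / (cpu_era_gfw * 1e9) / 1e6;
    printf("Accelerated: ~%.1f MW   CPU-only era: ~%.0f MW\n", mw_accelerated, mw_cpu_only);
    return 0;
}
```

Under those assumptions, an accelerated exaflop fits in roughly 16 megawatts, while a CPU-only design would demand hundreds — the difference between a feasible facility and that dam-sized electric bill.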

The seeds were planted with Titan in 2012 at Oak Ridge National Laboratory, one of the first major U.S. systems to pair CPUs with GPUs at unprecedented scale — showing how hierarchical parallelism could unlock huge application gains.

In Europe in 2013, Piz Daint set a new bar for both performance and efficiency, then proved the point where it matters: in real applications like the COSMO weather forecasting model.

By 2017, the inflection was undeniable. Summit at Oak Ridge National Laboratory and Sierra at Lawrence Livermore National Laboratory ushered in a new standard for leadership‑class systems: acceleration first. They didn’t just run faster; they changed the questions science could ask in climate modeling, genomics, materials science and more.

These systems do much more with much less. On the Green500 list of the most efficient supercomputers, the top eight are NVIDIA‑accelerated, with NVIDIA Quantum InfiniBand connecting seven of the top 10.

But the story behind these headline numbers is how AI capabilities have become the yardstick: JUPITER delivers 116 AI exaflops alongside 1 exaflop of FP64 performance — a clear signal of how science now blends simulation and AI.

Power efficiency didn’t just make exascale attainable; it made AI at exascale practical. And once science had AI at scale, the curve bent sharply upward.

What It Means Next

This isn’t just about benchmarks. It’s about real science:

  • Faster, more accurate weather and climate models
  • Breakthroughs in drug discovery and genomics
  • Simulations of fusion reactors and quantum systems
  • New frontiers in AI-driven research across every discipline

The shift started as a power-efficiency imperative, became an architectural advantage and has matured into a scientific superpower: simulation and AI, together, at unprecedented scale.

It starts with scientific computing. Now, the rest of computing will follow.



-- Source: https://blogs.nvidia.com/blog/accelerated-scientific-systems/