The Top Supercomputer In The World Is Also The Largest In The US

Supercomputers are, by definition, the most powerful computers of their age. While the IBM 7030 and UNIVAC LARC from the 1960s are considered the earliest — and the ones that birthed the term "supercomputer" — the first commercially successful supercomputer was the Seymour Cray-designed CDC 6600. This groundbreaking computer was first marketed in 1964 and was considered the most powerful computer of its day.

Move forward sixty years, and the smartphone in your pocket has more processing power than these early behemoths. But supercomputers have moved on, too, and the planet's most powerful example is the El Capitan supercomputer. Developed at Lawrence Livermore National Laboratory, El Capitan has been verified as the fastest supercomputer in the world, achieving performance that would have been unimaginable just a decade ago, never mind sixty years ago.

It's also among the largest supercomputers in the United States — although, as we'll see, defining "largest" isn't quite as simple as getting the tape measure out. What's clear, however, is that modern supercomputers like El Capitan aren't just faster; they're vastly more complex, drawing on millions of processing cores to tackle problems ranging from national security to advanced scientific research. Let's take a closer look at El Capitan, its performance, and how it scales when compared to other supercomputers.

El Capitan: The world's fastest supercomputer

According to the latest rankings from the Top500 project, El Capitan is the fastest supercomputer in the world as of November 2025, delivering more than 1.8 exaflops of sustained performance on the High Performance Linpack (HPL) benchmark. That figure places the machine firmly in the exascale era, with a theoretical peak of 2.88 quintillion calculations per second. This sort of processing power has allowed exascale computers to perform impressive feats, such as discovering a flaw in all jet engines.
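To put those figures in perspective, here's a quick back-of-envelope calculation in Python. It uses the numbers quoted above and assumes, purely for illustration, a comparison machine that sustains one gigaflop (a billion calculations per second):

```python
EXA = 10 ** 18  # the "exa" prefix: one quintillion

hpl_flops = 1.8 * EXA   # El Capitan's sustained HPL figure quoted above
gigaflop = 10 ** 9      # hypothetical comparison machine: 1 billion calcs/second

# How long would the 1-gigaflop machine need to match one second of El Capitan?
seconds = hpl_flops / gigaflop
years = seconds / (365 * 24 * 3600)
print(f"About {years:.0f} years of work for every second on El Capitan")
```

In other words, a machine a billion times slower would need roughly 57 years to reproduce a single second of El Capitan's sustained output.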

El Capitan is based on Hewlett Packard Enterprise's Cray EX255a architecture, built around AMD Instinct MI300A accelerated processing units (APUs), which combine EPYC CPU cores and Instinct GPU cores in a single package. Unlike traditional supercomputers that pair separate CPUs and GPUs, this setup lets El Capitan handle both simulation-heavy and AI-driven workloads more efficiently, reflecting the growing overlap between high-performance computing and machine learning. The system has more than 11 million processing cores working in parallel across a network of interconnected nodes. El Capitan also leads the way in the more demanding High-Performance Conjugate Gradients (HPCG) benchmark, which is intended to complement HPL by measuring performance in real-world applications rather than theoretical peak performance.

Keeping to the theme of real-world applications, El Capitan's responsibilities include national security missions, such as ensuring the reliability and safety of the country's nuclear deterrent, as well as running simulations for Los Alamos and Sandia National Laboratories. In terms of processing power, El Capitan is undoubtedly the planet's most powerful machine, joining a long line of supercomputers that were ranked the most powerful of their time. Determining whether it's physically the largest, however, is a more complex task.

What makes it the largest isn't quite so simple

Describing El Capitan as the largest supercomputer in the U.S. depends heavily on how one defines a supercomputer's size. In terms of sheer computational scale, it certainly tops the charts: its core count of more than 11 million is by far the largest of any supercomputer in the world. For comparison, the next largest is Argonne National Laboratory's Aurora supercomputer, with over 9 million cores.

However, when it comes to physical size, the story changes. While El Capitan occupies about 5,900 square feet of machine room space, the aforementioned Aurora computer is substantially larger, with a physical footprint of about 10,000 square feet. In other words, El Capitan delivers more computing power from a smaller footprint. This physical size disparity is to be expected as designers strive to pack more processing power into smaller form factors. Taken to its extreme, this trend also allowed Nvidia to build a pocket-sized supercomputer, the DGX Spark.
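The trade-off between core count and floor space can be made concrete with a little arithmetic. The sketch below uses the approximate figures quoted in this article (core counts and machine-room footprints), so the results are rough, order-of-magnitude comparisons rather than official specifications:

```python
def cores_per_sq_ft(cores: int, sq_ft: int) -> float:
    """Processing cores per square foot of machine-room floor space."""
    return cores / sq_ft

# Approximate figures from the article above.
el_capitan = cores_per_sq_ft(11_000_000, 5_900)   # roughly 1,860 cores per sq ft
aurora = cores_per_sq_ft(9_000_000, 10_000)       # 900 cores per sq ft

print(f"El Capitan packs about {el_capitan / aurora:.1f}x Aurora's core density")
```

By this crude measure, El Capitan squeezes roughly twice as many cores into each square foot as Aurora does.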

The latter point is ably demonstrated by NASA's Pleiades supercomputer, which was decommissioned in January 2026. This computer took up 14,000 square feet of floor space but only had about 230,000 CPU cores — impressive for the time, but almost insignificant compared to El Capitan. What this means is that calling El Capitan the largest supercomputer in the U.S. isn't simply a matter of comparing dimensions. Instead, it's about looking at the hardware and the unprecedented computational scale it brings together in a single system.
