Intel and the U.S. Department of Energy (DOE) announced today that Aurora, the world's first supercomputer capable of sustained exascale computing, will be delivered to Argonne National Laboratory in 2021. Surprisingly, the disclosure includes news that Intel's not-yet-released Xe graphics architecture will be a key component of the new system, along with Intel's Optane DC Persistent Memory DIMMs and a future generation of Xeon processors.

Intel and partner Cray will build the system, which can perform an unmatched quintillion operations per second (sustained), under a $500 million contract. That's a billion billion operations per second, or roughly one billion times faster than today's desktop PCs. It's also significantly faster than today's fastest supercomputers, which tend to peak at ~400 petaFLOPS and offer sustained performance in the 200-petaFLOP range. The DOE hasn't revealed power consumption statistics yet.
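The scale comparisons above reduce to simple powers of ten. Here's a quick sanity check on the arithmetic; note the ~1 gigaFLOPS desktop baseline is the figure implied by the article's "one billion times faster" claim, not an official spec:

```python
# Rough scale check on the performance figures quoted above.
EXA = 1e18      # Aurora's target: one quintillion (a billion billion) ops/sec
PETA = 1e15
DESKTOP = 1e9   # ~1 gigaFLOPS sustained: the article's implied desktop baseline

print(EXA / DESKTOP)       # "one billion times faster" than a desktop PC
print(EXA / (200 * PETA))  # vs. ~200 PFLOPS sustained from today's leaders
```

The second ratio shows that even against the fastest current systems' sustained numbers, an exaFLOP machine is about a 5x jump.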

The new system is built on Cray's Shasta system design and its "Slingshot" networking fabric, which we recently covered extensively. This platform can feature a wide range of CPUs, including the future EPYC Milan variant, but the Aurora system features undisclosed next-gen Intel Xeon CPUs.

More importantly, the system leverages Intel's Xe graphics architecture. In its announcement, Intel said Xe will handle compute functions, meaning it will primarily serve the system's AI workloads.

Neither Intel nor the DOE would comment on which GPU form factor will appear in the new supercomputer, but logically we'd expect Intel's discrete graphics cards.

Intel previously disclosed that it would split its Xe graphics solutions into two distinct architectures, with both integrated and discrete graphics for the consumer (client) market and discrete cards for the data center. Intel recently disclosed that the cards would be built on its 10nm process and arrive in 2020, which is in line with its previous projections. Intel says its Xe graphics solutions will range from teraFLOPS to petaFLOPS of performance apiece, but the company hasn't disclosed how many Xe graphics cards the Aurora supercomputer will use.

Intel's recent announcement that it had started a new program to launch graphics cards for both gaming and enterprise markets (purportedly code-named Arctic Sound) was shocking because Nvidia and AMD have been the two primary discrete GPU producers for the last 20 years. But it's a critical strategic move for the company.

The rise of GPUs in the supercomputing space is explosive. In 2008, not one supercomputer used GPUs for computation, instead relying on the tried-and-true CPU, but now 80 percent of compute power in the top 500 supercomputers comes from GPUs. As such, landing Xe graphics in the Aurora supercomputer is a major win for Intel, as it means unseating Nvidia's graphics cards, which are typically the go-to solution for AI processing in the high-performance computing (HPC) market.

The Aurora supercomputer is designed to chew through data analytics, HPC and AI workloads at an exaFLOP pace, marking a new high for the supercomputing realm and possibly handing the U.S. the leadership position on the Top500 supercomputing list.

Aurora also comes armed with "a future generation" of Intel's Optane DC Persistent Memory using 3D XPoint that can be addressed as either storage or memory. This marks the first known implementation of Intel's new memory in a supercomputer-class system, but it isn't clear whether the reference to a "future generation" implies a version beyond Intel's first-gen DIMMs.
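The "storage or memory" distinction is concrete at the software level: in App Direct-style usage, persistent memory is exposed as a file that applications map into their address space and access with ordinary loads and stores, rather than through read/write storage calls. A minimal sketch of that pattern, using a temp file as a stand-in (on real hardware the file would live on a DAX-mounted persistent-memory namespace, e.g. a hypothetical /mnt/pmem0):

```python
import mmap
import os
import tempfile

# Stand-in for a file on a DAX-mounted persistent-memory namespace;
# a temp file lets the sketch run on any machine.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

# Map the file: the region now behaves as byte-addressable memory,
# even though the OS otherwise treats the device as storage.
with mmap.mmap(fd, 4096) as buf:
    buf[0:5] = b"hello"          # plain byte writes, no write() syscall
    print(bytes(buf[0:5]))       # plain byte reads

os.close(fd)
os.remove(path)
```

On genuine Optane DC Persistent Memory, the same mapping would survive power loss, which is what lets one device serve both roles.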

The Aurora supercomputer will be infused with a whole host of Intel's technologies that form the basis of its new focus on six pillars: process technology, architectures, memory, interconnects, security and software.

The DOE told us that the new system will be "stood up" in early 2021 and will be fully online at exascale compute capacity before the end of that year.

For a look at the DOE's current lineup of supercomputers, along with a synopsis of the department's role leading the supercomputing realm, head to our History of DOE Nuclear Supercomputers: From MANIAC to AMD's EPYC Milan feature.