Science 101: Supercomputing

There are computers. And then there are supercomputers.

Most personal and work computers are powerful enough to perform tasks like doing homework and conducting business. You can even increase the power of certain components for an awesome gaming experience.

But it’s pretty unlikely that you’ll use one of them to solve riddles about the universe or untangle the inner workings of a complex virus. Those are big problems that both need and produce a lot of data. Managing all of that information requires the power of supercomputers.

Supercomputers are often used to simulate experiments that might be too costly, dangerous, or even impossible to conduct in real life. For example, researchers use simulations to understand how stars explode or how fuel is injected inside an engine.

A supercomputer consists of thousands of small computers called nodes. Each node is equipped with its own memory and a number of processors, the bits that do all the figuring out.

Newer supercomputers use a combination of central processing units (CPUs), like the ones that run most home computers, and accelerator chips related to graphics processing units (GPUs). In gaming, GPUs quickly create the visuals; in supercomputing, they specialize in rapid calculations for data processing and artificial intelligence workloads.
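
To give a flavor of what those rapid calculations look like, here is a minimal sketch of data-parallel arithmetic: the same simple operation applied to a million numbers in one batch, which is the style of work accelerators handle especially well. The JAX library and the toy numbers are assumptions chosen purely for illustration, not a description of any particular supercomputer’s software.

```python
# Illustrative only: a data-parallel calculation of the kind GPUs accelerate.
# JAX is assumed here as a convenient stand-in; it runs the same code on a
# CPU or, when one is available, a GPU accelerator.
import jax.numpy as jnp

# One million values to process.
values = jnp.arange(1_000_000, dtype=jnp.float32)

# A single call applies the same arithmetic to every value; on a GPU,
# thousands of these little calculations happen at the same time.
results = jnp.sqrt(values) * 2.0

print(results[:5])
```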

Another key to a powerful supercomputing system is an incredibly fast network, the communications hub that connects all of those smaller computers. Instead of working as separate units, the nodes can then act as one, dividing up millions of tasks to tackle complex problems quickly.
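
As a rough sketch of what “acting as one” means in practice, the example below splits a simple sum across many cooperating processes and then combines the partial answers over the network. It uses MPI (the Message Passing Interface), a common way scientific codes coordinate work across nodes; the mpi4py library and the toy problem are assumptions for illustration, not how any specific Argonne code works.

```python
# Illustrative only: splitting one job across many processes with MPI,
# the message-passing standard widely used on supercomputers.
from mpi4py import MPI

comm = MPI.COMM_WORLD        # every process launched for this job
rank = comm.Get_rank()       # this process's ID: 0, 1, 2, ...
size = comm.Get_size()       # how many processes are cooperating

# Each process sums its own slice of the numbers 0 through 999,999.
total_numbers = 1_000_000
local_sum = sum(range(rank, total_numbers, size))

# The network gathers every partial sum into one final answer on process 0.
grand_total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed a total of {grand_total}")
```

Launched with a command such as `mpiexec -n 8 python sum.py`, the same script can run on a laptop or across thousands of nodes without changes; the number of cooperating processes is simply whatever the job requests.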

But it’s not just the technological marvels that make supercomputers special. An enormous amount of expertise and infrastructure is required to operate these massive machines and make them available to scientists, who tap into supercomputing power remotely to run their computational experiments. 

System administrators ensure supercomputer hardware is functional and software is up to date. In-house computational scientists work with supercomputer users from around the world to help ensure their codes run smoothly and their simulations advance their scientific explorations. Technical support experts are on standby to help users troubleshoot any issues they encounter at a supercomputing facility.

The world’s most powerful supercomputers, like those run by the U.S. Department of Energy, require specialized facilities, known as data centers, to accommodate their space, energy, and cooling requirements. For example, Argonne National Laboratory’s next supercomputer, Aurora, will occupy an area the size of two NBA basketball courts, consume the same amount of energy as thousands of homes, and be cooled by a complex system that contains 44,000 gallons of chilled water.

As home to the Argonne Leadership Computing Facility, a DOE Office of Science user facility, the laboratory has a long history of building and using supercomputers to tackle some of the world’s most pressing problems. And Argonne continues to push the limits of technology and discovery. Aurora will help scientists advance our understanding of everything from Earth’s changing climate to improved solar cell materials to the structure of the human brain.

Solving big, complex problems takes a lot of smart people and big, amazing machines. Argonne’s supercomputers are up for the challenge!

What is a supercomputer?

A complicated machine helping scientists make huge discoveries

Imagine one million laptops calculating together in perfect harmony, and you’re getting close to the power of a supercomputer. Researchers need all that computing muscle to answer some of the world’s biggest questions in human health, climate change, energy, and even the origins of the universe! When you want to study something that’s impossible to explore in a lab — like an exploding star or a fast-forming hurricane — your computer has to be super.
