Feature Story | Argonne National Laboratory

Pioneers of high-performance computing library reunite

Like solid relationships, high-performance computers are built on communication.

The world’s fastest computers are composed of many processors that work together. Yet in order to do this, those processors must communicate, both inside the high-performance machines and between them. In the early 1990s, getting processors to speak to one another was challenging.

To solve this challenge, U.S. and European scientists joined forces 25 years ago to create a common language that allows the diverse processors of highly parallel computers to communicate — the Message Passing Interface (MPI).
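Concretely, a program written against that common language calls a small set of standard routines to start up, exchange messages and shut down. The sketch below is not from the article; the ranks and message contents are illustrative, but the calls shown (MPI_Init, MPI_Send, MPI_Recv, MPI_Finalize) are part of the MPI standard.

    /* Minimal illustrative MPI program in C: rank 1 sends one integer to rank 0.
       Build with an MPI compiler wrapper (e.g., mpicc) and run with an MPI
       launcher (e.g., mpiexec -n 2 ./a.out). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?          */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in total? */

        if (size > 1 && rank == 1) {
            int payload = 42;
            /* Send one int to rank 0 with message tag 0. */
            MPI_Send(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (size > 1 && rank == 0) {
            int payload;
            /* Receive one int from rank 1 with message tag 0. */
            MPI_Recv(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank 1 (world size %d)\n", payload, size);
        }

        MPI_Finalize();                         /* shut down the MPI runtime    */
        return 0;
    }

Because those calls belong to the standard rather than to any one vendor's library, the same source code can run unchanged across different parallel machines.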

“MPI is like a superflower that merges all of these new capabilities and standardizes them into a new version of MPI.” - Pavan Balaji, computer scientist and group leader in Argonne’s MCS Division

Many of the founding developers of MPI reminisced about the birth of their brainchild during a one-day symposium celebrating its 25th anniversary. The symposium was held in conjunction with the EuroMPI/USA 2017 conference, at which multiple papers on current MPI-related research were presented. Hosted at the U.S. Department of Energy’s Argonne National Laboratory, the event marked the first time the long-running conference had convened in the U.S.

Attendees of both the symposium and the conference represented industry, academia and research facilities from the U.S., Europe, Japan and South Korea.

Among the founding developers in attendance was Argonne Distinguished Fellow Emeritus Ewing “Rusty” Lusk, who opened the symposium by presenting a key piece of technology that helped lay out the original MPI standard — an overhead projector.

Lusk was a computer scientist in Argonne’s Mathematics and Computer Science (MCS) division when it moved into parallel computing in the early 1980s, an era in which there were numerous vendors of parallel computers, each with its own unique programming language.

“Vendors competed to have the most easy-to-use language,” said Lusk. “But if you wrote a program for an Alliant machine, for example, you had to completely change it to run on an Encore machine, even though they were architecturally similar.”

By the early 1990s, the parallel computing community realized there were too many competing mechanisms for message passing. In April 1992, many of the key players participated in a workshop to investigate standards for message passing that would work across all parallel machines.

“At the end of the workshop, it was clear that there was a need, a willingness and a strong desire to have a standard,” said Jack Dongarra, a professor in the Electrical Engineering and Computer Science Department at the University of Tennessee, Knoxville, who helped organize this first of many meetings. “That was the beginning of MPI.”

From the outset, everyone agreed that collaboration between U.S. and European researchers was essential. There was some concern among U.S. researchers that the Europeans were developing their own standard, although U.K. and German views were too varied to reach agreement, recalled Rolf Hempel, head of the German Aerospace Center’s (DLR) Simulation and Software Technology lab.

“By embracing both earlier U.S. developments and the European ones, MPI was much more easily accepted as a universal standard,” he said.

The MPI Forum was eventually launched in January 1993. Its more than 60 members, drawn from 40 organizations, worked for a year and a half to draft the first MPI standard, published in May 1994.

Since then, the MPI Forum has remained active, continuously working to ensure the standard meets new computational requirements. With the standard now approaching version 4, the Forum is preparing for the next major computing frontier — exascale.

“Early on, some thought that we’d need to evolve beyond MPI to move to exascale. This is probably not the case,” said Lusk. “MPI has lasted because we did a good job defining it. That’s why it’s in use now and will remain in use for a long time to come.”

Another reason that MPI has lasted 25 years, added Lusk, is that it has always been a vehicle for computer science research. The Argonne group alone has published more than 100 peer-reviewed papers on MPI-related topics over that period.

Although he has been involved with MPI for nearly 17 years, Pavan Balaji, general chair of EuroMPI/USA 2017, is among the newer faces of MPI and the MPI Forum. A computer scientist and group leader in the MCS division, he became chair of the MPI hybrid programming working group in 2008, helping to drive new proposals for changes to the standard.

Like Lusk and many of the other conference participants, Balaji appreciates the robustness of MPI and its ability, despite the rapid pace of advances in computing, to adapt and outpace newer message-passing models.

“You can look at these new programming models like tiny flowers that offer some new features,” he said. “MPI is like a superflower that merges all of these new capabilities and standardizes them into a new version of MPI.

“In some sense, the new programming models are still succeeding; it’s just that they’ll be called MPI in the future.”

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit the Office of Science website.