Abstract: Analysis of experimental data has become an ever-increasing part of the workload of supercomputing facilities. The National Energy Research Scientific Computing Center (NERSC) is the mission high-performance computing (HPC) and data center for the U.S. Department of Energy and has supported such experimental computing workloads since its inception as a general-purpose supercomputing facility. The gradual process of connecting one experiment at a time, however, has led to a scattered landscape of custom applications and workflow pipelines that cannot easily be transferred to new instruments, keeping onboarding and maintenance labor-intensive. Additionally, with coming upgrades to storage rings, beamline instruments will require ever more compute capability to cope with the drastic increase in data rates, creating a need to tap into HPC resources.
In this presentation, we will talk about NERSC’s Superfacility project, which aims to develop a more unified, seamless, and automation-friendly environment for experimental science and to lower the entry barrier for new or upgraded beamline instruments to scale up their processing capabilities. We will also take a look at NERSC’s upcoming supercomputer Perlmutter, whose architecture treats experimental workloads as a core part of future computing at NERSC.