
Researchers win Best Paper Award at visualization conference

A team of researchers from Argonne National Laboratory and the Ohio State University has won the Best Paper Award at the IEEE Scientific Visualization (SciVis) conference, held in Vancouver, Canada, in October 2019.

The conference featured original research papers related to scientific visualization, including theory, methods, and applications ranging from mathematics and physical science to biosciences, economics, and multimedia.

In the award-winning paper, titled “InSituNet: Deep Image Synthesis for Parameter Space Exploration of Ensemble Simulations,” the authors presented a deep-learning model for exploring the parameter space of large-scale ensemble simulations in situ.

Scientists have increasingly been using in situ visualization (the generation and storage of visualization data at simulation time) to avoid the difficulties of transferring or storing large amounts of data. With such approaches, however, the raw simulation data is typically unavailable for subsequent analysis, constraining researchers’ ability to explore different simulation parameters.

The solution proposed by the Argonne-Ohio State team is a deep-learning-based model, called InSituNet. Their approach works as follows. Data is collected from an ensemble of simulations and visualized in situ using various visual mapping and view parameters. The model is then trained to learn the mapping from ensemble simulation parameters to visualizations of the corresponding simulation outputs.
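
To make the workflow concrete, the sketch below (in PyTorch) shows one way such a parameter-to-image model could be set up and trained. It is a minimal illustration, not the authors’ implementation: the class name ParamToImageNet, the network sizes, and the ensemble_loader data source are assumptions, and the real InSituNet uses a more elaborate architecture and loss terms than the plain pixel-wise loss used here.

```python
# Minimal sketch (not the authors' code): a conditional image-synthesis model
# that maps simulation, visual-mapping, and view parameters to an output image,
# trained against images rendered in situ from the ensemble runs.
import torch
import torch.nn as nn

class ParamToImageNet(nn.Module):
    """Hypothetical stand-in for an InSituNet-style generator: parameters -> image."""
    def __init__(self, n_params: int, img_channels: int = 3):
        super().__init__()
        # Encode the concatenated parameter vector into a small spatial latent tensor.
        self.fc = nn.Linear(n_params, 256 * 4 * 4)
        # Upsample the latent tensor to a 64x64 image with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, img_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        x = self.fc(params).view(-1, 256, 4, 4)
        return self.decoder(x)

def train(model, ensemble_loader, epochs: int = 10):
    """Training loop sketch; ensemble_loader is assumed to yield
    (parameter_vector, in_situ_image) pairs collected from the ensemble runs."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()  # simple pixel-wise loss for illustration only
    for _ in range(epochs):
        for params, images in ensemble_loader:
            opt.zero_grad()
            loss = loss_fn(model(params), images)
            loss.backward()
            opt.step()
```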

“With the trained model, users can generate new images for different simulation parameters and different simulation settings,” said Hanqi Guo, an assistant computer scientist in Argonne’s Mathematics and Computer Science Division. Moreover, to make the task easier for users, the Argonne-Ohio State team developed an interface that can be used to explore the parameter space interactively with InSituNet.
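
As a rough illustration of that interactive use, the snippet below reuses the hypothetical ParamToImageNet sketch above: once trained, a single forward pass turns a new parameter setting into an image, which is the kind of call that would sit behind the sliders of an exploration interface. The checkpoint file name and the example parameter values are made up for this sketch.

```python
import torch

# Load a trained model (the checkpoint path is illustrative, not from the paper).
model = ParamToImageNet(n_params=5)
model.load_state_dict(torch.load("insitunet_checkpoint.pt"))
model.eval()

def render_for(params):
    """Synthesize a visualization for one combination of simulation and view parameters."""
    with torch.no_grad():
        vec = torch.tensor([params], dtype=torch.float32)
        img = model(vec)[0]  # shape (3, 64, 64), values in [-1, 1]
    # Rescale to [0, 1] and reorder to height x width x channel for display.
    return ((img + 1) / 2).permute(1, 2, 0).numpy()

# Example: sweep one simulation parameter while holding the view parameters fixed.
frames = [render_for([t, 0.5, 0.0, 30.0, 45.0]) for t in (0.1, 0.5, 0.9)]
```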

One question users naturally ask is how long the training of InSituNet takes. 

“Generally, training takes more than 10 hours,” Guo said. But he quickly added: “The time is much less than actually running the ensemble simulations with extra parameter settings. And after training, InSituNet takes less than one second on a single NVIDIA 980Ti GPU.”

The researchers evaluated the effectiveness of InSituNet on combustion, cosmology, and ocean simulations, using several different network architectures.

“By taking advantage of recent advances in deep learning, InSituNet outperformed other image synthesis-based visualization techniques in terms of both the fidelity and the accuracy of the generated image,” said Guo.
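
One simple way such image fidelity could be quantified, shown here as an assumption rather than the paper’s actual evaluation protocol, is the peak signal-to-noise ratio (PSNR) between a synthesized image and the corresponding ground-truth in situ rendering.

```python
import numpy as np

def psnr(generated: np.ndarray, reference: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio; higher means the synthesized image is closer to the reference."""
    mse = np.mean((generated - reference) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```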

The full paper citation is: Wenbin He, Junpeng Wang, Hanqi Guo, Ko-Chih Wang, Han-Wei Shen, Mukund Raj, Youssef S.G. Nashed, and Tom Peterka, “InSituNet: Deep Image Synthesis for Parameter Space Exploration of Ensemble Simulations,” IEEE Transactions on Visualization and Computer Graphics, 26(1):23–33, 2019.