Summer Argonne Students Symposium (SASSy) Part III
A Novel Energy Discretization for Neutron Transport Using Clustering and Finite Elements
1:00 p.m. - 1:15 p.m.
Supervisor: Vijay Mahadevan
Determining accurate quantities of interest for nuclear reactor simulations requires accurate treatment of the kinetic energy of the simulated neutrons. The interaction probability (cross section) for a neutron interacting with a nucleus may vary by orders of magnitude for small changes in the incident neutron energy, due to preferred energy levels of the nucleus. Though such resonances are well characterized, they are too numerous for brute-force resolution of the energy variable in deterministic calculations. Lower-resolution methods, especially the ubiquitous multigroup method, discretize the energy domain into contiguous pieces, smearing out the resonances, which may lead to a loss in fidelity. We introduce a novel method that clusters the energy domain into discontiguous pieces, preserving the resonances and achieving high fidelity with a small number of degrees of freedom in the energy domain.
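As an illustration of the clustering idea (our own toy sketch, not the talk's implementation), energy points can be grouped by cross-section magnitude rather than by position on the energy axis, so a single group may be discontiguous in energy:

```python
import numpy as np

def cluster_energy_groups(sigma, n_groups, n_iter=50):
    """Toy 1-D k-means on log cross sections. Points are grouped by
    cross-section magnitude rather than by position on the energy axis,
    so a group may be discontiguous in energy, unlike a multigroup
    partition into contiguous intervals."""
    x = np.log(sigma)
    centers = np.linspace(x.min(), x.max(), n_groups)  # deterministic init
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for g in range(n_groups):
            if np.any(labels == g):
                centers[g] = x[labels == g].mean()
    return labels

# Flat background cross section with two resonance spikes: the spike
# points land in one group even though they are far apart in energy.
energy = np.linspace(1.0, 100.0, 200)
sigma = 1.0 + 50.0 * (np.exp(-(energy - 25.0) ** 2 / 0.5)
                      + np.exp(-(energy - 60.0) ** 2 / 0.5))
labels = cluster_energy_groups(sigma, n_groups=2)
```

With two groups, the resonance points around both spikes share one label while the smooth background shares the other, so the resonance group is discontiguous in energy.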
Distributed Data Structure and Parallel Algorithms for Recursively Low-Rank Compressed Matrices
1:15 p.m. - 1:30 p.m.
Supervisor: Jie Chen
Kernel matrices arise in an extensive array of applications in science and engineering. In this work, we lay a foundation for handling realistically sized problem instances by designing a distributed data structure for recursively low-rank compressed matrices, together with basic matrix operations such as matrix-vector multiplication. The underlying idea is to distribute the tree data structure (arising from the low-rank decomposition) among processors in a balanced fashion. Matrix operations such as matrix-vector multiplication are then performed as tree traversals, where each distinct subtree can be computed independently and synchronization is provided intrinsically by the underlying structure. Preliminary results show the efficiency and scalability of our parallel approach.
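A serial toy sketch of the tree-traversal matrix-vector product (our illustration, not the distributed implementation): a one-level compressed matrix stores dense diagonal blocks at the leaves and low-rank factors for the off-diagonal blocks, and the two child subtrees can be evaluated independently, which is where the parallelism comes from:

```python
import numpy as np

class HNode:
    """Node of a toy recursively compressed matrix: leaves hold dense
    diagonal blocks; interior nodes hold two children plus low-rank
    factors U, V for the off-diagonal coupling blocks."""
    def __init__(self, dense=None, children=None,
                 U12=None, V12=None, U21=None, V21=None):
        self.dense = dense
        self.children = children
        self.U12, self.V12, self.U21, self.V21 = U12, V12, U21, V21

def hmatvec(node, x):
    """Matrix-vector product as a tree traversal (assumes equal splits).
    The two recursive calls are independent subtree computations."""
    if node.dense is not None:
        return node.dense @ x          # leaf: plain dense mat-vec
    left, right = node.children
    n1 = x.size // 2
    x1, x2 = x[:n1], x[n1:]
    y1 = hmatvec(left, x1) + node.U12 @ (node.V12.T @ x2)
    y2 = hmatvec(right, x2) + node.U21 @ (node.V21.T @ x1)
    return np.concatenate([y1, y2])

# Build a 4x4 example and the equivalent dense matrix for comparison.
rng = np.random.default_rng(1)
A11, A22 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
U12, V12 = rng.standard_normal((2, 1)), rng.standard_normal((2, 1))
U21, V21 = rng.standard_normal((2, 1)), rng.standard_normal((2, 1))
root = HNode(children=(HNode(dense=A11), HNode(dense=A22)),
             U12=U12, V12=V12, U21=U21, V21=V21)
A = np.block([[A11, U12 @ V12.T], [U21 @ V21.T, A22]])
x = rng.standard_normal(4)
```

Distributing the subtrees among processors turns each recursive call into independent local work, with synchronization only where partial results are combined.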
DFEComm: A Discrete Finite Element Communication Kernel
1:30 p.m. - 1:45 p.m.
Supervisor: Andrew Siegel
Co-design, an iterative cycle in which scientific simulation requirements influence computer architecture design and architecture design in turn informs the formulation of, and algorithm choices for, scientific software, will become increasingly relevant as the scientific community moves toward exascale computing. An important tool in the co-design process is the mini-app: a small, self-contained program that embodies the essential performance characteristics of a key scientific application. Free access to these programs enables hardware developers to make informed decisions about future architecture designs. We have developed a mini-app for nuclear reactor simulations that retains the local work and inter-node communication patterns of full transport sweeps, preserving the challenges inherent in parallelizing such algorithms. We present scaling results for the mini-app and for the full application on which it is based, a research code from Texas A&M University called PDT.
Identifiability Analysis and Parameter Estimation for Lithium-ion Battery Models
1:45 p.m. - 2:00 p.m.
Supervisor: Victor Zavala
Recovering kinetic, transport, and thermodynamic parameters from voltage discharge curves is a key task in the development of battery models. This task is particularly complicated, however, because of the high nonlinearity and coupling of the associated PDAE models and because of the lack of informative experimental data. In this work, we investigate whether the information provided by discharge curves is sufficient to reliably estimate key parameters of interest. We consider the isothermal model developed and validated by Doyle and Newman, in which the lithium-ion cell sandwich consists of a carbon anode (LixC6), a plasticized electrolyte, and a manganese oxide cathode (LiyMn2O4). We demonstrate that discharge curve information enables the identification of only a very small parameter subset, regardless of the number of experiments considered. Rigorous singular value and Monte Carlo analyses support our claims.
2:00 p.m. - 2:15 p.m. Break - Pizza and Coffee
Preconditioning the Complex Helmholtz Equation with Multigrid
2:15 p.m. - 2:30 p.m.
Supervisor: Barry Smith
The phase-field crystal (PFC) model is a type of density functional theory that describes the state of a material (e.g., the concentrations of chemical species in it) near the atomic scale. PFC combines continuous phase fields with information about atomic-scale correlations arising from the underlying crystalline structure. This information is encoded in the pair-correlation function, which is approximated by rational functions in Fourier space. In physical space this leads to Helmholtz equations with complex frequencies, which must be solved every time the density functional is evaluated. Naive multigrid (MG) exhibits poor convergence because classical smoothers, such as damped Jacobi, perform poorly on these problems. Instead, we follow Elman et al. in using FGMRES preconditioned by MG with Krylov subspace smoothers on suitably chosen grid levels and carefully selected damping factors on the others. These ideas are applied to the complex Helmholtz problems arising from PFC.
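A toy 1-D illustration of why damped Jacobi degrades (our sketch, not the PFC code): for the shifted operator -u'' + omega*u, an indefinite shift (negative real part) pushes the spectral radius of the damped-Jacobi iteration above one, so the smoother diverges on the offending modes:

```python
import numpy as np

def jacobi_radius(shift, n=64, theta=2.0 / 3.0):
    """Spectral radius of the damped-Jacobi iteration G = I - theta*D^{-1}A
    for -u'' + shift*u, discretized with second-order finite differences
    and Dirichlet BCs; shift may be complex."""
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0 + shift * h * h))
         + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1)) / (h * h)
    d = (2.0 + shift * h * h) / (h * h)  # the constant diagonal of A
    G = np.eye(n) - (theta / d) * A
    return np.max(np.abs(np.linalg.eigvals(G)))
```

With a zero or positive shift the radius stays below one and Jacobi smooths; with an indefinite (possibly complex) shift it exceeds one, which motivates replacing the classical smoother with Krylov iterations on the affected levels.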
A Hybrid Approach to Communication Modeling
2:30 p.m. - 2:45 p.m.
Supervisor: Stefan Wild
Modeling the communication time of parallel applications on HPC systems is a challenging task. Analytical approaches do not require costly profiling runs, but the models are often difficult to construct for complex programs because they demand a detailed understanding of the underlying hardware and of the implementation of the communication functions. Machine learning approaches, on the other hand, do not need such domain expertise to generate models, but they suffer from high profiling and model-generation costs. In this research, we combine these two approaches to model the communication time of high-performance applications. The hybrid approach generates performance models at much lower cost and with less knowledge about the application and hardware.
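A minimal instance of the hybrid idea (our illustrative sketch, not the method presented in the talk): take the classic analytic postal model T(m) = alpha + beta*m for a message of m bytes, and learn the machine-specific constants alpha (latency) and beta (inverse bandwidth) from a handful of profiled timings:

```python
import numpy as np

def fit_alpha_beta(msg_sizes, times):
    """Least-squares fit of T(m) = alpha + beta*m: the analytic form
    supplies the model structure, the measurements supply the constants."""
    m = np.asarray(msg_sizes, dtype=float)
    A = np.column_stack([np.ones_like(m), m])
    (alpha, beta), *_ = np.linalg.lstsq(A, np.asarray(times, dtype=float),
                                        rcond=None)
    return alpha, beta

# Synthetic "profiled" timings: 2 us latency, 1 ns/byte (~1 GB/s).
sizes = np.array([1e3, 1e4, 1e5, 1e6])
times = 2e-6 + 1e-9 * sizes
alpha, beta = fit_alpha_beta(sizes, times)
```

Because the structure is fixed analytically, only a few profiling points are needed to pin down the constants, which is the cost advantage over a purely learned model.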
Implementation of the Proper Orthogonal Decomposition in the Context of Turbulent Channel Flow
2:45 p.m. - 3:00 p.m.
Supervisor: Andrew Siegel
The proper orthogonal decomposition (POD) is a post-processing technique used to extract basis functions from a set of data "snapshots". Performing data analysis with the POD provides a number of advantages; in particular, it captures the modes that contain the most kinetic energy on average, reducing high-dimensional problems to significantly lower-dimensional ones. We discuss the application of the POD, and more specifically the POD shift mode, to simulations of turbulent channel flow.