Researchers from Argonne National Laboratory and their colleagues at Lawrence Livermore and Los Alamos National Laboratories recently analyzed the diverse sources of uncertainties encountered in nuclear density functional theory (DFT).
Since DFT is the only global approach to the structure of atomic nuclei, it plays a key role in understanding fundamental processes such as the formation of elements in the universe or the mechanisms that power reactors. High accuracy in such calculations is essential. Yet surprisingly few attempts have been made to estimate the uncertainties underlying these calculations.
Three main sources of uncertainties and errors are known.
Model errors are unavoidable in the theoretical description of any quantum many-body problem. Although they can be large, they can be estimated on an empirical basis by carefully comparing predictions of selected observables obtained with different functionals. For such a comparison to be meaningful, however, one should ensure that the same optimization procedure was used for all the energy density functionals considered and that the numerical implementation was also identical. “In practice, this is rarely accounted for,” said Stefan Wild, a computational mathematician in Argonne’s Mathematics and Computer Science Division.
Fitting errors—arising, for example, from the choice of data or the assumptions made during the fit—will also affect the uncertainty. For instance, a simple change in the standard deviation assumed for each data type can produce large variability in the optimization results, making this an especially significant source of error. “In only the simplest of cases can one effectively remove these errors,” said Wild.
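To see why the assumed standard deviations matter, consider a minimal, purely illustrative sketch (not from the paper): a one-parameter model fitted by weighted least squares to two slightly inconsistent “data types.” Changing the standard deviation assigned to one type shifts the optimum.

```python
import numpy as np

# Hypothetical illustration: fit y = a*x to two data types (think masses
# vs. radii), each weighted by an assumed standard deviation sigma.
rng = np.random.default_rng(0)
x1 = np.linspace(1, 5, 5)
x2 = np.linspace(1, 5, 5)
y1 = 2.0 * x1 + rng.normal(0, 0.1, 5)   # data type 1 prefers a ~ 2.0
y2 = 2.4 * x2 + rng.normal(0, 0.1, 5)   # data type 2 prefers a ~ 2.4

def fit(sigma1, sigma2):
    # Minimize sum_i ((y_i - a*x_i)/sigma_type)^2; linear in a, solved exactly.
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    num = w1 * np.sum(x1 * y1) + w2 * np.sum(x2 * y2)
    den = w1 * np.sum(x1**2) + w2 * np.sum(x2**2)
    return num / den

a_equal = fit(0.1, 0.1)    # both types weighted equally: compromise near 2.2
a_skewed = fit(0.1, 1.0)   # data type 2 down-weighted: optimum moves near 2.0
print(a_equal, a_skewed)
```

Nothing in the data changed between the two fits; only the assumed standard deviations did, yet the optimal parameter moved substantially.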
Implementation errors—numerical errors stemming from the particular implementation of DFT equations in a computer code—can also be common. For example, the precision of a calculation based on mesh discretization depends on the resolution of the underlying grid. Fortunately, such errors can be the easiest to control on today’s high-performance computing systems, and various solvers are available in the open literature.
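The dependence on grid resolution can be made concrete with a small sketch (an assumption-laden toy, not DFT itself): approximating a second derivative with a three-point finite difference, where the error of the second-order scheme shrinks roughly fourfold when the grid spacing is halved.

```python
import numpy as np

# Discretization error of a 3-point central difference for f''(x),
# f(x) = sin(x).  The scheme is second order, so the maximum error
# scales as h^2 with the grid spacing h.
def second_derivative_error(h):
    x = np.arange(0.0, np.pi, h)
    f = np.sin(x)
    approx = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2   # central difference
    exact = -np.sin(x[1:-1])                           # f''(x) = -sin(x)
    return np.max(np.abs(approx - exact))

e_coarse = second_derivative_error(0.1)
e_fine = second_derivative_error(0.05)
print(e_coarse / e_fine)   # ratio near 4: halving h quarters the error
```

Monitoring such convergence rates is one way an implementation error can be controlled, and why it is typically the easiest of the three sources to manage.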
Comparing predictions of different approaches can provide insight into the magnitude of uncertainties. Equally important, however, is providing a rigorous metric for comparison. To this end, the researchers discussed the potential benefits of using a Bayesian approach, a method of statistical inference based on conditional probabilities—that is, the probability that variable A will have a certain distribution given some data B and other circumstances C. While the approach can be computationally costly, it offers several advantages. It provides a full probability description of the model parameters, allowing general dependence between model parameters, from which one may deduce the mean and standard deviation if desired. Moreover, it can easily incorporate the impact of new data and updates to the uncertainties in existing data.
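The mechanics can be sketched in the simplest conjugate case, a single parameter with Gaussian prior and Gaussian measurements, where the posterior is available in closed form. All numbers here are illustrative, not from the paper.

```python
import numpy as np

# Bayesian update for one parameter theta with prior N(prior_mean, prior_var)
# and measurements y_i ~ N(theta, noise_var).  The conjugate posterior is
# again Gaussian, giving a full probability description of theta.
def update(prior_mean, prior_var, data, noise_var):
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

mean, var = 0.0, 1.0                                  # broad initial prior
mean, var = update(mean, var, [0.9, 1.1, 1.0], 0.04)  # first data set
print(mean, var)   # posterior concentrates near 1.0, variance shrinks

# New data is incorporated by simply updating the current posterior,
# which is what makes the approach attractive when data sets evolve.
mean, var = update(mean, var, [1.05, 0.95], 0.04)
```

In realistic DFT settings the posterior has no closed form and must be sampled numerically, which is the source of the computational cost mentioned above; the logic of the update, however, is the same.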
“We see this as a promising path toward a more rigorous quantification of uncertainties in DFT,” said Wild.
The paper “Error Analysis in Nuclear Density Functional Theory” was published in the Journal of Physics G: Nuclear and Particle Physics 42(3), 2015. Readers may also wish to refer to a recent paper that discusses Bayesian approaches for DFT.