Nuclear Science and Engineering Division | Artificial Intelligence and Machine Learning

Inherent Safety Response

Trust, but verify

The reactor designs that hold out the most promise for achieving demonstrable safety are the advanced, so-called passively safe reactor concepts that have emerged in the past few years. These concepts, to varying degrees, use intrinsic design features to improve safety beyond what is achievable with active systems alone. Such features have the potential to perform more reliably because they depend not on electromechanical components but only on the natural laws governing heat transfer, fluid flow, and neutron production. Full-scale tests could be conducted on a power plant to demonstrate that the reactor can survive the most severe accidents, providing incontrovertible evidence of the safety of a design.

Obviously, performing these tests will be impractical for all but the first operating prototype of a design. A different type of test is needed for the subsequent production units of the design: tests that do not interfere with normal operation and that can be performed periodically while the plant is running. Their purpose is to detect anomalies that might develop well into the life of the plant or that might result from fabrication errors.


A machine learning method has been developed to predict probabilistic margins to safety limits for passively safe reactors where the same physical mechanisms that control reactor behavior at power also control off-normal response. A data-driven model combined with a physics-based model is trained using the plant response to perturbations of flow, temperature, and rod reactivity applied during normal operation. The resulting model can be used to predict plant response to upsets and provide a probabilistic measure of how closely safety limits would be approached.
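The idea of training on normal-operation perturbations can be illustrated with a minimal sketch. The first-order heat balance, the parameter names, and the numerical values below are all illustrative assumptions, not the actual plant model: a single heat-loss coefficient standing in for a physical model parameter is recovered from the measured response to small perturbations of the power input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order heat balance: C * dT/dt = P(t) - h * T
# h (a heat-loss coefficient) is the unknown physical parameter that the
# data-driven model must recover from normal-operation perturbations.
C, h_true = 10.0, 0.5
dt, n = 0.1, 500
P = 5.0 + 0.5 * rng.standard_normal(n)      # perturbed power input

T = np.zeros(n)
for k in range(n - 1):                      # simulate the "plant"
    T[k + 1] = T[k] + dt * (P[k] - h_true * T[k]) / C
T_meas = T + 0.01 * rng.standard_normal(n)  # noisy plant measurements

# Discretized model: T[k+1] = T[k] + (dt/C) * P[k] - (dt/C * h) * T[k]
# Solve for h by least squares on the measured perturbation response.
y = T_meas[1:] - T_meas[:-1] - dt / C * P[:-1]
A = (-dt / C * T_meas[:-1]).reshape(-1, 1)
h_est = np.linalg.lstsq(A, y, rcond=None)[0][0]
print(h_est)                                # close to h_true
```

The key point of the sketch is that the identifying signal comes entirely from perturbations applied during normal operation; no off-normal test is needed to estimate the parameter.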

An inverse uncertainty quantification (UQ) approach makes no a priori assumptions about the values of parameters in the model. This differs from standard UQ approaches, in which parameter values are taken a priori from a variety of sources, including physics-code calculations, correlations from scaled experiments, and the operating history of the reactor itself.
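A toy example of the inverse approach, under assumed values (the linear response model, the parameter `alpha`, and the noise level are hypothetical): rather than assigning the parameter a prior value from outside sources, its full posterior distribution is inferred from perturbation data alone, starting from a flat prior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar response model: y = alpha * x + noise, where
# alpha is a physical parameter (e.g., a feedback coefficient).
alpha_true, sigma = 2.0, 0.2
x = np.linspace(0.1, 1.0, 40)            # applied perturbation sizes
y = alpha_true * x + sigma * rng.standard_normal(x.size)

# Inverse UQ: no a priori assumption about alpha -- infer its
# distribution from the data alone (flat prior over a wide grid).
grid = np.linspace(0.0, 4.0, 2001)
loglik = np.array([-0.5 * np.sum((y - a * x) ** 2) / sigma**2
                   for a in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()                       # normalize over the grid

mean = (grid * post).sum()
std = np.sqrt(((grid - mean) ** 2 * post).sum())
print(mean, std)                         # posterior concentrated near alpha_true
```

A standard (forward) UQ workflow would instead fix a prior distribution for `alpha` from code calculations or scaled experiments and propagate it directly; here the distribution itself is an output of the measurement process.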

The method has the following four steps and is represented in the figure.

  1. Develop model structure. A model structure for the physical system is written based on the physical laws. The structure consists of ordinary differential equations and contains assumptions about how modeling errors and measurement errors enter the model to affect the output. The parameters in the model are represented by a data-driven model trained on measurements from the plant.
  2. Train. The data-driven model is trained from observations of the process output using discrete-system filtering theory.
  3. Test. The validity of the model is tested by checking that it is statistically consistent with the measurements. The validated model constitutes a measured basis from which the system response to accident forcing functions can be predicted.
  4. Propagate uncertainties. The parameter values of the data-driven model are propagated through the physics equations for a specific transient. The output is a probabilistic envelope that bounds the system response and whose width reflects the magnitude of the model uncertainty, as shown in the accompanying figure.
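The four steps above can be sketched end to end on a hypothetical scalar plant. Everything below is an illustrative stand-in, not the actual method: a one-parameter linear difference equation replaces the reactor ODEs, a scalar recursive least-squares update (a special case of discrete-system filtering) replaces the training step, and the assumed noise variances are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Step 1 -- model structure (hypothetical scalar plant):
#   x[k+1] = a * x[k] + u[k] + w[k],   y[k] = x[k] + v[k]
a_true, q, r, n = 0.9, 1e-4, 1e-4, 400
u = 0.1 * rng.standard_normal(n)           # normal-operation perturbations

x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = a_true * x[k] + u[k] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(n)

# Step 2 -- train: recursively estimate the unknown parameter a from the
# measured output (scalar recursive least squares).
R_resid = 3e-4                             # assumed combined residual variance
a_hat, P = 0.5, 1.0                        # initial guess and its variance
innov = []
for k in range(n - 1):
    e = y[k + 1] - (a_hat * y[k] + u[k])   # innovation
    S = y[k] ** 2 * P + R_resid
    K = P * y[k] / S
    a_hat += K * e
    P *= 1 - K * y[k]
    innov.append(e / np.sqrt(S))

# Step 3 -- test: normalized innovations should be roughly white with
# unit variance if the model is statistically consistent with the data.
innov = np.array(innov)

# Step 4 -- propagate: sample the trained parameter and run a transient,
# yielding a probabilistic envelope on the predicted response.
samples = rng.normal(a_hat, np.sqrt(P), 1000)
traj = np.ones((1000, 50))
for k in range(49):
    traj[:, k + 1] = samples * traj[:, k]
lo, hi = np.percentile(traj[:, -1], [2.5, 97.5])
print(a_hat, innov.var(), lo, hi)
```

The width of the interval `[lo, hi]` plays the role of the probabilistic envelope in step 4: the more informative the normal-operation data, the smaller the parameter variance `P` and the tighter the bound on the transient response.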


The estimation of the probability density function is computationally intensive, so implementation to date has focused on relatively simple point kinetics models for representing the physics of reactor behavior. This simplification, of course, yields an uncertainty estimate that is larger than might otherwise be the case. The challenge is to reduce the uncertainty magnitude through higher-fidelity representations of the physics, such as reactivity feedbacks represented by spatially distributed worth, as found in codes like SAM.
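As an illustration of the point kinetics setting, the sketch below propagates an uncertain step-reactivity insertion through a one-delayed-group point kinetics model; the group constants, the step size, and the uncertainty magnitude are illustrative values, not parameters from the actual application.

```python
import numpy as np

rng = np.random.default_rng(3)

# One-delayed-group point kinetics (illustrative constants):
#   dn/dt = ((rho - beta) / Lambda) * n + lam * c
#   dc/dt = (beta / Lambda) * n - lam * c
beta, Lambda, lam = 0.0065, 1e-4, 0.08

def transient(rho, t_end=1.0, dt=1e-3):
    """Relative power after a step reactivity insertion rho (explicit Euler)."""
    n, c = 1.0, beta / (Lambda * lam)     # critical initial state, n = 1
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n = n + dt * dn
        c = c + dt * dc
    return n

# Propagate an uncertain reactivity insertion through the transient;
# the NumPy operations vectorize over the whole sample array at once.
rho_samples = rng.normal(0.1 * beta, 0.01 * beta, 200)
powers = transient(rho_samples)
lo, hi = np.percentile(powers, [2.5, 97.5])
print(lo, hi)    # probabilistic envelope on end-of-transient power
```

Even this simple model exhibits the cost driver mentioned above: the prompt time scale forces a small time step, and the envelope requires many repeated transient solutions, which is why higher-fidelity spatial kinetics makes the density estimation substantially more expensive.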