The moisture carryover (MCO) problem found in boiling water reactors illustrates a class of problems that is not amenable to standard engineering modeling and analysis. In the MCO problem, entrained water droplets that would normally be removed by the steam separators and steam dryers are carried over to the turbine, with negative consequences. Recent changes to operating procedures, including power uprates and new core loading patterns, have led to increased MCO in ways not well predicted by analytic methods. We have helped to solve this problem by using historical operating data to learn the operational dependencies that control MCO and to capture them in a data-driven model. This model is being used by a US utility to guide and manage reactor operating conditions so that MCO remains acceptable.
In machine learning, the inputs that capture relevant phenomena are referred to as features, and the collection of candidate features as the feature space. The challenge is to find the important inputs that affect MCO while restricting their number to avoid overfitting on limited-size data sets. This necessarily involves a tradeoff between good model generalization capability and good model predictive accuracy. The appropriate tradeoff in this work is analyzed through simulation studies in which the choice of features is varied and the model predictive performance is evaluated. Sensitivity studies are performed to identify the degree to which feature-space variables affect the output.
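The feature-variation procedure can be sketched in miniature. The snippet below is illustrative only: it uses synthetic data (not plant data) and a simple linear model in place of the neural network, but it shows the mechanics of varying the feature subset and scoring held-out predictive performance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for plant data: three informative features plus two
# noise features. Coefficients here are arbitrary illustration values.
X = rng.normal(size=(200, 5))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] + 0.1 * rng.normal(size=200)

def holdout_mse(X, y, cols, n_train=150):
    """Fit a linear model on a feature subset and score it on held-out data."""
    Xs = X[:, cols]
    w, *_ = np.linalg.lstsq(Xs[:n_train], y[:n_train], rcond=None)
    resid = y[n_train:] - Xs[n_train:] @ w
    return float(np.mean(resid ** 2))

# Vary the feature subset and compare held-out predictive performance.
for cols in [(0,), (0, 1), (0, 1, 2), (0, 1, 2, 3, 4)]:
    print(cols, round(holdout_mse(X, y, cols), 4))
```

Dropping an informative feature degrades held-out accuracy, while carrying uninformative features adds free parameters without benefit; the same comparison, run over candidate MCO feature sets, exposes the generalization/accuracy tradeoff.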
Expert nuclear engineering knowledge was used to identify a candidate set of features, serving as an initial estimate of the importance of physical variables. A feature set of limited dimension, chosen to avoid overfitting, was identified through physical arguments. The reasoning exercised is approximate: with data-driven systems, the lack of a physics-based model requires some judgment on the part of the analyst, aided by analytical insight into the structure of the data. Representative data involved in feature selection appear in the figure.
It is important to eliminate implicit dependence among these candidate features to achieve a reduction in the feature space. Overdetermined information (in analogy with linear algebra, collinearity) should not be included in the model. A correlation analysis was performed using the Pearson correlation method, with the result appearing in the figure. A high correlation is observed between quality and void fraction, indicating these variables are redundant, which is consistent with the engineering analysis. Quality and liquid velocity, by contrast, show a lower level of correlation, indicating that both variables are important for capturing MCO dependence.
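A minimal sketch of this screening step follows. The data are synthetic and constructed only to mimic the qualitative finding described above (void fraction closely tracking quality, liquid velocity only weakly related); the actual plant values and correlation strengths come from the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: void fraction tracks quality closely (redundant pair),
# while liquid velocity is only weakly tied to quality. All coefficients are
# illustrative assumptions, not plant physics.
quality = rng.uniform(0.05, 0.15, size=500)
void_fraction = 0.9 * quality + 0.002 * rng.normal(size=500)
liquid_velocity = 2.0 + 0.3 * quality + 0.5 * rng.normal(size=500)

features = np.column_stack([quality, void_fraction, liquid_velocity])
corr = np.corrcoef(features, rowvar=False)  # Pearson correlation matrix
print(np.round(corr, 2))
```

A near-unity off-diagonal entry flags a redundant pair, and one member can be dropped from the feature space; a low entry indicates the variables carry distinct information and both should be retained.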
Additional reduction of the feature space involved collapsing the spatial distributions of some process variables into aggregated quantities. A K-means clustering analysis was performed to discover similarities and resulted in the grouping of fuel bundles shown in the figure.
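The clustering step can be sketched as follows. This is a plain K-means implementation on synthetic two-feature bundle data (the feature choice of relative power and exit quality is a hypothetical illustration, not the study's actual aggregation variables).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for per-bundle aggregated features (e.g. relative power
# and exit quality), drawn from two distinct operating groups.
group_a = rng.normal(loc=[1.0, 0.10], scale=0.02, size=(20, 2))
group_b = rng.normal(loc=[1.3, 0.14], scale=0.02, size=(20, 2))
bundles = np.vstack([group_a, group_b])

def kmeans(X, k, n_iter=50):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()  # spread-out init
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):                  # keep the old centroid if a cluster empties
                centers[j] = pts.mean(axis=0)
    return labels, centers

labels, centers = kmeans(bundles, k=2)
print(labels)
```

Each cluster of similar bundles can then be represented by a single aggregated feature (for example, the cluster-mean value of a process variable), collapsing a spatial distribution into a handful of inputs.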
To further reduce the number of free parameters the neural network requires to learn the data, the training problem was recast as a hyperspace optimization. Nonlinear dependencies known to exist were moved into the input space and represented as parametric relationships, with the values of these parameters estimated in an outer optimization loop wrapped around the standard neural network training procedure.
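The nested-loop idea can be illustrated with a deliberately tiny example. Here the known nonlinearity is assumed to be a power law x**alpha (a stand-in for whatever parametric form the engineering analysis suggests), the inner "training" is reduced to a one-coefficient least-squares fit, and the outer loop searches the transform parameter on a grid.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data with a known power-law dependence, y ~ 2 * x**1.7.
x = rng.uniform(0.5, 2.0, size=300)
y = 2.0 * x ** 1.7 + 0.01 * rng.normal(size=300)

def inner_fit_mse(alpha):
    """Inner loop: with the nonlinearity fixed as x**alpha in the input,
    only a linear coefficient remains to be fit."""
    phi = x ** alpha                 # parametric input transform
    w = (phi @ y) / (phi @ phi)      # one-parameter least squares
    return float(np.mean((y - w * phi) ** 2))

# Outer loop: search the transform parameter alpha on a coarse grid.
grid = np.linspace(1.0, 2.5, 31)
best_alpha = min(grid, key=inner_fit_mse)
print(best_alpha)
```

In the actual scheme the inner call is a full network training run rather than a least-squares fit, but the structure is the same: the nonlinearity lives in the input transform, so the network itself needs fewer free parameters to capture the remaining dependence.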
A parametric study was performed to identify the optimal number of neurons in the hidden layer of the neural network. The cost-function (MSE) distributions were plotted as a function of the number of neurons. As seen in the figure, the optimal model has two hidden neurons, as this provides the best balance between accuracy and generalization.
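The mechanics of such a sweep look like the sketch below: a small one-hidden-layer network trained from scratch on synthetic data, with the hidden-layer size varied and held-out MSE recorded for each candidate. The data, architecture details, and step counts are illustrative assumptions; on the actual plant data it is this kind of MSE-versus-size comparison that selected the two-neuron model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic regression problem standing in for the plant data.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.tanh(2 * X[:, 0]) + 0.3 * X[:, 1] + 0.02 * rng.normal(size=200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def train_mlp(X, y, n_hidden, n_steps=2000, lr=0.1, seed=0):
    """Train a one-hidden-layer tanh network by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=n_hidden)
    b2 = 0.0
    for _ in range(n_steps):
        h = np.tanh(X @ W1 + b1)
        err = h @ W2 + b2 - y
        # Backpropagate the mean-squared-error gradient.
        gW2 = h.T @ err / len(y)
        gb2 = err.mean()
        gh = np.outer(err, W2) * (1 - h ** 2)
        gW1 = X.T @ gh / len(y)
        gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def val_mse(params, X, y):
    W1, b1, W2, b2 = params
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Sweep the hidden-layer size and record held-out MSE for each candidate.
scores = {n: val_mse(train_mlp(X_tr, y_tr, n), X_va, y_va) for n in (1, 2, 4, 8)}
print(scores)
```

The smallest size whose held-out MSE is near the minimum is preferred, since extra neurons add free parameters that invite overfitting on limited data.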
The resulting predictions of MCO are shown in the figure, with the prediction for each cycle made by a model trained on the other five cycles. The predictions generally fall within the experimental error of the MCO measurement. The results are a significant improvement over those obtained from the proprietary model that the nuclear utility has used in the past to predict MCO; they indicate good generalization and affirm the procedure used to arrive at a suitable feature space.
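This evaluation is a leave-one-group-out cross-validation at the cycle level, sketched below on synthetic data with a linear model standing in for the trained network (the six-cycle structure mirrors the study; the data and model are illustrative).

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in: six "cycles" of data sharing one underlying trend.
cycles = np.repeat(np.arange(6), 30)
X = rng.normal(size=(180, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + 0.05 * rng.normal(size=180)

def leave_one_cycle_out(X, y, cycles):
    """For each cycle, train on the remaining cycles and predict the held-out one."""
    preds = np.empty_like(y)
    for c in np.unique(cycles):
        hold = cycles == c
        w, *_ = np.linalg.lstsq(X[~hold], y[~hold], rcond=None)
        preds[hold] = X[hold] @ w
    return preds

preds = leave_one_cycle_out(X, y, cycles)
print(round(float(np.mean((preds - y) ** 2)), 4))
```

Because each cycle's prediction comes from a model that never saw that cycle, an aggregate error near the measurement uncertainty is direct evidence of generalization rather than memorization.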
Performing data-driven performance optimization on a routine basis requires a well-thought-out policy for data formatting, curation, and archiving that begins with the design of a nuclear facility's information system.
To extract maximum utility from data-driven performance-optimization methods, a nuclear facility must be designed with a sensor set that provides access to the required measurements.
It is important to properly qualify data-driven models and results with respect to their uncertainty. There is an opportunity to better qualify models through non-parametric uncertainty quantification methods, which obviate the need to impose a generic error distribution.