Towards a Data-driven, Fully Bayesian Framework for Uncertainty Quantification
It is common for a research group or a company to spend years developing sophisticated software to realistically simulate important physical phenomena. However, carrying out tasks such as uncertainty quantification, model calibration, or design with the full-fledged model is, in all but the simplest cases, daunting, since a single simulation may take days or even weeks to complete, even on state-of-the-art computing systems. One then has to resort to computationally inexpensive surrogates of the computer code. The surrogate surface may subsequently be used to carry out any of the computationally intensive engineering tasks.
In this talk, I offer a holistic view of the uncertainty quantification problem, organized around two interconnected aspects:
1. The construction of stochastic input models based on data, and
2. The propagation of the stochastic input uncertainty through an expensive computer model by making use of limited, but well-selected, simulations.
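The second aspect can be illustrated with a minimal sketch. The "expensive" model below (a one-line function), the surrogate choice (piecewise-linear interpolation through a handful of runs), and the Beta input distribution are all illustrative assumptions made for this example; they stand in for the simulator, the Bayesian surrogate, and the data-driven input model discussed in the talk.

```python
# Illustrative sketch of surrogate-based uncertainty propagation.
# All modeling choices here (model, design, surrogate, input density)
# are hypothetical placeholders, not the talk's actual methodology.
import math
import random
from bisect import bisect_left

def expensive_model(x):
    """Stand-in for a simulation that would normally take days to run."""
    return math.sin(3.0 * x) + 0.5 * x

# Step 1: run a small set of simulations (here: a uniform grid on [0, 1]).
design = [i / 9 for i in range(10)]
responses = [expensive_model(x) for x in design]

def surrogate(x):
    """Cheap piecewise-linear interpolant replacing the true model."""
    if x <= design[0]:
        return responses[0]
    if x >= design[-1]:
        return responses[-1]
    j = bisect_left(design, x)
    x0, x1 = design[j - 1], design[j]
    t = (x - x0) / (x1 - x0)
    return (1 - t) * responses[j - 1] + t * responses[j]

# Step 2: propagate input uncertainty through the surrogate by Monte Carlo,
# which would be unaffordable with the expensive model itself.
random.seed(0)
samples = [surrogate(random.betavariate(2, 2)) for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(f"surrogate-based estimate: mean={mean:.3f}, var={var:.3f}")
```

Only ten model evaluations are spent on the surrogate, while the ten thousand Monte Carlo samples cost essentially nothing; in practice, the interpolant would be replaced by a probabilistic surrogate whose own uncertainty can be folded into the analysis.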
Despite the rapid growth of the field over the last twenty years, several challenges remain open, such as non-linear dimensionality reduction, high-dimensional density estimation, the treatment of response discontinuities, the construction of surrogates in high-dimensional spaces, and the selection of the most informative simulations. I will discuss my contributions to the field, which stem from a Bayesian reformulation of the problem, and highlight the future work that is required. Addressing these issues is of great importance in engineering practice, since it would enable a full-fledged stochastic analysis of complicated systems.