Uncertainty quantification, or "UQ," is the quantitative characterization and reduction of uncertainty in computational applications. It typically involves running large suites of calculations to characterize how minor differences in a system's inputs affect its outputs, then applying statistical methods to determine the likely range of outcomes. Sources of uncertainty are rife in the natural sciences and engineering.
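As a minimal illustration of this ensemble approach, the sketch below stands in a toy function for a real simulation code and varies two hypothetical input parameters (density and temperature, with assumed ranges) over many runs, then summarizes the spread of outputs statistically:

```python
import random
import statistics

def simulation(density, temperature):
    # Toy stand-in for an expensive physics code: any smooth
    # function of the uncertain inputs will do for illustration.
    return density * temperature ** 0.5

random.seed(0)
ensemble = []
for _ in range(1000):
    # Assumed input uncertainty: density varied +/-10%,
    # temperature varied +/-20 K around nominal values.
    rho = random.uniform(0.9, 1.1)
    temp = random.uniform(280.0, 320.0)
    ensemble.append(simulation(rho, temp))

# Statistical summary of the likely outcomes.
mean = statistics.mean(ensemble)
spread = statistics.stdev(ensemble)
```

In a real UQ study the ensemble would span thousands of full-scale code runs and more sophisticated sampling designs, but the structure, namely sample the inputs, run the code, and characterize the output distribution, is the same.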

UQ in large-scale simulations is playing an increasingly important role in the process of code verification and validation (V&V). If a simulation is to be quantitatively validated against experimental results, it is crucial both to understand the expected uncertainty in the calculation's output metrics and to have a quantitative determination of the error bars on the corresponding experimental metrics. In practice, the true accuracy of a simulation can be assessed only when the experimental uncertainty is smaller than the predicted uncertainty of the simulation.
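One simple way to operationalize this comparison, sketched below under the simplifying assumption that both uncertainties can be summarized as single standard deviations (real V&V practice is more involved), is to test whether the simulation-to-experiment discrepancy falls within their combined uncertainty:

```python
import math

def consistent(sim_mean, sim_sigma, exp_mean, exp_sigma, k=2.0):
    """Check whether simulation and experiment agree within k
    combined standard deviations (a generic validation metric,
    not any specific laboratory procedure)."""
    combined = math.sqrt(sim_sigma**2 + exp_sigma**2)
    return abs(sim_mean - exp_mean) <= k * combined

# Hypothetical numbers: a 0.5-unit discrepancy is resolvable only
# if the combined error bar is tight enough.
agree = consistent(10.0, 0.5, 10.5, 0.3)   # True: within 2 sigma
tight = consistent(10.0, 0.1, 11.0, 0.1)   # False: discrepancy exceeds error bars
```

Note that if the simulation's own uncertainty (`sim_sigma`) is large, almost any experiment will "agree" with it, which is exactly why a meaningful accuracy assessment requires the experimental error bars to be the smaller of the two.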

Figure: Uncertainty quantification (UQ) uses statistical methods to determine the likely effects of minor differences in input parameters. The chart displays the primary roles of key past, present, and future NNSA resources responsible for performing UQ calculations with weapons codes. Trinity and Cielo are located at Los Alamos National Laboratory, while Purple, Sequoia, and Sierra are Livermore supercomputers. (D = dimensional.)

Error estimates for the experimental uncertainty usually require that an ensemble of experiments with controlled parameters be performed and that known systematic errors be understood.
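For instance, given a hypothetical set of repeated measurements and an assumed, already-characterized systematic offset, the experimental error bar can be estimated from the scatter of the ensemble:

```python
import math
import statistics

# Hypothetical repeated measurements of the same output metric.
measurements = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3]

# Known systematic error (e.g., a calibration bias), assumed to be
# understood well enough to subtract off.
systematic_offset = 0.05
corrected = [m - systematic_offset for m in measurements]

# Best estimate and its standard error from the ensemble scatter.
mean = statistics.mean(corrected)
stderr = statistics.stdev(corrected) / math.sqrt(len(corrected))
```

The standard error shrinks as the ensemble grows, which is why a single experiment rarely suffices: without repetition there is no empirical handle on the random component of the error bar.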

Quantifying uncertainty in large-scale simulations becomes particularly important when a simulation is used as a predictive tool to describe phenomena in a regime outside the bounds of previous experimental tests or known observations. Without experiments to check code predictions in such regimes, it is essential to quantitatively evaluate the expected uncertainty in the code's output. This UQ task is complex for any simulation code whose nonlinearly coupled multiphysics algorithms represent the underlying partial differential equations.

In the complex multiphysics simulation codes used at LLNL, many aspects of the physics may have a parametric representation or a choice among physics models, each with its own degree of approximation. The range of parametric settings in the physical models and the choice among physics models together represent a span of uncertainty in the simulation. Typically, simulation codes are run with one particular choice of input physics models, and perhaps a typical choice of parametric settings, without any exploration of the full uncertainty in the simulation outcome. Occasionally, a few different models are run in a few large-scale simulations to estimate the range or dispersion of output results. This gives some measure of the uncertainty, but it is usually woefully inadequate for determining the full uncertainty in the simulation. Determining this uncertainty is a complex problem and a topic of current research. Every potential center with a strong V&V component should devote part of that component to the UQ of its simulations.
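A crude sketch of this kind of exploration, using hypothetical stand-in models rather than any real physics package, runs the same problem through each model variant and takes the spread of outputs as a rough measure of the model-form uncertainty:

```python
# Hypothetical alternative models of the same physical process,
# each with a different degree of approximation.
def model_a(x):
    return 2.0 * x

def model_b(x):
    return 2.0 * x + 0.1 * x**2   # adds a higher-order correction

def model_c(x):
    return 1.9 * x + 0.05         # different fit to the same data

models = [model_a, model_b, model_c]
x = 1.0
outputs = [m(x) for m in models]

# Span of results across model choices: a crude lower bound on
# the true model-form uncertainty.
dispersion = max(outputs) - min(outputs)
```

As the text notes, a handful of such variant runs only bounds the uncertainty from below: it says nothing about parametric uncertainty within each model, nor about model forms that were never tried.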