
Introduction

A model-based computational framework for uncertainty quantification and management in prognostics has been developed. Using this framework, it is possible to view the problem of quantifying the uncertainty in remaining useful life (RUL) as an uncertainty propagation problem, so that computational approaches for uncertainty propagation can be used to predict the uncertainty in RUL.

Model-Based Framework

An architecture for model-based prognostics has been developed. As seen from the figure below, the whole problem of prognostics can be considered to consist of the following three sub-problems:

  • State Estimation
  • Future State Prediction
  • Remaining Useful Life Computation

Figure: Model-Based Framework for Prognostics

The first step, estimating the present state, serves as the precursor to prognosis and RUL computation. A state space model is used to continuously predict the state of the system, and the measured outputs are related to the state through a corresponding output (measurement) model. Using these models along with the observed data, a Bayesian filtering approach can be used to estimate the state. Commonly used Bayesian filtering techniques are Kalman filtering, particle filtering, unscented Kalman filtering, etc. Such techniques also quantify the uncertainty in the present estimate, and this uncertainty needs to be interpreted subjectively, as explained earlier.
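As a concrete illustration, the following is a minimal sketch of one predict/update cycle of a Kalman filter, the simplest of the Bayesian filtering techniques listed above. It assumes a linear-Gaussian state space model; the matrices A, B, C and the noise covariances Q and R_cov are hypothetical placeholders, not parameters of any particular system.

    import numpy as np

    def kalman_step(x_est, P_est, u, y, A, B, C, Q, R_cov):
        """One predict/update cycle of a linear Kalman filter.
        Returns the new state estimate and its covariance, which
        quantifies the uncertainty in the present state estimate."""
        # Predict: propagate the state and its uncertainty through the model.
        x_pred = A @ x_est + B @ u
        P_pred = A @ P_est @ A.T + Q
        # Update: correct the prediction using the measured output y.
        S = C @ P_pred @ C.T + R_cov          # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ (y - C @ x_pred)
        P_new = (np.eye(len(x_est)) - K @ C) @ P_pred
        return x_new, P_new

For nonlinear models, the same structure carries over to particle filtering or unscented Kalman filtering; only the predict and update computations change.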

Having estimated the present state, the second step involves using the state space model to predict the future states of the system. This model is a function of the inputs (future loading and operating conditions), and therefore, the future state predictions depend on the present state estimate and the future inputs, along with the choice of the state space model. Typically, all three of these - the model, the present state estimate, and the future inputs - are uncertain, and this leads to uncertain future behavior of the system. Therefore, the system may take multiple paths, as indicated in the figure below.

Figure: Uncertain Future Behavior

The third step involves the prediction of the end of life and the calculation of the remaining useful life. The end of life is typically determined by defining a region of acceptable behavior; the time at which the system exits this region is the end of life, and the time remaining until then is the remaining useful life. A Boolean function (denoted by TEOL) is defined to check whether the present performance is acceptable or not. Due to the uncertain future behavior, each future trajectory exits the safe region at a different time instant. Hence, the end of life (EOL) is uncertain and therefore, the remaining useful life (RUL) prediction is also uncertain.
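The following is a minimal sketch of the second and third steps together: uncertain future trajectories are sampled, and each one is checked against the region of acceptable behavior. The scalar degradation model, loading distribution, and failure threshold below are hypothetical illustrations, not a model of any particular system.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_rul(x_tp, sigma_x, n_traj=1000, threshold=0.2, max_steps=500):
        """Return RUL samples (in time steps from tP) for n_traj trajectories."""
        ruls = []
        for _ in range(n_traj):
            x = rng.normal(x_tp, sigma_x)      # uncertain present state estimate
            theta = rng.normal(0.01, 0.002)    # uncertain model parameter
            for k in range(1, max_steps + 1):
                u = rng.uniform(0.5, 1.5)      # uncertain future loading
                v = rng.normal(0.0, 0.001)     # model/process error
                x = x - theta * u + v          # hypothetical degradation model
                if x < threshold:              # TEOL check: acceptable region exited
                    ruls.append(k)             # EOL reached; RUL = k steps
                    break
            # Trajectories that never exit within max_steps are censored (omitted).
        return np.array(ruls)

    rul_samples = sample_rul(x_tp=1.0, sigma_x=0.05)

The spread of rul_samples reflects exactly the effect shown in the figure above: each trajectory exits the safe region at a different time, so EOL and RUL are uncertain.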

RUL Estimation: An Uncertainty Propagation Problem

From the above discussion, it is clear that the RUL at a specific time instant (tP) depends on the following quantities:

  • Present state estimate (x(tP)), which is reflective of the current health condition.
  • Future input conditions (u), starting from tP until EOL is reached.
  • Model parameter values (θ), starting from tP until EOL is reached.
  • Model error values (v), starting from tP until EOL is reached.

For the purpose of RUL prediction, all of the above quantities are independent quantities and hence, RUL becomes a dependent quantity. Let X denote the vector of all the above independent quantities. Then the calculation of RUL (denoted by R) can be expressed in terms of a function, as: R = G(X). This functional relation can be explained graphically, as shown in the figure below.

Figure: RUL Estimation as an Uncertainty Propagation Problem

Knowing the values of X, it is therefore possible to compute the value of R. The quantities contained in X are uncertain, and the focus in prognostics is to compute their combined effect on the RUL prediction, and thereby compute the probability distribution of R. Estimating the uncertainty in R is equivalent to propagating the uncertainty in X through G; this is a non-trivial problem that requires rigorous computational approaches. An important goal of this project is to investigate statistical approaches for uncertainty propagation and check whether such approaches would be suitable for online prognostics.
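To make the relation R = G(X) concrete, the following is a minimal sketch in which X collects one realization of the present state, the future inputs, the model parameter, and the model error terms, and G maps that realization to a single RUL value. The linear degradation model and threshold are hypothetical.

    import numpy as np

    def G(X, threshold=0.2, max_steps=500):
        """X = (x_tp, u, theta, v): one realization of the independent quantities.
        u and v are arrays over future time steps. Returns R, the RUL."""
        x_tp, u, theta, v = X
        x = x_tp
        for k in range(max_steps):
            x = x - theta * u[k] + v[k]    # hypothetical degradation model
            if x < threshold:              # exit from the acceptable region
                return k + 1               # EOL reached after k + 1 steps
        return max_steps                   # censored: EOL not reached in horizon

    # One deterministic evaluation of G: nominal loading, no model error.
    R = G((1.0, np.full(500, 1.0), 0.01, np.zeros(500)))

Uncertainty propagation then amounts to evaluating G over many uncertain realizations of X, which is the subject of the next section.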

Computational Methods for Uncertainty Propagation

The different types of methods for uncertainty propagation can be broadly classified into two types: sampling-based methods and analytical methods.

Sampling-based Methods

The most intuitive method for uncertainty propagation is to make use of Monte Carlo simulation (MCS). The basic underlying concept of Monte Carlo simulation is to generate a pseudo-random number which is uniformly distributed on the interval [0, 1]; the CDF of X is then inverted to generate the corresponding realization of X. Following this procedure, several random realizations of X are generated, and the corresponding random realizations of R are computed. The CDF of R is then calculated as the proportion of realizations that are less than a given value "r". The generation of each realization requires one evaluation/simulation of G. Several thousands of realizations may often be needed to calculate the entire CDF accurately, especially for very high/low values of "r". Error estimates for the CDF, in terms of the number of simulations, are available in the literature. Alternatively, the entire PDF of R can be computed by constructing a histogram from the available samples of R, or by using kernel density estimation. There are several variations of the basic Monte Carlo algorithm in use by researchers; some of these approaches are listed below, followed by a minimal sketch of the basic algorithm and of unscented transform sampling.

  1. Importance Sampling: This algorithm does not generate random realizations of X from the original distribution. Instead, random realizations are generated from a proposal density function; statistics of R are estimated and then corrected based on the ratio between the original density values and the proposal density values.

  2. Stratified Sampling: In this sampling approach, the overall domain of X is divided into multiple sub-domains and samples are drawn from each sub-domain independently. The process of dividing the overall domain into multiple sub-domains is referred to as stratification. This method is applicable when sub-populations within the overall population are significantly different.

  3. Latin Hypercube Sampling: This is a sampling method commonly used in the design of computer experiments. When sampling a function of "N" variables, the range of each variable is divided into "M" equally probable intervals, thereby forming a grid. Sample positions are then chosen such that each interval of every variable contains exactly one sample (for two variables, exactly one sample in each row and each column of the grid). Each resultant sample is then used to compute a corresponding realization of R, and thereby the PDF of R can be calculated.

  4. Unscented Transform Sampling: Unscented transform sampling is a sampling approach which focuses on estimating the mean and variance of R accurately, instead of the entire probability distribution of R. Certain pre-determined sigma points are selected in the X-space and these sigma points are used to generate corresponding realizations of R. Using weighted averaging principles, the mean and variance of R are calculated.
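As promised above, the following is a minimal sketch of the basic Monte Carlo algorithm and of unscented transform sampling, applied to a toy model R = G(X) with bivariate Gaussian inputs. Both G and the input statistics are hypothetical illustrations.

    import numpy as np

    G = lambda x: 100.0 / (x[0] ** 2 + x[1] + 1.0)   # hypothetical model
    mean = np.array([1.0, 2.0])
    cov = np.array([[0.04, 0.0], [0.0, 0.09]])

    # --- Basic Monte Carlo simulation: many random realizations of X ---
    rng = np.random.default_rng(0)
    samples = rng.multivariate_normal(mean, cov, size=10000)
    R = np.array([G(x) for x in samples])
    r = 30.0
    cdf_at_r = np.mean(R < r)     # P(R < r): proportion of realizations below r

    # --- Unscented transform sampling: 2n + 1 deterministic sigma points ---
    n, kappa = len(mean), 1.0
    L = np.linalg.cholesky((n + kappa) * cov)        # matrix square root
    sigma_pts = [mean] + [mean + L[:, i] for i in range(n)] \
                       + [mean - L[:, i] for i in range(n)]
    w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    R_sig = np.array([G(p) for p in sigma_pts])
    ut_mean = np.sum(w * R_sig)                      # weighted mean of R
    ut_var = np.sum(w * (R_sig - ut_mean) ** 2)      # weighted variance of R

Note the trade-off discussed later: the Monte Carlo estimate changes slightly with the random seed, while the sigma-point calculation is deterministic but yields only the first two moments of R.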

Analytical Methods

Alternatively, there are several analytical methods available in the literature for uncertainty propagation. A class of methods was developed by reliability engineers in order to facilitate efficient, quick but approximate calculation of the CDF of R. Some of these methods are:

  1. First-Order Reliability Method (FORM)
  2. Inverse First-Order Reliability Method (inverse-FORM)
  3. Second-Order Reliability Method (SORM)

The focus of these methods is not on the calculation of the entire CDF but on evaluating the CDF at a particular value (r) of the output, i.e., P(R < r). The basic concept is to "linearize" the model G so that the output R can be expressed as a linear combination of the random variables. Further, the random variables are transformed into uncorrelated standard normal space and hence, the output R is also a normal variable (since a linear combination of normal variables is normal). Therefore, the CDF value can be computed using the standard normal distribution function. The transformation of random variables X into uncorrelated standard normal space (U) is denoted by U = T(X), and the details of the transformation can be found in several textbooks and research articles.

The goal is to "linearize" the curve represented by the equation "G(x) − r = 0", which is also referred to as the limit state equation. Since the model G is non-linear, the calculated CDF value depends on the location of "linearization". This linearization is done at the so-called most probable point (MPP), which is the point on the limit state at the shortest distance from the origin, calculated in the U-space. The CDF is then calculated using the standard normal distribution function, as a function of this minimum distance, as indicated in the figure below. The MPP and the shortest distance are estimated through a gradient-based optimization procedure. This optimization is solved using the well-known Rackwitz-Fiessler algorithm, which is in turn based on repeated linear approximation of the non-linear constraint "G(x) − r = 0". This method is popularly known as the first-order reliability method (FORM). There are also several second-order reliability methods (SORM) based on a quadratic approximation of the limit state.

The entire CDF can be calculated using repeated FORM analyses by considering different values of "r"; for example, if FORM is performed at 10 different values of "r", the corresponding CDF values are calculated, and an interpolation scheme can be used to construct the entire CDF, which can be differentiated to obtain the PDF. This approach is difficult because it is hard to choose suitable values of "r" when the range (i.e., the extent of uncertainty) of R is unknown. This difficulty is overcome by the use of the inverse FORM method, where multiple CDF values are chosen and the corresponding values of "r" are calculated. This approach is simpler because it is easier to choose multiple CDF values, since the range of the CDF is known to be [0, 1].

Figure: First-Order Reliability Method
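The following is a minimal sketch of FORM with the Rackwitz-Fiessler (HL-RF) iteration described above, assuming for simplicity that the random variables have already been transformed into uncorrelated standard normal space (so T is the identity here). The model G is a hypothetical illustration.

    import numpy as np
    from math import erf, sqrt

    G = lambda u: 100.0 / (u[0] ** 2 + u[1] + 10.0)  # hypothetical model in U-space

    def grad(f, u, h=1e-6):
        """Forward-difference gradient of f at u."""
        f0 = f(u)
        return np.array([(f(u + h * e) - f0) / h for e in np.eye(len(u))])

    def form_cdf(r, n_dim=2, n_iter=50, tol=1e-8):
        """Approximate P(R < r) by linearizing g(u) = G(u) - r at the MPP."""
        g = lambda u: G(u) - r               # limit state equation: g(u) = 0
        u = np.zeros(n_dim)                  # start the MPP search at the origin
        for _ in range(n_iter):
            gv, gr = g(u), grad(g, u)
            u_new = ((gr @ u - gv) / (gr @ gr)) * gr   # HL-RF update
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        beta = np.sign(g(np.zeros(n_dim))) * np.linalg.norm(u)  # signed distance
        return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))  # Phi(-beta) = P(R < r)

    # Repeated FORM analyses at several values of r trace out the CDF of R;
    # inverse FORM would instead fix the CDF values and solve for r.
    cdf_points = {r: form_cdf(r) for r in (8.0, 9.0, 10.0, 11.0)}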

Discussion: Sampling-based Methods vs. Analytical Methods

Since sampling-based methods may require several thousands of "samples" or "particles" in order to accurately calculate the PDF or CDF, they are time-consuming and hence may not be suitable in the context of online prognostics and decision-making. Further, in general, sampling-based methods (other than the unscented transform sampling approach) are not "deterministic methods"; in other words, every time a sampling-based algorithm is executed, it may result in a slightly different PDF or CDF. The ability to produce a deterministic solution is sometimes an important criterion for existing verification, validation, and certification protocols in the aerospace domain. On the other hand, analytical methods are not only computationally cheaper but also usually deterministic; in other words, they produce the same PDF or CDF every time the algorithm is executed. However, these analytical methods are still based on approximations, and are not readily suitable to account for all types of uncertainty in prognosis. For example, consider the FORM method, which relies on gradient-based optimization. Sometimes, the number of elements in X may be of the order of a few hundreds or thousands, and hence, it is necessary to compute hundreds or thousands of derivatives of G. In that case, the computational efficiency of the analytical approach is as good (or as bad) as that of sampling-based approaches. It is clear from the above discussion that, though uncertainty propagation methods may be available in the literature, it is challenging to make direct use of them for prognostics.

In addition to the above described methods, researchers have also advocated the use of surrogate models for uncertainty propagation. These surrogate models approximate the function G using different types of basis functions such as radial basis functions, Gaussian basis functions, Hermite polynomials, etc. Surrogate models are inexpensive to evaluate and therefore facilitate efficient uncertainty propagation. Future research will investigate the use of such surrogate models for uncertainty quantification in prognostics.
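A minimal sketch of this idea is shown below: a quadratic polynomial response surface is fitted to a handful of (comparatively expensive) evaluations of G, and Monte Carlo simulation is then run on the inexpensive surrogate instead. The model, the basis, and the input statistics are hypothetical; radial basis functions, Hermite polynomials, or Gaussian process models would follow the same pattern.

    import numpy as np

    G = lambda x: 100.0 / (x[0] ** 2 + x[1] + 1.0)  # "expensive" hypothetical model
    rng = np.random.default_rng(0)

    # Design of experiments: a few training evaluations of the true model.
    X_train = rng.normal([1.0, 2.0], [0.2, 0.3], size=(30, 2))
    y_train = np.array([G(x) for x in X_train])

    def quad_basis(X):
        """Quadratic polynomial basis: [1, x1, x2, x1^2, x2^2, x1*x2]."""
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

    # Least-squares fit of the response surface coefficients.
    coef, *_ = np.linalg.lstsq(quad_basis(X_train), y_train, rcond=None)
    surrogate = lambda X: quad_basis(X) @ coef

    # Cheap Monte Carlo on the surrogate: thousands of evaluations at trivial cost.
    X_mc = rng.normal([1.0, 2.0], [0.2, 0.3], size=(20000, 2))
    R_mc = surrogate(X_mc)
    print("mean:", R_mc.mean(), "95% bounds:", np.percentile(R_mc, [2.5, 97.5]))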

Summary

There are several challenges in using different uncertainty quantification methods for prognostics, health management, and decision-making. It is not only important to understand these challenges but also necessary to understand the requirements of PHM systems in order to integrate efficient uncertainty quantification along with prognostics and aid risk-informed decision-making. Some of the issues involved in such integration are outlined below:

  • An uncertainty quantification methodology for prognostics needs to be computationally feasible for implementation in online health monitoring. This requires quick calculations, while uncertainty quantification methods have been traditionally known to be time-consuming and computationally intensive.

  • Sometimes, the probability distribution of RUL may be multi-modal and the uncertainty quantification methodology needs to be able to accurately capture such distributions.

  • Existing verification, validation, and certification protocols require algorithms to produce deterministic, i.e., repeatable calculations. Several uncertainty quantification methods are non-deterministic, i.e., they produce different (albeit only slightly, if implemented well) results on repetition.

  • The uncertainty quantification method needs to be accurate, i.e., the entire probability distribution of X needs to be correctly accounted for, along with the functional relationship defined by G in the figure above. Some methods use only a few statistics (usually, mean and variance) of X, and some methods make approximations (for example, linearization) of G. Finally, it is important to correctly propagate the uncertainty to compute the entire probability distribution of RUL.

  • While it is important to be able to calculate the entire probability distribution of RUL, it is also important to be able to quickly obtain bounds on RUL which can be useful for online decision-making.

Each uncertainty quantification method may address one or more of the above issues, and therefore, it may even be necessary to resort to different methods to achieve different goals. Future research needs to continue this investigation, analyze different types of uncertainty quantification methods and study their applicability to prognostics before these methods can be applied in practice.
