
Overview

Prognostics is defined as detection of a failure precursor followed by prediction of the remaining useful life (RUL), beyond which a component will no longer perform a particular function. For maturation and deployment, prognostic technologies must be rigorously validated and verified so they can be certified for critical applications. While online performance evaluation for validation is still a research topic, offline performance evaluation provides a starting point for such efforts. Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to varied end-user requirements across applications, time scales, available information, and domain dynamics, to name a few.

The research community has used a variety of metrics chosen largely for convenience and tailored to their respective requirements. Very little attention has been focused on establishing a standardized approach for comparing different efforts. Under this effort we propose several new evaluation metrics tailored for prognostics that are shown to evaluate various algorithms more effectively than other conventional metrics. These metrics can incorporate probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment, they also offer a comprehensive visual perspective that can be used in designing the prognostic system, and they can be customized for different applications. Various issues faced by prognostics and its performance evaluation have been considered, leading to a formal notational framework that helps standardize subsequent developments. These metrics can be computed from the probability distributions produced by prognostic algorithms that handle uncertainty estimates.
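To make the idea of a metric that operates on a predicted RUL distribution concrete, the sketch below illustrates one such test in the spirit of this work: checking whether a sufficient fraction of the predicted RUL probability mass lies within a tolerance band around the ground-truth RUL at a given prediction time. This is a minimal illustration, not the official implementation; the function name, the alpha/beta values, and the sampled distribution are assumptions for demonstration only.

```python
import numpy as np


def rul_accuracy_test(rul_samples, true_rul, alpha=0.1, beta=0.5):
    """Check one prediction instant against an accuracy band.

    rul_samples : samples drawn from the algorithm's predicted RUL
                  distribution (its uncertainty estimate).
    true_rul    : ground-truth remaining useful life at that instant.
    alpha       : half-width of the accuracy band, as a fraction of true RUL.
    beta        : minimum probability mass required inside the band.
    Returns (passed, mass_in_bounds).
    """
    rul_samples = np.asarray(rul_samples, dtype=float)
    lower = (1.0 - alpha) * true_rul
    upper = (1.0 + alpha) * true_rul
    # Fraction of the predicted probability mass inside the alpha bounds.
    mass_in_bounds = np.mean((rul_samples >= lower) & (rul_samples <= upper))
    return mass_in_bounds >= beta, mass_in_bounds


if __name__ == "__main__":
    # Hypothetical algorithm output: 1000 Monte Carlo RUL samples (in cycles).
    rng = np.random.default_rng(0)
    predicted_rul = rng.normal(loc=95.0, scale=8.0, size=1000)
    passed, mass = rul_accuracy_test(predicted_rul, true_rul=100.0)
    print(f"mass within bounds = {mass:.2f}, test passed = {passed}")
```

Because the test consumes the full set of samples rather than a point estimate, it rewards algorithms whose uncertainty estimates concentrate near the true RUL, which is the kind of behavior the proposed metrics are designed to capture.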

Motivation

For end-of-life predictions of critical systems, it is important to establish confidence in prognostic systems before incorporating their predictions into the decision-making process. A maintainer needs to know how good the prognostic estimates are before optimizing the maintenance schedule. Therefore, these algorithms should be tested rigorously and evaluated on a variety of performance measures before they can be certified. Furthermore, metrics help establish design requirements that must be met. In the absence of standardized metrics, it has been difficult to quantify acceptable performance limits and specify crisp, unambiguous requirements to designers. Standardized metrics will provide a common lexicon and a quantitative framework for requirements and specifications. There are a number of other reasons that make performance evaluation important. Three broad categories, namely scientific, administrative, and economic, have been identified that cover most reasons to carry out performance evaluations.

  • Performance evaluation allows different schemes to be compared numerically and provides an objective way to measure how changes in training, equipment, or prognostic models (algorithms) affect the quality of predictions. This provides a deeper understanding from the research point of view and yields valuable feedback for further improvements.
  • One can identify bottlenecks in the performance and guide research and development efforts in the required direction.
  • As these methods are further refined, quantitatively measuring improvement in predictions generates scores that can be used to justify research funding in areas where either PHM has not yet picked up or where better equipment and facilities are needed. These scores can also be translated into costs and benefits to calculate Return-on-Investment (ROI) type indexes that justify fielded applications.
