National Aeronautics and Space Administration

Algorithmic Performance Metrics

Generally speaking, algorithmic performance can be measured by evaluating the errors between predicted and actual RULs and computing metrics such as accuracy (bias), precision (spread), MSE, and MAPE. These metrics provide statistical information about variations in RUL distributions. However, prediction performance (e.g., accuracy and precision) tends to become more critical as time passes and the system nears its end-of-life (EoL). With EoL as a fixed reference point in time, predictions made at different times create several conceptual difficulties in computing an aggregate measure using these conventional metrics. Furthermore, from a prognostics viewpoint, a measure that encapsulates the notion of performance improvement with time is missing: prognostic estimates are continuously updated, with successive predictions occurring at early stages right after fault detection, at middle stages while the fault evolves, and at late stages nearing EoL. Depending on the application scenario, the criticality of predictions at different stages may be ranked differently. The efforts under this project have developed new performance evaluation metrics that incorporate these notions and provide a starting point for future discussion. A brief discussion of these metrics is provided here.

Prognostic Horizon (PH)

Prognostic Horizon (PH) is defined as the difference between the time index i at which the predictions first meet the specified performance criteria (based on data accumulated up to time index i) and the time index for EoL. The performance requirement may be specified in terms of an allowable error bound (α) around the true EoL. The choice of α depends on the estimated time required to take a corrective action. Depending on the situation, this corrective action may correspond to performing maintenance (e.g., in a manufacturing plant) or bringing the system to a safe operating mode (e.g., during operations in a combat zone).

PH = t_EoL − t_iα

where:

iα = min{ i ∈ ℓ : r*(i) − α·t_EoL ≤ r(i) ≤ r*(i) + α·t_EoL } is the first time index at which the prediction satisfies the α-bound, with ℓ the set of time indices at which predictions are made

r(i) is the RUL predicted at time index i, r*(i) is the true RUL at time index i, and t_EoL is the time index at end-of-life
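The prognostic horizon computation described above can be sketched in a few lines of code. The following is a minimal illustration with hypothetical data (the function name, time grid, and prediction values are all invented for this example); it assumes point RUL predictions and a known true EoL.

```python
# Sketch: prognostic horizon from point RUL predictions (hypothetical data).
# PH = t_EoL - t_{i_alpha}, where i_alpha is the first time index whose
# prediction error stays within +/- alpha * t_EoL of the true RUL.

def prognostic_horizon(times, predicted_rul, t_eol, alpha=0.05):
    """Return PH, or 0.0 if no prediction ever meets the alpha-bound."""
    for t, r in zip(times, predicted_rul):
        true_rul = t_eol - t              # ground-truth RUL at time t
        if abs(r - true_rul) <= alpha * t_eol:
            return t_eol - t              # horizon from first compliant prediction
    return 0.0

# Hypothetical example: EoL at t=100, predictions every 10 time units.
times = [10, 20, 30, 40, 50]
preds = [60, 85, 72, 61, 52]              # true RULs are 90, 80, 70, 60, 50
print(prognostic_horizon(times, preds, t_eol=100, alpha=0.05))  # -> 80
```

Note that a larger PH is better: it means the algorithm produced trustworthy predictions earlier, leaving more time for corrective action.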

Alpha-Lambda Performance

α-λ accuracy is defined as a binary metric that evaluates whether the prediction accuracy at a specific time instance tλ falls within specified α-bounds. Here tλ is defined by a fraction λ of the time between tP, when the first prediction is made, and the actual EoL time t_EoL. The α-bounds are expressed as a percentage of the actual RUL r(iλ) at tλ.

α-λ accuracy = 1 if (1 − α)·r*(iλ) ≤ r(iλ) ≤ (1 + α)·r*(iλ), and 0 otherwise

where:

tλ = tP + λ·(t_EoL − tP), with λ ∈ [0, 1] and tP the time index at which the first prediction is made

r*(iλ) and r(iλ) are, respectively, the true and predicted RUL at time tλ


This is a more stringent requirement than the prognostic horizon, as it requires predictions to stay within a cone of accuracy, i.e., within bounds that shrink as time passes.
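The α-λ accuracy check can be sketched as follows. This is a minimal illustration under stated assumptions (a point-prediction function, hypothetical parameter values); `predict` and the example predictor are invented names for this sketch.

```python
def alpha_lambda_accuracy(t_p, t_eol, lam, alpha, predict):
    """Binary metric: 1 if the prediction at t_lambda lies within
    +/- alpha of the true RUL; 0 otherwise. `predict` maps a time to a RUL."""
    t_lam = t_p + lam * (t_eol - t_p)     # evaluation time, fraction lam into the window
    true_rul = t_eol - t_lam              # ground-truth RUL at t_lambda
    r = predict(t_lam)
    return 1 if (1 - alpha) * true_rul <= r <= (1 + alpha) * true_rul else 0

# Hypothetical predictor that underestimates RUL by a constant 5 units.
pred = lambda t: (100 - t) - 5
print(alpha_lambda_accuracy(t_p=20, t_eol=100, lam=0.5, alpha=0.2, predict=pred))  # -> 1
```

With λ = 0.5 the metric asks whether the prediction halfway through the prediction window lies inside the shrinking cone; tightening α to 0.05 in the example above would flip the result to 0.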

Relative Accuracy (RA)

Relative Accuracy (RA) is defined as a measure of error in RUL prediction relative to the actual RUL at a specific time index specified by λ. An algorithm with higher relative accuracy is desirable. The range of values for RA is [0,1], where the perfect score is 1.

RAλ = 1 − |r*(iλ) − r(iλ)| / r*(iλ)

where:

r*(iλ) is the true RUL at time tλ

r(iλ) is the predicted RUL at time tλ
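As a quick illustration, RA is a one-line computation; the values below are hypothetical.

```python
def relative_accuracy(true_rul, predicted_rul):
    """RA = 1 - |r* - r| / r*; the perfect score is 1."""
    return 1.0 - abs(true_rul - predicted_rul) / true_rul

# Hypothetical example: true RUL of 40 units, predicted 35.
print(relative_accuracy(40, 35))  # -> 0.875
```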

RA conveys information at a specific time. It can be evaluated at multiple time instances before tλ to account for the general behavior of the algorithm over time. To aggregate these accuracy levels, Cumulative Relative Accuracy (CRA) can be defined as a normalized weighted sum of relative accuracies at specific time instances. In most cases it is desirable to assign higher weights to relative accuracies closer to EoL. In general, it is expected that λ is chosen such that it holds some physical significance, for instance a time index that provides a required prediction horizon, or the time required to apply a corrective action.

CRAλ = Σ_{i ∈ ℓλ} w(r(i))·RA(i) / Σ_{i ∈ ℓλ} w(r(i))

where:

ℓλ is the set of time indices at which predictions are made up to time tλ, and w(r(i)) is a weight factor that assigns greater weight to predictions made closer to EoL
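The weighted aggregation can be sketched as below. This is an illustrative implementation with hypothetical data; the linearly increasing weights are one possible choice for emphasizing predictions nearer EoL, not a prescribed scheme.

```python
def cumulative_relative_accuracy(true_ruls, predicted_ruls, weights=None):
    """Normalized weighted sum of per-prediction relative accuracies."""
    if weights is None:
        weights = [1.0] * len(true_ruls)  # uniform weighting by default
    ras = [1.0 - abs(t - p) / t for t, p in zip(true_ruls, predicted_ruls)]
    return sum(w * ra for w, ra in zip(weights, ras)) / sum(weights)

# Hypothetical run: four predictions approaching EoL, with linearly
# increasing weights so that late (near-EoL) accuracy counts more.
true_ruls = [80, 60, 40, 20]
predicted = [70, 55, 38, 20]
print(round(cumulative_relative_accuracy(true_ruls, predicted, [1, 2, 3, 4]), 3))  # -> 0.956
```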

Convergence

Convergence is a meta-metric that quantifies the rate at which any other metric (M), such as accuracy or precision, improves with time. It is a useful metric because a prognostics algorithm is expected to converge to the true value as more information accumulates over time, and faster convergence helps achieve high confidence while keeping the prediction horizon as large as possible. Convergence is quantified as the distance between the origin (tP, 0) and the centroid of the area under the curve of the metric; a shorter distance means faster convergence.

C_M = sqrt( (x_c − tP)² + y_c² )

where:

(x_c, y_c) is the centroid of the area under the curve M(i) between tP and t_EoL:

x_c = [ (1/2) Σ_{i=P}^{EoL−1} (t_{i+1}² − t_i²)·M(i) ] / [ Σ_{i=P}^{EoL−1} (t_{i+1} − t_i)·M(i) ]

y_c = [ (1/2) Σ_{i=P}^{EoL−1} (t_{i+1} − t_i)·M(i)² ] / [ Σ_{i=P}^{EoL−1} (t_{i+1} − t_i)·M(i) ]
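The centroid-distance computation can be sketched as follows, assuming M is sampled at discrete prediction times and treated as piecewise constant over each interval. The error curves in the example are hypothetical; the faster-decaying one should yield a smaller (better) convergence distance.

```python
import math

def convergence(times, metric_values, t_p):
    """Distance from (t_p, 0) to the centroid of the area under the
    metric curve M(i); a smaller distance indicates faster convergence."""
    num_x = num_y = den = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        m = metric_values[i]              # M treated as constant over [t_i, t_{i+1})
        num_x += 0.5 * (times[i + 1] ** 2 - times[i] ** 2) * m
        num_y += 0.5 * dt * m ** 2
        den += dt * m
    x_c, y_c = num_x / den, num_y / den
    return math.hypot(x_c - t_p, y_c)

# Hypothetical error curves over the same prediction times:
# curve A decays quickly, curve B slowly, so A should converge faster.
times = [0, 1, 2, 3, 4]
err_a = [8, 4, 2, 1, 1]
err_b = [8, 7, 6, 5, 4]
print(convergence(times, err_a, 0) < convergence(times, err_b, 0))  # -> True
```

Because the centroid shifts toward early times when the metric improves quickly, comparing C_M across algorithms on the same test case gives a simple ranking of how fast each one homes in on the true RUL.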
