Traditional damage propagation algorithms rely on physics-based failure mechanisms. Alternatively, data-driven approaches can be employed when sufficient test data are available to map out the damage space. In this investigation, we evaluate different algorithms for their suitability in these situations. We are interested in assessing the trade-offs that arise from the amount of data needed, the computational speed, the robustness of the algorithm to perturbations of the input space, the ability to support uncertainty management, and the accuracy of the predictions.
A core issue in making a meaningful prediction is to account for, and subsequently bound, the various kinds of uncertainty arising from different sources such as process noise, measurement noise, and inaccurate process models. Long-term prediction of the time to failure entails large-grain uncertainty that must be represented effectively and managed efficiently. For example, as more information about past damage propagation and about future use becomes available, means must be devised to narrow the uncertainty bounds. Prognostic performance metrics should take the width of the uncertainty bounds into account. It is therefore critical to choose methods that can address these issues in addition to providing damage trajectories. Not all data-driven techniques can be expected to handle these issues inherently; those that cannot must be combined with other methods suited for uncertainty management.
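To make the notion of uncertainty bounds that narrow with additional information concrete, the following is a minimal sketch, not a method from this work: a bootstrap Monte Carlo estimate of a remaining-useful-life (RUL) interval for a hypothetical linear degradation process with measurement noise. All parameters (degradation slope, failure threshold, noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical degradation: health decays linearly from 1.0 toward a
# failure threshold; observations carry additive measurement noise.
TRUE_SLOPE = -0.01   # assumed degradation rate per cycle
THRESHOLD = 0.2      # assumed damage (health) limit
t = np.arange(60)
health = 1.0 + TRUE_SLOPE * t + rng.normal(0.0, 0.02, t.size)

def rul_bounds(t_obs, h_obs, n_samples=2000):
    """Monte Carlo RUL interval: bootstrap-resample the observations,
    refit a linear trend each time, and extrapolate to the threshold."""
    ruls = []
    for _ in range(n_samples):
        idx = rng.integers(0, t_obs.size, t_obs.size)  # resample with replacement
        slope, intercept = np.polyfit(t_obs[idx], h_obs[idx], 1)
        if slope < 0:  # keep only physically meaningful (decaying) fits
            t_fail = (THRESHOLD - intercept) / slope
            ruls.append(t_fail - t_obs[-1])
    # 5th / 50th / 95th percentiles of the sampled RUL distribution
    return np.percentile(ruls, [5, 50, 95])

# Bounds should narrow as more of the damage trajectory is observed.
early = rul_bounds(t[:20], health[:20])
late = rul_bounds(t[:50], health[:50])
```

With only 20 observed cycles the fitted slope is poorly constrained and the RUL interval is wide; with 50 cycles the same procedure yields a markedly tighter interval, illustrating how accumulating evidence narrows the bounds.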
In this research, our intent is to explore various data-driven techniques for prognostics and compare their performance against other methodologies employed in the research community. We also seek to compare the techniques among themselves for their suitability in different circumstances, applying them to data sets with a variety of characteristics. In particular, we are interested in the trade-off between the ability to support uncertainty management and the accuracy of the predictions.
There are several different strategies for tackling the learning process; we start with two. The first maps the n-dimensional features into a one-dimensional damage (or health) index, on which straightforward (possibly non-linear) extrapolation up to a damage (or health) limit can be performed to calculate the remaining useful life. The second performs pattern matching directly on the n-dimensional features with the remaining life as the target. The figures below depict these two approaches.
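The two strategies can be sketched on synthetic run-to-failure data. This is an illustrative toy example, not the study's actual implementation: strategy 1 fuses the features into a 1-D index via the first principal direction and extrapolates it to an assumed limit, while strategy 2 matches the current feature vector against stored exemplars with known RUL (here, a single nearest neighbour). The feature dynamics, noise levels, and the choice of PCA and 1-NN are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic run-to-failure history: 3 features that all trend with an
# underlying damage variable (dynamics are purely illustrative).
n = 100
damage = np.linspace(0.0, 1.0, n)
features = np.column_stack([
    damage + rng.normal(0, 0.03, n),
    0.5 * damage + rng.normal(0, 0.03, n),
    -damage + rng.normal(0, 0.03, n),
])
rul = (n - 1) - np.arange(n)  # remaining useful life in cycles

# --- Strategy 1: map features to a 1-D health index, then extrapolate
# the index up to a damage limit. ---
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
index = centered @ vt[0]         # projection onto first principal direction
if index[-1] < index[0]:         # orient so the index grows with damage
    index = -index
limit = index[-1]                # treat the end-of-life value as the limit

t_obs = np.arange(60)            # observe the first 60 cycles
slope, intercept = np.polyfit(t_obs, index[:60], 1)
t_fail = (limit - intercept) / slope
rul_strategy1 = t_fail - t_obs[-1]

# --- Strategy 2: pattern-match the current n-dimensional feature
# vector against exemplars whose RUL is known (1-nearest neighbour). ---
query = features[59] + rng.normal(0, 0.03, 3)  # noisy current snapshot
dists = np.linalg.norm(features - query, axis=1)
rul_strategy2 = rul[np.argmin(dists)]
```

Both estimates should land near the true RUL of 40 cycles at cycle 59. The linear fit in strategy 1 is the simplest choice for this synthetic linear trend; real damage trajectories typically call for the non-linear extrapolation mentioned above.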