Results in DX Competition

We participated in a diagnostic challenge organized as part of the 20th International Workshop on Principles of Diagnosis (DX-09); see http://www.isy.liu.se/dx09/ for information about the workshop. The challenge was organized in two tracks, an Industrial Track and a Synthetic Track; we participated in the Industrial Track, which featured the ADAPT electrical power system (EPS) testbed.

Two categories of scenarios were featured in the Industrial Track, namely Tier 1 scenarios (a subset of ADAPT) and Tier 2 scenarios (complete ADAPT). Tier 1 scenarios were nominal or contained one fault. Tier 2 scenarios were nominal or contained single, double, or triple faults.

Faults inserted into ADAPT had the following characteristics (see the sketch after this list):

  • Faults were injected simultaneously or sequentially
  • Fault types were parametric (change in continuous parameter value) or discrete (change in system mode)
  • Faults were abrupt (immediate onset) and permanent
  • Faults affected components as well as sensors
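
To make these characteristics concrete, here is a minimal Python sketch of how a fault-injection scenario of this kind might be represented. The component names, field layout, and injection times are purely illustrative assumptions, not part of the competition framework.

    from dataclasses import dataclass
    from enum import Enum

    class FaultType(Enum):
        PARAMETRIC = "parametric"  # change in a continuous parameter value
        DISCRETE = "discrete"      # change in system mode

    @dataclass
    class Fault:
        """One injected fault. All competition faults were abrupt
        (immediate onset) and permanent, so only the target and the
        injection time vary per fault."""
        target: str            # component or sensor name (illustrative)
        fault_type: FaultType
        injection_time: float  # seconds from scenario start

    # A hypothetical Tier 2 double-fault scenario with sequential injection;
    # simultaneous faults would simply share an injection_time.
    scenario = [
        Fault("relay_EY160", FaultType.DISCRETE, injection_time=30.0),
        Fault("voltage_sensor_E265", FaultType.PARAMETRIC, injection_time=75.0),
    ]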

Using techniques discussed here, our ProADAPT team obtained the highest scores in both Tier 1 (among 9 international competitors) and Tier 2 (among 6 international competitors) of the Industrial Track of the DX-09 Diagnostic Challenge Competition. A key component of our ProDiagnose algorithm was a Bayesian network model of ADAPT, compiled into an arithmetic circuit that was then used for on-line diagnosis. For further information on the compilation to arithmetic circuits, please see http://reasoning.cs.ucla.edu/. For a high-level discussion of our approach, please see http://ti.arc.nasa.gov/project/pca/.
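
To illustrate the idea, here is a minimal sketch, in Python, of evaluating a compiled arithmetic circuit bottom-up. The toy two-variable network (one health variable, one sensor) and all of its probabilities are invented for illustration; the actual ADAPT model and the compiler at the UCLA link above are far more involved.

    # node: ("ind", var, value) | ("param", p) | ("+", children) | ("*", children)
    def evaluate(node, indicators):
        """Evaluate an arithmetic-circuit node bottom-up."""
        kind = node[0]
        if kind == "ind":                      # evidence indicator leaf
            return indicators[(node[1], node[2])]
        if kind == "param":                    # network parameter leaf
            return node[1]
        values = [evaluate(child, indicators) for child in node[1]]
        if kind == "+":
            return sum(values)
        result = 1.0                           # kind == "*"
        for v in values:
            result *= v
        return result

    # Toy model: P(H=healthy)=0.99; P(S=ok|healthy)=0.95; P(S=ok|faulty)=0.20.
    ac = ("+", [
        ("*", [("ind", "H", "healthy"), ("param", 0.99),
               ("+", [("*", [("ind", "S", "ok"),  ("param", 0.95)]),
                      ("*", [("ind", "S", "bad"), ("param", 0.05)])])]),
        ("*", [("ind", "H", "faulty"), ("param", 0.01),
               ("+", [("*", [("ind", "S", "ok"),  ("param", 0.20)]),
                      ("*", [("ind", "S", "bad"), ("param", 0.80)])])]),
    ])

    # Observe a bad sensor reading: clamp S, leave both H indicators at 1.
    evidence = {("S", "ok"): 0.0, ("S", "bad"): 1.0,
                ("H", "healthy"): 1.0, ("H", "faulty"): 1.0}
    p_evidence = evaluate(ac, evidence)                  # P(S=bad) = 0.0575

    # Clamp H=faulty as well for the joint, then normalize for the posterior.
    p_joint = evaluate(ac, {**evidence, ("H", "healthy"): 0.0})
    print(p_joint / p_evidence)                          # P(faulty|bad) ~= 0.139

Because the compiled circuit is evaluated with nothing but sums and products over a fixed structure, the cost of on-line inference is predictable, which is one attraction of doing the compilation off-line.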

The following eight metrics are used in the figure; a sketch of how the first four might be computed appears after the list:

  1. Detection Accuracy: The ratio of correctly classified experiments (scenarios) to the total number of experiments.
  2. Classification Errors: The Hamming distance between the true component-mode vector and the component-mode vector reported by the diagnostic algorithm (DA).
  3. False Negatives Rate: The fraction of faulty experiments in which the DA announced no fault.
  4. False Positives Rate: The fraction of experiments in which the DA announced a fault while the system was actually non-faulty, or announced a fault before one was injected.
  5. Mean CPU Time: Average CPU load during an experiment, averaged over all experiments.
  6. Mean Time To Detect: The period of time from the beginning of a fault injection to the moment of the first “high” detection signal.
  7. Mean Time To Isolate: The period of time from the beginning of a fault injection to the start of the last persistent “high” isolation signal.
  8. Mean Peak Memory Usage: The peak memory usage over the course of an experiment, averaged over all experiments.
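
As a concrete illustration, here is a minimal Python sketch of how metrics 1 through 4 above might be computed from per-experiment records. The record layout (ground-truth fault flag, announced-fault flag, mode vectors) is a hypothetical simplification, not the competition's actual result format.

    def hamming(true_modes, predicted_modes):
        """Metric 2: Hamming distance between true and reported mode vectors."""
        return sum(t != p for t, p in zip(true_modes, predicted_modes))

    def score(experiments):
        """Metrics 1, 3, and 4 from per-experiment booleans."""
        faulty  = [e for e in experiments if e["true_faulty"]]
        nominal = [e for e in experiments if not e["true_faulty"]]
        correct = sum(e["true_faulty"] == e["fault_announced"] for e in experiments)
        return {
            # Metric 1: correctly classified experiments / total experiments
            "detection_accuracy": correct / len(experiments),
            # Metric 3: faulty experiments in which no fault was announced
            "false_negatives_rate": sum(not e["fault_announced"] for e in faulty) / len(faulty),
            # Metric 4: nominal experiments in which a fault was announced
            # (the "announced too early" case is omitted in this sketch)
            "false_positives_rate": sum(e["fault_announced"] for e in nominal) / len(nominal),
        }

    experiments = [
        {"true_faulty": True,  "fault_announced": True},
        {"true_faulty": True,  "fault_announced": False},
        {"true_faulty": False, "fault_announced": False},
        {"true_faulty": False, "fault_announced": True},
    ]
    print(score(experiments))
    # {'detection_accuracy': 0.5, 'false_negatives_rate': 0.5, 'false_positives_rate': 0.5}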

For further details on the scores from the competition, please see http://www.dx-competition.org/ and, in particular, the file DXC_results.xls, which contains the experimental results for both the Industrial Track and the Synthetic Track.
