
Bio

Academic Background

  • M.S. in Aeronautical Engineering, Caltech, Pasadena, 1995
  • B.S.E. in Aerospace Engineering, University of Michigan, Ann Arbor, 1994

Experience/Past Projects

Intelligent Systems Division (July 2000-present)
  • Deputy Lead, Diagnostics and Prognostics Group
  • Associate Principal Investigator, IVHM Project, AvSP Program, ARMD, 2006-2008
  • Diagnostic algorithm benchmarking using Electrical Power System Testbed in ADAPT lab
  • Real-time simulation of fault detection, isolation, accommodation, and situational awareness of aircraft flight control system failures
  • Model-based reasoning of pressure delivery system of Hybrid Combustion Facility
Aeronautical Information Technologies Division (July 1999 – June 2000)
  • Aircraft and Engine Health Monitoring IPT Lead for an AV-8B Harrier flight test collecting IVHM data
Applied Aerodynamics Division (June 1995 – June 1999)
  • Assistant project director for a multi-million-dollar wind tunnel test program to study Reynolds number effects on the externally blown flaps of a cargo transport aircraft

Publications

Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed
Tolga Kurtoglu, Sriram Narasimhan, Scott Poll, David Garcia, Stephanie Wright

Published in: Proceedings of the Annual Conference of the Prognostics and Health Management Society 2009, San Diego, CA, Sept 27 - Oct 1, 2009.

Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support for systematic benchmarking of these algorithms continues to create barriers to the effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, a description of the faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
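As an illustration of the kind of per-scenario evaluation such a framework performs, the following minimal sketch computes a few common diagnostic metrics (detection latency, false alarms, isolation accuracy). The record layout, field names, and component name are hypothetical, not the actual framework API.

    def scenario_metrics(fault_time, detect_time, true_fault, isolated_fault):
        """Basic diagnostic metrics for a single fault scenario (illustrative)."""
        detected = detect_time is not None
        return {
            # Was an anomaly reported, and how long after fault injection?
            "detected": detected,
            "detection_latency": (detect_time - fault_time) if detected else None,
            # A false positive is an alarm raised before the fault was injected.
            "false_positive": detected and detect_time < fault_time,
            # Isolation accuracy: did the DA name the correct failed component?
            "correct_isolation": isolated_fault == true_fault,
        }

    # Hypothetical scenario: fault injected at t=120 s, detected 1.5 s later.
    print(scenario_metrics(fault_time=120.0, detect_time=121.5,
                           true_fault="relay EY244", isolated_fault="relay EY244"))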

Towards a Framework for Evaluating and Comparing Diagnosis Algorithms
Tolga Kurtoglu, Sriram Narasimhan, Scott Poll, David Garcia, Lukas Kuhn, Johan de Kleer, Arjan van Gemund, Alexander Feldman

Published in: The 20th International Workshop on Principles of Diagnosis (DX-09), Stockholm, Sweden, June 14-17, 2009.

Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and the various techniques within each approach) uses different representations of the knowledge required to perform the diagnosis. The sensor data is then combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results, and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect the resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.
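A minimal sketch of the run-time pattern described above: sensor samples are replayed to a diagnosis algorithm at successive time steps and its outputs are collected. The DA interface shown here is an assumption made for illustration, not the framework's actual API.

    class ThresholdDA:
        """Toy diagnosis algorithm: flags any sensor outside its nominal band."""
        def __init__(self, limits):
            self.limits = limits  # sensor name -> (low, high)

        def step(self, t, sample):
            for sensor, value in sample.items():
                low, high = self.limits[sensor]
                if not (low <= value <= high):
                    return (t, sensor, "out-of-range")
            return None

    def run_scenario(da, timestamped_samples):
        """Feed samples to the DA in time order and collect its diagnoses."""
        diagnoses = []
        for t, sample in timestamped_samples:
            result = da.step(t, sample)
            if result is not None:
                diagnoses.append(result)
        return diagnoses

    # Hypothetical sensor stream; the fault appears at t=2.0.
    da = ThresholdDA({"battery_voltage": (22.0, 26.0)})
    data = [(0.0, {"battery_voltage": 24.1}),
            (1.0, {"battery_voltage": 23.9}),
            (2.0, {"battery_voltage": 19.5})]
    print(run_scenario(da, data))  # [(2.0, 'battery_voltage', 'out-of-range')]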

First International Diagnosis Competition – DXC’09
Tolga Kurtoglu, Sriram Narasimhan, Scott Poll, David Garcia, Lukas Kuhn, Johan de Kleer, Arjan van Gemund, Alexander Feldman

Published in: The 20th International Workshop on Principles of Diagnosis (DX-09), Stockholm, Sweden, June 14-17, 2009.

A framework to compare and evaluate diagnosis algorithms (DAs) has been created jointly by NASA Ames Research Center and PARC. In this paper, we present the first concrete implementation of this framework as a competition called DXC’09. The goal of this competition was to evaluate and compare DAs on a common platform and to determine a winner based on diagnosis results. Twelve DAs (model-based and otherwise) competed in this first year of the competition across three tracks that included industrial and synthetic systems. Specifically, the participants provided algorithms that communicated with the run-time architecture to receive scenario data and return diagnostic results. These algorithms were run on extended scenario data sets (different from the sample set) to compute a set of pre-defined metrics. A ranking scheme based on weighted metrics was used to declare winners. This paper presents the systems used in DXC’09, a description of the faults and data sets, a listing of participating DAs, the metrics and results computed from running the DAs, and a brief analysis of the results.
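The ranking idea can be sketched in a few lines: normalize each metric to a common scale, weight it, and sum. The metric names and weights below are made up for illustration; they are not the actual DXC’09 weighting.

    def rank(das, weights):
        """Rank DAs by a weighted sum of metrics; higher scores rank first."""
        def score(metrics):
            return sum(weights[m] * metrics[m] for m in weights)
        return sorted(das, key=lambda name: score(das[name]), reverse=True)

    # Hypothetical metrics, each already normalized to [0, 1].
    weights = {"detection_accuracy": 0.4, "isolation_accuracy": 0.4, "timeliness": 0.2}
    das = {
        "DA-1": {"detection_accuracy": 0.95, "isolation_accuracy": 0.80, "timeliness": 0.70},
        "DA-2": {"detection_accuracy": 0.90, "isolation_accuracy": 0.90, "timeliness": 0.65},
    }
    print(rank(das, weights))  # ['DA-2', 'DA-1']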

Advanced Diagnostics and Prognostics Testbed
Scott Poll, Ann Patterson-Hine, Joe Camisa, David Garcia, David Hall, Charles Lee, Ole J. Mengshoel, Christian Neukom, David Nishikawa, John Ossenfort, Adam Sweet, Serge Yentus, Indranil Roychoudhury, Matthew Daigle, Gautam Biswas, Xenofon Koutsoukos

Published in: The 18th International Workshop on Principles of Diagnosis (DX-07), Pages 178-185, Nashville, TN, May 29-31, 2007.

Researchers in the diagnosis community have developed a number of promising techniques for system health management. However, realistic empirical evaluation and comparison of these approaches is often hampered by a lack of standard data sets and suitable testbeds. In this paper we describe the Advanced Diagnostics and Prognostics Testbed (ADAPT) at NASA Ames Research Center. The purpose of the testbed is to measure, evaluate, and mature diagnostic and prognostic health management technologies. This paper describes the testbed’s hardware, software architecture, and concept of operations. A simulation testbed that accompanies ADAPT and some of the diagnostic and decision support approaches being investigated are also discussed.

Inductive Learning Approaches for Improving Pilot Awareness of Aircraft Faults
Lilly Spirkovska, David L. Iverson, Scott Poll, Anna Pryor

Published in: Proceedings of Infotech@Aerospace 2005, Arlington, VA, Sept 26-29, 2005.

Neural network flight controllers are able to accommodate a variety of aircraft control surface faults without detectable degradation of aircraft handling qualities. Under some faults, however, the effective flight envelope is reduced; this can lead to unexpected behavior if a pilot performs an action that exceeds the remaining control authority of the damaged aircraft. The goal of our work is to increase the pilot’s situational awareness by informing him of the type of damage and the resulting reduction in flight envelope. Our methodology integrates three inductive learning systems with novel visualization techniques. One learning system, the Inductive Monitoring System (IMS), learns to detect when a simulation includes faulty controls, while the other two, the Inductive Classification System (INCLASS) and a multiple binary decision tree system (utilizing C4.5), determine the type of fault. In off-line training using only non-failure data, IMS constructs a characterization of nominal flight control performance based on control signals issued by the neural net flight controller. This characterization can be used to determine the degree of control augmentation required in the pitch, roll, and yaw command channels to counteract control surface failures. This derived information is typically sufficient to distinguish between the various control surface failures and is used to train both INCLASS and C4.5. Using data from failed control surface flight simulations, INCLASS and C4.5 independently discover and amplify features in IMS results that can be used to differentiate each distinct control surface failure situation. In real-time flight simulations, distinguishing features learned during training are used to classify control surface failures. Knowledge about the type of failure can be used by an additional automated system to alter its approach for planning tactical and strategic maneuvers. The knowledge can also be used directly to increase the pilot’s situational awareness and inform manual maneuver decisions. Our multi-modal display of this information provides speech output to issue control surface failure warnings over a lesser-used communication channel and provides graphical displays with pilot-selectable levels of detail to convey additional information about the failure. We also describe a potential presentation for flight envelope reduction that can be viewed separately or integrated with an existing attitude indicator instrument. Preliminary results suggest that the inductive approach is capable of detecting that a control surface has failed and determining the type of fault. Furthermore, preliminary evaluations suggest that the interface presents a concise summary of this information to the pilot.
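The core IMS idea, detecting deviation from a learned nominal characterization, can be sketched as a nearest-neighbor distance check. This simplification (plain Euclidean distance to nominal training vectors, a fixed threshold, made-up numbers) is for illustration only and omits the clustering the actual system performs.

    import math

    def nearest_distance(vector, nominal_vectors):
        """Euclidean distance from a sample to the closest nominal example."""
        return min(math.dist(vector, n) for n in nominal_vectors)

    def monitor(stream, nominal_vectors, threshold):
        """Yield (index, distance) for samples that deviate from nominal."""
        for i, v in enumerate(stream):
            d = nearest_distance(v, nominal_vectors)
            if d > threshold:
                yield i, d

    # Hypothetical pitch/roll/yaw command augmentation vectors.
    nominal = [(0.10, 0.00, 0.00), (0.00, 0.10, 0.00), (0.05, 0.05, 0.00)]
    stream = [(0.08, 0.02, 0.00),   # near nominal
              (0.90, 0.10, 0.40)]   # off-nominal, e.g. a surface failure
    print(list(monitor(stream, nominal, threshold=0.5)))  # flags sample 1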

System Modeling and Diagnostics for Liquefying-Fuel Hybrid Rockets
Scott Poll, David L. Iverson, Jeremy Ou, Dwight Sanderfer, Ann Patterson-Hine

Published in: NASA TM 2003-212270, June 2003.

A Hybrid Combustion Facility (HCF) was recently built at NASA Ames Research Center to study the combustion properties of a new fuel formulation that burns approximately three times faster than conventional hybrid fuels. Researchers at Ames working in the area of Integrated Vehicle Health Management (IVHM) recognized a good opportunity to apply IVHM techniques to a candidate technology for next-generation launch systems. Five tools were selected to examine various IVHM techniques for the HCF. Three of the tools, TEAMS (Testability Engineering and Maintenance System), L2 (Livingstone2), and RODON, are model-based reasoning (or diagnostic) systems. The two other tools in this study, ICS (Interval Constraint Simulator) and IMS (Inductive Monitoring System), do not attempt to isolate the cause of a failure but may be used for fault detection. Models of varying scope and completeness were created, both qualitative and quantitative. In each of the models, the structure and behavior of the physical system are captured. In the qualitative models, the temporal aspects of the system behavior and the abstraction of sensor data are handled outside of the model and require the development of additional code. The quantitative model also requires processing code, though it is less extensive. Examples of fault diagnoses are given.
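A rough sketch of interval-based fault detection in the spirit of the ICS tool mentioned above: each sensor reading is compared against an interval predicted by simulation, and a reading outside its interval signals a fault without isolating its cause. The sensor names and values are illustrative, not taken from the HCF models.

    def check_intervals(readings, predicted):
        """Return sensors whose readings fall outside their predicted intervals."""
        violations = {}
        for sensor, value in readings.items():
            low, high = predicted[sensor]
            if not (low <= value <= high):
                violations[sensor] = {"value": value, "expected": (low, high)}
        return violations

    # Hypothetical pressure-delivery-system predictions (psi) and readings.
    predicted = {"chamber_pressure": (280.0, 320.0), "feed_pressure": (400.0, 450.0)}
    readings = {"chamber_pressure": 305.0, "feed_pressure": 372.0}
    print(check_intervals(readings, predicted))  # flags feed_pressure only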

Contact

Aerospace Engineer

Intelligent Systems Division
NASA Ames Research Center
Mail Stop 269-1
Moffett Field, CA 94035

Phone: 650-604-2143
Fax: 650-604-3594
