National Aeronautics and Space Administration

SafeDNN


The SafeDNN project explores new techniques and tools to ensure that systems that use deep neural networks (DNNs) are safe, robust, and interpretable. Research directions we are pursuing in this project include: symbolic execution for DNN analysis; label-guided clustering to automatically identify input regions that are robust; parallel and compositional approaches to improve formal SMT-based verification; property inference and automated program repair for DNNs; adversarial training and detection; and probabilistic reasoning for DNNs.
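To illustrate the kind of guarantee such techniques target, here is a minimal sketch (not the project's actual tooling) of certifying local robustness of a tiny ReLU network by interval bound propagation; the network weights, input point, and perturbation radius are invented for the example:

```python
# Minimal sketch (illustrative only): interval bound propagation to
# certify local robustness of a tiny two-layer ReLU network.
# All weights and inputs below are made up for the example.

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_interval(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def certify_robust(x, eps, W1, b1, W2, b2, target):
    """True if every input within +/-eps of x is still classified as `target`."""
    lo = [v - eps for v in x]
    hi = [v + eps for v in x]
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = relu_interval(lo, hi)
    lo, hi = interval_affine(lo, hi, W2, b2)
    # Robust if the target logit's lower bound beats every other logit's upper bound.
    return all(lo[target] > hi[k] for k in range(len(lo)) if k != target)

# Toy 2-input, 2-class network.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[2.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
print(certify_robust([1.0, 0.0], 0.05, W1, b1, W2, b2, target=0))  # True
```

Because interval bounds are sound but not tight, a `False` answer here means "not certified," not necessarily "not robust"; the SMT-based approaches mentioned above aim for exact answers at higher cost.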

Key Benefits

  • Provide strong guarantees with respect to safety and robustness for DNNs, making them amenable for use in safety-critical domains (particularly autonomy).
  • Obtain compact, formal explanations of DNN behavior.
  • Improve testing, debugging and maintenance of DNNs.


We have applied our techniques to the analysis of deep neural networks designed to operate as controllers in the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). We have also studied image-classification networks (MNIST, CIFAR) and sentiment networks for text classification.
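For the image and text classifiers above, adversarial analysis asks whether a small input perturbation can flip the predicted label. The following toy sketch (not the project's method) finds such a perturbation for a hand-coded linear classifier by stepping along the sign of the loss gradient, similar in spirit to FGSM; the model, input, and step size are invented for illustration:

```python
# Toy sketch (illustrative only): an FGSM-style sign-step attack on a
# hand-coded linear classifier. Model and input are invented.

def predict(x, W, b):
    scores = [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]
    return max(range(len(scores)), key=scores.__getitem__)

def sign_step_attack(x, W, b, label, eps):
    """For a linear model the relevant gradient is available in closed form:
    the margin (competing score - true score) has gradient W[k] - W[label]."""
    # Pick the strongest competing class k.
    k = max((j for j in range(len(W)) if j != label),
            key=lambda j: sum(w * v for w, v in zip(W[j], x)) + b[j])
    grad = [wk - wl for wk, wl in zip(W[k], W[label])]
    # Move each coordinate by eps in the direction that raises the margin.
    return [v + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for v, g in zip(x, grad)]

W, b = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
x = [0.6, 0.4]                                   # classified as class 0
adv = sign_step_attack(x, W, b, label=0, eps=0.2)
print(predict(x, W, b), predict(adv, W, b))      # 0 1
```

Attacks like this only demonstrate non-robustness on the points they find; the verification techniques above aim at the complementary guarantee that no such perturbation exists.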


Publications

  • Symbolic Execution for Importance Analysis and Adversarial Generation in Neural Networks. The 30th International Symposium on Software Reliability Engineering (ISSRE 2019) (to appear)
  • Property Inference for Deep Neural Networks. The 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019) (to appear)
Project Members

    Corina Pasareanu
    Divya Gopinath


    Clark Barrett (Stanford)
    Hayes Converse (UT Austin)
    Burak Kadron (UC Santa Barbara)
    Guy Katz (Stanford)
    Sarfraz Khurshid (UT Austin)
    Ankur Taly (Google)
