Through her research, Kristin Y. Rozier identified subtle bugs in the industrial verification tool Spot. The tool occasionally produced faulty output when reasoning about larger specifications, of the size used to describe many real-world systems. The primary author of Spot fixed the tool accordingly, adopted the model-checking benchmarks Rozier advocated in the research paper that uncovered the problems, and integrated her code into the test suite for the newest version of Spot. Adoption of Rozier's benchmarks also exposed inefficiencies in the code that inhibited scalability. Using Rozier's test suite, the Spot team overhauled those inefficient parts, significantly speeding up the tool's specification translator and reducing its memory consumption; Spot's primary author reported order-of-magnitude improvements in both speed and memory use resulting from this study. The latest development snapshot of Spot, with these changes, can be downloaded here.
BACKGROUND: In her SPIN 2007 paper, “LTL Satisfiability Checking,” Rozier argued for a method of debugging specifications, proposed a new set of benchmarks for testing verification tools that reason about Linear Temporal Logic (LTL) specifications for model checking, and tested all such available tools, including Spot. Since Spot is a widely used tool, she forwarded the paper and a description of the bugs she found to the development team. The full set of test scripts used for the research paper was released online alongside the publication. The paper was selected as one of the five best papers of the conference and expanded for a journal special edition; the expanded version has been officially accepted to the STTT journal.
COLLABORATOR: This work was completed in collaboration with Moshe Y. Vardi of Rice University.
NASA PROGRAM FUNDING: This work was funded by the NGATS project.
Contact: Kristin Rozier