December 15, 2022
Taylor T. Johnson
The ongoing renaissance in artificial intelligence (AI) has led to the advent of data-driven machine learning (ML) methods deployed within components for sensing, actuation, and control in safety-critical cyber-physical systems (CPS). While such learning-enabled components (LECs) are enabling autonomy in systems like autonomous vehicles and robots, ensuring that these components operate reliably in all scenarios is extraordinarily challenging, as demonstrated in part by recent accidents involving semi-autonomous and autonomous CPS and by adversarial ML attacks. We will discuss formal methods for assuring specifications, primarily robustness and safety, in autonomous CPS and their subcomponents using our software tools NNV and Veritex, developed as part of an ongoing DARPA Assured Autonomy project. These tools have been evaluated in CPS development with multiple industry partners in the automotive, aerospace, and robotics domains, and enable formal analysis of neural networks and of their use in closed-loop systems. We will also discuss relevant ongoing community activities we help organize, such as the Verification of Neural Networks Competition (VNN-COMP), held the past few years with the International Conference on Computer-Aided Verification (CAV), and the AI and Neural Network Control Systems (AINNCS) category of the hybrid systems verification competition ARCH-COMP, held over the same period. We will conclude with a discussion of future directions in the broader safe and trustworthy AI domain, such as new projects on verifying neural networks used in medical imaging analysis.
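To give a concrete flavor of what "formally analyzing neural networks" for robustness means, the sketch below propagates an interval input set through a tiny ReLU network and checks whether a sound over-approximation of the output set satisfies a safety bound. This is a minimal toy illustration of set-based reachability, not the actual NNV or Veritex API; the weights, input radius, and output threshold are hypothetical.

```python
# Minimal sketch: interval-bound reachability for a tiny ReLU network.
# Illustrative only; not the NNV or Veritex API. The network weights,
# input set, and safety condition below are hypothetical.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through x -> Wx + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps box bounds elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# A hypothetical 2-2-1 network.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([-1.0])

# Input set: an L-infinity ball of radius 0.1 around a nominal input,
# e.g., a perturbed sensor reading.
x = np.array([0.3, 0.2]); eps = 0.1
lo, hi = x - eps, x + eps

lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)

# Hypothetical safety specification: the output must stay below 1.0 for
# every input in the ball. If the over-approximation already satisfies
# the bound, the property is verified for the whole input set.
print(f"output in [{lo[0]:.3f}, {hi[0]:.3f}]")
print("verified" if hi[0] < 1.0 else "inconclusive (bounds too coarse)")
```

Tools like NNV and Veritex use far tighter set representations than boxes (e.g., star sets) and handle closed-loop dynamics, but the underlying question is the same: does the reachable output set stay inside the safe region for every input in a given set?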