Part of the CLAS 2022
Caterina Urban (INRIA and École Normale Supérieure)
Formal methods can provide rigorous correctness guarantees for hardware and software systems. Thanks to the availability of mature tools, their use is well established in industry, in particular to check safety-critical applications as they undergo a stringent certification process. As machine learning becomes more and more popular, machine-learned components are now being considered for inclusion in critical systems. This raises the question of their safety and of their verification. Yet, established formal methods are limited to classic, i.e., non-machine-learned, software. Applying formal methods to verify systems that include machine learning components has only been considered recently and poses novel challenges in soundness, precision, and scalability.
In this lecture, we will provide an overview of the formal methods developed so far for machine learning, highlighting their strengths and limitations. The large majority of them verify trained feed-forward ReLU-activated neural networks and employ SMT, optimization, or abstract interpretation techniques. We will present several approaches through the lens of different robustness, safety, and fairness properties, with a focus on abstract interpretation-based techniques. We will also discuss formal methods for support vector machines and decision tree ensembles, as well as methods that target the training process of machine learning models. We will then conclude by offering perspectives for future research directions.
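To give a concrete flavor of the abstract interpretation approach mentioned above, the following is a minimal Python sketch (not part of the lecture materials; the network, weights, input, and perturbation radius are all hypothetical and chosen purely for illustration) of interval bound propagation through a tiny feed-forward ReLU network to certify local robustness around an input.

# Illustrative sketch: certifying local robustness of a tiny ReLU network
# with the interval abstract domain. All weights and values are made up.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through the affine map x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval bounds to interval bounds exactly."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# A toy 2-2-2 network (hypothetical weights).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0], [-1.0, 1.0]]); b2 = np.array([0.0, 0.0])

x = np.array([1.0, 0.5])    # concrete input
eps = 0.1                   # L-infinity perturbation radius
lo, hi = x - eps, x + eps   # input box

lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)

# Sound but incomplete check: the network is certified robust around x
# (w.r.t. class 0) if the lower bound of output 0 exceeds the upper bound
# of output 1 over the entire input box.
print("output bounds:", list(zip(lo, hi)))
print("certified robust:", lo[0] > hi[1])

The interval domain is cheap but coarse; tighter relational domains, such as zonotopes or the DeepPoly domain of [Singh19], refine these bounds and can certify larger perturbation radii at a higher analysis cost.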
[Urban21] Caterina Urban and Antoine Miné. A Review of Formal Methods applied to Machine Learning. https://arxiv.org/abs/2104.02466, 2021.
[Kurd03] Zeshan Kurd and Tim Kelly. Establishing Safety Criteria for Artificial Neural Networks. In KES, 2003.
[Li19] Jianlin Li, Jiangchao Liu, Pengfei Yang, Liqian Chen, Xiaowei Huang, and Lijun Zhang. Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification. In SAS, 2019.
[Singh19] Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin T. Vechev. An Abstract Domain for Certifying Neural Networks. In POPL, 2019.
[Mazzucato21] Denis Mazzucato and Caterina Urban. Reduced Products of Abstract Domains for Fairness Certification of Neural Networks. In SAS, 2021.
[Julian16] Kyle D. Julian, Jessica Lopez, Jeffrey S. Brush, Michael P. Owen, and Mykel J. Kochenderfer. Policy Compression for Aircraft Collision Avoidance Systems. In DASC, 2016.
[Katz17] Guy Katz, Clark W. Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. In CAV, 2017.
[Galhotra17] Sainyam Galhotra, Yuriy Brun, and Alexandra Meliou. Fairness Testing: Testing Software for Discrimination. In FSE, 2017.
[Urban20] Caterina Urban, Maria Christakis, Valentin Wüstholz, and Fuyuan Zhang. Perfectly Parallel Fairness Certification of Neural Networks. In OOPSLA, 2020.