Submitted: May 2022

Abstract

Neural networks are ubiquitous in our lives today, and we rely on their decisions, sometimes without even noticing. With their increasing deployment in safety-critical tasks, such as autonomous driving or medical procedures, formal guarantees on their quality and safety are required. In particular, their proven vulnerability to adversarial examples, which result from minimal perturbations added to an input and cause a misclassification, constitutes an immense safety threat. Such examples can mislead the system controlled by the network and lead to dangerous or even life-threatening situations. Consequently, considerable effort is devoted to increasing networks’ robustness against such attacks.

Local robustness denotes the absence of adversarial examples within a certain radius around a given input. In this thesis, we define global robustness as the fraction of locally robust inputs relative to the full set of possible inputs. We present algorithms that compute this global robustness and evaluate its suitability as a quality measure for neural networks. Moreover, we demonstrate how global robustness can improve the safety and trustworthiness of neural networks and how it can be used to determine whether a network is overfitting.
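
The following is a minimal formal sketch of these two notions; the symbols $N$, $X$, $\varepsilon$, and the choice of norm are illustrative assumptions rather than the notation used in the thesis. A network $N$ is locally $\varepsilon$-robust at an input $x$ if
\[
  \forall x' : \|x' - x\| \le \varepsilon \implies N(x') = N(x),
\]
and, over a set of possible inputs $X$, global robustness is then the fraction
\[
  R_{\mathrm{global}}(N, \varepsilon) = \frac{\left|\{\, x \in X : N \text{ is locally } \varepsilon\text{-robust at } x \,\}\right|}{|X|}.
\]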