Our Research

Speaker: Alhussein Fawzi, doctoral candidate at École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

Abstract: The robustness of classifiers to small perturbations of the data points is a highly desirable property when the classifier is deployed in real-world and possibly hostile environments. Yet despite achieving excellent performance on recent visual benchmarks, state-of-the-art classifiers are surprisingly unstable under small perturbations of the data.

In this talk, I will present a quantitative analysis of the robustness of state-of-the-art classifiers to a diverse set of perturbations, ranging from adversarial to random noise. In particular, I will show that there exist fundamental limits on the robustness of classifiers to adversarial perturbations, and that their robustness to random noise is governed by the geometry of the decision boundary. I will then derive precise bounds on the robustness of classifiers in terms of the curvature of the decision boundary. In the final part of the talk, I will present novel methods for quantitatively assessing the robustness of classifiers to geometric transformations of the data.
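To make the notion of adversarial robustness concrete, the sketch below (an illustration, not material from the talk) considers a linear binary classifier, where the smallest perturbation that changes the label has a closed form, and contrasts it with random noise of the same norm. All names and parameters in the code are assumptions chosen for the demo.

```python
# Illustrative sketch (not the speaker's method): for a linear binary
# classifier f(x) = w.x + b, the smallest label-changing perturbation is
# the orthogonal projection of x onto the decision boundary, with norm
# |f(x)| / ||w||. Random isotropic noise of the same norm rarely flips
# the label in high dimensions, hinting at the gap between adversarial
# and random robustness that the talk discusses.
import numpy as np

rng = np.random.default_rng(0)
d = 1000                     # input dimension (hypothetical)
w = rng.normal(size=d)       # weights of a hypothetical linear classifier
b = 0.0
x = rng.normal(size=d)       # a sample point

f_x = w @ x + b
r_adv = -f_x * w / np.linalg.norm(w) ** 2   # minimal adversarial perturbation
rho_adv = np.linalg.norm(r_adv)             # adversarial robustness at x

# Random perturbations of the same norm: count how often they flip the label.
trials, flips = 1000, 0
for _ in range(trials):
    noise = rng.normal(size=d)
    noise *= rho_adv / np.linalg.norm(noise)
    flips += int(np.sign(w @ (x + noise) + b) != np.sign(f_x))

print(f"adversarial robustness ||r||: {rho_adv:.3f}")
print(f"label flips under random noise of equal norm: {flips}/{trials}")
```

In this toy setting the geometry is trivial (a flat decision boundary); the bounds discussed in the talk concern the far richer case of curved decision boundaries of modern classifiers.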