Advocating for Multiple Defense Strategies against Adversarial Examples

Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne

It has been empirically observed that defense mechanisms designed to protect neural networks against $\ell_\infty$ adversarial examples offer poor performance against $\ell_2$ adversarial examples, and vice versa. In this paper, we conduct a geometrical analysis that validates this observation. We then provide a number of empirical insights to illustrate the effect of this phenomenon in practice, and review existing defense mechanisms that attempt to defend against multiple attacks by mixing defense strategies. Based on our numerical experiments, we discuss the relevance of this approach and state open questions for the adversarial examples community.
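The geometric tension between the two threat models can be illustrated with a minimal numpy sketch (not taken from the paper; the linear model and all variable names are illustrative assumptions). An $\ell_\infty$-bounded perturbation of radius $\epsilon$ moves each coordinate independently, so its $\ell_2$ norm grows like $\epsilon\sqrt{d}$ with the input dimension $d$, placing it far outside the $\ell_2$ ball of the same radius:

```python
import numpy as np

# Toy setting: treat w as the gradient of the loss w.r.t. the input
# (for a linear model this gradient is constant, which keeps the sketch simple).
rng = np.random.default_rng(0)
d = 100                      # input dimension
w = rng.normal(size=d)       # stand-in for the loss gradient at a clean input
eps = 0.1                    # perturbation budget

# l_inf-style step (FGSM-like): move every coordinate by eps in the sign direction.
delta_inf = eps * np.sign(w)

# l_2-style step: move eps along the normalized gradient direction.
delta_l2 = eps * w / np.linalg.norm(w)

# The two perturbations live in very different balls: the l_inf step has
# l_2 norm ~ eps * sqrt(d), far larger than eps when d is large.
print(np.linalg.norm(delta_inf))   # ~ eps * sqrt(d) = 1.0 here
print(np.linalg.norm(delta_l2))    # = eps = 0.1
print(np.max(np.abs(delta_l2)))    # each coordinate moves much less than eps
```

A defense calibrated to resist perturbations of $\ell_2$ norm $\epsilon$ therefore says little about perturbations of $\ell_\infty$ norm $\epsilon$, which is consistent with the empirical observation the abstract starts from.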
