Testing robustness against unforeseen adversaries

We’ve developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training. Our method yields a new metric, UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks.
Modern neural networks have achieved high accuracies on a wide range of benchmark tasks. However, they remain susceptible to adversarial examples: small but carefully crafted distortions of inputs created by adversaries to fool the networks. For example, the adversarial example with L∞ distortion below differs from the original image by at most 32 in each RGB pixel value; a human can still classify the changed image, but a standard neural network confidently misclassifies it.
Sample images (black swan) generated by adversarial attacks with different distortion types. Each distortion is optimized to fool the network.
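For readers who want to see what such a distortion budget looks like in code, here is a minimal sketch of an L∞ projected-gradient-descent attack in PyTorch. It assumes images scaled to [0, 1] and a differentiable classifier `model`; the step size and iteration count are illustrative choices, not the settings used to produce the images above.

```python
# Minimal sketch of an L-infinity PGD attack. The budget eps = 32/255 matches
# the "at most 32 per RGB pixel value" example above; other values are placeholders.
import torch
import torch.nn.functional as F

def pgd_linf(model, images, labels, eps=32/255, step=4/255, iters=20):
    adv = images.clone().detach()
    adv += torch.empty_like(adv).uniform_(-eps, eps)        # random start inside the ball
    adv = adv.clamp(0, 1)
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + step * grad.sign()                   # ascend the loss
            adv = images + (adv - images).clamp(-eps, eps)   # project back to the L-inf ball
            adv = adv.clamp(0, 1).detach()
    return adv
```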
AI systems deployed in the wild will need to be robust to unforeseen attacks, but most defenses so far have focused on specific known attack types. The field has made progress in hardening models against such attacks; however, robustness against one type of distortion often does not transfer to robustness against attacks unforeseen by the designers of the model. Consequently, evaluating against only a single distortion type can give a false sense of security about a model in the wild, which may remain vulnerable to unforeseen attacks such as fake eyeglasses and adversarial stickers.
An example where adversarial robustness does not transfer well. Hardening a model against Distortion A initially increases robustness against both Distortions A and B. However, as we harden further, adversarial robustness is harmed for Distortion B but remains about the same for Distortion A (A = L∞, B = L1).C
We’ve created a three-step method to assess how well a model performs against a new held-out type of distortion. Our method evaluates against diverse unforeseen attacks at a wide range of distortion sizes and compares the results to a strong defense which has knowledge of the distortion type. It also yields a new metric, UAR, which assesses the adversarial robustness of models against unforeseen distortion types.
Typical papers on adversarial defense evaluate only against the widely studied L∞ or L2 distortion types. However, we show that evaluating against the L∞ and L2 distortions gives very similar information about adversarial robustness.A We conclude that evaluating against Lp distortions is insufficient to predict adversarial robustness against other distortion types. Instead, we suggest that researchers evaluate models against adversarial distortions that are not similar to those used in training. We offer the L1, L∞-JPEG, Elastic, and Fog attacks as a starting point. We provide implementations, pre-trained models, and calibrations for a variety of attacks in our code package.
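As a rough sketch of what evaluating a single model against several attack types might look like (this is not the API of our code package; the attack registry and helper below are hypothetical), one can loop a fixed classifier over a set of attack callables and record accuracy under each:

```python
# Hypothetical evaluation loop: measure one model's accuracy under several
# unforeseen attack types. `attacks` maps names to callables with the same
# signature as pgd_linf above; the entries shown are placeholders.
import torch

def accuracy_under_attack(model, loader, attack):
    device = next(model.parameters()).device
    correct = total = 0
    model.eval()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = attack(model, images, labels)            # attack runs with gradients enabled
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

attacks = {"l_inf_pgd": pgd_linf}  # add e.g. L1, L-inf-JPEG, Elastic, Fog attacks here
# for name, attack in attacks.items():
#     print(name, accuracy_under_attack(model, val_loader, attack))
```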
We found that considering too narrow a range of distortion sizes can reverse qualitative conclusions about adversarial robustness. To pick a range, we examine images produced by an attack at different distortion sizes and choose the largest range for which the images are still human-recognizable. However, as shown below, an attack with a large distortion budget only uses the full budget against strong defenses. We recommend choosing a calibrated range of distortion sizes by evaluating against adversarially trained models; we also provide calibrated sizes for a wide variety of attacks in our code package.
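One illustrative way to perform such a calibration, assuming you already have an adversarially trained reference model and the helpers sketched above, is to sweep candidate distortion sizes and keep those at which the reference model retains some accuracy; the accuracy floor and geometric grid below are placeholder choices, not the procedure from the paper.

```python
# Illustrative calibration sweep: evaluate an adversarially trained reference
# model at geometrically spaced distortion sizes and keep the sizes where its
# accuracy stays above a floor. The floor and the grid are placeholder choices.
def calibrate_sizes(adv_trained_model, loader, attack_fn, candidate_eps, floor=0.05):
    kept = []
    for eps in candidate_eps:
        acc = accuracy_under_attack(
            adv_trained_model, loader,
            lambda m, x, y: attack_fn(m, x, y, eps=eps))
        if acc >= floor:   # beyond this point the reference defense collapses
            kept.append((eps, acc))
    return kept

# candidate_eps = [1/255 * 2**k for k in range(7)]   # e.g. 1/255 ... 64/255
# sizes = calibrate_sizes(adv_model, val_loader, pgd_linf, candidate_eps)
```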
Sample images (espresso maker) of the same strong attack applied to different defense models. Attacking stronger defenses causes greater visual distortion.

Panels, left to right: Undefended · Weakly defended · Strongly defended.
We developed a new metric, UAR, which compares the robustness of a model against an attack to adversarial training against that attack. Adversarial training is a strong defense that uses knowledge of an adversary by training on adversarially attacked images.B A UAR score near 100 against an unforeseen adversarial attack implies performance comparable to a defense with prior knowledge of the attack, making this a challenging objective.
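Concretely, and following the description in footnote B, a UAR score can be sketched as the accuracy of the candidate defense summed over the calibrated distortion sizes, normalized by the corresponding sum for an adversarially trained model; see our paper for the precise definition.

```python
# Sketch of the UAR computation described above: accuracies of the candidate
# defense, summed over calibrated distortion sizes, normalized by the same
# sum for a model adversarially trained against the attack.
def uar(defense_accs, adv_trained_accs):
    """Both arguments: accuracy per calibrated distortion size, in the same order."""
    assert len(defense_accs) == len(adv_trained_accs)
    return 100.0 * sum(defense_accs) / sum(adv_trained_accs)

# A defense matching adversarial training at every size scores about 100.
# print(uar([0.71, 0.55, 0.30], [0.74, 0.60, 0.35]))  # ≈ 92
```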
We computed the UAR scores of adversarially trained models for several different distortion types. As shown below, the robustness conferred by adversarial training does not transfer broadly to unforeseen distortions. In fact, robustness against a known distortion can reduce robustness against unforeseen distortions. These results underscore the need for evaluation against significantly more diverse attacks like Elastic, Fog, Gabor, and Snow.
We hope that researchers developing adversarially robust models will use our methodology to evaluate against a more diverse set of unforeseen attacks. Our code includes a suite of attacks, adversarially trained models, and calibrations that allow UAR to be easily computed.
If you’re interested in topics in AI Safety, consider applying to work at OpenAI.
Footnotes
- A
The Lp distortion allows an image, viewed as a vector x of pixel values, to be distorted to a vector x′ such that x and x′ are close in the Lp norm.
- B
To compute UAR, we average the accuracy of the defense across multiple distortion sizes and normalize by the performance of an adversarially trained model; a precise definition is in our paper.
- C
The accuracy of the model against Distortion A peaks at a hardening level of 8 because that is sufficient to defend against the attack and further hardening hurts clean accuracy; see full paper for details.