
Testing robustness against unforeseen adversaries


We’ve developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training. Our method yields a new metric, UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks.

Modern neural networks have achieved high accuracies on a wide range of benchmark tasks. However, they remain susceptible to adversarial examples, small but carefully crafted distortions of inputs created by adversaries to fool the networks. For example, the adversarial example with L∞ distortion below differs from the original image by at most 32 in each RGB pixel value; a human can still classify the changed image, but it is confidently misclassified by a standard neural network.

Sample images (black swan) generated by adversarial attacks with different distortion types. Each distortion is optimized to fool the network.

L∞: each pixel value may be changed by at most 32.

L1: the vector of pixel values may be changed by a vector bounded in L1-norm.

L2-JPEG: the image is transformed to a JPEG-compressed vector, which is distorted in L2-norm.

Elastic: a flow along a local vector field is applied to the image.

Fog: a fog-like distortion of bounded magnitude is applied to the image.

Gabor: Gabor noise is added to adversarially texture the image.

Snow: snowflakes are adversarially constructed to partially obscure the image.
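To make this concrete, here is a minimal sketch of how an L∞-bounded adversarial example like the one above can be generated with projected gradient descent (PGD) in PyTorch. The model, images, and labels are assumed to be supplied by the reader, and this is an illustration rather than the exact attack implementation used in our experiments.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, images, labels, eps=32/255, step_size=4/255, steps=20):
    """Craft L-infinity-bounded adversarial examples with projected gradient descent.

    Each pixel of the returned images differs from the original by at most `eps`;
    with images scaled to [0, 1], eps=32/255 corresponds to a change of at most
    32 in 8-bit pixel values.
    """
    adv = images.clone().detach()
    # Random start inside the L-infinity ball.
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Ascend the loss, then project back onto the L-infinity ball.
            adv = adv + step_size * grad.sign()
            adv = images + (adv - images).clamp(-eps, eps)
            adv = adv.clamp(0, 1)
    return adv.detach()
```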

AI systems deployed in the wild will need to be robust to unforeseen attacks, but most defenses so far have focused on specific known attack types. The field has made progress in hardening models against such attacks; however, robustness against one type of distortion often does not transfer to robustness against attacks the model's designers did not foresee. Consequently, evaluating against only a single distortion type can give a false sense of security about a model deployed in the wild, which may remain vulnerable to unforeseen attacks such as fake eyeglasses and adversarial stickers.

A graph showing negative transfer between Distortion A and Distortion B

An example where adversarial robustness does not transfer well. Hardening a model against Distortion A initially increases robustness against both Distortions A and B. However, as we harden further, adversarial robustness against Distortion B is harmed while robustness against Distortion A remains about the same (here A = L∞ and B = L1). The accuracy of the model against Distortion A peaks at a hardening level of 8 because that level is sufficient to defend against the attack, and further hardening hurts clean accuracy; see the full paper for details.

Method principles

We’ve created a three-step method to assess how well a model performs against a new held-out type of distortion. Our method evaluates against diverse unforeseen attacks at a wide range of distortion sizes and compares the results to a strong defense which has knowledge of the distortion type. It also yields a new metric, UAR, which assesses the adversarial robustness of models against unforeseen distortion types.

1. Evaluate against diverse unforeseen distortion types

Typical papers on adversarial defense evaluate only against the widely studied L∞ or L2 distortion types. However, we show that evaluating against these two distortion types gives very similar information about adversarial robustness.[^footnote-robustness] We conclude that evaluating against Lp distortions is insufficient to predict adversarial robustness against other distortion types. Instead, we suggest that researchers evaluate models against adversarial distortions that are not similar to those used in training. We offer the L1, L2-JPEG, Elastic, and Fog attacks as a starting point. We provide implementations, pre-trained models, and calibrations for a variety of attacks in our code package.
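As a sketch of what such an evaluation can look like in practice, the loop below measures accuracy under any attack callable. The attack names in the usage comment are placeholders, not the interface of our released code package.

```python
import torch

def accuracy_under_attack(model, loader, attack, device="cuda"):
    """Top-1 accuracy (%) of `model` on inputs distorted by `attack`.

    `attack(model, images, labels)` is any callable that returns adversarial
    images of the same shape, e.g. the PGD sketch above or implementations of
    the L1, L2-JPEG, Elastic, or Fog attacks.
    """
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = attack(model, images, labels)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total

# Hypothetical usage: evaluate one model against a held-out suite of attacks.
# suite = {"L1": l1_attack, "L2-JPEG": jpeg_attack,
#          "Elastic": elastic_attack, "Fog": fog_attack}
# results = {name: accuracy_under_attack(model, val_loader, atk)
#            for name, atk in suite.items()}
```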

2. Choose a wide range of distortion sizes calibrated against strong models

We found that considering too narrow a range of distortion sizes can reverse qualitative conclusions about adversarial robustness. To pick a range, we examine images produced by an attack at different distortion sizes and choose the largest range for which the images remain human-recognizable. However, as shown below, an attack with a large distortion budget only uses its full budget against strong defenses. We therefore recommend choosing a calibrated range of distortion sizes by evaluating against adversarially trained models (we also provide calibrated sizes for a wide variety of attacks in our code package).

Sample images (espresso maker) of the same strong attack applied to different defense models. Attacking stronger defenses causes greater visual distortion.

Panels: Undefended, Weakly defended, Strongly defended.
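To illustrate the calibration idea, the sketch below sweeps a geometrically spaced grid of candidate sizes against an adversarially trained model. The helper `eval_adv_trained` and the specific thresholds are hypothetical; this is a simplified illustration of the idea rather than the exact procedure from the paper.

```python
import numpy as np

def calibrate_sizes(eval_adv_trained, eps_min, eps_max,
                    num_candidates=12, floor=25.0, num_sizes=6):
    """Illustrative calibration of distortion sizes for one attack.

    `eval_adv_trained(eps)` returns the accuracy (%) at distortion size `eps`
    of a model adversarially trained against this attack. We sweep a
    geometrically spaced grid and keep the range over which this strong
    defense still achieves at least `floor` accuracy; larger sizes no longer
    distinguish between defenses.
    """
    candidates = np.geomspace(eps_min, eps_max, num_candidates)
    accs = np.array([eval_adv_trained(eps) for eps in candidates])
    usable = candidates[accs >= floor]
    if usable.size == 0:  # even the smallest size breaks the strong defense
        usable = candidates[:1]
    # Geometrically spaced sizes spanning the usable range.
    return np.geomspace(usable.min(), usable.max(), num_sizes)
```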

3. Benchmark adversarial robustness against adversarially trained models

We developed a new metric, UAR, which compares the robustness of a model against an attack to adversarial training against that attack. Adversarial training is a strong defense that uses knowledge of an adversary by training on adversarially attacked images.[^footnote-uar] A UAR score near 100 against an unforeseen adversarial attack implies performance comparable to a defense with prior knowledge of the attack, making this a challenging objective.
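Roughly, UAR is the ratio of the evaluated model's accuracy under the attack to that of the adversarially trained reference model, aggregated over the calibrated distortion sizes. A minimal sketch, assuming those accuracies have already been measured:

```python
def uar(model_accs, adv_trained_accs):
    """UAR of one model against one attack.

    `model_accs`: accuracies (%) of the evaluated model at each calibrated
    distortion size of the attack.
    `adv_trained_accs`: accuracies at the same sizes of a model adversarially
    trained against that attack (the reference defense).
    A score near 100 means the model matches a defense that knew the attack
    in advance.
    """
    assert len(model_accs) == len(adv_trained_accs)
    return 100.0 * sum(model_accs) / sum(adv_trained_accs)

# Example: a model that nearly matches adversarial training at every size.
print(uar([82.1, 74.5, 60.3], [84.0, 76.2, 63.1]))  # ≈ 97.1
```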

We computed the UAR scores of adversarially trained models for several different distortion types. As shown below, the robustness conferred by adversarial training does not transfer broadly to unforeseen distortions. In fact, robustness against a known distortion can reduce robustness against unforeseen distortions. These results underscore the need for evaluation against significantly more diverse attacks like Elastic, Fog, Gabor, and Snow.

A table of UAR scores for adversarially trained models

Next steps

We hope that researchers developing adversarially robust models will use our methodology to evaluate against a more diverse set of unforeseen attacks. Our code includes a suite of attacks, adversarially trained models, and calibrations which allow UAR to be easily computed.

If you’re interested in topics in AI Safety, consider applying to work at OpenAI.