Robust Adversarial Inputs
We’ve created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like.
This innocuous kitten photo, printed on a standard color printer, fools the classifier into thinking it's a monitor or desktop computer regardless of how it's zoomed or rotated. We expect further parameter tuning would also remove any human-visible artifacts.

Out-of-the-box adversarial examples do fail under image transformations. Below, we show the same cat picture, adversarially perturbed to be incorrectly classified as a desktop computer by Inception v3 trained on ImageNet. A zoom of as little as 1.002 causes the classification probability for the correct label "tabby cat" to override the adversarial label "desktop computer".

However, we’d suspected that active effort could produce a robust adversarial example, as adversarial examples have been shown to transfer to the physical world.

Scale-invariant adversarial examples

Adversarial examples can be created with an optimization method called projected gradient descent, which searches for a small perturbation to the image that arbitrarily fools the classifier.
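
The following is a minimal sketch of targeted projected gradient descent, written in PyTorch purely for illustration; it is not the implementation behind these results. The names `model`, `x` (a 1×C×H×W image tensor with values in [0, 1]), `target_class`, and the hyperparameters `eps`, `step`, and `iters` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_targeted(model, x, target_class, eps=8 / 255, step=1 / 255, iters=40):
    """Find a small L-infinity perturbation that pushes `x` toward `target_class`."""
    target = torch.tensor([target_class], device=x.device)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        logits = model(x + delta)
        # Targeted attack: minimize cross-entropy toward the adversarial label.
        loss = F.cross_entropy(logits, target)
        loss.backward()
        with torch.no_grad():
            # Gradient descent on the targeted loss, then project back into
            # the eps-ball and the valid pixel range.
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0, 1) - x)
        delta.grad.zero_()
    return (x + delta).detach()
```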

Instead of optimizing for finding an input that’s adversarial from a single viewpoint, we optimize over a large ensemble of stochastic classifiers that randomly rescale the input before classifying it. Optimizing against such an ensemble produces robust adversarial examples that are scale-invariant.
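
A minimal sketch of that ensemble idea, under the same assumptions as the PGD sketch above: each step samples several zoom factors (simulated here by a differentiable center-crop-and-resize), averages the targeted loss over the zoomed copies, and updates the perturbation against that average. The zoom range and sample count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def random_zoom(img, max_zoom=1.3):
    """Simulate a random zoom-in by center-cropping and resizing back."""
    z = 1.0 + torch.rand(1).item() * (max_zoom - 1.0)
    h, w = img.shape[-2:]
    ch, cw = int(h / z), int(w / z)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[..., top:top + ch, left:left + cw]
    return F.interpolate(crop, size=(h, w), mode="bilinear", align_corners=False)

def pgd_scale_ensemble(model, x, target_class, eps=8 / 255, step=1 / 255,
                       iters=200, samples=8, max_zoom=1.3):
    """PGD against an ensemble of randomly zoomed copies of the input."""
    target = torch.tensor([target_class], device=x.device)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = 0.0
        for _ in range(samples):
            # Average the targeted loss over random zooms of the perturbed image.
            loss = loss + F.cross_entropy(model(random_zoom(x + delta)), target)
        (loss / samples).backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0, 1) - x)
        delta.grad.zero_()
    return (x + delta).detach()
```

Because every sampled zoom is differentiable, the gradient of the averaged loss points toward a perturbation that works across the whole range of scales rather than at a single one.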

A scale-invariant adversarial example.

Even when we restrict ourselves to only modifying pixels corresponding to the cat, we can create a single perturbed image that is simultaneously adversarial at all desired scales.
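
One way to express that restriction, as a sketch: assume a binary mask tensor `cat_mask` (broadcastable to the image, 1 on cat pixels, 0 elsewhere; how the mask is obtained is not shown here), and replace `x + delta` in the loops above with the masked version so only the cat's pixels are ever modified.

```python
def apply_masked_perturbation(x, delta, cat_mask):
    """Add the perturbation only where the mask is 1 (the cat region)."""
    return (x + cat_mask * delta).clamp(0, 1)
```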

Transformation-invariant adversarial examples

By adding random rotations, translations, scales, noise, and mean shifts to our training perturbations, the same technique produces a single input that remains adversarial under any of these transformations.
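
A minimal sketch of sampling one such random transformation (rotation, translation, scale, additive noise, and a mean shift), which could be dropped into the ensemble loop above in place of the zoom. The implementation below uses PyTorch's affine grid sampling, and all ranges are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def random_transform(img, max_rot_deg=15.0, max_shift=0.1,
                     scale_range=(0.9, 1.4), noise_std=0.05, shift_std=0.05):
    """Apply a randomly sampled, differentiable rotation/translation/scale,
    then add Gaussian noise and a random per-image mean shift."""
    n = img.shape[0]
    angle = (torch.rand(n) * 2 - 1) * math.radians(max_rot_deg)
    scale = torch.empty(n).uniform_(*scale_range)
    tx = (torch.rand(n) * 2 - 1) * max_shift
    ty = (torch.rand(n) * 2 - 1) * max_shift
    cos, sin = torch.cos(angle) / scale, torch.sin(angle) / scale
    # Affine matrix (output-to-input sampling) combining rotation, scale,
    # and translation for each image in the batch.
    theta = torch.stack([
        torch.stack([cos, -sin, tx], dim=1),
        torch.stack([sin, cos, ty], dim=1),
    ], dim=1)  # shape (n, 2, 3)
    grid = F.affine_grid(theta.to(img.device), img.shape, align_corners=False)
    out = F.grid_sample(img, grid, mode="bilinear", align_corners=False)
    # Additive Gaussian noise and a random per-image mean shift.
    out = out + noise_std * torch.randn_like(out)
    out = out + shift_std * torch.randn(n, 1, 1, 1, device=out.device)
    return out.clamp(0, 1)
```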

A transformation-invariant adversarial example. Note that it is visibly more perturbed than its scale-invariant cousin. This might be fundamental: it’s intuitively plausible that small adversarial perturbations are harder to find the more transformations an example must be invariant to.

Our transformations are sampled randomly at test time, demonstrating that our example is invariant to the whole distribution of transformations.