
Zoom In/Out Adversarial Attack!

Adversarial Perturbation

3 main points
✔️ A method that deceives DNNs by zooming in and out, without changing the features of the object
✔️ The only adversarial attack method that adds no adversarial perturbation at all
✔️ Provides guidelines on how to defend against the proposed attack through adversarial training

Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs
written by Chengyin Hu, Weiwen Shi
(Submitted on 23 Jun 2022)
Comments: Published on arXiv.

Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)

code: 

The images used in this article are from the paper, the introductory slides, or were created based on them.

Research Background and Overview

Attacks that deceive DNNs are called adversarial attacks. Most adversarial attacks fool a classifier by adding perturbations to the image that are imperceptible to humans. In physical scenes, however, the image fed to the classifier is the one captured by a camera, and such subtle noise is often not captured by the camera at all.

Physical adversarial attacks face the following challenges:

  1. Small digital errors are difficult to capture with a camera after printing
  2. Adversarial perturbations are difficult to print perfectly (printing loss)
  3. It is hard to balance the robustness and the concealment of an attack

Based on these challenges, the authors propose a novel physical adversarial attack called Adversarial Zoom Lens (AdvZL). Because it introduces no physical perturbation at all, it fundamentally avoids the difficulties of physical adversarial attacks listed above.

The figure above is a schematic of the proposed attack in a physical environment. By zooming the automatic zoom lens on a self-driving car's camera in or out as the car passes a road sign, the sign can fool an advanced DNN.

Contributions of the Authors

The authors' main contributions can be summarized in three points:

  1. Proposed AdvZL, a technique that realizes physical adversarial attacks without any physical perturbation by manipulating a zoom lens
  2. Built a dataset to verify that zoomed-in images can fool DNNs
  3. Validated AdvZL through both digital and physical experiments

Proposed Method

Dataset Generation (Attack in the Digital Environment)

To zoom in, an N-pixel-wide border is cropped from the outer frame of the image and the remaining region is resized back to the original size, as shown in the figure below.

The authors' dataset of attack samples is derived from ImageNet and is called ImageNet-ZOOM IN (ImageNet-ZI). For each ImageNet image, ten zoom levels are generated by setting N from 6 to 60 pixels in 6-pixel steps.
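As a concrete illustration, here is a minimal sketch of the crop-and-resize zoom-in operation described above, written with Pillow. The function name `zoom_in` and the example file path are my own and not from the paper.

```python
from PIL import Image

def zoom_in(image: Image.Image, n_pixels: int) -> Image.Image:
    """Crop an n_pixels-wide border from the image, then resize the remaining
    region back to the original resolution (zoom-in without any perturbation)."""
    w, h = image.size
    box = (n_pixels, n_pixels, w - n_pixels, h - n_pixels)  # left, top, right, bottom
    return image.crop(box).resize((w, h), Image.BILINEAR)

# The zoom levels described in the paper: N from 6 to 60 pixels in 6-pixel steps.
image = Image.open("example.jpg")  # hypothetical input image
zoomed_variants = [zoom_in(image, n) for n in range(6, 61, 6)]
```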

Zoom Lens Attack (Attack in the Physical Environment)

In the digital environment, the effectiveness of the proposed method is verified by having DNNs classify ImageNet-ZI. In the physical environment, a zoom lens is used to zoom in and out on the photographed object.

Adversarial samples in the physical environment are generated by the following procedure:

  1. Image X is scaled up or down to different degrees, and the scaled image with the lowest confidence score of the classifier f for the true label Y is taken as the candidate adversarial sample
  2. If the candidate adversarial sample fools the classifier, it is output as the adversarial sample

Because whether the classifier is fooled depends on its decision threshold, this two-step procedure is used to generate adversarial samples.
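The two-step selection above can be sketched as follows, assuming a PyTorch classifier, standard ImageNet preprocessing, and the `zoom_in` helper from the previous sketch. The function name `zoom_attack` is mine, and the exact zoom schedule used in the physical experiments may differ.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Assumed ImageNet-style preprocessing; the paper does not spell out the exact pipeline.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def zoom_attack(image, true_label, model, zoom_pixels=range(6, 61, 6)):
    """Pick the zoom level with the lowest confidence for the true label;
    return it as an adversarial sample only if the prediction actually changes."""
    best_conf, best_img = float("inf"), None
    with torch.no_grad():
        for n in zoom_pixels:
            candidate = zoom_in(image, n)  # crop-and-resize zoom from the earlier sketch
            logits = model(preprocess(candidate).unsqueeze(0))
            conf = F.softmax(logits, dim=1)[0, true_label].item()
            if conf < best_conf:
                best_conf, best_img = conf, candidate
        pred = model(preprocess(best_img).unsqueeze(0)).argmax(dim=1).item()
    return best_img if pred != true_label else None  # None means the attack failed
```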

Experiments

Evaluation of AdvZL

We validate the effectiveness of AdvZL in the digital environment on the ImageNet-ZI dataset, which contains 500,000 adversarial samples. The table below shows the classification accuracy of several DNNs on ImageNet-ZI.

The table shows that the accuracy of every DNN decreases as the zoom factor increases. In other words, although the semantic features of the image remain unchanged, the attack on the DNN becomes stronger the further the image is zoomed in. This also reveals a shortcoming of DNNs: these classifiers are trained on photos taken at a particular distance, and since zooming in on an image can be regarded as reducing the distance between the photographer and the object, the classifiers are prone to incorrect decisions when that distance changes.
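To make the evaluation protocol concrete, a minimal sketch of measuring top-1 accuracy at each zoom level might look like this. The pretrained ResNet-50 merely stands in for the DNNs in the paper's table, and `val_samples` is a hypothetical list of (image, label) pairs; `zoom_in` and `preprocess` come from the sketches above.

```python
import torch
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1").eval()  # stand-in classifier

def accuracy_at_zoom(samples, n_pixels):
    """Top-1 accuracy after applying the crop-and-resize zoom of n_pixels."""
    correct = 0
    with torch.no_grad():
        for image, label in samples:
            pred = model(preprocess(zoom_in(image, n_pixels)).unsqueeze(0)).argmax(dim=1).item()
            correct += int(pred == label)
    return correct / len(samples)

# Accuracy is expected to drop as N (the zoom strength) grows.
for n in range(6, 61, 6):
    print(n, accuracy_at_zoom(val_samples, n))  # val_samples: hypothetical (PIL image, label) pairs
```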

Next, we look at the evaluation in the physical environment. The figure below shows examples of adversarial samples in the physical environment and the corresponding predictions.

As the figure shows, the adversarial samples fool the classifier when the image is zoomed in. For example, when a road sign is magnified with a cell phone camera at 1.3 times the focal length, the classifier misclassifies it as a traffic signal.

Discussion

Model Attention

Using CAM, we examined the model's attention maps for the adversarial samples. The results are shown in the figure below.

As the figure shows, the model's attention gradually weakens as the image is zoomed in further.
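The paper reports CAM visualizations; as a rough idea of how such attention maps can be produced, here is a minimal Grad-CAM-style sketch for a torchvision ResNet-50. The choice of model, layer, and CAM variant is my assumption, not necessarily what the authors used.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1").eval()
features, grads = {}, {}

def fwd_hook(module, inp, out):
    features["value"] = out.detach()          # feature maps of the last conv block

def bwd_hook(module, grad_in, grad_out):
    grads["value"] = grad_out[0].detach()     # gradients w.r.t. those feature maps

layer = model.layer4[-1]
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

def attention_map(x, class_idx):
    """x: preprocessed tensor of shape (1, 3, 224, 224). Returns a [0, 1] heat map."""
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["value"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * features["value"]).sum(dim=1))    # weighted sum of feature maps
    cam = cam / (cam.max() + 1e-8)                            # normalize to [0, 1]
    return F.interpolate(cam.unsqueeze(0), size=x.shape[-2:], mode="bilinear")[0, 0]
```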

Defense

For adversarial training, adversarial samples generated by AdvZL are added during training to improve the robustness of the model. The classification results are shown in the figure below.

Yellow shows the results without adversarial training and green shows the results with adversarial training. The figure shows that adversarial training greatly improves the accuracy of the model regardless of the zoom factor.
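A minimal sketch of the kind of adversarial training described above might look as follows: each batch is simply augmented with AdvZL-style zoomed copies, reusing `zoom_in` and `preprocess` from the earlier sketches. The training-loop details are my assumption rather than the paper's exact recipe.

```python
import random
import torch

def adversarial_training_step(model, optimizer, criterion, pil_images, labels):
    """One training step in which the batch is augmented with zoom-in adversarial samples."""
    zoomed = [zoom_in(img, random.choice(range(6, 61, 6))) for img in pil_images]
    batch = torch.stack([preprocess(img) for img in list(pil_images) + zoomed])
    targets = torch.cat([labels, labels])  # zooming does not change the true label
    optimizer.zero_grad()
    loss = criterion(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```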

Summary

In this paper, the authors proposed AdvZL, a new physical adversarial attack method for generating adversarial samples. Generating adversarial samples without any adversarial perturbation is novel, and I found this research direction interesting.
