
Searching for Network Architectures Robust to Adversarial Examples

3 main points
✔️ Investigate network architectures that are robust against adversarial attacks with Neural Architecture Search
✔️ Discover a family of robust architectures (RobNets)
✔️ Improve robustness to both white-box and black-box attacks with a small number of parameters

When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks
written by Minghao Guo, Yuzhe Yang, Rui Xu, Ziwei Liu, Dahua Lin
(Submitted on Nov 2019 (v1), last revised 26 Mar 2020 (this version, v3))
Comments: Accepted by CVPR 2020.
Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)

code:  

Outline of Research

Recent advances in adversarial attacks have revealed the intrinsic vulnerability of modern DNNs. Since then, efforts have been made to increase the robustness of DNNs through special learning algorithms and loss functions. In this work, we investigate, from an architectural perspective, which patterns of network architecture are robust against adversarial attacks, using Neural Architecture Search as the search method. The investigation of robust network architectures revealed the following:

  1. Densely connected patterns improve robustness
  2. When the computational budget is limited, adding convolutions to direct edges is effective
  3. The FSP (Flow of Solution Procedure) matrix is a good indicator of a network's robustness

Based on these findings, the authors discovered a family of robust architectures, RobNets, which significantly improves adversarial accuracy against both white-box and black-box attacks, even with a small number of parameters.

Related Research

Adversarial Attacks and Countermeasures

An adversarial attack misleads a model's output by applying small, deliberately crafted perturbations to the data fed into the model.

A well-known countermeasure against adversarial attacks is Adversarial Training, which improves the robustness of a model by training it not only on normal data but also on adversarial samples. For more details, please see this article.
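As a concrete illustration, here is a minimal adversarial-training sketch in PyTorch. It uses the one-step FGSM attack for brevity (the paper's experiments use the stronger, iterated PGD attack), and `model`, `optimizer`, and the perturbation budget `eps` are placeholder assumptions rather than the authors' actual settings.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    """One-step adversarial example (FGSM); the paper itself uses iterated PGD."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    # Perturb in the direction that increases the loss, then clamp to valid range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    """Train on adversarial samples instead of (or in addition to) clean ones."""
    model.train()
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```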

Neural Architecture Search (NAS)

Neural Architecture Search is a method for automatically searching for the structure of a neural network. It can be hard to distinguish NAS from automated hyperparameter tuning tools, which are also widely used, but NAS additionally determines the structure of the model itself (the connections between layers, and so on). Roughly speaking: automated hyperparameter tuning + automated determination of model structure = NAS.

One-shot NAS

There are three main issues to be considered in NAS.

  1. Setting up the search space
  2. Setting the search method
  3. Setting the performance estimation method

As for (1), the search space: if it is too wide, the search takes a long time, and if it is too narrow, the designer's bias in defining it strongly affects the performance of the resulting model. As for (3), the performance estimation method: evaluating every model explored by NAS in the usual way (training and evaluating each model as it is generated) takes far too long, so various methods have been devised to estimate performance without doing so. One of them is One-shot NAS, which is used in this paper.

One-shot NAS explores the network structure by extracting a part of a very large network (called a supernet); the extracted part is called a subnetwork. If the supernet is trained first, the subnetworks can be left untrained or merely fine-tuned, which reduces the amount of training required for each candidate and speeds up the search.

Moreover, it is known that the performance of a subnetwork extracted from a trained supernet correlates strongly with the performance of the same subnetwork trained from scratch, so this shortcut poses no problem for the purpose of finding good structures.
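The following toy sketch illustrates the one-shot idea: a small weight-shared supernet is trained while randomly sampling which paths are active, and candidate subnetworks are then scored using the shared weights, without retraining. The model, data, and sampling scheme are all illustrative stand-ins, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySupernet(nn.Module):
    """Weight-shared supernet with four parallel candidate paths."""
    def __init__(self):
        super().__init__()
        self.paths = nn.ModuleList([nn.Linear(8, 8) for _ in range(4)])
        self.head = nn.Linear(8, 2)

    def forward(self, x, mask):
        # mask: binary tensor of length 4 selecting which paths are active.
        h = sum(m * path(x) for m, path in zip(mask, self.paths))
        return self.head(h)

net = TinySupernet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))  # toy data

# Phase 1: train the supernet, sampling a random subnetwork at every step.
for _ in range(100):
    mask = torch.randint(0, 2, (4,)).float()
    loss = F.cross_entropy(net(x, mask), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: rank candidate subnetworks with the shared weights, no retraining.
candidates = [torch.randint(0, 2, (4,)).float() for _ in range(8)]
scores = {tuple(m.tolist()): F.cross_entropy(net(x, m), y).item() for m in candidates}
best = min(scores, key=scores.get)  # lowest loss = most promising subnetwork
```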

Robust Neural Architecture Search

In this section, we describe the architecture exploration and evaluation methods used in this paper. We also explain how the authors came to discover RobNets.

How to explore the architecture

One-shot NAS is used to explore the architecture, as described above.

(a) in the above figure is a schematic diagram of a supernet. This supernet includes ResNet, as shown in (b), and DenseNet, as shown in (c). The connectivity of the nodes in the supernet is represented by the variable $\alpha$: if $\alpha$ is 1, the corresponding edge is connected, and if $\alpha$ is 0, it is not. Extracting a subnetwork from the supernet is therefore equivalent to choosing $\vec{\alpha}$, the vector collecting all the $\alpha$ values.
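A minimal sketch of this representation, assuming a cell in which each candidate edge holds a convolution gated by a binary $\alpha$ (this is an illustration, not the authors' code):

```python
import torch
import torch.nn as nn

class SupernetCell(nn.Module):
    """Toy cell: alpha["i->j"] = 1 keeps edge i->j, 0 drops it.
    A subnetwork is just a choice of the binary vector alpha."""
    def __init__(self, channels, num_nodes=4):
        super().__init__()
        # One candidate conv per ordered node pair (i -> j), i < j.
        self.pairs = [(i, j) for j in range(1, num_nodes) for i in range(j)]
        self.edges = nn.ModuleDict({
            f"{i}->{j}": nn.Conv2d(channels, channels, 3, padding=1)
            for i, j in self.pairs
        })

    def forward(self, x, alpha):
        # alpha: dict mapping "i->j" to 0 or 1; node 0 is the cell input.
        nodes = [x]
        num_nodes = max(j for _, j in self.pairs) + 1
        for j in range(1, num_nodes):
            inputs = [self.edges[f"{i}->{j}"](nodes[i])
                      for i, jj in self.pairs if jj == j and alpha[f"{i}->{j}"]]
            nodes.append(sum(inputs) if inputs else torch.zeros_like(x))
        return nodes[-1]

cell = SupernetCell(channels=16)
alpha = {key: torch.randint(0, 2, (1,)).item() for key in cell.edges}
out = cell(torch.randn(2, 16, 8, 8), alpha)  # forward pass of one sampled subnet
```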

Robustness evaluation

We consider the accuracy on adversarial samples (the adversarial accuracy) as the measure of a network's robustness.
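Concretely, this measure can be sketched as accuracy under a PGD attack, which is the attack used in the paper; the `eps`, step size, and iteration count below are illustrative defaults, not necessarily the paper's settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Projected gradient descent attack under an L-infinity budget eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(iters):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + step * x_adv.grad.sign()
            # Project back into the eps-ball around x and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_accuracy(model, loader):
    """Robustness measure: classification accuracy on PGD adversarial samples."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        preds = model(pgd_attack(model, x, y)).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total
```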

How RobNets was discovered

The authors explored the architecture of robust networks in the above setting.

Analysis of cell-based architecture

In NAS, in order to search automatically for special connections such as the skip connections in ResNet, an architecture is defined for each unit called a cell, and the search is carried out over combinations of cells; this both speeds up the search and automates the discovery of such special connections. In this section, we show the results of the cell-based search, in which the same architecture is shared among the different cells.

As shown in (a), accuracy on adversarial samples is higher when the subnetwork extracted from the supernet is fine-tuned by adversarial training than when it is used as-is, so it is better to fine-tune the subnetworks by adversarial training. As shown in (b), while most architectures achieve relatively high robustness, a considerable number do not. The authors therefore investigated whether networks with high robustness share a common feature.

Of the 1000 extracted architectures, the authors labeled the top 300 with 1 and the bottom 300 with -1, and visualized a low-dimensional embedding of $\vec{\alpha}$ using t-SNE. The result is shown in (a) above. Since a clear pattern separates the top 300 from the bottom 300, $\vec{\alpha}$ has a significant impact on the robustness of the network; in other words, the architecture itself clearly affects robustness.
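The visualization step can be reproduced along these lines; the architecture vectors below are synthetic stand-ins, since the real $\vec{\alpha}$ data comes from the 1000 evaluated subnetworks.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
alpha_top = rng.integers(0, 2, size=(300, 60))     # stand-in for robust alphas
alpha_bottom = rng.integers(0, 2, size=(300, 60))  # stand-in for fragile alphas
X = np.vstack([alpha_top, alpha_bottom]).astype(float)
labels = np.array([1] * 300 + [-1] * 300)

# Embed the binary architecture vectors into 2D and color by robustness label.
emb = TSNE(n_components=2, random_state=0).fit_transform(X)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="coolwarm", s=8)
plt.title("t-SNE of architecture vectors (1 = robust, -1 = non-robust)")
plt.show()
```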

Based on this result, in order to investigate which connections matter in a robust network, the authors built a classifier that predicts whether an architecture is robust, taking the architecture parameters as input. Connections corresponding to large classifier weights are considered important. The result is shown in (b) above: almost all the weights are positive, indicating a strong correlation between the density of an architecture and its accuracy on adversarial samples.
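A sketch of this analysis with a linear classifier; the paper's exact classifier is not specified here, so the linear SVM and the synthetic inputs below are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(600, 60)).astype(float)  # stand-in alpha vectors
y = np.array([1] * 300 + [-1] * 300)                  # top 300 vs bottom 300

clf = LinearSVC(C=1.0).fit(X, y)
weights = clf.coef_.ravel()
# Edges whose weights are largest are the ones the classifier relies on most.
print("most important edges:", np.argsort(weights)[::-1][:5])
```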

To investigate the relationship between architecture density and adversarial accuracy in a bit more detail, we performed a correlation analysis. The density of an architecture, $D$, is defined as the number of connected edges relative to the total number of possible edges in the architecture:

$$D = \frac{|E_{\text{connected}}|}{|E|} = \frac{\sum_{i,j,k}\alpha^{(i,j)}_{k}}{|E|}$$
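In code, this density is simply the mean of the binary gate vector; a minimal sketch following the $\alpha$ representation above:

```python
import numpy as np

def architecture_density(alpha: np.ndarray) -> float:
    """alpha: binary vector, one entry per candidate edge in the supernet."""
    return alpha.sum() / alpha.size  # |E_connected| / |E|

alpha = np.array([1, 0, 1, 1, 0, 1])
print(architecture_density(alpha))  # 4 / 6 ≈ 0.67
```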

The result of the correlation analysis is as follows.

We can see that as the density of architectures increases, the accuracy against adversarial samples tends to increase. Therefore, we believe that densely connected architectures can increase the robustness of the network.

Network structure when computational resources are limited

Existing research has shown that increasing the number of parameters in a network improves its robustness. Therefore, the authors also investigated the robustness of the architecture when the number of parameters is fixed.

Looking at (a) in the above figure, adversarial accuracy steadily improves as the number of convolution operations increases, and convolutions on direct edges contribute more to adversarial accuracy than convolutions on skip edges. The authors therefore tested the effect of direct-edge convolutions under three computational budgets (small, medium, and large). The result is shown in (b) above: the effect of direct-edge convolutions is especially large when the computational budget is small.

From the above, adding convolution operations to direct edges is an effective way to improve the robustness of the model when computational resources are limited.

Investigation of a larger search space

Up to this point, every cell has shared a common architecture. The authors investigated what happens when this constraint is relaxed so that every cell in the network can have a different architecture, and what a good robustness indicator would be in this cell-free setting.

In the cell-free setting, the size of the search space explodes. To deal with this, the authors propose Feature Flow Guided Search. Instead of focusing on the final output of the network, it considers the flow of features through the intermediate cells. Specifically, for each cell, the Gram matrix between the cell's input and output feature maps is computed (the result is called the FSP matrix), and the distance between the FSP matrices of adversarial and normal samples is measured.
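A sketch of the FSP matrix and the corresponding distance, assuming each cell's input and output feature maps share the same spatial size (tensor shapes are illustrative):

```python
import torch

def fsp_matrix(f_in, f_out):
    """Gram matrix between feature maps.
    f_in: (N, C1, H, W), f_out: (N, C2, H, W) -> (N, C1, C2).
    Assumes f_in and f_out have the same spatial size."""
    n, c1, h, w = f_in.shape
    c2 = f_out.shape[1]
    a = f_in.reshape(n, c1, h * w)
    b = f_out.reshape(n, c2, h * w)
    return a @ b.transpose(1, 2) / (h * w)

def fsp_loss(f_in_clean, f_out_clean, f_in_adv, f_out_adv):
    """Mean squared distance between clean and adversarial FSP matrices."""
    g_clean = fsp_matrix(f_in_clean, f_out_clean)
    g_adv = fsp_matrix(f_in_adv, f_out_adv)
    return ((g_clean - g_adv) ** 2).mean()
```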

The following figure plots, for each cell, the relationship between the distance of the FSP matrices (the FSP matrix loss) and the gap between accuracy on normal data and accuracy on adversarial samples. The FSP matrix loss is positively correlated with this gap: a high FSP matrix loss means low adversarial accuracy. Therefore, before fine-tuning the subnetworks extracted from the supernet, the FSP matrix loss is computed and candidates above a threshold are discarded. Truncating candidates that are unlikely to perform well keeps the enlarged search space manageable.
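Putting the pieces together, the guided search can be sketched as a simple filter over sampled candidates, reusing `fsp_loss` from the sketch above; `candidates` and the threshold are hypothetical stand-ins.

```python
def candidate_fsp_score(cell_feature_pairs):
    """Sum of FSP losses over all cells; lower suggests a more robust candidate.
    Each element is (f_in_clean, f_out_clean, f_in_adv, f_out_adv) for one cell."""
    return sum(fsp_loss(*pair) for pair in cell_feature_pairs)

def filter_candidates(candidates, threshold):
    """candidates: list of (alpha, cell_feature_pairs). Keep only architectures
    below the FSP-loss threshold; only the survivors are fine-tuned."""
    return [alpha for alpha, feats in candidates
            if float(candidate_fsp_score(feats)) < threshold]
```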

Experiments

We compare the robust architectures obtained by the above method with other well-known architectures, assuming a white-box attack. CIFAR-10 is used as the dataset, and PGD is used to generate the adversarial samples for adversarial training. For a fair comparison, adversarial training is performed on every model, and accuracy is compared against adversarial samples generated by various attack methods. The results are shown in the table below.

Each RobNet model name is suffixed with its parameter budget: small, medium, or large. The results show that the RobNet family outperforms the other architectures against almost all attack methods while maintaining relatively high accuracy on normal data. In particular, against samples generated by PGD, which is known as one of the strongest attacks, RobNet improves accuracy by up to 5.1% simply by changing the architecture.

The effect of Feature Flow Guided Search can also be seen in this table. In the bottom row, RobNet-free denotes the architecture found under the cell-free condition. RobNet-free outperforms RobNet-large-v2 in adversarial accuracy for all attacks despite having about six times fewer parameters, confirming the effectiveness of the FSP-guided search.

The results for a black-box attack are shown below. RobNet maintains higher accuracy on adversarial samples in the black-box setting as well.

We have also conducted experiments on datasets other than CIFAR-10. The results are shown below.

This result shows that RobNet is effective for datasets other than CIFAR-10.

Summary

The authors proposed a method for discovering robust architectures, using one-shot NAS to understand the impact of network architecture on robustness to adversarial attacks. This research led to the discovery of the RobNet family, a family of architectures that are robust against adversarial samples. The relationship between network architecture and robustness to adversarial samples is likely to be the subject of further research.
