
Simple But Powerful! Here Comes AugMix, A Data Augmentation Method That Improves Both Model Generalizability And Uncertainty Estimates!

Data Augmentation

3 main points
✔️ Proposed "AugMix", a data augmentation method that improves model robustness and uncertainty estimates
✔️ AugMix generates training data that maintains diversity without deviating too far from the original image by taking a convex combination of multiple augmented images, and uses this data for training to improve model robustness
✔️ Confirmed to reduce the classification error on corrupted images from 28.4% to 12.4%, 54.3% to 37.8%, and 57.2% to 37.4% for CIFAR-10, CIFAR-100, and ImageNet, respectively.

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
written by Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan
(Submitted on 5 Dec 2019 (v1), last revised 17 Feb 2020 (this version, v2))
Comments: Published on arXiv.

Subjects: Machine Learning (stat.ML); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)

code:  

The images used in this article are from the paper, the introductory slides, or were created based on them.

Introduction

Since deep learning methods have demonstrated high performance in image recognition, many models using convolutional neural networks have been proposed for image classification.
Many of these models achieve high accuracy when the training and test data follow the same distribution. In practice, however, the two distributions do not always match, and when they differ, the model's accuracy can drop significantly.

While the classification error of a state-of-the-art model in prior work was 22% on the regular ImageNet dataset, it was reported to rise to 64% on ImageNet-C (see Figure 1), a dataset consisting of ImageNet images corrupted by various types of processing.
In addition, including corrupted images in the training data does allow the model to classify those corruption types correctly at test time, but it can only handle the corruptions seen during training, not unknown ones.
These results suggest that models do not generalize to images whose distribution differs from the training data, and few techniques currently exist to improve model robustness in such cases.

Therefore, in this paper, "AugMix" is proposed as a method to improve model robustness and uncertainty estimates.
AugMix is a data augmentation method that generates data that maintains diversity while not deviating too far from the original image, by taking a convex combination of multiple augmented images. Experiments have shown that AugMix significantly improves model robustness and uncertainty estimates, in some cases closing the gap between previous methods and the best possible performance by more than half.

Figure 1: Example images from the ImageNet-C dataset

The following sections briefly explain data augmentation as background, then describe the proposed method, the experimental setup, and the results.

What is Data Augmentation?

Data augmentation is a technique that increases the amount of training data by applying transformations to images, as shown in Figure 2.
There are countless types of transformations beyond those shown in Figure 2, such as Erasing, which fills in a portion of the image, and GaussianBlur, which applies a Gaussian filter.
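
As a rough illustration of how such transformations are typically applied on the fly during training, the following is a minimal sketch using torchvision; the article does not specify a library, and the parameters below are purely illustrative.

```python
import torchvision.transforms as T

# Illustrative training-time augmentation pipeline (parameters are arbitrary):
# random flips and rotations, a Gaussian filter, and an Erasing-style occlusion.
train_transform = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomRotation(degrees=15),
    T.GaussianBlur(kernel_size=3, sigma=(0.1, 2.0)),  # Gaussian filter
    T.ToTensor(),
    T.RandomErasing(p=0.5),  # fill in a random patch of the image
])
```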

Models trained with these various augmentation methods are generally known to generalize better than models trained only on the original data, but augmentation can sometimes degrade performance or introduce unexpected biases.
Therefore, to improve a model's generalization, effective augmentation methods have had to be found manually for each domain.

Figure 2: Data Augmentation Overview

Proposed Method

From this point on, we will discuss AugMix, the method proposed in this paper.

AugMix is a data augmentation technique that improves model robustness and uncertainty estimates by utilizing simple augmentation operations.

Prior research has shown that deep models can classify corrupted images at test time only for the corruption types included in the training data.
Therefore, AugMix mixes arbitrary augmentation operations to generate a variety of transformed images, which are important for robustness, and uses them for training.
Previous methods have attempted to increase diversity by directly chaining multiple augmentation operations, but doing so can quickly degrade the image, as depicted in Figure 3; the transformed images can drift too far from the original data, producing contradictory features and insufficient learning.
The proposed method therefore combines multiple augmentation operations via a convex combination, producing composite images that minimize degradation while preserving diversity.


Figure 3: Example of failure due to a combination of multiple data augmentations

AugMix

A schematic diagram of the AugMix operation is shown in Figure 4.

In combining multiple data augmentation operations, AugMix draws each operation from the set used in AutoAugment.
To ensure that the proposed method generalizes to corrupted images, operations that overlap with the corruptions in ImageNet-C are excluded.
Specifically, the contrast, color, brightness, sharpness, cutout, image noise, and image blur operations are removed so that the augmentation operations in the proposed method remain disjoint from the corruptions in ImageNet-C.
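
For illustration, such an operation pool might be written with PIL as follows. This is only a sketch: the operations shown are a subset of the AutoAugment-style set described above, and the fixed magnitudes are placeholder values.

```python
from PIL import ImageOps

# A possible pool of augmentation operations along the lines described above:
# histogram- and geometry-based AutoAugment transforms, excluding anything that
# overlaps with ImageNet-C corruptions (contrast, color, brightness, sharpness,
# cutout, noise, blur). Fixed magnitudes are placeholders; in practice they
# would be randomized. Shear and translate operations are omitted for brevity
# but belong to the same pool.
AUGMENTATION_POOL = [
    ImageOps.autocontrast,                    # stretch the intensity histogram
    ImageOps.equalize,                        # equalize the histogram
    lambda img: ImageOps.posterize(img, 4),   # reduce bits per channel
    lambda img: ImageOps.solarize(img, 128),  # invert bright pixels
    lambda img: img.rotate(15),               # rotate by a fixed angle
]
```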

Under these conditions, randomly selected augmentation operations are applied to the original image and combined.
Such a sequence is called an Augment Chain, and each Augment Chain consists of one to three augmentation operations.
The proposed method randomly samples $k$ of these Augment Chains (default $k=3$) and synthesizes their outputs using an element-wise convex combination.
Finally, the synthesized image is combined with the original image through a "skip connection", producing an image that preserves diversity without straying too far from the original.
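
Putting this together, the synthesis step can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `operations` is assumed to be a list of image-to-image functions such as the pool sketched above, and the convex weights for the $k$ chains and the skip-connection weight are sampled randomly (the paper draws them from Dirichlet and Beta distributions, respectively).

```python
import random
import numpy as np
from PIL import Image

def augmix(image, operations, k=3, alpha=1.0):
    """Minimal sketch of the AugMix synthesis step for a PIL image."""
    # Convex-combination weights for the k Augment Chains and the mixing weight
    # for the skip connection; the paper samples these from Dirichlet(alpha)
    # and Beta(alpha, alpha), respectively.
    ws = np.random.dirichlet([alpha] * k).astype(np.float32)
    m = float(np.random.beta(alpha, alpha))

    original = np.asarray(image, dtype=np.float32)
    mix = np.zeros_like(original)
    for i in range(k):
        aug = image.copy()
        # Each Augment Chain applies one to three randomly chosen operations.
        for _ in range(random.randint(1, 3)):
            aug = random.choice(operations)(aug)
        mix += ws[i] * np.asarray(aug, dtype=np.float32)

    # "Skip connection": interpolate between the original and the mixed image
    # so the result keeps diversity without drifting too far from the original.
    mixed = m * original + (1.0 - m) * mix
    return Image.fromarray(np.uint8(np.clip(mixed, 0, 255)))
```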

Figure 4: Overview of AugMix operation

Experimental Setup

Experiments were conducted to compare the impact of the proposed method on model robustness against other methods, using the CIFAR-10-C, CIFAR-100-C, and ImageNet-C datasets, which add corruptions to CIFAR-10, CIFAR-100, and ImageNet, respectively.
During training, the models are not exposed to the same corruptions that appear in the -C datasets.

Experiments were conducted using All Convolutional Network, DenseNet-BC (k = 12, d = 100), 40-2 Wide ResNet, and ResNeXt-29 (32×4) as the model architectures.

Results and Discussion

CIFAR10-C

Figure 5 summarizes the CIFAR-10-C classification errors of various methods, using ResNeXt-29 as the network architecture.
As shown in Figure 5, the proposed method, AugMix, achieves a classification error 16.6% lower than the baseline (Standard).
Compared with other prior methods, the proposed method also reduces the error and comes closer to the clean error, confirming its effectiveness.

Figure 5: CIFAR10-C classification errors by each method

CIFAR10-C & CIFAR100-C

The classification errors on the corrupted CIFAR-10-C and CIFAR-100-C datasets for multiple architectures are summarized in Figure 6.
We confirm that the proposed method, AugMix, outperforms the previous methods in classification error.
In particular, the proposed method reduces the classification error to less than half of the baseline (Standard) on CIFAR-10-C and by 18.3% on average on CIFAR-100-C, again confirming its effectiveness.

Figure 6: Classification errors for the CIFAR-10-C and CIFAR-100-C image classification tasks

ImageNet-C

The Clean Error, Corruption Error (CE), and mCE (the classification error averaged over the corrupted images) of the various methods on ImageNet-C are summarized in Figure 7.
The proposed method, AugMix, achieves an mCE of 68.4%, down from the baseline (Standard) mCE of 80.6%, again confirming its effectiveness.
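
As a reference for how these numbers are aggregated, a rough sketch following the article's description is shown below; note that the original ImageNet-C benchmark additionally normalizes each error by a baseline model's error, which is omitted here.

```python
import numpy as np

def corruption_errors(errors_by_corruption):
    """errors_by_corruption: dict mapping corruption type -> list of error
    rates, one per severity level."""
    # Corruption Error (CE) for each corruption type: mean over severities.
    ce = {c: float(np.mean(errs)) for c, errs in errors_by_corruption.items()}
    # mCE: mean of CE over all corruption types.
    mce = float(np.mean(list(ce.values())))
    return ce, mce
```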


Figure 7: Clean Error, Corruption Error (CE), and mCE for various methods in ImageNet-C

Summary

In this paper, a data augmentation method called "AugMix" was proposed to improve model robustness, addressing the problem that models cannot generalize to images whose distribution differs from the training data.

The proposed method, AugMix, takes a convex combination of multiple augmented images to generate data that maintains diversity without deviating too far from the original images, and uses this data for training to improve model robustness.

To confirm its effectiveness, experiments measured the classification errors on CIFAR-10-C, CIFAR-100-C, and ImageNet-C, the corrupted versions of CIFAR-10, CIFAR-100, and ImageNet, respectively.
The results show that the proposed method produces lower classification errors than the baseline and other methods on all three datasets.
