[GAN] NVIDIA Achieves Highly Accurate GANs on Extremely Small Datasets! What Is ADA, the Augmentation Method That Prevents Overfitting?
3 main points
✔️ NVIDIA's research team achieves highly accurate GANs on extremely small datasets
✔️ A new augmentation method, ADA, prevents the discriminator from overfitting
✔️ Augmentations are applied with an adaptively adjusted probability so that accuracy is maintained and the augmentations do not leak into the generated images
Training Generative Adversarial Networks with Limited Data
written by Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila
(Submitted on 11 Jun 2020 (v1), last revised 7 Oct 2020 (this version, v2))
Comments: Project website: this https URL
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE); Machine Learning (stat.ML)
The remarkable progress of GANs in recent years has been driven by huge amounts of online image data, but it is very difficult to collect that many images under constraints on specific objects, locations, time, and so on. With only a small dataset, however, the discriminator is at risk of overfitting. In most deep learning domains, data augmentation, such as rotating images or adding noise, is used to prevent overfitting. In GANs, however, augmenting the dataset in the same way can yield completely undesirable images: train on a noise-augmented dataset, for example, and the GAN will also add noise to the images it generates.
In this paper, the authors use data augmentation to prevent the discriminator from overfitting while also preventing the augmentations from leaking into the generated images. They first analyze comprehensively the conditions under which augmentation does not leak into the generated images, and then design a set of augmentation methods that works regardless of the dataset's characteristics. They show that StyleGAN2 achieves good results on small datasets of only a few thousand images, as well as on the CIFAR-10 benchmark, where progress has been sluggish because of the limited dataset size. They also provide MetFaces, a new dataset for benchmarking under limited-data conditions.
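The probability idea at the core of ADA can be sketched as follows: every image shown to the discriminator is augmented with some probability p, and p itself is tuned on the fly from a simple overfitting heuristic (the mean sign of the discriminator's outputs on real images, which the paper drives toward a target of 0.6). The sketch below is illustrative, not the authors' implementation: a horizontal flip stands in for the paper's full augmentation pipeline, and the function names (`augment`, `update_p`) and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(images, p):
    """Apply an augmentation to each image independently with probability p.

    Here a horizontal flip stands in for the paper's full pipeline of
    invertible augmentations. `images` has shape (N, H, W).
    """
    mask = rng.random(len(images)) < p
    out = images.copy()
    out[mask] = out[mask][:, :, ::-1]  # flip along the width axis
    return out

def update_p(p, d_real_logits, target=0.6, step=0.01):
    """Adjust the augmentation probability from an overfitting heuristic.

    r_t = E[sign(D(real))] rises toward 1 as the discriminator overfits;
    p is nudged up when r_t exceeds the target and down otherwise,
    then clipped to [0, 1].
    """
    r_t = np.mean(np.sign(d_real_logits))
    p += step if r_t > target else -step
    return float(np.clip(p, 0.0, 1.0))

# Toy training loop: stand-in discriminator outputs instead of a real model.
p = 0.0
for _ in range(5):
    d_real_logits = rng.normal(0.8, 0.5, size=64)  # hypothetical D(real) values
    p = update_p(p, d_real_logits)
```

Because both real and generated images pass through the same stochastic augmentation before reaching the discriminator, and because p starts at 0 and only rises when overfitting is detected, the augmentations influence training without being baked into what the generator produces.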