
VGG Is Back!

Image Recognition

3 main points
✔️ Simple but powerful VGG-like CNN architecture
✔️ Significantly faster compared to other SOTA models
✔️ Use of structural re-parameterization to convert the training-time multi-branch structure into an inference-time plain architecture

RepVGG: Making VGG-style ConvNets Great Again
written by Xiaohan Ding, Xiangyu Zhang, Ningning Ma, Jungong Han, Guiguang Ding, Jian Sun
(Submitted on 11 Jan 2021)
Comments: Accepted to arXiv.
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
  

Introduction

VGGNet had a relatively simple architecture and has since been succeeded by more complex and powerful models such as ResNets, DenseNets, and EfficientNets. Although these newer models are more accurate, they are complicated and slower at inference (despite some of them having lower FLOPs). For this reason, VGGs and ResNets are still widely used in real-world applications. This paper presents a new VGG-like architecture with impressive inference speed and accuracy comparable to state-of-the-art models. As will be described later, this is achieved by training a more complicated multi-branch model and then restructuring it into a simpler plain architecture for inference. The objective is to create a simple yet fast model that also converges well: RepVGG.
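Concretely, the re-parameterization amounts to folding each batch-normalization layer into its preceding convolution, zero-padding the 1x1 kernel to 3x3, and summing all branches into a single 3x3 kernel and bias. Below is a minimal PyTorch-style sketch of that conversion for one block, assuming ungrouped convolutions; the helper names fuse_conv_bn and reparameterize are placeholders for illustration, not the authors' official implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fuse_conv_bn(weight, bn):
    # Fold BatchNorm running statistics and affine parameters into a conv kernel/bias.
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std                                # per-output-channel scale
    return weight * scale.reshape(-1, 1, 1, 1), bn.bias - bn.running_mean * scale

def reparameterize(conv3x3, bn3x3, conv1x1, bn1x1, bn_identity=None):
    # 3x3 branch: fold its BN into the kernel.
    w3, b3 = fuse_conv_bn(conv3x3.weight, bn3x3)
    # 1x1 branch: fold its BN, then zero-pad the 1x1 kernel to 3x3 so the branches can be summed.
    w1, b1 = fuse_conv_bn(conv1x1.weight, bn1x1)
    w1 = F.pad(w1, [1, 1, 1, 1])
    weight, bias = w3 + w1, b3 + b1
    # Identity branch (present only when stride is 1 and channels match): it is equivalent
    # to a 3x3 conv whose kernel is 1 at the center of its own channel (groups=1 assumed here).
    if bn_identity is not None:
        w_id = torch.zeros_like(w3)
        for i in range(conv3x3.in_channels):
            w_id[i, i, 1, 1] = 1.0
        w_id, b_id = fuse_conv_bn(w_id, bn_identity)
        weight, bias = weight + w_id, bias + b_id
    # Single plain 3x3 conv that replaces the three branches at inference time.
    fused = nn.Conv2d(conv3x3.in_channels, conv3x3.out_channels,
                      kernel_size=3, padding=1, bias=True)
    fused.weight = nn.Parameter(weight.detach())
    fused.bias = nn.Parameter(bias.detach())
    return fused

With batch normalization in eval mode, the single fused convolution produces the same output as the three training-time branches, which is what allows the inference-time model to be a plain VGG-style stack of 3x3 convolutions.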

Thapa Samrat
I am a second-year international student from Nepal currently studying in the Department of Electronic and Information Engineering at Osaka University. I am interested in machine learning and deep learning, so I write articles about them in my spare time.
