
Questions for Contrastive Learning: "What Makes?" (Representation Learning of Images Summer 2020, Feature 4)


3 main points
✔️ Investigates what conditions a "view" must satisfy for contrastive learning to perform well
✔️ Investigates what kind of information a representation should contain to be useful for downstream tasks
✔️ Gets to the bottom of whether the InfoMax principle is really what matters

What Makes for Good Views for Contrastive Learning?
written by Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola
(Submitted on 20 May 2020)

Comments: Accepted at ECCV2020
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Paper 
Official Code

What makes instance discrimination good for transfer learning?
written by Nanxuan Zhao, Zhirong Wu, Rynson W.H. Lau, Stephen Lin
(Submitted on 11 Jun 2020)
Comments: Published on arXiv
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Paper  
Official Code

On Mutual Information Maximization for Representation Learning
written by Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, Mario Lucic
(Submitted on 31 Jul 2019 (v1), last revised 23 Jan 2020 (this version, v2))

Comments: Accepted at ICLR2020
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Paper
Official Code / Colab Code

In this writer's special feature, "Representation Learning of Images Summer 2020", I introduce various unsupervised learning methods.

Part 1. Image GPT: Domain-Knowledge-Free Unsupervised Learning With Amazing Image Generation!
Part 2. Contrastive Learning's Two Leading Methods SimCLR And MoCo, And The Evolution Of Each
Part 3. SOTA With Contrastive Learning And Clustering!
Part 4. Questions For Contrastive Learning : "What Makes?"
Part 5. The Versatile And Practical DeepMind Unsupervised Learning Method

Having survived two AI winters, image AI blossomed in a big way in 2012, gaining expressive power from the massive image dataset ImageNet. However, that success came at the significant cost of human labeling of images. By contrast, BERT, whose social impact in 2018 was so large that natural language processing even became a fake-news concern, owes much of its strength to being trainable on vast amounts of data as-is, without labels.

Contrastive learning is a form of unsupervised learning that replaces costly labeling with a mechanism for comparing data samples to each other, so it can be trained on large amounts of data as-is. It has been applied successfully to images, where it has already surpassed the performance of ImageNet-trained models, and, like BERT, it is expected to have a major impact on the imaging field.
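To make the "comparing data samples to each other" idea concrete, here is a minimal sketch of an InfoNCE-style contrastive loss of the kind SimCLR uses. This is only an illustration, not code from any of the papers above; the function name, batch size, and random embeddings standing in for encoder outputs are my own assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss (illustrative sketch).

    z1, z2: (batch, dim) embeddings of two augmented "views" of the same
    batch of images. Pair (z1[i], z2[i]) is a positive pair; every other
    sample in the batch acts as a negative.
    """
    # L2-normalize so the dot product becomes cosine similarity.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Similarity of every view in z1 against every view in z2: (batch, batch).
    logits = z1 @ z2.t() / temperature
    # Row i's positive is column i (the other view of the same image).
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Toy usage: random vectors stand in for the encoder outputs of two views.
z1 = torch.randn(8, 128)
z2 = torch.randn(8, 128)
print(info_nce_loss(z1, z2))
```

Minimizing this loss pulls the two views of the same image together while pushing views of different images apart, which is exactly the comparison mechanism that lets these methods learn from unlabeled data.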

So far, in Part 2 and Part 3, I have focused on four methods: SimCLR, MoCo, PCL, and SwAV. Each of them achieves high performance, but the big question of why they perform so well has remained unclear.

So, to conclude the contrastive learning articles, I would like to explore this question of "why is the performance so good?" through the papers whose titles begin with "What makes".
