
Feature-limiting Gates To Make One Filter Responsible For One Category


Deep Learning

3 main points
✔️ Gates that constrain each filter to a specific class
✔️ Improved filter interpretability without loss of accuracy
✔️ Applicable to object localization and adversarial examples

Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters
written by Haoyu Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang
(Submitted on 16 Jul 2020)

Comments: Accepted at arXiv
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)


Convolutional Neural Networks (CNNs) have achieved high accuracy in visual tasks. However, even such powerful CNNs remain difficult to interpret. Opinions differ on how human and machine interpretations diverge and on how much interpretability is needed, but it is clearly better to have it: the differences themselves offer new insights for humans, and once interpretations are available, the reliability of CNNs in applications such as automated driving and medical diagnosis can be verified.

One of the obstacles to this interpretability is filter-class entanglement. The term may be unfamiliar, but the figure below compares a conventional CNN with the method presented in this paper, and it shows intuitively how disentangling filters can improve interpretation.

On the left is a conventional CNN: a single filter responds to many different things (ships, cats, dogs, and so on), which makes it hard to interpret. In the proposed method on the right, each filter is responsible for a single category, such as cats or dogs, which makes interpretation much easier. The paper introduced here improves interpretability by making one filter responsible for one category. This idea is well grounded: earlier work presented at CVPR 2019 found redundant features across different filters and pointed to the possibility of learning specialized filters.
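The core idea of a feature-limiting gate can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `gate_features` and the binary gate matrix `gates` are assumptions for illustration. Each row of `gates` says which filters are "responsible" for that class; multiplying a filter's feature map by 0 suppresses it, by 1 passes it through.

```python
import numpy as np

def gate_features(feature_maps, gates, class_idx):
    """Suppress the filters not assigned to the given class.

    feature_maps: array of shape (num_filters, H, W), the conv-layer output.
    gates:        array of shape (num_classes, num_filters); gates[c, k] near 1
                  means filter k is responsible for class c, near 0 means not.
    class_idx:    index of the class whose gate row is applied.
    """
    g = gates[class_idx][:, None, None]   # (num_filters, 1, 1), broadcast over H, W
    return feature_maps * g

# Toy example: 4 filters, 2 classes.
# Filters 0-1 are assigned to class 0; filters 2-3 to class 1.
gates = np.array([[1.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0]])
feats = np.ones((4, 3, 3))                # dummy 3x3 feature maps

gated = gate_features(feats, gates, class_idx=0)
# Filters for class 0 pass unchanged; filters for class 1 are zeroed out.
```

In the paper the gates are learned during training rather than fixed by hand, so that each filter gradually specializes; the sketch above only shows the gating operation itself.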


